FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech
Abstract
We introduce FLEURS, the Few-shot Learning Evaluation of Universal Representations of Speech benchmark. FLEURS is an n-way parallel speech dataset in 102 languages built on top of the machine translation FLoRes-101 benchmark, with approximately 12 hours of speech supervision per language. FLEURS can be used for a variety of speech tasks, including Automatic Speech Recognition (ASR), Speech Language Identification (Speech LangID), Translation and Retrieval. In this paper, we provide baselines for the tasks based on multilingual pre-trained models like mSLAM. The goal of FLEURS is to enable speech technology in more languages and catalyze research in low-resource speech understanding.