Meta Releases AI Model to Preserve More Than 4,000 Languages

Yana Khare 25 May, 2023 • 3 min read

Meta’s latest release, the Massively Multilingual Speech (MMS) models, could be a game-changer in language preservation and communication. This advancement expands the capabilities of text-to-speech and speech-to-text technology, supporting over 1,100 languages and identifying more than 4,000 spoken languages. This article delves into Meta’s groundbreaking work on preserving endangered languages and bridging communication gaps.

Urgent Need to Protect Endangered Languages

We must celebrate and preserve linguistic diversity, an essential aspect of human culture. However, according to UNESCO, more than 43% of the world’s languages are endangered. The urgency to protect these languages and bridge communication gaps drove Meta’s dedicated team to develop MMS models.

MMS Models: A Solution to Linguistically Diverse Communities Worldwide

Meta’s MMS models have vast potential across various industries and use cases, including virtual and augmented reality technology, messaging services, and more. These powerful AI models can adapt to a user’s voice and comprehend spoken language inclusively, giving individuals access to information and letting them use devices in their preferred language.

Open Sourcing the Models and Accompanying Code

Meta has open-sourced the MMS models and accompanying code so that others can collaborate on and build upon this pioneering language preservation work. Researchers and developers worldwide can now leverage the technology to help preserve linguistic diversity and bring humanity closer together.

Ingenious Use of Religious Texts

Existing speech datasets cover only around 100 languages, which poses a unique challenge for language recognition technology. To overcome this hurdle, Meta ingeniously leveraged religious texts, such as the Bible, which have been translated into numerous languages and studied extensively in language translation research. These translations come with publicly available audio recordings of individuals reading the texts in different languages.

Also Read: Improving the Performance of Multi-lingual Translation Models

Dataset Expansions and Unbiased Output

For the MMS models, Meta curated a dataset containing readings of the New Testament in over 1,100 languages, with an average of 32 hours of audio per language. By incorporating unlabeled recordings of various other Christian religious readings, the dataset expanded to encompass more than 4,000 languages. Despite the predominantly male speakers in the religious recordings, the models perform equally well for male and female voices. Furthermore, the models do not favor religious language in their output, despite the content of the training audio.
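For a rough sense of scale, the figures above imply a sizeable labeled corpus. The following back-of-the-envelope sketch in Python uses only the averages quoted in this article (1,100 languages, roughly 32 hours each); the resulting total is an estimate, not a figure Meta has published:

```python
# Rough estimate of the labeled MMS dataset size, based on the
# figures quoted above: 1,100 languages, ~32 hours of audio each.
labeled_languages = 1100       # languages with New Testament readings
avg_hours_per_language = 32    # average labeled audio per language

total_hours = labeled_languages * avg_hours_per_language
total_years = total_hours / (24 * 365)  # convert hours to years of audio

print(f"~{total_hours:,} hours of labeled audio (~{total_years:.1f} years)")
```

Running this gives roughly 35,200 hours, or about four years of continuous audio, which helps explain why religious readings were such a valuable source of multilingual speech data.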

Meta’s Commitment to Future Advancements

Meta said it remains committed to future advancements in language accessibility. The company aims to expand the coverage of MMS models to support even more languages. At the same time, it wants to address the complexities of handling dialects, a challenge that existing speech technology has yet to solve.

Also Read: Meta Open-Sources AI Model Trained on Text, Image & Audio Simultaneously

Our Say

Meta’s Massively Multilingual Speech models have revolutionized language recognition technology. By bridging communication gaps, preserving endangered languages, enabling device usage in users’ preferred languages, and advancing the capabilities of text-to-speech and speech-to-text technology, Meta’s MMS models offer a solution to the challenges faced by linguistically diverse communities worldwide.
