
Waxal Datasets

Dataset Description

The Waxal project provides datasets for both Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) for African languages. These datasets are created and released to facilitate research that improves the accuracy and fluency of speech and language technology for these underserved languages, and to serve as a repository for digital preservation.

The Waxal datasets are collections acquired through partnerships with Makerere University, The University of Ghana, Digital Umuganda, and Media Trust. Acquisition was funded by Google and the Gates Foundation under an agreement to make the dataset openly accessible.

ASR Dataset

The Waxal ASR dataset is a collection of data in 19 African languages. It consists of approximately 1,250 hours of transcribed natural speech from a wide variety of voices. The 19 languages in this dataset represent over 100 million speakers across 40 Sub-Saharan African countries.

| Provider | Languages | License |
|---|---|---|
| Makerere University | Acholi, Luganda, Masaaba, Nyankole, Soga | CC-BY-SA-4.0 |
| University of Ghana | Akan, Ewe, Dagbani, Dagaare, Ikposo | CC-BY-4.0 |
| Digital Umuganda | Fula, Lingala, Shona, Malagasy, Amharic, Oromo, Sidama, Tigrinya, Wolaytta | CC-BY-SA-4.0 |

TTS Dataset

The Waxal TTS dataset is a collection of text-to-speech data in 16 African languages. It consists of approximately 240 hours of scripted natural speech from a wide variety of voices.

| Provider | Languages | License |
|---|---|---|
| Makerere University | Acholi, Luganda, Kiswahili, Nyankole | CC-BY-SA-4.0 |
| University of Ghana | Akan (Fante, Twi) | CC-BY-4.0 |
| Media Trust | Fula, Igbo, Hausa, Yoruba | CC-BY-SA-4.0 |
| Loud and Clear | Kikuyu, Luo | CC-BY-SA-4.0 |

How to Use

The datasets library allows you to load and pre-process your dataset in pure Python, at scale.

First, ensure you have the necessary dependencies installed to handle audio data. You will need ffmpeg installed on your system.

Google Colab / Ubuntu

sudo apt-get install ffmpeg
pip install datasets[audio]

macOS

brew install ffmpeg
pip install datasets[audio]

Windows

Download and install ffmpeg from ffmpeg.org and ensure it is in your PATH.

pip install datasets[audio]

If you encounter RuntimeError: Could not load libtorchcodec, please ensure ffmpeg is correctly installed or check for compatibility between your torch, torchaudio, and torchcodec versions.
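To help with that check, the short sketch below prints the installed versions of the packages involved in audio decoding. It only relies on the standard-library importlib.metadata module; the list of package names is just the set mentioned above.

from importlib import metadata

# Print installed versions of the packages involved in audio decoding,
# to help diagnose the libtorchcodec error mentioned above.
for pkg in ("datasets", "torch", "torchaudio", "torchcodec"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")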

Loading ASR Data

To load ASR data for a specific language, specify the configuration name, e.g. sna_asr for Shona ASR data.

from datasets import load_dataset, Audio

# Load Shona (sna) ASR dataset
asr_data = load_dataset("google/WaxalNLP", "sna_asr")

# Access splits
train = asr_data['train']
val = asr_data['validation']
test = asr_data['test']

# Example: Accessing audio bytes and other fields
example = train[0]
print(f"Transcription: {example['transcription']}")
print(f"Sampling Rate: {example['audio']['sampling_rate']}")
# 'array' contains the decoded audio samples as a NumPy array
print(f"Audio Array Shape: {example['audio']['array'].shape}")

Loading TTS Data

To load TTS data for a specific language, specify the configuration name, e.g. swa_tts for Swahili TTS data.

from datasets import load_dataset

# Load Swahili (swa) TTS dataset
tts_data = load_dataset("google/WaxalNLP", "swa_tts")

# Access splits
train = tts_data['train']
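As with the ASR data, individual examples can be indexed directly. The field names used below (text, audio) follow the TTS data fields described in the Dataset Structure section.

# Example: accessing the script text and audio for a single utterance
example = train[0]
print(f"Text: {example['text']}")
print(f"Sampling Rate: {example['audio']['sampling_rate']}")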

Dataset Structure

ASR Data Fields

{
  'id': 'sna_0',
  'speaker_id': '...',
  'audio': {
    'array': [...],
    'sampling_rate': 16_000
  },
  'transcription': '...',
  'language': 'sna',
  'gender': 'Female',
}
  • id: Unique identifier.
  • speaker_id: Unique identifier for the speaker.
  • audio: Audio data.
  • transcription: Transcription of the audio.
  • language: ISO 639-2 language code.
  • gender: Speaker gender ('Male', 'Female', or empty).
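Since gender is stored as a plain string field, standard datasets filtering applies. The sketch below keeps only female-voiced utterances from the Shona ASR train split; the configuration name and field values are taken from this card.

from datasets import load_dataset

# Keep only utterances labeled 'Female' in the Shona ASR train split
asr_train = load_dataset("google/WaxalNLP", "sna_asr", split="train")
female_only = asr_train.filter(lambda ex: ex["gender"] == "Female")
print(len(female_only))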

TTS Data Fields

{
  'id': 'swa_0',
  'speaker_id': '...',
  'audio': {
    'array': [...],
    'sampling_rate': 16_000
  },
  'text': '...',
  'locale': 'swa',
  'gender': 'Female',
}
  • id: Unique identifier.
  • speaker_id: Unique identifier for the speaker.
  • audio: Audio data.
  • text: Text script.
  • locale: ISO 639-2 language code.
  • gender: Speaker gender.
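Because speaker_id is a regular column, per-speaker statistics can be computed directly. A small sketch, assuming the Swahili TTS configuration shown earlier:

from collections import Counter
from datasets import load_dataset

# Count utterances per speaker in the Swahili TTS train split
tts_train = load_dataset("google/WaxalNLP", "swa_tts", split="train")
speaker_counts = Counter(tts_train["speaker_id"])
print(speaker_counts.most_common(5))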

Data Splits

For the ASR Dataset, the data with transcriptions is split as follows:

  • train: 80% of labeled data.
  • validation: 10% of labeled data.
  • test: 10% of labeled data.

The unlabeled split contains all samples that do not have a corresponding transcription.

The TTS Dataset follows a similar structure, with data split into train, validation, and test sets.
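Split names and sizes can be inspected directly on the loaded DatasetDict. The snippet below makes no assumption about the exact name of the unlabeled split and simply prints whatever splits a configuration exposes.

from datasets import load_dataset

# List the available splits and their sizes for one ASR configuration
asr_data = load_dataset("google/WaxalNLP", "sna_asr")
for split_name, split in asr_data.items():
    print(split_name, len(split))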

Dataset Curation

The data was gathered by multiple partners:

| Provider | Dataset | License |
|---|---|---|
| University of Ghana | UGSpeechData | CC-BY-4.0 |
| Digital Umuganda | AfriVoice | CC-BY-SA-4.0 |
| Makerere University | Yogera Dataset | CC-BY-SA-4.0 |
| Media Trust | | CC-BY-SA-4.0 |

Considerations for Using the Data

Please check the license for the specific languages you are using, as they may differ between providers.

Affiliation: Google Research

Version and Maintenance

  • Current Version: 1.0.0
  • Last Updated: 01/2026