NLP Ā· Low‑Resource Languages

Low‑Resource NLP Journey: Lessons from Building an Igala Dataset

What it actually feels like to build a dataset for a language the internet mostly ignores.

When you read about NLP, most examples quietly assume English, maybe a handful of European languages, with a language like Chinese occasionally thrown in. If your language isn’t on that list, it’s almost invisible as far as tools, datasets, and benchmarks are concerned.

Igala is one of those languages. It’s spoken by millions of people, but its digital footprint is tiny. That gap is what pushed me to start working on an Igala dataset instead of yet another English experiment.

Starting with almost nothing

The first surprise was how little text was available in a clean, usable form. No ready‑made parallel corpora, no nice crawls from big websites, nothing you could just download and plug into a model.

That meant going back to basics: collecting sentences from multiple sources, cleaning them by hand, and being very honest about the quality of what I had. It’s slower, but it forces you to understand the language you’re working with instead of treating it as abstract tokens.
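Most of that cleaning was mundane normalisation. As a minimal sketch (the helper names are hypothetical, not the actual pipeline), the kind of pass I mean looks like this:

```python
import unicodedata


def clean_sentence(text: str) -> str:
    """Normalise a raw sentence: one Unicode form, collapsed whitespace."""
    text = unicodedata.normalize("NFC", text)  # canonical form for diacritics
    return " ".join(text.split())              # collapse runs of whitespace


def build_corpus(raw_sentences):
    """Clean and deduplicate sentences, keeping first-seen order."""
    seen, cleaned = set(), []
    for raw in raw_sentences:
        sentence = clean_sentence(raw)
        if sentence and sentence not in seen:
            seen.add(sentence)
            cleaned.append(sentence)
    return cleaned
```

Nothing clever, but when the corpus is small, even deciding on one Unicode normalisation form up front saves you from silent duplicates later.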

More than just translation pairs

It’s tempting to think ā€œparallel sentences = dataset = doneā€. In reality, the first version of the data is only the beginning. You quickly run into:

  • Inconsistent spellings for the same word.
  • Dialectal differences that leak into the text.
  • Sentences that look fine on the surface but are noisy inside.

A big part of the work was standardising where it made sense, documenting where it didn’t, and making sure future users would understand what they were looking at instead of trusting it blindly.
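One cheap way to surface spelling inconsistencies is to group tokens that collide once diacritics are stripped, then review the groups by hand. This is a sketch of that idea, not the exact audit I ran:

```python
import unicodedata
from collections import defaultdict


def strip_diacritics(token: str) -> str:
    """Reduce a token to its base letters so variant spellings collide."""
    decomposed = unicodedata.normalize("NFD", token.lower())
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))


def variant_groups(sentences):
    """Map each stripped form to the distinct spellings observed for it."""
    groups = defaultdict(set)
    for sentence in sentences:
        for token in sentence.split():
            groups[strip_diacritics(token)].add(token)
    # Keep only forms with more than one spelling: candidates for review.
    return {base: forms for base, forms in groups.items() if len(forms) > 1}
```

The output is a worklist, not a verdict: some groups are genuine errors, others are dialectal differences that deserve documentation rather than "correction".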

Why I built an explorer, not just a CSV

I decided early on that I didn’t want the dataset to live as a zip file on someone’s hard drive. That’s how resources disappear. The Streamlit explorer is my way of forcing the data to be visible and inspectable.

You can scroll through sentences, filter, get basic statistics and visualisations, and get a feel for the dataset without writing a single line of code. That matters for researchers, but it also matters for speakers of the language who want to see what’s being built in their name.
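The numbers behind those views are simple corpus summaries. A stdlib-only sketch of the kind of statistics such an explorer might surface (the `(igala, english)` tuple layout here is an assumption for illustration, not how the dataset is actually stored):

```python
from collections import Counter


def corpus_stats(pairs):
    """Summarise (igala, english) sentence pairs: counts, lengths, frequent tokens."""
    igala_tokens = Counter()
    lengths = []
    for igala, _english in pairs:
        tokens = igala.split()
        igala_tokens.update(tokens)
        lengths.append(len(tokens))
    return {
        "pairs": len(pairs),
        "avg_igala_length": sum(lengths) / len(lengths) if lengths else 0.0,
        "vocab_size": len(igala_tokens),
        "top_tokens": igala_tokens.most_common(5),
    }
```

Putting even this much in front of people, with filters and plots on top, changes how they read the data: a skewed length distribution or a suspiciously frequent token is obvious at a glance.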

What this taught me about ā€œlow‑resourceā€

ā€œLow‑resourceā€ is often treated as a technical label, but it’s also a political one. It reflects which communities have had the time, money, and infrastructure to put their language online.

Working on Igala reminded me that every dataset is a choice: whose voices are included, whose spelling is considered ā€œstandardā€, and whose stories end up being fed into the next generation of models.

On the technical side, it also made me more careful. When you only have a few thousand sentences, every error matters. The cleaning, auditing, and documentation are just as important as the model you eventually train.

Where I’d like to take it next

The current dataset is a starting point, not a finished product. My goals going forward are:

  • Grow the corpus beyond ten thousand sentence pairs.
  • Add audio to move towards speech technology.
  • Work with linguists and native speakers to refine the annotation guidelines.
  • Make it easier for other African languages to reuse the same pipeline and explorer.

If you’re working on similar problems or on another under‑served language, I’d be happy to compare notes.