Putting AI to work in the Deaf community

Edward van Niekerk
By Edward van Niekerk, vocational programme coordinator at Belgium Campus iTversity.
Johannesburg, 09 Jun 2025
Edward van Niekerk, cluster head: Business Science, Belgium Campus ITversity.

Despite challenges, there is cause to be optimistic about the future of artificial intelligence (AI) for Deaf students. AI is rapidly progressing, and there is hope that in the coming years, technological advancements will make these tools more accessible.

However, when AI finally becomes capable of fluently translating sign language, it must begin with children.

If AI can give children signed versions of fairytales, history, science and more, the cognitive benefit would be remarkable. Imagine Deaf children being able to “read” a story about space travel or dinosaurs.

It will also be far easier to start with children, because the language is less complex than in higher education, where terms and concepts can be highly technical.

Still, that level of development remains a long way off for Deaf students, especially in fields like computer science. AI still faces significant challenges, most notably the translation of sign language.

One of the primary hurdles in AI's ability to translate sign language effectively is the lack of adequate training data. AI systems require vast amounts of data to learn, refine and improve their accuracy over time. However, sign language presents a unique problem.

Sign language isn't just a visual form of communication; it's three-dimensional and heavily relies on depth perception, facial expressions and subtle hand movements, all of which are hard to capture in traditional videos. This is quite unlike spoken languages, which are linear and one-dimensional.

Computer science, being a highly visual and conceptual field, would benefit greatly from a ‘signed’ AI interface. However, the limited number of videos and images in South African Sign Language (SASL) means there simply isn’t enough material for AI systems to learn from.

SASL is distinct from American Sign Language (ASL) or British Sign Language (BSL). In fact, each country’s sign language is developed locally and is consequently unique.

Most of the available digital data is in ASL or BSL, which is of little use to our Deaf community because the signs and terminology differ.

A key issue that makes capturing sign language particularly difficult for AI is depth perception. When you're signing, one hand might obscure another, and that’s an essential part of the meaning. AI needs to be able to capture the depth of movement, the facial expressions, and the nuances that make sign language different from spoken language.
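A rough illustration of why depth matters (the coordinates below are hypothetical, not output from any real hand-tracking system): when one hand passes in front of the other, a flat video frame shows the two fingertips almost on top of each other, while the depth axis still separates them clearly.

```python
import math

# Hypothetical 3D fingertip positions (x, y, z in metres) for two hands;
# the names and values are illustrative only.
left_tip = (0.40, 0.55, 0.30)   # nearer the camera
right_tip = (0.41, 0.54, 0.42)  # behind the left hand

# A flat video collapses depth: only (x, y) survives the projection.
flat_gap = math.dist(left_tip[:2], right_tip[:2])

# With depth retained, the occluding hand is unambiguously in front.
depth_gap = abs(left_tip[2] - right_tip[2])

print(f"2D gap: {flat_gap:.3f} m, depth gap: {depth_gap:.3f} m")
```

In this toy example the two fingertips are barely a centimetre apart on the image plane but 12cm apart in depth, which is exactly the kind of information an AI trained on ordinary video never sees.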

Despite the potential for AI to improve the way Deaf students access education, no platform can yet translate sign language at a level close to what natural language processing achieves for text.

Contrast this with text-based AI systems like ChatGPT, which can translate between languages like English and Spanish with high accuracy. For sign language, AI can currently manage only very basic tasks − like spelling out the alphabet or signing simple phrases − but it is nowhere near the fluency needed for real-time conversations or in-depth academic discussions.

For computer science students who use sign language as their primary form of communication, the gap between spoken language and accessible educational resources can be substantial.

While Deaf students can use text-based AI tools like ChatGPT, which can provide a written form of communication, they miss out on the more natural experience of learning through sign language. AI tools currently fail to provide the depth and contextual understanding that a signed version would deliver.

As for the future, imagine learning programming languages or complex technical concepts with the ability to have them explained in a 3D visual space, where the tutor can sign in real-time with facial expressions and hand movements that bring clarity. This would allow for a much richer learning experience for Deaf students.

The problem is that the development of such tools isn't seen as a priority for many companies. From a business perspective, investing in AI for sign language translation is costly, and the market of Deaf users is small compared to the wider population. That’s a huge disadvantage for the Deaf community.

Even in cases where 3D models of sign language are created, they often lack the nuance and fluidity of natural signing. The only way to effectively generate accurate sign language with AI is by tracking real people's movements and translating that into models. But true generative AI, which can create sign language from scratch, is still quite far off because of the complexity involved.

Therefore, the real breakthrough will come when AI starts working with children. When AI can teach and communicate with children using sign language, it will lay the foundation for a more inclusive educational system.

Working with children is vital because they are at the stage where language foundations are laid, and the terminology and concepts are simpler. AI should focus on creating tools that are accessible to children first and then build up the complexity over time.

In the meantime, the hope is that, as AI technology advances, it will begin to prioritise the needs of minority groups, such as South African Deaf students, ensuring no one is left behind in the digital age.
