For coverage of NYU Abu Dhabi's Engineering Capstone Festival, we focus on senior Tamas Aujeszky's Arabic sign language interpreter. The idea was born when Aujeszky realized that many of the world's more than 120 million hearing-impaired people are hindered in their ability to communicate with hearing people. Salaam spoke with Aujeszky about the project earlier this month.
Tell us about your Arabic sign language interpreter.
It is a concept system that acts as a translator for Arabic sign language. I made use of the Microsoft Kinect device and wrote additional software for it.
How does it work?
Using its own algorithm, the Kinect captures depth information about the user standing in front of it and builds a skeleton model of the user. Our software uses this skeleton information to determine which gesture the user is signing. Once a sign is recognized, the software displays the result on the screen.
Why is it important?
It is important for multiple reasons. Arabic sign language is relatively undocumented in this region, as the culture of helping people with disabilities on a collective level is generally new. The aim is to aid in this process and raise awareness about the Arab hearing-impaired community. Also, with more than 85 percent of hearing-impaired people living in low- or middle-income countries, the affordability of this prototype, which costs around USD 200, makes it a very viable solution.
What inspired it?
Many research laboratories around the world, in countries such as the United States, Singapore, and Turkey, have built Kinect-based translators for their countries' native sign languages. However, Arabic sign language has not been represented up to this point. Our Capstone project also includes a mandatory ethical consideration, which pointed us toward the Arabic sign language interpreter.