Learn sign language using machine vision


Learning a new language is a great way to exercise your mind and learn about different cultures, and having a native speaker to practice with greatly enhances the experience. Without one, it is still possible to learn through videos, books, and software. The task becomes much more complicated when learning a language that is not spoken, such as American Sign Language (ASL). This project lets users learn the ASL alphabet using computer vision and machine learning.

The build uses a MobileNetV2-based computer vision model trained to recognize each sign of the ASL alphabet. A sign is shown to the user on a screen, and the user must demonstrate that sign to the computer in order to progress. To do this, OpenCV running on a Raspberry Pi with a PiCamera analyzes the user's video frames in real time. The user sees images of the correct sign and is rewarded when the sign is made correctly.
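The quiz loop at the heart of such a build is straightforward: show a target sign, classify incoming frames, and advance only when the classifier agrees. The sketch below illustrates that control flow with a stubbed classifier; the names `classify_frame`, `run_quiz`, and `get_frame` are hypothetical, and in the real project the stub would be replaced by a MobileNetV2 inference over PiCamera frames captured with OpenCV.

```python
import random

# Alphabet of target signs, A through Z.
ALPHABET = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

def classify_frame(frame):
    # Stand-in for the CNN: the real build would run MobileNetV2 on the
    # captured frame and return the letter with the highest score.
    # Here the "frame" is assumed to already encode the signed letter.
    return frame

def run_quiz(signs_to_learn, get_frame):
    """Show each target sign and wait until the user signs it correctly."""
    score = 0
    for target in signs_to_learn:
        while True:
            prediction = classify_frame(get_frame(target))
            if prediction == target:
                score += 1  # correct sign detected: reward and move on
                break
    return score

# Simulated session: the "camera" always captures the right sign.
print(run_quiz(random.sample(ALPHABET, 3), get_frame=lambda target: target))
```

On real hardware, `get_frame` would grab a frame from the PiCamera (e.g. via `cv2.VideoCapture`), and the inner loop would keep classifying frames until the model's prediction matches the prompted letter.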

Although this currently only works for alphabetic signs in ASL, the University of Glasgow team that built this project plans to expand it to include other signs as well. We’ve seen other machines built to teach ASL in the past, like this one that relies on a specialized glove rather than computer vision.
