ASL Alphabet Learning Application
This project focused on building an interactive desktop application to help users learn the American Sign Language (ASL) alphabet through real-time gesture recognition.
The application was developed for Windows using Qt in C++, with a focus on a smooth, user-friendly interface. OpenCV was integrated to capture and process live video input, enabling the detection of hand gestures in each frame. For gesture recognition itself, TensorFlow models were incorporated via Python, allowing the system to classify the detected ASL signs with high accuracy.
The project brings together computer vision, machine learning, and interface design to support accessible language learning through technology.