SignCV (Experimental)
Updated 2025-11-04
Introduction
SignCV is an ongoing computer-vision experiment that translates sign language gestures into captions in real time. The project focuses on responsiveness, robustness in low-light conditions, and workflows that make it easier to collaborate with interpreters and learners.
Current focus
- Collecting training data that respects community input and accessibility needs.
- Exploring on-device inference to keep captioning fast and private.
- Designing overlays that work alongside existing assistive tools.
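As one illustration of the kind of real-time concern the project deals with: per-frame gesture classifiers tend to flicker, so captions are often stabilized with a sliding majority vote over recent predictions. This is a generic sketch of that technique, not SignCV's actual pipeline; the function and labels below are hypothetical.

```python
from collections import Counter, deque

def smooth_predictions(frames, window=5):
    """Majority vote over a sliding window of per-frame gesture labels.

    Suppresses single-frame misclassifications so the caption does not
    flicker. `frames` is any iterable of label strings; yields one
    smoothed label per frame once the window has filled.
    """
    recent = deque(maxlen=window)
    for label in frames:
        recent.append(label)
        if len(recent) == window:
            # Most common label in the current window wins.
            yield Counter(recent).most_common(1)[0][0]

# Hypothetical stream: one misclassified frame ("THANKS") is voted out.
raw = ["HELLO", "HELLO", "THANKS", "HELLO", "HELLO", "HELLO"]
print(list(smooth_predictions(raw, window=5)))  # ['HELLO', 'HELLO']
```

The window size trades latency for stability: a larger window filters more noise but delays caption updates, which matters for a responsiveness-focused tool.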
Watch a demo
A walkthrough video will appear here soon. I’m preparing a YouTube embed that shows the prototype in action.
Get involved
If you work in accessibility or sign language interpretation and want to collaborate, reach out.