Own Your Communication

An automated interpreting service available on any camera-enabled device.

We believe in empowering the Deaf Community through equal access to communication. We facilitate communication between the Deaf / Hard of Hearing and Hearing communities with a service that automatically translates American Sign Language and captions speech in real time for immediate communication.

ASL Translation

Supports automated translation of American Sign Language into speech or English text.


Never be left out of the conversation with group conversation captioning. (Coming Soon!)


Add your own custom signs and phrases to truly communicate on your own terms.


What is Sign-Speak?

First and foremost, we are a communication tool, not a replacement for interpreters. We automatically recognize basic ASL and speech to facilitate communication. Specifically, you set up your phone, sign to it, and what you sign is spoken aloud. A hearing person can then speak to the phone, and what they say is transcribed on screen. We work on any phone with a camera, anywhere, anytime, on demand. We specifically target ad-hoc (unplanned), low-stakes conversations where a few errors are acceptable. We do not currently recognize advanced ASL features such as classifiers.

Will you replace interpreters?

No, our goal will never be to replace interpreters, and we do not support businesses looking to use us that way. Sign-Speak was not developed to be used where an interpreter should be present (e.g. medical, legal, or financial services). If you feel forced to use our application instead of an interpreter, please contact us immediately. We recommend using us only in low-stakes (often social or repeated) situations where an interpreter cannot otherwise be obtained (e.g. ordering food or chatting with strangers on the street).

What stage is Sign-Speak?

Sign-Speak is currently in pre-release beta. All of our core algorithms have been written but still need to be trained on data from Deaf and Hard of Hearing signers to improve accuracy. Additionally, users currently need to add the signs they wish to use to their phone’s database. With proper usage, Sign-Speak is currently about 80–90% accurate. As you use the app, our algorithms will continue to improve and understand ASL better.

Why do I need to pay for Beta?

Unfortunately, running our computer vision, machine learning, and computational linguistics models is fairly expensive. Paying supports not only our servers but also continued growth in access technology. We plan to offer the beta at the discounted rate of $5 / month (rather than the full $20 / month). We realize this may be prohibitive; if you need financial assistance, please contact us.

Where can I sign up?

As our infrastructure is still new, we are limiting the number of beta users. To join, go to “Get Involved” -> “Beta” to sign up for our waitlist, and we will let you know when a spot becomes available! Spots are first come, first served, so do sign up if you’re interested.

When will it be ready?

Our first wave of beta will be ready around the beginning of March. From there, we plan to improve accuracy and add several critical features, including fingerspelling, grammar, one-handed signing, and group conversation transcription. We plan to be fully released by Summer 2020.

How does it work?

We use custom machine learning, computer vision, and computational linguistics models to learn and model ASL. These models learn from examples in our database and become increasingly accurate, approaching human-level accuracy. This process will accelerate as our data becomes more diverse through our data collectors and users like you.

Where will I be able to use it?

You can use it anywhere, anytime. However, we recommend using it with no other people in the background (our current algorithms get confused when there is more than one person in the image), in a well-lit, high-contrast environment, and with your sleeves rolled up (similar to the conditions in which an interpreter would interpret). Additionally, we currently require that you sign slowly and clearly with your entire upper body on screen. These limitations will ease over time as we continue to develop and improve Sign-Speak.

Who makes up your team?

Our team is made up of Deaf and Hearing engineers, linguists, and machine learning researchers. In addition to our core team, we are working with advisers from RIT (Rochester Institute of Technology) and NTID (the National Technical Institute for the Deaf). Our members are ex-Google, ex-DOW, and ex-Microchip. This is largely a labor of love and completely self-funded (if you want to support us, feel free to sign up for the beta waitlist). We are currently looking to hire more members of the Deaf community (see our job postings!).

Have other questions? Feel free to contact us.