https://s3-us-west-2.amazonaws.com/secure.notion-static.com/6a8129b6-b6ed-4f28-a12c-af11286ebf14/Frame_25.png

Vocable AAC is a free, open-source app that lets people with conditions such as MS, stroke, ALS, or spinal cord injuries communicate by tracking their head movements, without needing to spend tens of thousands of dollars on dedicated hardware. Since its first release in March 2020, the app has been downloaded more than 3,500 times and has been localized into English, German, Italian, Arabic, Spanish, and Dutch. Because of its applications for COVID-19 patients, use of and interest in Vocable AAC have grown within the medical community.

https://vimeo.com/394212430

Links

Background

Talk about Vocable (formerly eyespeak) that I gave at WillowTree's annual summer conference in 2019

This project began in mid-2018 when a loved one was diagnosed with Guillain-Barré Syndrome, which left her temporarily non-verbal and without motor function. At the time, we assumed medical technology had reached a point where she could communicate with us or her caregivers, but every solution cost thousands of dollars and none was available immediately. Nor were there any apps that did anything remotely similar to what she needed. At the 3:00 mark in the video, you can see how we used a poster board with the alphabet written on it to communicate with her. Frustrated that no hardware or software existed for communicating effectively and affordably without speech, I worked with colleagues from WillowTree to build one.

Testing out eye tracking

Interacting with UI

The first keyboard we built—a two-stroke keyboard

Through months of trial and error, research, and interviews with speech-language pathologists and assistive technology professionals, we built a proof-of-concept app that could track someone's head movement using ARKit, the same technology that powers Face ID and Animoji. Gradually, we added more layouts, identified best practices for designing head-controlled interfaces, and connected our tracking method to the UI. In January 2020 the team grew, which let us add more features to the iOS app and build an Android app with feature parity. In March 2020, we launched the first version to both app stores and received great feedback from the medical community. The app is released under an MIT open-source license, and we are working with contributors to localize it into as many languages as possible and to build new features as user feedback comes in.
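To give a flavor of the core idea, here is a minimal sketch of how a head pose can drive an on-screen cursor. This is not Vocable's actual implementation: in the real app, ARKit's `ARFaceAnchor` supplies the head transform each frame, and the function name, the virtual-distance parameter, and the clamping behavior below are all illustrative assumptions. The sketch just shows the geometry: treat the head as a pointer at a fixed virtual distance from the display, so small rotations sweep the cursor across the screen.

```python
import math

def head_pose_to_cursor(yaw, pitch, screen_w, screen_h, distance_px=1200.0):
    """Map head rotation angles (radians) to a 2D cursor position.

    Illustrative sketch, not Vocable's real code. yaw > 0 means the
    head is turned right; pitch > 0 means it is tilted up. distance_px
    is an assumed virtual distance from head to screen, in pixels:
    larger values make the cursor less sensitive to head movement.
    """
    # Project the gaze ray onto the screen plane, starting from center.
    x = screen_w / 2 + distance_px * math.tan(yaw)
    y = screen_h / 2 - distance_px * math.tan(pitch)
    # Clamp so the cursor never leaves the screen.
    x = min(max(x, 0.0), screen_w)
    y = min(max(y, 0.0), screen_h)
    return x, y
```

With a neutral pose, `head_pose_to_cursor(0.0, 0.0, 1024, 768)` lands the cursor at the screen center, `(512.0, 384.0)`; turning the head right (positive yaw) moves it toward the right edge. A production version would also smooth the raw pose over several frames, since head tracking is noisy and a jittery cursor makes dwell-based selection much harder.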