On the last day of WWDC 2021 I watched the sessions for the new SharePlay framework, which lets users have shared app experiences while they are on a FaceTime call together. Immediately after the session I thought back to what I had wanted to build for Unmute in April: I tried (unsuccessfully, for technical reasons) to provide subtitles for a phone call to help deaf people have more meaningful phone conversations. It seemed like the SharePlay APIs would make this possible.
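
For anyone curious what the starting point looks like: a SharePlay experience is defined by a `GroupActivity`. The sketch below is not Navi's actual code; `SubtitleSession` and its metadata are hypothetical names, just the minimal shape the GroupActivities framework expects.

```swift
import GroupActivities

// A minimal SharePlay activity definition. "SubtitleSession" and its
// metadata are hypothetical, sketching the shape the framework expects.
struct SubtitleSession: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Live Subtitles"
        metadata.type = .generic
        return metadata
    }
}
```

Calling `activate()` on an instance of this activity while on a FaceTime call is what starts a `GroupSession` on every participant's device.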

On Saturday I participated in a hackathon and managed to get a first version up and running: the Mac performs speech-to-text and generates real-time subtitles on both iOS and macOS devices. Combined with a translation API, this let people who speak different languages communicate through FaceTime.
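
The underlying idea is straightforward: run speech recognition on one device and push each partial transcript to the other participants over the group session. The sketch below builds on the hypothetical `SubtitleSession` activity above and uses the Speech and GroupActivities frameworks; the message type, class, and method names are illustrative, not Navi's real implementation.

```swift
import Speech
import GroupActivities

// Hypothetical payload sent to other participants; the real app's message
// format is not public, so this is purely illustrative.
struct SubtitleMessage: Codable {
    let text: String
    let languageCode: String
}

final class SubtitleBroadcaster {
    private var messenger: GroupSessionMessenger?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let request = SFSpeechAudioBufferRecognitionRequest()

    // Join the group session and keep a messenger around for sending transcripts.
    func configure(with session: GroupSession<SubtitleSession>) {
        messenger = GroupSessionMessenger(session: session)
        session.join()
    }

    // Run speech recognition and forward each partial transcript to the
    // other FaceTime participants over the group session.
    func startTranscribing() {
        request.shouldReportPartialResults = true

        // Audio capture (e.g. appending AVAudioEngine buffers to `request`
        // via request.append(_:)) is omitted to keep the sketch short.
        let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
        recognitionTask = recognizer?.recognitionTask(with: request) { [weak self] result, _ in
            guard let result = result, let messenger = self?.messenger else { return }
            let message = SubtitleMessage(text: result.bestTranscription.formattedString,
                                          languageCode: "en-US")
            // send(_:) is async; fire-and-forget is fine for transient subtitles.
            Task { try? await messenger.send(message) }
        }
    }
}
```

On the receiving side, each device can listen with `for await (message, _) in messenger.messages(of: SubtitleMessage.self)`, optionally run the text through a translation API, and render it as subtitles.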

In the weeks since then I've been exploring what's possible with this new framework, and I've been working hard on improving the user experience. For example, on macOS the app integrates directly into the FaceTime window, essentially adding subtitles as a native feature of the call.

I can't wait to build this out over the rest of the summer, with more accessibility options as well as easy transcript search and more.

<aside> 📲 More information and download at www.getnavi.app </aside>

https://www.youtube.com/watch?v=lmBwn-THjtE

https://www.youtube.com/watch?v=wRO0mS0Udhs