After some time spent trying to connect the ESP32 to the computer, I was testing the touch sensor on a piece of paper with a button drawn in graphite. A question arose: who needs touch to communicate? From that I assembled a mind map of the insights that were emerging, and I ended up arriving at the idea of a Talking Menu.
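The graphite test itself is simple: the ESP32 has built-in capacitive touch channels, so a drawn button only needs a wire to one of the touch pins. Below is a minimal sketch of what that test might look like, assuming the Arduino core for ESP32; the pin choice (T0/GPIO4) and the threshold value are assumptions and need calibration for each drawing.

```cpp
// Minimal test for an ESP32 capacitive touch pin wired to a
// graphite button drawn on paper.
const int TOUCH_PIN = T0;       // touch channel T0 = GPIO4 (assumed wiring)
const int TOUCH_THRESHOLD = 30; // assumed; readings drop when touched

void setup() {
  Serial.begin(115200);
}

void loop() {
  int value = touchRead(TOUCH_PIN); // raw capacitance reading
  if (value < TOUCH_THRESHOLD) {
    Serial.println("Graphite button touched!");
  }
  delay(100);
}
```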

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/5c236946-c2a9-447f-97d4-31b21df18134/20201216_001727.jpg

The idea came from a situation I remembered from a Netflix series that shows the day-to-day life of deaf-mute students. What caught my attention was the difficulty they have ordering food or drink quickly in restaurants or bars. From that came the idea of a Talking Menu: the user just touches the buttons for the food and drink they want to order, and the menu plays audio corresponding to the choices that were made. At the same time, the audio is transcribed on the computer, so even in a very noisy place the attendant can read it and take the order without any interference.

Who: Deaf-Mute Community

What: Talking Menu

When: When a deaf-mute goes to order food or drink

Where: In restaurants, cafes and bars

Why: To enable deaf-mutes to quickly communicate with attendants who do not know sign language in everyday service situations, such as ordering food or drink in a restaurant.

How: The menu will play audio and, at the same time, transcribe the request selected through the buttons pressed by the user (a code sketch of this flow follows the example below)

Example: Order a coffee in a coffee shop
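As a sketch of that flow, the idea maps each touch pad to a menu item: a touch "speaks" the item and prints the transcription over serial so the attendant can read it on the computer. This is only an outline under assumptions; the pin and track assignments are placeholders, and playAudio is a hypothetical hook for whatever sound module ends up attached (e.g. a DFPlayer-style MP3 module).

```cpp
// Talking Menu flow sketch: each touch pad maps to a menu item.
// On a touch, the menu plays the item's audio clip and prints the
// transcription over serial for the attendant's screen.
struct MenuItem {
  int pin;          // ESP32 touch channel
  const char* name; // transcription shown on the computer
  int track;        // audio track number (assumption: one clip per item)
};

MenuItem menu[] = {
  { T0, "Coffee", 1 },  // pin/track assignments are assumptions
  { T3, "Water",  2 },
  { T4, "Cake",   3 },
};

const int TOUCH_THRESHOLD = 30; // needs calibration for graphite pads

void playAudio(int track) {
  // Hypothetical hook: trigger the clip for this track on the attached
  // sound module (implementation depends on the chosen hardware).
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  for (MenuItem& item : menu) {
    if (touchRead(item.pin) < TOUCH_THRESHOLD) {
      playAudio(item.track);    // menu "speaks" the choice
      Serial.print("Order: "); // transcription for the attendant
      Serial.println(item.name);
      delay(500);              // crude debounce between touches
    }
  }
}
```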

Sketches

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/f5b1dc81-29a0-4e3c-b9d3-c253daff5957/20201216_005527.jpg

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/00dc0bc5-3b63-4e50-a5d4-d208907529af/20201216_005542.jpg