After some time spent getting the ESP32 connected to the computer, I tested its touch sensor on a button drawn on paper with graphite. A question arose: who needs touch to communicate? From that, I assembled a mental map of the insights that were emerging and ended up with the idea of a Talking Menu.
The idea came from a Netflix series I remembered that shows the day-to-day life of deaf-mute students, which drew my attention to the difficulty they have ordering food or drink quickly in restaurants or bars. From that came the idea of a Talking Menu. The user just touches the buttons for the food and drink they want to order, and the menu plays an audio clip describing the choices that were made. At the same time, the audio is transcribed on the computer, so that in a very noisy place the attendant can read it and take the order without any interference.
Who: Deaf-Mute Community
What: Talking Menu
When: When a deaf-mute person goes to order food or drink
Where: In restaurants, cafes and bars
Why: To enable deaf-mute people to communicate quickly with attendants who do not know sign language in everyday situations at places such as restaurants, for example when ordering food or drink.
How: The menu plays audio and, at the same time, transcribes the order that the user selected through the buttons
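The "How" above can be sketched as a mapping from the touched buttons to the sentence that would be played as audio and shown on screen for the attendant. This is an illustrative sketch only: the menu items, their order, and the function name are assumptions, not the project's real menu.

```cpp
#include <string>
#include <vector>

// Hypothetical menu: the index is the touch-button number on the
// paper menu. Items are placeholders for illustration.
const std::vector<std::string> kMenu = {
    "coffee", "orange juice", "cheese sandwich", "slice of cake"};

// Builds the sentence that would be spoken aloud and transcribed on
// the computer, from the buttons the user touched.
std::string buildOrder(const std::vector<int>& selected) {
    if (selected.empty()) return "No items selected.";
    std::string out = "I would like to order";
    for (size_t i = 0; i < selected.size(); ++i) {
        // Join items with commas and an "and" before the last one.
        out += (i == 0) ? " " : (i + 1 == selected.size() ? " and " : ", ");
        out += kMenu[selected[i]];
    }
    return out + ", please.";
}
```

For example, touching buttons 0 and 2 would produce "I would like to order coffee and cheese sandwich, please.", which the computer could both display and pass to a text-to-speech engine.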
Sketches