I built upon this project, which asks users to define their own categories and then returns the output along with a confidence value. While the other three examples all built upon existing datasets and models, I thought it would be fun to simply play around with the confidence, since in past lectures many unexpected outputs came from differences in confidence.
The simplest thing I could think of was using colors, as I did in the voice model assignment.
So I made a small edit to the gotResult function.

I used a global variable, currentAConfidence, to store the confidence of the current category.
Then I update the background color in the draw() function, as sketched below.
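Here is a minimal sketch of that kind of change, assuming an ml5 classifier (model setup and training omitted) that calls gotResult() with an array of { label, confidence } objects. Apart from currentAConfidence and the background update, the variable names, colors, and the exact callback signature are illustrative and may differ from my actual code or your ml5 version.

```javascript
// Sketch globals
let video;
let classifier;               // trained ml5 model (setup/training omitted here)
let currentLabel = "waiting...";
let currentAConfidence = 0;   // global storing the top result's confidence (0–1)

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // classifier = ml5.neuralNetwork({...});    // model creation/training omitted
  // classifier.classify(video, gotResult);    // start classifying once trained
}

function draw() {
  // Map the confidence to a background color so the sketch visibly
  // reacts to how sure the model is about the current category.
  let c = lerpColor(color(30, 30, 80), color(255, 120, 160),
                    constrain(currentAConfidence, 0, 1));
  background(c);

  // Small webcam preview and a readout of the current prediction.
  image(video, width - 170, height - 130, 160, 120);
  fill(255);
  textSize(16);
  text(currentLabel + " (" + nf(currentAConfidence, 1, 2) + ")", 10, 24);
}

// Classification callback; the (error, results) signature follows older
// ml5 examples and may differ in newer versions.
function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  currentLabel = results[0].label;
  currentAConfidence = results[0].confidence;  // saved for draw() to use
  classifier.classify(video, gotResult);       // keep the classify loop going
}
```

The only real change from a standard ml5 example is saving the confidence into a global so draw() can use it every frame.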

And the demo video is below.
https://drive.google.com/file/d/1NfKVUBN1wZXujz22-VvopbLfSBpdoP6X/view?usp=drive_link
This is not a big change; it just adds a bit more reaction to the results. I feel like the confidence values are not that useful for research or real-world applications, but I think they have great potential in creative projects.
This small addition helped me explore how even subtle feedback can make the interaction feel more alive and engaging. By linking the model’s confidence to visual elements like background color, I could immediately see the effect of different inputs and gestures, which made the learning process more intuitive and fun. It also highlighted the creative possibilities of combining machine learning with visual coding—how real-time AI predictions can be used not just for accuracy, but for aesthetic and interactive experiences. Overall, this project gave me a clearer understanding of ml5.js, webcam input processing, and the potential of simple neural networks in experimental and artistic applications.