Research and find a project (experiments, websites, art installations, games, etc.) that utilizes machine learning in a creative way. Consider the following:
The project that I found: ins-bonnie2.0.0
Screenshots:


- What type of machine learning models did the creator use?
- It uses the MediaPipe plugin to do hand-tracking and eye-tracking in TouchDesigner. Blinking changes the planes' positions randomly; the distance between hand 1's thumb and middle finger controls the depth of the planes, while the rotation of hand 2 determines the rotation of the planes.
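The pinch-to-depth mapping described above can be sketched in plain JavaScript. The landmark indices (4 = thumb tip, 12 = middle-finger tip) and normalized coordinates follow MediaPipe's hand-landmark convention, but the function names and the distance/depth ranges here are made-up illustrations, not the project's actual code:

```javascript
// Distance between the thumb tip and middle-finger tip of one hand.
// Landmarks are assumed to be normalized {x, y} points in [0, 1],
// indexed as in MediaPipe Hands (4 = thumb tip, 12 = middle finger tip).
function pinchDistance(landmarks) {
  const thumb = landmarks[4];
  const middle = landmarks[12];
  return Math.hypot(thumb.x - middle.x, thumb.y - middle.y);
}

// Linearly map a distance range onto a depth range (ranges are made up).
function distanceToDepth(d, minD = 0.02, maxD = 0.3, minZ = 0, maxZ = 100) {
  const t = Math.min(Math.max((d - minD) / (maxD - minD), 0), 1);
  return minZ + t * (maxZ - minZ);
}
```

In TouchDesigner the same idea would be wired up with CHOPs rather than written as functions, but the mapping is the same: one scalar from the tracker drives one parameter of the planes.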
- What data might have been used to train the machine learning model?
- I think the models were trained on large image and video datasets of hands and faces in different positions, under varied lighting conditions, and from many different people.
- Why did the creator of the project choose to use this machine learning model?
- I guess it's because MediaPipe provides pre-trained, real-time models, so the creator didn't need to train a model from scratch and could quickly connect body movements and facial gestures to visual effects in TouchDesigner.
Coding Exercise
I was inspired by a classical Chinese poem that I really like, and I wanted to create an interactive experience with its text. Using hand gestures, the interaction works in two ways:
- When the palm is open, the position of the hand on the screen (left or right) changes the size of the poem’s lines.
- When making a fist with one finger extended, the finger can “stir” or disturb the text on the screen.
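The two interactions above boil down to two small mappings, sketched below in plain JavaScript. Everything here is illustrative: the hand's x position is assumed to be normalized to [0, 1], and the size range, stir radius, and strength values are invented for the example:

```javascript
// Open palm: map the hand's horizontal position (0 = left, 1 = right)
// to a font size for the poem's lines. Range values are illustrative.
function palmXToSize(x, minSize = 12, maxSize = 72) {
  const t = Math.min(Math.max(x, 0), 1);
  return minSize + t * (maxSize - minSize);
}

// Extended finger: push a character away from the fingertip to "stir" it.
// Both points are in screen coordinates; radius/strength are made up.
function stirOffset(charPos, fingerPos, radius = 80, strength = 25) {
  const dx = charPos.x - fingerPos.x;
  const dy = charPos.y - fingerPos.y;
  const d = Math.hypot(dx, dy);
  if (d === 0 || d > radius) return { x: 0, y: 0 };
  const push = strength * (1 - d / radius); // stronger when closer
  return { x: (dx / d) * push, y: (dy / d) * push };
}
```

In the sketch itself, `palmXToSize` would feed `textSize()` each frame, and `stirOffset` would be added to each character's resting position while the pointing gesture is active.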
However, the hand-tracking is not very stable, especially when I make a fist. Because of this, although I intended to highlight two particular lines of the poem, their color and size keep changing due to unstable gesture detection. I’m not fully satisfied with this effect. 😢
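One common way to steady flickery per-frame gesture labels is to only switch to a new gesture after it has been detected for several consecutive frames. This is a generic debouncing sketch, not part of the original project; the class name and frame count are made up:

```javascript
// Debounce noisy per-frame gesture labels: only switch to a new gesture
// after it has been seen for `hold` consecutive frames.
class GestureSmoother {
  constructor(hold = 5) {
    this.hold = hold;
    this.current = null;   // the stable, reported gesture
    this.candidate = null; // the gesture we might switch to
    this.count = 0;
  }
  update(label) {
    if (label === this.current) {
      this.candidate = null;
      this.count = 0;
    } else if (label === this.candidate) {
      if (++this.count >= this.hold) {
        this.current = label;
        this.candidate = null;
        this.count = 0;
      }
    } else {
      this.candidate = label;
      this.count = 1;
    }
    return this.current;
  }
}
```

Feeding each frame's raw classification through `update()` means a single misdetected frame of "fist" can no longer flip the highlighted lines' color and size.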
Code
I created many objects in the characters group, and then used `split()` to extract the individual characters from the sentence and arranged them vertically.
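The split-and-arrange step might look something like this. The function name, starting position, and spacing are assumptions, and the sample line (from Li Bai's "Quiet Night Thoughts") is just a stand-in for the actual poem:

```javascript
// Split a line of the poem into individual characters and assign each
// a vertical position, mimicking a classical top-to-bottom column.
function layoutColumn(line, x, topY = 40, spacing = 36) {
  return [...line].map((ch, i) => ({
    char: ch,
    x: x,
    y: topY + i * spacing,
  }));
}
```

Each resulting `{char, x, y}` object can then be drawn with `text(char, x, y)` every frame, which makes it easy to add the stir offset to `x` and `y` later.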
