The reading reveals that the interpretation and classification of images are influenced by factors such as politics, ideology, prejudice, and history. Magritte’s example shows that there is always a gap between labels and images. In the digital age, however, machine learning assumes that this unstable, non-absolute relation between images and labels can somehow be fixed as “objective”.
As for who has the power to label images, I think there are several stakeholders involved.
As for how those labels and machine learning models impact society:
For this assignment, I tried to make a Paper-Scissors-Stone game.
For the image classification part, the three classes correspond to the three hand gestures. At first, I collected about 40 images for each gesture, but during testing I found the accuracy was not good enough. So I added more images of the hand gestures taken from different angles, eventually building a set of about 150 images per class, which improved the Teachable Machine model’s accuracy.
https://drive.google.com/file/d/181CZyJ5HeEThmgGPY4gnJdpEbSx6IBVA/view?usp=sharing
For the coding part, I mainly made some adjustments to the Image Classification example code.
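Below is a minimal sketch of that adjusted example, assuming the standard ml5.js imageClassifier API; the model URL is a placeholder for the link exported from Teachable Machine, not the actual one I used:

// Minimal p5.js + ml5.js sketch (p5 and ml5 libraries loaded in index.html).
// "YOUR_MODEL_URL" is a placeholder for the model link exported from Teachable Machine.
let classifier;
let video;
let label = "";

const imageModelURL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL_URL/";

function preload() {
  // Load the image classification model exported from Teachable Machine
  classifier = ml5.imageClassifier(imageModelURL + "model.json");
}

function setup() {
  createCanvas(320, 260);
  video = createCapture(VIDEO);
  video.size(320, 240);
  video.hide();
  classifyVideo();
}

function classifyVideo() {
  // Classify the current video frame; gotResult receives the predictions
  classifier.classify(video, gotResult);
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label; // highest-confidence class: "Paper", "Scissors", or "Stone"
  classifyVideo();          // classify again to keep the loop going
}

function draw() {
  image(video, 0, 0);
  fill(255);
  text(label, 10, height - 4);
}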
After importing the machine learning model exported from Teachable Machine, I randomized the computer’s choice and compared it with the player’s choice.
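Roughly, that logic looks like the sketch below; computerChoice() and judge() are just illustrative helper names, not the exact ones in my code:

// Rough sketch of the game logic; computerChoice() and judge() are illustrative helpers.
const choices = ["Paper", "Scissors", "Stone"];

function computerChoice() {
  // p5's random() returns one random element when given an array
  return random(choices);
}

function judge(playerLabel, computerLabel) {
  if (playerLabel === computerLabel) return "Draw";
  const playerWins =
    (playerLabel === "Paper" && computerLabel === "Stone") ||
    (playerLabel === "Scissors" && computerLabel === "Paper") ||
    (playerLabel === "Stone" && computerLabel === "Scissors");
  return playerWins ? "You win" : "You lose";
}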
At first, the result was always "You lose," and I thought it was because the judgment was made before the model had finished loading. So I added a condition to check whether the model was fully loaded before making the judgment.
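One way to sketch this check, building on the snippet above, is to pass a callback to ml5.imageClassifier and only judge once a flag has been set (the spacebar trigger here is just for illustration; in the actual game the countdown described below plays that role):

// Loaded-check sketch; replaces the preload() from the earlier snippet.
let modelIsReady = false;

function preload() {
  classifier = ml5.imageClassifier(imageModelURL + "model.json", () => {
    modelIsReady = true; // fires once the model has finished loading
  });
}

function keyPressed() {
  // Only judge a round once the model is ready and a label has been produced
  if (key === " " && modelIsReady && label !== "") {
    console.log(judge(label, computerChoice()));
  }
}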
I also ran into some errors. For instance, although the model worked well on the Teachable Machine web page, it sometimes failed to recognize gestures after being imported into p5.js. Since the model wasn't very stable, I added a three-second countdown before capturing the player's image and reading the label, hoping this would make the result more stable.
Code for countdown: