Q1

The relationship between labels and images in machine learning datasets is never neutral. Images do not contain inherent meanings; they are given meaning through the labels attached to them. A photograph of a person smiling might be labeled “happy,” “flirtatious,” or even “slattern,” depending on the taxonomy chosen by dataset creators. As Kate Crawford and Trevor Paglen argue in Excavating AI, this act of labeling transforms ambiguous, context-rich images into fixed categories that machines then treat as objective truth.

The power to label images is concentrated in the hands of dataset builders: researchers, corporations, and the armies of low-paid crowd workers they employ. These groups make critical decisions about which categories exist, what counts as a “neutral” label, and how to classify people. For example, datasets like ImageNet once included labels such as “loser” or “kleptomaniac” applied to ordinary people’s photos scraped from the internet, showing how value judgments and cultural prejudices can be coded into technical infrastructure.

Once these labels are used to train models, their impact spreads far beyond the lab. Machine learning systems built on such data are now used in hiring, policing, education, and healthcare. They can determine who gets flagged as a “threat,” who is seen as “trustworthy,” or who is deemed a “qualified” job candidate. In this way, the biases and assumptions embedded in training sets become institutionalized, shaping people’s opportunities and rights.

Ultimately, labels are acts of power. They decide what is visible, what is ignored, and how people are represented. When models trained on biased or harmful labels are deployed at scale, they risk reinforcing old hierarchies of race, gender, and class under the guise of scientific objectivity. This makes it crucial to question not only how accurate models are, but also who has the authority to define the categories that structure them, and whose voices are left out of that process.

Q2

References:
1. https://thecodingtrain.com/tracks/teachable-machine/teachable-machine/2-snake-game
2. https://www.youtube.com/watch?v=fTKUPehFF2A

My p5.js sketch: https://editor.p5js.org/Sitong_Zhou_Silvia/full/S3c6HmS7r3

Documentation:

  1. I first trained the model with photos of my hand pointing in four directions. I gave it around 400 photos to make sure it learned well.

[Screenshot 2025-09-15 at 11.43.00 AM.png]

  2. Then I followed a YouTube tutorial to make the Snake game, which matches the four directions from my Teachable Machine model (a rough sketch of the direction mapping follows the screenshot below).

[Screenshot 2025-09-15 at 11.59.33 AM.png]
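To show the idea behind the game logic, here is a minimal sketch of how a predicted label can steer the snake. This assumes the four Teachable Machine classes are named "Up", "Down", "Left", and "Right" (the actual names depend on how the classes were labeled during training):

```javascript
// Minimal snake-movement sketch: a global `label` (normally set by the
// classifier's callback) steers the snake head on a grid.
let x = 200, y = 200;   // snake head position
let xdir = 1, ydir = 0; // current direction, in grid steps
let label = "Right";    // placeholder; updated elsewhere by the classifier

function setup() {
  createCanvas(400, 400);
  frameRate(8); // slow, grid-like steps, as in the tutorial's snake
}

function draw() {
  background(220);
  // Map the predicted label to a direction change
  if (label === "Up")    { xdir = 0;  ydir = -1; }
  if (label === "Down")  { xdir = 0;  ydir = 1;  }
  if (label === "Left")  { xdir = -1; ydir = 0;  }
  if (label === "Right") { xdir = 1;  ydir = 0;  }
  // Move one grid cell and wrap around the canvas edges
  x = (x + xdir * 20 + width) % width;
  y = (y + ydir * 20 + height) % height;
  rect(x, y, 20, 20); // draw the snake head
}
```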

  3. Finally, I put my model link into the sketch (a rough loading sketch follows the screenshot below).

[Screenshot 2025-09-15 at 12.20.58 PM.png]
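For reference, this is roughly how the model link is wired into the sketch, following the ml5.js 0.x API that the Coding Train tutorial uses (in this sketch the model URL is a placeholder, not my real link, and ml5.js must be included via a script tag in index.html):

```javascript
// Loading a Teachable Machine image model in p5.js with ml5.js (0.x API).
let classifier;
let video;
let label = "waiting...";
// Placeholder URL; the real one comes from Teachable Machine's "Export model" panel
const modelURL = "https://teachablemachine.withgoogle.com/models/XXXXXXX/";

function preload() {
  // ml5 appends the model files to the base URL
  classifier = ml5.imageClassifier(modelURL + "model.json");
}

function setup() {
  createCanvas(400, 400);
  video = createCapture(VIDEO);
  video.hide(); // hide the raw DOM video; we draw it on the canvas instead
  classifier.classify(video, gotResult); // start classifying webcam frames
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  label = results[0].label;              // highest-confidence class
  classifier.classify(video, gotResult); // classify the next frame
}

function draw() {
  image(video, 0, 0, width, height);
  text(label, 10, height - 10); // show the current prediction
}
```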