<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> sketch with Channel

We came up with this idea while browsing videos. One video's thumbnail had the channel's icon perfectly positioned over a person's mouth, and we suddenly thought: why not create a food-eating game that lets people savor the pleasure of eating delicious food without worrying about gaining weight? Then this game appeared. ;-D

We actually ran into a lot of problems in the coding part, and one is still unsolved. We uploaded images of six different foods, hoping that after the player "eats" one, the newly generated food will transform into a different type. However, we can only make it either change constantly regardless of the mouth's position or stick to a single type. (One possible fix is sketched after the screenshots below.)

[Screenshot 2023-09-11 12.07.40 PM.png]

[Screenshot 2023-09-11 12.09.55 PM.png]
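Here is a minimal p5.js sketch of one way to get the behavior we want, assuming the mouth position is available as a pair of coordinates (the mouse stands in for it here, and the `food0.png` … `food5.png` filenames are placeholders). The key is that the new food type is picked once, inside the collision check, not unconditionally in `draw()`: calling `random()` every frame makes the image change constantly, while never calling it again leaves it stuck on one type.

```javascript
let foods = [];   // the six food images
let current;      // index of the food type currently on screen
let foodX, foodY;
const FOOD_SIZE = 60;

function preload() {
  for (let i = 0; i < 6; i++) {
    foods.push(loadImage('food' + i + '.png')); // placeholder filenames
  }
}

function setup() {
  createCanvas(640, 480);
  respawn();
}

function draw() {
  background(220); // repainting every frame also prevents trails (see below)
  image(foods[current], foodX, foodY, FOOD_SIZE, FOOD_SIZE);
  // "Eat" when the mouth (the mouse, in this sketch) touches the food.
  if (dist(mouseX, mouseY, foodX + FOOD_SIZE / 2, foodY + FOOD_SIZE / 2) < FOOD_SIZE / 2) {
    respawn(); // a new type is chosen ONCE, at the moment of eating
  }
}

function respawn() {
  current = floor(random(foods.length)); // choosing here, not in draw(),
  foodX = random(width - FOOD_SIZE);     // keeps the type stable between bites
  foodY = random(height - FOOD_SIZE);
}
```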

Meanwhile, I seem to have forgotten how to keep an object from leaving a trail behind when it moves. I tried setting the background in the setup function but failed to get the result I wanted.
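If I remember correctly, the trail appears because `setup()` runs only once, so the background is never repainted and every old frame of the moving object stays visible. Moving the `background()` call to the top of `draw()` repaints it each frame, as in this minimal sketch:

```javascript
function setup() {
  createCanvas(400, 400);
  // background(0); // clearing only here, once, leaves trails behind
}

function draw() {
  background(0); // repaint every frame so the circle leaves no track
  circle(frameCount % width, 200, 40); // a circle sliding across the canvas
}
```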

</aside>

<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> I like the sign language translator the most. Sign language is of immense significance to many deaf individuals, as it serves as their primary mode of communication and expression. While it plays a crucial role in fostering a sense of belonging and communication within the Deaf community, it's important to recognize that in daily life the majority of people do not use sign language, which can create a gap between Deaf individuals and hearing people. The development of a sign language translator has the potential to bridge this gap, enabling Deaf individuals to participate more fully in mainstream society. This not only promotes inclusivity but also contributes to increased self-esteem and a strengthened cultural identity within the Deaf community. I once wrote a paper about ASL and the importance of bilingual code-switching in ASL for my WAI course. This idea reminds me of how I felt while searching for academic sources about the Deaf community and being surprised by how big a challenge they face: the research paper

</aside>

<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> Here is the answer provided by ChatGPT:

"Coded Bias" raises several important issues related to artificial intelligence (AI) and algorithmic bias. Here is a list of some of the key issues addressed in the documentary:

  1. Algorithmic Bias: The film highlights how AI and machine learning algorithms can perpetuate bias and discrimination, particularly against marginalized groups (to be specific, women and people of color), by producing inaccurate or unfair results.
  2. Facial Recognition Technology: "Coded Bias" explores the problems associated with facial recognition technology, including its potential for misuse, invasions of privacy, and the harmful consequences of misidentifying individuals.
  3. Racial and Gender Bias: The documentary specifically addresses the racial and gender biases present in AI systems, as seen through Joy Buolamwini's research, which revealed that many facial recognition systems were less accurate in identifying darker-skinned faces and women.
  4. Civil Rights and Privacy: The film discusses the civil rights implications of biased AI, particularly in areas like law enforcement and surveillance, where biased algorithms can lead to unfair treatment and privacy violations.
  5. Social Justice: "Coded Bias" underscores how biased algorithms can exacerbate existing inequalities and contribute to systemic discrimination, raising questions about fairness and justice in AI applications.
  6. Ethical AI Development: The documentary emphasizes the need for responsible and ethical AI development, including transparency, accountability, and the importance of diverse teams in creating AI systems. (If I'm not missing something, in the documentary Joy said that a facial recognition system that only worked effectively for white men had been invented by white men. From that, one could infer that the diversity should be on the tech team. But others might instead think, "oh, there could just be someone who checks whether the system works effectively for everyone," and blame the unfairness on limited staffing. If that's true, then ChatGPT seems to be making further inferences for us, which limits people's own thinking.)
  7. Surveillance Society: It examines the growth of surveillance technologies and their impact on personal freedoms and human rights, including concerns about mass surveillance and data collection.
  8. Activism and Advocacy: "Coded Bias" profiles individuals and groups who are actively advocating for greater awareness of algorithmic bias and pushing for reforms in AI development and deployment.
  9. Regulatory Challenges: The film discusses the challenges of regulating AI technologies, including the need for governments and organizations to develop policies and safeguards against biased algorithms.
  10. Education and Empowerment: It emphasizes the importance of educating the public about AI and algorithmic bias, as well as empowering individuals to understand and question the technology that impacts their lives.

Overall, "Coded Bias" serves as a thought-provoking exploration of the complex and multifaceted issues surrounding AI bias, privacy, and social justice, encouraging viewers to critically examine the role of technology in society.

In any case, ChatGPT did a good job and summarized the film far better than I could. However, I still think one point is missing: it is humans' conscious or unconscious biases that get embedded in the technology.

</aside>

<aside> <img src="/icons/chat_lightgray.svg" alt="/icons/chat_lightgray.svg" width="40px" /> For COCO:

I did a rough search and didn't find any criticism aimed specifically at COCO; it seems COCO is doing a good job so far. I explored the raw training data, and there is quite a wide range of images for the different categories. To train the AI better, the photos capture different angles, the various appearances and shapes of every class, and varying distances between the objects and the camera. Almost every image is carefully annotated to identify the objects it shows. But I also noticed that in some pictures, certain objects that should have been outlined were not.

[Screenshot 2023-09-09 8.41.37 PM.png]

In this photo, not all of the people are annotated. I wonder whether this was done deliberately: there are so many people that they block one another, and their individual features and postures are not fully visible. Maybe those unclear features could mislead the AI? Maybe there is a limit on the number of annotations per image? Either way, I'm not sure what the result would be if every person in this photo were outlined.

[Screenshot 2023-09-09 9.18.47 PM.png]

[Screenshot 2023-09-09 9.18.24 PM.png]

Another point I found is that there may be some imbalance between the classes. For example, 66,808 photos contain people, while only 3,362 photos contain birds. This could mean that a model trained on COCO is better at recognizing humans than birds.
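Counts like these can be reproduced straight from COCO's annotation JSON. Below is a rough Node.js sketch (the `instances_train2017.json` filename is an assumption about which split is being counted; the field names come from the standard COCO instances format). The `iscrowd` flag it skips marks a single mask covering a whole group of objects, which may also be one reason not every person in a crowded photo is individually outlined.

```javascript
const fs = require('fs');

// Load a COCO instances annotation file (filename assumed).
const coco = JSON.parse(fs.readFileSync('instances_train2017.json', 'utf8'));
const catNames = new Map(coco.categories.map(c => [c.id, c.name]));

// category id -> set of distinct images containing at least one such object
const imagesPerCat = new Map();
for (const ann of coco.annotations) {
  if (ann.iscrowd) continue; // crowd regions are one merged mask, not individuals
  if (!imagesPerCat.has(ann.category_id)) {
    imagesPerCat.set(ann.category_id, new Set());
  }
  imagesPerCat.get(ann.category_id).add(ann.image_id);
}

// Print categories from most to least frequent to expose the imbalance.
[...imagesPerCat.entries()]
  .sort((a, b) => b[1].size - a[1].size)
  .forEach(([id, imgs]) => console.log(catNames.get(id), imgs.size));
```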

</aside>