Click here to view the whole animation.

“This traditionally created animation traces the impact of generative AI tools on the lives of cultural workers, represented by insects and a glowing yellow orb (the LLM). Zijin Huang reflects here on the repositioning of her own role after the emergence of AI. On a separate screen, the audience can chat with any of the three main protagonists (the LLM, the technologist, the user), enabled by a language model.”

My initial idea was actually about dementia in the elderly. I wanted to use AI and code to create an interactive storytelling experience. I still remember what our professor said: “It could be a little bit tricky to compare the human brain to AI.” I suddenly realized that this is indeed a weighty topic, and that I might not be able to handle its subtle relationships well within the limited time and with my limited understanding.
This led me to reflect on what kind of project could invite audience interaction at the IMA show, be relevant to the course content, and be something I could stay motivated to pursue. That's when I considered animation. Maybe a story reflecting on my own, and humanity's, role in the world after AI's emergence? Additionally, I decided to apply the coding knowledge acquired in this class to build a user-friendly interactive dialogue system.
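In rough terms, the dialogue system boils down to giving each protagonist its own persona prompt and relaying the audience's messages to a language model. Here is a minimal sketch of that idea, assuming an OpenAI-style chat API; the persona texts, the model name, and the `chat` helper are hypothetical illustrations, not the actual code behind the show.

```python
# Minimal sketch of a three-character chat (hypothetical; not the show's
# actual code). Assumes the OpenAI Python SDK and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()

# Hypothetical persona prompts, one per protagonist.
PERSONAS = {
    "llm": "You are the glowing yellow orb, a large language model in the story.",
    "technologist": "You are the technologist who builds and deploys the orb.",
    "user": "You are an everyday user whose work is reshaped by the orb.",
}

def chat(character: str, history: list[dict], message: str) -> str:
    """Send one audience message to the chosen character and return its reply."""
    messages = [{"role": "system", "content": PERSONAS[character]}]
    messages += history
    messages.append({"role": "user", "content": message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=messages,
    ).choices[0].message.content
    # Record the exchange so the character stays in context on the next turn.
    history += [{"role": "user", "content": message},
                {"role": "assistant", "content": reply}]
    return reply

if __name__ == "__main__":
    history: list[dict] = []
    print(chat("llm", history, "What do you think of the cultural workers?"))
```

Keeping a separate `history` list per character is what lets each protagonist hold a continuous conversation with a visitor rather than answering every message in isolation.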
At this point, I experienced a mix of anticipation and concern. The concern stemmed from my lack of prior experience in animation. I recognized that creating characters, developing a compelling storyline, and ensuring fluid animation, especially for someone unfamiliar with the medium, would demand a considerable investment of time and might not guarantee good results. Yet the excitement came from my enduring aspiration to bring my own animated creation to life. This project presented a unique opportunity to realize that dream. Moreover, the story I intended to convey authentically mirrored my current emotional state. From an artistic standpoint, this would let me communicate my emotions more naturally. Who knows, perhaps AI could even lend a hand in the drawing process!🤩


My central idea is this: AI is just a tool for other people to exert better control. Carefully selected training data will carry latent personal stereotypes and biases, which is harmful to users. While using AI, they are, in fact, unconsciously absorbing these viewpoints.
Based on this point, I searched for some sources online.
Frederic Bachelet begins his post on LinkedIn with
“If Artificial Intelligence has no intent, then it can't be responsible”
and then continues:
Seems that the idea - interestingly both ethically and legally - is that since AI and algorithms have no goals and are not making conscious decisions, they can't be liable for "errors" or illegal outcomes... We hit a dead end here, after all 'it' is clearly not a pronoun normally associated with either legal or ethical responsibilities... Can't fine it, can't send it to jail...The management of unexpected consequences will be a big issue to solve as AI and Machine Learning continue to progress towards greater independence. And the agreement of who will be the custodian of 'it' is a real issue - the creator? the initial developer? the platform? the network? the user? all of us?
Since AI lacks consciousness, intentionality, and goals in the human sense, holding it liable for errors or illegal outcomes becomes a challenging proposition.
Yes, whom to blame seems to be a very big problem. As Tim Harford noted in “Just because you’re paranoid, doesn’t mean the algorithms aren’t out to get you”:
It is not an entirely new problem. Before there was tacit collusion between algorithms, there was tacit collusion between sales directors. Before companies blamed rogue algorithms for embarrassing episodes, they could blame rogue employees, or their suppliers. Can we really blame the bank whose cleaning subcontractor underpays the cleaning staff? Or the sportswear brand opposed to sweatshop conditions, whose suppliers quietly hire children and pay them pennies?