Ian Cheng is an American artist known for creating “live simulations” - artworks built with video game engines and machine learning–inspired agents that evolve in real time. Cheng merges psychology, AI, and storytelling to explore how living systems adapt and change. His major works include the Emissaries trilogy (2015–17), BOB (Bag of Beliefs) (2018–19), and Life After BOB (2021–), all of which stage narratives inside unpredictable, ever-changing simulations. In this short research piece I will mainly talk about his work BOB (Bag of Beliefs).
Artist’s bio: https://iancheng.com/
• What type of machine learning models did the creator use?
Cheng has not gone into much technical detail about this work, but based on my research, he used agent-based systems with reinforcement learning–style logic. BOB is built as a kind of neural-network creature inside Unity, with “belief weights” that shift as it encounters situations. Instead of a generative GAN or a large language model, Cheng designed a custom decision-making model inspired by reinforcement learning and neural nets. He designed BOB with four mini-brains: the reptile brain (handles threat detection and fight/flight logic), the limbic brain (simulates slower desires like hunger, social needs, and play), the narrative brain (contains pre-scripted goals), and the memory brain (stores “memories” like metabolic state, environment context, and viewer presence every 10-15 seconds). So Cheng’s “ML” is less about deep neural networks or standard supervised learning, and more about architectures of competing decision systems, memory comparison, and situational adaptation.
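To make the idea of competing mini-brains concrete, here is a minimal Python sketch of winner-takes-control arbitration among modules. All class names, parameters, and numbers are my own illustrative assumptions, not Cheng's actual Unity implementation.

```python
# Hypothetical sketch of competing "mini-brains": each module scores its
# urgency for the current situation, and the highest-scoring module wins
# control of the creature for this tick. All values are assumptions.

class MiniBrain:
    def __init__(self, name):
        self.name = name

    def urgency(self, state):
        raise NotImplementedError

class ReptileBrain(MiniBrain):
    # Fast threat detection: fight/flight dominates whenever danger is near.
    def urgency(self, state):
        return 1.0 if state["threat_nearby"] else 0.0

class LimbicBrain(MiniBrain):
    # Slower drives like hunger, social need, and play.
    def urgency(self, state):
        return max(state["hunger"], state["social_need"]) * 0.8

class NarrativeBrain(MiniBrain):
    # Pre-scripted goals with modest, steady urgency.
    def urgency(self, state):
        return 0.3 if state["goal_active"] else 0.0

def arbitrate(brains, state):
    # Winner-takes-control: the most urgent module drives behavior.
    return max(brains, key=lambda b: b.urgency(state))

brains = [ReptileBrain("reptile"), LimbicBrain("limbic"), NarrativeBrain("narrative")]
state = {"threat_nearby": False, "hunger": 0.7, "social_need": 0.2, "goal_active": True}
print(arbitrate(brains, state).name)  # limbic: hunger (0.7 * 0.8) beats the scripted goal (0.3)
```

The point of the sketch is the architecture, not the numbers: no single module is "the mind"; control shifts as the situation changes (for example, a nearby threat would hand control to the reptile brain).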
• What data might have been used to train the machine learning model?
For BOB, the system scans ~25 parameters (e.g. body size, energy/metabolic state, environmental features, the viewer’s face) periodically and stores them as memory snapshots. Those snapshots, along with the current state, form the basis for comparing “then vs. now” to generate emotional judgments (i.e. positive/negative valence). The agents also exist in a simulation environment, so environmental stimuli (other agents, objects, threats, resource appearances) serve as “data” influencing activation among the mini-brains. The narrative brain is pre-scripted with narrative goals, so those narrative trajectories are encoded as structured data or goal states, but they can be overridden or interrupted by more immediate agent dynamics.
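The “then vs. now” comparison can be sketched in a few lines of Python. The parameter names and the scoring rule below are assumptions for illustration; the real system reportedly tracks ~25 parameters, not the two shown here.

```python
# Hypothetical sketch of the "memory brain": periodically snapshot the
# scanned parameters, then compare the current state against a stored
# snapshot to produce a crude positive/negative valence judgment.
# Parameter names and the scoring rule are illustrative assumptions.

def snapshot(state):
    # Store a copy of the scanned parameters as a memory entry.
    return dict(state)

def valence(current, memory):
    # "Then vs. now": rising energy and falling threat read as positive;
    # the reverse reads as negative.
    score = (current["energy"] - memory["energy"]) - (current["threat"] - memory["threat"])
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

memory = snapshot({"energy": 0.4, "threat": 0.2})
now = {"energy": 0.7, "threat": 0.1}
print(valence(now, memory))  # energy rose and threat fell, so "positive"
```

The key idea the sketch captures is that the “feeling” is relational: the same present state can be judged positive or negative depending on what the stored memory looks like.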
• Why did the creator of the project choose to use this machine learning model?
The collage of mini-brains (rather than a single unified model) is inspired by theories of the modular mind (e.g. Marvin Minsky’s Society of Mind) and by neuroscience: Cheng resisted trying to build one unified mind and instead lets modules compete for control. The memory brain lets BOB evaluate present states through historical comparison, enabling “feelings” that are always relational (present vs. past), reflecting how human emotion is often grounded in memory. Philosophically, he emphasizes the “Orient” step (how an AI builds internal models of the world, rather than just perceiving it) as underexplored, and he uses his systems to explore it.
Sources:
https://www.livingcontent.online/interviews/ian-cheng
https://www.moma.org/calendar/exhibitions/3656
https://plinth.uk.com/blogs/in-the-studio-with/ian-cheng-life-after-bob
I made a Pac-Man game that uses hand pose to control Pac-Man. Players try to eat 10 heart emojis while avoiding the ghost to win the game.