Project Link: https://refikanadol.com/works/unsupervised/
What type of machine learning models did the creator use?
Anadol’s installations lean on deep learning, notably convolutional neural networks for feature extraction, autoencoders for latent representation, and GANs for generative synthesis, often paired with embedding techniques like t‑SNE/UMAP to organize learned visual features.
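The pipeline described above can be sketched in miniature: a toy "feature extractor" maps images to vectors, and a PCA projection stands in for t-SNE/UMAP to lay the learned features out in 2-D. Everything here (the random linear extractor, the dimensions) is an illustrative assumption, not the artist's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "feature extractor": a fixed random linear map standing in for a
# trained CNN. A real installation would use learned convolutional features.
def extract_features(images, proj):
    flat = images.reshape(len(images), -1)  # flatten each image to a vector
    return flat @ proj                      # project into feature space

images = rng.random((50, 8, 8))             # 50 tiny 8x8 "images"
proj = rng.standard_normal((64, 16))        # 64 pixels -> 16-dim features
features = extract_features(images, proj)

# Organize features in 2-D. PCA via SVD is used here as a simple linear
# stand-in for the nonlinear t-SNE/UMAP embeddings mentioned above.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T             # 2-D layout of the archive

print(embedding.shape)  # (50, 2)
```

The 2-D `embedding` is the kind of map that lets similar images cluster together, which is what drives the "organized latent space" look of these works.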
What data might have been used to train the machine learning model?
Large, curated image archives of architecture, nature, and space imagery, sourced from public datasets and museum collections. These datasets are typically standardized and used in an unsupervised or loosely labeled manner to learn visual structure and style.
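Standardizing such an archive might look like the following sketch: images of varying sizes are resized (here with naive nearest-neighbor sampling) and scaled into a common [0, 1] range. The target size and normalization scheme are assumptions for illustration.

```python
import numpy as np

def standardize(image, size=32):
    """Resize a grayscale image to size x size via nearest-neighbor
    sampling, then scale pixel values into [0, 1]."""
    h, w = image.shape
    rows = np.arange(size) * h // size      # source row for each output row
    cols = np.arange(size) * w // size      # source column for each output column
    resized = image[rows][:, cols].astype(np.float64)
    lo, hi = resized.min(), resized.max()
    if hi == lo:                            # guard against constant images
        return np.zeros_like(resized)
    return (resized - lo) / (hi - lo)

rng = np.random.default_rng(1)
# A mock "archive" of differently sized images, standing in for real scans.
archive = [rng.integers(0, 256, (h, w)) for h, w in [(40, 60), (100, 80), (32, 32)]]
dataset = np.stack([standardize(img) for img in archive])
print(dataset.shape)  # (3, 32, 32)
```

After this step every item has the same shape and value range, which is what lets an unsupervised model train on a heterogeneous collection.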
Why did the creator of the project choose to use this machine learning model?
CNNs capture rich perceptual patterns, autoencoders reveal and manipulate latent structure, and GANs generate novel yet coherent visuals, a combination well suited to immersive, evolving aesthetics. This stack supports high-resolution, responsive output that fits interactive art installations, where expressiveness and performance are both crucial.
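Why latent structure matters for "evolving aesthetics" can be seen in a toy latent-space walk: interpolating between two latent codes and decoding each step yields smoothly changing frames rather than abrupt jumps. The linear "generator" below is a hypothetical stand-in for a trained GAN generator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical generator: a fixed linear decoder from a 16-dim latent
# space to an 8x8 "image"; a real GAN generator would be a deep network.
decoder = rng.standard_normal((16, 64))

def generate(z):
    return np.tanh(z @ decoder).reshape(8, 8)

z_a, z_b = rng.standard_normal(16), rng.standard_normal(16)

# Walk the latent space: each frame's code differs only slightly from the
# last, so the decoded visuals evolve gradually over the sequence.
frames = [generate((1 - t) * z_a + t * z_b) for t in np.linspace(0, 1, 10)]
diffs = [np.abs(frames[i + 1] - frames[i]).mean() for i in range(9)]
print(len(frames), frames[0].shape)
```

The small mean difference between consecutive `frames` is the mechanism behind the slow, continuous morphing that these installations display.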
P5 Sketch: https://editor.p5js.org/my3037/full/_iCODyFGa