<aside> 📍
You are not lost.
You have arrived at a sub-page in the AIxDESIGN Archive.
This is a public repository – our way of working in the open and sharing as we go.
Have fun!
</aside>
Open Culture Tech is an initiative by Thunderboom Records and the Netherlands Institute for Sound & Vision that makes immersive technology more accessible to artists through residencies, open-source tools, and educational showcases. As part of OCT 2.0, AIxDESIGN partnered to host a series of workshops exploring how AI shows up in music production—both its creative potential and challenges around agency, equity, and sustainability.
In October 2025, we hosted Barcelona-based musician, educator, and live coder Lina Bautista for a hands-on workshop exploring variational audio autoencoders at DOOR Open Space in Amsterdam.
Lina is a musician, artist, educator, and developer who combines modular synths, DIY electronics, and computers to make music and engage audiences with sound technologies. She's part of the Toplap Barcelona and Axolot collectives, and teaches at universities in Barcelona. Her approach treats AI as a craft technique—designing tools for expression rather than replacement.
This philosophy shaped the entire workshop: rather than treating AI as a black box that generates finished tracks from text prompts, Lina guided participants through the inner workings of neural audio synthesis, showing how to manipulate sound at a fundamental level.


Workshop announcement → https://www.instagram.com/p/DPluNnZjLl8/?img_index=1
Event RSVP → https://www.dooropenspace.com/program/gwm-rf6tz
Lina began by contextualizing RAVE within the history of AI music generation, walking participants through different deep learning architectures—from the sequential processing of recurrent neural networks to the attention mechanisms of transformers and the noise-reduction approach of diffusion models. This grounding helped everyone understand not just what RAVE does, but where it fits in the evolving landscape of AI-assisted music creation.
<aside>
Recurrent Neural Networks (RNNs): Designed for processing sequential data where order matters—like melodies unfolding over time. DeepBach (2016) used this approach to generate convincing four-part chorales in the style of J.S. Bach.
Transformers: The "attention is all you need" architecture that powers much of modern AI. In music, this means tools like MusicGen (Meta) and MusicLM (Google)—the text-to-music generators you've probably encountered.
Diffusion Models: Starting from noise and gradually refining it into structured audio. Examples include Harmonai's Dance Diffusion and Riffusion, which applies image diffusion techniques to spectrograms.
Variational Autoencoders (VAEs): This is where RAVE lives. Instead of generating from text prompts or refining noise, VAEs learn to compress audio into a compact "latent space" of manipulable variables, then reconstruct it back into sound.
</aside>
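To make the VAE idea concrete, here is a toy sketch in plain Python—not RAVE's actual implementation, and with placeholder arithmetic instead of learned neural network weights. An "encoder" summarizes an audio frame as a mean and spread in a small latent space, a point is sampled from that distribution (the "variational" step), and a "decoder" maps the point back to audio:

```python
import random

def encode(frame):
    """Toy 'encoder': summarize an audio frame (a list of samples) as a
    2-D latent distribution (means, spreads). A real VAE learns this
    mapping with neural network layers."""
    level = sum(frame) / len(frame)                  # rough loudness offset
    energy = sum(s * s for s in frame) / len(frame)  # rough signal energy
    mu = [level, energy]      # latent means
    sigma = [0.01, 0.01]      # latent spreads (placeholder values)
    return mu, sigma

def sample(mu, sigma):
    """Sample a latent point z ~ N(mu, sigma) -- the 'variational' step."""
    return [m + s * random.gauss(0, 1) for m, s in zip(mu, sigma)]

def decode(z, length=4):
    """Toy 'decoder': turn a latent point back into an audio frame.
    A real decoder is a learned network that reconstructs a waveform."""
    level, energy = z
    return [level] * length   # placeholder reconstruction

frame = [0.1, -0.2, 0.3, -0.1]
mu, sigma = encode(frame)
z = sample(mu, sigma)
reconstruction = decode(z)
```

The key structural point survives even in this toy: everything the decoder can produce is controlled by a handful of latent numbers, which is what makes the space navigable.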

From https://cartography-of-generative-ai.net/

At the heart of the workshop was RAVE (Realtime Audio Variational autoEncoder), an open-source tool developed at IRCAM.
RAVE encodes audio into latent space—think of it as a compressed mathematical representation of sound's essential features. You can then navigate this space, morphing between different sonic characteristics, discovering sounds that don't exist in the training data. It's not about convenience or automation; it's about creative exploration.
This latent space is where the magic happens—it's like a secret map of sound's hidden dimensions. By navigating these dimensions, you can go beyond simple timbre transfer or voice cloning to discover entirely new sonic possibilities.
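Navigating the latent space can be as simple as interpolating between two latent vectors: encode sound A, encode sound B, blend their latent coordinates, and decode the blend. A minimal sketch of the blending step in plain Python—in an actual RAVE workflow the vectors would come from the model's encoder, and the names and values here are purely illustrative:

```python
def lerp(z_a, z_b, t):
    """Linearly interpolate between two latent vectors.
    t=0.0 returns z_a, t=1.0 returns z_b, values in between morph."""
    return [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]

# Illustrative latent vectors, e.g. one encoded from a violin clip,
# one from a voice clip (made-up numbers, not real model output)
z_violin = [0.8, -0.2, 0.5, 0.1]
z_voice = [-0.3, 0.6, 0.0, 0.9]

# Sweep through intermediate points; decoding each one would yield
# a hybrid timbre partway between the two sources
path = [lerp(z_violin, z_voice, t / 4) for t in range(5)]
```

Each point along `path` is a valid latent coordinate, including ones no training example ever occupied—which is where the "sounds that don't exist in the training data" come from.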