A set of large language models that I can co-write with, as well as other models for co-creation, all within the limits of Heart of Gold v2. Includes retraining, some sort of interface that isn’t too janky to use, and enough optimizations to make this work smoothly.

Tools so far:

  1. Stable Diffusion from lstein’s branch. Comes with a bunch of other models for upscaling and reworking faces: essentially mimics the Discord experience of Midjourney or DreamStudio.
  2. KoboldAI; various GPT-Neo models. It does very much what I set out to do, except far more competently; now all I have to handle is model training.
  3. Visions of Chaos. Mostly for art of a fractal nature. I’m surprised this isn’t more popular.
  4. Geopattern and Korpus. Simple, but interesting - I may find a use for them somewhere.

General idea for some of the workflow: https://blog.replit.com/ai

Worth keeping in mind: this note from AI Artists. Of particular interest is this bit: Researcher and professor Margaret Boden estimates that “95% of what professional artists and scientists do is exploratory. Perhaps the other 5% is truly transformational creativity.” Generative systems are helping to explore much broader ground faster than ever before.

“Generative art is the ceding of control by the artist to an autonomous system,” explains Cecilia Di Chio from the book Applications of Evolutionary Computation.

“With the inclusion of such systems as symmetry, pattern, and tiling one can view generative art as being old as art itself. This view of generative art also includes 20th century chance procedures as used by Cage, Burroughs, Ellsworth, Duchamp, and others.”

My general thoughts on AI Art: https://alserkal.online/words/machining-the-dream. See also the story I worked on for Google Research and the associated paper.



Text and image generation is finally up, if not fully tuned yet. At some point I may/should pick up Processing: it seems more fruitful than, say, writing galaxy generators in R. But we’ll take this as it goes; I don’t want to get bogged down in the ‘shoulds’ and lose the fun of it.

Success! Kobold is running, and it looks like we’re capped at GPT-Neo 2.7B versions. The GPU returns answers in seconds and stays at a nice 40 Celsius throughout. Good; I can finally sleep.
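A back-of-the-envelope way to see why the cap sits roughly where it does: just holding the weights of a model in VRAM costs parameters × bytes-per-parameter, before activations or framework overhead. The card size and precision here are my assumptions for illustration, not measurements of Heart of Gold v2:

```python
def weight_vram_gib(n_params_billions: float, bytes_per_param: float) -> float:
    """Rough GiB of VRAM for model weights alone.

    Ignores activations, KV cache, and framework overhead, so real
    usage is higher; still useful as a lower bound.
    """
    return n_params_billions * 1e9 * bytes_per_param / 2**30

# Assuming a consumer card in the 8 GB range (an assumption):
# GPT-Neo 2.7B at fp16 (2 bytes/param) needs ~5 GiB -> fits.
print(round(weight_vram_gib(2.7, 2), 1))   # ~5.0 GiB
# GPT-J 6B at fp16 needs ~11 GiB -> does not fit, which is why
# the 8-bit (1 byte/param, ~5.6 GiB) variants exist at all.
print(round(weight_vram_gib(6.0, 2), 1))   # ~11.2 GiB
print(round(weight_vram_gib(6.0, 1), 1))   # ~5.6 GiB
```

This is only the weights; generation adds activation and cache memory on top, so even a model that "fits" on paper can OOM in practice.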



I’ve spent two days now trying to get the hivemind/gpt-J 8-bit model running on Heart of Gold v2. So far, complete failure, but I have:

  1. Learned a hell of a lot more about how these types of LLMs work