Project Name

1001 Nights

Brief Introduction

1. What type of machine learning models did the creator use?

The creator of 1001 Nights used large language models (LLMs) to generate narrative text in response to the player’s input.

In later versions, the project also incorporated text-to-image generative models, adapted with a LoRA-style fine-tune, to visualize objects or scenes mentioned in the story.

Together, these models formed the basis of the AI-native game mechanics: transforming language into reality.
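
To make that mechanic concrete, here is a minimal, hypothetical sketch of the loop in Python: the player's line is sent to an LLM to continue the narrative, and a simple keyword pass looks for an object that could be handed off to the image model. The OpenAI-style chat API, model name, system prompt, and extraction rule are illustrative assumptions, not the game's actual implementation.

```python
# Hypothetical sketch of the loop described above. The chat API, model name,
# system prompt, and keyword-based object extraction are illustrative
# assumptions, not the game's actual implementation.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment


def continue_story(player_line: str, history: list[dict]) -> str:
    """Send the player's line to an LLM and return the continued narrative."""
    messages = [
        {"role": "system",
         "content": "You are co-telling a story with the player. Continue it vividly."},
        *history,
        {"role": "user", "content": player_line},
    ]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return reply.choices[0].message.content


def extract_object(story_text: str, lexicon: set[str]) -> str | None:
    """Naive keyword match: return the first visualizable object mentioned."""
    for word in story_text.lower().split():
        if (token := word.strip(".,!?;:")) in lexicon:
            return token
    return None
```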

2. What data might have been used to train the machine learning model?

The underlying LLMs were trained on a broad corpus of internet text, books, stories, and dialogue data, which allowed them to generate coherent, stylistically varied narratives.

The image generation models were trained on large datasets of image–text pairs, that is, images matched with descriptive captions. Importantly, the game designers did not train these base models from scratch; instead, they built on top of existing general-purpose pretrained models released by research labs and companies, adding their own style fine-tuning on top (see the quote below).

Quote from the Steam page:

Technically, we used the open-source CheeseDaddy and revAnimated libraries as a foundation, combined with a licensed pixelization algorithm. We used a proprietary model (a Lora-style model) trained entirely by our in-house artists for final image generation. All training images were sourced from in-game CG and authorized artwork from our artists. This ensured that the game only generated images that adhered to our original "1001 Nights" art style and did not resemble any artwork created outside of our team, ensuring the security and stability of the data source and generated content.
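
The pipeline described in that quote can be illustrated with a short, hypothetical sketch using the diffusers library: a general-purpose Stable Diffusion checkpoint serves as the base, and a small LoRA adapter trained on the team's own artwork steers generations toward the game's art style. The file names, prompt, and parameters below are placeholders, and the licensed pixelization post-process mentioned in the quote is omitted.

```python
# Illustrative sketch of the image pipeline described in the quote, using the
# diffusers library. File names, prompt, and parameters are placeholders; the
# licensed pixelization post-process mentioned in the quote is omitted.
import torch
from diffusers import StableDiffusionPipeline

# Load a general-purpose Stable Diffusion checkpoint as the base model.
pipe = StableDiffusionPipeline.from_single_file(
    "revAnimated_base_checkpoint.safetensors",  # placeholder path
    torch_dtype=torch.float16,
).to("cuda")

# Attach the team's in-house LoRA adapter so outputs follow the game's style.
pipe.load_lora_weights("nights1001_style_lora.safetensors")  # placeholder path

image = pipe(
    "an ornate curved sword, 1001 Nights art style",
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("generated_sword.png")
```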

3. Why did the creator of the project choose to use this machine learning model?