This week: what impact DALL-E 2 has on us and the world, how text input turns into user interfaces with AI, generating an SVG grid using HTML, and trying to spend Bill Gates’ money :D
Welcome to our first issue of the summer! The Interaction Nerds newsletter is back with two different types of issues: Archive & Collection and Investigation & Connection. This week, we’re super excited to present to you the very first Archive & Collection newsletter, which focuses on AI-generated art and its impact on HCI! Hope you enjoy :)
Many of you may have heard of DALL-E 2 already: a transformer-based model developed by OpenAI to generate digital images from natural-language descriptions. For a quick example, let’s say you type in the phrase:
An astronaut riding a horse in a photorealistic style
The AI will generate images matching that description within a few seconds.
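To make the flow concrete, here is a minimal sketch of what such a text-to-image request might look like in code. The field names (`prompt`, `n`, `size`) mirror OpenAI’s public image-generation API, but treat the whole thing as an illustrative assumption: this sketch only builds the request payload and doesn’t send anything (a real call would need an API key).

```python
import json

def build_image_request(prompt, n=1, size="1024x1024"):
    """Assemble a JSON payload for a hypothetical text-to-image API call.

    Field names are modeled on OpenAI's image-generation endpoint;
    this is a sketch of the request shape, not a working client.
    """
    return {"prompt": prompt, "n": n, "size": size}

payload = build_image_request(
    "An astronaut riding a horse in a photorealistic style"
)
print(json.dumps(payload, indent=2))
```

The model then returns one or more generated images for that single prompt string, which is what makes the interaction feel so immediate.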
Check out here if you wish to play around with some pre-generated content.
Similar to DALL-E 2, Midjourney and 6pen Art also let you create images from text input alone.
This blog from UCB takes a deeper dive into how DALL-E 2 functions. In short, it’s a process of understanding the input with an autoencoder → generating output with a transformer → connecting correlations using lossless compression.
How is it so good? (DALL-E Explained Pt. 2)
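The three-stage pipeline summarized above can be sketched as a simple function composition. To be clear, the functions below are placeholder stand-ins to show the shape of the flow, not DALL-E’s actual implementation:

```python
def understand_input(prompt):
    # Stage 1 (stand-in for the autoencoder step): compress the
    # text prompt into a latent representation.
    return {"latent": prompt.lower()}

def generate_output(latent):
    # Stage 2 (stand-in for the transformer step): decode the
    # latent into candidate output tokens.
    return {"tokens": latent["latent"].split()}

def connect_correlations(tokens):
    # Stage 3 (stand-in for the lossless-compression step): tie
    # correlated tokens together into a final output.
    return {"image": "-".join(tokens["tokens"])}

result = connect_correlations(
    generate_output(understand_input("An astronaut riding a horse"))
)
print(result["image"])  # prints "an-astronaut-riding-a-horse"
```

The point is just the chaining: each stage consumes the previous stage’s representation, which is why the blog describes the system as a pipeline rather than a single monolithic model.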
No human artist or firm could process requests at this scale: working 24/7, continuously updating, revising, and expanding on ideas from clients.