[**Creating User Interface Mock-ups from High-Level Text Descriptions with Deep-Learning Models**](https://arxiv.org/pdf/2110.07775.pdf)

Authors:

Forrest Huang, Gang Li, Xin Zhou, John F. Canny, Yang Li

Institutions: UC Berkeley, Google Research

Summary:

Have you ever been overwhelmed by the sheer number of possible forms even the simplest screen can take? Have you ever wished you could see all the possibilities before investing the time and effort into creating a prototype?

In this paper, researchers created and compared three different types of deep learning models that generate user interfaces straight from text input. Based on feedback from UI/UX practitioners, they theorized that tools using these models could help designers quickly create a diverse array of lo-fi prototypes to explain different design choices to clients.

Figure 2 (pg. 5) from the paper shows the three different types of models.

After creating these models, the researchers evaluated them against multiple criteria and then interviewed real UI/UX practitioners to see how they would (or why they wouldn’t) use a tool like this in their own practice. The researchers found that each model had pros and cons that could complement different stages of the design process.

How could I use this?

Let’s take a look at how this would work in action! Say you want to create a screen with a list of topics but have no idea how to get started. You could just give a text prompt to any one of these models and get a different kind of design from each! (In the paper, each model was given the prompt: “screen displaying list of topics under pocket physics.”)

Text-only Retriever
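
To make the retrieval idea concrete, here’s a minimal sketch of how a text-only retriever could work: embed a text description of every screen in a corpus, embed the prompt the same way, and return the closest matches. This is just an illustration, not the paper’s implementation: the TF-IDF embedding and the tiny three-screen corpus below are stand-ins for a learned text encoder and a real UI dataset.

```python
# Minimal sketch of a text-only UI retriever (not the paper's model).
# TF-IDF stands in for a learned text encoder; the toy corpus stands in
# for a real dataset of UI screens paired with text descriptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: each entry pairs a screen ID with its description.
screens = {
    "screen_001": "settings page with toggle switches and a search bar",
    "screen_002": "list of physics topics with section headers",
    "screen_003": "login form with email and password fields",
}

ids = list(screens)
vectorizer = TfidfVectorizer()
corpus_vectors = vectorizer.fit_transform(screens.values())

def retrieve(prompt: str, k: int = 2) -> list[str]:
    """Return the k screen IDs whose descriptions best match the prompt."""
    query_vector = vectorizer.transform([prompt])
    scores = cosine_similarity(query_vector, corpus_vectors)[0]
    ranked = sorted(zip(ids, scores), key=lambda pair: pair[1], reverse=True)
    return [screen_id for screen_id, _ in ranked[:k]]

# The example prompt used in the paper:
print(retrieve("screen displaying list of topics under pocket physics"))
```

Note that a retriever can only surface designs that already exist in its corpus, which is one reason the paper also explores models that generate new layouts rather than look them up.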

Because of these differences, the UI/UX practitioners who were interviewed preferred different models depending on the prompt. Many of them also noted that they would prefer to use different models at different stages of their process. It’s interesting to see not only how AI can generate UIs, but also how different models can serve different steps in a designer’s workflow.

Another important point to highlight: the study emphasized that these deep-learning models are meant to help designers work more efficiently by cutting down the time spent creating lo-fi prototypes, not to generate finalized UIs that obviate the need for designers. In this sense, the paper not only focuses on creating tools for UX purposes but also shows the importance of HCI in designing AI-based art and design tools themselves.

Figure 6 (pg. 17) from the paper shows 15 UI/UX practitioners’ preferences on UIs generated by the different models.