<aside> 🌟 Dr. Sherry Tongshuang Wu is an Assistant Professor in the HCII. Her research focuses on human-AI interaction, at the intersection of HCI and Natural Language Processing. Her work centers on helping users debug AI and on building more practical AI systems that humans can benefit from!

</aside>


šŸžĀ Could you tell us a little about who you are and what you do?

I'm Sherry Wu, and I'm a new assistant professor in the HCII. I finished my PhD at the University of Washington and joined the HCII in September. My research sits at the intersection of Human-Computer Interaction and Natural Language Processing, and my overall vision is to help build more practical AI systems that humans can actually benefit from. AI is on a really fast track towards becoming more and more advanced, but even with this fast progress, machine learning models are built to fit some training data and generalize from it, so the models will never be 100% accurate. They will always make some mistakes. So the question is: how do we help people identify those mistakes and mitigate whatever problems an incorrect model might cause? The way I do that is to work with a lot of users who interact with models, either experts who are designing the models or end users who might not know a lot about them. When working with experts, it's mostly about analyzing where their models might be buggy and identifying how they might train the next version of the model. For end users, we try to help them recognize how a model might be wrong in a particular way and how they can use the model slightly differently to get more benefit from it.

šŸžĀ Your research is focused on supporting interactions between humans and imperfect AI. What is imperfect AI?

Imperfect AI basically means the model performs the task you want it to perform, but in a way that's erroneous. It might perform some of the tasks correctly and others incorrectly. This could be as simple as sentiment analysis: if you say "I really like this movie", your model correctly says this is a very positive sentence. But if you say "I love waiting 2 hours to get into the restaurant", your model may still categorize the sentence as positive even though the sarcasm makes it negative. As humans, we understand sarcasm, but the model might call it a positive sentence because it contains positive words. That is a fairly harmless mistake, but sometimes models make more harmful ones. Google Photos might recognize your friend's face as some other person, or as an animal, and in those cases it's usually more frustrating. People may also want to use AI in medical domains to help doctors read scans or make diagnoses. If the model says you don't have some disease when you actually have it, that can delay your recovery by a lot, and that is also something we want to avoid. So by imperfect, I mean the model is wrong, sometimes in harmful ways.
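
(As a rough illustration of the sarcasm example above, the sketch below runs the two sentences through an off-the-shelf sentiment classifier from the Hugging Face `transformers` library. The specific default model and its predictions may vary, so the failure described in the comments is hypothetical, not a reproduction of a particular result.)

```python
# A minimal sketch of the sarcasm failure mode described above.
# Assumes the `transformers` library is installed; the default sentiment
# model and its outputs may differ from what the comments suggest.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

examples = [
    "I really like this movie",                            # genuinely positive
    "I love waiting 2 hours to get into the restaurant",   # sarcastic, actually negative
]

for text in examples:
    result = classifier(text)[0]
    # A surface-level model may label both sentences POSITIVE simply
    # because both contain positive words ("like", "love").
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")
```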

šŸžĀ What does imperfect AI look like in Natural Language Processing?

Natural language processing is a really large umbrella that covers any AI dealing with text or language, and that's pretty much everywhere. For example, in Gmail there is a built-in classifier deciding whether an email is genuinely important or spam. An imperfect AI might put a very important email into your spam folder, causing you to miss important meetings or interviews.

šŸžĀ What does interactively debugging and correcting AI look like?

There are two ways that debugging can work, depending on who is interacting with the model. People who are developing and iterating on the model might have a very particular data set that they want to evaluate their models on. Maybe it's a dedicated data set that only contains really tricky spam emails, or very tricky important emails that you want to make sure end up in your main folder. They will run the model on this set, and maybe 60 of the examples come out correct. In that case, the developers go in and look at each of the incorrect examples to get a sense of how the model is wrong, and then they decide how the next model should emphasize certain behaviors or not. This is the case where you have access to, and knowledge of, the model.
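
(To make that developer-side workflow concrete, here is a toy sketch of evaluating a classifier on a dedicated "tricky" test set and collecting the misclassified examples for manual inspection. The data, labels, and model are made-up stand-ins, not anything from Prof. Wu's projects.)

```python
# A toy sketch of the expert debugging loop: evaluate a model on a
# dedicated test set, then pull out the misclassified examples to study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = ["win a free prize now", "meeting at 3pm tomorrow",
               "claim your reward", "lunch with the review committee"]
train_labels = ["spam", "important", "spam", "important"]

# The dedicated evaluation set: deliberately tricky cases.
test_texts = ["you won the best paper award",         # looks spammy, is important
              "free pizza at the faculty meeting"]     # mixes both vocabularies
test_labels = ["important", "important"]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(train_texts), train_labels)

predictions = model.predict(vectorizer.transform(test_texts))
errors = [(text, pred, gold)
          for text, pred, gold in zip(test_texts, predictions, test_labels)
          if pred != gold]

# Developers would read through these errors to decide what the next
# version of the model needs to handle better.
for text, pred, gold in errors:
    print(f"misclassified: {text!r} predicted={pred} expected={gold}")
```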

End users, on the other hand, usually only have access to an API for the model, not to its internal structure, and they may not know how these models are trained. These users tend to debug in a slightly more passive way. If I'm using Google Translate and it keeps getting some words wrong, then the next time I'm translating something that uses those words, I'll have a mental model of when I cannot trust the system. This kind of debugging results in changes in your own behavior.

šŸžĀ Will we ever get to the point where we donā€™t have to adjust our own behaviors to fit AI models?

A lot of models are trying to take in and process human feedback. If you give an instruction and the model reacts in one way, we want to be able to say, "this is such a bad response! You should never respond this way. Instead, you should respond in this other way." You can penalize your model for making that wrong decision and tune it toward the correct response. This is meant to make the model work better with humans. It's going to take a long time to get there, partly because it's technically more difficult, but also because humans are a bit unpredictable. Different people can give very different instructions because of how they tend to talk. If you talk to Siri, you don't talk the same way you would talk to another person; you use very short and clear commands. That is an implicit behavioral change you make because you observe Siri's behavior. If you realize your model reacts to some kinds of commands better than others, you naturally adjust yourself in that direction. Our job is to help people become more aware of this.
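
(One common way "this response is better than that one" feedback becomes a training signal is a pairwise preference loss, sketched below in PyTorch. This is a generic illustration of penalizing a model for preferring the wrong response, not a description of any specific system from the interview.)

```python
# A generic sketch of turning pairwise human preferences into a training
# signal (Bradley-Terry style). Purely illustrative toy numbers.
import torch

def preference_loss(reward_preferred: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    # The model is rewarded when it scores the human-preferred response
    # higher than the rejected one, and penalized otherwise.
    return -torch.nn.functional.logsigmoid(reward_preferred - reward_rejected).mean()

# Hypothetical scalar scores a reward model assigned to two pairs of responses.
good = torch.tensor([1.2, 0.3])    # scores for responses humans preferred
bad = torch.tensor([0.9, 0.8])     # scores for responses humans rejected
print(preference_loss(good, bad))  # lower loss means the model agrees with the human
```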

šŸžĀ How can people be made more aware of imperfect AI and how to handle such?

One way is to give people a lot of examples. You can show examples of working cases: if you submit this kind of instruction to the model, it will react correctly. By observing these examples, you can see patterns in what kind of instruction works better. Another, really natural way is giving people a mental picture of how they should think of the AI. OpenAI, for example, will say you need to explain your task to the AI as if you're explaining it to an average middle school student. That changes people's language compared to how they would speak to a 5 year old or a college student. There is also the case where you ask the model to work on really complex problems. Sometimes the model becomes confused because you have multiple tasks going on at the same time, and it doesn't really know which task is the most important or how to combine them. A natural way to fix that is decomposition: you divide your task into a lot of sub-tasks and ask the model to do them individually.
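
(Here is a small sketch of that decomposition idea. The `ask_model` helper is a hypothetical stand-in for whatever text-generation API you use; the point is only the structure of splitting one complex request into simpler sub-prompts.)

```python
# A sketch of prompt decomposition: instead of one complex instruction,
# the task is split into sub-tasks the model handles one at a time.
def ask_model(prompt: str) -> str:
    # Hypothetical stand-in: swap in a real text-generation API call here.
    return f"<model response to: {prompt[:40]}...>"

def summarize_and_translate(document: str) -> str:
    # A single combined prompt ("summarize this, translate the summary,
    # and list the key points") can confuse the model about which
    # sub-task matters most. Decomposed, each step stays simple.
    summary = ask_model(f"Summarize the following text in two sentences:\n{document}")
    translation = ask_model(f"Translate this summary into French:\n{summary}")
    return translation
```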

šŸžĀ Considering these obstacles to the use of AI, what are your initial thoughts on using AI in design? Do you have any experience with projects that have used AI when designing?

When a model is generating things, it can create an illusion of creation, but in reality it might have just memorized certain things. This raises the question of whether the model is actually designing, or just mixing things that already exist. That is a general ethical consideration in this domain. A lot of experts and designers who work in this field value very particular things that the model is not trained to capture. The model only knows how to generate an image from your prompt; it doesn't really have creativity of its own.

So I can definitely see it causing negative feelings in communities that are more concerned about authorship. For people who are just trying to test out their ideas, or to get a first taste of what a design they can only describe in words would look like, I think it's still beneficial that you can iterate quickly. I sometimes use AI for prototyping systems. If you want to build a big application or AI-infused software that has a very particular AI component, you would normally have to collect a large amount of data and train an actual model, which can be very expensive. With an off-the-shelf AI model, you can get a sense of whether your design is going to work and what mistakes the model could make. In those cases, I think AI is generally agreed to be quite helpful. For people who don't have as many requirements around creativity or authorship, the model can also help with ideation: you can get a lot of interesting ideas and draw pictures you may never have thought about. If that is your goal, I think AI is a good design tool. Using AI in design is very goal driven.
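
(A common way to prototype the AI piece of an application without training anything yourself is to let a pre-trained, zero-shot model stand in for the eventual custom component, as sketched below. This is a general pattern, not necessarily how Prof. Wu's own projects are built; the ticket-routing task and labels are hypothetical.)

```python
# A sketch of prototyping an AI-infused app without training a custom model:
# a pre-trained zero-shot classifier stands in for the component you would
# eventually build, so you can test the overall design and observe what
# kinds of mistakes to expect.
from transformers import pipeline

placeholder_model = pipeline("zero-shot-classification")

def route_support_ticket(ticket_text: str) -> str:
    # In the finished product this might be a custom-trained classifier;
    # for the prototype, a zero-shot model is good enough to explore the
    # interaction design and surface likely failure modes.
    labels = ["billing", "technical issue", "feature request"]
    result = placeholder_model(ticket_text, candidate_labels=labels)
    return result["labels"][0]
```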

<aside> 💡 Learn more about debugging AI and the rest of Professor Wu's work on her website!

</aside>