Overview
For design research to be valuable, it needs to be directly actionable and intelligible to the product side of the company. For us, that realistically means the insights must directly impact what we model, what data we train on, or what outputs the model produces. The insights themselves can be as nuanced or niche as needed to accurately describe a situation or a particular mental model.
Measurement
We want concrete timelines for delivering these insights, ideally with dates and quantities per deliverable type. If possible, the results should be entered into a Notion database so that our team can sort and comment on each precise example. We could also, for example, take the most recent 150 prompt examples, evaluate our model against them, and produce a report.
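To make the report idea concrete, here is a minimal sketch of what an evaluation pass over the most recent 150 prompt examples could produce. It assumes we export prompts from research notes and call the model behind a simple wrapper; `fetch_recent_prompts` and `generate_image` are placeholders for whatever export and model endpoint we actually use, not existing functions, and the CSV columns are only a suggestion.

```python
# Hypothetical sketch: evaluate the model on recent prompt examples and
# write a simple report the team can sort and comment on.
import csv
from datetime import datetime, timezone


def fetch_recent_prompts(limit: int = 150) -> list[str]:
    """Placeholder: pull the most recent user prompt examples from research notes."""
    return ["a pastel onboarding illustration", "dark-mode icon set, rounded corners"]


def generate_image(prompt: str) -> str:
    """Placeholder: call the model and return a path/URL to the generated output."""
    return f"outputs/{abs(hash(prompt)) % 10_000}.png"


def build_report(path: str = "prompt_eval_report.csv") -> None:
    prompts = fetch_recent_prompts(limit=150)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "prompt", "model_output", "reviewer_notes"])
        for prompt in prompts:
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),
                prompt,
                generate_image(prompt),
                "",  # left blank for the team to fill in while reviewing
            ])


if __name__ == "__main__":
    build_report()
```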
We want to see direct, tangible examples of realistic prompts that our users are likely to enter into our tool.
We want to see precisely what kinds of images someone would upload as a style embedding, and what impact or effect they would like that image to have on the output.
What are some precise examples of “batch edit” or “design-system-wide” commands that someone would want to have?
Provide a concrete list of “brushes” that a user could or would want to use. Examples might include a “gradient eyedropper”, “style copy/paste”, or a “set completion brush”. Evidence supporting each brush idea would be useful (e.g. a reel of Dovetail clips).
Before/after examples of conversational editing. In other words, provide a triplet of input image, edit command, and output image.
Before/after examples of mask-based editing. Again, provide a triplet of input image, edit command, and output image, ideally noting the masked region the edit applies to.
Examples of set completion queries or commands that users are likely to want to input into our system. In other words, provide a set of vectors, a “command” (prompt, embed, etc.), and a sample output (see the sketch after this list for one possible record format).
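For the before/after and set-completion items above, here is a minimal sketch of how each example could be recorded so it imports cleanly into a sortable database. The field names and dataclasses are assumptions, not a finalized schema; paths, commands, and the example values are purely illustrative.

```python
# Hypothetical record formats for the editing and set-completion deliverables.
from dataclasses import dataclass, asdict
from typing import Optional


@dataclass
class EditTriplet:
    """One conversational or mask-based editing example."""
    input_image: str                    # path or URL to the source image
    edit_command: str                   # the natural-language edit the user issued
    output_image: str                   # path or URL to the edited result
    mask_image: Optional[str] = None    # only for mask-based examples
    source_clip: Optional[str] = None   # e.g. a Dovetail clip supporting the example


@dataclass
class SetCompletionExample:
    """One set-completion query: existing vectors plus a command and a sample output."""
    input_vectors: list[str]            # paths or URLs to the existing assets in the set
    command: str                        # prompt, embed reference, etc.
    sample_output: str                  # path or URL to the completed asset


# Illustrative row, ready to dump as a dict into a CSV or database import:
example = EditTriplet(
    input_image="assets/hero_v1.png",
    edit_command="make the background warmer and remove the drop shadow",
    output_image="assets/hero_v1_edited.png",
)
print(asdict(example))
```

One row per example would let the team sort by deliverable type and attach comments or supporting clips directly to each record.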