Dashcams are cameras that can be mounted at the rearview mirror of a car. One of their main purposes is to provide evidence in case of an accident. However, that also means the camera has to record constantly, which requires a lot of storage. For this reason, dashcams continuously overwrite old material once the storage is full. Drivers usually only care about a few crucial seconds, and they have to make sure manually that these are not overwritten.

Based on this problem statement, we developed a concept for the upcoming Yi Smart Dash Camera that solves the problem and turns it into a comprehensive experience.

Using an acceleration sensor, the camera detects a sudden change in speed (e.g., a car bump) and records it as a timestamp. The connected phone saves this timestamp together with additional seconds before and after the event to ensure holistic coverage of the situation that led to the abrupt stop. Additionally, meta-information such as location, date, and time is stored, and the result is visualized as a clip on an integrated map component.
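
The exact detection logic is not described here; as a rough illustration, a minimal Kotlin sketch of the idea might look like the following, assuming a simple acceleration threshold and fixed pre/post padding (all names and values are hypothetical):

```kotlin
import java.time.Instant

// Hypothetical threshold and padding values -- the real product values are not public.
const val IMPACT_THRESHOLD_G = 2.5   // sudden deceleration, in g
const val PRE_SECONDS = 10L          // seconds kept before the event
const val POST_SECONDS = 20L         // seconds kept after the event

data class ClipMetadata(val latitude: Double, val longitude: Double, val recordedAt: Instant)

data class EventClip(val start: Instant, val end: Instant, val metadata: ClipMetadata)

/**
 * Turns a single accelerometer reading into a protected clip window
 * when the measured force exceeds the impact threshold.
 */
fun detectEvent(accelerationG: Double, now: Instant, metadata: ClipMetadata): EventClip? {
    if (accelerationG < IMPACT_THRESHOLD_G) return null
    return EventClip(
        start = now.minusSeconds(PRE_SECONDS),  // pre-roll: context before the bump
        end = now.plusSeconds(POST_SECONDS),    // post-roll: aftermath of the bump
        metadata = metadata
    )
}

fun main() {
    val meta = ClipMetadata(latitude = 52.52, longitude = 13.405, recordedAt = Instant.now())
    val clip = detectEvent(accelerationG = 3.1, now = Instant.now(), metadata = meta)
    println(clip) // non-null: this window would be protected from being overwritten
}
```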

Yi Smart Dash Camera App Concept: Key screens

Clips can also be dropped manually on the map. Brief interviews with car and dashcam owners showed that this kind of active control is still desired. For example, people can actively share traffic jams with friends and family members. In the future, this feature is meant to be automated through detection of stop-and-go traffic.

For safety reasons, the app locks itself above a certain speed to prevent distraction while driving.
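
The lock rule itself is conceptually very simple; a sketch of the check might look like this, with a purely hypothetical threshold (the actual value used in the app is not stated here):

```kotlin
// Hypothetical lock threshold in km/h.
const val LOCK_SPEED_KMH = 10.0

/** Returns true when the UI should lock itself to avoid distracting the driver. */
fun shouldLockUi(currentSpeedKmh: Double): Boolean = currentSpeedKmh > LOCK_SPEED_KMH

fun main() {
    println(shouldLockUi(5.0))   // false: parked or crawling, app stays usable
    println(shouldLockUi(50.0))  // true: driving, app locks to prevent distraction
}
```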

Yi Smart Dash Camera App: Manual clip drop

Yi Smart Dash Camera App: Switching from map to menu

From Concept to...

The app concept was iterated and refined, then developed and released by Yi. As Yi's hardware portfolio quickly grew, the app was redesigned. However, key concepts such as the detection and creation of clips remained in the app.


Responsibility

My key focus was on the interaction model between the camera and the app, as well as the interaction patterns within the app itself, including prototyping and animations.