Coding Exercise

Building on a mini project I made for another course, I combined machine learning models that detect the eyes and mouth. The player controls a small fish with their gaze: the fish swims in the direction of the player's eye movement, freely exploring the pond. Obstacles block its way, and food occasionally appears in the pond. When the fish reaches the food and the player opens their mouth, the fish can "eat" it.
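The eating rule boils down to a two-part condition: the fish must overlap the food, and the mouth must be open at the same moment. Below is a minimal sketch of that check in p5.js; `tryToEat`, `EAT_RADIUS`, and the shape of the `food` object are illustrative names for this write-up, not the actual project code.

```js
// Simplified eat check: proximity AND an open mouth, both at once.
// EAT_RADIUS and the food object's fields are assumptions for illustration.
const EAT_RADIUS = 20; // pickup distance in pixels, tuned by hand

function tryToEat(fishPos, food, mouthOpen) {
  // fishPos and food.pos are p5.Vector instances
  if (mouthOpen && fishPos.dist(food.pos) < EAT_RADIUS) {
    food.eaten = true; // the sketch can then respawn the food elsewhere
  }
}
```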

Demo:

https://drive.google.com/file/d/1O6wGuagXcAXxXN0On3_q0qT5kHO4195A/view?usp=sharing

Code Analysis

In my code, I used only part of the FaceMesh data: the keypoints of one iris and of the upper and lower inner lips. The distance between the lip keypoints tells me whether the mouth is open. As for the iris, its actual range of movement is very small and confined within the eye socket, so I mapped it onto a much larger range to make the fish's movement more dynamic and responsive.
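As a concrete illustration, here is a stripped-down p5.js + ml5.js version of those two pieces. It assumes ml5's faceMesh wrapper with refineLandmarks enabled (which adds the iris keypoints), the standard MediaPipe FaceMesh indices (13 and 14 for the inner lips, 468 for one iris center), and hand-tuned thresholds and calibration ranges; the specific numbers are placeholders, not the values from my project.

```js
let faceMesh, video;
let faces = [];

function preload() {
  // refineLandmarks adds the iris keypoints (indices 468-477)
  faceMesh = ml5.faceMesh({ maxFaces: 1, refineLandmarks: true });
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  faceMesh.detectStart(video, (results) => { faces = results; });
}

function draw() {
  background(220);
  if (faces.length === 0) return;
  const k = faces[0].keypoints;

  // Mouth: gap between the upper (13) and lower (14) inner-lip keypoints.
  const mouthGap = dist(k[13].x, k[13].y, k[14].x, k[14].y);
  const mouthOpen = mouthGap > 15; // pixel threshold, tuned by hand

  // Iris: its raw travel inside the eye socket is only a few pixels,
  // so map that small calibrated range onto the whole canvas.
  const iris = k[468]; // one iris center
  const targetX = map(iris.x, 300, 340, 0, width, true);  // assumed range
  const targetY = map(iris.y, 220, 250, 0, height, true); // assumed range

  fill(mouthOpen ? 'orange' : 'steelblue');
  circle(targetX, targetY, 30); // stand-in for the fish's target position
}
```

In the full game the fish would steer toward the mapped point rather than jump to it, which keeps the motion smooth; that steering detail is omitted here for brevity.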

Previous code (before this project):

NOC mini assignment 10