For the final project I wanted to make a dance pose classifier. I initially started with the idea of strapping two Arduino Nano BLE boards to my legs and training a model using a variation of the magic wand example code.

This proved rather difficult because the braces I designed let the Arduinos shift around, so the training data varied significantly from session to session. That made it almost impossible to train a model and then reproduce the dance poses.

As an alternative, I decided to use PoseNet with ml5.js. I found great examples in Daniel Shiffman's Coding Train series; I looked into the pose estimation example and built off of that:

ml5.js: Pose Estimation with PoseNet
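For reference, here is a minimal sketch of the pipeline that example follows, roughly as I adapted it. Variable names like `brain` and the keypress-to-collect flow mirror the Coding Train version rather than my exact code.

```js
let video, poseNet, pose, brain;
let collectingLabel = null; // label assigned to incoming poses while collecting

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  // PoseNet streams 17 body keypoints from the webcam feed
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', gotPoses);

  // ml5 neural network: 34 inputs (x, y for each keypoint), one output per pose
  brain = ml5.neuralNetwork({
    inputs: 34,
    outputs: 3,
    task: 'classification',
    debug: true
  });
}

function gotPoses(poses) {
  if (poses.length > 0) {
    pose = poses[0].pose;
    if (collectingLabel) {
      // flatten keypoints into [x0, y0, x1, y1, ...] and store as a training example
      const inputs = pose.keypoints.flatMap(k => [k.position.x, k.position.y]);
      brain.addData(inputs, [collectingLabel]);
    }
  }
}

function keyPressed() {
  // press 1/2/3 to collect examples for each pose, t to train
  const labels = { '1': 'staying alive', '2': 'superman', '3': 'just do it' };
  if (labels[key]) collectingLabel = labels[key];
  if (key === 't') {
    collectingLabel = null;
    brain.normalizeData();
    brain.train({ epochs: 50 }, () => console.log('training done'));
  }
}
```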

The final output is a pose classifier that distinguishes three moves: Staying Alive, Superman, and Just Do It. I chose moves that are mostly upper-body movement, because feet get cut off in the frame quite often.

[Final sketch embedded from the p5.js Web Editor]
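Once the model is trained, classification runs in a loop: each frame's keypoints are flattened the same way as during data collection and fed to the network, and the top label is kept only when its confidence is reasonably high. The 0.75 threshold and the `poseLabel` variable below are illustrative, not my exact values.

```js
let poseLabel = '';

function classifyPose() {
  if (pose) {
    // same flattening as during data collection
    const inputs = pose.keypoints.flatMap(k => [k.position.x, k.position.y]);
    brain.classify(inputs, gotResult);
  } else {
    // no pose detected yet; try again shortly
    setTimeout(classifyPose, 100);
  }
}

function gotResult(error, results) {
  if (!error && results[0].confidence > 0.75) {
    poseLabel = results[0].label; // 'staying alive', 'superman', or 'just do it'
  }
  classifyPose(); // keep classifying
}
```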

I didn't have a chance to add more poses to my model, but moving forward I hope to make visuals that correspond to the outputs.

Also, my current implementation for playing sound waits for 20 consecutive identical classification results before loading the corresponding song, and then requires a mouse click to play it. I would also like to improve the interaction so the user doesn't need to repeatedly go back to the computer to start playback.
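As a rough sketch of that gate (assuming p5.sound is loaded, and with placeholder file names under assets/): count consecutive identical labels, load the matching track once the streak hits 20, and only start playback on a mouse click, since browsers require a user gesture for audio. Something like `trackLabel(results[0].label)` would be called from `gotResult` above.

```js
let currentLabel = '';
let streak = 0;
let song = null;

function trackLabel(label) {
  // count how many consecutive results agree
  if (label === currentLabel) {
    streak++;
  } else {
    currentLabel = label;
    streak = 1;
  }
  // after 20 identical results in a row, load the matching song (placeholder paths)
  if (streak === 20) {
    song = loadSound('assets/' + label + '.mp3');
  }
}

function mousePressed() {
  // audio playback has to be triggered by a user gesture, hence the click
  if (song && !song.isPlaying()) {
    song.play();
  }
}
```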