P5 Sketch Link: https://editor.p5js.org/my3037/full/_yVp2SasM

This project is an interactive real-time trainer that combines p5.js (for creative visuals and interaction) with ml5.js (for machine learning in the browser).

It lets you collect hand-keypoint samples from your webcam, label them (or pair them with RGB targets), train a neural network directly in the browser, and watch predictions update in real time.

You can even use mic loudness and device orientation as extra inputs, making the system responsive to gestures, sound, and motion at once.

Core Flow

  1. Capture Input Data → hand keypoints, mic level, and orientation.

  2. Add Labels or RGB Targets → decide what the model should learn.

  3. Train the Neural Network → ml5 normalizes and learns from samples.

  4. Predict in Real Time → the trained model outputs a label or RGB values.

  5. Visualize the Result → the output drives background color or shapes.
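Steps 4 and 5 can be illustrated with a small helper (a hypothetical name, not from the project). For the RGB-regression case, ml5's prediction callback typically yields an array of objects with a `value` field; those raw values need to be clamped into the valid 0–255 channel range before driving p5's `background()`:

```javascript
// Convert a regression prediction into an RGB triple for background().
// Assumes results shaped like [{ value: r }, { value: g }, { value: b }],
// as ml5's predict callback commonly returns. Each channel is rounded and
// clamped to the valid 0-255 range, since raw model output can overshoot.
function resultsToRGB(results) {
  return results.map(r => Math.min(255, Math.max(0, Math.round(r.value))));
}

// Example: slightly out-of-range model output gets clamped.
console.log(resultsToRGB([{ value: 260.4 }, { value: -3 }, { value: 127.6 }]));
// → [255, 0, 128]
```

In the sketch's `draw()` loop, the resulting triple could then be spread into `background(...rgb)` so the canvas color tracks the live prediction.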
