GitHub
https://github.com/raphy0316/basketball-form-analyzer
Intro
Built a system that lets anyone receive reliable shooting feedback by analyzing their motion and benchmarking it against professional basketball players.
Tech Stack
- Languages: Python
- Pose Estimation: TensorFlow, TensorFlow Hub, MoveNet
- Object Detection: YOLOv8 (Ultralytics), PyTorch
- Similarity Scoring: DTW (dtaidistance, fastdtw)
- Computer Vision: OpenCV
- Data Processing: NumPy, Pandas
- Backend/Infra: FastAPI, Docker, CHTC Server
- Integration: OpenAI API (LLM-based feedback)
- Client: React Native
Key Contributions
- Optimized pose estimation for mobile inference: Benchmarked multiple pose models and selected MoveNet Thunder (TensorFlow Hub) for real-time deployment (>83% perfect matches, >94% near matches). Enhanced robustness with preprocessing (cropped bounding boxes, color correction, confidence-based filtering) to mitigate noise and lighting variation.
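The confidence-based filtering and bounding-box cropping described above can be sketched as below. This is a minimal illustration, not the project's exact code: the threshold value and function names are assumptions, and the keypoint layout follows MoveNet's documented `(y, x, confidence)` format with coordinates in `[0, 1]`.

```python
import numpy as np

CONF_THRESHOLD = 0.3  # illustrative cutoff; the real pipeline tunes this


def filter_keypoints(keypoints: np.ndarray) -> np.ndarray:
    """Mask out unreliable keypoints.

    keypoints: (17, 3) array in MoveNet order -> (y, x, confidence).
    Low-confidence entries get NaN coordinates so downstream feature
    code can skip them instead of ingesting noise.
    """
    out = keypoints.copy()
    low = out[:, 2] < CONF_THRESHOLD
    out[low, :2] = np.nan
    return out


def crop_to_person(frame: np.ndarray, keypoints: np.ndarray,
                   margin: float = 0.1) -> np.ndarray:
    """Crop the frame to a margin-padded box around confident keypoints,
    so the pose model sees the player large and centered."""
    h, w = frame.shape[:2]
    conf = keypoints[:, 2] >= CONF_THRESHOLD
    ys = keypoints[conf, 0] * h
    xs = keypoints[conf, 1] * w
    y0 = max(int(ys.min() - margin * h), 0)
    y1 = min(int(ys.max() + margin * h), h)
    x0 = max(int(xs.min() - margin * w), 0)
    x1 = min(int(xs.max() + margin * w), w)
    return frame[y0:y1, x0:x1]
```

Running pose estimation on the cropped frame rather than the full image reduces the effect of background clutter and small subject size.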
- Implemented reliable basketball tracking: Fine-tuned YOLOv8 (Ultralytics, PyTorch) on a basketball dataset, achieving 94% detection accuracy for the ball. Enabled stable motion segmentation by handling the missed and jittery detections typical of small, fast-moving objects.
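One common fix for small-object tracking is filling short detection gaps so the ball trajectory stays continuous for segmentation. The helper below is a hedged sketch of that idea (linear interpolation over short `None` runs); the function name and `max_gap` value are illustrative, not the project's actual API.

```python
def interpolate_ball_track(centers, max_gap=5):
    """Fill short gaps (None entries) in a per-frame list of (x, y) ball
    centers by linear interpolation between the surrounding detections.

    A small, fast-moving ball is often missed for a few frames; bridging
    short gaps keeps the trajectory continuous. Gaps longer than
    `max_gap` are left as-is (likely occlusion or the ball leaving frame).
    """
    filled = list(centers)
    i, n = 0, len(filled)
    while i < n:
        if filled[i] is None:
            start = i
            while i < n and filled[i] is None:
                i += 1
            gap = i - start
            # Interpolate only bounded gaps with detections on both sides.
            if 0 < gap <= max_gap and start > 0 and i < n:
                (x0, y0), (x1, y1) = filled[start - 1], filled[i]
                for k in range(gap):
                    t = (k + 1) / (gap + 1)
                    filled[start + k] = (x0 + t * (x1 - x0),
                                         y0 + t * (y1 - y0))
        else:
            i += 1
    return filled
```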
- Standardized features across diverse video conditions: Normalized keypoint coordinates (0–1 scaling, torso-length normalization, hip anchoring) to ensure resolution- and position-invariant motion representation.
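The hip-anchoring and torso-length normalization can be sketched as follows. The keypoint indices are the standard COCO/MoveNet layout; the function name and the exact normalization order are assumptions for illustration.

```python
import numpy as np

# Standard COCO/MoveNet keypoint indices (rows assumed to be (y, x))
L_SHOULDER, R_SHOULDER = 5, 6
L_HIP, R_HIP = 11, 12


def normalize_pose(keypoints: np.ndarray) -> np.ndarray:
    """Make a (17, 2) pose resolution- and position-invariant.

    1. Anchor: subtract the hip midpoint, centering every pose on the hips
       so camera position does not matter.
    2. Scale: divide by torso length (hip midpoint to shoulder midpoint),
       so subjects at different distances from the camera are comparable.
    Input coordinates are assumed already scaled to [0, 1] by image size.
    """
    hips = keypoints[[L_HIP, R_HIP]].mean(axis=0)
    shoulders = keypoints[[L_SHOULDER, R_SHOULDER]].mean(axis=0)
    torso = np.linalg.norm(shoulders - hips)
    if torso < 1e-6:
        raise ValueError("degenerate pose: zero torso length")
    return (keypoints - hips) / torso
```

After this step, the same shooting motion produces near-identical feature sequences whether filmed close-up in portrait or far away in landscape.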
- Designed a robust phase detection pipeline: Built a 6-phase segmentation system (setup → loading → rising → release → follow-through) with cancellation logic to reject false transitions. Tuned thresholds on 100 labeled videos, achieving >91% accurate segmentation across varied shooting styles.
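The cancellation logic can be illustrated with a small state machine: a candidate phase transition is committed only after its condition holds for several consecutive frames, otherwise it is cancelled. This is a simplified sketch, not the project's detector; the single scalar feature, threshold values, and `confirm_frames` parameter are placeholders, and the real system tunes per-phase thresholds on labeled videos.

```python
PHASES = ["setup", "loading", "rising", "release", "follow-through"]


class PhaseDetector:
    """Threshold-based phase segmentation with cancellation.

    `update` is called once per frame with a scalar feature (e.g. wrist
    height). A transition to the next phase fires only after the
    threshold is exceeded for `confirm_frames` consecutive frames;
    a single sub-threshold frame cancels the pending transition,
    rejecting one-frame noise and false starts.
    """

    def __init__(self, thresholds, confirm_frames=3):
        self.thresholds = thresholds  # one threshold per transition
        self.confirm_frames = confirm_frames
        self.phase_idx = 0
        self._pending = 0

    @property
    def phase(self):
        return PHASES[self.phase_idx]

    def update(self, feature):
        if self.phase_idx + 1 >= len(PHASES):
            return self.phase  # terminal phase reached
        if feature >= self.thresholds[self.phase_idx]:
            self._pending += 1
            if self._pending >= self.confirm_frames:
                self.phase_idx += 1
                self._pending = 0
        else:
            self._pending = 0  # cancellation: candidate transition rejected
        return self.phase
```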
- Engineered similarity scoring with DTW: Compared player motion sequences using Dynamic Time Warping and kinematic features (angles, release timing, jump height, velocities). Generated both per-phase and overall similarity scores for interpretable performance analysis.
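The core DTW recurrence behind the scoring can be sketched in a few lines. This is a minimal pure-Python version for clarity; the project uses dtaidistance/fastdtw for speed, and the distance-to-score mapping below is an illustrative choice, not the project's actual formula.

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two 1-D feature sequences
    (e.g. an elbow-angle trajectory within one phase). DTW aligns the
    sequences non-linearly in time, so a slower shooter is not penalized
    merely for taking longer."""
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    # dp[i][j] = min cumulative cost aligning seq_a[:i] with seq_b[:j]
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]


def similarity_score(seq_a, seq_b):
    """Map length-normalized DTW distance to a 0-100 score
    (illustrative scaling: identical sequences score 100)."""
    d = dtw_distance(seq_a, seq_b) / max(len(seq_a), len(seq_b))
    return 100.0 / (1.0 + d)
```

Computing this per phase (and over concatenated phases for the overall score) yields the interpretable per-phase/overall breakdown described above.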