Each year, communities across North America spend billions cleaning up litter that mars parks, sidewalks, and roadways. Manual surveys—where teams walk or drive predefined routes—are slow, costly, and subject to human error. Hotspots can go undetected for weeks, leading to unchecked accumulation and environmental harm. I saw an opportunity to apply drone technology and computer vision to automate and supercharge this survey process.
I built a system in which a multicopter flies at a safe altitude of 10–15 meters carrying a lightweight Raspberry Pi that captures images and tags each one with GPS coordinates in real time. After the flight, the images and location data are transferred to a separate computer, where a YOLOv5 object detection model identifies litter such as bottles and bags. The results are then visualized on a web-based GIS dashboard as a heatmap that helps cleanup teams prioritize the most affected areas.
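To make the on-board capture concrete, here is a minimal sketch of the kind of script that could run on the Pi. It assumes a camera driven by the picamera2 library and a serial GPS module parsed with pynmea2; the serial port, capture interval, and file names are illustrative placeholders rather than the exact values from my build.

```python
import csv
import time
from datetime import datetime, timezone

import pynmea2                     # parses NMEA sentences from the GPS module
import serial                      # pyserial, for the GPS serial connection
from picamera2 import Picamera2

GPS_PORT = "/dev/ttyAMA0"          # placeholder: wherever the GPS module is attached
CAPTURE_INTERVAL_S = 2.0           # placeholder: tune to flight speed and overlap


def read_fix(port):
    """Return (lat, lon) from the next GGA sentence, or None if there is no fix."""
    line = port.readline().decode("ascii", errors="replace").strip()
    try:
        msg = pynmea2.parse(line)
    except pynmea2.ParseError:
        return None
    if getattr(msg, "sentence_type", "") == "GGA" and msg.latitude and msg.longitude:
        return msg.latitude, msg.longitude
    return None


def main():
    cam = Picamera2()
    cam.configure(cam.create_still_configuration())
    cam.start()
    gps = serial.Serial(GPS_PORT, 9600, timeout=1)

    # Sidecar CSV ties each frame to the GPS fix recorded at capture time.
    with open("capture_log.csv", "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["filename", "lat", "lon", "utc"])
        while True:
            fix = read_fix(gps)
            if fix is None:
                continue                       # wait until the GPS has a fix
            stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
            filename = f"frame_{stamp}.jpg"
            cam.capture_file(filename)
            log.writerow([filename, fix[0], fix[1], stamp])
            f.flush()                          # survive an abrupt power-off mid-flight
            time.sleep(CAPTURE_INTERVAL_S)


if __name__ == "__main__":
    main()
```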
The heart of the system is the computer vision pipeline that processes the imagery after each flight. I began by training YOLOv5 on 772 annotated images taken from our test drone flights. To improve robustness, I applied data augmentation, adding motion blur, contrast shifts, and random crops to simulate real-flight conditions.
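To illustrate the augmentation step, the sketch below uses the albumentations library, one reasonable way to implement it (my exact settings may differ). It applies motion blur, contrast shifts, and random crops while keeping YOLO-format bounding boxes aligned with the transformed image; the crop size, probabilities, and sample box are placeholder values.

```python
import albumentations as A
import cv2

# Augmentations that mimic flight conditions: motion blur from the moving
# drone, lighting/contrast shifts, and random crops for varied framing.
# bbox_params keeps the YOLO-format boxes aligned with the transformed image.
transform = A.Compose(
    [
        A.MotionBlur(blur_limit=7, p=0.5),
        A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.3, p=0.5),
        A.RandomCrop(height=576, width=576, p=0.5),  # must not exceed the image size
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

image = cv2.cvtColor(cv2.imread("frame_0001.jpg"), cv2.COLOR_BGR2RGB)
bboxes = [[0.52, 0.40, 0.08, 0.06]]    # one normalized YOLO box: cx, cy, w, h
labels = ["bottle"]

augmented = transform(image=image, bboxes=bboxes, class_labels=labels)
aug_image, aug_bboxes = augmented["image"], augmented["bboxes"]
```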
A Python script captures camera frames and stores them on board. On the server side, the images are analyzed and each detection's GPS fix is plotted as a point on a dynamic heatmap. Dashboard users can filter by time window, litter type, or flight mission.
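The server-side step, from detections to heatmap, can be sketched as follows. This assumes the trained weights are loaded through YOLOv5's torch.hub interface and the map is rendered with folium's HeatMap plugin; best.pt, capture_log.csv, and the confidence threshold are illustrative, and the dashboard's filtering sits on top of output like this.

```python
import csv

import folium
import torch
from folium.plugins import HeatMap

CONF_THRESHOLD = 0.4   # placeholder confidence cutoff

# Load the custom-trained weights through YOLOv5's official torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# capture_log.csv is the sidecar file written on the Pi: one row per image,
# with the GPS fix recorded at capture time.
points = []
with open("capture_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        results = model(row["filename"])
        detections = results.pandas().xyxy[0]          # needs pandas installed
        hits = detections[detections["confidence"] >= CONF_THRESHOLD]
        if len(hits) > 0:
            # Weight the image's location by how many litter items were found there.
            points.append([float(row["lat"]), float(row["lon"]), float(len(hits))])

if not points:
    raise SystemExit("No detections above the threshold; nothing to map.")

# Center the map on the surveyed area and render the weighted detections.
center = [sum(p[0] for p in points) / len(points),
          sum(p[1] for p in points) / len(points)]
m = folium.Map(location=center, zoom_start=18)
HeatMap(points).add_to(m)
m.save("litter_heatmap.html")
```

Opening litter_heatmap.html in a browser gives the same kind of cluster view that the dashboard layers its time, type, and mission filters on.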
In controlled test flights over parks, the drone covered 5,000 m² in under 10 minutes, highlighting how aerial surveying can drastically speed up data collection compared to manual methods. While I haven’t fully deployed the system with cleanup crews yet, early results show promise: the generated heatmaps clearly pinpoint clusters of litter, which could help teams prioritize cleanup zones more efficiently.
https://www.youtube.com/watch?v=L3JwvKmZ_t4