<aside>
💡
Notion Tip: Kickstart your project by responding to the prompts in the toggles below. Add more context to your answer by embedding links, images, files, and other blocks.
</aside>
- Testing: Continuously test the project for bugs and usability issues. Consider both unit testing and integration testing.
- Gather Feedback: If possible, gather feedback from potential users or hackathon mentors. Use this feedback to refine and improve your solution.
- Iterate: Quickly iterate on your project based on testing outcomes and feedback.
▸ Testing
- End-to-End & Integration Testing: Given the rapid pace of development, we prioritize core integration flows over strict unit test coverage. We manually trigger critical data pipelines (e.g., npm run scrape:instagram) to confirm the Puppeteer bot can navigate Instagram's changing layouts, handle rate limits, and save media at full resolution.
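The rate-limit handling above can be sketched as a small backoff helper that the scraper calls between retries. The base delay and cap below are illustrative assumptions, not values from our actual scripts:

```javascript
// Hypothetical backoff helper for the Instagram scraper: wait longer after
// each rate-limited attempt before retrying navigation.
// baseMs and capMs are assumed defaults, not our production values.
function backoffDelayMs(attempt, baseMs = 2000, capMs = 60000) {
  // Exponential growth (base * 2^attempt), clamped to a ceiling so a long
  // run of failures never produces an hours-long sleep.
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Promise-based sleep so the scraper can `await` between page loads.
function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```

In the scraping loop this would look like `await sleep(backoffDelayMs(attempt))` before re-requesting the page.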
- Data Integrity & API Validation: We closely monitor data outputs (such as data/scraped_posts.json) to validate the integrity of their deeply nested, hierarchical structure. We also use tools like Postman or plain curl requests to confirm our local Express server calls the Google Cloud Vertex AI API correctly to parse events efficiently.
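A spot-check of the scraped output can be automated with a small validator. The field names here (id, media, url) are assumptions based on a typical scrape schema, not the actual shape of scraped_posts.json:

```javascript
// Sketch of an integrity check for scraped posts; the required fields are
// hypothetical and would be adjusted to the real schema.
function validatePosts(posts) {
  const errors = [];
  posts.forEach((post, i) => {
    if (typeof post.id !== "string") errors.push(`post[${i}]: missing id`);
    if (!Array.isArray(post.media) || post.media.length === 0)
      errors.push(`post[${i}]: no media entries`);
    (post.media || []).forEach((m, j) => {
      if (typeof m.url !== "string" || !m.url.startsWith("http"))
        errors.push(`post[${i}].media[${j}]: bad url`);
    });
  });
  return errors;
}
```

Run against the real file with something like `validatePosts(JSON.parse(fs.readFileSync("data/scraped_posts.json", "utf8")))` and fail loudly if the returned array is non-empty.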
▸ Gather Feedback
- Mentor & Peer Reviews: We will demo our scraping and AI-parsing pipelines to hackathon mentors and backend peers to gather feedback on performance bottlenecks and Vertex AI prompt accuracy.
- Frontend Usability Testing: By having frontend contributors connect directly to our core API endpoint (http://127.0.0.1:3000), we establish a tight feedback loop to ensure the backend schemas clearly support the user interface and event-listing requirements.
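One way to keep that feedback loop concrete is to centralize the event shape the frontend consumes in a single serializer. The route and every field name below are illustrative assumptions, not our actual API contract:

```javascript
// Hypothetical serializer for the events the frontend fetches from the local
// Express server (e.g., GET http://127.0.0.1:3000/events). Field names are
// assumptions; only sourceUrl is treated as required here.
function toEventDto(parsed) {
  return {
    title: parsed.title ?? "Untitled event",
    startsAt: parsed.startsAt ?? null, // ISO 8601 string when known
    location: parsed.location ?? null,
    imageUrl: parsed.imageUrl ?? null,
    sourceUrl: parsed.sourceUrl,
  };
}
```

Keeping the defaults in one place means frontend contributors always receive every key, even when the AI parser could not extract a value.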
▸ Iterate
- Rapid Script Refinement: Because third-party platforms like Instagram have unpredictable, dynamic DOM structures, we iterate quickly on fallback extraction strategies based on real-world failures (e.g., pivoting from cropped og:image metadata to parsing direct article img sources for high-resolution images).
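The fallback order described above can be isolated in a pure helper, which keeps the strategy easy to reorder as new failure modes appear. In the real scraper the two candidate values would come from Puppeteer page evaluations; the helper itself is a sketch:

```javascript
// Sketch of the image-source fallback: prefer a direct <img> src inside the
// post article (full resolution), and only fall back to the og:image meta
// content, which is often served cropped.
function pickImageUrl({ articleImgSrc, ogImageContent }) {
  if (articleImgSrc && articleImgSrc.startsWith("http")) return articleImgSrc;
  if (ogImageContent && ogImageContent.startsWith("http")) return ogImageContent;
  return null; // caller decides whether a post without media is an error
}
```

Because the function is pure, new fallback layers (e.g., srcset parsing) can be added and unit-tested without launching a browser.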
- Fail-Safe Development: We implemented frequent, incremental saves and graceful error handling (SIGINT listeners) across our ingestion scripts. This lets us test code changes aggressively and interrupt running terminal commands without risking the loss of bulk-scraped data mid-execution.