This guide offers a practical way to evaluate mentorship programs by combining simple data collection methods with easy-to-use analysis tools. Instead of prescribing a one-size-fits-all approach, it provides a menu of options that program coordinators and data analysts can adapt to their own context and capacity.
The goal is to help programs capture both the human experience of mentorship and the organizational value it creates without adding unnecessary complexity.
📋 Menu of Data Collection Options
A flexible set of methods for gathering input from participants, supervisors, and stakeholders. Includes quick pulse surveys, structured reflection prompts, interview guides, and artifact collection so coordinators can choose what fits their capacity and goals.
🔄 Ongoing Feedback Methods
Ongoing methods for keeping evaluation alive throughout the program, rather than only measuring at the end. Includes participant-level feedback (reflection prompts, after-action reviews, visual mapping) and program-level feedback (pulse surveys, liaison check-ins, mid-program co-creation sessions). These approaches make feedback a living process that supports immediate course correction, deeper learning, and long-term organizational insight.
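For coordinators who want a concrete starting point, pulse-survey results can be tallied with a few lines of code rather than a spreadsheet formula. The sketch below is one minimal, assumed approach (the question names, 1–5 rating scale, and `summarize_pulse` helper are illustrative, not prescribed by this guide):

```python
# Minimal sketch: average 1-5 Likert ratings from a pulse survey and
# compute the response rate. Question names and data are illustrative.
from statistics import mean

def summarize_pulse(responses, roster_size):
    """Average each question's ratings and compute the response rate."""
    if not responses:
        return {"response_rate": 0.0, "averages": {}}
    questions = responses[0].keys()
    averages = {q: round(mean(r[q] for r in responses), 2) for q in questions}
    return {
        "response_rate": round(len(responses) / roster_size, 2),
        "averages": averages,
    }

# Example: three of five mentorship pairs responded this month.
sample = [
    {"meeting_cadence": 5, "goal_clarity": 4},
    {"meeting_cadence": 4, "goal_clarity": 4},
    {"meeting_cadence": 3, "goal_clarity": 5},
]
print(summarize_pulse(sample, roster_size=5))
```

Running the same tally after each pulse survey makes mid-program course correction easier: a dip in one question's average between check-ins is a prompt for a liaison conversation, not just an end-of-program finding.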
🗃️ Program Outputs Archive – Submission Form
An optional place for mentorship pairs to share concrete deliverables that emerged from their collaboration — such as presentations, SOPs, tools, or resource guides. Submissions are collected through a simple form and curated into an archive so future cohorts and teams can benefit from the work already created.
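An archive like this only helps future cohorts if its entries are easy to search. One possible shape for the curated index is sketched below (the `ArchiveEntry` fields and tag-based lookup are assumptions for illustration; the guide does not prescribe a schema):

```python
# Minimal sketch of a searchable index for submitted deliverables.
# All field names are illustrative, not a required schema.
from dataclasses import dataclass, field

@dataclass
class ArchiveEntry:
    title: str
    pair_id: str        # e.g. an anonymized identifier for the mentorship pair
    artifact_type: str  # e.g. "presentation", "SOP", "tool", "resource guide"
    tags: list[str] = field(default_factory=list)

def find_by_tag(archive, tag):
    """Return entries whose tags include the given keyword."""
    return [entry for entry in archive if tag in entry.tags]

archive = [
    ArchiveEntry("Onboarding SOP", "pair-01", "SOP", ["onboarding", "HR"]),
    ArchiveEntry("Grant-tracking tool", "pair-02", "tool", ["finance"]),
]
print([entry.title for entry in find_by_tag(archive, "onboarding")])
```

In practice the submission form's fields can map one-to-one onto an index like this, so curating the archive is mostly a matter of reviewing tags rather than restructuring each submission.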
✂️ Templates & Copy-Paste Tools
Ready-to-use questions, prompts, trackers, and forms that reduce the lift for coordinators and make it easy to start right away. (Coming soon)