Using the Kirkpatrick framework for evaluation, Ellen Ensher shows how to create an evaluation plan focusing on reaction to the program, key learning obtained, on-the-job and/or career application, and organizational results.
- According to economist Milton Friedman, "One of the greatest mistakes is to judge policies and programs by their intentions rather than their results." I think this quote is a good reminder that before you start your evaluation of your mentoring program, recall your original intentions or goals for your program. In fact, I recommend you map out your evaluation well before your mentoring program begins. I'm going to walk you through a modified version of Kirkpatrick's evaluation that can help you make your plan.
Consider these four levels of evaluation. Level one is reaction. This level measures how much people liked your mentoring program and enjoyed the overall experience. Typically this is measured with surveys or perhaps even focus groups. I recommend a combination of open and closed questions. Typical questions might include: on a scale of one to ten, with one being miserable, five being meh, and ten being fantastic, how satisfied were you with your overall experience in the mentoring program? What was most/least enjoyable about participating in the mentoring program? Level two is learning.
This level measures what and how much mentors and proteges learned as a result of participating in your mentoring program. Measure this in traditional ways with surveys or interviews. Or, if you used action learning, the final projects can be a great measure of learning. Also, ask your mentors and proteges to review how well they accomplished their respective SMART goals, as this provides a terrific measure of learning as well. Level three is behavior back on the job.
In other words, examine whether proteges' behavior, and perhaps also mentors' behavior, changed as a result of participating in the mentoring program. Again, let the SMART goals set by mentors and proteges offer guidance for your evaluation. For example, I had a faculty protege who wanted to increase her publishing outputs. So, we set a goal of submitting four articles a year. In this case, it was pretty easy to measure behavior, as we could look at the frequency of the behaviors that go into getting an article published, like conference submissions, data collection, and even networking with editors.
Level four is results and/or return on investment. Admittedly, measuring the results of your mentoring program is difficult and costly. We don't live in a perfect world, so evaluate what you can with your time, budget, and resources. For more information on evaluating formal mentoring programs, I recommend the work of Tammy Allen and her colleagues as discussed in their book, Designing Formal Mentoring Programs: An Evidence-Based Approach.
Consider phasing in your evaluation, so perhaps the first time around, just measure the reaction, and then as your program gains momentum, you can get fancier with your evaluations in the future.