Decide what to evaluate
There are many different but equally legitimate possibilities. Do you want to measure bottom-line benefits to the organisation or ROI? Or are you interested in the transfer of learning – changes in performance or behaviour in those who have been coached?
Are you concerned with the effectiveness of the coaching process (the relationship) or whether coaching is aimed at the right people? Or do you want to find out which of the organisation’s systems helped or hindered achievement of the expected benefits?
Make it work
- Choose the right ruler before you start measuring – and stick to it. The answers to the questions above do not have to be mutually exclusive.
- Be clear about your evaluation budget. Check time constraints and scope before you start. Measuring ROI takes time, specialist expertise and money.
Fatal flaws
- Leaving it too late to evaluate.
- Making evaluation overly complicated. Focusing on one key indicator – for example, sales figures for coached sales staff – is often enough.
- Ignoring the fact that measuring ROI and identifying the impact of coaching are not the same thing.
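The last point can be made concrete. As a rough sketch – the figures and the helper function below are illustrative assumptions, not taken from the article – ROI is conventionally expressed as net benefits as a percentage of costs. The calculation itself is trivial; the hard part, and the separate exercise, is isolating how much of the benefit figure is actually attributable to coaching rather than to other factors:

```python
def roi_percent(benefits: float, costs: float) -> float:
    """Conventional ROI: net benefits as a percentage of programme costs."""
    return (benefits - costs) / costs * 100

# Hypothetical figures: £60,000 of estimated benefits attributed to
# coaching, against a £20,000 programme cost.
print(roi_percent(60_000, 20_000))  # → 200.0
```

If only half of that £60,000 can credibly be attributed to coaching, the ROI halves to 50 per cent – which is why measuring ROI and identifying the impact of coaching must be treated as distinct questions.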
Measure change over time
The most convincing evidence is gathered over time. Margaret Chapman looked at the impact of coaching on team emotional intelligence and found that developing “EQ” worked best by combining one-to-one and group coaching.
Make it work
- Establish pre- and post-measures. Combining pre- and post-coaching results is much more credible than post-hoc evaluation.
- Repeat the measures and look for ongoing benefits after the coaching has ended to see if they have been sustained.
- Compare the results with a “control group” or “wait list” of similar people who have not yet been on the programme.
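The three points above combine naturally into a “difference-in-differences” comparison: the change in the coached group over and above the change in a comparable, not-yet-coached group. This is a minimal numeric sketch – the scores, scales and group sizes are invented for illustration:

```python
from statistics import mean

def mean_change(pre, post):
    """Average individual change from pre- to post-coaching scores."""
    return mean(after - before for before, after in zip(pre, post))

# Hypothetical performance ratings for four coached staff and a
# four-person "wait list" control group, measured before and after
# the programme.
coached_pre,  coached_post = [3.1, 2.8, 3.4, 3.0], [3.9, 3.5, 4.1, 3.8]
control_pre,  control_post = [3.0, 3.2, 2.9, 3.1], [3.1, 3.2, 3.0, 3.1]

# Difference-in-differences: improvement in the coached group beyond
# the improvement seen without coaching.
effect = mean_change(coached_pre, coached_post) - mean_change(control_pre, control_post)
print(round(effect, 2))
```

Repeating the post-measure some months after the coaching has ended, and recomputing the same difference, shows whether the effect has been sustained.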
Fatal flaws
- Not planning how to evaluate the coaching before the programme begins.
- Not telling all parties involved that the coaching will be evaluated, how this will be done, and what they will need to contribute.
Carry out a stakeholder analysis
Although coaching is a one-to-one intervention, it takes place in a specific context, with a multitude of stakeholders who have different priorities, needs and expectations. You will need to decide whom to include and what data to collect in order to satisfy those needs.
Make it work
- Be clear who the stakeholders are and establish what influence they have and their expectations.
- Make sure your evaluation plan identifies to what extent stakeholders’ expectations have been met.
- Identify who else might have an interest, such as those being coached and their managers. What do they hope for?
- Consider using stakeholders as data sources.
Fatal flaws
- Listening only to what the coaching client has to say. Research by Alison Carter suggests that this can paint too rosy a picture of the programme.
- Using external providers as your only source of feedback. They have a financial interest in the outcome, and their view may be biased accordingly.
- Not knowing at the outset how you will identify and report on value added.
What is the evidence?
This depends on your responses to the previous step. In general, numbers tend to convince the stakeholders who are paying for or sponsoring the coaching, while individual success stories are easily remembered and make vivid illustrations.
Make it work
- Be pragmatic. Stick to your budget and select a data collection method that meets stakeholders’ objectives and the reason for evaluation.
- Use an integrative approach, combining qualitative (in-depth interviews) and quantitative (questionnaires and surveys) methods to get a fuller picture.
Fatal flaws
- Relying on a single measure from a single source, for example, “happy sheets” from those coached.
- Not being clear about what is being measured, how and for what purpose.
- Not knowing what to do with the data, once it has been collected.
Ensure evaluator competence
In a tight economy, practitioners can add value by demonstrating the ROI and impact of coaching. However, this requires a high level of competence in evaluation. Critics have suggested that HRD professionals rely too heavily on Kirkpatrick’s four-level model of evaluation. Writing in 2008, Darlene Russ-Eft and her colleagues describe an “evaluation imperative”: the need to reframe evaluation as being about learning, growth and change – just as coaching is.
Make it work
- Engage in continuous professional development. There are other approaches beyond Kirkpatrick.
- Recognise that evaluation is about asking questions, making decisions and creating the future.
- Consult the CIPD’s online tool: Developing Coaching Capability, www.cipd.co.uk/subjects/lrnanddev/coachmntor
Fatal flaws
- Believing that ROI is the only way of demonstrating the utility and worth of coaching.
Further reading
- V Anderson, “The value of learning: from return on investment to return on expectation”, Research into Practice Report, CIPD, 2007.
- A Carter, “Practical methods for evaluating coaching”, IES Research Report 430, Institute for Employment Studies, 2006.
- M Chapman, “Emotional intelligence and coaching: an exploratory study”, in M Cavanagh, A M Grant and T Kemp (eds), Evidence-Based Coaching, vol 1, Australian Academic Press, 2005.
- CIPD, Developing Coaching Capability: How to Design Effective Coaching Systems in Organisations, 2008.
- D Russ-Eft et al, Evaluator Competencies: Standards for the Practice of Evaluation in Organizations, John Wiley & Sons, 2008.
Alison Carter is principal research fellow, HR research and consultancy, at the Institute for Employment Studies. Alison.carter@employment-studies.co.uk
Margaret Chapman is a chartered scientist and principal psychologist at EI (Emotional Intelligence) Coaching and Consulting. mc@eicoaching.co.uk.
Both are executive coaches and active researchers and are leading national and international specialists in evaluation, EQ and coaching.
Volume 4, Issue 2