There are a variety of ways to measure impact. Theory suggests what the ideal data would be, but reality has a bad habit of intruding. The result is a range of approaches to determining impact, of increasing value: some are indirect, others more direct. We should choose measures we can actually collect data on, while always looking to up our game.
A starting point
A core framework is the so-called Kirkpatrick model (with caveats about the legitimacy of that attribution).
Here, there are four levels that are assessed:
- Level 1: what did performers think of the intervention?
- Level 2: were there direct outcomes from the intervention in the abilities of performers?
- Level 3: are there persistent behavioral outcomes in the workplace as a result?
- Level 4: are those changes in workplace behavior leading to improvements?
There are several caveats around this. For one, novices' assessments of the value of a learning experience don't correlate well with its measurable impact. Unless we specifically want happy learners (say, because we're selling the solution, or we're aiming for a true learning experience), we really don't care what they think about it. Also, others have suggested a level '0': whether they actually attend and/or complete the learning! In addition, the Phillipses have proposed a level 5, Return on Investment (ROI). To be fair, we don't disagree with ROI, but it can be misleading.
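For reference, the Phillips-style ROI calculation is conventionally expressed as net program benefits over program costs (the figures below are purely illustrative, not from any real program):

ROI (%) = ((program benefits − program costs) / program costs) × 100

So a program costing $50,000 that yields $80,000 in monetized benefits would show ((80,000 − 50,000) / 50,000) × 100 = 60%. The arithmetic is simple; the contention is usually in how those benefits get attributed and monetized.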
Too many organizations assuage any real concerns about learning by saying they're at least doing level 1, conducting a survey at the end. Yet the proper implementation is to start at level 4, determining what measurement you need to improve. From there, you can determine what different behaviors in the workplace would produce that improvement. Finally, that drives the design of an intervention to achieve the behavior change. As previously noted, that change can come from a number of sources; here, we're considering job aids and training.
The idea of starting with a measurement in the organization – a KPI, OKR, etc. – is, in practice, harder to implement. There are several reasons for this, including that it requires working outside the comfort zone, engaging with another business unit. There's also the challenge that folks, even internally, don't necessarily like sharing data.
Flexibility
While it's not the ideal, pragmatically it makes sense to consider several steps along the measurement path. We can, and should, move toward more important measurements, even if redoing the entire evaluation approach is too big a step. There is increasingly valuable data we can get as we move along.
While we shouldn’t start with nothing, a ‘smile sheet’ response is also inadequate. Whether learners like an intervention doesn’t provide us with any useful information. At the least, we should be asking whether they thought it was valuable, but even that isn’t really informative unless they’re beyond novice status.
As a next step, we should strive to determine performers’ ability at the end of the experience. That is, we should be evaluating their ability to ‘do’ as a result of the intervention. If it’s a job aid, we should see that they can use it; if it’s training, we should expect them to demonstrate a level of capability. This shouldn’t mean arbitrarily achieving 80% on a knowledge test, however. We can still use multiple-choice questions, but they should require making decisions, not just reciting information. We can also use branching scenarios or full simulations, depending on how important success is, how complex the task is, and how frequently it’s performed. This at least tells us that they can do the task appropriately after the intervention.
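To make the distinction between recall and decision-making concrete, here is a minimal sketch of a single branching-scenario node represented as data; the scenario, choices, and node names are invented for illustration, not drawn from any particular authoring tool:

```python
# A hypothetical branching-scenario node: the learner makes a decision,
# and each choice routes to a consequence rather than a simple right/wrong mark.
scenario_node = {
    "id": "upset-customer",
    "prompt": (
        "A customer calls, upset that their order arrived damaged. "
        "What do you do first?"
    ),
    "choices": [
        {
            "text": "Apologize and ask clarifying questions about the damage.",
            "next": "gather-details",       # branches deeper into the scenario
        },
        {
            "text": "Immediately offer a full refund.",
            "next": "refund-consequences",  # plausible, but a costly path
        },
        {
            "text": "Explain the returns policy and end the call.",
            "next": "escalation",           # leads to a negative consequence
        },
    ],
}
```

The point is that each option is a plausible course of action with a downstream consequence, so answering well requires judgment, not recognition of a memorized fact.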
Moving beyond just the intervention, we can evaluate whether it’s leading to changes in behavior in the workplace. This really has to come at least some days after the intervention; to evaluate whether the behavior has persisted, we should ask several weeks later. We can ask the learners themselves for one level of value. Increasing value comes if we ask stakeholders – supervisors, peers, customers, … – whether the change is observed. Objective data, collected via instrumented systems (think: xAPI or the like), can give us even more accurate insight. We can also look at information from business intelligence systems, or combine the two.
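As a rough sketch of what ‘instrumented’ can look like, here is one way to record an observed workplace behavior as an xAPI statement sent to a Learning Record Store. The endpoint, credentials, actor, and activity ID are hypothetical placeholders; only the statement shape and version header follow the xAPI spec:

```python
# Minimal sketch: post one xAPI statement to a Learning Record Store (LRS).
# All names, URLs, and credentials below are placeholders for illustration.
import json
import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"  # hypothetical LRS
LRS_AUTH = ("lrs_user", "lrs_password")                   # hypothetical credentials

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Pat Example",
        "mbox": "mailto:pat@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/activities/customer-call-checklist",
        "definition": {"name": {"en-US": "Used customer call job aid"}},
    },
}

response = requests.post(
    LRS_ENDPOINT,
    auth=LRS_AUTH,
    headers={
        "Content-Type": "application/json",
        "X-Experience-API-Version": "1.0.3",  # version header required by the spec
    },
    data=json.dumps(statement),
)
response.raise_for_status()  # the LRS returns the stored statement ID(s) on success
```

Statements like this, aggregated in the LRS, are the kind of objective trace that can then be lined up against business intelligence data.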
Finally, we can look to see if changes are occurring in the organization. Are targeted metrics increasing or decreasing appropriately? Have sales increased or time to close decreased? Is throughput up or errors down? These are the types of changes we should strategically be seeing, and ultimately this is the level at which we want to be playing.
Still, making strides is better than not. If you’re not collecting anything, start testing. If you’re testing, see if the learning is persisting. If it’s persisting, see if it’s making the needed change. All of this is indicative of impact; the difference is just how direct the measurement is. Organizations can be hard to change, and none of this comes without interacting with others, so small steps are better than none. It’s best to take the first one!
In conclusion, the journey to measure learning impact demands adaptability and a step-by-step progression. Begin by gauging learner perceptions, advance to evaluating practical abilities, and finally look for tangible changes in workplace behavior and organizational metrics. Incremental steps are key. Explore our eBook, Designing for Learning Impact: Strategies and Implementation, for a comprehensive exploration offering actionable strategies, valuable insights, and a roadmap for navigating the complex terrain of learning impact.