Microlearning promises to be a useful tool in your kit of solutions. Yet there are obvious ways to go wrong. Knowing what they are increases the likelihood that you won't.
Walking through the process of developing microlearning reveals where such initiatives can go awry. From initial awareness onward, every step along the way presents challenges that can undermine success.
The first way to go wrong is not knowing about microlearning, or not truly understanding it. Thinking it's just about breaking courses into pieces will definitely lead you astray. Similarly, thinking it's just a 'how to' video limits the possibilities.
It’s important to understand the problems microlearning solves and when it makes sense. With that, you can avoid the initial challenge of being unaware or misconstruing the opportunity.
Microlearning, particularly in its two very different forms, can catch folks out. Failing to use microlearning when it would solve the problem is one obvious misstep; applying the wrong form to a particular problem is another way to go off track.
The right approach is to start with a suitable analysis. This means identifying the gap between the performance that's needed and what's currently happening, which helps you accurately characterize the problem. That gap analysis needs to be accompanied by a root-cause analysis: you need to know what is causing the performance gap. If it's a situation where knowledge 'in the world' helps, you've got a call for one form of microlearning. If it's a situation where a skill needs to be developed over time, you're pointed toward the other. Of course, microlearning won't help in other situations, such as misaligned incentives, a need for rapid skill development, development in ambiguous situations, and so on.
One other thing to do during the analysis phase is to identify the criteria you're aiming to achieve. What change is needed to address the identified performance gap? How will it be measured? What resources, in both time and money, are you willing to expend? Establishing these criteria up front gives you a basis for determining when you're done.
While this should be true for all learning, it’s critical here to match the solution to the problem. You want to make sure that microlearning as a solution is targeted to the right problem. The challenge is to ensure that you’ve understood the problem, and microlearning, sufficiently to know that there’s a match here.
Once a match is established, a further challenge is in appropriately designing the solution. Each approach has its own nuances, and ensuring that the design is aligned is a necessity.
For the 'drip irrigation' model of spaced learning, it's knowing what the initial foundation must be, what size chunks to use, what the quantity and frequency of reactivation should be, and when to use models, examples, activities, reflection, and more to support skill development. This is actually fairly complex, and appropriate resources should be dedicated to ensuring that the solution is aligned with the need and the audience.
For performance support, there are different design criteria. The solution needs to be available at the time of need, usable, and match the cognitive need. The problem itself dictates whether a decision tree, lookup table, checklist, procedure guide, etc., makes the most sense. Then, the particular solution needs to be designed to include the necessary information in an appropriate way, with no extraneous information.
Once designed, the challenges aren't done. Deploying solutions involves technical challenges, but recognizing the ancillary considerations is just as important.
For one, there needs to be a match of media to need. While some variation is valuable, the core cognitive processes need to be invoked, and in a manner appropriate to the audience and the context. An auditory component in a noisy environment may be wrong, just as a web solution in a network-free location isn’t going to work.
There are also the people issues. Who will be available to clarify any concerns or issues? Will you include some people evaluation in your spaced learning, or will access to an expert be part of your ‘on the spot’ support? How will people even be aware of the availability of the solution, and supported to incorporate the solution into their work habits?
A successful implementation is a change initiative and needs to be treated as such.
The final challenge is evaluation. “If we build it, it is good” is not a strategy. Ideally, you have goals that the intervention is designed to hit (set at your analysis phase). Testing to see how close you are, then making changes and iterating until the metrics are being met is the right way to evaluate an initiative.
Too often we don't do this. All learning interventions should have a goal to achieve, and microlearning is no exception. We should evaluate our impact, and then determine whether the cost was justified. In fact, up front we should weigh the cost of making the change, and of not making it, against the expected benefits.
Microlearning, like any other initiative, can fail or succeed. Knowing the challenges, and being prepared for them, gives you the opportunity to address them successfully and create the desired outcomes.
To explore the full potential of microlearning, we invite you to download our eBook, 'Microlearning: It's Not What You Think It Is'. We would love to hear your thoughts on its usefulness. If you're interested in delving deeper into microlearning, reach out to us or our team of experts at firstname.lastname@example.org. We're happy to chat.