How do you do microlearning well? What implementation practices make a difference? The recommendations range from generic eLearning guidance to specifics about the two major delivery methods. Here we review the practices through the design process.
In general, we can view the design process through the lens of ADDIE: Analysis, Design, Development, Implementation, and Evaluation. While the field is moving to more iterative approaches (itself a best-principles recommendation), ADDIE serves as a useful framework for organizing the discussion.
The first step to microlearning success is a proper analysis. Too often, we simply take orders for an approach, or throw the latest approach at every problem. Instead, recognize that the two different forms of microlearning, spaced learning and performance support, spring from different needs.
What’s critical is to start by analyzing the performance gap. What isn’t happening that should (or vice versa)? We should have metrics that indicate what the current status is, and a good basis for what it should be. Then, we need to couple that with a proper root cause analysis.
A root cause analysis identifies the underlying problem that creates the gap. We need to know whether it's a resource, incentive, knowledge, or skill gap. Some gaps we can't do anything about (and shouldn't waste our energy on). If it's a skill gap where the knowledge has to be in the head, spaced learning has the potential to help. If we can put the answer in the world, we can explore performance support.
Knowing the actual problem to be solved isn’t unique to microlearning, but it is a necessity to get it right.
Once the solution is chosen, designing appropriately is key. You want to apply the right techniques to the problem. Applying information mapping to spaced learning is as inappropriate as applying instructional design to performance support. Matching the approach to the need is another success factor.
For performance support, you need to identify the cognitive lack and engineer an appropriate solution. There are substantive differences between solving a large information problem with a lookup table and managing a repeated process with a checklist. How-tos, whether video or static, similarly have their own design principles.
Alternatively, for spaced learning, just taking apart an existing course isn’t likely to yield the necessary outcomes. You want to identify the right sequence of activities, and the right spacing, to generate both the appropriate retention and transfer. You’ll want to gradually expand the scope while ensuring that the base is sustained.
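To make the "right spacing" idea concrete, one common pattern is an expanding-interval schedule, where each reinforcement activity arrives after a longer gap than the last. Here is a minimal sketch; the function name, the one-day starting gap, and the doubling factor are our own illustrative assumptions, not a prescription from the source, and real spacing should be tuned to your learners, content, and observed retention.

```python
from datetime import date, timedelta

def expanding_schedule(start: date, first_gap_days: float = 1,
                       factor: float = 2.0, reviews: int = 5) -> list[date]:
    """Return review dates whose gaps grow by `factor` each time.

    Illustrative only: the starting gap and growth factor should be
    adjusted based on evaluation data, not treated as fixed constants.
    """
    dates: list[date] = []
    gap = first_gap_days
    current = start
    for _ in range(reviews):
        current = current + timedelta(days=round(gap))
        dates.append(current)
        gap *= factor  # expand the interval for the next reinforcement
    return dates

# Example: reinforcement activities at 1, 2, 4, 8, and 16 days out.
schedule = expanding_schedule(date(2024, 1, 1))
```

The expanding pattern reflects the broader point in the text: gradually widen the scope and spacing while ensuring the base is sustained, rather than dumping all the reactivation activities into the first week.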
In both cases of design, you’ll want to iterate and test. Don’t assume your design is correct because you followed a venerated design process. The human brain is arguably the most complex thing in the known universe; simplistic approaches aren’t likely to lead to sustained change. You’ll want to base your design first on the best evidence available and then expect to test it. You’ll need evaluation (below), which modern models put at the center of the design process, but prose requires us to represent it linearly.
Also, don’t force people into unusual situations. You want to align with how people actually work. Beautifully designed pathways are frequently ignored while people walk across the grass, leaving trails, because that’s the easiest way to go. Work with what people are already doing. You can choose to introduce a new solution, but be prepared to work hard to make it a default. It’s easier not to.
To successfully introduce a change, plan for it. Peter de Jager made the point, in Pocketful of Change, that people don’t resist change; they make life changes all the time. What people resist are changes imposed upon them. His recommendation was to make the change a choice; if you frame it properly, the right choice should be obvious to them. You will, he goes on, still need to expect backsliding and engineer in support for the change.
Also, just as with mobile, you want to recognize that microlearning is a platform. Whatever you do first can be leveraged for other uses. Use this perspective from the beginning. Recognize that even a pilot test can set expectations. Have a plan to cope with increasing demand.
Finally, don’t just build it and assume it’s good. From the analysis, you should have some indication of what the initial problem was. How will you know if it’s solved? You need to test. Good evaluation starts with knowing what data you’ll collect even before you begin. You should know how you’ll get the necessary performance data, and how you’ll analyze it.
You’ll want both formative and summative data. That is, you’ll want data to help you course correct as you iterate, and then final data that tells you how the solution is working. Tight cycles of prototyping and testing have emerged as a more viable response to the complexity of human cognition.
In general, we don’t recommend best practices, we recommend best principles. Just taking what others have done and replicating it doesn’t have a high likelihood of success, because it’s likely a different context. Thus, we talk about the best principles that are abstracted from the instance and then recontextualized for the new situation.
Here we’ve presented the best principles that you can apply to your own situations. We’re happy to assist.
Ready to unlock the full potential of microlearning? Dive deeper into the topic by downloading our comprehensive eBook, ‘Microlearning: It’s Not What You Think It Is’. Discover valuable insights and best principles that can be applied to your specific situations. If you found the eBook useful and want to discuss microlearning in more detail, feel free to reach out to our team of experts at firstname.lastname@example.org. We’re here to assist you every step of the way.