While practice is perhaps the most important thing to remedy in pursuing deeper learning, there is much beyond practice that is ripe for improvement. These areas include extending the experience, going into detail on the ‘content’, and considering context. Each provides an opportunity to go deeper.
First, we typically don’t develop learners to a whole new level of expertise before returning them to the fray. That requires considerable time and, consequently, expense. Instead, we bring them to a minimum level of capability, and then expect to develop them further while they work. (At least, we should have this expectation!) Too often, however, we have them practice ’til they get it right, not until they can’t get it wrong. This is a major problem with much of our current learning.
This further development (extension of experience) can take a number of different paths, depending on a variety of factors. If the task happens frequently, preparing supervisors to observe and coach after the learning experience makes sense. If the task is rarer, that coaching should be coupled with reactivation, whether through reconceptualization (new models), recontextualization (more examples), reapplication (more practice), or some combination. We can also prompt reflection, or ask for reports on how the learning is being applied. However we do it, we should explicitly design for and deliver the reflection that accompanies learner action after the learning experience.
That last option, asking about application, gives us insight into the actual impact. We can also ask supervisors about changes observed in the workplace, or look at external metrics (which we should be examining anyway to determine whether the intervention had the necessary effect). Our goal should be impact, and we want not only to engineer it but also to assess it as part of determining whether we’re done.
A second area is content. Too often, ‘content’ serves as a generic label for anything that might be necessary. Instead, content should be minimal, focusing on the two major categories that assist learning: one is models, the other is examples. Each has specifics that we should understand. We get models and examples from our subject matter experts during analysis; our job is to make these elements comprehensible and useful.
Models are conceptual and causal. Model elegance comes from providing the maximum benefit with the minimum content. Models should guide how to proceed, either by explaining how the world works or by serving as a framework for evaluating performance. They are frequently represented with visual support: a diagram or equation, or an animation if dynamics play a role.
They are causal in that they give a basis for predicting the outcomes of choices. Our brains build causal explanations, and if we build a bad one, we don’t tend to replace it; instead, we patch it. It is therefore important that we provide good models up front, and show how they play out. Models are also conceptual, not tied to specific items: we want learners to match models to the specifics of a situation (appropriately).
That latter goal, seeing how models play out in real contexts, is the role of the other content component: examples. Examples are specific situations in which a challenge is addressed. They detail the situation, the steps taken, and the outcomes. Importantly, they should also show the underlying thinking; for instance, they should explicitly refer to the models.
Cognitively annotated examples, where the underlying thinking is made clear, are also known as worked examples. John Sweller’s research on cognitive load theory has shown that for novices, seeing worked examples is a useful precursor to actually engaging in practice. As a consequence, consider showing examples before practice, at least for those learning new skills as opposed to those continuing their development.
Examples are best when they’re minimal but meaningful. They should have the form of a story, and naturally incorporate the thinking as well as any necessary dialog or narrative. Formats can vary; graphic novel formats are engaging and allow thought bubbles to show underlying thinking, and videos can be compelling, but even prose can work. The point is to make the challenge and the outcomes plausible and visible. Ideally, the stories are intrinsically engaging, too, with important outcomes.
One other important element spans both examples and practice: the contexts that are seen and used. Here I mean the particular situations, challenges, and outcomes, and how they differ from one another. Almost every skill we’re developing has a ‘space’ of application, a range of circumstances in which it’s relevant. For some skills, this space is small, e.g. how to operate projector X. For others, such as negotiation, the space can be large, covering negotiations with vendors, employers, shopkeepers, and more. In broad transfer situations, we can’t provide enough contexts to span the entire space, so we have to provide a suite of contexts that will transfer successfully to the necessary areas. Those contexts, seen across examples and practice, define the space of transfer. If there are too few, we likely won’t get sufficient retention or transfer. If they are too narrow in scope, we won’t get sufficiently broad transfer. Yet, practically, we don’t want to build too many.
The success path is to identify the minimal set of contexts that provides the broadest transfer. The suite should include an initial context that serves as the simplest example. Then we should gradually add complexity while covering the different circumstances that will generalize best to the ones the learner should be able to handle. Unfortunately, there’s no algorithm for this, as it depends on the complexity of the skill, the space of application, the frequency of application, and the importance of the skill. The best approach is to create an initial draft, then test and refine until you get the outcomes you need.
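As a loose intuition pump only (not a replacement for the drafting, testing, and refinement described above), the coverage aspect of this problem resembles a greedy set-cover heuristic: each candidate context covers some regions of the skill’s application space, and we repeatedly pick whichever context covers the most still-uncovered regions. The sketch below is hypothetical; the function name, context names, and ‘regions’ are all illustrative assumptions.

```python
# Illustrative sketch (hypothetical names and data): treating "broadest
# transfer from the fewest contexts" as a greedy set-cover heuristic.

def pick_context_suite(candidates, target_regions):
    """candidates: dict of context name -> set of regions it covers."""
    suite, uncovered = [], set(target_regions)
    while uncovered:
        # Choose the candidate that covers the most uncovered regions.
        best = max(candidates, key=lambda c: len(candidates[c] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:
            break  # no candidate covers any remaining region
        suite.append(best)
        uncovered -= gain
    return suite, uncovered

# Hypothetical negotiation-skill contexts and the regions they exercise.
candidates = {
    "vendor negotiation": {"pricing", "contract terms"},
    "salary discussion": {"pricing", "interpersonal stakes"},
    "shop haggling": {"pricing"},
}
suite, gaps = pick_context_suite(
    candidates, {"pricing", "contract terms", "interpersonal stakes"}
)
```

Here the heuristic would select the vendor and salary contexts and skip shop haggling, since it adds no new coverage. In practice the ‘regions’ are fuzzy and overlapping, which is exactly why testing and refinement, not an algorithm, is the recommended path.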
This is more work than you’re probably used to. The good news is that it becomes easier, even second nature, with practice. The bad news is that it still adds a requirement for testing. However, that’s to be expected when you’re moving from dumping content to actually creating sustained change. Which, after all, is the ultimate goal of deeper learning! It’s past time we do it right.