When it comes to deeper learning, the biggest differentiator, I maintain, is the role of practice. In traditional learning, it seems to be about 80% content to 20% practice (if that). Which is probably about backwards. Practice, when it comes to learning, is significant. The question then becomes: how do you design sufficient meaningful practice?
It starts, from the analysis, with the performance objective(s). Whatever people need to be able to do, the final practice is doing that. It may be set in a fantasy context, but whatever they need to do in actual performance is, ultimately, what they should be practicing. Thus, the final practice is our first design goal.
In general, however, we don’t assume that they’ll be able to do it fully (or we wouldn’t need the learning experience). We’ve identified where they currently are as another outcome of the analysis. Thus, we need to work backwards from the final practice to where learners are now, developing the practice necessary to succeed on that final practice. We need to fill in the enabling objectives, the precursors to the final objective, that will get them to the end. This includes both knowledge and skill. Channeling Jeroen van Merriënboer and his Four-Component Instructional Design (4C/ID), it’s about the knowledge learners need and the complex problems they apply that knowledge to.
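This backwards-working step can be sketched in code. The sketch below is a minimal illustration, not a prescription: the objective names and the `practice_order` helper are hypothetical, standing in for whatever your analysis produces.

```python
# Objectives mapped to their enabling (prerequisite) objectives.
# Working backwards from the final objective yields a practice order
# in which every enabling objective is practiced before anything
# that depends on it.

def practice_order(objective, prerequisites, ordered=None):
    """Post-order walk: prerequisites first, final objective last."""
    if ordered is None:
        ordered = []
    for pre in prerequisites.get(objective, []):
        practice_order(pre, prerequisites, ordered)
    if objective not in ordered:
        ordered.append(objective)
    return ordered

# Hypothetical example: a customer-support skill and its precursors.
prerequisites = {
    "triage customer escalation": ["identify issue severity",
                                   "apply escalation policy"],
    "apply escalation policy": ["recall policy tiers"],
}

print(practice_order("triage customer escalation", prerequisites))
```

The final practice lands last, with every enabling objective practiced somewhere before it, which is exactly the path structure the analysis calls for.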
This involves creating a learning path, a series of practices, that accomplish two goals: providing sufficient practice for retention over time, and across suitable contexts to support transfer to all appropriate situations. While ideally there would be good prescriptions for this, the reality is that it depends on a variety of factors. For one, it depends on how complex the final skill is, as well as where the learners start. It also matters how frequently the task is performed in the real world. If infrequently, you’ll need more practice than if it occurs regularly. Similarly, it depends on how important the outcome is. If people’s lives depend on it, you’ll want a lot of practice!
In general, there’s art as well as science here, or at least a recognition of a requirement to iterate. That is, you make your first best guess on principle, and then test to ensure that it’s adequate. Assume, as a starting point, that you’ll want and need some revision. You’ll choose the progression of practice, and contexts in which to practice. (As an aside, the contexts that support transfer also include the ones seen in examples.)
The format of practice matters. While you can use simple knowledge checks, asking people to define terms or choose the correct response from distractors, people generally use knowledge to do things. Thus, the best practice is having people make decisions (as we discussed for working with subject matter experts). We can use a wide variety of formats for this. While mentored live performance is arguably best, it doesn’t scale well. So, we can use simulations, or branching scenarios (whether human-run or programmed). Even a better-written multiple-choice question can serve as a mini-scenario.
Several elements matter in creating effective practice. The practice itself should be contextualized; work on abstract problems isn’t retained and doesn’t transfer as well. The second is that the challenge level needs to be matched to the learner’s ability. I frequently ask audiences if they’ve ever seen a question where the alternatives to the right answer are so silly or obvious that you didn’t need to learn anything to get it right. Everyone has. This doesn’t help; learning is best when it’s not so easy as to be boring but also not so difficult as to be frustrating. Importantly, this sweet spot shifts as the learner acquires capability.
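Matching challenge to ability can be sketched as a simple selection rule. Everything in this sketch is an assumption made for illustration: the 0-to-1 difficulty scale, the item names, and the `stretch` margin that keeps practice slightly ahead of current ability.

```python
# Pick the practice item whose difficulty is closest to just above the
# learner's current estimated ability: not so easy it's boring, not so
# hard it's frustrating. Scale and values are illustrative assumptions.

def next_item(items, ability, stretch=0.1):
    """items: list of (name, difficulty) pairs; target = ability + stretch."""
    target = ability + stretch
    return min(items, key=lambda item: abs(item[1] - target))

items = [("easy recall question", 0.2),
         ("mini-scenario", 0.5),
         ("full branching simulation", 0.9)]

print(next_item(items, ability=0.45))  # picks the mini-scenario
```

Note that re-estimating `ability` after each practice is what makes the challenge level track the learner as capability grows.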
Similarly, the alternatives to the right answer should come from ways learners reliably go wrong. While there’s some randomness in our cognitive architecture, most mistakes come from inferring the wrong conceptual model to guide performance. If a learner errs in this way, we have a chance to address the underlying misconception, so it makes a good distractor. Thus, we need both models and misconceptions from our experts (also discussed previously).
Both right and wrong choices need to be accompanied by appropriate feedback. If each alternative to the right answer is a different way learners go wrong, we need different feedback for each, specific to why it’s wrong. The feedback should be neutral in tone (neither criticizing nor cajoling), specific to the situation, and as minimal as possible while both addressing why the choice is wrong (referring to the model) and providing the right answer. For complex problems, it may be important for learners to see their wrong answer alongside the right one.
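One way to operationalize distractor-specific feedback is to store it alongside each alternative. The scenario content below is invented purely to illustrate the structure: each wrong option encodes a known misconception and carries its own neutral, model-referencing feedback.

```python
# A mini-scenario where every alternative, right or wrong, carries its
# own feedback. The content is hypothetical; only the shape matters.

question = {
    "stem": "A customer reports data loss after an update. What first?",
    "options": {
        "a": {"text": "Re-run the update",
              "correct": False,
              # Feedback names the misconception and gives the right answer.
              "feedback": "Re-running risks further loss; the model says "
                          "preserve state before acting. Snapshot first."},
        "b": {"text": "Snapshot current state, then investigate",
              "correct": True,
              "feedback": "Right: preserving state keeps every recovery "
                          "option open."},
    },
}

def respond(question, choice):
    """Return the feedback specific to the chosen alternative."""
    return question["options"][choice]["feedback"]
```

Keeping feedback attached to the alternative (rather than one generic "Incorrect, try again") is what lets each wrong path address the misconception that produced it.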
Each practice should follow these principles, but there is guidance about sequencing practice as well. Simply put, practice needs to be spaced. It turns out that, effectively, the ‘learning’ – strengthening the connections between neural patterns – can only happen so much in one day before the strengthening function fatigues. Thus, for learning to build on previous learning, some time has to elapse for that strengthening function to recover. Typically, that’s after sleep. Thus, learning should be revisited over time, such as a bit more practice every few days.
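The spacing principle can be sketched as a schedule of expanding intervals. The doubling rule and one-day starting gap below are assumptions for illustration, not a prescription; real spacing depends on the factors discussed earlier, but the shape, each revisit landing after the consolidation that sleep provides, is the point.

```python
# Generate revisit dates with gaps that expand over time (here, doubling:
# 1, 2, 4, 8 days). The doubling is an illustrative assumption.
from datetime import date, timedelta

def spaced_schedule(start, sessions=5, first_gap_days=1):
    """Return session dates; the gap doubles after each revisit."""
    day, gap = start, first_gap_days
    dates = [day]
    for _ in range(sessions - 1):
        day = day + timedelta(days=gap)
        dates.append(day)
        gap *= 2
    return dates

print(spaced_schedule(date(2024, 1, 1)))
# Jan 1, 2, 4, 8, 16: each revisit spaced further out than the last
```

Whatever interval rule you choose, the essential property is that no two sessions on the same material fall within the same day.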
With sufficient appropriate practice (meaningfully aligned with what’s needed in contexts that learners recognize are relevant and realistic) and specific feedback, we’ve addressed the most important part of developing new abilities in our learners. Which is what we should be about. So, please, practice making good practice; you’ll get better at it as you do!