Clark concludes the book with the details of a design process for making your learning initiatives more meaningful. I'd suggest you go through the book first to understand and think about the process Clark suggests, and then come back to this blog. Or vice versa. Your choice.
The process is fairly straightforward, and I wanted to condense it, inspired by Clark, into something I'd actually work with. Although I have condensed it into three simple steps, the details and considerations under each step are what make this process a robust one in my opinion, and that's the area where Clark really makes you think and reflect. It's those detailed considerations that differentiate this process from everything else.
I would divide this process into three broad steps:
Step One: Determine Big Goals & Metrics
Clark states that we should frame our big goals with respect to business, learning, and performance so that they are observable and measurable. He adds that we should be able to determine when a learner is capable of the performance, at what level of expertise, and in which contexts.
When we’re framing the business, learning, and performance goals, we should keep in mind the following consideration:
- Constraints: Are there any scope, budget, or resource constraints that the goals and metrics, as framed, would stretch?
We have to be precise when we’re framing metrics that we want to achieve with these meaningful learning interventions.
Step Two: Gain insights from SMEs & learners
SMEs are treasure troves of content. As designers, we have to probe SMEs and extract, from the complex knowledge architecture they possess, the insights that will actually help learners perform. That knowledge can be sorted into four buckets:
- Mental models that SMEs have built over time around the decisions they make, models that often seem obvious to them,
- Common misconceptions about the knowledge and common mistakes that occur while they make those decisions,
- Stories around the decisions or knowledge that may be useful, and
- The intrinsic motivators that drove them to develop their expertise in the subject.
Collecting insights across these four buckets should lead to a strong foundation for the learning intervention.
Learning about the learners is also extremely important. It’s not just about gathering the demographic details of the learners. We need to dig deeper to understand what their interests are, what they care about, and what motivates them to do whatever it is that they’re doing. If we’re going to be demanding their time and attention, we must get to know them better to serve them better. This helps us in designing experiences that are as close as possible to the expectations of the learners. These insights also help build better engagement with the learners.
Step Three: The [Goal>Role>World] Process for meaningful learning experiences
This is another three-step process nested within the larger one. Its purpose is to focus on the quality of experience you want to build for your learners. You can learn more about that process here. It mainly involves determining the Goal, Role, and World; choosing the treatment you deem appropriate; and then running your early prototypes through a creativity checklist to fine-tune the experience.
It's not just about getting through these three steps; it's about doing them well. That means keeping the following in mind:
Continuous Iteration: Keep testing your smallest ideas with the lowest-effort outputs that can communicate them, then keep building on and improving what works.
Documentation: Document almost everything. We need to borrow the principles of documentation from the world of products and game development. Documentation helps bring all the stakeholders on the same page and helps us stay on track and achieve the set goals.
Testing: We need to rethink the way testing is usually done. Clark cites research showing that alternating between expert review and user testing helps, and that as few as five users can identify up to 80% of usability problems.
There’s a sequence that Clark suggests that we follow for the best results:
- First, test for usability,
- Second, test for educational outcomes, and
- Only then test for user engagement.
This sequence helps us spend our testing resources wisely.
As I mentioned in the previous blog, designing meaningful learning experiences is not easy. But it's fun: hard fun, which is exactly what we should be designing into these experiences for our learners.
Clark says, "Learning should be hard fun." Who knew this would apply to designing for learning as well? For me, the learning interventions I design or am part of must get people excited, and by "people" I mean not just the audience but also the team I'm working with.
After all (paraphrasing Clark), we are at the forefront of delivering human transformation through educationally effective and emotionally engaging experiences. How could that be easy?