Developing Deeper Learning


Step into the world of Deeper Learning, where the art of designing impactful educational experiences comes to life. Explore iterative testing and refining processes that lead to real outcomes. Discover the importance of usability, performance objectives, and metrics in crafting effective learning journeys. Embrace the concept of 'learner experience' and see how it enhances engagement and strengthens knowledge retention.

We can have all the principles we want about Deeper Learning, but we also need an associated design process that allows us to systematically deliver on the promise. We need ways to create, test, and refine our designs. We’ve already talked about creativity; now we need a development plan.

An important thing to keep in mind is that people’s properties aren’t perfectly predictable. We might be able to design and build a bridge or a chair according to material specifications, but human brains have more variability. In fact, it’s been said that the human brain is the most complex thing in the known universe! Believing that we’re going to systematically impact it, changing behavior itself, with a waterfall approach isn’t realistic. Instead, we’ll need to design, test, and refine.

Interface design and related fields are ahead of the learning design field when it comes to testing. They expect to test, and they design testing into their processes. We notoriously haven’t, despite moves from ADDIE to iterative ADDIE. Newer models, such as Michael Allen’s SAM and Megan Torrance’s LLAMA, have iteration built into their processes, as does David Merrill’s Pebble in a Pond. Do include testing and revision cycles in your project plan, allocating more as the scope of the work increases.

In our analysis phase, we should identify the metrics that define our outcome. Our performance objective stipulates how we will know learners can do it, but it should also be derived from a real metric in the organization that needs to be moved. Then we should iterate until our solution makes the needed change. However, there are other metrics we should care about.

For one, we need to test for usability before anything else, because problems in navigating the learning experience and making decisions could mask a learning design problem. The standard things to evaluate are the ability to accomplish tasks, errors in doing so, time to accomplish the tasks, time to learn and to relearn, and satisfaction. All of these should essentially be driven to the point where there are no errors in accomplishing the goals and tasks take only a minimum of time, e.g. one click to make a choice.
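
For illustration only, here is a minimal sketch in Python of recording and checking those standard usability measures. The field names and thresholds are hypothetical, not part of any prescribed tool:

```python
from dataclasses import dataclass

@dataclass
class UsabilityResult:
    """One tester's attempt at one task in the learning experience."""
    task: str
    completed: bool             # did they accomplish the task?
    errors: int                 # mistakes made along the way
    seconds_to_complete: float  # time to accomplish the task
    satisfaction: int           # e.g. a 1-5 self-report

def meets_usability_bar(results, max_errors=0, max_seconds=30.0):
    """Pass only when every observed attempt succeeds, is essentially
    error-free, and takes no more than a minimal amount of time."""
    return all(
        r.completed and r.errors <= max_errors and r.seconds_to_complete <= max_seconds
        for r in results
    )

# Two observed attempts at a 'make a choice' interaction:
observed = [
    UsabilityResult("make a choice", True, 0, 4.2, 5),
    UsabilityResult("make a choice", True, 1, 9.8, 4),
]
print(meets_usability_bar(observed))  # False: one attempt included an error
```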

Once we’ve ensured that, given the tasks, people can accomplish them with essentially no errors in a reasonable amount of time, we want to know whether the learning experience actually accomplishes the learning. Here, we’re looking first for the ability to perform after the learning experience, then for transfer from the learning experience to the workplace, and finally for the actual impact in the organization. The performance objective guiding the final practice establishes the ability to perform. Next, we need evidence from either instrumented tools (e.g. a digital record of action) or supervisor reports to see if we’re achieving change in the workplace. Finally, we look at the business metrics to determine if we’ve moved the needle.
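
As a toy sketch of that chain of evidence, one might track each level explicitly and keep iterating until all three are met. The structure and names below are hypothetical, purely to make the three levels concrete:

```python
from dataclasses import dataclass

@dataclass
class EvaluationEvidence:
    """Evidence for one learning solution at the three levels described above."""
    performs_in_practice: bool = False    # passed the final practice (performance objective)
    transfers_to_workplace: bool = False  # instrumented-tool data or supervisor report
    moves_business_metric: bool = False   # the organizational metric actually shifted

    def keep_iterating(self) -> bool:
        """Keep refining the design until evidence exists at all three levels."""
        return not (self.performs_in_practice
                    and self.transfers_to_workplace
                    and self.moves_business_metric)
```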

After we’ve determined that we’re achieving our goals, we should also be thinking about the ‘learner experience’. Tuning the experience to achieve ‘hard fun’ both improves the learner experience and optimizes the time to success. If learning sticks faster and better, our results are obtained sooner. We don’t need to go to extremes, such as measuring adrenaline in the blood or galvanic skin resistance; learners’ subjective experience is sufficient. They can tell you!

Our process should be iterative and escalate slowly. Our testing should be low-tech, moving only gradually in audience from the team to the learners. Our prototypes can evolve from mental simulation to paper prototypes to the key interactions, before adding all the window dressing. Similarly, we should first test on ourselves, then on uninvolved teammates, reserving the (typically hard-to-arrange and possibly expensive) learners until the bugs are worked out and the design is deemed ready for such exposure.

Testing should be early and often, as the mantra has it. Also, the core interactions, the practice, should be prototyped and tested first. Practice is the critical step in learning, and it’s where interactions happen. Getting the practice right, then gradually developing associated materials, as well as the introduction and closing, is the recommended approach.

The usability field suggests that prototypes are meant to be thrown away. The idea is that you’re less willing to start anew if you’ve invested too much; hence the call for low tech. In early stages, prototypes can be hand-drawn sheets of paper or arrangements of Post-its. Later on, you’ll probably implement sample practice interactions before working on models, examples, intros, and more.

There’s more beyond testing and release. One of the questions I like to ask my audiences is whether they have any content on their server or LMS that’s out of date but still extant. Everyone raises their hands. This is a content management problem. Every piece should have an owner and a review date. For content that’s less volatile, that date may be years out; for things that change fast, you’ll want a more frequent review. Ownership of the content may be matrixed, with a design team member and a subject matter expert, but it needs a process.
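
As a minimal illustration of that kind of process (hypothetical fields, not any particular LMS’s API), a content inventory might carry an owner and a review interval per piece, with a simple check that flags overdue reviews:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ContentItem:
    title: str
    owner: str                 # e.g. "design: Pat / SME: Lee" for matrixed ownership
    last_reviewed: date
    review_interval_days: int  # short for volatile content, years for stable content

    def is_overdue(self, today=None):
        today = today or date.today()
        return today > self.last_reviewed + timedelta(days=self.review_interval_days)

inventory = [
    ContentItem("Onboarding basics", "design: Pat / SME: Lee", date(2021, 1, 15), 730),
    ContentItem("Quarterly pricing update", "design: Pat / SME: Kim", date(2024, 2, 1), 90),
]
overdue = [item.title for item in inventory if item.is_overdue()]
print(overdue)  # titles whose review date has passed
```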

Similarly, content granularity is an issue. The movement to microlearning, however ill-conceptualized, is yielding smaller chunks. If you look at fields with mature content practices, such as web marketing, you’ll see well-defined structures and tags that allow content to be assembled by rule (e.g. recommendations, as at Amazon) instead of being hardwired. Our long-term goal should be to do the same. To the extent we identify, and separate out, the elements of learning, we’re creating the infrastructure necessary to take advantage of personalization and adaptive learning platforms when we’re ready. The technology already exists; it’s now up to our initiative.
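
To make the ‘assemble by rule’ idea concrete, here is a toy Python sketch. The tags, element types, and rules are invented for illustration; the point is simply that tagged elements can be selected by rule rather than hardwired into a fixed course outline:

```python
# Toy content store: each element carries tags describing what it is and who it's for.
elements = [
    {"id": "intro-01",    "type": "introduction", "topic": "negotiation", "level": "novice"},
    {"id": "model-03",    "type": "model",        "topic": "negotiation", "level": "novice"},
    {"id": "practice-07", "type": "practice",     "topic": "negotiation", "level": "novice"},
    {"id": "practice-09", "type": "practice",     "topic": "negotiation", "level": "advanced"},
]

def assemble(topic, level, order=("introduction", "model", "practice")):
    """Select elements by rule (tag match) instead of hardwiring a sequence."""
    return [e for t in order
              for e in elements
              if e["type"] == t and e["topic"] == topic and e["level"] == level]

path = assemble("negotiation", "novice")
print([e["id"] for e in path])  # ['intro-01', 'model-03', 'practice-07']
```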

We also shouldn’t expect our learning to stand on its own. We should be thinking about extending the experience. I know one L&D unit that didn’t release content until the associated manager or supervisor training had also been developed. We typically can’t prepare learners completely; instead, we get them to a minimal level and then expect them to continue to develop. We shouldn’t leave that to chance; we should implement mechanisms to support ongoing learning, whether on-the-job coaching, reactivation mechanisms, or communities of practice.

Learning is a complex process, and success in creating deeper learning is a matter of attending to the details. Yet learning that has impact is far superior to the status quo, where current estimates suggest only 10% of training interventions are actually effective. That’s a tremendous waste of money that we can and should do something about. Doing Deeper Learning is a necessary start. Let’s take that step and start achieving real outcomes.
