Learners as learning evaluators

Many years ago, I led the learning design of an online course on speaking to the media. It was way ahead of its time in a business sense; people weren't yet paying for online learning. Still, there were some clever design factors in it. I've since repurposed one of them, and I also have a thought about how it could be improved. So here are some thoughts on learners as learning evaluators.

The challenge arises from two conflicting requirements. For one, we want to support free-form answers from learners, for situations where there's more than one way to respond: for example, a code solution, or a proposed social response. The other is the desire for auto-marking, to support independent asynchronous learning. While it's ideal to have an instructor in the loop to provide feedback, the asynchronous part means that's hard to arrange. We could try an intelligent programmed response (cf. artificial intelligence), but those can be both difficult to develop and costly. Is there another solution?

One alternative, occasionally seen, is to have learners evaluate their own response. There are benefits to this, as it helps learners become self-evaluators. One mechanism to support this is to provide a model answer to compare against the learner's own response. We did this in that long-ago project, where learners could speak their response to a question, then listen to both their own response and a model response.

There are some constraints on doing this: learners have to be able to see (or hear) their response in conjunction with the model response. I've seen circumstances where learners respond to complex questions and are then shown the answer, but have no basis for comparison. That is, they don't get to see their own response, and the response was complex enough that they can't remember it completely. One particular instance is multiple-response questions, where you select a collection of options.
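To make that constraint concrete, here is a minimal sketch of re-presenting a learner's multiple-response selections alongside the model answer, so the comparison is actually possible. The names and structure are my own illustration, not drawn from any particular platform or from the original course.

```python
# Hypothetical sketch: keep the learner's own selections and show them
# side by side with the model answer, rather than only revealing the answer.
from dataclasses import dataclass, field


@dataclass
class MultipleResponseItem:
    prompt: str
    options: list[str]
    model_selection: set[str]                        # the model answer's collection
    learner_selection: set[str] = field(default_factory=set)

    def comparison_view(self) -> str:
        """Render the learner's picks next to the model answer for self-comparison."""
        lines = [self.prompt, ""]
        for option in self.options:
            yours = "x" if option in self.learner_selection else " "
            model = "x" if option in self.model_selection else " "
            lines.append(f"  [{yours}] you   [{model}] model   {option}")
        return "\n".join(lines)


item = MultipleResponseItem(
    prompt="Which practices support self-evaluation?",
    options=["Show a model answer", "Hide the learner's response",
             "Provide a rubric", "Fade the scaffolding over time"],
    model_selection={"Show a model answer", "Provide a rubric",
                     "Fade the scaffolding over time"},
)
item.learner_selection = {"Show a model answer", "Provide a rubric"}
print(item.comparison_view())
```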

I want to go further, however. I don't assume that learners will be able to compare their response to the model response effectively, at least initially. As they gain expertise, they should, but early on they may not have the requisite skills. You can annotate the model answer with the underlying thinking, but there's another option.

I'm considering the value of an extra rubric that states what you should notice about the model answer and prompts you to check whether you have all the elements. I'm suggesting that this extra support, while it might add some cognitive load to the process, also reduces load overall by directing attention to the important aspects. Moreover, this is scaffolding that can be gradually removed, allowing learners to internalize the thinking.
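As a purely hypothetical illustration of what such a fading rubric might look like, here is a small sketch. The elements and prompts are invented for the media-interview example; the scaffolding levels are one possible way to gradually remove the support, not a standard scheme.

```python
# Hypothetical sketch: rubric prompts that direct attention to elements of the
# model answer, with scaffolding that fades as the learner gains expertise.
from dataclasses import dataclass


@dataclass
class RubricPrompt:
    element: str        # what to notice in the model answer
    question: str       # the self-check question put to the learner


FULL_RUBRIC = [
    RubricPrompt("framing", "Does your answer open by restating your key message?"),
    RubricPrompt("evidence", "Did you include a concrete example, as the model does?"),
    RubricPrompt("audience", "Is your wording pitched to the stated audience?"),
]


def scaffolded_rubric(prompts: list[RubricPrompt], level: int) -> list[str]:
    """Return self-check prompts for a scaffolding level:
    2 = full prompts, 1 = element labels only, 0 = scaffolding fully faded."""
    if level >= 2:
        return [f"{p.element}: {p.question}" for p in prompts]
    if level == 1:
        return [f"Check your answer for: {p.element}" for p in prompts]
    return []   # learner self-evaluates unaided


for line in scaffolded_rubric(FULL_RUBRIC, level=2):
    print(line)
```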

I think we can have learners as learning evaluators, if we support the process appropriately. We shouldn’t assume that ability, at least initially, but we can support it. I’m not aware of research on this, though I certainly don’t doubt it. If you do know of some, please do point me to it! If you don’t, please conduct it! Seriously, I welcome your thoughts, comments, issues, etc.

This post was originally published on Learnlets.
