Welcome to our series on eLearning assessments! In these four posts we’ll cover a lot of ground, taking you on a journey through the theory behind effective eLearning assessments and practical tips for building assessments in three different authoring tools: Microsoft PowerPoint, Articulate Rise, and Articulate Storyline. We’ll lay the instructional design foundations in this post before getting into the authoring tool tips.
In this post:
- Why should you assess eLearning?
- How do you use learning objectives?
- Should you score eLearning assessments?
- What types of assessment can you use?
  - eLearning quizzes
  - Authentic assessments
- How do you write effective feedback?
Why should you assess eLearning?
Assessments measure whether learners have mastered the knowledge and skills addressed in your eLearning course. There are many reasons why you might want to include assessments in your training. Assessments can help you to:
- Measure how well learners have mastered course objectives.
- Examine how effective the course design has been.
- Draw attention to the most important elements of the training.
- Keep track of learners’ progress.
- Identify strengths and weaknesses.
- Reinforce learning through corrective feedback.
There are a few different types of assessment that you might include within your eLearning courses. Two really common types of eLearning assessment are formative and summative assessments.
Formative assessments evaluate how well a learner is doing as they work through a course. They focus on the learner’s knowledge at a moment in time, usually at the end of a topic or module. Formative assessments are ongoing, flexible and relatively informal, so they act as a great motivator and checkpoint for learners. Formative assessments produce feedback and learning.
Summative assessment comes at the end of an eLearning course and is used to check learner understanding of an entire course. These assessments typically help you to know whether the learning has been achieved and to what level. This usually (but not always) requires something a bit more quantitative, structured, and easily evaluated. Summative assessments produce a measure of success.
Quality eLearning content will combine both formative and summative assessment at appropriate points in the learning journey.
How do you use learning objectives?
At the beginning of your eLearning development process, it’s worth spending time defining clear and appropriate learning objectives: these objectives are invaluable when deciding how best to assess learners. Ask, “What kind of task will demonstrate whether learners have achieved the learning objectives that were defined?” Use measurable verbs in each learning objective to guide you when picking the assessment approach and defining specific questions.
When it comes to creating your assessments, you will need to match the assessment method to the level of Bloom’s taxonomy for the learning objectives you are assessing.
An assessment that is aimed at the wrong level can create uncertainty. For example, if your objectives and learning content focus on evaluation but your assessment tests only for understanding, your learners may later find that they are not able to demonstrate the evaluative skills they need. And even if they can evaluate well, you won’t know about it. Not even the learner will know for sure, until a situation arises that requires them to use the intended knowledge – at which point it might be too late!
Poor matching of learning objectives to assessment will also impact learners’ motivation. If learners have been told that they will need to analyse something, but the assessment asks them to just recall information, or vice versa, this could be a frustrating and demotivating experience.
When your assessment matches the expectations set by your learning objectives, you increase the validity of your eLearning module and assessment. So, make sure you are testing what is supposed to be tested!
Should you score eLearning assessments?
The decision whether to score an eLearning assessment or not will very much depend on the learners and purpose of the eLearning content you are creating. Here are some general points to consider:
Arguments for scoring:
- Most learners are interested in knowing how well they did in an assessment.
- Scores can motivate learners to learn and master the content.
- Scores can provide information about learner progress in a formative assessment and help determine whether a learner is ready to move on to further levels of learning.
Arguments against scoring:
- Some believe that scoring is not suitable for adult learners. They say that – unlike children – adults simply need guidance to demonstrate improvement, not scores.
- Some learners are intimidated by the potential to fail. This can impact their motivation to take assessments.
When you are deciding whether to score an assessment, make sure you have understood the course requirements, organisational culture, and available technology. All of this will help you decide whether scoring will add value to the learning experience. In Part 2 of this series, we look at creating assessments in PowerPoint, which is a great option if you don’t want or need to score learners’ work. In Part 3 and Part 4, we look at Articulate tools that enable you to score a learner’s answers.
What types of assessment can you use?
eLearning quizzes
Undoubtedly the most popular type of assessment when it comes to eLearning, quizzes can be a quick, simple, and measurable way to test knowledge. Some common question types are: multiple choice/response, true/false, matching, and fill-in-the-blank. Let’s look at each of these in turn!
The most common and versatile quiz question type is the multiple-choice question, which requires the learner to pick the answer from a number of options. Multiple-response questions are similar, but the learner can select more than one of the options in their response.
Multiple-choice and multiple-response questions are often cited as being able to fulfil only the Remember and Understand levels of learning. However, you can manipulate the format to test higher-order thinking. For example, you could ask your learners to read a page of text and then pose multiple-choice questions that require them to analyse the information given.
Pros:
- Multiple-choice questions can be very effective in checking learning.
- They are versatile across different levels of learning and topics.
- They are familiar to most learners and are quick and easy to complete.
- When written well, they are not easy to guess.
Cons:
- It takes practice to be able to write effective multiple-choice questions and realistic “incorrect” answers (distractors). This guide to multiple-choice question writing is a great place to start if you’re new to the game.
True or false questions are really just multiple-choice questions with two options. They require learners to pick between the two to demonstrate their understanding of facts and information.
These questions should only really be used to test learning objectives at the lowest level (Remember). At higher levels, a true/false question will not give adequate and reliable results.
Pros:
- They can test lots of types of content.
- Learners can respond quickly.
Cons:
- They can be too easy: there is a 50/50 guess rate.
- You need lots of true/false questions in order for an assessment to be reliable and valid.
A matching question consists of a list of statements or questions and a list of responses. The learners are required to link each item to the appropriate response(s).
A matching question can effectively test learning at the level where learners need to remember information. When constructed with appropriate instructions, it can also be used to explore how well a learner understands the content.
Pros:
- You can cover lots of content with minimal space and preparation. For example, a single matching question about three animals captures content that would otherwise need three separate multiple-choice questions (one per animal).
- They are good for fact-heavy content.
Cons:
- They can be time-consuming for learners to complete.
- Some common matching formats – like drag-and-drop interactions – are difficult to make accessible to keyboard users.
Fill-in-the-blank questions require learners to finish a sentence by supplying the correct word or phrase in place of a blank. These questions can be particularly useful when a specific or exact word or number is important to remember.
Fill-in-the-blank can be used to test at the two lowest levels: Remember and Understand. This type of assessment question is heavily reliant on recall.
Pros:
- There is less opportunity for a learner to ‘guess’ the right answer.
- Learners must remember, as opposed to just recognise, the correct answer.
- They are good for ‘who’, ‘what’, ‘where’, ‘when’ content.
Cons:
- They can be difficult to construct, especially if there is more than one possible correct answer. eLearning Industry’s article on best practices for fill-in-the-blank question writing is a good place to start.
- They can put too much emphasis on factual recall.
- Scoring can be time-consuming, especially where eLearning software requires exact input matching.
- Text-entry interactions must be treated with caution if translating into another language.
An approach that addresses some of these pitfalls is to include a drop-down menu for learners to pick the correct word or phrase from, instead of typing their answer. This is essentially a version of a multiple-choice question.
A combination of different quiz questions used at the appropriate time can create an effective assessment of learning that can be easily distributed, measured, and reported. We look at how to build an eLearning quiz using just Microsoft PowerPoint in Part 2 of this series. The limitation of an eLearning quiz is that it is difficult to reach the higher levels of learning that are sometimes required. To find out about how to hit those dizzying heights in your assessments, read on!
Authentic assessments
eLearning quizzes are great for testing learning objectives at low levels of Bloom’s taxonomy, but when the purpose of training is to provoke a change in skills or behaviour, one of the best tests of learning success is an authentic, performance-related assessment.
Putting the learner in a real-life scenario and assessing their ability to use what they’ve learnt goes beyond factual recall. Authentic assessments focus on a learner’s ability to integrate what they’ve learnt into an experience that is tangible and appropriate for their role.
For a longer course carried out via eLearning, you could consider building in opportunities for authentic assessment at various points throughout the course (i.e. as formative assessment). This will allow your learners to continually practise and embed their learning, as well as acting as a monitor of progress for the client.
For an authentic assessment to be successful, you need to spend some time defining a set of instructions and rules that mean the outcomes are measurable or gradable. Use the learning objectives and input from subject matter experts to create measurable criteria and set a grading scale against each of the criteria. By creating this rubric, you make it possible for either your eLearning software or a real-life person to provide feedback and scoring to your learners. Learning Solutions Mag’s guide to evaluation rubrics is a great starting place if you are looking to learn how to write a rubric for your next eLearning project.
You can use different types of authentic assessment in your eLearning:
Free response/short answer/essay questions
Learners are presented with a stimulus and are asked to respond in writing, orally, or through the creation of an original piece of work (e.g., a drawing, a diagram etc.). Learners’ freedom to explain their comprehension and demonstrate how they are utilising their new knowledge makes this a very powerful tool for assessing analysis, evaluation and creation.
These types of questions give little-to-no opportunity for learners to guess a correct answer: they are reliant on the knowledge they’ve acquired and how well they are able to use it. Free response questions are also quite easy to construct once you have an assessment rubric.
However, a possible disadvantage of this type of question is that it requires human input in order to be assessed: an instructor, manager, or peer will need to review learners’ responses in order to provide meaningful feedback and scoring. If there is no time or resource for this within your eLearning project, you’ll need to take a different assessment approach.
Portfolios
A portfolio documents learning over time. Learners have to create a collection of work, showing achievement or growth. This approach can be used to show a fuller picture of what the learner knows and is able to do and is great for longer-term training programmes. Much like free response questions, this type of assessment relies on another person being available to review learners’ portfolios.
Performance tests
Performance tests enable learners to put their skills to the test in a variety of authentic contexts. A performance test can be created for any skill and real-life situation including vocational, managerial, administrative, leadership… the list goes on! We look at how to create effective scenario-based performance tests in Part 3 and Part 4 of this series.
How do you write effective eLearning assessment feedback?
In face-to-face training, learners can get pretty swift feedback about anything they have misunderstood. This is called corrective feedback: learners can learn a lot from the process of getting something wrong and then correcting based on feedback. When working with eLearning, the distance between you as the instructional designer and the learner makes corrective feedback a little more challenging!
Corrective feedback is important because it helps the learner to progress and can highlight any parts of the learning that need to be revisited in order to meet the learning objectives.
The way you choose to deliver corrective feedback will be determined in part by how complex the content is and the level of maturity of the learner. Some best practices to keep in mind are:
- Don’t leave corrective feedback until the end of a training course. Provide it regularly and as soon as possible.
- Feedback is better when balanced – yes, the learner needs to know what they got wrong. But did they get anything right?
- Will they get to try again? If so, what do you want to tell them? You can make some suggestions about changes (e.g., “You may want to consider…”) or you can leave it up to the learner to work it out.
- Keep the tone friendly and supportive. The goal is to promote learning, not embarrass or scold.
- Be specific about what needs to change. Avoid general criticism.
- Think carefully about the learner experience. If your feedback is contained within an eLearning course, what will the learner experience if they get something wrong repeatedly? How can you help them to succeed without just giving them the answer (unless that’s the right thing to do)?
Intrinsic and extrinsic feedback
Another decision to make when approaching feedback is whether it will be intrinsic or extrinsic.
Intrinsic feedback is indirect. Learners know immediately that they have made a mistake and can adjust their actions based on the feedback. An example would be showing a character in a story that makes a confused or sad facial expression when an incorrect answer is given.
Extrinsic feedback is more explicit and direct. A comment is made on the learner’s performance in a clear and simple way (e.g. “That’s right!”).
Once again, the type of course and methods you are using will guide you. Extrinsic feedback works well on a multiple-choice quiz but may not work as well for a game, where it may feel like an interruption.
In game- and scenario-based assessments, intrinsic feedback works nicely. In the rest of this series, we’ll look at incorporating both intrinsic and extrinsic types of feedback into interactive eLearning assessments.
That’s it for the theory side of things! You are now ready to start building some awesome eLearning assessments. Next, we’re looking at building eLearning quizzes in PowerPoint. Or, check out our posts on building authentic assessments in Rise and Storyline.