Have you worked on projects where you have limited access to learners? Do you roll out the course anyway? Let us first list the reasons why learner testing cannot be conducted for all projects.
- Learners may be situated halfway across the world and are accessible only over phone, Skype, or email.
- Clients may not wish to give you direct access to their learners because they are not comfortable doing so.
- Learners work on tight schedules, so it is difficult to arrange learner testing.
- There is no time to conduct detailed learner testing. (Ideally, this should have been accounted for when the schedule was drawn up.)
Even in these situations, you can test your course. Given below are the three techniques that you can use to test your course:
- Recruit proxy learners
- Expert usability evaluation and
- Expert learning audit
Recruit Proxy Learners
Based on the learner profile, you can recruit people who have a profile similar to that of your learners. For example, if your course is designed for bank staff at Standard Chartered Bank, England, you can recruit people with a similar profile from Standard Chartered Bank, India. The processes followed in the two banks will be more or less the same, so there are bound to be more similarities than differences. However, remember that there will be cultural differences. Even so, this technique is better than not testing at all.
Therefore, you can recruit proxy learners and conduct learner testing. Share the results with your clients and check for cultural deviations, if possible.
Expert Usability Evaluation and Learning Audit
Both these evaluation techniques have their roots in usability inspection. We have modified these techniques to suit the requirements of the learning field. If you use some other techniques, please do share. We are always eager to learn more effective ways of doing things.
How are these evaluations conducted? An expert (usability/learning) evaluates the effectiveness of the course based on a set of parameters. He/she takes the course as a learner would: clicking links and buttons on the interface, trying the interactivities included in the course, and so on. The expert keeps an eye open for obstacles that hinder the learner's progress, such as ambiguity, functionality issues, cognitive overload, and audio problems. A detailed report is generated at the end of the evaluation.
What is an expert usability evaluation?
Using this technique, you evaluate the usability of an online course. An expert lists the parameters based on which the evaluation will take place. These could include:
- Navigation: What is the primary form of navigation? Is it intuitive? Ideally, this means we do not need to include instructions like ‘Click Next to proceed’; the learner should intuitively know what the primary navigation is.
- Visual hierarchy: Is the information organized in a logical manner? Eye movement is typically from left to right and from top to bottom. Are all elements positioned keeping this in mind? Will the learner know where the information starts and where it ends?
- Accessibility of information: Are important elements placed upfront? Will the learner be able to access the most important information easily? Will the learner know where to find what he/she is looking for?
- Affordance: Do buttons have the affordance of a click? Will the learner know that he/she needs to click to view? Will the learner know what is expected on an interactive screen? During learner testing, I have seen learners click images that are not clickable or miss buttons that need to be clicked. This is because the element does not have the affordance of a click. Therefore, it is important to identify such issues.
- Fonts and font sizes: Will the learner be able to read the text easily? Do font colors hinder readability? Are the fonts and font sizes consistent across the course?
What is the difference between a QA and an expert usability evaluation?
- A QA checks whether the online course maps to the signed-off storyboards/wireframes. It also checks functionality and consistency, and ensures a bug-free course.
- Expert usability evaluation, on the other hand, checks whether the elements in the course are usable. It also takes into account user experience. Does the eLearning application cater to the five principles of usability: Learnability, Efficiency, Memorability, Errors, and Satisfaction?
Let us look at an example to understand this better.
[Screenshot: QA comments on the screen]
[Screenshot: Usability comments on the screen]
A QA is more content-centric while an expert usability evaluation is more user-centric. This is the main and the most crucial difference.
What is an expert learning audit?
Using this technique, an expert evaluates how well the course supports learning, based on parameters such as:
- Learning objectives-content mapping: Are the learning outcomes met? Can the content be directly mapped to the learning objectives? Is there extra information? Is the information sufficient?
- Learner-content mapping: Is the content specific to the learner profile? Is it relevant? Will it help the learner meet the learning objectives? Will the learner be able to grasp the language? Will he/she be able to relate to the examples or scenarios?
- Learner motivation: Is the course motivating enough for the learner? Why will he/she complete the course? Will he/she find it interesting? Will he/she be motivated to complete an exercise? Will the learner see any benefit from taking this course?
- Visualization: Do the visual elements aid learning? Are they generic or specific to the content? Are they used for beautification or for reinforcing learning? Are they similar in look and feel across the course?
- Language: Will the learner understand what is written? Is there any ambiguity?
What is the difference between an ID review and a learning audit?
The most important difference is that a learning audit is conducted by a third party. By the time a course is developed, several ID reviews have already been done. The learning audit is conducted by an external person who has not been part of the design phase, so the course is looked at by a fresh pair of eyes. The expert looks at the course solely from a learning perspective and does not take constraints into account. Often, we compromise on learning because of constraints. We also include extra information because the SME/client wants us to, or because we are afraid of the learner missing out on something. The expert focuses on identifying the obstacles that hinder learning, can help us identify the instances where learning has been compromised, and also shares suggestions to rectify the issues.
Keep the following in mind if you are conducting an expert evaluation:
- Ensure that there are no distractions. Evaluation requires a lot of concentration; otherwise, you may miss a critical issue.
- Try everything. What if the learner were to click this? What would happen if I go here instead of there?
- Use screen grabs to highlight issues. This helps during fixes; the reader will not have to shuffle between a report (Excel, Word) and the course.
- Mark issues that are repeated across the course as global comments. However, in the report, list all the instances in which each issue is present. This will help the reader identify those screens when fixing.
- If you have a set of parameters, check each screen against every parameter in a logical order rather than just scanning the screen for issues. This way you will not miss anything.
- Include sound reasoning when highlighting an issue.
- Include suggestions wherever possible. Provide two or three alternatives, and show a suggestion visually where you can.
- It can become very tedious, tiring, and repetitive. So, be prepared.
The reports generated from both the expert usability evaluation and the learning audit are valuable sources of feedback. They help identify issues that obstruct usability and learnability, and the suggestions point to possible solutions. Based on feasibility, you can rectify each issue. Use these techniques to evaluate your online course; try them once and see how much difference they actually make. But remember, this is still no match for direct feedback from learners.