Phishing Microlearning
A short eLearning module on phishing.
Problem and Solution
According to AAG, “Millennials and Gen-Z internet users are most likely to fall victim to phishing attacks.” These groups have spent much, if not all, of their lives with daily access to technology. They are familiar with email in a business context and have likely encountered the more obvious forms of phishing and online scams before, yet that familiarity has not made them immune.
This microlearning module uses formative feedback in realistic scenarios to reveal gaps in the learner’s awareness of phishing tactics in a work environment. The learner is then formally assessed on their ability to recognize the characteristics and signals of phishing messages.
My Process
Storyboarding
After determining the overall design solution, I planned the structure of the learning experience. I chose to make it a short, interactive eLearning module. I crafted the following learning objectives:
- The learner will be able to select common characteristics of phishing messages from a list of options with 100% accuracy.
- The learner will be able to recognize signals of phishing messages by correctly selecting phishing messages with 100% accuracy.
I created a storyboard to demonstrate the project's sequence, content, and graphics.
Development
Development began while I was still storyboarding. I chose stock graphics that suited the style of the design, customized them in Adobe Illustrator, used those assets in the storyboard, and imported them directly into Articulate Storyline. Since my role was both designer and developer, this was a far more efficient use of time than building the storyboard with stand-in graphics.
I animated the content presentation and then used triggers tied to slide layers to create a more engaging learning experience. During the interactions, the learner is presented with realistic emails and text messages to support knowledge transfer to real-life situations.
Takeaways
To evaluate the module's effectiveness, I made a second version with a pretest and ran it with a small, representative testing group drawn from the target audience, along with a survey for testers to give feedback on the experience. I was pleased to see an increase in the overall average score. At first, I was surprised that three participants' scores remained at 20% after the lesson. Reflecting on the assessment tool, I realized it wasn't precise enough: the software used to build the module scores multiple-response questions all-or-nothing, so a question earns no credit if any correct response is left unmarked, and I couldn't award partial credit instead. I haven't discovered a workaround for this issue yet. It made the results less valuable, since I couldn't tell whether learners with low scores missed just one of the response items, all of them, or somewhere in between.
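To make the scoring limitation concrete, here is a minimal Python sketch contrasting all-or-nothing scoring with a hypothetical partial-credit scheme for a single multiple-response question. The option labels are invented for illustration, and this is not a Storyline workaround; it only shows the information that all-or-nothing results hide.

```python
# Hypothetical sketch (invented option labels; not how the authoring tool
# scores internally): all-or-nothing vs. partial credit for one
# multiple-response question.

CORRECT = {"urgent tone", "mismatched sender address", "suspicious link"}

def all_or_nothing(selected: set[str]) -> float:
    """Full credit only for an exact match; anything else scores zero."""
    return 1.0 if selected == CORRECT else 0.0

def partial_credit(selected: set[str]) -> float:
    """Credit per correct pick, minus a penalty per incorrect pick."""
    hits = len(selected & CORRECT)
    misses = len(selected - CORRECT)
    return max(hits - misses, 0) / len(CORRECT)

# A learner who spots two of the three signals but adds one wrong guess:
picked = {"urgent tone", "mismatched sender address", "generic greeting"}
print(all_or_nothing(picked))            # 0.0 -- indistinguishable from missing everything
print(round(partial_credit(picked), 2))  # 0.33 -- shows how close the learner actually was
```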
The survey responses were very positive overall; it was in the comments that I got the most valuable critiques. I originally intended for the module to be completed on a desktop so that the hover states would be accessible to everyone. A few participants asked if they could complete it on a mobile device, so I thought it would be interesting to get feedback on that experience as well. Looking back, I wish the end survey had asked which type of device each tester used, so I could see how device type affected scores.