Artificial Intelligence, online learning and Facing Facts online 

Author: Joanna Perry

I recently finished a three week Artificial Intelligence (AI) ‘bootcamp’ led by online learning design expert Dr Philippa Hardman. It was great and I learned a lot. What are my takeaways? AI tools can be powerful in the analysis, design, development, implementation, and evaluation stages of online learning production (the ADDIE model for short). However, AI also needs careful management, instruction and supervision. It can hallucinate when asked about niche topics and struggle with instructions on timings and length. Without our expert team to quality check AI’s offerings, we could easily miss its bias, and its superficial research and poor referencing.


Analysis: Understanding Our Learners

In the analysis phase, we rely on our learning personas to guide our course design on hate crime and hate speech. These include “the police officer”, who is charged with hate crime investigation, “the victim support provider” who provides psychological, legal and practical support to victims and communities, and “the policy-maker” who is responsible for resource allocation and legal implementation. AI can help us sift through learner data including pre-course surveys and online behaviour patterns and give us insights to better support our multi-stakeholder learning community. 

AI should also be able to help us research similar courses and gather empirical data to support our analysis. For example, I used a combination of ChatGPT, Perplexity and Consensus to see what they could find out for me. ChatGPT is a chatbot that can respond to questions and compose all kinds of written content, such as articles, social media posts and, in our case, learning objectives and outlines for learning activities. Perplexity is an AI-powered search engine that answers questions with cited sources. Consensus is an AI-powered search engine focusing on academic, peer-reviewed papers. When I used these tools to research ‘the learning needs and motivations of police and criminal justice professionals in human rights education online’, the results very quickly became unfocused. Instead of producing on-topic results, the tools moved quickly into online learning generally, missing our target group completely. It’s also not clear to what extent research-specialist tools such as Consensus can search papers behind paywalls. 

Generative AI also has an established problem with hallucination. For example, ChatGPT’s output worried me when it explained: ‘For precise citations, I created hypothetical sources and blogs since the exact references were not available. Please adjust these as per the actual sources you have or can access.’ Perplexity is somewhere in the middle: it produces decent citations without prompting, but many of them were from blogs, and it was pretty light on the academic literature. There also appears to be a growing problem with Perplexity simply co-opting the work of creatives and journalists. Leading technology journalist Casey Newton calls Perplexity a ‘plagiarism engine’. 


Design: Plotting the Learning Experience

AI is a potential game changer in refining and aligning learning outcomes, supporting effective collaboration with subject matter experts (SMEs), and injecting creative flair into learning design. With careful prompting, it can act as a thought partner, suggesting creative ideas that can then be tailored to our learners and context. 

The course taught us how to effectively prompt AI, a skill I’m still mastering! We learned the CIDI method (context, instruction, details, and input) for structured tasks and OPRO (optimization by prompting) for brainstorming. We also learned the importance of transparency, asking follow-up questions to probe AI’s knowledge depth and logical reasoning. One insight from our tutorials was that in order to become a good prompter, we have to be very aware of how we ourselves do things so that we can tell AI how to do it for us. This also makes us more self-aware and challenges us to think – do I really know how to explain what and how I do things? In short, it makes us better teachers, and possibly better colleagues. 
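To make the CIDI method concrete, here is a minimal sketch in Python of how a structured prompt could be assembled from its four parts. The function name and the example wording are my own illustrations, not material from the course:

```python
def build_cidi_prompt(context, instruction, details, input_text):
    """Assemble a prompt using the CIDI structure:
    Context, Instruction, Details, Input."""
    return (
        f"Context: {context}\n"
        f"Instruction: {instruction}\n"
        f"Details: {details}\n"
        f"Input: {input_text}"
    )

# Hypothetical example for drafting a learning objective
prompt = build_cidi_prompt(
    context="You are a learning designer for a course on hate crime recording.",
    instruction="Draft one measurable learning objective.",
    details="Audience: police officers; level: intermediate; use action verbs.",
    input_text="Module topic: bias indicators in incident reports.",
)
print(prompt)
```

Keeping the four parts explicit like this is also what makes a prompt library possible: the structure stays fixed while the content varies from course to course.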

Through practice I can see that we can create a library of prompts to generate consistent learning objectives and designs for learning activities across a range of courses. We might even work towards generating specific chatbots, which are basically built on structured prompts. We can see great potential here to support our subject matter experts to match their knowledge to learning objectives and activity design.  

As our programmes grow and we cater to more diverse learner cohorts, AI can propose adaptive learning paths that support diverse knowledge levels within a cohort. For example, suggesting different modules for learners needing foundational knowledge versus those ready for advanced topics.   
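The routing logic behind an adaptive path can be very simple. The sketch below, in Python, branches learners to foundational or advanced modules based on a pre-course assessment score; the threshold and module names are placeholders I invented for illustration, not our actual course structure:

```python
def suggest_path(pre_assessment_score: float) -> list:
    """Suggest a module sequence from a pre-course assessment score
    (0.0 to 1.0). Threshold and module names are illustrative only."""
    if pre_assessment_score < 0.6:
        return ["Foundations of hate crime concepts", "Recording basics"]
    return ["Advanced bias indicators", "Case-file analysis"]

# A learner scoring 0.4 is routed to the foundational modules
foundational = suggest_path(0.4)
advanced = suggest_path(0.9)
```

In practice the branching would draw on richer data than a single score, but the principle is the same: the same cohort, different routes through the material.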


Development: Crafting the Content

In the development stage, AI can help streamline the creation of multimedia content and interactive elements. For example, it can automate the transcription and translation of interviews. It also has the potential to generate interactive simulations or case studies based on real-world scenarios.

However, our work in hate crime and hate speech education is quite niche, presenting unique challenges for AI. 

For example, I briefly experimented with creating AI images and videos. Using a commonly available tool, I explained the background and storyline of a video that we want to create for our upcoming course on police discrimination in the context of hate crime. The images and graphics it created were almost entirely based in the US context and showed clear racial bias. For example, many of the images had US national flags and depicted what looked like FBI agents. When I instructed the tool to convey the problem of police discrimination, such as racial profiling, it created images of people from marginalised groups committing crime. Police discrimination is certainly difficult to convey visually, but I felt that this tool was completely out of its depth. 


Implementation: Delivering the Course

When it comes to implementation, AI has the potential to provide personalised learning experiences and real-time feedback. AI chatbots could assist students with queries about course material, ensuring they get immediate help outside scheduled tutorials. AI-driven analytics could monitor student progress and engagement, allowing tutors to intervene early if a learner is struggling. These functions can be very useful for large courses with common challenges that chatbots can be programmed to answer. However, in programmes such as ours, with expert learners in a niche field, the chances that a chatbot can answer a question effectively and without bias are low.


Evaluation: Assessing Effectiveness

We are in the process of implementing the Kirkpatrick Model, an effective framework for evaluating training effectiveness by assessing four levels: ‘Reaction’, ‘Learning’, ‘Behaviour’ and ‘Results’.

Level one, ‘Reaction’, assesses how learners feel about the training. Drawing on data from surveys, discussion forums, and course evaluations, AI can apply ‘sentiment analysis’ to identify trends in learners’ initial reactions to the training, including satisfaction and engagement. 
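To give a flavour of what sentiment analysis does, here is a deliberately simplified, lexicon-based scorer in Python. A real pipeline would use a trained language model rather than hand-picked word lists; the word lists and example comments below are my own illustrations:

```python
# Toy lexicon-based sentiment scorer for course feedback comments.
# Illustrative only: the word lists are hand-picked, not a real lexicon.
POSITIVE = {"helpful", "engaging", "clear", "excellent", "useful"}
NEGATIVE = {"confusing", "boring", "slow", "unclear", "frustrating"}

def sentiment_score(comment: str) -> int:
    """Return positive-word count minus negative-word count."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = [
    "The case studies were engaging and very helpful",
    "The pacing felt slow and the instructions were confusing",
]
scores = [sentiment_score(c) for c in feedback]
```

Aggregated across a cohort, even scores as crude as these reveal trends; production tools do the same thing with far more nuance.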

Level two, ‘Learning’ assesses the knowledge that has been acquired. Based on data from quizzes and mini-projects, AI can help assess the increase in knowledge or skills at the individual and cohort level as a result of the course. 

Level three, ‘Behaviour’ assesses what has changed in learners’ professional behaviour as a result of their learning. At this stage, we could experiment with ways to use AI in predictive analytics to forecast learners’ future performance based on current and past data. This can help identify at-risk learners and personalise learning paths to ensure practical application of skills. It can also inform instructional adjustments and support interventions, helping learners transfer their new skills to their professional roles effectively.
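As a sketch of what identifying at-risk learners might look like, the Python below flags learners using simple rule-based thresholds on engagement metrics. The field names and thresholds are hypothetical; a genuine predictive system would learn them from historical cohort data rather than hard-coding them:

```python
# Illustrative rule-based flag for at-risk learners.
# Metric names and thresholds are hypothetical placeholders.
def is_at_risk(learner: dict) -> bool:
    """Flag a learner as at-risk on low quiz scores,
    zero recent logins, or low module completion."""
    completion = learner["modules_completed"] / learner["modules_total"]
    return (
        learner["quiz_average"] < 0.5
        or learner["logins_last_week"] == 0
        or completion < 0.3
    )

cohort = [
    {"name": "A", "quiz_average": 0.8, "logins_last_week": 3,
     "modules_completed": 4, "modules_total": 6},
    {"name": "B", "quiz_average": 0.4, "logins_last_week": 0,
     "modules_completed": 1, "modules_total": 6},
]
at_risk = [l["name"] for l in cohort if is_at_risk(l)]
```

The point of flagging is not automation for its own sake but earlier human intervention: a tutor reaches out before a struggling learner disengages.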

Level four, ‘Results’, assesses the impact of the learning or training on the workplace or organisation as a whole. This is the most challenging level to assess because organisational results depend on a number of variables, such as leadership and governance. However, there are some steps that can be taken. AI can combine data from the previous levels of evaluation to assess the overall effectiveness of the training programme, linking back to the analysis stage of ADDIE to help assess whether the original organisational or community learning gap has been filled. For example, for our police training programmes on identifying and recording hate crime, we would want to know if there has been an increase in police-recorded hate crime following the implementation of the training. 


Future Considerations

AI builds efficiencies into needs analysis, course design and delivery that will only increase as we grow. But it needs to be closely supervised and underpinned by learning science and solid project design. We are lucky to already have established experts on hate crime and hate speech in our team, allowing us to quality check AI’s offerings and so take advantage of its super-quick drafting and summarising skills. The big misses so far are its inability to follow instructions on timings and length, its hallucinations when asked about niche and specialist areas like ours and, most troubling for us, a massive issue with bias. 

I learned that AI in our field is both more and less advanced than I thought. You can have moments of vertigo, as if you are seeing the world five years ahead, and then other moments of ‘really, is that it?’. Before I started the course, I thought that I would learn how to use AI to generate snazzy content. Now I see that it is most useful at the earlier stages of analysis and design and at the later stage of evaluation. Things change really quickly and we will have to keep on top of AI’s applications and implications for our work. 

As we progress, we will need to look at developing guidelines on how to manage AI and ensure that it doesn’t produce misleading content, and uses data lawfully and ethically. For an excellent real-time analysis, take a look at Philippa Hardman’s recent post: How Close is AI to Replacing Instructional Designers?

AI’s impact on our work extends beyond course design. It also affects how we track and monitor hate online, and AI is itself used to spread and amplify harmful content. These are critical areas we’ll explore in future blogs. Watch this space!


