by Dr Laura Zahra McDonald, ConnectFutures, UK.
Children and young people are growing up in a complex, often chaotic digital environment, increasingly shaped by artificial intelligence, but also by the same forces that have defined online life for the past decade: algorithms, anonymity and attention-driven content. As they navigate these dynamic, rapidly developing online social spaces and this context of information disorder, young people also encounter and engage with generative AI tools, such as chatbots, within the same ecosystem.
While much of the current discussion around AI in education focuses on plagiarism or homework support, it risks overlooking a more urgent issue: how these technologies intersect with the wider online environments that already shape young people’s beliefs, behaviours and sense of reality.
In our work across schools, colleges, Pupil Referral Units (PRUs) and Alternative Provision Settings (APs), we see this clearly. Young people are not encountering AI in isolation. They are engaging with it alongside social media, online communities and content streams that can reinforce mis/dis/malinformation, bias and, in some cases, harmful or extremist narratives. AI tools can both mediate and amplify these dynamics, offering answers that feel personalised, authoritative and immediate.
This reflects a wider trend we have observed through our work with the Organization for Security and Co-operation in Europe, where practitioners across multiple regions highlighted how digital environments are becoming more complex, more persuasive, and harder for young people to navigate critically. Across these contexts, a consistent theme emerges: the challenge is no longer just access to information, but understanding how that information is constructed, shaped and delivered.
This changes the nature of the challenge for educators.
Students are not just using AI to complete tasks. They are increasingly encountering it as a source of knowledge that appears reliable, even when it is not. Chatbots rarely show uncertainty. They do not always make clear where information comes from. And they can reproduce or reinforce existing biases present in the data they are trained on, as well as hallucinate wholesale fabrications.
For young people, particularly those already navigating complex or vulnerable online spaces, this creates new risks. They may begin to trust AI outputs in the same way they might trust a teacher or textbook, without the same mechanisms for verification or challenge.
This is why media literacy is more important than ever, but also why it needs to evolve.
Traditionally, media literacy has focused on helping students increase critical thinking, identify misinformation, evaluate sources and recognise bias. These skills remain essential. However, AI requires us to go further. Young people need to understand not just what they are seeing, but how it is being generated, shaped and presented to them.
This includes asking questions such as:
- What data might this response be based on?
- Whose perspectives are included or excluded?
- Why does this answer sound so confident?
- And when should I trust it?
These are not simply technical questions. They are about knowledge, power and influence in digital spaces.
In practice, we often see two unhelpful responses from students. Some over-trust AI because it feels authoritative and efficient. Others disengage once they realise it can be wrong. Neither response builds the kind of resilience young people need in increasingly complex information environments.
For educators, the implication is not to remove AI from the classroom, but to make it visible and open to critique.
This might involve asking students to analyse AI-generated responses, compare them with other sources, or explore how small changes in prompts can produce very different outputs. These activities help demystify AI and position it as something that can be questioned, rather than passively accepted.
However, one of the clearest lessons from our work is that schools cannot do this alone.
Parents and carers are a critical, and often overlooked, part of this picture. Many feel unsure about how AI works or how their children are using it. Yet young people’s engagement with AI often happens at home, outside structured learning environments.
Through our international and UK-based delivery, including with practitioners and families, we consistently see that effective engagement does not require technical expertise. What matters is creating space for simple, ongoing conversations. Asking a child how they used AI for homework. Discussing whether they trust the answers they receive. Exploring together how different prompts can lead to different results.
These conversations help build a consistent message across home and school: that AI is a tool to be used thoughtfully, not unquestioningly. This is not easy: engaging parents, both logistically and compellingly, can be hard, and doing it well requires specific and sustained effort.
Looking ahead, the role of education is not just to respond to AI, but to situate it within a broader understanding of digital life. Media literacy, in this context, is not an optional addition. It is a core educational and safeguarding priority. AI will continue to evolve, and its presence in young people’s lives will only deepen and normalise. The question is not whether they will use these tools, but how they will understand and engage with them, and the implications this will bring.
By embedding media literacy into everyday teaching, and by bringing parents actively into the conversation, we can move beyond reactive, fearful responses. Instead, we can support young people to successfully navigate not just AI, but the wider digital environments in which it operates.
In practice: three ways to bring AI into media literacy teaching
1. Compare and critique
Ask students to generate an AI response to a curriculum question, then compare it with trusted sources. What is missing? What seems uncertain? What might need checking?
2. Make the invisible visible
Explore how AI works at a basic level. Why does it sound confident? What kinds of data might shape its answers? Where might bias appear?
3. Bring parents into the conversation
Share simple prompts with families: “Ask your child how they used AI this week” or “Would you trust this answer?” Small conversations at home can reinforce critical thinking developed in school.

Dr Laura Zahra McDonald is a Founding Director of ConnectFutures, a UK-based organisation specialising in media literacy, digital resilience and the prevention of online harms. She has worked internationally, including with the Organization for Security and Co-operation in Europe, supporting educators and practitioners to address misinformation, extremism and the evolving challenges of digital environments.