It’s been a while since my last post. I have been patiently gathering my thoughts. As a parent myself, I feel an extra layer of responsibility when it comes to the delicate matter of parenting in the age of AI. That, and a couple of competing writing projects, have focused my gaze in other directions. It feels good to revisit this piece, part of a broader series on how different groups of people are incorporating AI into their lives.
AI is a technology with a divided soul. We tend to see this technology through the lens of what it gives us and what it takes away. This is a useful framing device for parents, as it focuses our vision on how our children might be using AI. Those familiar with AI will know of the benefits. We can save time, offload tasks, engage with a critical friend and much more. In a recent project, conducted through focus groups with primary and secondary teachers in the UK as part of a collaboration between STEM Learning and the Good Future Foundation, I was able to examine this theme in greater depth. Teachers shared the plethora of ways that students were engaging with AI in both the supervised (in school) and the unsupervised (away from school, at home) settings.
How children are using AI in the supervised setting
Adaptive Learning Platforms. Examples: Century Tech, Seneca Learning
Chatbots and Virtual Assistants. Examples: ChatGPT, Google, Alexa, Duolingo
AI Writing and Feedback Tools. Examples: Grammarly, ChatGPT, Google
Gamified Learning Platforms. Examples: Kahoot, Slido
Content Creation Tools. Examples: Canva, Padlet, Adobe
Curriculum AI Tools. Examples: Curipod, Kognity
How children are using AI in the unsupervised setting
Voice Assistants. Examples: Siri, Alexa, Google Assistant (‘Hey Google’)
Streaming Platforms.
Toys with Embedded AI. Examples: Drones, Hello Barbie
Education Apps and Tools. Examples: Khan Academy Kids, Duolingo Streaks
Large Language Models (LLMs). Examples: ChatGPT, Gemini, Copilot, Claude
Content Recommendation Engines. Examples: Khan Academy, Century Tech
The data above illustrates how children are experiencing AI in different settings, but it is not the main point of this post. The use cases for what AI gives children in terms of support are well-documented. I am more interested in what AI takes away from children: what values it threatens to erode or, in some cases, replace. Here are my contentions, based on two forthcoming academic papers on the topic and the findings from the AI Focus Groups.
We can group the contentions into six subgroups, each with a distinct identity and potential consequence. I call these areas of loss.
What AI Risks Removing: Six Areas of Loss in the Classroom
Cognitive ownership
Creativity and expression
Epistemic agency
Privacy and autonomy
Social and emotional development
Critical consciousness
Here’s how each plays out in the classroom, and why it matters.
Cognitive ownership
AI tools can make learning more efficient, helping students reduce cognitive load, create visuals to aid memory, and summarise complex texts. This is what AI gives children in terms of access to knowledge and support. But when used as shortcuts, these same tools risk eroding independent thinking, problem-solving stamina, and critical engagement. Over-reliance can also blur a student’s authentic voice, as AI’s style begins to shape their own.
Creativity and expression
With AI now widely accessible, students’ unique voices are under threat. Trained on vast datasets that reflect dominant styles and norms, AI outputs can homogenise ideas and expression. When students skip the creative struggle by leaning on AI, they lose opportunities to refine originality, take risks, and explore the ‘productive mess’ that fosters real creativity. We humans are messy thinkers, a trait that leads to organic creativity. In contrast, AI systems refine and refine, narrowing their outputs towards a single, homogenised result. This, I feel, is problematic for how children will learn to think now and in the future.
Epistemic agency
AI doesn’t just deliver knowledge; it decides what counts as knowledge and whose voices are heard. Limited transparency in AI training data means students may unknowingly receive a narrow, filtered worldview. This can marginalise certain perspectives and reduce students to passive recipients rather than active agents in shaping and questioning the knowledge they encounter. This point finds support in the work of Paulo Freire, as it links to his notion of critical consciousness, or conscientização (Freire, 1970). This, I feel, is an area of loss in its own right.
Privacy and autonomy
AI systems have the potential to reduce learners to data points, tracking scores, attendance, and behaviour. This creates a digital ‘disciplinary gaze’ (Foucault, 1977) that can monitor and shape learning without students realising it. While these systems can be useful, they risk turning classrooms into surveillance spaces where conformity is rewarded, individuality is diminished, and children become subjects of the systems that watch them.
Social and emotional development
Human connection is at the heart of good education. Yet, as AI takes over more teaching functions, opportunities for trust-building, empathy, and shared humour can fade. Peer collaboration, so vital for problem-solving and communication, can also diminish if AI provides instant answers, reducing the need for discussion and collective learning. This was a key theme to emerge from the Focus Group discussions, and clearly an area teachers and parents should be aware of.
Critical consciousness
Critical thinking was a consistent theme within the Focus Group discourse and the wider body of literature surrounding AI in the classroom. We must confront a fundamental question: critical of what, and to what end? How do we wish to shape our children’s thinking?
Freire’s concept of conscientização, a critical awareness of the world and one’s place in it, is more important than ever in the AI era. Students need to question not only how AI works, but why it operates as it does and whose interests it serves. Without this, AI risks becoming just another ‘banking model’ of education, depositing pre-packaged knowledge rather than fostering critical, dialogical engagement. This last point is the focus of two forthcoming academic papers due for publication next month.
I will close with a commitment to write more and to write more often. There’s much to explore in this field, notably the hidden voices of the children who are most invested in the future we speak so fondly of.
Footnote:
The AI Focus Group findings will be published via a series of five CPD Drops that will appear on the Good Future Foundation and STEM Learning Community websites next month (Sept 2026 TBC).
Good Future Foundation: Access here
STEM Learning Community page: Join for Free
The six areas of loss are taken from a forthcoming chapter, ‘Navigating the AI Epoch: A critical examination of student voice and agency through the lens of Foucault and Freire’ by Alex More, from the book Ethical AI and Data Science: Building Trustworthy and Transparent Systems (Taylor & Francis, due for publication Sept 2026).
References
Freire, P., 1970. Pedagogy of the Oppressed. New York: Herder and Herder.
Freire, P., 1996. Pedagogy of the Oppressed. Rev. ed. London: Penguin Books.
Foucault, M., 1977. Discipline and Punish: The Birth of the Prison. London: Allen Lane. (Originally published in French as Surveiller et punir, 1975.)