Digital autonomy in education: a public responsibility

Report of the GDS & Kennisnet symposium

Impression of the symposium
Photographer: Ari Purnama

Schools are increasingly using the platforms and AI infrastructure of big tech companies. This platform-driven transition to digital, data-driven education comes with a major risk: a strong dependency that erodes schools' autonomy. How do we ensure that the public education sector itself, rather than tech companies, decides how education is shaped? This was the focus of the symposium 'Digital autonomy in education: a public responsibility', organised by Utrecht University (GDS) and Kennisnet on 7 November.

Led by Niels Kerssens (UU) and Remco Pijpers (Kennisnet), a diverse group of around 70 researchers, education professionals and policy makers discussed 'digital autonomy' in education: the freedom of schools to manage their own digital learning environment. This freedom is under pressure, and safeguarding it is not a matter for individual schools but a public responsibility. The issue is of such importance to the sector that it deserves the collective attention of academics, education professionals, students, public education organisations and politicians.

The discussion was stimulated by two keynote speeches (Dr. Ben Williamson, University of Edinburgh, and Dr. Anne Helmond, Utrecht University) and two panel discussions. The keynotes outlined the challenges, while the panel discussions looked at how the sector can respond to them.

Ben Williamson: 'Unproven prejudices and inflated expectations'

In the first keynote, 'AI in schools: keywords of a public problem', media scholar Ben Williamson described AI in education as 'a pressing social issue', on a par with clean air, clean water and global warming. Such issues require broad public dialogue, yet that dialogue risks being narrowed to the question of how AI, as a technological innovation, can improve education. The key concept is collective responsibility. To exercise that responsibility properly, critical research is needed, Williamson said. This research should revolve around the (potential) impact of digitalisation and AI on education.

To support this research, Williamson proposed 10 'critical keywords' that can guide critical reflection. Two of these are 'intensification' and 'assetisation'.

  • Intensification: There is increasing pressure to adopt AI, often driven by an implicit sense of urgency to 'keep up'. This places an additional burden on schools and teachers and shifts ever more influence to external experts, at the expense of internal pedagogical knowledge.
  • Assetisation: AI in education can turn schools into data sources for large technology companies, which treat that data as a financial asset. This carries the risk of dependency, as schools become increasingly intertwined with commercial interests rather than maintaining their own autonomy.

On his blog, Williamson clearly explains all the critical keywords: ‘Critical keywords of AI in education’.

Williamson denounced the "unproven prejudices and inflated expectations" surrounding AI. AI research is often overly optimistic, he argued, focusing on technical problems rather than social implications and unintended consequences. In his publication 'Time for a Pause', published earlier this year, Williamson called for more effective public oversight of AI in schools. Without such oversight, he warned, AI in education will do more harm than good.

Anne Helmond: 'The industrialisation of AI' and its implications for education

In her talk 'Big AI in education', Dr. Anne Helmond made clear how much the discussion about AI in schools is shaped by the 'cloud wars': the fierce competition between Amazon Web Services (AWS), Microsoft Azure and Google Cloud, among others. Good education, the narrative goes, would not be possible without AI, nor without the cloud. Behind it all, however, is a cut-throat battle for market share and a growing market dominance, on an entirely different scale than when educational publishers competed for schools' custom.

Helmond pointed out that in the world of AI, superpowers such as Amazon, Microsoft and Google lead the way. Without these tech giants, AI's leap from the research lab to everyday applications would not have been possible. This is what Helmond calls the 'industrialisation of AI'. In this complex ecosystem, Google, Amazon and Microsoft have intertwined their cloud services with AI; Helmond calls this symbiosis between AI and Big Tech 'Big AI'. Small businesses and large organisations alike increasingly rely on its infrastructure and platforms for their AI initiatives.

This AI race, Helmond said, is accompanied by influential narratives about what AI will bring to the world. Companies such as Microsoft, Google, Meta and Amazon sketch a new era in which AI will be ubiquitous and reshape the world. The same kind of future is being outlined for education. Helmond emphasised that, in the current situation, it is very difficult to imagine any form of AI without big tech companies. This raises a pressing question for education: can it use AI while remaining autonomous?

Panel 1: Practitioners have their say

The first panel discussed the question: "How can we use critical research in practice to build a balanced and inclusive AI future?" Monique Leygraaf (IPABO), Per-Ivar Kloen (De Populier), Paul Zevenbergen (SIVON) and Theresa Song-Loong addressed the importance of critical research in guiding the role of AI in education.

Key points raised:

  • In order to have a conversation about what education wants and needs in the face of technologies like AI, we first need a clearer picture of what education is and should be.
  • How do we understand teaching, and what is the role of the teacher? This is a fundamental discussion that we need to return to first.
  • Big tech companies are global players and therefore difficult to steer, but education can stand up to them collectively. The negotiations with Google a few years ago showed this: the SIVON cooperative, in collaboration with other stakeholders, compelled Google to adopt privacy by design for the first time.
  • Teachers are currently hampered by a lack of clarity around the use of generative AI. Each teacher makes their own rules, which gives students too little guidance. It is important that schools engage in policy-making, and that this happens from the bottom up. This also gives students a foothold in a situation that otherwise creates a lot of ambiguity for them.

Panel 2: Research speaks

In the second panel, José van Dijck (Utrecht University), Koen Frenken (Utrecht University), Ben Williamson (University of Edinburgh) and Duuk Baten (SURF) discussed the question: what new critical academic research is needed and how can we achieve this together?

The panel raised the following issues:

  • Does AI in education need a pause? And how do we disentangle ourselves, now that everything is so intertwined? Perhaps a pause is not the right answer, but schools could stop procuring and actively deploying AI until there is better regulation, bringing back some calm.
  • Procurement is a crucial means of maintaining autonomy. When procuring ICT systems, educational institutions can make explicit, value-based choices. What makes this difficult is that a value such as security is always paramount in order to protect data properly, and it is precisely the systems of the big technology companies that build in this security well. Open source alternatives, which better safeguard other values such as autonomy, therefore stand less of a chance. All this could be explored further.
  • How can we better harness the power of European cooperation? Together, Europe can change the rules of the game: it is the biggest market in the world. When Europe regulates something, other countries often copy it later, because tech companies have already had to adapt to the new rules. The European GDPR is spreading rapidly around the world in this way. This is a force to be reckoned with, especially when it comes to AI.
  • There are major concerns about the impact of AI in education on sustainability; here too, critical research is needed. A query to ChatGPT is estimated to use around ten times as much energy as a Google search. There are also other hidden costs, such as the poor working conditions of the people who help train the systems. Is it up to individuals to factor this into their decisions about using AI, or is it more of a systemic issue? Here again, the freedom of individuals is shrinking: AI is being built into the tools we use, such as search engines, and it will not always be clear whether you are using AI or not. How does education maintain autonomy in the face of this development?

2025

The main takeaway from the meeting at the University Hall in Utrecht was the coming together of different sectors of education and research. Key insight: big tech companies have a lot of power, but do not underestimate the combined power of teachers, students, parents, unions, boards and research, among others. In 2025, Utrecht University and Kennisnet will develop new initiatives to promote the digital autonomy of schools.