“Are we powerless against artificial intelligence? No, absolutely not”
Much has been written recently about the pros and cons of artificial intelligence (AI). The reliability of ChatGPT’s output and the dominance of tech companies can both be questioned, to name just two concerns. Should we therefore be anxious? No, argues Professor of Media and Digital Society José van Dijck in a mini-masterclass on ChatGPT for members of the Dutch parliament. But something has to be done.
“Are we powerless? No, absolutely not. We need to invest in people, laws, and technology.” That is how Van Dijck summarises the message of her mini-masterclass on the consequences of ChatGPT for education and the media. Van Dijck underlines the risks and dangers of AI tools, but points out that there are also solutions. People need to get to grips with AI, she says.
Van Dijck sees no point in banning the tools. “Above all, we need to raise users’ awareness by embedding AI in teaching methods and making people well aware of what it can and cannot do, of what is and is not possible, and of the short-term and long-term risks.”
Adapt laws and regulations
Properly anchoring rules in legislation is of the utmost importance, Van Dijck stresses. The current privacy legislation, the GDPR, is insufficient: specific AI legislation is needed. “At the European level, the EU AI Act is currently being drafted. It is crucial to be involved and keep a close eye on it.”
The most important part of AI legislation, Van Dijck believes, is enforcing transparency. “Transparency about which datasets are used in the ‘training’ of the models and about exactly how the models are trained,” she explains, referring to the information AI tools are fed to make them ‘smart’.
Masterclass for members of parliament
José van Dijck, together with Professor of Artificial Intelligence Eric Postma (Tilburg University) and senior technology connector Arjan Goudsblom (TechLeap.nl), was invited to give a masterclass on ChatGPT for members of parliament. In seven minutes, she succinctly summarised the social implications of ChatGPT and its impact on education and the media. She also briefly answered the question: what should we do to curb the short- and long-term risks and dangers of ChatGPT’s implementation?
Monitor and enforce
“The last thing I would like to mention is monitoring and enforcement,” Van Dijck continues. “You can develop technical tools and ‘watermark’ information, for example, so you know where it comes from, but you will always lag behind the technology you want to control. Monitoring and enforcement are something quite different.” AI tools are already available and new ones will keep coming to market, but as long as you cannot access their training models, monitoring is essentially impossible.
In the end, it comes down to transparency and manpower, Van Dijck concludes. “You have to enforce continuous access to constantly monitor and assess the underlying language models of AI tools. And right now, only a few people have the knowledge to do this monitoring, so investing in new experts is incredibly important.”