‘AI and Law must take each other seriously’

Inaugural lecture Floris Bex

Do computer scientists and legal scholars truly understand one another well enough to build responsible AI together? This question was at the heart of Floris Bex’s inaugural lecture on 20 March. The Professor of Innovation in AI & Law believes collaboration between legal and technical disciplines is not a luxury, but a necessary condition for responsible AI within the rule of law.

Floris Bex has been Professor of Innovation in AI & Law for a year. The chair is a collaboration between the department of Computing and Information Sciences and the School of Law. In addition, he is scientific director of the National Police Lab AI, in which Utrecht University works together with the national police on responsible AI applications.

In these roles, he sees how AI is emerging everywhere, from hospitals and the police to courts and social media. And wherever AI appears, regulation follows: from impact assessments to ethical frameworks and new AI legislation. For legal scholars, these are essential tools to gain a grip on AI, whereas computer scientists often experience them as restrictive, according to Bex. “But if we want AI and Law to move forward together, computer scientists must accept that the law is more than a set of irritating rules. Legal scholars must understand that AI is more than hallucinating systems that make us less intelligent. These are recognisable caricatures, but they stand in the way of genuine collaboration.”

Three practical examples

Using practical examples, Bex demonstrated how collaboration can succeed. For instance, by linking classical argumentation theory to modern generative AI. In doing so, he referred to the work of his PhD candidates. “They do not set old and new forms of AI against each other, but combine them so that the strengths of both come to the fore. For example, we test which errors generative AI makes when providing legal reasoning, and we have explored how to extract legally relevant elements from texts. We have also developed chatbots that do comply with legal rules.”

He further emphasised that computer scientists and legal scholars must take each other’s fields seriously. The law has its own precision and logic, in which principles, legislation and case law play a central role. And AI is more than a computer system: it always involves choices about data, models and techniques. Bex: “Computer science without any legal understanding overlooks the limits set by the rule of law. Law without technical insight misses the reality of modern systems. Collaboration between legal and technical disciplines is therefore not a luxury, but a necessary condition for responsible AI within the rule of law.”

His third message: theory alone is not enough. In collaborations with, among others, the national police and the Council for the Judiciary, Bex shows that AI only truly has value when it also works in everyday practice. “What makes the Police Lab special is that researchers work simultaneously within the police and at the university. As a result, they are embedded both in policing practice and in the academic community. So not just AI for practice, but AI with practice.”
The real test for both AI and the law is not whether it works in theory, but whether it works in the real world

The work of his PhD candidates shows what this looks like in practice. For instance, one doctoral researcher is examining how AI performs on messy, incomplete police data, while others are working on ways to make machine learning outcomes understandable. The intelligent reporting form, which helps tens of thousands of citizens each year to report fraud, also demonstrates how this interaction takes concrete shape. “The real test for both AI and the law is not whether it works in theory, but whether it works in the real world.”

More information
AI Labs