Frequently asked questions
Below you will find answers to a number of frequently asked questions about AI at Utrecht University.
The university has an open attitude towards AI. As a student, you are given the space and opportunity to learn with AI. This also allows you to tailor your study programme to your future research or work environment.
The university embraces AI technology insofar as it adds value and enriches us (and society) rather than impoverishing us. We see AI not only as a challenge but also as an opportunity. An opportunity to innovate our education and contribute positively to the goals of the UU education model.
However, the use of AI systems and tools must be ethically responsible and legally compliant, with the university, and the humans in the loop, retaining their own freedom of action. We want to exploit the opportunities offered by AI while limiting the risks. In short, we call this Responsible AI.
As a university, we prefer sustainable AI tools and large language models (LLMs) wherever possible. Within AI systems and platforms, we prefer to use small language models (SLMs) where they suffice. This is also part of our UU AI ethical code of conduct, item 3.
We also advise AI users to consider whether an AI tool is actually necessary for their purpose, or whether a more sustainable alternative, such as an internet search engine (‘googling’), would do.
In the AI & Sustainability Lab, one of our AI labs, we conduct dedicated research into AI and sustainability together with UMCU.
UU assesses AI systems and tools before they are applied in education or business operations. To this end, AI systems undergo a combination of privacy, security and AI compliance scans to identify risks and ensure ethical and legal compliance. Only approved AI systems and tools may be used with students and staff, and specific conditions of use often apply. An example of such conditions can be found in the lecturer section of this site.
Pilots involving AI systems and tools in which students, lecturers or other staff take part must also undergo a general AI compliance screening, aimed at preventing the use of prohibited AI or of high-risk AI without appropriate supervision (in accordance with, among others, the GDPR and the EU AI Act).
Of course, this is not intended to hinder innovation, but to safeguard privacy and data protection, among other things. UU's attitude towards the use of AI systems and tools is “yes, provided that” rather than “no, unless”.
The UU AI approach in the field of education is primarily reflected in the integrated holistic approach and the UU AI ethical code of conduct approved by the Executive Board.
In doing so, we recognise the potentially significant impact of AI technology on our institution, the academic community, our values and the effects on our students and staff. Our code of conduct describes the frameworks within which we learn, teach and innovate responsibly with AI.
At UU, the policy covers all AI systems and tools, as well as impactful algorithms; it is not limited to generative AI.
Furthermore, the UU AI labs are characteristic of UU and UMCU, and we encourage AI spin-offs and incubators.
As a teacher or teaching assistant, you can find an overview of AI tools in the AI tool library (Dutch, behind login).
The approval process for AI tools for all our students takes more time, but we are working on it. Here we are thinking not of well-known AI tools such as ChatGPT, MS Copilot, Gemini and Claude, but of digitally sovereign alternatives that keep our data secure.
No. For example, MS Copilot (a well-known AI tool from Microsoft) has not been approved. As an employee, you can find more information on the intranet (Dutch, after logging in).
You can find contact details at the bottom of the relevant web page or in the central contact persons section.