Limited and minimal risk AI
In addition to prohibited AI and high-risk AI, the EU AI Regulation also distinguishes between AI systems with limited risk and minimal risk (Art. 50).
AI systems that fall under limited risk and minimal risk are often referred to collectively as low-risk AI in practice, as the distinction between the two categories is small.
Note, however, that AI systems in this low-risk category cannot necessarily be used without further ado. As with any software, prior approval is required, partly because of the privacy and security requirements that apply at UU. These requirements are intended to protect users and data.
What does this mean in practice?
For example, chatbots can be used for your studies or in education, provided you create an account with your private email address and comply with the standard terms of use for AI systems and tools.
Examples of limited risk AI
For this category of AI systems and tools, only transparency obligations apply to the supplier of those tools (from 2 August 2026). Note that transparency obligations (Art. 13) also apply to all AI systems, including high-risk AI. Examples of limited risk AI systems are:
- Chatbots
- Deepfake video
- AI avatar
- Sentiment analysis of voices
Art. 50.1: This article requires that users who interact with an AI system (e.g. a chatbot) are clearly informed that they are interacting with an AI tool and not a human being. This transparency requirement must be tailored to the target group: if, for example, the AI tool is aimed at persons with disabilities, this must be taken into account (Recital 132). An invisible watermark applied by the supplier is therefore not considered sufficient transparency.
For AI tools that generate deepfakes (Art. 50.4), it must be disclosed that the content (image, video, audio, etc.) has been generated or manipulated by AI. This rule is slightly more flexible for artistic or satirical content.
Examples of minimal risk AI
This category of AI systems carries no legal requirements under the EU AI Act, but other obligations, such as those under the GDPR and information security/cybersecurity rules, do apply. Examples of minimal risk AI systems are:
- Spam filters
- AI in videogames
- Recommender systems for personalised news and e-commerce
- Route advice, such as in navigation systems, to predict traffic jams or travel time
- Language recognition
Please note
AI systems and tools in this low-risk category may, under certain circumstances, fall into the high-risk (or even prohibited) category, for example when a tool such as ChatGPT is used for automatic grading. The risk classification of an AI system therefore always depends on the use case and context, which is why this determination must be carried out by certified employees (such as CAICOs).