Brussels, May 21, 2024 (The Europe Today): EU ministers have signed off on a groundbreaking law establishing stringent regulations for the use of artificial intelligence (AI) in sensitive sectors. The legislation aims to ensure that AI systems used in areas such as law enforcement and employment are transparent, accurate, and secure, meeting high standards for data quality and cybersecurity.
This decisive vote comes two months after the European Parliament endorsed the AI legislation. The new law goes beyond the voluntary compliance approach seen in the US, potentially setting a global standard for AI regulation.
Key Provisions of the AI Law
The law mandates that AI systems designated for “high-risk” applications must obtain certification from approved bodies before being introduced to the EU market. High-risk situations include those where AI use could impact health, safety, fundamental rights, the environment, democracy, elections, and the rule of law.
Certain AI applications, such as social credit scoring systems similar to those used in China, are banned outright. The legislation also prohibits biometric categorization based on religious views, sexual orientation, and race.
While real-time facial recognition in CCTV feeds is generally banned, exceptions are made for law enforcement purposes, such as locating missing persons, preventing human trafficking, or identifying suspects in serious criminal cases.
AI systems deemed to pose limited risks are subject to lighter transparency requirements. These systems must disclose that their content is AI-generated, so users can decide how much trust to place in it.
Implementation and Oversight
A new “AI Office” within the European Commission will oversee the enforcement of the law at the EU level. After being signed by the presidents of the EU legislature and published in the Official Journal of the EU, the law will officially go into effect 20 days later. However, most of its provisions will not apply until two years after that date.
Global AI Safety Commitments
Amid growing concerns over the potential risks posed by AI, over a dozen leading AI firms made new safety commitments at a global summit in Seoul, co-hosted by South Korea and Britain. These companies, including Google, Meta, Microsoft, and OpenAI, pledged to enhance transparency and accountability in developing safe AI technologies.
UK Prime Minister Rishi Sunak emphasized the significance of these commitments in a statement from Britain’s Department for Science, Innovation and Technology, highlighting the collaborative effort to ensure responsible AI development.
The new EU legislation and the global safety commitments underscore the international effort to regulate and manage the rapid advancement of AI technology responsibly.