The European Parliament has adopted the landmark Artificial Intelligence Act, the first comprehensive legal framework for AI worldwide. Proposed by the European Commission in April 2021, the legislation aims to create a uniform regulatory framework across the EU, ensuring the safe and ethical development, deployment, and use of AI technologies.
The AI Act categorizes AI systems into four risk levels: minimal or no risk, limited risk, high risk, and unacceptable risk, each carrying specific requirements and obligations. High-risk AI systems, for example, will face stringent requirements to ensure they do not compromise fundamental rights, while systems deemed to pose unacceptable risks, such as those used for cognitive manipulation or social scoring, will be banned outright.
This regulatory approach is designed to balance innovation with safety, fostering an environment where AI can thrive while safeguarding public interest. The Act promotes the EU’s vision of human-centric AI, ensuring technologies are developed in line with European values. It includes provisions for establishing regulatory sandboxes to enable controlled experimentation and innovation.
Moreover, the AI Act aligns with the EU’s broader digital strategy, which emphasizes fostering excellence and trust in AI. This strategy includes significant investments in AI research and development, with initiatives like the Public-Private Partnership on AI, Data and Robotics, and AI Excellence Centres to support innovation and industry collaboration.
The Act’s adoption reflects the EU’s leadership in setting global standards for technology regulation, much as it did with the General Data Protection Regulation (GDPR). The legislation is expected to influence AI governance frameworks around the world, promoting a safe and trustworthy AI landscape globally.