Europe has taken a significant stride toward adopting regulations governing the use of artificial intelligence (AI), including models such as ChatGPT from Microsoft-backed OpenAI. Member states endorsed a political agreement reached in December, backing the rules first proposed by the European Commission three years earlier.
The regulations aim to establish a global standard for AI technology, covering a broad range of industries, including banking, retail, automotive, and aviation. They also set parameters for the use of AI for military, crime-fighting, and security purposes.
EU industry chief Thierry Breton emphasized the historic nature of the AI Act, calling it a world-first effort to strike a balance between innovation and safety. Digital chief Margrethe Vestager pointed to recent incidents, such as the spread of fake explicit images of Taylor Swift on social media, as evidence of the need for the new rules. The concern centers on generative AI fueling the rise of deepfakes: realistic but fabricated images and videos.
The agreement was all but assured once France, the last holdout, withdrew its opposition to the AI Act. In return, France secured strict conditions, including a balance between transparency requirements and the protection of business secrets, as well as a reduced administrative burden on high-risk AI systems. The objective is to foster competitive AI models developed within the EU.
The AI Act’s next steps include a vote by a key committee of EU lawmakers on February 13 and a European Parliament vote in March or April. While some challenges are anticipated, the legislation is expected to enter into force before summer and to apply in full by 2026.