EU agrees world’s first artificial intelligence law: what does it mean?

The European Union has reached a historic milestone by agreeing on the world’s first comprehensive regulation of artificial intelligence (AI). The legislative process is not yet complete, but the agreement already marks a before-and-after in the governance of this technology.

This agreement, reached ahead of the 2024 European elections, underscores the importance and urgency of regulating a field as vast and potentially disruptive as artificial intelligence.


Big deals, long negotiations

The road to this agreement was not easy, marked by extensive negotiations and intense debates that culminated in two marathon sessions lasting 22 and 14 hours. This joint effort between the European Parliament and the Council of the EU, while still provisional, underlines major progress in AI legislation, although its full implementation is not expected until the end of 2026.


The Artificial Intelligence Act, known as the AI Act, was first proposed in April 2021. The President of the European Commission, Ursula von der Leyen, emphasized the value and possibilities that this new law opens up.

The regulation focuses on “identifiable risks” and seeks to balance safety and human rights with innovation. One of its key features is the classification of AI systems according to the risk they pose, with categories ranging from “minimal risk” to “unacceptable risk”.

The regulation also addresses specific and potentially dangerous uses of artificial intelligence, establishing prohibitions for certain applications and strict requirements for others, such as those involving critical infrastructure. The European Parliament details applications that are banned because they pose a threat to citizens’ rights and democracy, including certain biometric recognition and categorization systems.

AI and surveillance

One of the most controversial aspects has been the use of biometric identification systems, due to their implications for government surveillance and civil rights. After intensive negotiations, strict limits and conditions were set on their use, especially by law enforcement.


The law also affects generative AI models, like ChatGPT, introducing special rules to ensure transparency and risk management. The European Parliament succeeded in imposing stricter obligations on “high-impact” models, including risk assessments, incident reporting and cybersecurity safeguards.

A fundamental and sensitive aspect that the legislation addresses is the relationship between artificial intelligence and copyright. The law requires AI systems and models to respect transparency requirements and comply with EU copyright regulations.

Before the AI Act

Prior to the European Union’s AI Act, both Europe and the United States had taken several initiatives to address the ethical, legal, and technical challenges posed by AI.

In Europe:

  1. Ethics Guidelines for Trustworthy AI: In April 2019, the European Commission presented ethical guidelines for the development and use of artificial intelligence, focused on ensuring that AI is lawful, ethical and technically robust.

  2. General Data Protection Regulation (GDPR): Although it does not specifically address AI, the GDPR, in force since May 2018, has had a significant impact on how data is handled in AI systems, particularly with regard to privacy and data protection.

  3. National initiatives: Several EU countries, such as France and Germany, have developed their own national AI strategies focusing on ethics, innovation and support for AI research and development.

In the United States:

  1. White House guidance on AI regulation: In January 2020, the White House published a set of principles for the development and regulation of AI, focused on promoting innovation and economic growth while protecting public safety and American values.

  2. National Artificial Intelligence Initiative Act: Passed in late 2020, this law aimed to coordinate and unify AI efforts across the United States federal government and to promote AI research, education, and training.

  3. Guidelines and technical standards: Organizations such as the National Institute of Standards and Technology (NIST) are working to develop standards and frameworks for AI, including aspects of reliability and security.

  4. Private sector initiatives: The private sector has also played an important role in shaping ethical practices in AI. Companies such as Google, Microsoft, and IBM have developed their own ethics policies for artificial intelligence and are involved in various initiatives to promote its responsible use.

In summary, before the EU legislation, both Europe and the United States took several steps to address AI issues, although these were mostly ethical guidelines, national strategies, and general data regulations rather than specific, detailed legislation on artificial intelligence like the one recently approved by the EU.

This agreement therefore sets a significant precedent in the regulation of AI at the global level. With its focus on security, transparency and respect for human rights and democracy, the EU is taking the lead in creating an ethical and legal framework for the development and use of artificial intelligence.

Although full implementation is still a long way off, the EU’s AI regulation is laying the groundwork for an era of technological innovation guided by strong ethical and legal principles.
