The State of Our Intelligence: Regulation in AI Development
- André Teixeira
- Oct 3, 2024
- 5 min read
Updated: Dec 13, 2024
Author:
André Teixeira - Editorial Director of iGlobal.Lawyer
AI Engineer & Trainee Lawyer
The Pervasiveness of AI
The fever of Artificial Intelligence (AI) has swept across the globe. From smartphone integration to text generation and targeted advertising, the utilization of AI has become commonplace, and the public has begun to take notice. While all new technologies inherently carry risks and can instigate social and political transformations, AI is poised to exert the most significant influence on our societies since the advent of the World Wide Web.
It is often asserted that policy tends to lag behind technological advancements; however, the last decade has underscored the necessity for legislation governing a tool that can either uphold or undermine liberal democracy.
The dangers associated with AI—such as disinformation campaigns aimed at national elections, waves of unemployment resulting from skill mismatches, media manipulation that erodes public trust, and social networks that foster destructive cognitive bubbles—are becoming increasingly evident, even to lawmakers and administrators who typically exhibit slower responses.
The urgency for action is both clear and pressing.
The EU's Artificial Intelligence Act: Pioneering Legislation
Among the most significant contemporary legal initiatives related to AI is the European Union’s (EU) Artificial Intelligence Act. Following its initial proposal by the European Commission, the Act was adopted by the European Parliament in March 2024 and subsequently approved by the Council in May 2024.
It is set to become fully applicable 24 months after its entry into force, with certain provisions taking effect at different intervals: systems classified as posing an unacceptable risk will be prohibited six months after entry into force, while codes of practice will apply nine months in. The EU's AI regulation is already attracting international attention, as it represents the first comprehensive piece of legislation aimed at addressing the substantial effects and disruptions that AI is likely to continue causing in our lives.
Its potential impact cannot be overstated, particularly given the soft power the EU wields through its extensive network of programs and trade relations.
Similar to the influence of the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act has the capacity to establish a global benchmark for how AI shapes lives worldwide.
The Council of Europe (CoE) Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law: International Collaboration & Human Rights
Another noteworthy advancement in AI regulation is the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, adopted by the Committee of Ministers on 17 May 2024 and representing the first legally binding international agreement in this domain.
Negotiated by the 46 Council of Europe member states with input from private sector representatives, civil society, and academia, the Convention emphasizes fundamental principles, remedies, procedural rights, safeguards, and risk management requirements.
While it has a notable limitation, applying only to the parties willing to be bound by its contents, it remains an impressive initiative with signatories from across the world.
A Comparative Overview of the EU AI Act and the Council of Europe Framework on Artificial Intelligence
While there are numerous similarities between the two, their fundamental nature differs significantly.
The EU AI Act constitutes a piece of supranational legislation exclusive to the European Union and its 27 member states, imposing binding legal obligations on entities operating within the EU. In contrast, the Council of Europe Convention is a treaty that countries may opt to ratify, granting states the discretion to determine how to incorporate its provisions into their national laws. The Convention has been signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Israel, the United States of America, and the European Union, with countries from around the world eligible to join and commit to it.
The AI Act is distinguished by its market-centric approach, characteristic of EU legislation, providing clear regulatory guidelines that foster a safe and innovation-friendly environment for businesses while safeguarding consumer rights. Conversely, the Framework Convention encompasses a broader scope, with its primary strength rooted in a strong emphasis on Human Rights, Democracy, and the Rule of Law. This extends beyond economic considerations to ensure that AI systems respect fundamental freedoms.
Most importantly, the Council of Europe document includes specific exemptions for national security and defense, whereas these areas are excluded from the scope of the EU AI Act altogether, which was to be expected given that the latter is directly enforceable.
Conclusion
Industrialization in the nineteenth and early twentieth centuries spurred significant economic growth but also produced military technologies such as the machine gun, the tank, and mustard gas. The development of nuclear weapons presented an even greater threat that policymakers continue to grapple with today.
Computers transformed how individuals work, learn, and communicate; however, they have also exposed previously isolated systems to cyberattacks. AI is poised to bring about similar shifts, with many of its impacts being positive—such as boosting economic growth and enhancing daily life in myriad ways. Nevertheless, like any emerging technology, AI possesses a darker side.
Addressing its risks now is essential to ensuring that humanity reaps the benefits of AI rather than confronting its dangers. Even setting aside the pop-culture narratives of machines taking over the world, these dangers are real and very much current.
From altering the dynamics of warfare and forcing a reevaluation of work and social security models to shaping how we consume information and form our beliefs and thoughts, AI can have palpable and immediate consequences for our lives. While it is up to us whether that impact is positive, hesitation and procrastination will only take us so far.
We know the “ghost in the machine” is alive and well, so it is high time we regulate adequately and take Artificial Intelligence for what it is: not magic, but a particularly useful and dangerous tool, to be wielded by us good, old and flawed human beings.
References
European Parliament and Council of the European Union, 2024. "Artificial Intelligence Act (Regulation (EU) 2024/1689)". Official Journal of the European Union, 13 June 2024.
Council of Europe, 2024. "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". Council of Europe Treaty Series - No. 225.
Available at: https://rm.coe.int/1680afae3c
Edwards, L., 2021. "The EU AI Act: A Summary of Its Significance and Scope". Artificial Intelligence (the EU AI Act), p. 1.
Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., and Floridi, L., 2023. "Taking AI Risks Seriously: A New Assessment Model for the AI Act". AI & Society, pp. 1-5.
Hagendorff, T., 2020. "The Ethics of AI Ethics: An Evaluation of Guidelines". Minds and Machines, 30(1), pp. 99-120.
Scharre, P., 2019. "Killer Apps: The Real Dangers of an AI Arms Race". Foreign Affairs, 98, p. 135.
Dignum, V., 2024. "How Europe is Shaping AI for Human Rights". Umeå University AI Policy Lab.
Available at: https://aipolicylab.se/2024/09/05/how-europe-is-shaping-ai-for-human-rights/ (Accessed 16 September 2024).