Regulating Deepfakes Diffusion: Between Hard Law and Soft Law
- Virginia Perna
- Apr 3
- 22 min read
We are pleased to announce that the winning essay of the Local Essay Competition organized with ELSA Pavia, on the theme "The Role of International Law in the Age of Artificial Intelligence: Navigating Soft Law and Hard Law", was authored by Virginia Perna, a fourth-year law student at the University of Pavia.
Please note that some information may be outdated, as the essay was written in December 2024 and the field is constantly evolving.
Abstract
The objective of this essay is to discuss the various national and international legal regulatory models for addressing deepfake-related disinformation and to evaluate which is the most effective.
The term “deepfake” originates from the vocabulary of Artificial Intelligence (AI), combining Deep Learning (DL), a subset of Machine Learning (ML), with “fake”, meaning deceptive content. The result is entirely fabricated content that is almost indistinguishable from the real thing. The technology generally used to create deepfakes is the Generative Adversarial Network (GAN), an AI architecture consisting of two components: a generator and a discriminator. The first creates fake content, while the second tries to distinguish real from fake. Through this adversarial process the quality of the output improves, as the discriminator pushes the generator to produce ever more realistic fabrications.
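To make the adversarial dynamic concrete, the following is a minimal sketch of one GAN training step in PyTorch. It is purely illustrative: the layer sizes, noise dimension, and the random tensors standing in for real images are arbitrary placeholders, not the architecture of any actual deepfake system.

```python
import torch
import torch.nn as nn

# Toy generator: maps random noise to a flat "image" vector (placeholder sizes).
generator = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Toy discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) The discriminator learns to tell real content from generated content.
    noise = torch.randn(batch_size, 64)
    fake_batch = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) The generator learns to fool the discriminator, so its fakes become
    #    progressively more realistic, which is the dynamic described above.
    noise = torch.randn(batch_size, 64)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Example call with random tensors standing in for a batch of genuine images.
training_step(torch.randn(32, 784))
```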
The problems caused by such fabricated content go far beyond disinformation, since it can harm an array of individual rights.
Consider, for instance, non-consensual deepfake pornography (1), namely a synthetic video of a person engaged in sexual activity that never happened, or identity theft and impersonation (2). In any case, among these issues, the discussion will focus solely on disinformation. In a world where media can spread across the globe in seconds through digital platforms, deepfakes are a serious and extremely challenging source of disinformation.
The risks are particularly noticeable in political, social, and economic contexts, where deepfakes can mislead, manipulate, and harm individuals or groups. For clarity, it is important to note that these phenomena existed even before the advent of AI, for example in content manipulated with Photoshop. Such content, cheaper and more conventional to produce, was and still is called “cheapfakes”.
However, deepfakes are becoming significantly more accessible, user-friendly, and, most notably, far more accurate.
Moreover, for a comprehensive discussion, it is essential to examine the existing regulations addressing disinformation—both to evaluate their applicability to deepfakes and to understand the different legislative approaches. Additionally, attention should be given to the regulation of what can be described as the "foundation" of AI: the digital elements that predate AI and are essential for its operation, such as data.
Data is pivotal for ML and is governed by a variety of legal frameworks across the globe. However, the regulation of data in relation to deepfakes will not be addressed in this essay.
In this context, it is crucial to assess the critical issues that emerged in connection with deepfakes, drawing on real-world cases and their foreseeable implications, and to examine the legislative trends emerging in various countries to address this phenomenon.
Furthermore, this essay will explore what has been done and what more can be done from an international and coordinated perspective, balancing technological innovation, free speech, and the protection of Human Rights.
Deepfake Disinformation
The aim of this section is to provide an overview of how the creation and diffusion of deepfakes can negatively affect democratic systems and collective rights by amplifying the power of disinformation. For this purpose, we will consider real cases and possible future developments.
One of the first episodes to make the issue concrete occurred in 2019 and involved an Italian politician, Matteo Renzi. In this instance, an AI-generated video was broadcast during a television program, in which Renzi insulted several politicians, including the President of the Republic of Italy. The presenter, Ezio Greggio, failed to clarify that the video was a deepfake, leading many viewers to believe that the statements attributed to Renzi were genuine. It was only hours later that the program broadcast a statement acknowledging the falsity of the content (3).
This case highlights a critical concern: individuals who are not accustomed to technology and social media are often unable to distinguish manipulated and fabricated content. Indeed, this television program is certainly followed by an older audience. Younger generations would most likely have detected the synthetic origin of the video, being more aware of the possibility of encountering AI-generated content and probably having already come across it on digital platforms.
However, it should be borne in mind that these detection capabilities will become increasingly ineffective, since AI-generated content is growing ever more realistic and harder to detect, even for other AI systems (infra). In any case, while this incident may seem humorous, it serves as a valuable example of the impact that deepfake technology can have on public perception.
Moreover, this incident–which was not in itself dangerous–reflects the broader risk of disinformation that can affect political matters. Regimes around the world, such as those in Russia and China, have historically used fake news and disinformation to influence public opinion. However, with the advent of advanced deepfake technology, the scale and sophistication of such propaganda efforts have been significantly enhanced.
For instance, China has created fabricated videos featuring invented "broadcast journalists" from a program called "Wolf News". These videos were designed to spread hate and fear of adversarial nations, such as the United States, by misrepresenting key events. In one such video, a broadcaster claimed that the United States experienced 600 mass shootings annually, a figure that was entirely false and invented (4).
Even in democratic nations, it is imperative to remain aware of the potential dangers posed by these technologies. As highlighted by the U.S. Department of Homeland Security (DHS) in its report “Increasing Threat of Deepfake Identities” (5), the accuracy of these technologies continues to improve, making it increasingly difficult, even for other AI systems, to detect AI-generated content. This growing sophistication entails significant challenges in determining the authenticity of information, raising serious concerns about the potential risks of misinformation.
While democracies are built on the principles of transparency, free expression, and access to truthful information, the rise of deepfakes threatens to undermine these fundamental values. Indeed, if individuals are aware that the content they encounter may be AI-generated, their trust in the authenticity of the information they read will inevitably decrease. This negative change in people's perception could undermine their fundamental right to be accurately informed, which, as we know, is intrinsically linked to the broader concept of freedom of expression: one of the paramount principles of democracy.
We must recognize that the convergence of deepfake technology with advanced profiling techniques presents an even greater threat to democratic systems. A notable case that highlights the potential dangers of such technologies occurred in 2018, with the Facebook-Cambridge Analytica data scandal. In this instance, the personal data of millions of Facebook users, gathered throughout the 2010s, was exploited to create detailed psychological profiles. These profiles were then used to target individuals with tailored propaganda, primarily by Cambridge Analytica, an advertising and political consulting firm (6).
The scandal had far-reaching consequences, resulting in significant fines for Facebook in both the United States and the United Kingdom, as well as a public apology from Mark Zuckerberg (7).
This practice of data manipulation and psychological profiling was notably employed during high-profile political campaigns, including Donald Trump’s 2016 presidential run and the campaign for the United Kingdom's Brexit. However, despite the imposition of fines, it is important to note that these penalties do not equate to genuine compensation for the harm caused. Particularly when psychological factors are involved, the harm is difficult to quantify, as the consequences are inherently intangible.
Such techniques, which are designed to predict and influence individual behaviour, can be weaponized further when combined with deepfake technology. By profiling individuals based on their psychological traits, political beliefs, or vulnerabilities, malicious actors can more effectively identify and target those most susceptible to believing in and sharing deepfake content. This creates a potentially dangerous feedback loop, where individuals are not only exposed to misleading media but are also specifically targeted due to their predispositions, further eroding the integrity of democratic processes and public trust in media.
Existing Nation(8)-Based Solutions
This section explores various solutions to the social issue outlined earlier. It is divided into two parts: the first addresses the diffusion of deepfakes, as many of the challenges stem from the ease with which content can be shared globally, allowing it to spread rapidly across borders; the second tackles the root issue, which is the creation of deepfakes.
It is important to note that, for the first issue, it is both possible and necessary to apply existing legal frameworks to classify the phenomenon, potentially strengthening current regulations. In contrast, the second issue is largely addressed through proposals for new legislation, guidelines, or, in a few cases, very recent forms of hard law.
1. Disinformation Diffusion since the Origin of the Internet
As previously noted, issues such as fake news and disinformation existed online long before the advent of advanced AI technologies. The key difference today lies in the scale and intensity of these problems. In this context, it is essential to examine the regulations governing digital platforms, as they are the central actors in the online ecosystem. The legal frameworks of different nations reflect varying approaches to the regulation of online content and digital platforms, particularly in relation to their liability for user-generated content.
This section will focus on the legislative frameworks of European countries, considering in particular the interpretation and decisions of the European Court of Human Rights (ECtHR) as representative of the European approach, and of the United States, since these represent two distinct and paradigmatic models of regulation, offering contrasting approaches to platform responsibility.
One of the key factors in understanding the differences between U.S. and European approaches lies in the interpretation and evaluation of freedom of speech, particularly in the context of the advent of Web 2.0 (9).
These approaches already differed before the digital era: the First Amendment shows that in the U.S. freedom of speech is considered an absolute right, whereas under Art. 10 of the European Convention on Human Rights (ECHR) it is subject to balancing with other compelling factors such as social needs or individual rights (10). These differences were exacerbated by the rise of digital platforms.
The U.S. Supreme Court viewed this new landscape as an opportunity to further promote the “free marketplace of ideas,” while the ECtHR immediately recognized the emerging risks and saw the need for more restrictive regulation than for traditional media.
Two cases that illustrate this stark contrast are Reno v. ACLU (521 U.S. 844, 1997) and Editorial Board of Pravoye Delo and Shtekel v. Ukraine (3104/05). In Reno, the U.S. Supreme Court addressed the government’s attempt to regulate online content that could potentially harm minors. The Court rejected the government’s argument, stating: “The dramatic expansion of this new marketplace of ideas contradicts the factual basis of this contention. The record demonstrates that the growth of the Internet has been and continues to be phenomenal. As a matter of constitutional tradition, in the absence of evidence to the contrary, we presume that governmental regulation of the content of speech is more likely to interfere with the free exchange of ideas than to encourage it. The interest in encouraging freedom of expression in a democratic society outweighs any theoretical but unproven benefit of censorship.”
In contrast, the ECtHR, in Editorial Board of Pravoye Delo and Shtekel v. Ukraine (para. 63), noted: “The risk of harm posed by content and communications on the Internet to the exercise and enjoyment of human rights and freedoms, particularly the right to respect for private life, is certainly higher than that posed by the press. Therefore, the policies governing reproduction of material from the printed media and the Internet may differ. The latter undeniably have to be adjusted according to the technology’s specific features in order to secure the protection and promotion of the rights and freedoms concerned.” (11)
The second crucial element to consider in the context of disinformation and defamation is the responsibility of digital platforms.
Two key provisions addressing this issue are found in the Communications Decency Act (CDA) and the Digital Services Act (DSA); on the European side, we will here consider only the EU countries. Regarding the CDA, Section 230 was designed to promote the growth of the internet by providing platforms with an exemption from liability for user-generated content (UGC).
This exemption establishes a clear distinction between digital platforms and traditional media, since platforms were initially viewed as playing a passive role in content distribution, unlike editorially responsible media outlets.
Moreover, Section 230 includes a “Good Samaritan” clause, which protects platforms from liability when they restrict access to objectionable material in good faith (12).
Regarding the Digital Services Act (DSA) (Reg. (EU) 2022/2065), it follows in the footsteps of the E-Commerce Directive (Dir. (EC) 2000/31), with the goal of integrating and harmonizing the EU Single Market. These provisions, while sharing several common elements, distinguish between three categories of services: mere conduit, caching, and hosting (13).
For the purposes of this essay, only the latter two categories are relevant.
The key point for this discussion is that platforms can be held responsible for user-generated content (UGC) under certain conditions, including their active involvement with the content, actual knowledge of its unlawfulness, or receipt of an order from a national authority to remove it. In essence, in the EU there is no absolute safe harbour comparable to Section 230. Furthermore, the DSA introduces a version of the "Good Samaritan" clause in its Art. 7, which allows platforms to act in good faith to remove illegal content without thereby assuming full liability. However, it is crucial to note that, to avoid unduly burdening platforms, the DSA explicitly excludes any general monitoring or active fact-finding obligations, as stated in Art. 8.
With this foundational understanding of the legal framework in place, we are now prepared to assess the two critical issues of disinformation and defamation generated by AI.
1.1. About Disinformation
The primary challenge of deepfakes lies in balancing regulation with freedom of expression, as efforts to limit their spread risk infringing on free speech. While some deepfakes are used maliciously, others serve satirical or ironic purposes, making intent difficult to determine.
Accountability for their dissemination is equally complex, particularly as detection technologies advance. Developing reliable algorithms for detection will be crucial, but concerns about transparency and bias in these systems must also be addressed.
Starting with the U.S. approach, it is noteworthy that there is currently a lack of hard law provisions specifically addressing disinformation. Although several states have attempted to introduce legislation, none of these efforts have passed into law. A paradigmatic example is California’s Senate Bill 1424 of 2018, which aimed to establish an advisory board tasked with developing a comprehensive strategy to combat disinformation. Although the bill was passed by the Senate, it was ultimately vetoed by the Governor (14).
The existing legal framework has a predominantly private basis: it consists primarily of soft law agreed among private entities and of private agencies and organizations tasked with monitoring disinformation (15).
Furthermore, tech companies themselves have adopted guidelines, binding on their user communities, to address disinformation, taking advantage of the room left by the Good Samaritan clause (16).
The prevailing legal philosophy in the U.S. is that disinformation should be addressed within the broader context of freedom of speech and expression. Under this framework, if an individual disseminates false or misleading information, others are free to counteract it by expressing alternative viewpoints, the so-called counterspeech doctrine (17). The Disinformation Governance Board (DGB), an advisory body established by the Department of Homeland Security (DHS), exemplifies these challenges. Operating briefly from April to August 2022, its role was limited to researching best practices and advising the DHS on countering disinformation, without any operational authority. Despite this, the Board faced heavy criticism, with detractors claiming it posed risks to free speech and potentially conflicted with the First Amendment (18)(19).
To be precise, it should be noted that the Department of Homeland Security (DHS) continues to monitor disinformation, even in the absence of the Disinformation Governance Board (DGB), given that disinformation is regarded as a national security threat.
Moreover, it should be emphasized that not only is there a lack of hard law or public forms of soft law regulating disinformation, but there is also legislation that appears to take the opposite approach. For instance, in 2021, Florida enacted SB 7072, a law that imposes restrictions on digital platforms, limiting their ability to censor, shadow-ban, or manipulate UGC through algorithms.
However, the rise of deepfake technology has fundamentally altered this dynamic.
The ability to debunk false content becomes significantly more challenging when digital media—indistinguishable from real content to the naked eye—can be fabricated and disseminated. In this regard, the traditional freedom of speech framework faces unprecedented challenges posed by the technological capabilities of AI.
In response to these concerns, many U.S. states have started to take legislative action specifically aimed at regulating deepfakes, particularly in the context of advertising for political campaigns (20). The typical regulatory approach mandates that creators or distributors of deepfake content related to political candidates or ballots must disclose the fabricated nature of the content by labelling it. Violations of these provisions are subject to both civil and criminal sanctions.
At the federal level, two notable bills have been introduced to address the growing issue of deepfakes, especially in the lead-up to elections: the AI Transparency in Elections Act and the Protect Elections from Deceptive AI Act. The former mandates that disclaimers be attached to AI-generated political ads, while the latter aims to prohibit the distribution of such deceptive content.
The rapid advancement of deepfake technologies, driven by increasingly sophisticated Generative Adversarial Networks (GANs), poses enforcement challenges. Detecting AI-generated content will demand advanced technologies, a concern highlighted by DARPA, which is actively developing tools to address GAN-driven media manipulation (21). One example is the Semantic Forensics (SemaFor) program, which seeks to advance semantic technologies for analysing media content (22).
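Purely as an illustration of what automated detection can involve (not DARPA's actual tooling), the sketch below repurposes a generic pretrained image classifier as a binary real-versus-synthetic detector; the model choice, input size, training data, and labels are assumptions made for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a generic pretrained backbone and retrain only its final layer as
# a binary real-vs-synthetic classifier (an illustrative stand-in for far more
# sophisticated forensic models).
detector = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
detector.fc = nn.Linear(detector.fc.in_features, 2)  # classes: 0 = real, 1 = synthetic

optimizer = torch.optim.Adam(detector.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_on_batch(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step on a batch of frames (N, 3, 224, 224) with long 0/1 labels."""
    detector.train()
    optimizer.zero_grad()
    loss = loss_fn(detector(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def synthetic_probability(frame: torch.Tensor) -> float:
    """Estimated probability that a single (3, 224, 224) frame is synthetic."""
    detector.eval()
    logits = detector(frame.unsqueeze(0))
    return torch.softmax(logits, dim=1)[0, 1].item()
```

Classifiers of this kind suffer from exactly the limitation noted above: as generators improve, the statistical traces such detectors rely on become fainter.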
With respect to the EU legal framework on disinformation, the context differs significantly. Although the EU also recognizes the challenges in balancing freedom of expression with the need to combat disinformation, its approach combines some general hard-law provisions with soft law for more precise intervention. As to the former, the main provision is found in the Digital Services Act (DSA) and concerns solely Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs).
Indeed, Section 5 of the DSA contains specific rules imposing additional obligations on these entities. What is material for the purposes of this essay is the obligation to carry out a risk assessment and to mitigate the risks that emerge from it. Art. 34 provides a minimum list of risks to be included in the assessment and, even though deepfakes are not specifically mentioned, they fall under para. 1(c), which concerns “any actual or foreseeable negative effects on civic discourse and electoral processes, and public security”.
This provision, combined with Art. 35, which requires platforms to put in place measures to mitigate the risks detected, makes it mandatory for VLOPs and VLOSEs to act also against deepfakes that may negatively affect civic discourse, electoral processes, and public security. This hard-law provision must be complemented by more concrete requirements, which are provided by soft law.
Regarding this more flexible legal model, the EU has taken two key steps: first, by explicitly addressing the scale of the problem in the Recitals of the DSA (23) and through public communications by EU institutions (24); and second, through the publication of the Code of Practice on Disinformation (hereinafter, “Code”). This voluntary framework allows digital platforms to adhere to its principles and adopt the recommended practices.
Initially published in 2018, the Code was updated in 2022 to address emerging risks, reinforce the original framework, and comply with the new DSA provisions (25). Notably, the 2018 version did not address the issue of manipulated media (26), whereas the updated version explicitly incorporates measures to tackle these concerns, further illustrating how AI technologies have introduced new challenges in this domain. For the purposes of this essay, we will focus solely on the most recent iteration of the Code.
The Code outlines a series of objectives, such as demonetizing the spread of disinformation, ensuring transparency in political advertising, empowering users, fostering collaboration with fact-checkers, and improving researchers’ access to data. With these objectives set out, it establishes commitments for signatories, alongside the corresponding measures required to achieve them.
In this analysis, we will briefly examine Commitments 14 to 16, which pertain to the creation and dissemination of manipulated media, including deepfakes. These commitments emphasize the following measures:
Community Guidelines: ensuring clear policies on manipulated content;
Transparency Obligations: implementing mechanisms to detect and disclose AI-generated content while warning users of its nature;
Cross-Platform Collaboration: sharing critical information among signatories, including insights on cross-platform manipulation, foreign interference, and emerging incidents within their respective services.
Signatories are also required to align their policies with findings from the DISARM Framework (27) (referred to in the Code by its earlier name, AMITT) on emerging tactics, techniques, and procedures (TTPs). Additionally, platforms must track and report actions taken against manipulated content and ensure that the algorithms used for detection are trustworthy.
It is worth emphasizing that, as specified in the introduction to these Commitments, these measures are framed within the safeguards provided by Art. 10 of the ECHR and Arts. 11, 47, and 52 of the Charter of Fundamental Rights of the European Union (CFREU), ensuring respect for fundamental rights (28). Regarding its practical impact, the Code has garnered adherence from major digital players, including Meta, Google, TikTok, and Microsoft. As we will explore further, this type of soft law appears to address the complex needs involved in combating disinformation more effectively than hard law, which may struggle to balance the competing interests at stake.
2. Addressing the Creation of Deepfakes
In this section, we will briefly analyse the legal provisions adopted by various countries to address the creation of deepfakes that contribute to disinformation. Two examples of such legislation come from the EU and China, although other countries have also acted against this phenomenon. Among these regulations, our focus will be on the AI Act, as the Chinese legal framework is highly peculiar and less likely to influence other countries' legislation. We will also briefly mention other notable national solutions to provide a broader comparative perspective.
The EU AI Act imposes transparency obligations on AI systems capable of generating synthetic content. Specifically, Art. 50 requires that AI-generated content be clearly marked as such in a machine-readable format. This obligation ensures that even if downstream deployers fail to disclose or deliberately omit the AI origin when distributing the content, the material can still be identified as AI-generated. Notably, this transparency approach has been adopted by several other countries as well.
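Art. 50 does not prescribe a specific marking technique. As a minimal sketch of what a machine-readable label could look like, the snippet below writes a JSON sidecar manifest containing a "synthetic" flag and the file's hash; the field names, the sidecar approach, and both helper functions are hypothetical illustrations, not a format mandated by the AI Act.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_manifest(content_path: str, generator_name: str) -> Path:
    """Attach a machine-readable 'AI-generated' label as a JSON sidecar file.

    The schema (field names, file suffix) is a hypothetical example; Art. 50
    only requires that synthetic content be detectable in a machine-readable
    way, without mandating this particular format.
    """
    data = Path(content_path).read_bytes()
    manifest = {
        "synthetic": True,                           # explicit AI-generated flag
        "generator": generator_name,                 # which system produced it
        "sha256": hashlib.sha256(data).hexdigest(),  # ties the label to the exact file
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = Path(content_path + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

def is_declared_synthetic(content_path: str) -> bool:
    """Downstream check: does a matching manifest declare the file synthetic?"""
    sidecar = Path(content_path + ".provenance.json")
    if not sidecar.exists():
        return False
    manifest = json.loads(sidecar.read_text())
    current_hash = hashlib.sha256(Path(content_path).read_bytes()).hexdigest()
    return manifest.get("synthetic", False) and manifest.get("sha256") == current_hash
```

In practice, embedded watermarks or standardized provenance metadata are preferable to a detachable sidecar of this kind, which can simply be dropped when the content is re-shared.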
The provisions of Art. 50 apply broadly to all AI systems covered under the regulation, including limited-risk and general-purpose AI systems. However, it is worth highlighting that Art. 50 is particularly relevant for these systems: prohibited AI systems are, by definition, banned under Art. 5, and high-risk systems concern applications outside the scope of content creation, provided that the content does not relate to one of the critical areas indicated in Art. 6 (29).
An intriguing case may arise with AI systems explicitly designed to produce deepfakes for disinformation. Such systems would likely fall under the prohibition set out in Art. 5, which targets, among others, AI practices that create deceptive content aimed at manipulating groups of people and undermining their ability to make informed decisions. Furthermore, Art. 50 anticipates the need for practical implementation guidance. It states that the AI Office will issue more concrete provisions (30) in the form of Codes of Practice, ensuring harmonized and consistent enforcement of transparency obligations. Beyond the substantive text, numerous Recitals within the AI Act also emphasize the importance of transparency, particularly concerning deepfakes (31).
This dual approach—where foundational principles are established through hard law and further operationalized via soft law—provides both stability and flexibility, which is critical for regulating innovative and rapidly evolving sectors.
Turning briefly to China, the main regulation is the 2022 Provisions on the Administration of Deep Synthesis Internet Information Services, which does not regulate AI systems in general (unlike the AI Act) but addresses only deep synthesis, namely AI systems capable of generating deepfakes through technologies such as DL and virtual reality, which use generative sequencing algorithms to create content (32).
This legislation is far stricter: it not only requires the labelling of content, but also imposes an array of mechanisms to monitor and control compliance and to prevent these services from generating potentially illegal content and fake news, without taking freedom of speech into consideration.
International Solutions and Possible Perspectives
In relation to international law, it is important to emphasize that the consistent and uniform application of rules across “relevant” (33) states is essential for achieving the maximum effectiveness of AI regulation. Transparency obligations, for instance, lose much of their value if they are imposed by only a few states and applied unevenly. In this section, we will examine the existing international legal framework and, drawing on it, as well as on the national frameworks discussed earlier and the practical realities of deepfake dissemination, we will propose what could be an effective solution for regulating deepfake-related disinformation.
This analysis will remain within the boundaries of what appears to be feasible given the diverse and sometimes conflicting interests and legal cultures of different countries.
Existing Legislation
The models of legislation within the international framework are highly diverse, varying in territorial scope, legal enforceability, the generality or specificity of their provisions, and the type of institution (public or private) that issued them. For example, the UNESCO Recommendation (hereinafter, "the Recommendation") and the Council of Europe Framework Convention on Artificial Intelligence (hereinafter, "the Convention") adopt a general approach, as they do not provide detailed measures. Both documents come from public institutions, yet they differ significantly in legal force and territorial scope.
The primary goal of these two documents is to promote trustworthy, human-centered, and Human Rights-compliant AI.
Although the Recommendation and the Convention are highly general in addressing the issue of disinformation, both documents acknowledge it as a concern. For instance, the Recommendation identifies the problem in its preamble, in the document’s scope (para. 4, letter d), and in para. 114 under the policy area of “Communication and Information”.
In this last paragraph, the proposed solution to fight disinformation is education.
Moreover, another provision requires Member States to implement mechanisms to prevent the fostering of cultural bias, the spread of disinformation and misinformation, and the disruption of freedom of expression and access to information. As is evident, these provisions remain largely abstract and lack actionable measures.
A similar vagueness is observable in the Convention. The relevant articles are Arts. 4, 5, and 8. The first two broadly refer to the Convention’s scope, emphasizing the need to uphold Human Rights and democracy, which implicitly involves combating harmful disinformation. Art. 8, on the other hand, addresses principles to be followed throughout the AI system lifecycle, such as transparency and oversight. Notably, it requires the "[...] identification of content generated by artificial intelligence systems," a provision reminiscent of the national-level solutions discussed above, though it lacks concrete implementation measures.
It is important to note that while the UNESCO Recommendation constitutes soft law, the Convention, once ratified and meeting its entry-into-force conditions (34), will acquire the status of hard law, becoming binding on ratifying countries as an international treaty.
Other international organizations, such as the OECD and the G7/G20, have also published documents addressing these issues. However, as these are similarly principle-driven and closely resemble the UNESCO and CoE documents, they will not be analysed in this essay. A notable and innovative initiative is the Partnership on Artificial Intelligence (PAI), a private-sector project involving major technology players. PAI has developed guidelines for ethical and trustworthy AI, including the “Responsible Practices for Synthetic Media”, which outline how to responsibly develop, create, and share synthetic content.
This document offers practical measures tailored to three key groups: Builders of Technology and Infrastructure, Creators, and Distributors and Publishers. Transparency remains the central theme. However, the guidelines distinguish themselves by taking a more practical approach: through detailed descriptions and hyperlinks, the document provides actionable suggestions, including various methods for content labelling. This specificity makes the PAI guidelines a valuable resource for fostering ethical practices in synthetic media.
Key Takeaways
To identify an effective solution, we must address the key challenges identified during the discussion:
Legal cultural differences;
Global nature of the issue, which complicates enforcement and harmonization;
Regulating a rapidly evolving sector while balancing certainty and flexibility;
Involvement of private entities alongside nation-states.
Given these complexities, a single document cannot suffice. Instead, the solution requires a two-tier approach.
First, an international framework must be established to define overarching objectives—namely, protecting democratic processes and safeguarding public discourse from malign interference. This framework should impose minimum practical standards, such as transparency obligations for AI service providers and digital platforms across all content types. It must reflect diverse legal cultures and foster cooperation among states and companies to ensure uniform technological standards. This international agreement would primarily involve states and could leverage existing international organizations as platforms for dialogue, though this is not strictly necessary.
Second, a non-binding guideline should complement the treaty, involving both states and private stakeholders, such as tech companies. These guidelines would promote consistent implementation of the treaty while encouraging harmonized practices across jurisdictions.
Finally, it is crucial to recognize that regulating AI systems alone is insufficient if foundational elements, like data governance, are overlooked. Without robust data protections, adversarial entities will continue to operate beyond international measures, leveraging advanced technologies to exploit vulnerabilities.
This multi-layered approach balances enforceability, flexibility, and inclusivity.
References
(1) Ayyub R., 2018. “Deepfake porn is on the rise – and it’s terrifying”. HuffPost UK.
(2) Magramo K., 2024. “Deepfake CFO scam targets Hong Kong company in $25 million fraud”. CNN.
Retrieved from: https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk/index.html
(3) Cosimi S., 2019. “Il deepfake di Renzi è un gioco molto pericoloso”. Wired.
Retrieved from: https://www.wired.it/attualita/media/2019/09/25/il-deep-fake-di-renzi-e-un-gioco-molto-pericoloso/
(4) Vincent J., 2023. “Artificial intelligence is training deepfakes to be more persuasive”. The New York Times.
Retrieved from: https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html
(5) U.S. Department of Homeland Security, 2020. “Increasing threat of deepfake identities”. U.S. Department of Homeland Security.
(6) Zengler T., 2018. “Facebook-Cambridge Analytica: A timeline of the data hijacking scandal”. CNBC.
Retrieved from: https://www.cnbc.com/2018/04/10/facebook-cambridge-analytica-a-timeline-of-the-data-hijacking-scandal.html
Confessore N. & Isaac M., 2018. “Cambridge Analytica scandal fallout: A tech crisis with deep political consequences”. The New York Times.
Retrieved from: https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html
(7) McCallum S., 2022. “Deepfake video of Ukraine's Zelenskyy sparks warning on fake news”. BBC News.
Retrieved from: https://www.bbc.com/news/technology-64075067
(8) Even though it is not strictly correct to classify the EU among “nations”, this section includes EU secondary legislation, given the peculiar way it is implemented within the Member States.
(9) This term refers to websites involving user-generated content, ease of use and, in general, participatory culture for end-users.
(10) The U.S. Supreme Court embraced the concept of the "free marketplace of ideas," as articulated in Justice Holmes' dissenting opinion in Abrams v. United States (250 U.S. 616, 1919). The Court has established a high threshold for restricting freedom of speech or the press, as demonstrated in cases like, ex multis, New York Times Co. v. Sullivan (376 U.S. 254, 1964) and Brandenburg v. Ohio (395 U.S. 444, 1969). In contrast, the European Court of Human Rights (ECtHR) takes a more relativized approach to this right. Art. 10(2) of the European Convention on Human Rights explicitly allows for limitations.
(11) Pollicino O., Bassini M. & De Gregorio G., 2022. “Internet Law and Protection of Fundamental Rights”. BUP.
(12) This assessment does not consider provisions related to copyright and terrorism, as they fall outside the scope of this essay.
(13) See Arts. 4-6 of the DSA.
(14) The rationale behind this veto can be found in the message of Governor Edmund G. Brown Jr., which states that “As evidenced by the numerous studies by academic and policy groups on the spread of false information, the creation of a statutory advisory group to examine this issue is not necessary.”
(15) For instance, the Trusted News Initiative (TNI), started by the BBC and later joined by many US companies in the media sector (from news media to social media), works to tackle real-time disinformation.
(16) See among others, Meta, 2024. “Community standards and misinformation policies”.
(17) Louis D. Brandeis, concurring opinion in Whitney v. California (1927).
(18) Lorenz T., 2022. “How the Biden administration let right-wing attacks derail its disinformation efforts”. The Washington Post.
Retrieved from: https://www.washingtonpost.com/technology/2022/05/18/disinformation-board-dhs-nina-jankowicz/
(19) Sands G., 2022. “DHS shuts down disinformation board months after its efforts were paused”. CNN.
Retrieved from: https://edition.cnn.com/2022/08/24/politics/dhs-disinformation-board-shut-down/index.html
(20) See, ex multis, New Mexico, HB 182, 2024; Indiana, HB 1133, 2024; Texas, SB 751, 2019.
(21) Defense Advanced Research Projects Agency (DARPA), 2024. “News release on AI research initiatives”.
Retrieved from: https://www.darpa.mil/news-events/2024-03-14#
(22) Defense Advanced Research Projects Agency (DARPA), 2024. “Semantic Forensics (SemaFor) Program”.
Retrieved from: https://www.darpa.mil/program/semantic-forensics
(23) DSA, Recitals 9, 69, 84, 88, 95, 104, 106 and 108.
(24) European Commission, 2024. “Communication on tackling online disinformation: A European approach”.
Retrieved from: https://digital-strategy.ec.europa.eu/en/library/communication-tackling-online-disinformation-european-approach
(25) DSA, Recital 106.
(26) The section relating to the Integrity of services (II.C) considers only the issue of automated accounts generated to spread disinformation.
(27) DISARM Foundation, 2024. “Online disinformation mitigation strategies”.
Retrieved from: https://www.disarm.foundation/
(28) European Commission. Code of Practice on Disinformation. Section IV, Integrity of Services, letter (b).
(29) Indeed, high-risk AI systems under Art. 6 of the AI Act refer to the application of this technology in certain critical fields, such as education, healthcare, employment, or law enforcement and justice.
(30) The AI Office is a centre of AI expertise tasked with implementing the AI Act; it sits within the European Commission.
(31) AI ACT, Recital 134.
(32) Provisions on the Administration of Deep Synthesis Internet Information Services, Art. 23 para. 1.
(33) Relevant States means those countries that have an impact on the development of AI technologies.
(34) Council of Europe (CoE). Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Article 30.
AI Contents: Artificial Intelligence systems (ChatGPT and DeepL) have been used to enhance the clarity and quality of some sentences in the text.