The Brussels Effect in the Age of AI: Still Strong or Losing Influence?
Author:
Edip Han Okyay - Lawyer & EU Digital Law Specialist
Abstract
The AI Act is finally here. After lengthy negotiations in the European Parliament, it made headlines as the first regulation of its kind, and "innovation versus regulation" became the new debate once it was published. Now the compliance sector is rushing toward the AI Act, with guidelines, whitepapers, and opinion papers published almost daily in an effort to understand it.
There is also a global rush to regulate AI: multilateral frameworks for AI governance already exist, from the OECD AI Principles to UNESCO initiatives. The AI Safety Summit held in the UK in 2023 likewise showed the global will to regulate AI and understand its risks. The AI Act sits at the frontier of this movement, introducing a risk-based approach to AI governance. That approach is grounded in Fundamental Rights (Recital 26), and the AI Act sets out categorizations for determining whether new technologies fall into the high-risk category.
Two things define the essence of the AI Act: the risk-based approach and its foundation in Fundamental Rights.
The raison d’être of the risk-based approach lies in Fundamental Rights, the cornerstone of the EU. This is nothing new: the General Data Protection Regulation (GDPR) already showed how firmly the EU is committed to protecting its citizens’ Fundamental Rights. These rights are enshrined in the Charter of Fundamental Rights of the European Union, which protects privacy and data protection under Articles 7 and 8.
The AI Act treats Fundamental Rights even more comprehensively. The “fundamental rights impact assessment” requirement, outlined in Article 27, sets a high standard. There is no doubt that with its firm commitment to the rule of law, the EU is now creating a new dimension in AI governance through these elevated standards.
The question is: can these new high standards for AI governance spread beyond the EU?
Markets are already familiar with the EU’s unshakable commitment to protecting its citizens’ Fundamental Rights.
The famous “Brussels Effect” even extended the influence of the EU’s regulations beyond its borders. The "Brussels Effect," coined by Professor Anu Bradford, refers to the EU’s ability to shape global regulations as companies voluntarily apply EU rules worldwide in a de facto manner, or as other governments adopt EU-style laws in what is known as the de jure effect.
The “Brussels Effect” not only demonstrated the EU’s importance in the global economy but also its ability to regulate emerging technologies in a way that aligns with Fundamental Rights. The GDPR is a great example of how the EU shaped the technology industry by showing others how to regulate privacy. Once a trend is set, it is hard to reverse, and the growing number of privacy regulations resembling the GDPR proves this point. Some scholars, however, argue that the existing literature may overstate the EU’s influence as a global standard-setter in regulation.
Ultimately, it can be said that the EU has influenced global privacy regulations, though with limitations. How far the “Brussels Effect” truly reaches is a question of perspective. Yet the same level of impact may not be expected from the AI Act, as the underlying conditions have since changed.
Even with the GDPR, significant issues have emerged regarding its enforcement and its relationship with U.S. companies. These include the collection of fines, the use of privacy-invasive practices by non-EU companies, and controversies over data transfer mechanisms, with the most notable example being the U.S.-EU agreements and the latest adequacy decision, which followed years of legal uncertainty after the previous framework was invalidated.
With the global race to regulate AI, what role does the “Brussels Effect” still play?
Regulation of AI and its effects are constrained by geopolitical and economic realities.
The AI race is nothing new, but rather part of a broader strategic rivalry between global political blocs. However, the regulation of AI is certainly a new phenomenon in international law. At its core, it is a contest of values over how AI governance should be shaped, whether it should be based on Fundamental Rights or driven by national security and corporate priorities.
The time when the EU was at its peak, when EU–US relations were at their best, and when there was no war near the EU’s borders now feels distant. That time gave birth to the “Brussels Effect.”
The “Brussels Effect” depends on how strong the EU can remain. There are two constraints on the EU these days: the first is the harsh new reality of the Euro-Atlantic relationship after the Trump administration took power, and the second is the highly competitive AI industry led by the US and China. The EU’s regulatory influence now stands at a crossroads, challenged by shifting alliances and technological rivalry. The changing geopolitical order also makes cooperation in AI more difficult and the competition more intense.
The US is now building new AI infrastructure worth 500 billion dollars to surpass its main rival, China. The Trump administration sees regulation as an impediment to innovation. Trump’s AI Executive Order rescinded President Biden’s Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”; this clearly shows where the AI race is heading. Under these conditions, widespread adoption of a similar risk-based approach to AI does not seem likely.
The realities of geopolitics and the fierce competition to dominate AI could even jeopardize Human Rights beyond borders. The “fundamental rights impact assessment” is a preventive measure in the AI Act to protect the rights of EU citizens, but borders are not so strict on the internet, and with a few clicks, those rights could also be in peril.
How could the “Brussels Effect” take place regarding the AI Act under these circumstances?
The order of the power structure is clear: economy first, then politics, then regulation.
AI will bring enormous economic gains, so politics will be shaped around those gains, and regulation comes last. Even with the GDPR, unresolved issues remain on the table, and enforcement is difficult. The new Trump administration sees any regulation directed at U.S. companies as a threat and claims it will disregard fines imposed on those companies. When the AI Act is seen only as a financial burden, that perception will damage its reputation.
Another threat is that what began as a commitment to building AI technologies that uphold the Fundamental Rights of EU citizens is now at risk of being reduced to simple rhetoric, framed as a choice between innovation and regulation. Under the constant attack of this rhetoric, perspectives on the AI Act could shift in the mid- and long-term, increasingly viewing it as an obstacle rather than a form of protection.
The AI Act still has a long way to go before it achieves global influence, and critics will be quick to dismiss it as redundant or as a block to innovation. How other countries respond will depend on their underlying motivations, although the current trend seems to favor a risk-based approach.
Certainly, the AI Act is a game changer in the legal tech scene and will have an impact on those who want to do business in the EU. How far its effects can reach will depend on multiple factors and on the outcome of the rhetorical battle that continues to frame the debate as innovation versus regulation. The road ahead is not the same as it was, and the AI Act’s relevance will be put to the test globally.
The long-term influence of the AI Act will depend not only on its legal architecture but on the EU’s ability to maintain regulatory credibility in an increasingly polarized global environment. As geopolitical tensions rise and economic competition intensifies, the EU must demonstrate that its model of rights-based governance is the best way to address the risks AI will bring. Only then can the AI Act move beyond European borders and serve as a blueprint for global AI governance, potentially reigniting the "Brussels Effect."
Bibliography
Apacible-Bernardo A., and Fischer L., 2024. "Identifying global privacy laws, relevant DPAs". IAPP News, 19 March 2024.
Bradford A., 2012. "The Brussels Effect." Northwestern University Law Review, 107(1), pp. 1–67.
Bradford A., 2020. "The Brussels Effect: How the European Union Rules the World". Oxford: Oxford University Press.
Available at: https://academic.oup.com/book/36491
European Union, 2012. "Charter of Fundamental Rights of the European Union". Official Journal of the European Union, C 326/391, 26 October 2012.
European Parliament and Council of the European Union, 2024. "Regulation (EU) 2024/1689 Laying Down Harmonised Rules on Artificial Intelligence (AI Act)." Official Journal of the European Union, L 2024/1689, 12 July 2024.
IAPP Research and Insights, 2024. "Global AI Law and Policy Tracker". IAPP.
Iyengar R., 2025. "Trump teams up with OpenAI, Oracle and SoftBank for $500 billion AI project". CNN Business, 21 January 2025.
Available at: https://edition.cnn.com/2025/01/21/tech/openai-oracle-softbank-trump-ai-investment/index.html
Jacobson J.B., Swanson S.A., Helleputte C.-A., and Emery R.P., 2025. "A New Era: Trump 2.0 Highlights for Privacy and AI". The National Law Review, 11 February 2025.
Rankin J., and Waterson J., 2025. "EU accuses Google and Apple of breaking its rules, risking Trump clash". The Guardian, 19 March 2025.
Available at: https://www.theguardian.com/business/2025/mar/19/eu-google-apple-trump-digital-markets-act
Young A.R., 2015. "The European Union as a global regulator? Context and comparison." Journal of European Public Policy, 22(9), pp. 1233–1252.
Biography of the Guest Expert
Edip Han Okyay is a Berlin‑based lawyer whose expertise in EU technology and privacy law has guided companies through the complexities of the GDPR, the Data Act, and the emerging AI Act. He is a trusted advisor for international clients navigating digital compliance, having worked in settings such as TechGDPR, where he focused specifically on data protection.
Holding a Law degree from Ankara University, he deepened his interdisciplinary perspective with an M.A. in International Security Management from the Berlin School of Economics and Law. His academic work combined doctrinal legal analysis with strategic insights into cybersecurity, technology policy, and regulatory frameworks across Europe and Turkey.
He began his professional journey at Celik Law Firm in Ankara, where he gained hands‑on courtroom experience prosecuting cybercrime cases, analyzing digital evidence, and crafting litigation strategies in criminal proceedings. He then transitioned to TechGDPR in Berlin as a Digital Law & Privacy Consultant, drafting comprehensive privacy policies and data‑processing agreements and co‑authoring expert articles on EU tech regulations, including the Data Act and AI Act.