The AI Regulatory Landscape of the EU, UK and US in 2026
As AI technology, and AI agents in particular, grows more capable and more widely deployed, the regulatory landscape has shifted with it. A few years ago, AI governance rested on voluntary ethical standards and on compromise in favour of enterprises. Regulatory bodies have since moved towards hard enforcement, leaving far less room for interpretation.
Within this context, three main players with markedly different views are shaping how their regions approach AI. The European Union has taken the strictest approach, prioritising risk management above all; the UK has opted for a sector-led model that prioritises flexibility and a laissez-faire stance; and the US is going "all in" on AI, prioritising free markets and competitiveness.
The European Union: hard enforcement
For years, legal scholars believed that AI legislation and governance would see a so-called "Brussels effect": every country would follow European regulations and become stricter on AI governance. That effect has not materialised; even so, the EU Artificial Intelligence Act remains the most comprehensive and holistic legislative framework to date.
2026 marks the beginning of a new challenge for the EU: August 2nd is the deadline for the majority of the Act's provisions to take full effect. This means even stricter obligations for "high-risk" AI enterprises, those that deploy AI in education, infrastructure, employment and law enforcement. By August 2nd, these companies must provide documented risk management systems, rigorous data governance, clean training datasets, and human oversight mechanisms.
Some argue that the EU is over-regulating the AI space, and this year's controversy in Parliament, the "2026 Digital Omnibus Package", reflects that tension. Because the EU already regulates the cybersecurity and data spaces heavily, the new AI rules have generated friction and compliance fatigue, a governance drag that slows businesses' adoption and deployment of new technology. The Digital Omnibus Package therefore seeks to harmonise and restructure these overlapping acts for a smoother implementation. That is easier said than done: many members of Parliament worry that critical provisions affecting the AI space will have to be delayed as standard-setting bodies struggle to keep pace with legislation.
Regardless of current and future challenges, the European Union remains the most cautious of these economic areas. Parliamentarians' shared priorities, reducing risk, limiting children's access to and interaction with AI agents, and protecting sensitive information, are what allow these acts to move forward with unanimity. But is this caution placing the EU behind in the race to innovate and become a global AI leader?
The United Kingdom: agility above all
The United Kingdom has taken a completely different path from the EU's Digital Omnibus approach, positioning itself to attract talent "fleeing" neighbouring countries with stricter legislation. The UK sees AI not as a threat but as an opportunity: a chance to lead at the forefront of innovation and to recruit founders who view the islands as the place to build their dream product with ease.
To remain agile, the UK relies on its existing regulators, such as the ICO (Information Commissioner's Office) and the CMA (Competition and Markets Authority). The ICO has taken an aggressive stance on agentic AI and data handling, closer to the EU's AI Act, though not as precise or severe. The CMA's main goal in the AI era, by contrast, is to prevent "Big Tech" from colonising the innovation space. This combination is what chiefly attracts business owners and talent from around the world: the feeling of being protected and, most importantly, celebrated for taking a chance on AI technology.
A good example is Synthesia, a unicorn that uses AI to create professional videos with human-like avatars. Founded in 2017 and based in London, the company raised $200 million in a Series E funding round in January 2026. Companies such as Synthesia show how the UK's AI policies attract and drive growing ventures. The near future, however, could either catalyse the AI sector or trigger a massive talent exodus.
March 19th is the deadline for copyright submissions to the AI Bill consultation; the Bill will be presented in May and will set out the government's stance on AI training in detail. It is expected to avoid broad categories and overly technical, strict provisions, focusing instead on sector-specific issues and the safety of general-purpose AI.
Some say this is not enough regulation, that the risks of leaving general-purpose AI so lightly governed are too high and could lead to terrible outcomes, pointing to Anthropic's agent threatening an engineer who suggested shutting it down during a simulation. However, the UK's position might be the smartest economically and socially: in recent years, 60% of high-skilled workers have left the country seeking opportunities elsewhere (The Migration Observatory, 2026). Businesses have also fled, so sector-specific provisions might be the key to keeping companies agile and ensuring they continue to view the UK as the leading country for business ventures.
The UK's regulation, or lack of it, might therefore be what positions the country at the forefront of innovation, making it one of the AI leaders of the future with an incumbent, first-mover advantage in the European area that no other country could easily take away.
The United States: the legislative war
In contrast with the jurisdictions above, the United States is in the middle of a constitutional tug-of-war between Washington, D.C. and individual states, which are taking completely different approaches to AI.
In the absence of a unified federal law, some states have taken the initiative and drafted their own legislation. California's SB 53 (Transparency in Frontier Artificial Intelligence Act) took effect on January 1st, 2026, targeting "frontier models" trained with massive compute power. The Act requires companies to publish safety frameworks and to report critical incidents no later than 15 days after their occurrence. Critical incidents include unauthorised access, the materialisation of catastrophic risks such as the death of more than 50 people, developers losing control of models, and deceptive evasion of instructions by a model.
Colorado's SB 24-205, which takes effect on June 30th, leans closer to the EU's legislation. Like the Europeans, Colorado has chosen to be strict about AI involved in high-stakes decisions such as housing and banking.
Trump's inauguration brought federal legislation closer than these states expected. At the beginning of the year, President Trump proposed the TRUMP AMERICA AI Act, whose central idea is AI supremacy. The bill aims to preempt state laws such as California's and Colorado's, which are portrayed as destroying "America's competitiveness"; to limit the immunity of AI platforms that currently facilitate illegal or inappropriate content; and to create "safe harbours". These safe harbours are the main focus of controversy: the Act proposes exemptions for companies that abide by NIST (National Institute of Standards and Technology) frameworks, which translates to no state prosecution in exchange for "voluntary" disclosures of standards.
It comes as no surprise that the United States has taken this approach. Although a federal law is more efficient than state-by-state regulation, the TRUMP AMERICA AI Act is kept deliberately vague to promote competition and position the US at the forefront of innovation, with little attention to ethical standards. Yet competition without regulation in the AI space invites the exploitation of confidential data and the depletion of natural resources. We are already observing this: the more than 60% of US data centres based abroad drain the resources of neighbouring countries.
With regulation this lax, we can expect the US to lead in AI advancement, adoption, and innovation, but to remain an enduring black box in which data handling and processing stay a mystery that no business is likely to disclose.
The best way forward
Analysing these regulators separately and together is key to understanding the AI landscape and its implications for talent and the economy. Jobs may be created or destroyed, economic growth is possible, and resource scarcity is not far from reality. Personal opinions aside, AI governance is still too new a space to identify the best way forward, so we will have to wait and see which of the three manages to strike the right balance between opportunity and safety.
Bibliography
AI.Gov | President Trump’s AI Strategy and Action Plan. (n.d.). https://www.ai.gov/
Bill Text - SB-53 Artificial intelligence models: large developers. (n.d.). California Legislative Information. https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202520260SB53
Digital Omnibus Regulation Proposal. (n.d.). Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal
EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act. (n.d.). https://artificialintelligenceact.eu/
Net migration to the UK - Migration Observatory. (2026, January 26). Migration Observatory. https://migrationobservatory.ox.ac.uk/resources/briefings/long-term-international-migration-flows-to-and-from-the-uk/
SB24-205 Consumer Protections for Artificial Intelligence | Colorado General Assembly. (n.d.). https://leg.colorado.gov/bills/sb24-205
Synthesia. (n.d.). Synthesia: #1 AI video platform for business. https://www.synthesia.io/