In early December 2023, provisional agreement was finally reached between the Council of the EU and the European Parliament on the text of the EU’s new AI Act. Although some work remains before the text is finalised, its content is, for all practical purposes, now agreed. For me, this significant news came just after I completed an intensive AI Governance course organised by the IAPP. Three hundred slides on the regulation of AI will not be everyone’s cup of tea, but the course has certainly made me a little wiser about how the tectonic plates of global AI regulation may shift and settle over the coming months and years.

The Brussels Effect

One area of particular interest is the debate around whether the so-called Brussels Effect, the idea that EU regulations often become de facto global standards, will apply to the AI Act as it did to the GDPR.

The key to the Brussels Effect in the case of the GDPR lay in the combination of the EU’s international power and influence, the GDPR’s status at the time of its enactment as the most rigorous piece of data protection legislation in the world, and the regulation’s extra-territorial reach. In addition to applying to EU-based businesses, the GDPR applies where entities outside the EU offer goods and services to individuals within the EU, or monitor the behaviour of those individuals there. We are quite used to this now, but at the time of its enactment this extra-territoriality was regarded as quite aggressive. It was no doubt a driver behind the moves of some nation states, such as China, to counter such overreach by enacting blocking legislation of their own.

The AI Act's approach

The AI Act looks to tread a similar path in extending the EU’s influence beyond its 27 Member States. The Act’s extra-territorial provision (Article 2) mandates compliance from any provider or deployer of an AI system, regardless of location, where “the output produced by the system is intended to be used” in the EU. It is not yet clear what “output” means in this context or how the provision will be interpreted, but that very uncertainty may increase the likelihood that non-EU entities will take a cautious approach and lean towards compliance with the Act.

An EU Act based on universal principles

The Act classifies AI systems by risk level, with “high-risk” systems subject to rigorous obligations. High-risk systems will include certain safety-related applications, along with a number of use cases relating to biometric identification, employment, law enforcement, and similar areas. The obligations include adherence to data governance standards (Article 10), transparency requirements (Article 13), and human oversight mechanisms (Article 14). Given the EU’s significant market size and the operational complexity of maintaining different standards for different regions, global entities are very likely to be incentivised to align their AI systems with these stringent EU standards.

In addition, the AI Act’s emphasis on protecting fundamental rights such as non-discrimination and privacy, reflected in its prohibition of particularly harmful practices (Article 5), aligns with the global discourse on AI ethics. Companies around the world may come to perceive these standards as ethical benchmarks, not merely regulatory requirements.

What happens now?

As a Regulation rather than a Directive, the AI Act will be directly applicable throughout the EU without the need for implementing legislation in each Member State. Although most of its provisions will not apply until two years after the Act enters into force, businesses both within and outside the EU may draw on their experience of the GDPR’s implementation and strive for early compliance within this period, rather than at its end.

Of course, there is a chance that global AI regulation will play out differently and that the AI Act’s influence will be less pronounced, leaving businesses with diverse regional compliance strategies. However, considering the EU’s influence, the robustness and breadth of the legislation, and those extra-territorial provisions, broad alignment with the Act’s requirements seems the more likely outcome.