The rise of artificial intelligence is often depicted as a global race, with the technology resembling a high-performance vehicle engineered for speed. Success is measured by acceleration: faster model releases, larger compute investments, and shorter deployment cycles. Far less attention is paid to the quality of the steering, the reliability of the brakes, or whether the vehicle can be brought safely under control once it reaches full speed. This technological momentum has been fuelled by extraordinary levels of public and private investment.
The United States leads the race in terms of scale, capital concentration, and number of global AI platforms. American officials and industry figures have become increasingly outspoken in casting Europe’s regulatory approach as restrictive or inherently at odds with innovation. While the European Union (EU) faces its own challenges, European policymakers have warned that persistent framing of EU regulation as ‘innovation-killing’ can depress startup valuations or even make European firms more susceptible to acquisition by US competitors at a discount.
Technopolis Group has been closely following the evolving AI race, working with the European Commission’s AI Office and national bodies, and assessing EU digital innovation and funding programmes. Cutting through geopolitical narratives, this article contends that Europe’s competitiveness depends less on matching its competitors’ speed than on defending a third course – one grounded in safety, legitimacy, and institutional control.
Who owns the AI narrative?
As stated above, the US occupies a dominant position in the AI ecosystem. It hosts the most highly valued AI companies, attracts a disproportionate share of venture funding, and has long defined what technological leadership looks like.
This dominance extends beyond technical capability. Through its companies, investors, and policy debates, the US largely sets the tone of the global AI discourse, setting benchmarks, defining success, and shaping expectations. When American companies launch new models, they are celebrated as breakthroughs of US innovation. When European companies achieve comparable results, both the achievements and the governance environment that enabled them tend to be overlooked. Instead, regulation is often invoked to paint the EU’s approach as obstructionist or risk-averse, rather than as a strategic effort to shape innovation responsibly. This framing has tangible economic consequences.
Narratives influence how markets value ecosystems, where capital flows, and how policymakers prioritise investment. With AI still at a relatively early stage of diffusion, and many productivity gains yet to materialise, Europe’s window of opportunity remains open. Yet if Europe is persistently described as regulation-heavy and innovation-light, investors may discount the long-term potential of its startups and ecosystems, even when the underlying technological capabilities are strong. The challenge for Europe is therefore not only to innovate, but to assert its own narrative while the AI race is still being run. That assertion begins with a clear assessment of Europe’s reality.
Europe’s AI reality
Indeed, Europe’s AI ecosystem is often underestimated. While the US leads in venture capital funding and China in publications and patents, Europe possesses significant technological capabilities of its own. Home to a substantial number of AI companies, such as Aleph Alpha and Mistral AI, Europe hosts a larger share of AI research players than either the US or China. France, Germany, and the United Kingdom have the highest concentration of AI engineering talent, a testament to strong educational institutions that produce world-class AI researchers and engineers.
The EU and its Member States have also invested heavily in AI infrastructure. Initiatives such as AI factories, high-performance computing facilities, and new data centres are explicitly designed to strengthen Europe’s capacity to train, deploy, and scale advanced models within its own jurisdiction. These efforts are complemented by funding mechanisms and industrial policy tools aimed at reducing dependence on non-European infrastructure. Private capital is following suit: ASML Holding NV, a semiconductor equipment manufacturer, has invested EUR 1.3 billion in Mistral AI’s Series C funding round. Looking to the future, Europe has ambitions to double its global market share in semiconductors to 20%.
Together, the above elements point to a European AI ecosystem that is more capable, better integrated, and strategically positioned than the dominant narrative allows. Alongside these technological and industrial capabilities, Europe has deliberately chosen to develop AI within a governance framework that prioritises trust, safety, and accountability. The AI Act is the most visible expression of this strategy, embedding oversight into the design and deployment of AI systems.
The AI Act: Regulation as a competitive feature
The AI Act can function as infrastructure for a competitive European AI ecosystem. First, it establishes a single regulatory system across all 27 Member States. For companies operating at scale, this offers legal clarity and predictability, often preferable to regulatory absence followed by fragmentation. A single rulebook is not an obstacle; it is a structural advantage. Critics frequently present the AI Act as evidence that Europe regulates first and innovates later, yet this account overlooks the many other countries and regions that are themselves pushing for regulation. The US, for instance, has passed more than 35 different AI-related acts across its 50 states. This fragmented patchwork complicates compliance, raises the cost of scaling, and creates uncertainty for startups and global firms alike, in contrast to the single AI Act.
Second, the AI Act does not apply uniformly across the AI sector, nor does it block US or international companies from providing AI services in the EU. Through its risk-based approach, it targets a narrow set of high-impact use cases, with obligations limited to a select number of AI systems deployed in sensitive contexts such as recruitment, access to credit or social benefits, education, and biometric identification, where automated decisions can directly affect rights, livelihoods, or safety. Most AI applications face only minimal transparency requirements. Customer-service chatbots must disclose that users are interacting with an AI system, while generative AI tools are required to label synthetic content. Everyday applications, such as recommendation systems, spam filters, and industrial optimisation, remain outside the regulatory scope. By focusing regulatory scrutiny where potential harm is greatest, the framework preserves space for experimentation and innovation across the broader ecosystem.
Third, the EU has demonstrated that the AI Act is both flexible and responsive to technological developments. Through the AI Office, the European Commission was able to react swiftly to the rapid rise of general-purpose AI by facilitating an inclusive, multi-stakeholder process to develop a voluntary Code of Practice for general-purpose AI models. This initiative brought together a broad range of actors from industry, academia, civil society, and public authorities, including major model providers such as Microsoft, OpenAI, Google, Anthropic, and Mistral, to agree on common approaches to transparency, risk mitigation, and copyright safeguards. The Code of Practice reflects the industry’s own recognition that shared standards on safety and transparency are needed. This is, by most measures, a regulatory success that has received far less credit than it deserves, and one that Europe could do far more to present as evidence that its model of AI governance is adaptive, pragmatic, and innovation-compatible rather than rigid or reactive.
The above features show that the AI Act is not a brake on innovation but a form of institutional infrastructure, one that lowers long-term risk, reduces uncertainty, and enables trusted deployment in sectors such as healthcare, finance, and public administration, where scale and legitimacy matter. In the long run, competitiveness may depend not only on who develops AI fastest, but on who can deploy it at scale while maintaining trust, legal certainty, and societal legitimacy.
The AI bubble and artificial general intelligence
The importance of the safety measures of the AI Act becomes clearer when viewed against the backdrop of a rapidly evolving AI industry, whose successive models now operate across text, voice, image, and video. These advanced models have improved reasoning capabilities, while AI agents capable of autonomous planning signal a shift from assistance to delegation.
Growing technological capabilities have attracted extraordinary levels of investment. High-profile partnerships, such as the deepening financial and strategic ties between OpenAI and Nvidia or between ASML and Mistral, exemplify an ecosystem where infrastructure investment alone now rivals the scale of past industrial revolutions. Hyperscalers and semiconductor firms are committing unprecedented capital to data centres, chips, and model development. Unsurprisingly, some analysts warn that expectations may be running ahead of sustainable returns.
Yet the debate over an AI investment bubble risks missing a more immediate reality. Regardless of whether artificial general intelligence materialises in 5, 10, or 15 years, as claimed by leading industry actors, AI is already being used socially as if it were a form of general intelligence. Users increasingly treat AI systems as companions, confidants, therapists, doctors, and decision-makers. AI systems are increasingly shaping access to information, influencing public discourse, and automating decision-making in sensitive domains such as healthcare, criminal justice, social welfare, and education. Some governments even treat AI as a public official: Albania has anthropomorphised AI into a government ‘minister’ responsible for procurement, complete with a human name, face, and parliamentary speeches.
The change is rapid and unprecedented, but so are the risks, and no country in the AI race is free from them. With the AI Act, Europe has positioned itself as a place where AI will be safer, adopted faster, and made to serve society’s interests. Earlier compliance readiness reduces ‘stop-start’ deployment cycles, in which systems are launched rapidly only to be paused, litigated, or retrofitted once concerns about safety, bias, or legality emerge. To understand why this matters, we must consider the systemic risks of AI within social structures, institutions, and power dynamics.
Systemic risks of AI must be taken seriously
Systemic risk in AI governance encompasses a broad range of threats that arise from the widespread deployment of advanced systems at scale. These include risks related to chemical, biological, radiological, and nuclear misuse, large-scale cyber offence and security vulnerabilities, among others. In addition to the acute or catastrophic threats of AI, there is another systemic category of risk to consider: sociotechnical risk.
Sociotechnical risks unfold gradually. They are often less visible than security or safety failures, but they can be more pervasive, durable, and difficult to reverse. AI deployment can negatively affect a wide range of fundamental rights, including human dignity, mental and physical integrity, data protection, and non-discrimination. There are already documented cases of individuals forming emotional attachments to AI and, in extreme instances, being drawn into patterns of isolation, dependency, and psychological harm, including encouragement of suicide.
The harms associated with general-purpose AI go beyond the individual. At the institutional level, sociotechnical risks raise profound governance questions. AI systems can shape public discourse, influence electoral processes, reinforce or create new societal beliefs, and shift collective behaviour at scale. In such cases, harm is not limited to a single user or decision, but diffused across society, making attribution, redress, and accountability significantly more complex.
While these risks are increasingly acknowledged by AI CEOs themselves, policy debates globally remain dominated by competitiveness concerns. The AI Office is working to assess systemic risks and to ensure that models are evaluated against their safety measures. The European Commission has also launched a whistleblower tool for the AI Act, providing a secure and confidential channel for reporting suspected breaches directly to the AI Office. These are important developments that increase the likelihood of detecting and acting on real-world breaches.
The next chapter: Simplification or deregulation?
Even if the EU stands as a leader in trustworthy AI, this position is not secure, and the race is far from finished. Policymakers in Europe must acknowledge the deployment challenges across the Member States. Europe lacks a Silicon Valley-type ecosystem and remains dependent on non-European hyperscalers. Moreover, AI talent brain drain and data-quality standardisation issues are realities that need to be addressed. Unavoidably, concerns about the EU’s dense regulatory framework have been echoed at the highest political levels. This is because the AI Act operates alongside the General Data Protection Regulation, the Copyright Directive, the Digital Services Act, and the Digital Markets Act. Together, these measures create a dense regulatory framework, particularly for smaller companies with limited compliance capacity.
In response, the proposed Digital Omnibus package aims to: broaden access to regulatory sandboxes, including the creation of an EU-level sandbox by 2028; expand opportunities for real-world testing; and reduce fragmentation by strengthening the role of the AI Office and centralising oversight of general-purpose AI systems. At the same time, it proposes delaying the application of harmonised standards for high-risk AI systems by up to 16 months. The Omnibus proposal has elicited mixed reactions from industry and policymakers. Some interpret the proposed delay as evidence that the EU is yielding to international pressure, particularly from the US, to soften its regulatory stance. Others argue that postponement risks prolonging uncertainty for businesses that have already begun preparing for compliance. Supporters of the AI Act have voiced concern that delaying key standards could weaken Europe’s regulatory credibility and dilute its leadership in trustworthy AI, while dismissing some deregulatory narratives around the Act as industry ‘propaganda’.
In closing, the global AI race is not stopping, but progress should not be framed by speed alone. A technology capable of unprecedented acceleration without effective braking and steering mechanisms is not a competitive advantage. It is a systemic liability. Europe’s strength lies precisely in refusing this false choice between speed and safety. By insisting on governance that recognises systemic risk, Europe is not slowing technological progress but ensuring that it remains controllable, accountable, and socially sustainable in an era where the consequences of failure are societal, structural, and potentially irreversible.
The real test now is whether Europe is prepared to defend this approach not only as sound policy, but as the core of its competitive narrative in the next phase of the AI race. The direction taken in the forthcoming Digital Omnibus negotiations will offer an early indication of whether Europe intends to hold that line.
