Technopolis Group

Artificial intelligence is reshaping how public administrations operate, from automating repetitive tasks to improving the efficiency and quality of services. Through this transformation, governments across Europe are making services more efficient, human-centred, and inclusive. Yet when it comes to innovation, the public sector is often dismissed as too slow compared with the private sector, with budget constraints and concerns around digital sovereignty and security cited as further obstacles to AI implementation.

This ‘public = slow, private = fast’ narrative fundamentally misunderstands what the public sector is trying to achieve and the values that should inform public policy. Technopolis Group has conducted evaluations and impact assessments of technological innovation in governments across Europe¹, including a recent study on IT accessibility for the European Commission’s Reform and Investment Task Force and the German Federal Ministry for Digitalisation and State Modernisation.

This work has led us to a new approach for making public services and information more accessible (‘digital accessibility’). This approach ensures governments put citizens, and the institutions that protect them, at the centre when pairing this new technology with public services.

Leadership in AI: Public versus private perceptions

Governments around the world are investing in AI, developing publicly owned AI chatbots, Large Language Models, and more. Still, this investment varies across countries and proceeds at a slower pace than in the private sector. This is not a problem in itself, but there are worries that a widening gap between public and private sector capacities could undermine the credibility and efficiency of governments in the eyes of their citizens.

To understand the public–private AI deployment dichotomy, it is important to consider what, and who, is at stake. The ‘move fast and break things’ tech-leader mindset is not reassuring when lives and livelihoods hang in the balance. Beyond shareholders and sales targets, public institutions (we hope) are accountable to citizens. European public institutions in particular face distinct expectations and constraints: budget limitations, concerns around digital sovereignty, and the duty to guarantee accessible and trustworthy services for all citizens. These conditions profoundly shape how, and how quickly, AI can be deployed. When it comes to public service delivery in Europe, there are laws governing the treatment of personal data and data management, regulating the digital sector, and requiring accessibility for all. This legal framework extends to AI development and use in the public sector, most explicitly through the EU AI Act, adopted in 2024.

When these protections are breached, governments lose public trust and credibility, which can be difficult to recover. In 2020, the Netherlands was left reeling after a scandal broke involving the wrongful rejection of tens of thousands of childcare benefit applications. A central element of the toeslagenaffaire was the algorithmic bias of the AI model used to review applications: the automated risk-profiling system incorrectly flagged some applications as fraudulent, and many of the rejected applications came from lower-income households, migrants, and other vulnerable groups. A decision that would previously have been made by a human in a physical office became even more alienating and difficult to contest.

Against this backdrop, ‘slow’ adoption in the public sphere is not a weakness. It could be a form of ethical leadership, ensuring that technology serves society rather than the other way around. 

Accessibility and AI: A requirement and an opportunity

AI uptake should aim to build trust and serve everyone, regardless of skill, situation, or ability. A key part of achieving this is adhering to digital accessibility standards. In the public sector, this means ensuring that assistive technologies such as screen readers work with government services, that content is written in plain language, and that alternative ways of accessing services – offline or by phone – are always on offer. Through the paradigm of digital accessibility, new solutions are emerging within public administrations and for the general public, particularly to assist persons with disabilities.

Trends and enablers in public sector AI: Principles in practice

Access to information

A common trope of the public system is the maze of administrative paperwork and heavy processes. In contrast, AI-driven tools are being integrated into smaller administrative processes where there is low risk and clear benefit (e.g. routine Q&A chatbots, triaging queries, drafting standard messages). For instance, employment agencies can use AI to pre-screen applications for completeness and to verify student certificates. On the one hand, this saves staff time on basic administration so they can concentrate on more complex cases. On the other, citizens can access information more quickly than ever – at any time of day, and in clear language.

Upskilling access

AI is increasingly viewed as a solution to staff shortages, an ageing workforce, and high workloads in the public sector. Automated and data-driven tools can streamline workflows, freeing up time for more personal interaction with citizens. For this to work, public servants still need to adapt, since their roles and skills will change. For some, especially those who have worked the same way for years, this shift can be challenging. There has been a growing focus on narrowing the digital divide. For public administrations, this can mean changing work culture and processes as well as training staff and building institutional capacity. Collaboration and knowledge-sharing are also increasing, both between different departments and between academia and the private sector.

Fairer access

Historically, access to public information has been a challenge for persons with disabilities, whether due to limited public resources, a shortage of specialised staff (e.g. interpreters), or weak legal protections. Now, governments are using AI to improve and expand access to public information and services. One of the most promising developments is AI-based sign language translation for persons with hearing impairments. This innovation, which automatically converts speech, video, or text into sign language, could dramatically expand access to public information and services. Likewise, in the legal system, automated transcription of the spoken word can improve access to court proceedings, as seen in Spain and Slovenia, according to the OECD. Still, accuracy, cultural nuance, and trust remain essential conditions for success.

Four reminders for responsible innovation

From the above examples, we can see that governments must be mindful of the limitations of digital innovation and act responsibly when new technologies are embedded in processes. 

First, not every process can (or should) be automated. Have you ever had a chatbot fail to answer your question? While AI can make processes faster, human reasoning and interaction should never be removed entirely, particularly when it comes to public benefits and programmes. In Germany, for example, AI-based tools are used to make information accessible to persons with learning disabilities. Generally, these systems translate text into plain language; however, the generated translations always need thorough review for mistakes or mistranslations. This raises an important question: can AI tools end up being more labour-intensive than translating texts manually?

Second, do not underestimate the design phase. Governments may avoid large-scale deployments until they better understand the long-term effects. Before jumping to deployment, applications need testing and evaluation to ensure that AI initiatives strengthen inclusion from the outset rather than creating new barriers. The AI-based sign language tool mentioned above, for example, can have tremendous added value, but to be effective it must be co-designed and piloted with user communities that reflect the diversity of potential end users. Testing with real users helps identify issues experienced by users with disabilities that might otherwise become unintended new barriers.

Third, AI needs to be part of a broader strategy for digital transformation. For AI solutions not only to improve operational efficiency but to have a strategic impact on the public sector, administrations need the right data infrastructure, governance, and skills. Digital sovereignty, and initiatives like Eurostack, have garnered a lot of attention in recent EU policy discourse, for good reason: most AI computing, chips, and other systems used in Europe come from the United States or China. Through monitoring exercises such as the eGovernment Benchmark, and through investment in European infrastructure and tools hosted on European servers, considerable effort is going into building this capacity and making it interoperable across governments. For this to be effective, local and regional administrations need support on data and technology. Digital twin projects, like the one recently undertaken by Technopolis Group, help advance local-level digital capacity.

Fourth, owning mistakes can salvage credibility. Public institutions should be transparent when mistakes occur, especially when errors affect human rights or benefits (and, as in the Dutch childcare benefits scandal mentioned above, have devastating consequences). Citizens should be encouraged to report errors and to contest automated decisions. This transparency does more than restore public trust in institutions: other public bodies and departments can learn from these mistakes and improve their own services.

A fairer, more accessible future

This piece takes stock of how AI is actually being used in the public sector, acknowledging the systemic challenges that remain in accessing services. The demand for AI in the public sector is just as strong as in other sectors, but the public sector has additional responsibilities which will, and should, influence the pace of change. By treating digital accessibility as a compass, the public sector can ensure AI is not only a formidable technological innovation, but also a tool to make public resources and services more available, fairer, and more trustworthy.

As new technologies (whether AI or beyond) emerge, governments need to adapt to new needs, harness trends, and invest strategically in technology. Even where infrastructural and resource limitations persist, governments must keep accessibility at the core of their digital transformation strategies when adopting new tools and systems. Turning principles into practice requires technical support and training, but also a shared culture of openness and collaboration. Technopolis Group supports this transition through projects that help public bodies assess AI implications, strengthen governance, and build inclusive public services.


  1. To name a few: GovTech: https://technopolis-group.com/new/how-can-govtech-help-modernise-the-public-sector/ and Local Digital Twins Toolbox Procurement: https://technopolis-group.com/new/shaping-the-digital-and-sustainable-transition-of-cities-with-local-digital-twins/  ↩︎
