AI systems travel globally, but AI rules and practices do not (necessarily). They are borrowed, adopted, or even copied by countries whose contexts differ dramatically. This qualification matters: if AI is going to support development rather than deepen dependency, governance must be locally meaningful, not globally imposed. Otherwise, top-down governance models risk leaving citizens unprotected and development goals unmet.
What is at stake when policymakers and AI regulators truly go local? As part of our work on AI governance for Latin America, Technopolis Group recently performed a scoping exercise for the Canadian International Development Research Centre. This article uses the Latin American story to craft a regional account of effective, transparent, and accountable AI governance.
A global playing field or a widening gap?
Around the world, AI presents a development challenge. This is not inherently negative; AI can be a catalyst for improving agricultural forecasting, attracting infrastructure investment, and fostering gender equality. However, international reports already point to potential gaps in digital and AI readiness that fall along the lines of at-risk regions and vulnerable communities. Indeed, the technology's perceived revolutionary potential may bring unwanted economic, employment, and social shifts, particularly in lower-income countries. For instance, struggles are emerging around water consumption: redirecting water destined for crops to cool data centres risks disrupting food supply chains.
Furthermore, there is the issue of who provides – versus who uses – AI systems. For most countries around the world, there is a deep technopolitical dependence on external providers across all sectors. This dependence may leave some governments eager to use AI for innovation, but with limited leverage (and sometimes limited willpower) to question or adapt the systems they deploy.
Proponents of ‘global AI regulation’ will point to global regulatory models as promising guardrails or guides for innovation that any country can opt into. These declarations, principles, and safety frameworks are accumulating at record speed, including UNESCO’s Recommendation on the Ethics of Artificial Intelligence (2021), the OECD AI Principles (2019), and the guidance offered by the EU’s General-Purpose AI Code of Practice (2025). These documents offer a valuable starting point for countries to develop their own AI regulatory standards, but too often they are treated as templates rather than references, adopted not because they neatly fit local realities but because little else is available.
Amidst these developments, another, quieter process is taking place: throughout the ‘Global South’, there is a call for AI governance that fulfils the promise of regional empowerment and transformation. Latin America’s experience with AI uptake and regulation in the public sector illuminates the realities of our global AI framework.
Latin America’s AI regulation story…so far
Latin America is at a pivotal moment. On the one hand, the region sees AI as a chance to modernise public administration, strengthen transparency, and deliver services faster and more equitably. On the other hand, there is a call to improve the competitiveness of private companies, fostering innovation and productivity based on AI. Policymakers are trying to move quickly to create safeguards and strategies – for both private and public sectors – as well as equip public institutions.
What do these efforts look like? A regional repository of AI systems in the public sector, prepared by Universidad de los Andes (with data sourced from 26 LAC countries), includes examples of countries drafting strategies, launching pilots, and experimenting with new regulatory tools. The database can be filtered by subregion, identified users, type of system (e.g. chatbot, object/pattern/sound recognition, prediction, human-machine support), and more.
These are just a selection of examples, but they speak to a will (and potential) to integrate AI systems across different sectors and for different beneficiaries. Yet the underlying conditions in which these efforts unfold make governance extraordinarily difficult.
The risks of copy-and-paste governance
Despite these ambitions, there is no Latin America-wide AI governance coordination mechanism. Latin American countries appear to be adopting international guidelines (like the UNESCO and OECD instruments mentioned above) faster than they are adapting them to their own context, so oversight becomes not only technically complex but structurally constrained. Alongside this, governance tends to be fragmented across institutions. Ministries draft strategies, legislatures propose bills, and agencies craft guidelines, often without coordination, and sometimes working at cross-purposes. This reflects longstanding patterns of policy siloing that AI, with its cross-cutting nature, is now exposing in sharper relief.
Moreover, regulators and public agencies are evaluating algorithms and attempting to manage risks without the necessary technical expertise, infrastructure, or data ecosystems. Peru has been active in enacting AI-specific laws, but many LAC countries have no data protection authority or law to ensure suitable assessments and meet governance requirements. Yet these local authorities face expectations that even well-resourced regulators in advanced economies struggle to meet.
AI governance is not straightforward. In Latin America, it is advancing faster in theory than in practice, with high-level commitments that remain several steps ahead of what institutions can realistically implement.
The rewards of responsible AI
Amidst the above-mentioned challenges, there is a quieter but equally important story emerging across Latin America: the rewards of building responsible AI ecosystems are not abstract. Responsible AI denotes principles like fairness and inclusivity, respect for fundamental human rights, and a commitment to mitigating risks related to the technology (e.g. discrimination and algorithmic bias).
Even with uneven levels of readiness, some countries are outpacing neighbours and showing that progress can take root quickly when institutions collaborate and share what they learn. According to the 2025 Latin American Artificial Intelligence Index (ILIA), Chile and Brazil have taken measures to update their national strategies and adopt good governance practices, as has Uruguay.
Throughout the region, several governments have established regulatory sandboxes and algorithmic transparency assessments, among other initiatives. Brazil, Costa Rica, and Colombia, for instance, are finding ways to pilot these initiatives and test out responsible AI rules.
Three priorities for Latin American policymakers
Latin America has its own vulnerabilities, opportunities, and priorities when it comes to AI. For responsible innovation and digital transformation (including the AI transition), policymakers should consider three overarching priorities:
- Create a regional reference model for AI regulation. This does not mean simply translating the EU AI Act or another law into Spanish or Portuguese. To avoid copy-pasting existing laws or models, policymakers must first identify which challenges such a regulatory framework needs to address, and these challenges are connected to specific institutional realities, development priorities, and the region’s array of regulatory ecosystems.
- Set up an observatory for responsible AI in Latin America. Repositories are a good starting point to collect evidence in a transparent way. Taking inspiration from the African Observatory on Responsible Artificial Intelligence, Latin America needs its own platform: an entity capable of monitoring use cases, compiling evidence, surfacing best practices, and offering a baseline for transparency and accountability. For small and lower-capacity countries, this observatory could be nothing short of a lifeline.
- Increase South-South collaboration. AI is moving too fast for isolated national responses. Regional learning networks could help countries make strides, exchange data responsibly, and avoid costly mistakes.
Continuing the global conversation: Digital transformation that is not only innovative, but responsible
Latin America is not just another case study. When it comes to responsible AI, the region needs a playbook of its own: aligned with the region’s public-sector capabilities, and attentive to its profound socio-economic asymmetries. This is not a call for isolation. It’s a call for relevance. And relevance, in AI governance, begins with self-understanding.
For AI to be regulated effectively, it must be governed with an honest understanding of how each country actually functions and of its political and socio-economic realities. This is an understanding that policymakers all over the world are grappling with, and reaching it requires evidence-gathering, policy analysis, and stakeholder mapping and engagement.