Author contributions: This article is the result of numerous conversations regarding the current literature, policy context, and insights from recent Technopolis Group projects. Thank you to Adebisi Adewusi for your core insights, and to Tjerk Timan for getting it all down on paper. Both authors reviewed the final text. Contact mail@tjerktiman.nl.
Sometimes a foe, other times a competitor or confidant, AI tools are becoming increasingly powerful and instrumental in how we make decisions, learn, or even make a livelihood. The proliferation of applications based on large language models (LLMs) has spawned a renewed stream of ethical questions about AI related to autonomy and human rights, wellbeing, and environmental justice.
The topic of ‘AI Ethics’ covers more than societal impact, and the moral considerations related to AI go far beyond the scope of this article. For AI to serve society, or at least not harm citizens, high-level principles, such as norms and moral obligations, should be translated into the day-to-day workings of our organisations and institutions in the private and public sectors. This piece examines the relevance of ethics for AI governance and regulation at different levels of action.
The arena of digital ethics: Devices, interfaces, autonomy
Let us start with the physical and digital spaces where ethical principles or questions emerge in relation to societal norms and values. The leading voices in Science and Technology Studies and Philosophy of Technology tell us that technologies are neither good nor bad a priori, but that they are never neutral. This is partly because their creators (e.g. developers, technologists, innovators) cannot always predict how their technology will be used in society. In the early days of the landline, according to media history professor Lynn Spigel, the telephone was seen as a tool for professional communication. The inventors and designers (mainly white male engineers) never imagined that its main user group would be housewives using it to socialise. In the wake of Web 2.0 and the commercialisation of the social web, Silicon Valley engineers have attempted to better understand the social and network effects of technologies in terms of consumer behaviour. Progressively, instead of talking to people, organising focus groups, or engaging in participatory design, they have turned users of new digital services and platforms into unwitting research subjects in constant A/B testing to achieve ‘maximum engagement’ or ‘automated experimentation’. In a race for our constant attention, many techniques are in place to track user behaviour, and recommendation engines push us into self-fulfilling echo chambers.
The presence of low-quality AI-generated content (‘AI slop’) in our digital environments threatens to degrade the digital space and erode user autonomy. Even before the current AI hype, the use of algorithms to track and trace user behaviour raised ethical questions and created a need for do-no-harm principles among citizens, users, policymakers, and technology developers. This concern is most recently exemplified by agentic AI: autonomously running, self-learning software programmes (‘bots’ or ‘agents’) that start, adapt, and autocomplete daily digital activities and interactions. If these semi-self-adapting AI systems and ‘agents’ are connected to robotics or other forms of actuators, they can act in the physical world (e.g. drones or robots). Currently, this is primarily an issue for drones in military operations, but other examples are appearing: from robot vacuums that map your home and could sell that data, to police robots deployed on our streets.
From ethics to governance: In need of a framework
If the societal uses of a technology cannot always be anticipated, shouldn’t there be restrictions on technological innovation to prevent harm or damage to foundational societal institutions, norms, and principles? Scholars, journalists, policymakers, and NGOs have been calling for frameworks that align core ethical principles and guidelines for the development and adoption of AI. The list is long, but UNESCO’s Recommendation on the Ethics of Artificial Intelligence takes a human-rights approach, distinguishing a set of core values, 10 connected principles, and a set of policy areas to which nations or regions can adhere, ideally via binding treaties or common national laws. The OECD takes a similar approach, intended for uptake as an international (governmental) standard.
Many regions, including Latin America, have made responsible AI uptake a priority, developing or translating the above principles into local ethical guidelines and accompanying regulations. Other examples include the EU’s independent expert group and its guidelines for trustworthy AI, as well as South Korea’s AI law focused on safety. The Council of Europe has released a practical guide to help policymakers apply human rights and humanitarian principles (such as ‘do no harm’) to real-world decisions related to AI.
There might be a high-level framework for ethical AI, but operationalising such frameworks into day-to-day routines and social behaviour is a more nuanced matter. What would it mean to factor ethical concerns into the AI governance equation at different levels of action?
Three levels of action
On an organisational level, ethical concerns about data and AI workflows often extend far beyond the physical or legal domain of a company or institution. For any type of organisation, these ethical issues must be translated into practical governance questions; for example:
- Procurement: Do I know the company I am buying AI from? Has their code been scrutinised and tested?
- Worker autonomy and job satisfaction: How will AI-based systems change the work culture? What kinds of harm can AI bring to workers (see the case of Uber)? What digital skills are lacking in the current workforce?
- Increasing dependence and digital sovereignty: The economic dominance of certain platforms poses enormous risks – code is not only law, it also shapes culture. Ethical questions here concern technical and cultural self-determination, as well as the right and duty to develop alternative AI futures and narratives.
On an environmental level, a key ethical concern is balancing the use of resources against the benefits of AI. It is becoming increasingly clear that large AI platforms, models, and data centres consume more energy and water than self-reporting suggests. In some instances, the needs of local communities are neglected to create ‘favourable investment conditions’ for private firms and their data centres, which depend on water and other basic resources. The resulting harm is being felt and seen on both an environmental and a humanitarian level. The ethical questions governments, companies, and organisations should ask themselves include:
- Energy policy: Is it worth letting all employees/citizens use LLMs or LVMs to generate text and images if it increases the company’s environmental footprint? On a governmental level: how and when are competing interests over energy use and the electricity grid weighed, and who pays for and/or compensates AI energy use? How do AI and data centres factor into policy priorities?
- Fairness and proper use of resources: What are the consequences for resource extraction, and for fairness of resource use, when adopting AI models? Can we know the environmental score of a model? If so, who is auditing such systems?
On a societal or socio-technical level, bias and discrimination are being replicated in AI-based hiring and HR systems, face recognition algorithms are misidentifying people of colour, and chatbots are spewing racist and misogynistic epithets in a matter of minutes. There is also a massive mental health crisis across different age and demographic groups, particularly among those vulnerable to exploitative Generative AI applications. To manage these and many other risks (e.g. deepfakes, sextortion), companies, organisations, and governments should consider:
- Duty of care: What is my duty, and to whom, in preventing harms caused by AI, even if they are indirect and long-term? How can harms be monitored and risks reported?
- Education: How can citizens be better educated and informed about the logics and economics of (Gen)AI and its social costs?
- Accountability: How do we create accountability in a complex and multi-layered system of hardware, middleware, software, data, algorithms, and interfaces? How can rights be better protected and redress organised?
From armchair philosophy to social consensus
While AI is neither good nor bad, there will always be good and bad actors who make use of it. The speed at which such technologies have been opened up to society has left many societies defenceless and incapable of formulating proper moral or legal responses. It is hard to anticipate the effects of Generative AI on our work, education, social relations, and mental health, among other concerns. Across all these areas, what matters is that we make the ethical principles, the risks, and the rules understandable for everyone. This will allow people to navigate the risks they face as individuals and to participate effectively in forming new norms and governance structures around AI.
Technopolis Group has been monitoring how AI is deployed in society and across different sectors and domains, including by supporting the EU’s AI Office (see another article in this series on the EU’s approach to AI regulation)[1]. This article builds on studies of AI in different sectors and subsets of society, as well as on the work of the Group’s Digital Thematic Business Unit.
Bibliography
- SPIGEL, L. Make room for TV: Television and the family ideal in postwar America. University of Chicago Press, 1992.
- MBIOH, W. Beyond echo chambers and rabbit holes: algorithmic drifts and the limits of the Online Safety Act, Digital Services Act, and AI Act. Griffith Law Review, 2024, vol. 33, no 3, p. 189-208.
- STEEN, M, VAN DIGGELEN, J, TIMAN, T, et al. Meaningful human control of drones: exploring human–machine teaming, informed by four different ethical perspectives. AI and Ethics, 2023, vol. 3, no 1, p. 281-293.
- ULNICANE, I. Governance fix? Power and politics in controversies about governing generative AI. Policy and Society, 2025, vol. 44, no 1, p. 70-84.
- BANKINS, S, OCAMPO, A C, MARRONE, M, et al. A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 2024, vol. 45, no 2, p. 159-182.
- KWET, M. Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 2019, vol. 60, no 4, p. 3-26.
- DUARTE, T, KHERROUBI GARCIA, I, ANSHUR, R, HUMFRESS, H, ORCHARD, D, WRIGHT, S. Resisting, Refusing, Reclaiming, Reimagining: Charting Challenges to Narratives of AI Inevitability. We and AI, 2025. DOI: 10.5281/zenodo.17343830.
- VIVODA, V, BORJA, D, KRAME, G. AI’s energy paradox: governing the trilemma of security, justice, and sustainability. The Extractive Industries and Society, 2026, vol. 25, 101773.
- CRAWFORD, J, BEARMAN, M, COWLING, M, PANI, B. Deepfakes, sextortion, and virtual lovers: A new world order for generative artificial intelligence in universities? University of Tasmania, 2024. Conference contribution. Similar examples are abundant and well known.
[1] Most recent work covers, among others: AI adoption in UK businesses (DSIT), the Alan Turing Institute evaluation (Alan Turing), Support for the NTS Action Agenda AI & Data (Topsector ICT), the Study on Opportunities and Challenges of AI Technologies for the Cultural and Creative Sectors (DG CNECT), the Cloud & AI study (DG CNECT), Technical Assistance for AI Safety: Sociotechnical Risk Modelling and Evaluation (AI Office), and Support to the AI Office on the AI Act and GPAI (AI Office).
