An important aspect of responsible innovation is stakeholder involvement. The rationale is to include different perspectives and thereby gain a broader picture of the issue at hand.

But what exactly is meant by “issue” here? In the following, I want to briefly argue that when talking about innovation and responsible innovation, scientists, technologists, and ethicists tend to focus (too much) on the visible and tangible end product (i.e., technological achievements).

Ethical AI

For example, if we look at the current ethical debate on different machine learning models, a common issue is how to “unbias” them in order to make them more “fair” and “trustworthy”. This is undoubtedly a normative task for which there is no universal solution and which depends heavily on the context. By simply applying ethics to make the technology “better”, however, we might overlook whether the model is appropriate in the first place. Do we, for example, want a predictive policing system that uses facial recognition to be “fair” in the sense that it is not biased against Black people but stops everyone equally? Besides, merely “unbiasing” a system does not make the underlying social inequalities disappear. In the end, “ethical AI” is not a question of bias (i.e., right or wrong), but of power.

Hidden labour

Instead of focusing solely on how to align certain objects or products with certain values, we should also consider the conditions and circumstances that shape the technology.

In the case of the currently much-debated ChatGPT, the model is able to produce texts that contain fewer prejudices and are less harmful compared to its predecessor GPT-3. In that sense, ChatGPT could be considered a “responsible innovation” - or perhaps rather an incremental improvement than an innovation. This does not mean, though, that ChatGPT does not produce any offensive, toxic or dangerous content. More importantly, a recent TIME investigation1 found that for ChatGPT to be able to filter out harmful content, workers in Kenya had to flag or label certain text outputs so that the model could learn accordingly. Not only were the workers paid less than $2 per hour, but they also suffered psychological distress because they were confronted with texts describing child abuse, murder or torture.

Materiality of Software

Given the requirement of human input and knowledge in the form of data labelling, “artificial intelligence” is not intelligent. At the same time, it is not artificial either. In a way, software is intangible and operates invisibly in the background of our environment, yet the Internet as well as the devices we interact with (to use ChatGPT, for instance) rely heavily on a material infrastructure. Undersea cables connect large server farms consisting of various hardware components that require energy, and so on. Besides, the data required to train these models stems from physical entities, most often from people who have not given their consent. Software thus has a physical component that should not be neglected.

Accordingly, various issues emerge. The computing power required to train current models, for example, consumes enormous amounts of energy, which in turn has drastic ecological consequences. And the more data these models process in order to improve their “accuracy”, the more energy they require. Also, the additional data that is generated has to be stored, which in turn requires larger server farms, and so on. Further, miners work under inhumane conditions to extract the minerals for microchips, which are then manufactured by other workers under similarly poor conditions. At the end of their lifecycle, hardware components are disposed of, creating e-waste. All these aspects have physical impacts.

In light of this, Kate Crawford23 argues that the entire AI infrastructure relies on extractive and exploitative practices. We can see how the notion of “ethical AI” or “fairness” expands once these practices are taken into account.

Creating desirable conditions

It is not that the two problems I briefly sketched, namely hidden labour and the materiality of software, are unknown, or that no action is being taken in that regard. Still, I would argue that we tend to overlook these issues - maybe due to the apparent invisibility and seamlessness of software. Further, when faced with a problem, technology or engineering seems to be the default solution. Many, if not most, issues, however, require social and political solutions.

A first step in shifting the focus away from the end product would be for people working on responsible innovation to become aware of the limits of their knowledge of the issue at hand. We should reflect on our assumptions and, in doing so, try to broaden our perspectives when it comes to understanding the larger dynamics at play.

As mentioned, stakeholder engagement is strongly emphasised in the responsible innovation literature. What is crucial, however, is which stakeholders are included in the conversation, e.g., people working on human rights, labour rights, and climate justice, as well as the people affected.

So instead of asking “how should technology X function so that it benefits most stakeholders?” (a question also often framed in economic terms), the question should shift to “how can we improve the conditions that produce a particular technology or, more broadly, a particular way of life?”. Responsible innovation should thus take into account the many steps that are necessary to achieve a desired goal. These include rules, regulations, organisational structures, procedures, etc. - none of which are technological.

Without doubt, changing the entire machine-learning ecosystem is an ongoing and tedious task that cannot be accomplished by a single person. Nevertheless, innovation should not be understood as a technological fix, but as a process of interaction and organisation (understood as both noun and verb) of the social. For example, it would be conceivable to build a more ethical language model (e.g., ChatGPT) by changing the working conditions of content moderators. The starting point of innovation is therefore not technology, but the political and social conditions.


  1. Billy Perrigo. (2023-01-18). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. TIME. https://time.com/6247678/openai-chatgpt-kenya-workers/ ↩︎

  2. Kate Crawford and Vladan Joler. (2018). Anatomy of an AI System. https://anatomyof.ai/ ↩︎

  3. Karen Hao. (2021-04-23). Stop talking about AI ethics. It’s time to talk about power. MIT Technology Review. https://www.technologyreview.com/2021/04/23/1023549/kate-crawford-atlas-of-ai-review/ ↩︎