I am Simon, a PhD candidate working on the societal implications of Artificial Intelligence.
Find some of my paintings here: https://pixelfed.social/monsi
Contact me at hello [at] this.domain
An important aspect of responsible innovation is stakeholder involvement. The motivation for involving stakeholders is to include different perspectives and gain a broader picture of the issue at hand. But what exactly is meant by “issue”? In the following I want to briefly argue that when talking about innovation and responsible innovation, scientists, technologists, and ethicists tend to focus (too much) on the visible and tangible end product (i.e., technological achievements)....
For my master’s thesis I took a closer look at the large language model GPT-3 (Generative Pre-trained Transformer 3) through the lens of Nietzsche’s ontological understanding of the self, namely that the self is always in the process of becoming. In this process of becoming, technology plays an increasingly important role; one can say that technology becomes part of the self. Abstract: The advent of large language models introduces a new relation between the self, language and technology....
Our material environment is increasingly constituted by algorithms in the form of smart technologies equipped with various sensors and capabilities, such as facial recognition. We are also making our homes more “smart” by connecting devices like the Amazon Echo smart speaker to the internet. Since the environment and the objects around us affect us as human beings, we should ask how these increasingly sophisticated and autonomous systems do so....
Big Data or data-intensive science is often perceived as leading to more knowledge by processing more data (boyd & Crawford, 2012, p. 663). However, this is questionable. We face causal uncertainty in big data modelling because it is becoming increasingly difficult to represent complex phenomena adequately, ironically, precisely because of the increase in data. We always face uncertainty, and I do not claim that we have to overcome it. Rather, we should be aware of the pitfalls we can fall into while constructing models based on big data in order to become “more certain”....
We use models as epistemological tools to gain understanding about the world or a particular phenomenon. Machine learning models in particular are becoming increasingly ubiquitous. However, due to their complexity, such models are often referred to as black boxes, and the knowledge we are able to extract from them seems limited. To overcome this problem, various approaches aim at making models explainable or interpretable. In the literature these terms are often used interchangeably....
Artificial chatbots and voice assistants are becoming an ever larger part of our daily lives. One novel development is Duplex, introduced by Google in 2018 (Leviathan & Matias, 2018). Duplex is able to conduct a conversation independently in order to make a reservation or appointment by telephone. The technology works so well that the other person on the phone assumes that they are talking to another human being....
In 1950, Alan Turing introduced the Turing Test, and since then there has been an ongoing debate about whether machines can think. The debate keeps being revived, especially due to recent breakthroughs in computer science and artificial intelligence. This discussion raises questions like: What is the difference between human beings and machines? How is human intelligence different from artificial intelligence? And do machines have consciousness? In the following I want to briefly touch upon each of these questions....