In 1950 Alan Turing introduced the Turing Test, and since then there has been an ongoing debate about whether machines can think. Recent breakthroughs in computer science and artificial intelligence, in particular, revive this debate again and again. The discussion raises questions like: What is the difference between human beings and machines? How is human intelligence different from artificial intelligence? And do machines have consciousness?
In the following I want to briefly touch upon each of these questions. In doing so, I argue that computers are able to compute desired outcomes on the basis of experience and defined learning goals and therefore show a kind of intelligence. This intelligence, however, is very limited compared to human intelligence, because computers do not have a subjective point of view. Therefore, computers do not perform operations that involve consciousness.
The essence of humans in contrast to machines
Different philosophers have given different accounts of the essence of human beings. For now, two of these shall be briefly considered. First, Hegel (1910) describes human beings as relational beings. This means that a human, given his freedom, recognizes himself through the recognition of another. In other words, two self-consciousnesses recognize each other. This recognition process results in a “life and death struggle” which creates a relationship of dependence and independence. Hegel describes this as the “master–slave dialectic”. If we think of machines, in this case computers, Hegel’s definition seems to be partially applicable as well. Computers are connected to other computers, for example via the Internet, and thus can also be considered relational beings. As I will show later, however, computers are not aware of being connected. Moreover, one must remember that computers as such are not relational beings; we humans have made them so to serve our purposes.
A second definition of the essence of humans comes from Descartes (2008). Based on his methodical doubt he rejects every belief, including those delivered by his senses, since they could deceive him. Instead, Descartes relies on his capability to think, which, for him, is the essence of human beings. He is famously known for his cogito ergo sum: “I think, therefore I am”. Again, in relation to computers, we can raise the question whether machines can think. Alan Turing (1950) raised this question as well and proposed the Turing Test as a way of approaching it: if an interlocutor could not tell whether he was talking to an actual human being or to a machine, the machine passed the test. Turing himself considered the original question of whether machines can think “too meaningless to deserve discussion” (Turing, 1950, p.442). But whether this test is actually sufficient is still an ongoing debate in the light of artificial intelligence.
In the following, I would like to analyze which ways of thinking a computer is capable of and which it is not. Depending on this, we arrive at different understandings of intelligence.
Human intelligence and machine intelligence
When we look at machine learning, we find that machines perform operations only on the basis of information, in the form of learning goals, which are defined in the system. This rather controlled learning environment is known as supervised learning: labeled data is fed into the system and a desired output, i.e. the learning goal, is defined. Through these inputs, which can also be referred to as human induction, the system is able to learn the features that are relevant for reaching the desired output. In other words, the system searches for patterns within the data set. For example, we show the system a large number of images of cats, and after some time it is able to classify images it has never seen before, based on the features it has learned, either as cats or as non-cats. This form of induction, recalling David Hume, is based on the notion that the future will resemble the past. In the end, machine learning relies on experience.
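To make this concrete, here is a minimal sketch of supervised learning, assuming scikit-learn is available. The data is synthetic: random feature vectors stand in for image features, and the labels embody the human induction described above.

```python
# A minimal sketch of supervised learning: labeled examples go in,
# the system finds patterns, and it then classifies unseen inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical "image features": cats cluster around one point in
# feature space, non-cats around another.
n = 500
cats = rng.normal(loc=1.0, scale=0.8, size=(n, 10))
non_cats = rng.normal(loc=-1.0, scale=0.8, size=(n, 10))
X = np.vstack([cats, non_cats])
y = np.array([1] * n + [0] * n)  # the labels: the "human induction"

# The defined learning goal: predict the label from the features.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The system now classifies inputs it has never seen before.
print("accuracy on unseen data:", model.score(X_test, y_test))
```

The point of the sketch is that everything the model “knows” is distilled from labeled past examples; it has no notion of what a cat is beyond these patterns.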
Experience, however, is also necessary for human learning and thus intelligence. So this computational processing of information sounds similar to human intelligence: a child sees a cat and is able to recognize cats in the future. The number of samples needed is far smaller than for computers, but that difference is not relevant here.
In this case, both machines and humans have come to an objective epistemological understanding, that is, recognizing a cat. Referring back to Descartes, who describes the human as a thinking thing, we see that especially in science this objective and rational thinking is the driving force. Whether Descartes concentrated only on rationality or whether he also considered emotions or feelings as thinking is debatable. Nevertheless, rationality may be described as having certainty about something, being in control, and understanding things, which gives rise to the prediction and manipulation of a given object.
But is that all that constitutes human intelligence? We are surely not only rational beings. Moreover, given various examples of biased machine learning systems, we should reflect on our epistemic values, since machine learning is still based on human induction. That, however, is another discussion.
Objectivity and Subjectivity
Following John Searle, I would like to focus more on this distinction between subjectivity and objectivity. Searle distinguishes between them on two levels: first the epistemic, the scope of knowledge claims, and second the ontological, the modes of existence (J. Searle et al., 1997, p.98). Epistemic objectivity is independent of a subject and thus everyone can agree on it. For example, that the University of Twente is located in Enschede, the Netherlands, is a matter of fact and thus epistemically objective. Epistemic subjectivity, on the contrary, is dependent on the subject. So some may think of Enschede as a beautiful town, while others do not. As for ontological objectivity, there are things that simply exist, like nature. Feelings like pain, by contrast, only exist through the experience of a subject, which makes them ontologically subjective. In order to have ontological subjectivity one has to have consciousness. Hence, for Searle, “consciousness consists of inner, qualitative, subjective states and processes of sentience or awareness” (J. R. Searle, 2000, p.559).
To explore and better understand consciousness, the focus shall be put on psychedelics and their effects, since they are known for altering states of consciousness. This notion is also entailed in the word psychedelic, first coined by Humphry Osmond (1957), which means mind-manifesting. Other words under consideration were mind-rousing, mind-bursting or mind-releasing (Osmond, 1957, p.429). The underlying assumption is that a mind is necessary for certain types of intelligence. A second reason why I am considering psychedelics is that their effects often resemble mystical experiences, and as Huxley put it, “the urge to escape from selfhood and the environment is in almost everyone almost all the time” (Huxley, 1954). Psychedelics can therefore be understood as a kind of shortcut to this desired state. Besides, in contrast to rational thinking, embodied in machine learning, psychedelics can be understood as something uncertain and unpredictable. Even though we might have some sort of epistemic objectivity about the effects, the drugs alter the subject’s consciousness and are therefore ontologically subjective.
Given this distinction between objectivity and subjectivity, and drawing on psychedelics, I show in three cases that machines have no consciousness and therefore no intelligence, at least none comparable to human intelligence. The arguments I present can be grouped under the “argument from consciousness”, which Turing also addresses in his paper (Turing, 1950, p.445).
The inner master–slave dialectic
Thinking of Hegel’s master–slave dialectic in relation to psychedelics, one can see that the self is capable of having an inner life and death struggle, meaning no other human is necessary to do the recognizing. Rather, it is the intoxicated self fighting the sober self for recognition and vice versa (Houot, 2019, p.18). Quite literally, when the sober self is drugged, it is on a trip, making it the slave, and the intoxicated self takes over, making it the independent master. At the same time, however, the intoxicated self is dependent on the sober self in order to exist, that is, to create the psychedelic experience in the first place based on past experiences, thoughts or feelings. During this process, in which unconscious things may become conscious, the sober self, i.e. the slave, can learn many lessons about itself. Hegel also states that “through work and labour, however, the consciousness of the bondsman [slave] comes to itself” (Hegel, 1910, p.927). As an example, a meta-study analysing seven other studies on psychedelics in therapeutic settings concluded that these substances have positive effects on depression and anxiety (Muttoni et al., 2019).
This kind of self-improvement through introspection and reflection can also be seen in Nietzsche’s philosophy, if one thinks of the overhuman. The dissolution or loss of ego, as often described by people when talking about their psychedelic experiences, can be seen as the going-under as Nietzsche describes it. For him the human is a not-yet-determined animal with the possibility of reaching the overhuman, which is to be understood as transcending the human and not as having extraordinary capabilities like a “superman”. In this process of evolution, or transcendence, the human is like a bridge that connects two states: the current and the possible. In the “crossing over” he transitions from one state to the other. But in order to transition, he has to destroy himself, which is the “going under” of the human. So perhaps through psychedelics and the insights gained from the altered state of consciousness, people are able to come closer to the overhuman.
For machines, this kind of self-development, the going under and crossing over, or the inner master–slave dialectic is not possible. It can be argued that today’s machines have evolved compared to those of decades ago, or that a computer virus resembles the master–slave dialectic. But this “transcendence” has nothing to do with inner, subjective development. Machines are not aware that they consist only of circuits. This means they have no ontological subjectivity and therefore no consciousness.
Breaking free from symbolic forms
This subjective development takes place not only within, but also in the external subjective experience; the two are intertwined and influence each other. Aldous Huxley (1954), for example, describes how, under the influence of mescaline, he could grasp the “naked existence” of flowers when looking at them, or how he was absorbed in paintings or music and experienced their “Is-ness”. Michael Pollan (2018) reports his experience in a similar way, especially how his visual and auditory senses merged. So it seems that the sensory perceptions are widened, or even “cleansed”, as William Blake puts it. This direct and unconditioned perception is what Huxley calls the Mind at Large.
One might think of the psychedelic art that computers are able to generate, as the Deep Dream Generator shows. From this starting point, one could argue that we can also “overload” the computer with information and let it create new connections, which I do not deny. This notion, however, again takes only computation into account.
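For illustration, here is a minimal sketch of the Deep-Dream idea; it is my own construction rather than the Deep Dream Generator’s actual code, and it assumes PyTorch and torchvision are installed (the network, layer index and noise input are arbitrary choices). An image is adjusted by gradient ascent so that it excites a chosen layer of a pretrained network more strongly.

```python
import torch
from torchvision.models import vgg16, VGG16_Weights

# Pretrained feature extractor; we never train it, we only "dream".
model = vgg16(weights=VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

# Start from random noise; a photograph tensor would work the same way.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
layer = 20  # an arbitrary mid-level convolutional layer

optimizer = torch.optim.Adam([image], lr=0.05)
for _ in range(50):
    optimizer.zero_grad()
    x = image
    for i, module in enumerate(model):
        x = module(x)
        if i == layer:
            break
    loss = -x.norm()  # gradient ascent: amplify the layer's activations
    loss.backward()
    optimizer.step()

# `image` now shows amplified patterns the chosen layer responds to;
# they are generated without any understanding of what they depict.
```

The point for the argument here: the patterns emerge from pure computation, and nothing in the process involves the machine understanding the images it produces.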
To better understand this, I want to turn to the philosopher Ernst Cassirer. He describes how symbolic forms, such as language, religion, science or culture, influence our mediated connection to the world (Cassirer, 1930). Thus, depending on the symbolic forms we use, or which are familiar to us, we may derive different forms of knowledge. Following the biologist Jakob von Uexküll, the environment consists of carriers of marks, and humans or animals are able to receive or perceive these marks, which are mediated by symbolic forms. So if we look at animals and humans, the contextual knowledge and the different lifeworlds arising from it are obvious. Probably less obvious, but not deniable, is how the knowledge of shamans differs from that of “normal” users when it comes to psychedelics.
The point, however, is that during a psychedelic experience one has the possibility to break free from these familiar symbolic forms and thus have a different, maybe more “pure”, connection to the world, which is undoubtedly challenging and which many people fear. Besides, psychedelic experiences resemble the mystical experiences that people seek through long-standing religious traditions, such as dancing, singing or fasting: that is, feeling a connection to the divine (Watts, 1970). Even though these altered states of consciousness are only active during the trip, the psychedelic, and equally the mystical, experience itself has lasting effects on future perception. This is because the notion of certain symbolic forms has most likely been altered.
Looking at computers, one could say they have a different and limited set of symbolic forms, and thus a different relation towards the environment. Besides, computers do not have the possibility of changing their symbolic forms. Whether this is a necessary condition for having consciousness may be questionable. Nevertheless, it shows that computers do not have a subjective point of view and therefore have a very limited relation towards the environment. Even if we connect a camera, the computer has no sense of what the images mean, at least not within the understanding of the human lifeworld. This can also be seen in the aforementioned psychedelic art: computers have no understanding of the images and do not even know why they created them in the first place. This brings me to my third and last point.
The meaning of information
Searle makes a further distinction that extends the ontology of phenomena: some are “observer-relative” and others are “observer-independent” (J. Searle et al., 1997, p.15). Things that exist regardless of what one thinks, such as nature, are observer-independent. Money and other “social constructs” are observer-relative, since their meaning is ascribed by human beings. This also means that observer-relative phenomena contain an element of ontological subjectivity, that is, the attitude a subject has towards the object. From this we can derive two consequences. First, after a psychedelic experience, as shown above, certain internal and external perceptions, which are intertwined, can change the way one refers to them. This, in turn, opens up the possibility of changing the epistemically subjective, and at large maybe even the epistemically objective, meaning of observer-relative phenomena. Second, since machines cannot ascribe meaning to observer-relative phenomena, they do not have consciousness, i.e. ontological subjectivity. This is also the reason why human induction is necessary for machine learning. As Andrew Smart (2015) also argues, only the conscious observer gives meaning to information.
Judea Pearl (2018) also distinguishes human intelligence from artificial intelligence by defining three steps of causation, which he calls the “ladder of causation”. On the first rung is association, which is concerned with seeing or observing related variables: for example, what do past shopping habits tell me about preferences or likes? On the second rung is intervention, asking questions like “What if I do…?”. And on the third rung are counterfactuals, the possibility to imagine alternative scenarios and to understand why things happened. As Pearl argues, and as several real-world examples also show, machine learning is still on the first rung. He even says:
The public believes that “strong AI,” machines that think like humans, is just around the corner or maybe even here already. In reality, nothing could be farther from the truth (Pearl & Mackenzie, 2018).
So even if machines have some intelligence that is based on computations, i.e. rational thinking, which might even outperform human intelligence in certain tasks, without this subjective point of view they are not able to progress on the “ladder of causation”. Again one may claim that this is only a lack of resources that will be available in the future. But since the workings of consciousness are still a mystery, it is impossible to “build it” into a computer.
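To make the gap between the first two rungs concrete, consider the following toy simulation, my own construction rather than anything from Pearl’s book. A hidden confounder Z drives both X and Y, so the observed association P(Y | X=1) differs from the interventional quantity P(Y | do(X=1)); a system restricted to passive observation, like the pattern-finding learner sketched earlier, sees only the former.

```python
# Toy contrast of rung 1 (association) and rung 2 (intervention).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural model: Z -> X, Z -> Y, plus a weak direct effect X -> Y.
Z = rng.random(n) < 0.5
X = rng.random(n) < np.where(Z, 0.9, 0.1)  # Z strongly drives X
Y = rng.random(n) < np.where(Z, 0.8, 0.2) * np.where(X, 1.0, 0.9)

# Rung 1: P(Y | X=1), read off the observational data.
print("P(Y | X=1)     =", Y[X].mean())

# Rung 2: P(Y | do(X=1)), i.e. rerun the model with X forced to 1
# regardless of Z. Association alone cannot compute this quantity.
X_do = np.ones(n, dtype=bool)
Y_do = rng.random(n) < np.where(Z, 0.8, 0.2) * np.where(X_do, 1.0, 0.9)
print("P(Y | do(X=1)) =", Y_do.mean())
```

Running this yields roughly 0.74 for the association but 0.50 for the intervention; without a causal model of the situation, no amount of additional observational data closes that gap.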
Conclusion
It can be claimed that machines and humans have similar intelligence when it comes to computations based on experience. But when it comes to higher-order intelligence, such as giving meaning to the information extracted from the environment, and thus the capability to relate to it, or developing the self through introspection, machines cannot keep up. This is because they do not have ontological subjectivity, that is, a mode of existence that exists only through the presence of a subject, such as emotions or pain. From that it follows that machines do not have consciousness. We can see this kind of distinction when we think of emotional intelligence rather than just of the rational thinking that is currently embodied in machine learning. We should therefore consider ontological subjectivity, and the knowledge gained through altered states of consciousness, as is the case during psychedelic experiences, as a valuable source of knowledge. This means there are different ways of experiencing and knowing the world.
As this brief discussion shows, intelligence is in the eye of the beholder. We human beings are able to ascribe meaning to observer-relative phenomena. Depending on the kind of intelligence we have in mind, it is possible to come to different conclusions. One can either claim that there is no intelligence in the computer, because no consciousness is involved, or that the machine has some kind of intelligence and consciousness is just a problem of computational resources. This debate is still ongoing. Nevertheless, since a computer does not have a subjective point of view, the nature of its intelligence is not comparable to that of humans.
To conclude this discussion I want to refer to the title of this essay by raising two further questions. First, are we humans, so focused on rationality and increasingly alienated from ourselves, just machines that happen to be capable of altering their consciousness by taking psychedelics? And second, what happens to the demarcation between humans and machines if computers are someday able to take psychedelics, which presupposes that they have consciousness?
References
- Cassirer, E. (1930). Form and technology. In A. Hoel & I. Folkvord (Eds.), Ernst Cassirer on form and technology: Contemporary readings (pp. 15–53). Palgrave Macmillan.
- Descartes, R. (2008). Meditations on first philosophy: With selections from the objections and replies (M. Moriarty, Trans.). Oxford, Oxford University Press.
- Hegel, G. W. F. (1910). The phenomenology of mind (J. B. Baillie, Trans.). In L. Pojman & L. Vaughn (Eds.), Classics of philosophy (pp. 923–928). Oxford University Press.
- Houot, A. (2019). Toward a philosophy of psychedelic technology: An exploration of fear, otherness, and control.
- Huxley, A. (1954). The doors of perception. United Kingdom, Chatto & Windus.
- Muttoni, S., Ardissino, M. & John, C. (2019). Classical psychedelics for the treatment of depression and anxiety: A systematic review. Journal of Affective Disorders, 258, 11–24.
- Osmond, H. (1957). A review of the clinical effects of psychotomimetic agents. Annals of the New York Academy of Sciences, 66 (3), 418–434.
- Pearl, J. & Mackenzie, D. (2018). The book of why: The new science of cause and effect. New York, Basic Books.
- Pollan, M. (2018). How to change your mind: What the new science of psychedelics teaches us about consciousness, dying, addiction, depression, and transcendence. New York, Penguin Press.
- Searle, J. R. (2000). Consciousness. Annual Review of Neuroscience, 23 (1), 557–578.
- Searle, J., Dennett, D. & Chalmers, D. (1997). The mystery of consciousness. New York, The New York Review of Books.
- Smart, A. (2015). Beyond zero and one: Machines, psychedelics, and consciousness. New York, London, OR Books.
- Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59 (236), 433–460.
- Watts, A. (1970). Psychedelics and religious experience. In B. Aaronson & H. Osmond (Eds.), Psychedelics: The uses and implications of hallucinogenic drugs (pp. 131–144). New York, Anchor Books.