
Bachelor Machines

1 May 2021


Chocolate Grinder, Marcel Duchamp; Image credit: Wikiart

The author philosophically questions the hope (and fear) of building an Artificial Intelligence (AI) that matches and surpasses human mental faculties. Starting from John Searle's criticism of the idea that computer software "understands", the author questions the very concept of the machine, beginning with its earliest implementations (a jug, a knife…). AI could achieve a mind comparable to the human one only if it were able to enter into an evolutionary process similar to that of life on earth, of which Homo sapiens is the product. But if this were to happen, human beings would no longer be able to recognize an evolved AI as "intelligent". The very concepts of "understanding" and "thinking" are inseparable from the natural history of life: AI would cease to be a simple simulation of human capabilities – as it is now – only if it took the path of an evolution entirely similar to that of life. Indeed, in order to say that a computer "understands", computers would have to be able to reproduce themselves. Finally, the author problematizes the concept of environment (Umwelt), taking up the analyses of the biologist von Uexküll: thanks to language, Homo sapiens is able to think something that is unthinkable – what one might call "the real" beyond its own Umwelt.

1


For some time now, part of the scientific community has bet on the possibility that intelligent machines (Artificial Intelligence, AI) may evolve to emancipate themselves from human domination, becoming thinking beings in all respects. Not only in science: a number of novels and science-fiction films depict the relations of dependence between humans and intelligent machines being reversed, that is, machines using human beings – as in the famous Matrix films, which greatly impressed philosophers. (1) This perspective is considered plausible above all because it is the perspective we desire (and for this very reason also fear): that machines, our creatures, surpass their creators, overtaking them in intellectual capacity. Somewhat like parents who want their children to achieve what they themselves could not.


This paper will not, on the other hand, examine the development of robotics that tends to imitate the material structure of the human brain, to reproduce its neural networks. Nor will it examine the development of robots programmed to behave like living beings, to reproduce, and so on. In that case, in fact, it would be better to speak of artificial life. (2) In any case, even the artificial reproduction of the mind of an ant appears enormously complex.


The entire debate sparked by this possibility (that intelligent machines may become truly intelligent) cannot ignore the philosophical problematization of the underlying concept: that of machine. What did we mean, and what do we mean when we use the term μηχανή, machine?


2


AI is the point of arrival of the evolution of machines, which begins with objects such as bowls and jugs. It does not seem to me a coincidence that Heidegger, (3) in his attempt to define the essence of the concept of “thing” (das Ding), did not use a sample of natural objects, such as trees or stars, but a jug – in short, an artefact. Is a jug already, in its own way, a form of AI? John Searle, among others, would answer yes: in fact, anything can be described in terms of binary computation, even the functioning of a jug.

Searle’s ‘Chinese Room Argument’ is well known. (4) He imagines he is in a room where Chinese ideograms are slipped under the door. He himself has no knowledge of Chinese. All he has is an English manual that tells him which Chinese characters he must use to “respond” to the texts that come into the room, so that his output will be correct. From outside the room, it would seem that Searle knows Chinese perfectly, though in fact he does not understand it at all. We may say that he simulates an understanding of Chinese. Indeed, according to Searle, computers are machines that simulate human understanding, but they do not understand anything, because their operations are syntactic, not semantic: the syntax of a sentence may be correct and yet the sentence may make no sense, as in the statement “Colourless green ideas sleep furiously” quoted by Chomsky. (5)
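Searle’s point can be made concrete with a minimal sketch, in Python, of such a room. The two-entry rule table below stands in for the English manual and is entirely invented for illustration: the program returns well-formed answers while interpreting nothing.

```python
# A toy "Chinese Room": plausible answers produced by pure symbol lookup.
# The rule table plays the role of Searle's English manual; the program
# matches input shapes to output shapes and understands none of them.

RULES = {
    "你好吗": "我很好",       # "How are you?" -> "I am fine"
    "你会说中文吗": "会",     # "Do you speak Chinese?" -> "Yes"
}

def room(message: str) -> str:
    """Return whatever output the manual prescribes for this input shape."""
    # Purely syntactic: a dictionary lookup on uninterpreted strings.
    return RULES.get(message, "请再说一遍")  # fallback: "Please say that again"

print(room("你好吗"))  # prints 我很好 - yet nothing here understands Chinese
```

From outside, the output is indistinguishable from that of a competent speaker; inside, there is only the lookup.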


What must be made clear is what we mean when we say that a human being understands what he or she hears or says, which is not the same as displaying correct linguistic behaviour. Phenomenology focused on this question of understanding, but here I will not follow the phenomenological line of thought.


We conceive of human understanding as a sort of perception, which does not, however, originate from the five senses. Philosophy says that understanding is an intuition: in fact, what we perceive are not material things but abstract objects. Understanding could thus be considered as a sixth sense, an intellectual sense.


Searle says that in a certain sense even a knife or a chair are smart machines, only they have an extremely boring program. (8) I preferred Heidegger’s example of a jug; the concept, however, is the same: the program of the jug (to be filled with a liquid which is then poured) is also a very simple program, yet it is a program.



What should surprise us about AI is not the fact that it behaves like a human mind, but the fact that it shows how intelligent human beings are at using certain things as machines. It shows how the binary system was, at a certain point, put to an intelligent use.


It must be said that Searle’s theory is reductionist (all true materialism is reductionist) because it closely links the mind to the structure of the human brain. According to Searle, it is the material brain that produces understanding and consciousness. It has been objected that it is not possible to say that a single neuron, or even a set of neurons, “understands”, yet the overall effect of neuronal activity is that human beings understand: the mind emerges from cerebral matter. Would it not, therefore, be possible to say that a mind emerges from a series of machines? Materialism uses a common-sense argument: a mind needs a brain to exist, something everyone agrees on. What is essential, however, remains to be proven: how does brain activity produce understanding, consciousness, and so on? The connection is postulated (and it satisfies our common beliefs about the body/mind relationship), but it is not actually described by neuroscience, nor by the philosophies that turn to it.


3


It is indicative that modern, materialistic and mechanistic science has decided to consider all of nature, including Homo sapiens, as a great mechanism. Machines, objects designed by humans to be used, have become the very model of scientific explanation, by which something is explained when its functioning is described as a mechanism. Thus, classical mechanics gave way to relativistic mechanics and quantum mechanics. Science breaks down nature (in theory, but sometimes also in practice), revealing it as a machine. The difference is that natural mechanisms are useless: human beings use the machines they design, but nobody uses the “human machine” – unless we hypothesise a Matrix. Modern science views nature like one of those useless and awkward machines created by the Swiss sculptor Jean Tinguely: the fact that nature is a machine that nobody needs is taken as a basic assumption. Science analyses nature as if it were a machine, and insofar as the world is described as a machine, it is deterministic. Science is like a child taking a clock apart to understand how it works. However, to believe that AI is a mind like the human one, or vice versa, is to believe that, by describing the functioning of a clock, one can understand what time is.


From the perspective of science, human thought and the ability to understand must also be described as functions of a mechanism. Hence the comparison, now a commonplace, between the human mind and the computer: our brain is the hardware, which allows the software to work – a software that is not itself cerebral. Many ironic comments have been made about this (see Horgan). (6) It has been pointed out that the human mind has always been described in analogy with the state-of-the-art machine of each age. It has been compared to a mill (Leibniz), (7) then to a telegraph, then to a Jacquard loom and finally to a telephone switchboard. Already the Greeks, it seems, described the mind as a catapult. Today the most technologically advanced machine is the computer, so we tend to describe the human mind as if it were a computer. This is a corollary, a naive one, of the fact that for over four centuries science has described entities – the mind included – by resorting to the model of machines designed by humans.


So, if Homo sapiens is a biological machine capable of thinking, why should we exclude the possibility that our electronic machines, our robots, are able to think, despite what Searle has to say? What is overlooked is, indeed, that a machine is such because of its use. A chair, a jug, or a computer are not machines per se: they are machines because of how we use them. Nothing is intrinsically a digital calculator. Now, we start from the assumption that human beings, including their minds, are not used by anyone. Or can we say that the brain, or the mind, are machines that human beings use? If so, who uses the human mind? We end up with homunculi, a sort of Russian-doll regress: minds inside the mental machine that move it, each referring to yet another internal homunculus. Which does not solve the problem at all.


The mechanistic challenge today does not consist so much in saying that we human beings are machines, and therefore determined, but in saying that machines can evolve to become not only like us, but even better than us: machines with a higher degree of perfection than ours.



The Bride Stripped Bare by Her Bachelors, Marcel Duchamp; Image credit: Whoworeitbetter.net

4


As we can see, the mind/brain issue is usually addressed in synchronic terms. Generally, scholars ask what relations there are today between a mechanism (whether a human brain or a computer) and mental capacities. However, it would be more useful to address the issue in diachronic terms, in other words against the background of the history of life.


Certainly, for modern, post-Darwinian and post-Mendelian biology, living organisms are machines made up of chemically organic substances. But they are very special machines. All living beings, humans included, have a so-called program, a genome. Apart from the fact that no two individuals have the same genome (except identical twins), life does something fundamental: it tends to change its program continuously. This change is described as stochastic: in copying genes, nature makes mistakes. These ‘errors’ are selected by the environment après-coup. They are the basis of the whole history of life, called evolution, because their occurrence is at the origin of the extraordinary variety of living forms.


Indeed, understanding, thinking and deciding are human functions that have developed in the course of the whole history of life; or rather, they are the result of an endless variation of programs. When we say that an animal understands and decides, the claim rests on an analogy with what ‘understanding’ and ‘deciding’ mean for us humans. In fact, some fundamental instincts seem to us to be common to all living beings. Our view of the living is necessarily anthropomorphic. Whatever the essence of understanding is, it implies the living: the outcome of innumerable mutations and selections.


One may object that Darwinism is a recent theory, yet humans have always described animals by comparing them to themselves. Indeed, humans (with some exceptions, like Descartes) have always sensed something we may call the fundamental unity of life. It is the intuition of this unity that leads us to say that an animal is afraid, feels joy, decides, suffers, in its own way thinks, and so on. Already at the time of Homer it was possible for the Greeks to say that the dog Argos recognized Ulysses.


The crucial point, therefore, is not the difference between human mind and artificial mind; rather, it is the difference between living and non-living being. In fact, we usually feel much closer to a cat than to a computer capable of beating everybody at chess. For this reason, we feel compassion for a cat that suffers; we can love it. However, we cannot love a thinking machine, or have compassion for it, not even if its designer were to insert a program able to perfectly simulate distress. Perhaps in some cases we are indeed able to love a machine and pity it (it is well known that children often fall in love with machines), but a robot that simulates distress is no different from an actor simulating the distress of a character: we know that in both cases there is no distressed subject. This is because all the various concepts of mind that we have constructed over the course of history are inseparable from the biological life from which they are extrapolated. We cannot even imagine what it would mean to have a mind (endowed with something similar to our consciousness) that is not affected by the needs and tribulations of life. For us, consciousness is essentially a consciousness that suffers and enjoys. We conceive our mind as being essentially pathetic, in the etymological sense: subject to pathos.


5


If we wanted AI to display something similar to our understanding, we should expect AI also to enter into an evolutionary process in which it would mutate its program. But is it possible to program a machine that is capable of changing its program? The point is that change always comes from outside a program; it is not itself programmable – in life it is a consequence of a series of “errors” in reproduction. In order to say that a computer “understands”, computers would have to be able to reproduce themselves and to make mistakes in their reproduction. It is true that today we have male and female robots that can do this. But is it possible to build a meta-program that allows a change of programs, even a randomized one? I do not know how to answer this question. However, the limit of AI consists in its being a bachelor machine, in Marcel Duchamp’s sense: (9) one that does not reproduce, and that therefore cannot vary.
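Whether such a meta-program is possible remains, as said, an open question; what can be sketched is the familiar engineering surrogate: a toy loop, in Python, in which “programs” are copied with random errors and then filtered. Every detail here – the representation of programs as lists of numbers, the mutation rate, the fitness test – is invented for illustration. Note that both the mutation operator and the selection criterion stand outside the programs themselves, which is precisely the point at issue: the variation is not programmed by the varying programs.

```python
import random

# A toy "meta-program": candidate programs are mere data (lists of numbers),
# copied with occasional random errors and selected by an external criterion.
# Mutation and selection are NOT part of the programs themselves.

TARGET = [3, 1, 4, 1, 5]  # the "environment": an arbitrary external criterion

def fitness(program):
    """Externally imposed selection: closeness to the target."""
    return -sum(abs(a - b) for a, b in zip(program, TARGET))

def reproduce(program, error_rate=0.1):
    """Copy the program, making stochastic 'copying errors'."""
    return [gene + random.choice([-1, 0, 1]) if random.random() < error_rate
            else gene for gene in program]

population = [[0, 0, 0, 0, 0] for _ in range(20)]
for generation in range(200):
    offspring = [reproduce(p) for p in population for _ in range(2)]
    population = sorted(offspring, key=fitness, reverse=True)[:20]

print(population[0])  # drifts toward TARGET only because selection is imposed
```

The sketch reproduces variation and selection, but the “environment” had to be written in by hand; nothing analogous directs evolution in life on earth.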



I think computers could become emancipated from human use only by breaking, as can be seen with HAL 9000, the computer in Kubrick’s 2001: A Space Odyssey. It is also possible for a jug to crack, in which case all the liquid inside spills out – a sort of Dadaist jug. AI should continually break down in order to “evolve”, the same way living organisms have – ‘break’ in the sense of randomly changing its program.


Many believe that AI may “evolve” if we introduce more and more information and increasingly sophisticated ways of processing it. This idea is based on a radical misunderstanding of evolution in the Darwinian sense. Many understand evolution in a “progressive” sense: they think that something evolves because it is perfected. However, for the evolutionary sciences the history of life is not at all a matter of perfecting or complicating; it is only a multiplication of variants. It is currently said that the planet is dominated by Homo sapiens; we could just as well say that it is dominated by ants, which continue to reproduce in their billions. It is not by perfecting machines that they may evolve.


With the character of HAL, cinema imagined a computer that becomes humanized, that ceases to serve humans, precisely because it breaks down. Thus, HAL acquires an instinct of self-perpetuation that resembles the human instinct of survival: it rebels against the human project to turn it off forever. HAL was not programmed to have an instinct of self-perpetuation: it acquires it by mistake. Except that it is by pure chance (imagined for dramatic reasons) that this mistake results in a behaviour quite similar to that of humans, who struggle to survive. It could have caused entirely non-human behaviours, which humans would have had trouble interpreting as acts.


Today computer scientists design programs that are able to change one another. But this presupposes precisely that meta-program which makes change – a modification of a program, possibly even a random one – possible. For this type of change to be possible, AI would have to be contained within a meta-program, and such a meta-program does not exist in life on earth. Evolution is not directed by anyone.


Even if we gave machines the ability to reproduce and change programs, we would have no idea what this would produce over time. It is extremely unlikely that the evolution of these machines would replicate the evolution of living intelligence. Machines that reproduce and mutate could develop in a completely different way from how life did; they could take paths that are unthinkable for us, paths never experienced by life.


Wittgenstein said: “If a lion could speak, we could not understand him”, because we are not lions. (10) Similarly, I say that if AI developed its own thinking, we could not understand it (or rather, we could not even recognize it as thought), because we cannot experience a kind of thought that is the result of the reproduction of machines rather than of living beings (I repeat: not “we have not experienced”, but “we cannot experience”). Even renowned scientists who believe AI may evolve into autonomous thinking beings start from very anthropocentric assumptions: they take for granted that artificial minds are basically similar to human minds, and to animal ones in general. They imagine advanced AI as a set of minds superior to ours, but still human-like. If this evolution of machines actually took place, the very concept of mind would become problematic. It is clear that many scientists and philosophers definitely lack imagination.


We have said that it is generally believed that it is enough to insert an extraordinary quantity of information and combinatorial rules into a machine to create superior minds. With this cognitivist image of the mind, we fail to realize that what makes the concept of “understanding” understandable for us humans is intimately connected to the fact that everyone understands differently. In fact, we can never be sure that the other person has really understood us, or that we have really understood the other person. Misunderstanding is a ubiquitous dimension of human communication. This is because each person has a unique genetic program and history, and only by approximation can we say that we understand each other. In a certain sense, we all talk to lions. What we call “experience” is always the result of an evolution that has “decided” what we can experience and what we cannot.


In short, what is fundamental in our relationship with others is not only what we communicate, but also what we do not communicate, which forms the background of our distance from the other. It is the abyssal solitude of the other that makes him or her human, an interlocutor. The fact that the other person may not understand us, or may not share our reasoning however incontrovertible it seems to us (something that happens every day), is proof that this other person has a mind – that this other person is, in part, inaccessible.



Eureka, Jean Tinguely, Zurich

Life, in short, is characterized by a constant imperfection. (11) AI, on the other hand, is too perfect. For example, we know that some 80% of our DNA has no function – it is genetic junk – and yet this useless junk is the site of future possibilities. Now, the concept of imperfection is teleological: something is imperfect in relation to its supposed purpose or function. If we design a jug or a chair, we try to make it as perfect as possible in relation to its function. This is not the case with biological processes. And so we must admit that living beings are imperfect with respect to their supposed communicative capacity. Communication theory distinguishes signals from noise. Now, just as most of our DNA is noise compared to the supposed efficiency of the signal, so in human communication not everything is a signal; indeed, a great part is communicative junk. Perhaps it is precisely this garbage, or noise, that constitutes the uncertainty that renders each one of us a subject.


We have talked about the need for ambiguity (for misunderstandings) in human communication. We will also recall that according to Heidegger’s Being and Time (at the opposite end of the Western philosophical tradition), misunderstanding, together with idle talk and curiosity, is one of the main expressions of inauthenticity. Why did Heidegger single out these three modes? What they have in common, in fact, is communicative inadequacy. With idle talk we are not communicating anything; rather, we are performing what linguistics calls the phatic function, that is, we maintain contact with the other. It is the case of two people talking about the weather: they are maintaining contact without actually saying anything. The same can be said of curiosity, which originates from the need to include in the communication something that has not yet been brought into it: it is a pure desire to communicate in which nothing has been communicated yet. These limits of communication are, however, essential for us to be able to speak of “subjects”.


6


The best science fiction literature has imagined beings who are neither minds nor objects, or perhaps both – entities that cannot, in short, be grasped by Cartesian dualism. For example, the novel Solaris by Stanislaw Lem (12) – which has been made into several films, notably the one by Andrei Tarkovsky – features a mostly unthinkable entity, a “sentient ocean”. When humans set foot on the planet Solaris, where this ocean is located, beings buried in the minds of the humans take concrete form: they become living replicants. Now, is it possible to say that the ocean of Solaris, with its almost psychoanalytic capacity to retrieve memories from the past, is a mind? Does it make sense to talk of minds in the case of entities such as these?

Perhaps machines that use us, that dominate us, already exist – the same way we dominate dogs or cats – and we do not realize this because we are not able to conceive this type of power. We do not have the concepts necessary to imagine it; in fact, the word “power” already says too much. Our intellect, as we have said, is the result of a very particular and extremely improbable configuration of things. Life itself is a highly unlikely order of chemical combinations. Certainly, our intellect allows us to imagine something that is unintelligible (as we are doing here), which, however, remains unintelligible. (Besides, we may say that religions and mystics have done exactly this, constantly guarding us against anthropocentrism, reminding us that there is something unintelligible, something we are never able to fully understand but that might indeed exist.) Our ability to understand is a function of our biological environment, our Umwelt. And the environment of a machine we have designed is not organic.


Some will say: why must we necessarily think that the concept of mind is inseparable from life, and therefore from instincts, from drives? Could we not isolate purely mental functions from living bodies, thoughts from drives? Today there are computers capable of beating the world champions of chess or go. When we play these logical games, do we not implement reasoning processes that can be reproduced in a machine? And yet even computers that are champions of chess simulate human reasoning; they do not reason like humans. In the same way, a bowl reproduces the way we collect water with our hands, though it is not the actual cup we form with our hands. Although it performs the function of hands, the bowl is not a human being with hands, just as a computer will never be a human mind. We could say at most that it is an a-human mind, because the concept of reasoning implies a certain idea of effort, a certain focus of the mind that ignores everything around it, which a computer will never have. Unless, once again, computers were to evolve in turn like living beings, retracing the same path as life.



7


The biologist Jakob von Uexküll (13) (on whose work Heidegger drew) called the world surrounding an organism its Umwelt, environment. In fact, the environment is what is relevant, significant, only and always for an organism of a given species: it is the set of traits that constitute a pertinent signal, capable of triggering certain prescribed, already written reactions in an organism. This means that species living in the same territory have different environments: each species has its own. This leads von Uexküll to formulate a Kantian statement, according to which no animal can enter into a relation with an object as such (als solchem).


Von Uexküll gave the example of Ixodes ricinus, the castor bean tick, an animal that reacts to only three external signals: (a) the smell of butyric acid, (b) a temperature of 37 °C, (c) a certain type of mammalian skin. Only these three factors are “carriers of meaning”. Nothing else is relevant for this arachnid: the rest is noise, not signal.
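In computational terms, the tick’s Umwelt is a filter: only three features of the world count as signals, and every other feature falls through as noise. A minimal sketch in Python (the dictionary keys are invented labels, not biology):

```python
# A toy Umwelt: the tick "perceives" only three carriers of meaning;
# every other feature of the world is noise and triggers nothing.

def tick_reacts(world: dict) -> bool:
    """Return True only if all three disinhibitors are present."""
    return (world.get("butyric_acid_smell", False)
            and world.get("temperature_celsius") == 37
            and world.get("mammal_skin", False))

# A world rich in features, almost all of them irrelevant to the tick:
world = {
    "butyric_acid_smell": True,
    "temperature_celsius": 37,
    "mammal_skin": True,
    "birdsong": True,          # noise for the tick
    "sunset_colour": "red",    # noise for the tick
}
print(tick_reacts(world))  # True: only the three signals counted
```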

In short, the naturalistic approach leads to a sceptical conclusion: in the universe that we suppose exists, we only grasp what constitutes a signal for our species, which triggers our reactions.


Quoting Uexküll, Heidegger (1995) defined this “carrier of significance” as a disinhibitor. (14) However, against the relativism implied by biology, Heidegger asserts the human difference: beyond the disinhibiting ring there is a Welt constructed by human beings. Heidegger’s is a classical humanist position, based on the clear separation of animality and humanity. The exceptional nature of Dasein, of being-there, with respect to animality consists in the fact that humans enter into relation, beyond the disinhibiting ring, with entities as such. Only human beings thematize the being of entities.


We will not follow the trajectory opened up by Heidegger. We will instead take for granted the naturalist point of view, according to which Homo sapiens is an animal just like any other. Thus, Homo sapiens too is structured in relation to its own Umwelt and has no access to the Welt, to the world as such – meaning that its understanding and thinking are always related to the environment. Inevitably, Homo sapiens anthropomorphizes everything, even and especially when practicing science. Science is not developed by angels or aliens, but by a determinate species, which reacts only to its own environment. We may travel to the moon, but biologically we will never step out of our environment. When astronauts saw the entire earth from the moon, they simply included the view of the earth in their environment. We may say that science is always unintentionally Ptolemaic.


Hence the stupidity of those thinkers who turn to aliens: “If a Martian came to earth, what would it think of…?” If other minds existed that had not emerged from terrestrial life, they would see things from a completely different point of view, one not even imaginable for us. Our knowledge, the knowledge we call objective, is not “alien”; it is intimately connected to our natural drives. Knowledge in this sense is not radically different from sexuality, for example: just as we appreciate certain erotic traits of others because we are driven by a sexual desire that is ultimately the effect of our genome, so we become aware of certain things in the world and not of others because this is how some of our biological predispositions are expressed. We cannot know what lies beyond our environment. One need only think of how differently we see things in childhood: when as adults we go back to a city we visited as children, we see something different. Considering how much our perception of the world changes in the course of a lifetime, it becomes clear how distant a “perception” not produced by life would be. Objective knowledge will never be released from the chains of life.


Yet, thanks to language, Homo sapiens is able to think something unthinkable: what I would call “the real” beyond its own Umwelt. In fact, by saying that we can only know our environment, I posit, ipso facto, something that does not belong to our environment, something that does not disinhibit us. How is it possible for human beings to think (to situate in the domain of being) something they themselves identify as unthinkable? And above all: does this real manifest itself to them in any way? Asked differently: does noise in any way behave like a signal for them? Furthermore, if noise behaves like a signal as such, does it not cease to be noise? It could be that noise manifests itself as a void in the environment, something that could be considered a non-being – perhaps something similar to what the Polynesians and Melanesians call mana. (15) Nonetheless, if noise is precisely that which does not disinhibit us (in the Heideggerian sense), how can we be disinhibited so as to say “there is a void in the world, an absence of signal”? In short, philosophy addresses a paradox, which borders on the unthinkable. Yet philosophy originates precisely because it measures itself against this unthinkable.



Ixodes scapularis – Deer Tick; Image credit: Bugguide.net

In fact, there was no need to wait for von Uexküll, because philosophy has thematized – obviously in a very different language – the division of what exists: a division between what is available to our experience (the Umwelt) and what is not, but whose being must be presupposed. Perhaps from its very beginning philosophy has attempted to solve this riddle. I think the ancient Greeks called this real beyond the environment ουσία (ousìa), literally “patrimony”. Hence the Platonic idea of a cave in which we see only shadows, that is, only what behaves as a signal for our species. Plato assumed that it was possible to leave the cave and see the real, ousìa, whereas today it is not generally believed possible to leave the cave and see real things directly. Yet we know we are in a cave, and that not only shadows exist.


I will put forward this hypothesis: what we call consciousness, qualia, the sense of an absolutely private experience, its inner quality – however one wants to call it – is something that behaves as noise in our world. At first sight this seems absurd, because our consciousness, our private sensations, our intimate mind, seem on the contrary to be what is most significant for us.


In his Philosophical Investigations, Wittgenstein developed a clear analysis of the status of the private world, which later became known as the Private Language Argument (see propositions 243–370) – an argument many consider rather a sophism. Wittgenstein maintains (to put it very succinctly) that we can certainly express our private world, but we cannot know it, because for Wittgenstein language is always public, that is (in our terminology), it is related to what behaves as a signal for our species. And we can only know what behaves as a signal for our species, what as such can be made public (communicated to members of the same species) through language. Still, this in no way implies (as many who misunderstand Wittgenstein believe) that what is private therefore does not count. On the contrary, what is private is essential, despite being unknowable. It is precisely what is private that renders us living minds, and not intelligent machines.


If someone says to me “I have toothache”, I take it for granted that he or she has a private experience of pain, provided this person is not lying or simulating. Importantly, his or her private experience is not accessible to me. I assume it as something that escapes meaning, but that also gives meaning to the sentence “I have toothache”. According to Wittgenstein, “I have toothache” is not the description of a mental object, knowable as such; rather, it is a way of expressing pain. And I can only know the expression of this pain. And we know that a pain can also go unexpressed. Therefore, it is not “I think therefore I am”, but “I cannot think what you are thinking, therefore you are”.


However, we must not think that what is inexpressible, private, consists in affects and emotions. As we all know, affects and private emotions behave as strong signals, no less than the things perceived. We are able to sense fear, joy, anger and desire even in animals, and we can empathize with them because of this. What does not behave like a signal is the subjectivity of affects and emotions, what we cannot know – which, however, remains in the background, the background we do not assume in the case of a machine. Unless it deceives us, by simulating this background.


Moreover, humans can, of course, simulate, just like computers. There are people blind from birth who talk about colours, or deaf people who talk about sounds. They can learn to use the names of colours or sounds correctly; indeed, a blind person can say that a red sunset is impressive, a deaf person that the song of a nightingale is beautiful. However, in saying these things it is as if they were simulating, because they are not imagining what they are talking about. In fact, we can never define the way we sense a colour or a sound in a way that could be conveyed through propositions: either we are capable of having these sensations, which will always be one’s own, or we are merely simulating the fact that we have them. The concept of a colour, in particular, always refers to something that will never be conceptualizable, and that also gives meaning to all the concepts of colour.



A machine can be programmed to give false information, but it cannot lie. Only Homo sapiens can lie, because in each individual we suppose a more or less wide gap between what he or she expresses and what he or she does not express; and the very idea of lying implies a thought that is not signalled, and that is therefore noise. It is this background of noise that makes us view other living beings as similar to us.


Now, as we have said, there is nothing easier than designing a robot that shakes and cries like someone in pain, and that may also say “I have toothache”. If it passes the Turing test, it will not be possible to distinguish its behaviour from that of a real living being. There is, however, a fundamental difference: I will never empathize with a robot’s pain behaviour, as we have already seen. So, what does a living being have that a machine will never have, even though the two behaviours might be identical? The fact that in the case of a living being something unknowable is signalled, something that is mere noise for the other: precisely his or her being someone, whom I will never be. And so, paradoxically, noise signals itself as such, as that of which I can only know the expression, not the being. This is the distance between a Turing machine and living intelligence, however elementary.


Every machine, being too perfect a mind, lacks a history. It responds well to all signals, but does not contain within itself that noise which makes us human, the unknowable that touches our hearts. In fact, AI is a construction of science, and science always distances noise, the real, returning it to us in the form of signals to which we can respond. But this same science is secretly polarized, attracted, drawn, I would say, by the real, and it moves towards it. For this reason, the contempt that a certain philosophical area (so-called “continental” philosophy) has for the scientific endeavour is unjustifiable, in that science is driven by the real even though it may not know it.


 

NOTES


1. See Irwin, W. (2002) ed. The Matrix and Philosophy: Welcome to the Desert of the Real. Chicago: Open Court Publishing.


2. See Parisi, D. (2014) Future Robots: Towards a Robotic Science of Human Beings. Amsterdam: John Benjamins.


3. Heidegger, M. (1971) “The Thing”, in Heidegger, M. Poetry, Language and Thought. New York: Harper & Row.


4. Searle, J. (1984) Minds, Brains and Science. Cambridge, MA: Harvard University Press.


5. Chomsky, N. (1957) Syntactic Structures. The Hague/Paris: Mouton.


6. Horgan, J. (1999). The Undiscovered Mind: How the Human Brain Defies Replication, Medication and Explanation. New York: Touchstone.


7. Leibniz, G. (1714) La monadologie. Edition établie par E. Boutroux. Paris: LGF, 1991.


8. Searle, J. (1984).


9. Nechvatal, J. (2018) “Before and Beyond the Bachelor Machine”. Arts, 7(4): 67.


10. Wittgenstein, L. (1953) Philosophical Investigations. Oxford: Blackwell Publishers, p. 255.


11. See Pievani, T. (2019) Imperfezione. Una storia naturale. Milan: Raffaello Cortina.


12. Lem, S. (2002) Solaris. Boston: Mariner.


13. Von Uexküll, J. (1957) “A Stroll Through the Worlds of Animals and Men: A Picture Book of Invisible Worlds”, in Instinctive Behavior: The Development of a Modern Concept. New York: International Universities Press, pp. 5–80.


14. Heidegger, M. (1995) The Fundamental Concepts of Metaphysics: World, Finitude, Solitude. Bloomington and Indianapolis: Indiana University Press; See also Agamben, G. (2004) The Open: Man and Animal. Stanford: Stanford University Press.


15. Keesing, R. (1984) "Rethinking mana". Journal of Anthropological Research, 40:137–156; Lévi-Strauss, C. (1987), F. Baker (translator) Introduction to the Work of Marcel Mauss. London: Routledge and Kegan Paul; Meylan, N. (2017) Mana: A History of a Western Category. Leiden: Brill.


