In the final installment of this essay we reflect on why dualism has arisen among modern neuroscientists, and try to dispel the notion that to be conscious is to be aware of a ‘self’. Consciousness arises from sensory perception of one’s surroundings, not from “internal representations” of one’s perception of the world.
In the first two parts of this essay, I have tried to convince the reader that the language of modern neuroscience is rooted in, and continues to support, a dualist point of view. We have seen examples from Kandel, Kurzweil, and Gazzaniga that all point to dualism being alive and well in the 21st century. This stems from a misconception about what we mean when we say ‘I’. To quote Wittgenstein: “Only of a human being and what resembles (behaves like) a human being can one say: it has sensations; it sees; is blind; hears; is deaf; is conscious or unconscious” [1]. If we take this as a reasonable position, then the immediately obvious question is: what “behaves like” a human being?
This is an empirical question, with very little, if any, need to appeal to the metaphysical. I am not certain how far Wittgenstein was willing to go in answering this question himself, but it seems to me that the quintessential thing humans do is search for food and mates. But isn’t this true of all vagile creatures? Quite simply, yes. In this sense, all organisms that have a nervous system can be said to be conscious, as consciousness is the state of being aware of one’s surroundings. There can be no doubt that an ant is aware of a distant crumb; after all, ants travel relatively great distances to pinpoint the location of food, and they do so neither by accident nor by providence. They have the ability to detect and act, just as we do.
In fact, consciousness of the world is the entire point of a nervous system. One need only look at which organisms possess a nervous system to understand this. Almost without exception, animals that have nervous systems are those that move about their environment. Sessile animals and plants do not have a nervous system because they have no use for one. Neurons and other neural cells are among the most energy-intensive cells in most creatures; in humans, the brain consumes up to 25% of the body’s energy while comprising only about 2% of its mass. It is obviously not energetically favorable for a plant to maintain a nervous system. Consider the urochordates (more commonly known as sea squirts), a primitive subphylum of Chordata, the phylum to which vertebrates also belong: they have a vagile larval stage and a sessile adult stage. During its larval stage, a sea squirt has a central nervous system and resembles a chordate in almost every respect. Upon settling into its sessile adult form, however, it digests its own nervous system and does without one for the remainder of its life, which it lives out filtering the water for particles of food.
I am, of course, not suggesting that all forms of consciousness are qualitatively similar. A snail can ‘remember’ pain and react to anticipated painful stimuli accordingly, but it cannot plan for the future, perform complex tasks, or (probably) feel happy or sad. A crow, on the other hand, can perform complex tasks, but lacks other features that we commonly associate with consciousness. Does this mean that a crow is partly conscious? Obviously, this is a ridiculous proposition. A crow cannot reflect on its ability to use tools and perform complex tasks, but this is simply a matter of language. Language is what sets human consciousness apart from that of other animals, and it has also been a confounding factor in our search for what it means to be conscious.
According to Bennett and Hacker, the conception of ‘self’ is a wholly misconceived one, one that has needlessly tormented philosophy for centuries and is now pervasive in modern neuroscience, where it has no business being. They state [2]:
So, in their opinion (and indeed mine), ‘self’ is a misused term, insofar as you possess no ‘self’ distinct from the body that is you. Dualism in its modern form was conceived from Descartes’ proposition, “I think, therefore I am.” This phrase, though known to all, is often misunderstood in popular culture. What Descartes meant was that he could conceive of everything around him as being a figment of his imagination, or worse, a figment of someone else’s, perhaps even some evil being’s, imagination. He reasoned that the world might be illusory and that he could not prove otherwise. However, he was also sure that he must exist, because if he did not, he would not have the ability to reflect on whether he existed. Thus, he reasoned that he could deny his body was real, but he could not deny his mind was real. Therefore, the mind must be immaterial, as material things (body included) are fleeting and may not even exist. Hence, he appealed to dualism to solve his conundrum: “I think, therefore I am.” (‘I’ in this case specifically signifies the immaterial ‘I’, not ‘me’ as an extant being.)
But what of modern neuroscience? Have we not rejected this view? No, apparently we have not. The dualism we have been edging toward throughout the first two parts of this essay becomes obvious in the modern construction of ‘I’ or ‘self’. Let’s look back at Kandel et al.’s textbook. They speculate that [3]:
Several aspects of this are certainly not correct and can only be explained by a dualistic view (even though, incidentally, earlier in the same passage they say, “Most neuroscientists now take for granted that all biological phenomena, including consciousness, are properties of matter. This physicalist stance breaks with the tradition of dualism...”). First, it is specious to think that different brain regions “compare information”. I am not sure exactly what is meant by that, but I can assure you that regions of the brain cannot compare information; comparing is a faculty of a human being, who may compare, say, works of art.
Second, and most importantly, they ascribe consciousness to a mechanism that requires all aspects of human cognition, insofar as distinguishing self from nonself requires the ability to reason with language. This is problematic because it automatically implies that no creatures other than humans are conscious, and, furthermore, that humans were not conscious until we invented the idea of ‘self’, a philosophical construct. Therefore, without philosophy, we would lack consciousness. I am sure many philosophers would be flattered, but this is surely nonsense. Moreover, since the vast majority of neuroscientific knowledge has been gleaned from animal studies, it is utterly foolish to claim that animals are not conscious (as would certainly be the case if one needs a sense of self, i.e. self-consciousness, to be conscious at all). There is no doubt that this is a dualist interpretation of the world: for a species to be unconscious one moment and endowed with consciousness the next implies that the Creator himself must have intervened. No materialist view of the universe can support this version of reality.
Bennett and Hacker argue that this has arisen from a conceptual confusion: we have made a grammatical mistake and turned it into a scientific one, and the ‘self’, as it is commonly talked about, is an illusion.
We don’t ‘have an I’. That is, we don’t have some inner being that we can scan, or which remembers, or that has subjective experiences. We are indissoluble beings, and all that we remember, do, say, think, or feel is a function of us as whole beings, not of an ‘ego’ or a ‘self’ or an ‘I’. This is the ultimate source of all our confusion, a grammatical miscalculation made by Descartes centuries ago. Debunked by Hume, it somehow persists today, tainting much of what cognitive neuroscientists study.
Stated simply: philosophy matters. The above examples, however brief, show that when one starts from a fundamental philosophical error, one cannot hope to come to a reasonable conclusion. It is thus my contention that neuroscience education should include a stiff shot of philosophy. If we are to understand the world materialistically, as is the stated goal of most natural scientists, then we must act from a materialist stance. Otherwise, all the data in the world cannot help us.
1. Wittgenstein, L. and G.E.M. Anscombe, Philosophical investigations: the German text, with a revised English translation. 3rd ed. 2003, Malden, MA: Blackwell Pub. x, 246 p.
2. Bennett, M.R. and P.M.S. Hacker, Philosophical foundations of neuroscience. 2003, Malden, MA: Blackwell Pub. xvi, 461 p.
3. Kandel, E.R., J.H. Schwartz, and T.M. Jessell, Principles of neural science. 4th ed. 2000, New York: McGraw-Hill, Health Professions Division. xli, 1414 p.