In part two of this three-part series, I explore a supposedly materialist interpretation of the theory of mind that is actually more akin to Cartesianism than to a strictly materialist worldview. I reflect on the proposition that the brain thinks, encodes, and remembers, and I attempt to convince the reader that this type of language is not merely incorrect but actually dangerous to science.
When we last left off, we were discussing how the vernacular of modern popular science appears to echo that of Cartesian dualism. Dualism itself found an explicit place in the field with the advent of modern scientific investigations into the inner workings of the brain in the early twentieth century. Three of the fathers of the field, Charles Sherrington and his two famous students, John Eccles and Wilder Penfield, advocated dualism because it seemed to them, based on their study of neurons and cortical structures, that the leap from neuron to mind was too great, and that there must be some intermediary or alternate explanation that imparted cognition and cogitation to us. While I disagree, that is not my concern here. It is better to own up to being a dualist, as they did, than to (unwittingly) ascribe a dualistic logic to the brain because it is too painful, and we are too proud, to simply admit that we don't fully understand the animal mind, nor have we developed the appropriate language to describe it.
We need look no further than the writings of Kandel and colleagues in Principles of Neural Science to find dualism hiding in plain sight. In it they assert that “[t]he aim of cognitive neural science is to examine classical philosophical and psychological questions about mental functions in the light of cell and molecular biology” [1]. While I agree with this statement, they later qualify it by stating that “[t]he major goal of cognitive neural science is to study the neural representations of mental acts,” and that “[t]he cognitive approach to behavior assumes that each perceptual or motor act has an internal representation in the brain” (emphasis theirs). They specify that internal representations must correspond to specific patterns of neural activity, so that “…an internal representation is a neural representation: a representation of neural activity.” This, at best, is tautological; at worst, it is incoherent. To further complicate matters, this is the textbook of record in the field, exposing ever more impressionable students to this type of language. Of course, to call something a neural representation sounds very academic, and also like something you don't quite understand but will go along with, because it must be too complicated for a non-scientist to fully grasp. I say it is perhaps totally incoherent because, when we consider what a “representation” in the brain might actually be, the idea quickly loses its luster.
To understand this we must first ask, “What is a representation (of any kind)?” It is a signifier of, or a surrogate for, another thing, as a map represents the streets of a city. For a representation to have any kind of meaning, we must be able to interpret it by some type of logic. Therefore, for the brain to “represent” something implies, first, that a thing (a memory, for example) is actively and systematically translated into a representation of a past experience, and second, that the representation must then be reinterpreted by the brain and translated into a usable form accessible to the individual. But this is certainly silly, because representing something requires the skills necessary to make an accurate representation, and those skills are learned. The brain, to represent anything, would therefore first have to acquire the skill of representing things in a systematic way, which leaves one pondering where, exactly, the representations of representations lie that would be required for the brain to acquire skills that are obviously antecedent to the organism's analogous skills. (I give a concrete sketch of this point after the quotation below.) As stated by Bennett and Hacker:
> Suppose…one was told that the Battle of Hastings was fought in 1066. What would count as a neural representation of this remembered fact? It is unclear whether…anything could count as a representation, short of an array of symbols belonging to a language. Nothing that one might find in the brain could possibly be a representation of the fact that one was told the Battle of Hastings was fought in 1066…But, of course, it may well be the case that but for certain neural configurations or strengths of synaptic connections, one would not be able to remember the date of the Battle of Hastings [2]. (Emphasis theirs.)
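To make the point concrete, consider a toy sketch in Python (my own illustration; the convention and names are arbitrary). A data structure “represents” the streets of a city only relative to a convention stipulated in advance and an interpreter that already knows that convention; the bytes by themselves signify nothing.

```python
# A representation is inert without an interpreter that knows the
# convention behind it. Convention (stipulated in advance): keys are
# street names, values are (x, y) positions on a grid.

city_map = {"High Street": (0, 0), "Mill Lane": (2, 1)}  # the "representation"

def locate(street: str) -> tuple[int, int]:
    """Interpret the representation under the agreed convention."""
    return city_map[street]

print(locate("Mill Lane"))  # (2, 1) -- meaningful only given the convention
```

Nothing about city_map makes it a map of anything; the stipulated convention, plus an interpreter trained in it, does the representing. That is precisely the regress the brain would face.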
We run into the same difficulty with so-called “neural encoding”, another topic often covered but rarely critiqued in neuroscience. It is the supposition of many neuroscientists, among them Gazzaniga and colleagues, that sensory information is processed and then stored in the brain as an encoded message [3]. This idea is again a computer-jargon surrogate for a coherent thought, and it amounts to saying nothing at all. To encode something, one must first have some information; then have a systematic way of translating that information into an alternate form (as in cryptography); have space to store the message (on a piece of paper or a disk drive, say); and finally have the ability to retrieve the message and the knowledge to decode it when it is called upon. Surely the brain does not do this as a man might when he wants to relate a story or directions to another person. Again, what is correct is to say that one has causal neural correlates without which one could not form or recall memories or learn to do tasks.
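To see how demanding those steps really are, here is a toy sketch in Python (again my own illustration, using a simple Caesar cipher; nothing here is claimed of the brain). Notice that both functions presuppose a scheme agreed upon in advance, without which the stored message is unrecoverable:

```python
# Toy illustration of what encoding actually requires: a pre-agreed
# scheme (here, a Caesar shift of 3) that must exist before any message
# can be encoded, and that the decoder must independently possess.

SHIFT = 3  # the scheme, known in advance to encoder and decoder alike

def encode(message: str) -> str:
    """Systematically translate information into an alternate form."""
    return "".join(
        chr((ord(c) - ord("a") + SHIFT) % 26 + ord("a")) if c.islower() else c
        for c in message
    )

def decode(stored: str) -> str:
    """Retrieval alone is useless without knowledge of the scheme."""
    return "".join(
        chr((ord(c) - ord("a") - SHIFT) % 26 + ord("a")) if c.islower() else c
        for c in stored
    )

storage = encode("the battle of hastings was fought in 1066")  # store it
print(decode(storage))  # retrieve and decode it
```

Every step here, devising the scheme, storing the message, retrieving it, and decoding it, is a learned, conventional activity; nothing we find in the brain is known to play these roles.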
To suggest that it is actually the brain that possesses cognitive abilities is what Bennett and Hacker have termed the “mereological fallacy”. In philosophy, mereology deals with the logic of part/whole relations. They point out that it is common in modern neuroscience to ascribe to the brain functions that can only be functions of an organism. For example, the brain does not think, it does not contain images, it cannot remember anything, and it certainly does not possess language skills. To suggest otherwise is, they contend, not an empirical error but a conceptual one: a misunderstanding of how to interpret neuroscientific data. Further, when we use such language we are not merely making a painfully bad metaphor; we are inhibiting good science from proceeding, because science generally proceeds by building on the conclusions of previous work. If that work has drawn useless conclusions from otherwise interesting data, how can we proceed expecting to learn something new and interesting about the world?
But weren't we talking about dualism? What does any of this have to do with dualism? After all, talking about the brain doing X still ascribes an essentially materialist worldview to cognition and consciousness. It is true that very few modern neuroscientists appeal to the supernatural, but logically this type of thinking is analogous to Descartes' view that there is a corporeal self and an immaterial self (i.e., the mind). If one replaces 'mind' with 'brain', the parallels between the two ideologies are easy to draw.
In Cartesian thought, the body is an automaton, taken out of its reflexive state only by the mind, which has free will and does essentially all the things a person does (sees, feels, etc.), except that it does so, somehow, in a disembodied manner. So too in neuroscience we are told that the brain thinks, searches for memories, contains images, and encodes messages. A brain, however, is no more sentient than a rock, and claiming that it does any of these things is no more intellectually sound than proclaiming that each task is performed by magic, which was (essentially) Descartes' argument.
It is my belief that computer jargon is used so pervasively in place of well-reasoned thought out of embarrassment at not knowing what to make of ourselves. It is, and has long been, the nature of scientists (biologists especially) to make many proclamations about that which they do not know. To illustrate this point, one need look no further than the term 'junk DNA' to see that language matters in science (a topic I will only touch on here but will cover in more detail in a future post). Junk DNA, which, as it turns out, has many functions, was cast aside as trash when it was discovered that most DNA does not encode proteins, and the result was that its study was inhibited for more than a decade. Why would one study 'junk'? We, as scientists, need to be much more careful in the proclamations we make and in the interpretation of our results. There is nothing wrong with not knowing something's true nature (after all, that is why we do science, no?). Pretending we know when clearly we do not will always ensure we stay blind to the truth, the exact opposite of our stated goals.
In the final part of this series I shall take stock of what we do know, share my own take on what it is to be conscious, and try to relate my thoughts on how the mind can be interpreted from a materialist point of view without resorting to dualism.
1. Kandel, E.R., Schwartz, J.H., and Jessell, T.M. Principles of Neural Science, 4th ed. New York: McGraw-Hill, 2000.
2. Bennett, M.R. and Hacker, P.M.S. Philosophical Foundations of Neuroscience. Malden, MA: Blackwell Publishing, 2003.
3. Gazzaniga, M.S., Ivry, R.B., and Mangun, G.R. Cognitive Neuroscience: The Biology of the Mind. New York: W.W. Norton, 1998.
This is a fantastic post, and one that I've had to read through a number of times to process some of the concepts I'm not familiar with. However, the idea that we store information in the brain like a computer does is one I have never been comfortable with (if this were so, I would be deleting some past encoded experiences!). I have also seen an increase in TV ads promoting apps or websites that encourage users to 'reprogram' their brains. I understand there have been some scientific developments in understanding neuroplasticity, but I think the jump straight to TV ads advising people that they can live happier lives via online games is somewhat pseudoscientific. I could go into a vast philosophical rant, but I'm keen to read your next post. Thanks for sharing such an in-depth article!
Thanks for reading! This, along with parts 1 and 3, was part of a blog series that I wrote for theadvancedapes.com some time ago. I never kept up with it because, well, blogging takes a lot of energy and time, neither of which I have enough of (or maybe I just don't have enough good ideas to keep my own and others' interest). My understanding of the world leads me to believe that we don't understand much of anything about the relationship of mind and brain. I study neural plasticity professionally, and I love it, but we shouldn't make the leap from synapse to mind so readily. There is a preponderance of evidence that disagrees with, and provides counterexamples to, the reigning theory of mind (which is basically that one's thoughts and memories depend on specific neural configurations). Anyway, if a vast philosophical rant is what this post inspires in you, I'd love to hear it!