The Mind/Brain Identity Theory
The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain. Strictly speaking, it need not hold that the mind is identical to the brain. Idiomatically we do use ‘She has a good mind’ and ‘She has a good brain’ interchangeably but we would hardly say ‘Her mind weighs fifty ounces’. Here I take identifying mind and brain as being a matter of identifying processes and perhaps states of the mind and brain. Consider an experience of pain, or of seeing something, or of having a mental image. The identity theory of mind is to the effect that these experiences just are brain processes, not merely correlated with brain processes.
Some philosophers hold that though experiences are brain processes they nevertheless have fundamentally non-physical, psychical, properties, sometimes called ‘qualia’. Here I shall take the identity theory as denying the existence of such irreducible non-physical properties. Some identity theorists give a behaviouristic analysis of mental states, such as beliefs and desires, but others, sometimes called ‘central state materialists’, say that mental states are actual brain states. Identity theorists often describe themselves as ‘materialists’ but ‘physicalists’ may be a better word. That is, one might be a materialist about mind but nevertheless hold that there are entities referred to in physics that are not happily described as ‘material’.
In taking the identity theory (in its various forms) as a species of physicalism, I should say that this is an ontological, not a translational physicalism. It would be absurd to try to translate sentences containing the word ‘brain’ or the word ‘sensation’ into sentences about electrons, protons and so on. Nor can we so translate sentences containing the word ‘tree’. After all ‘tree’ is largely learned ostensively, and is not even part of botanical classification. If we were small enough a dandelion might count as a tree. Nevertheless a physicalist could say that trees are complicated physical mechanisms. The physicalist will deny strong emergence in the sense of some philosophers, such as Samuel Alexander and possibly C.D. Broad. The latter remarked (Broad 1937) that as far as was known at that time the properties of common salt cannot be deduced from the properties of sodium in isolation and of chlorine in isolation. (He put it too epistemologically: chaos theory shows that even in a deterministic theory physical consequences can outrun predictability.) Of course the physicalist will not deny the harmless sense of ‘emergence’ in which an apparatus is not just a jumble of its parts (Smart 1981).
1. Historical Antecedents
2. The Nature of the Identity Theory
3. Phenomenal Properties and Topic-Neutral Analyses
4. Causal Role Theories
5. Functionalism and Identity Theory
6. Type and Token Identity Theories
7. Consciousness
8. Later Objections to the Identity Theory
Other Internet Resources
Related Entries

1. Historical Antecedents
The identity theory as I understand it here goes back to U.T. Place and Herbert Feigl in the 1950s. Materialism itself has a much longer history: it was embraced by philosophers and scientists such as Leucippus, Hobbes, La Mettrie, and d'Holbach, and by Karl Vogt who, following Pierre-Jean-Georges Cabanis, made the preposterous remark (perhaps not meant to be taken too seriously) that the brain secretes thought as the liver secretes bile. However, here I shall date interest in the identity theory from the pioneering papers ‘Is Consciousness a Brain Process?’ by U.T. Place (Place 1956) and ‘The "Mental" and the "Physical"’ by H. Feigl (Feigl 1958). Nevertheless mention should be made of earlier suggestions by Rudolf Carnap (1932, p. 127), H. Reichenbach (1938) and M. Schlick (1935). Reichenbach said that mental events can be identified by the corresponding stimuli and responses, much as the (possibly unknown) internal state of a photo-electric cell can be identified by the stimulus (light falling on it) and response (electric current flowing) from it. In both cases the internal states can be physical states. Carnap, however, regarded the identity as a linguistic recommendation rather than as asserting a matter of fact. See his ‘Herbert Feigl on Physicalism’ in Schilpp (1963), especially p. 886. The psychologist E.G. Boring (1933) may well have been the first to use the term ‘identity theory’. See Place (1990).
Place's very original and pioneering paper was written after discussions at the University of Adelaide with J.J.C. Smart and C.B. Martin. For recollections of Martin's contributions to the discussion see Place (1989) ‘Low Claim Assertions’ in Heil (1989). Smart at the time argued for a behaviourist position in which mental events were elucidated purely in terms of hypothetical propositions about behaviour, as well as first person reports of experiences which Gilbert Ryle regarded as ‘avowals’. Avowals were thought of as mere pieces of behaviour, as if saying that one had a pain was just doing a sophisticated sort of wince. Smart saw Ryle's theory as friendly to physicalism though that was not part of Ryle's motivation. Smart hoped that the hypotheticals would ultimately be explained by neuroscience and cybernetics. Being unable to refute Place, and recognizing the unsatisfactoriness of Ryle's treatment of inner experience, to some extent recognized by Ryle himself (Ryle 1949, p. 240), Smart soon became converted to Place's view (Smart 1959). In this he was also encouraged and influenced by Feigl's ‘The "Mental" and the "Physical"’ (Feigl 1958, 1967). Feigl's wide-ranging contribution covered many problems, including those connected with intentionality, and he introduced the useful term ‘nomological danglers’ for the dualists' supposed mental-physical correlations. They would dangle from the nomological net of physical science and should strike one as implausible excrescences on the fair face of science. Feigl (1967) contains a valuable ‘Postscript’.
Place spoke of constitution rather than of identity. One of his examples is ‘This table is an old packing case’. Another is ‘lightning is an electric discharge’. Indeed this latter was foreshadowed by Place in his earlier paper ‘The Concept of Heed’ (Place 1954), in which he took issue with Ryle's behaviourism as it applied to concepts of consciousness, sensation and imagery. Place remarked (p. 255)
The logical objections which might be raised to the statement ‘consciousness is a process in the brain’ are no greater than the logical objections which might be raised to the statement ‘lightning is a motion of electric charges’.
It should be noticed that Place was using the word ‘logical’ in the way that it was used at Oxford at the time, not in the way that it is normally used now. One objection was that ‘sensation’ does not mean the same as ‘brain process’. Place's reply was to point out that ‘this table’ does not mean the same as ‘this old packing case’ and ‘lightning’ does not mean the same as ‘motion of electric charges’. We find out whether this is a table in a different way from the way in which we find out that it is an old packing case. We find out whether a thing is lightning by looking and that it is a motion of electric charges by theory and experiment. This does not prevent the table being identical to the old packing case and the perceived lightning being nothing other than an electric discharge. Feigl and Smart put the matter more in terms of the distinction between meaning and reference. ‘Sensation’ and ‘brain process’ may differ in meaning and yet have the same reference. ‘Very bright planet seen in the morning’ and ‘very bright planet seen in the evening’ both refer to the same entity Venus. (Of course these expressions could be construed as referring to different things, different sequences of temporal stages of Venus, but not necessarily or most naturally so.)
There did seem to be a tendency among philosophers to think that identity statements needed to be necessary and a priori truths. However identity theorists have treated ‘sensations are brain processes’ as contingent. We had to find out that the identity holds. Aristotle, after all, thought that the brain was for cooling the blood. Descartes thought that consciousness is immaterial.
It was sometimes objected that sensation statements are incorrigible whereas statements about brains are corrigible. The inference was made that there must be something different about sensations. Ryle and in effect Wittgenstein toyed with the attractive but quite implausible notion that ostensible reports of immediate experience are not really reports but are ‘avowals’, as if my report that I have toothache is just a sophisticated sort of wince. Place, influenced by Martin, was able to explain the relative incorrigibility of sensation statements by their low claims: ‘I see a bent oar’ makes a bigger claim than ‘It looks to me that there is a bent oar’. Nevertheless my sensation and my putative awareness of the sensation are distinct existences and so, by Hume's principle, it must be possible for one to occur without the other. One should deny anything other than a relative incorrigibility (Place 1989).
As remarked above, Place preferred to express the theory by the notion of constitution, whereas Smart preferred to make prominent the notion of identity as it occurs in the axioms of identity in logic. So Smart had to say that if sensation X is identical to brain process Y then if Y is between my ears and is straight or circular (absurdly to oversimplify) then the sensation X is between my ears and is straight or circular. Of course it is not presented to us as such in experience. Perhaps only the neuroscientist could know that it is straight or circular. The professor of anatomy might be identical with the dean of the medical school. A visitor might know that the professor hiccups in lectures but not know that the dean hiccups in lectures.
Someone might object that the dean of the medical school does not qua dean hiccup in lectures. Qua dean he goes to meetings with the vice-chancellor. This is not to the point but there is a point behind it. This is that the property of being the professor of anatomy is not identical with the property of being the dean of the medical school. The question might be asked whether, even if sensations are identical with brain processes, there are not introspected non-physical properties of sensations that are not identical with properties of brain processes. How would a physicalist identity theorist deal with this? The answer (Smart 1959) is that the properties of experiences are ‘topic neutral’. Smart adapted the words ‘topic-neutral’ from Ryle, who used them to characterise words such as ‘if’, ‘or’, ‘and’, ‘not’, ‘because’. If you overheard only these words in a conversation you would not be able to tell whether the conversation was one of mathematics, physics, geology, history, theology, or any other subject. Smart used the words ‘topic neutral’ in the narrower sense of being neutral between physicalism and dualism. For example ‘going on’, ‘occurring’, ‘intermittent’, ‘waxing’, ‘waning’ are topic neutral. So is ‘me’ in so far as it refers to the utterer of the sentence in question. Thus to say that a sensation is caused by lightning or the presence of a cabbage before my eyes leaves it open as to whether the sensation is non-physical as the dualist believes or is physical as the materialist believes. This sentence also is neutral as to whether the properties of the sensation are physical or whether some of them are irreducibly psychical. To see how this idea can be applied to the present purpose let us consider the following example.
Suppose that I have a yellow, green and purple striped mental image. We may also introduce the philosophical term ‘sense datum’ to cover the case of seeing or seeming to see something yellow, green and purple: we say that we have a yellow, green and purple sense datum. That is I would see or seem to see, for example, a flag or an array of lamps which is green, yellow and purple striped. Suppose also, as seems plausible, that there is nothing yellow, green and purple striped in the brain. Thus it is important for identity theorists to say (as indeed they have done) that sense data and images are not part of the furniture of the world. ‘I have a green sense datum’ is really just a way of saying that I see or seem to see something that really is green. This move should not be seen as merely an ad hoc device, since Ryle and J.L. Austin, in effect Wittgenstein, and others had provided arguments, as when Ryle argued that mental images were not a sort of ghostly picture postcard. Place characterised the fallacy of thinking that when we perceive something green we are perceiving something green in the mind as ‘the phenomenological fallacy’. He characterizes this fallacy (Place 1956):
the mistake of supposing that when the subject describes his experience, when he describes how things look, sound, smell, taste, or feel to him, he is describing the literal properties of objects and events on a peculiar sort of internal cinema or television screen, usually referred to in the modern psychological literature as the ‘phenomenal field’.
Of course, as Smart recognised, this leaves the identity theory dependent on a physicalist account of colour. His early account of colour (1961) was too behaviourist, and could not deal, for example, with the reversed spectrum problem, but he later gave a realist and objectivist account (Smart 1975). Armstrong had been realist about colour but Smart worried that if so colour would be a very idiosyncratic and disjunctive concept, of no cosmic importance, of no interest to extraterrestrials (for instance) who had different visual systems. Prompted by Lewis in conversation Smart came to realize that this was no objection to colours being objective properties.
One first gives the notion of a normal human percipient with respect to colour for which there are objective tests in terms of ability to make discriminations with respect to colour. This can be done without circularity. Thus ‘discriminate with respect to colour’ is a more primitive notion than is that of colour. (Compare the way that in set theory ‘equinumerous’ is antecedent to ‘number’.) Then Smart elucidated the notion of colour in terms of the discriminations with respect to colour of normal human percipients in normal conditions (say cloudy Scottish daylight). This account of colour may be disjunctive and idiosyncratic. (Maxwell's equations might be of interest to Alpha Centaurians but hardly our colour concepts.) Anthropocentric and disjunctive they may be, but objective none the less. David R. Hilbert (1987) identifies colours with reflectances, thus reducing the idiosyncrasy and disjunctiveness. A few epicycles are easily added to deal with radiated light, the colours of rainbows or the sun at sunset and the colours due to diffraction from feathers. John Locke was on the right track in making the secondary qualities objective as powers in the object, but erred in making these powers to be powers to produce ideas in the mind rather than to make behavioural discriminations. (Also Smart would say that if powers are dispositions we should treat the secondary qualities as the categorical bases of these powers, e.g. in the case of colours properties of the surfaces of objects.) Locke's view suggested that the ideas have mysterious qualia observed on the screen of an internal mental theatre. However to do Locke justice he does not talk in effect of ‘red ideas’ but of ‘ideas of red’. Philosophers who elucidate ‘is red’ in terms of ‘looks red’ have the matter the wrong way round (Smart 1995).
Let us return to the issue of us having a yellow, purple and green striped sense datum or mental image and yet there being no yellow, purple and green striped thing in the brain. The identity theorist (Smart 1959) can say that sense data and images are not real things in the world: they are like the average plumber. Sentences ostensibly about the average plumber can be translated into, or elucidated in terms of, sentences about plumbers. So also there are havings of green sense data or images but no sense data or images themselves, and the having of a green sense datum or image is not itself green. So it can, so far as this goes, easily be a brain process, which is not green either.
Thus Place (1956, p. 49):
When we describe the after-image as green... we are saying that we are having the sort of experience which we normally have when, and which we have learned to describe as, looking at a green patch of light.
and Smart (1959) says:
When a person says ‘I see a yellowish-orange after-image’ he is saying something like this: "There is something going on which is like what is going on when I have my eyes open, am awake, and there is an orange illuminated in good light in front of me".
Quoting these passages, David Chalmers (1996, p. 360) objects that if ‘something is going on’ is construed broadly enough it is inadequate, and if it is construed narrowly enough to cover only experiential states (or processes) it is not sufficient for the conclusion. Smart would counter this by stressing the word ‘typically’. Of course a lot of things go on in me when I have a yellow after-image (for example my heart is pumping blood through my brain). However they do not typically go on then: they go on at other times too. Against Place Chalmers says that the word ‘experience’ is unanalysed and so Place's analysis is insufficient to establish an identity between sensations and brain processes. As against Smart he says that leaving the word ‘experience’ out of the analysis renders it inadequate. That is, he does not accept the ‘topic-neutral’ analysis. Smart hopes, and Chalmers denies, that the account in terms of ‘typically’ saves the topic-neutral analysis. In defence of Place one might perhaps say that it is not clear that the word ‘experience’ cannot be given a topic neutral analysis, perhaps building on Farrell (1950). If we do not need the word ‘experience’, neither do we need the word ‘mental’. Rosenthal (1994) complains (against the identity theorist) that experiences have some characteristically mental properties, and that ‘We inevitably lose the distinctively mental if we construe these properties as neither physical nor mental’. Of course to be topic neutral is to be compatible with both the physical and the mental, just as arithmetic is. There is no need for the word ‘mental’ itself to occur in the topic neutral formula. ‘Mental’, as Ryle (1949) suggests, is in its ordinary use a rather grab-bag term (‘mental arithmetic’, ‘mental illness’, etc.) with which an identity theorist finds no trouble.
4. Causal Role Theories

In their accounts of mind, David Lewis and D.M. Armstrong emphasise the notion of causality. Lewis's 1966 paper was a particularly clear-headed presentation of the identity theory, in which he says (I here refer to the reprint in Lewis 1983, p. 100):
My argument is this: The definitive characteristic of any (sort of) experience as such is its causal role, its syndrome of most typical causes and effects. But we materialists believe that these causal roles which belong by analytic necessity to experiences belong in fact to certain physical states. Since these physical states possess the definitive character of experiences, they must be experiences.
Similarly, Robert Kirk (1999) has argued for the impossibility of zombies. If the supposed zombie has all the behavioural and neural properties ascribed to it by those who argue from the possibility of zombies against materialism, then the zombie is conscious and so not a zombie.
Thus there is no need for the explicit appeal to Ockham's Razor that is made in Smart (1959), though not in Place (1956). (See Place 1960.) Lewis's paper was extremely valuable and already there are hints of a marriage between the identity theory of mind and so-called ‘functionalist’ ideas that are explicit in Lewis 1972 and 1994. In his 1972 (‘Psychophysical and Theoretical Identifications’) he applies ideas from his more formal paper ‘How to Define Theoretical Terms’ (1970). Folk psychology contains words such as ‘sensation’, ‘perceive’, ‘belief’, ‘desire’, ‘emotion’, etc. which we recognise as psychological. Words for colours, smells, sounds, tastes and so on also occur. One can regard common sense platitudes containing both these sorts of words as constituting a theory, taking the psychological words as theoretical terms of common sense psychology and thus as denoting whatever entities or sorts of entities uniquely realise the theory. Then if certain neural states do so too (as we believe) then the mental states must be these neural states. In his 1994 he allows for tact in extracting a consistent theory from common sense. One cannot uncritically collect platitudes, just as in producing a grammar, implicit in our speech patterns, one must allow for departures from what on our best theory would constitute grammaticality.
A great advantage of this approach over the early identity theory is its holism. Two features of this holism should be noted. One is that the approach is able to allow for the causal interactions between brain states and processes themselves, as well as in the case of external stimuli and responses. Another is the ability to draw on the notion of Ramseyfication of a theory. F.P. Ramsey had shown how to replace the theoretical terms of a theory such as ‘the property of being an electron’ by ‘the property X such that...’, so that when this is done for all the theoretical terms, we are left only with ‘property X such that’, ‘property Y such that’ etc. Take the terms describing behaviour as the observation terms and psychological terms as the theoretical ones of folk psychology. Then Ramseyfication shows that folk psychology is compatible with materialism. This seems right, though perhaps the earlier identity theory deals more directly with reports of immediate experience.
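Ramsey's construction can be sketched schematically as follows (a simplified illustration of the idea, not Lewis's full formulation):

```latex
% Write folk psychology as one long conjunction of platitudes, with its
% psychological terms $t_1,\ldots,t_n$ displayed and its behavioural
% (observational) vocabulary left untouched:
T(t_1,\ldots,t_n)
% Ramseyfication replaces each theoretical term by a bound variable:
\exists x_1 \cdots \exists x_n \, T(x_1,\ldots,x_n)
% Lewis's further step: if neural states $N_1,\ldots,N_n$ uniquely
% realise the theory, each psychological term denotes the corresponding
% neural state, and the identity theory follows:
T(N_1,\ldots,N_n) \;\wedge\; \text{(uniqueness)} \;\Longrightarrow\; t_i = N_i
```

The Ramsey sentence in the middle line mentions only behavioural vocabulary and bound variables, which is why it is compatible with materialism: nothing in it says whether the realising states are physical or not.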
The causal approach was also characteristic of D.M. Armstrong's careful conceptual analysis of mental states and processes, such as perception and the secondary qualities, sensation, consciousness, belief, desire, emotion, voluntary action, in his A Materialist Theory of the Mind (1968a) with a second edition (1993) containing a valuable new preface. Parts I and II of this book are concerned with conceptual analysis, paving the way for a contingent identification of mental states and processes with material ones. As had Brian Medlin, in an impressive critique of Ryle and defence of materialism (Medlin 1967), Armstrong preferred to describe the identity theory as ‘Central State Materialism’. Independently of Armstrong and Lewis, Medlin's central state materialism depended, as theirs did, on a causal analysis of concepts of mental states and processes. See Medlin 1967, and 1969 (including endnote 1).
Mention should particularly be made here of two of Armstrong's other books, one on perception (1961) and one on bodily sensations (1962). Armstrong thought of perception as coming to believe by means of the senses (compare also Pitcher 1971). This combines the advantages of Direct Realism with hospitality towards the scientific causal story which had been thought to have supported the earlier representative theory of perception. Armstrong regarded bodily sensations as perceptions of states of our body. Of course the latter may be mixed up with emotional states, as an itch may include a propensity to scratch, and contrariwise in exceptional circumstances pain may be felt without distress. However, Armstrong sees the central notion here as that of perception. This suggests a terminological problem. Smart had talked of visual sensations. These were not perceptions but something which occurred in perception. So in this sense of ‘sensation’ there should be bodily sensation sensations. The ambiguity could perhaps be resolved by using the word ‘sensing’ in the context of ‘visual’, ‘auditory’, ‘tactile’ and ‘bodily’, so that bodily sensations would be perceivings which involved introspectible ‘sensings’. These bodily sensations are perceptions and there can be misperceptions, as when a person whose foot has been amputated thinks that he has a pain in the foot. He has a sensing ‘having a pain in the foot’ but the world does not contain a pain in the foot, just as it does not contain sense data or images but does contain havings of sense data and of images.
Armstrong's central state materialism involved identifying beliefs and desires with states of the brain (1968a). Smart came to agree with this. On the other hand Place resisted the proposal to extend the identity theory to dispositional states such as beliefs and desires. He stressed that we do not have privileged access to our beliefs and desires. Like Ryle he thought of beliefs and desires as to be elucidated by means of hypothetical statements about behaviour and gave the analogy of the horsepower of a car (Place 1967). However he held that the dispute here is not so much about the neural basis of mental states as about the nature of dispositions. His views on dispositions are argued at length in his debate with Armstrong and Martin (Armstrong, Martin and Place, T. Crane (ed.) 1996). Perhaps we can be relaxed about whether mental states such as beliefs and desires are dispositions or are topic neutrally described neurophysiological states and return to what seems to be the more difficult issue of consciousness. Causal identity theories are closely related to Functionalism, to be discussed in the next section. Smart had been wary of the notion of causality in metaphysics believing that it had no place in theoretical physics. However even so he should have admitted it in folk psychology and also in scientific psychology and biology generally, in which physics and chemistry are applied to explain generalisations rather than strict laws. If folk psychology uses the notion of causality, it is no matter if it is what Quine has called second grade discourse, involving the very contextual notions of modality.
5. Functionalism and Identity Theory

It has commonly been thought that the identity theory has been superseded by a theory called ‘functionalism’. It could be argued, however, that functionalists greatly exaggerate their difference from identity theorists. Indeed some philosophers, such as Lewis (1972 and 1994) and Jackson, Pargetter and Prior (1982), have seen functionalism as a route towards an identity theory.
Like Lewis and Armstrong, functionalists define mental states and processes in terms of their causal relations to behaviour but stop short of identifying them with their neural realisations. Of course the term ‘functionalism’ has been used vaguely and in different ways, and it could be argued that even the theories of Place, Smart and Armstrong were at bottom functionalist. The word ‘functionalist’ has affinities with that of ‘function’ in mathematics and also with that of ‘function’ in biology. In mathematics a function is a set of ordered n-tuples. Similarly if mental processes are defined directly or indirectly by sets of stimulus-response pairs the definitions could be seen as ‘functional’ in the mathematical sense. However there is probably a closer connection with the term as it is used in biology, as one might define ‘eye’ by its function even though a fly's eye and a dog's eye are anatomically and physiologically very different. Functionalism identifies mental states and processes by means of their causal roles, and as noted above in connection with Lewis, we know that the functional roles are possessed by neural states and processes. (There are teleological and homuncular forms of functionalism, which I do not consider here.) Nevertheless an interactionist dualist such as the eminent neurophysiologist Sir John Eccles would (implausibly for most of us) deny that all functional roles are so possessed. One might think of folk psychology, and indeed much of cognitive science too, as analogous to a ‘block diagram’ in electronics. A box in the diagram might be labelled (say) ‘intermediate frequency amplifier’ while remaining neutral as to the exact circuit and whether the amplification is carried out by a thermionic valve or by a transistor. Using the terminology of F. Jackson and P. Pettit (1988, pp. 381–400) the ‘role state’ would be given by ‘amplifier’, the ‘realiser state’ would be given by ‘thermionic valve’, say.
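The role/realiser distinction in the block-diagram analogy can be made concrete in code. The sketch below (class and function names are invented purely for illustration) specifies the surrounding ‘circuit’ only in terms of the role state, while physically quite different realiser states fill that role:

```python
# Illustrative sketch of Jackson and Pettit's role state / realiser
# state distinction, using the amplifier analogy from the text.
from abc import ABC, abstractmethod


class Amplifier(ABC):
    """The role state: anything whose causal job is to amplify a signal."""

    @abstractmethod
    def amplify(self, signal: float) -> float: ...


class ValveAmplifier(Amplifier):
    """One realiser: a thermionic valve."""

    def amplify(self, signal: float) -> float:
        return signal * 10.0


class TransistorAmplifier(Amplifier):
    """A physically different realiser occupying the same causal role."""

    def amplify(self, signal: float) -> float:
        return signal * 10.0


def receiver_output(amp: Amplifier, signal: float) -> float:
    # The rest of the 'circuit' is specified only in role terms.
    return amp.amplify(signal)


print(receiver_output(ValveAmplifier(), 0.5))       # 5.0
print(receiver_output(TransistorAmplifier(), 0.5))  # 5.0
```

A valve and a transistor are as physically unlike as a carbon-based brain and a silicon-based one, yet each satisfies the role; this is the multiple realisability that the functionalist stresses.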
So we can think of functionalism as a ‘black box’ theory. This line of thought will be pursued in the next section.
Thinking very much in causal terms about beliefs and desires fits in very well not only with folk psychology but also with Humean ideas about the motives of action. Though this point of view has been criticised by some philosophers it does seem to be right, as can be seen if we consider a possible robot aeroplane designed to find its way from Melbourne to Sydney. The designer would have to include an electronic version of something like a map of south-eastern Australia. This would provide the ‘belief’ side. One would also have to program in an electronic equivalent of ‘go to Sydney’. This program would provide the ‘desire’ side. If wind and weather pushed the aeroplane off course then negative feedback would push the aeroplane back on to the right course for Sydney. The existence of purposive mechanisms has at last (I hope) shown philosophers that there is nothing mysterious about teleology. Nor are there any great semantic problems over intentionality (with a ‘t’). Consider the sentence ‘Joe desires a unicorn’. This is not like ‘Joe kicks a football’. For Joe to kick a football there must be a football to be kicked, but there are no unicorns. However we can say ‘Joe desires-true of himself "possesses a unicorn"’. Or more generally ‘Joe believes-true S’ or ‘Joe desires-true S’ where S is an appropriate sentence (Quine 1960, pp. 206–16). Of course if one does not want to relativise to a language one needs to insert ‘or some samesayer of S’ or use the word ‘proposition’, and this involves the notion of proposition or intertranslatability. Even if one does not accept Quine's notion of indeterminacy of translation, there is still fuzziness in the notions of ‘belief’ and ‘desire’ arising from the fuzziness of ‘analyticity’ and ‘synonymy’. The identity theorist could say that on any occasion this fuzziness is matched by the fuzziness of the brain state that constitutes the belief or desire. Just how many interconnections are involved in a belief or desire?
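The robot aeroplane can be sketched as a simple negative feedback loop, with the goal playing the ‘desire’ role and the tracked position the ‘belief’ role. All numbers here are invented for illustration; real autopilots are of course far more elaborate:

```python
# Minimal sketch of a negative feedback loop: disturbances (wind gusts)
# push the plane off course, and each correction moves it a fraction of
# the remaining error back toward the goal.

def correct_course(position: float, goal: float, gain: float = 0.5) -> float:
    """Close a fixed fraction of the gap between position and goal."""
    error = goal - position
    return position + gain * error

position, goal = 0.0, 100.0
for step in range(20):
    position = correct_course(position, goal)
    position += (-1) ** step * 2.0   # alternating wind gusts blow us off course

print(abs(goal - position) < 5.0)   # True: feedback keeps us near the goal
```

The point of the example is that nothing here is mysterious: the ‘purpose’ of reaching the goal is just the causal structure of the loop.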
On a holistic account such as Lewis's one need not suppose that individuation of beliefs and desires is precise, even though good enough for folk psychology and Humean metaethics. Thus the way in which the brain represents the world might not be like a language. The representation might be like a map. A map relates every feature on it to every other feature. Nevertheless maps contain a finite amount of information. They have not infinitely many parts, still less continuum many. We can think of beliefs as expressing the different bits of information that could be extracted from the map. Thinking in this way beliefs would correspond near enough to the individualist beliefs characteristic of folk and Humean psychology.
6. Type and Token Identity Theories

The notions of ‘type’ and ‘token’ here come by analogy from ‘type’ and ‘token’ as applied to words. A telegram ‘love and love and love’ contains only two type words but in another sense, as the telegraph clerk would insist, it contains five words (‘token words’). Similarly a particular pain (more exactly a having a pain) according to the token identity theory is identical to a particular brain process. A functionalist could agree to this. Functionalism came to be seen as an improvement on the identity theory, and as inconsistent with it, because of the correct assertion that a functional state can be realised by quite different brain states: thus a functional state might be realised by a silicon based brain as well as by a carbon based brain, and leaving robotics or science fiction aside, my feeling of toothache could be realised by a different neural process from what realises your toothache.
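The telegram example can be stated in a few lines of code, counting token words and type words:

```python
# The telegraph clerk counts token words; the type count collapses
# repeated occurrences of the same word into one.
telegram = "love and love and love"
token_words = telegram.split()
type_words = set(token_words)

print(len(token_words))  # 5 token words
print(len(type_words))   # 2 type words: 'love' and 'and'
```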
As far as this goes a functionalist can at any rate accept token identities. Functionalists commonly deny type identities. However Jackson, Pargetter and Prior (1982) and Braddon-Mitchell and Jackson (1996) argue that this is an over-reaction on the part of the functionalist. (Indeed they see functionalism as a route to the identity theory.) The functionalist may define a mental state as the having of some state or other (e.g., carbon based or silicon based) which accounts for the functional properties. The functionalist second order state is a state of having some first order state or other which causes or is caused by the behaviour to which the functionalist alludes. In this way we have a second order type theory. Compare brittleness. The brittleness of glass and the brittleness of biscuits are both the state of having some property which explains their breaking, though the first order physical property may be different in the two cases. This way of looking at the matter is perhaps more plausible in relation to mental states such as beliefs and desires than in relation to immediately reported experiences. When I report a toothache I do seem to be concerned with first order properties, even though topic neutral ones.
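The second order idea can be caricatured in code. The classes and the causal tests below are my own illustrative inventions, not anything from Jackson, Pargetter and Prior: two quite different first order states (a ‘carbon based’ one and a ‘silicon based’ one) both count as being of the same second order type because each is a state of having some state or other with the right causal profile.

```python
# Illustrative caricature (mine, not the authors'): a second order type is the
# having of some first order state or other with a given causal profile.

class CarbonPain:
    """First order realizer: the state consists in c-fibres firing."""
    def __init__(self):
        self.c_fibres_firing = True          # the first order state itself
    def caused_by_damage(self):
        return self.c_fibres_firing
    def causes_wincing(self):
        return self.c_fibres_firing

class SiliconPain:
    """A quite different first order realizer: a register holds a flag."""
    def __init__(self):
        self.register = 0xFF                 # a different first order state
    def caused_by_damage(self):
        return self.register != 0
    def causes_wincing(self):
        return self.register != 0

def has_pain_role(state):
    """Second order test: is there some state playing the causal role?"""
    return state.caused_by_damage() and state.causes_wincing()

# Different first order types, same second order (functional) type
print(has_pain_role(CarbonPain()))   # True
print(has_pain_role(SiliconPain()))  # True
```

Compare the brittleness example: glass and biscuits pass the same second order test (‘has some property which explains breaking’) in virtue of different first order properties.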
If we continue to concern ourselves with first order properties, we could say that the type-token distinction is not an all or nothing affair. We could say that human experiences are brain processes of one lot of sorts and Alpha Centaurian experiences are brain processes of another lot of sorts. We could indeed propose much finer classifications without going to the limit of mere token identities.
How restrictive should a restricted type theory be? How many hairs, at most, may a bald man have? An identity theorist would expect his toothache today to be very similar to his toothache yesterday. He would expect his toothache to be quite similar to his wife's toothache. He would expect his toothache to be somewhat similar to his cat's toothache. He would not be confident about similarity to an extra-terrestrial's pain. Even here, however, he might expect some similarities of wave form or the like.
Even in the case of the similarity of my pain now to my pain ten minutes ago there will be unimportant dissimilarities, as there will be between my pain and your pain. Compare topiary, making use of an analogy exploited by Quine in a different connection. In English country gardens the tops of box hedges are often cut in various shapes, for example peacock shapes. One might make generalizations about peacock shapes on box hedges, and one might say that all the imitation peacocks on a particular hedge have the same shape. However if we approach two imitation peacocks and peer into them to note the precise shapes of the twigs that make them up we will find differences. Whether we say that two things are similar or not is a matter of abstractness of description. If we were to go to the limit of concreteness the types would shrink to single membered types, but there would still be no ontological difference between identity theory and functionalism.
An interesting form of token identity theory is the anomalous monism of Davidson 1980. Davidson argues that causal relations occur under the neural descriptions but not under the descriptions of psychological language. The latter descriptions use intentional predicates, but because of indeterminacy of translation and of interpretation, these predicates do not occur in law statements. It follows that mind-brain identities can occur only on the level of individual (token) events. It would be beyond the scope of the present essay to consider Davidson's ingenious approach, since it differs importantly from the more usual forms of identity theory.
Place answered the question ‘Is Consciousness a Brain Process?’ in the affirmative. But what sort of brain process? It is natural to feel that consciousness has something ineffable about it which no mere neurophysiological process (with only physical intrinsic properties) could have. There is a challenge to the identity theorist to dispel this feeling.
Suppose that I am riding my bicycle from my home to the university. Suddenly I realise that I have crossed a bridge over a creek, gone along a twisty path for half a mile, avoided oncoming traffic, and so on, and yet have no memories of all this. In one sense I was conscious: I was perceiving, getting information about my position and speed, the state of the bicycle track and the road, the positions and speeds of approaching cars, the width of the familiar narrow bridge. But in another sense I was not conscious: I was on ‘automatic pilot’. So let me use the word ‘awareness’ for this automatic or subconscious sort of consciousness. Perhaps I am not one hundred percent on automatic pilot. For one thing I might be absent-minded and thinking about philosophy. Still, this would not be relevant to my bicycle riding. One might indeed wonder whether one is ever one hundred percent on automatic pilot, and perhaps one hopes that one isn't, especially in Armstrong's example of the long distance truck driver (Armstrong 1962). Still it probably does happen, and if it does the driver is conscious only in the sense that he or she is alert to the route, to oncoming traffic, etc., i.e. is perceiving in the sense of ‘coming to believe by means of the senses’. The driver gets the beliefs but is not aware of doing so. There is no suggestion of ineffability in this sense of ‘consciousness’, for which I shall reserve the term ‘awareness’.
For the full consciousness, the one that puzzles us and suggests ineffability, we need the sense elucidated by Armstrong in a debate with Norman Malcolm (Armstrong and Malcolm 1962, p. 110). Somewhat similar views have been expressed by other philosophers, such as Savage (1976), Dennett (1991), Lycan (1996), Rosenthal (1996). A recent presentation of it is in Smart (2004). In the debate with Norman Malcolm, Armstrong compared consciousness with proprioception. A case of proprioception occurs when with our eyes shut and without touch we are immediately aware of the angle at which one of our elbows is bent. That is, proprioception is a special sense, different from that of bodily sensation, in which we become aware of parts of our body. Now the brain is part of our body and so perhaps immediate awareness of a process in, or a state of, our brain may here for present purposes be called ‘proprioception’, even though the neuroanatomy is different. Thus the proprioception which constitutes consciousness, as distinguished from mere awareness, is a higher order awareness, a perception of one part of (or configuration in) our brain by the brain itself. Some may sense circularity here. If so let them suppose that the proprioception occurs in an in practice negligible time after the process propriocepted. Then perhaps there can be proprioceptions of proprioceptions, proprioceptions of proprioceptions of proprioceptions, and so on up, though in fact the sequence will probably not go up more than two or three steps. The last proprioception in the sequence will not be propriocepted, and this may help to explain our sense of the ineffability of consciousness. Compare Gilbert Ryle in The Concept of Mind on the systematic elusiveness of ‘I’ (Ryle 1949, pp. 195–8).
Place has argued that the function of the ‘automatic pilot’, to which he refers as ‘the zombie within’, is to alert consciousness to inputs which it identifies as problematic, while it ignores non-problematic inputs or re-routes them to output without the need for conscious awareness. For this view of consciousness see Place (1999).
Mention should here be made of influential criticisms of the identity theory by Saul Kripke and David Chalmers respectively. It will not be possible to discuss them in great detail, partly because Kripke's remarks rely on views about modality, possible worlds semantics, and essentialism which some philosophers would want to contest, and partly because Chalmers' long and rich book would deserve a lengthy answer. Kripke (1980) calls an expression a rigid designator if it refers to the same object in every possible world. Or in counterpart theory it would have an exactly similar counterpart in every possible world. It seems to me that what we count as counterparts is highly contextual. Take the example ‘water is H₂O’. In another world, or in a twin earth in our world as Putnam imagines (1975), the stuff found in rivers, lakes, and the sea would not be H₂O but XYZ and so would not be water. This is certainly giving preference to real chemistry over folk chemistry, and so far I applaud this. There are therefore contexts in which we say that on twin earth or the envisaged possible world the stuff found in rivers would not be water. Nevertheless there are contexts in which we could envisage a possible world (write a science fiction novel) in which being found in rivers and lakes and the sea, assuaging thirst and sustaining life was more important than the chemical composition, and so XYZ would be the counterpart of H₂O.
Kripke considers the identity ‘heat = molecular motion’, and holds that this is true in every possible world and so is a necessary truth. Actually the proposition is not quite true, for what about radiant heat? What about heat as defined in classical thermodynamics, which is ‘topic neutral’ compared with statistical thermodynamics? Still, suppose that heat has an essence and that it is molecular motion, or at least is in the context envisaged. Kripke says (1980, p. 151) that when we think that molecular motion might exist in the absence of heat we are confusing this with thinking that the molecular motion might have existed without being felt as heat. He asks whether it is analogously possible, if pain is a certain sort of brain process, that it might have existed without being felt as pain. He suggests that the answer is ‘No’. An identity theorist who accepted the account of consciousness as a higher order perception could answer ‘Yes’. We might be aware of a damaged tooth and also of being in an agitation condition (to use Ryle's term for emotional states) without being aware of our awareness. An identity theorist such as Smart would prefer talk of ‘having a pain’ rather than of ‘pain’: pain is not part of the furniture of the world any more than a sense datum or the average plumber is. Kripke concludes (p. 152) that the
apparent contingency of the connection between the mental state and the corresponding brain state thus cannot be explained by some sort of qualitative analogue as in the case of heat.
Smart would say that there is a sense in which the connection of sensations (sensings) and brain processes is only half contingent. A complete description of the brain state or process (including its causes and effects) would imply the report of inner experience, but the latter, being topic neutral and so very abstract, would not imply the neurological description.
Chalmers (1996) in the course of his exhaustive study of consciousness developed a theory of non-physical qualia which to some extent avoids the worry about nomological danglers. The worry, expressed by Smart (1959), is that if there were non-physical qualia there would, most implausibly, have to be laws relating neurophysiological processes to apparently simple properties, and the correlation laws would have to be fundamental, mere danglers from the nomological net (as Feigl called it) of science. Chalmers counters this by supposing that the qualia are not simple but are made up of simple proto-qualia as yet unknown to us, and that the fundamental laws relating these to physical entities relate them to fundamental physical entities. His view comes to a rather interesting panpsychism. On the other hand if the topic neutral account is correct, then qualia are no more than points in a multidimensional similarity space, and the overwhelming plausibility will fall on the side of the identity theorist.
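The phrase ‘points in a multidimensional similarity space’ can be given a minimal arithmetical illustration. The three dimensions and all the coordinates below are invented for the purpose; the only idea carried over from the text is that similarity relations between experiences are relational, topic neutral facts.

```python
# Invented illustration: experiences as points in a multidimensional
# similarity space, with similarity judged by distance.
import math

def distance(p, q):
    """Euclidean distance: the smaller the distance, the greater the similarity."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Invented coordinates on three invented dimensions of the space
scarlet = (0.95, 0.80, 0.60)
crimson = (0.90, 0.75, 0.55)
navy    = (0.10, 0.60, 0.30)

# A scarlet experience is more like a crimson one than like a navy one: a
# relational fact that is silent on whether the points are physical states.
print(distance(scarlet, crimson) < distance(scarlet, navy))  # True
```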
On Chalmers' view how are we aware of non-physical qualia? It has been suggested above that this inner awareness is proprioception of the brain by the brain. But what sort of story is possible in the case of awareness of a quale? Chalmers could have some sort of answer to this by means of his principle of coherence according to which the causal neurological story parallels the story of succession of qualia. It is not clear however that this would make us aware of the qualia. The qualia do not seem to be needed in the physiological story of how an antelope avoids a tiger.
People often think that even if a robot could scan its own perceptual processes this would not mean that the robot was conscious. This appeals to our intuitions, but perhaps we could reverse the argument and say that because the robot can be aware of its awareness the robot is conscious. I have given reason above to distrust intuitions, but in any case Chalmers comes some of the way in that he toys with the idea that a thermostat has a sort of proto-qualia. The dispute between identity theorists (and physicalists generally) and Chalmers comes down to our attitude to phenomenology. Certainly walking in a forest, seeing the blue of the sky, the green of the trees, the red of the track, one may find it hard to believe that our qualia are merely points in a multidimensional similarity space. But perhaps that is what it is like (to use a phrase that can be distrusted) to be aware of a point in a multidimensional similarity space. One may also, as Place would suggest, be subject to ‘the phenomenological fallacy’. At the end of his book Chalmers makes some speculations about the interpretation of quantum mechanics. If they succeed then perhaps we could envisage Chalmers' theory as integrated into physics and him as a physicalist after all. However it could be doubted whether we need to go down to the quantum level to understand consciousness or whether consciousness is relevant to quantum mechanics.
- Armstrong, D.M., 1961, Perception and the Physical World , London: Routledge.
- –––, 1961, Bodily Sensations , London: Routledge.
- –––, 1962, ‘Consciousness and Causality’, and ‘Reply’, in D.M. Armstrong and N. Malcolm, Consciousness and Causality , Oxford: Blackwell.
- –––, 1968a, A Materialist Theory of the Mind , London: Routledge; second edition, with new preface, 1993.
- –––, 1968b, ‘The Headless Woman Illusion and the Defence of Materialism’, Analysis , 29: 48–49.
- –––, 1999, The Mind-Body Problem: An Opinionated Introduction , Boulder, CO: Westview Press.
- Armstrong, D.M., Martin, C.B. and Place, U.T., 1996, Dispositions: A Debate , T. Crane (ed.), London: Routledge.
- Braddon-Mitchell, D. and Jackson, F., 1996, Philosophy of Mind and Cognition , Oxford: Blackwell.
- Broad, C.D., 1937, The Mind and its Place in Nature , London: Routledge and Kegan Paul.
- Campbell, K., 1984, Body and Mind , Notre Dame, IN: University of Notre Dame Press.
- Carnap, R., 1932, ‘Psychologie in Physikalischer Sprache’, Erkenntnis , 3: 107–142. English translation in A.J. Ayer (ed.), Logical Positivism , Glencoe, IL: Free Press, 1959.
- –––, 1963, ‘Herbert Feigl on Physicalism’, in Schilpp 1963, pp. 882–886.
- Chalmers, D.M., 1996, The Conscious Mind , New York: Oxford University Press.
- Clark, A., 1993, Sensory Qualities , Oxford: Oxford University Press.
- Davidson, D., 1980, ‘Mental Events’, ‘The Material Mind’ and ‘Psychology as Part of Philosophy’, in D. Davidson, Essays on Actions and Events , Oxford: Clarendon Press.
- Dennett, D.C., 1991, Consciousness Explained , Boston: Little and Brown.
- Farrell, B.A., 1950, ‘Experience’, Mind , 50: 170–198.
- Feigl, H., 1958, ‘The “Mental” and the “Physical”’, in H. Feigl, M. Scriven and G. Maxwell (eds.), Concepts, Theories and the Mind-Body Problem (Minnesota Studies in the Philosophy of Science, Volume 2), Minneapolis: University of Minnesota Press; reprinted with a Postscript in Feigl 1967.
- –––, 1967, The ‘Mental’ and the ‘Physical’, The Essay and a Postscript , Minneapolis: University of Minnesota Press.
- Heil, J., 1989, Cause, Mind and Reality: Essays Honoring C.B. Martin , Dordrecht: Kluwer Academic Publishers.
- Hilbert, D.R., 1987, Color and Color Perception: A Study in Anthropocentric Realism , Stanford: CSLI Publications.
- Hill, C.S., 1991, Sensations: A Defense of Type Materialism , Cambridge: Cambridge University Press.
- Jackson, F., 1998, ‘What Mary didn't know’, and ‘Postscript on qualia’, in F. Jackson, Mind, Method and Conditionals , London: Routledge.
- Jackson, F. and Pettit, P., 1988, ‘Functionalism and Broad Content’, Mind , 97: 381–400.
- Jackson, F., Pargetter, R. and Prior, E., 1982, ‘Functionalism and Type-Type Identity Theories’, Philosophical Studies , 42: 209–225.
- Kirk, R., 1999, ‘Why There Couldn't be Zombies’, Proceedings of the Aristotelian Society (Supplementary Volume), 73: 1–16.
- Kripke, S., 1980, Naming and Necessity , Cambridge, MA: Harvard University Press.
- Levin, M.E., 1979, Metaphysics and the Mind-Body Problem , Oxford: Clarendon Press.
- Lewis, D., 1966, ‘An Argument for the Identity Theory’, Journal of Philosophy , 63: 17–25.
- –––, 1970, ‘How to Define Theoretical Terms’, Journal of Philosophy , 67: 427–446.
- –––, 1972, ‘Psychophysical and Theoretical Identifications’, Australasian Journal of Philosophy , 50: 249–258.
- –––, 1983, ‘Mad Pain and Martian Pain’ and ‘Postscript’, in D. Lewis, Philosophical Papers (Volume 1), Oxford: Oxford University Press.
- –––, 1989, ‘What Experience Teaches’, in W. Lycan (ed.), Mind and Cognition , Oxford: Blackwell.
- –––, 1994, ‘Reduction of Mind’, in S. Guttenplan (ed.), A Companion to the Philosophy of Mind , Oxford: Blackwell.
- Lycan, W.G., 1996, Consciousness and Experience , Cambridge, MA: MIT Press.
- Medlin, B.H., 1967, ‘Ryle and the Mechanical Hypothesis’, in C.F. Presley (ed.), The Identity Theory of Mind , St. Lucia, Queensland: Queensland University Press.
- –––, 1969, ‘Materialism and the Argument from Distinct Existences’, in J.J. MacIntosh and S. Coval (eds.), The Business of Reason , London: Routledge and Kegan Paul.
- Pitcher, G., 1971, A Theory of Perception , Princeton, NJ: Princeton University Press.
- Place, U.T., 1954, ‘The Concept of Heed’, British Journal of Psychology , 45: 243–255.
- –––, 1956, ‘Is Consciousness a Brain Process?’, British Journal of Psychology , 47: 44–50.
- –––, 1960, ‘Materialism as a Scientific Hypothesis’, Philosophical Review , 69: 101–104.
- –––, 1967, ‘Comments on Putnam's “Psychological Predicates”’, in W.H. Capitan and D.D. Merrill (eds.), Art, Mind and Religion , Pittsburgh: Pittsburgh University Press.
- –––, 1988, ‘Thirty Years on–Is Consciousness still a Brain Process?’, Australasian Journal of Philosophy , 66: 208–219.
- –––, 1989, ‘Low Claim Assertions’, in J. Heil (ed.), Cause, Mind and Reality: Essays Honoring C.B. Martin , Dordrecht: Kluwer Academic Publishers.
- –––, 1990, ‘E.G. Boring and the Mind-Brain Identity Theory’, British Psychological Society, History and Philosophy of Science Newsletter , 11: 20–31.
- –––, 1999, ‘Connectionism and the Problem of Consciousness’, Acta Analytica , 22: 197–226.
- –––, 2004, Identifying the Mind , New York: Oxford University Press.
- Putnam, H., 1960, ‘Minds and Machines’, in S. Hook (ed.), Dimensions of Mind , New York: New York University Press.
- –––, 1975, ‘The Meaning of “Meaning”’, in H. Putnam, Mind, Language and Reality , Cambridge: Cambridge University Press.
- Quine, W.V.O., 1960, Word and Object , Cambridge, MA: MIT Press.
- Reichenbach, H., 1938, Experience and Prediction , Chicago: University of Chicago Press.
- Rosenthal, D.M., 1994, ‘Identity Theories’, in S. Guttenplan (ed.), A Companion to the Philosophy of Mind , Oxford: Blackwell, pp. 348–355.
- –––, 1996, ‘A Theory of Consciousness’, in N. Block, O. Flanagan, and G. Güzeldere (eds.), The Nature of Consciousness , Cambridge, MA: MIT Press.
- Ryle, G., 1949, The Concept of Mind , London: Hutchinson.
- Savage, C.W., 1976, ‘An Old Ghost in a New Body’, in G.G. Globus, G. Maxwell and I. Savodnik (eds.), Consciousness and the Brain , New York: Plenum Press.
- Schilpp, P.A. (ed.), 1963, The Philosophy of Rudolf Carnap , La Salle, IL: Open Court.
- Schlick, M., 1935, ‘De la Relation des Notions Psychologiques et des Notions Physiques’, Revue de Synthese , 10: 5–26; English translation in H. Feigl and W. Sellars (eds.), Readings in Philosophical Analysis , New York: Appleton-Century Crofts, 1949.
- Smart, J.J.C., 1959, ‘Sensations and Brain Processes’, Philosophical Review , 68: 141–156.
- –––, 1961, ‘Colours’, Philosophy , 36: 128–142.
- –––, 1963, ‘Materialism’, Journal of Philosophy , 60: 651–662.
- –––, 1975, ‘On Some Criticisms of a Physicalist Theory of Colour’, in Chung-ying Cheng (ed.), Philosophical Aspects of the Mind-Body Problem , Honolulu: University of Hawai‘i Press.
- –––, 1978, ‘The Content of Physicalism’, Philosophical Quarterly , 28: 339–341.
- –––, 1981, ‘Physicalism and Emergence’, Neuroscience , 6: 109–113.
- –––, 1995, ‘“Looks Red” and Dangerous Talk’, Philosophy , 70: 545–554.
- –––, 2004, ‘Consciousness and Awareness’, Journal of Consciousness Studies , 11: 41–50.
- Identity Theories , an incomplete paper by U.T. Place, published in the Field Guide to Philosophy of Mind
consciousness | functionalism
I would like to express my thanks to David Armstrong, Frank Jackson and Ullin Place for comments on an earlier draft of this article and David Chalmers for careful editorial suggestions.
Copyright © 2007 by J. J. C. Smart
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab , Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054
2: Personal Identity and the Mind-Body Problem
- Golden West College via NGE Far Press
In philosophy, the matter of personal identity deals with such questions as, "What makes it true that a person at one time is the same thing as a person at another time?" or "What kind of things are we, persons?" The term "identity" in "personal identity" refers to "numerical identity," where saying that X and Y are numerically identical just means that X and Y are the same thing. Personal identity is not the same as personality, though some theories of personal identity maintain that continuity of personality may be required for one to persist through time. In order to answer questions about persistence, such as the question of under what conditions a person does or does not continue to exist, contemporary philosophers often seek first to answer questions about what sort of things we are, most fundamentally. Many people claim we are animals, or organisms, but many others strongly believe that no person can exist without mental traits, such as consciousness. Since an organism can exist without consciousness, both these views cannot be true (if we are organisms we can exist without being conscious; but if we can't exist without consciousness, we are not organisms). Thus, in order to determine whether certain features (such as consciousness) are crucial to a person's continued existence, it may be important to first ask what sort of things we are.
Generally, personal identity is the unique numerical identity of a person in the course of time. That is, the necessary and sufficient conditions under which a person at one time and a person at another time can be said to be the same person, persisting through time.
In contemporary metaphysics, the matter of personal identity is referred to as the diachronic problem of personal identity. The synchronic problem concerns the question of what features and traits characterize a person at a given time. Enquiry into the nature of identity is common to both Continental philosophy and Analytic philosophy. Continental philosophy deals with conceptually maintaining identity when confronted by different philosophic propositions, postulates, and presuppositions about the world and its nature.
- 2.1: Hoops of Steel
- 2.2: The Devil Himself
- 2.3: Removing “identity” from “persons”: Derek Parfit. Reasons and Persons is a philosophical work by Derek Parfit, first published in 1984. It focuses on ethics, rationality and personal identity. His views on personal identity transformed how it is understood and used in philosophy, especially ethics.
- 2.4: The Ship of Theseus. The ship of Theseus, also known as Theseus' paradox, is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object. The paradox is most notably recorded by Plutarch in Life of Theseus from the late first century. Plutarch asked whether a ship that had been restored by replacing every single wooden part remained the same ship.
- 2.5: This Above All
- 2.6: Descartes’ Meditations - Cartesian Dualism
Chapter 3: The mind-body problem
The mind-body problem
Matthew Van Cleave
Introduction: A pathway through this chapter
What is the relationship between the mind and the body? In contemporary philosophy of mind, there are a myriad of different, nuanced accounts of this relationship. Nonetheless, these accounts can be seen as falling into two broad categories: dualism and physicalism. According to dualism, the mind cannot be reduced to a merely physical thing, such as the brain. The mind is a wholly different kind of thing than physical objects. One simple way a dualist might try to make this point is the following: although we can observe your brain (via all kinds of methods of modern neuroscience), we cannot observe your mind. Your mind seems inaccessible to third-person observation (that is, to people other than you) in a way that your brain isn't. Although neuroscientists could observe activation patterns in your brain via functional magnetic resonance imaging, they could not observe your thoughts. Your thoughts seem to be accessible only in the first person—only you can know what you are thinking or feeling directly. Insofar as others can know this, they can only know it indirectly, through your behaviors (including what you say and how you act). Readers of previous chapters will recognize that dualism is the view held by the 17th century philosopher René Descartes, and that I have referred to in earlier chapters as the Cartesian view of mind. In contrast with dualism, physicalism is the view that the mind is not a separate, wholly different kind of thing from the rest of the physical world. The mind is constituted by physical things. For many physicalists, the mind just is the brain. We may not yet understand how the mind/brain works, but the spirit of physicalism is often motivated by something like Ockham's razor: the principle that, all other things being equal, the simplest explanation is the best explanation. Physicalists think that all mind-related phenomena can be explained in terms of the functioning of the brain.
So a theory that posits both the brain and another sui generis entity (a nonphysical mind or mental properties) violates Ockham’s razor: it posits two kinds of entities (brains and minds) whereas all that is needed to explain the relevant phenomena is one (brains).
The mind-body problem is best thought of not as a single problem but as a set of problems that attach to different views of the mind. For physicalists, the mind-body problem is the problem of explaining how conscious experience can be nothing other than a brain activity—what has been called “ the hard problem .” For dualists, the mind-body problem manifests itself as “ the interaction problem ”—the problem of explaining how nonphysical mental phenomena relate to or interact with physical phenomena, such as brain processes. Thus, the mind-body problem is that no matter which view of the mind you take, there are deep philosophical problems. The mind, no matter how we conceptualize it, seems to be shrouded in mystery. That is the mind-body problem. Below we will explore different strands of the mind-body problem, with an emphasis on physicalist attempts to explain the mind. In an era of neuroscience, it seems increasingly plausible that the mind is in some sense identical to the brain. But there are two putative properties of minds—especially human minds—that appear to be recalcitrant to physicalist explanations. The two properties of minds that we will focus on in this chapter are “original intentionality” (the mind’s ability to have meaningful thoughts) and “qualia” (the qualitative aspects of our conscious experiences).
We noted above the potential use of Ockham's razor as an argument in favor of physicalism. However, this simplicity argument works only if physicalism can explain all of the relevant properties of the mind. A common tactic of the dualist is to argue that physicalism cannot explain all of the important aspects of the mind. We can view several of the famous arguments we will explore in this chapter—the “Chinese room” argument, Nagel's “what is it like to be a bat” argument, and Jackson's “knowledge argument”—as manifestations of this tactic. If the physicalist cannot explain aspects of the mind like “original intentionality” and “qualia” then the simplicity argument fails. In contrast, a tactic of physicalists is either to try to meet this explanatory challenge or to deny that these properties ultimately exist. This latter tactic can be clearly seen in Daniel Dennett's responses to these challenges to physicalism, since he denies that original intentionality and qualia ultimately exist. This kind of eliminativist strategy, if successful, would keep the Ockham's razor simplicity argument in place.
Representation and the mind
One aspect of mind that needs explaining is how the mind is able to represent things. Consider the fact that I can think about all kinds of different things— about this textbook I am trying to write, about how I would like some Indian food for lunch, about my dog Charlie, about how I wish I were running in the mountains right now. Medieval philosophers referred to the mind as having intentionality —the curious property of “aboutness”—that is, the property of an object to be able to be about some other object. In a certain sense, the mind seems to function kind of like a mirror does—it reflects things other than itself. But unlike a mirror, whose reflected images are not inherently meaningful, minds seem to have what contemporary philosopher John Searle calls “ original intentionality .” In contrast, the mirror has only “ derived intentionality ”—its image is meaningful only because something else gives it meaning or sees it as meaningful. Another thing that has derived intentionality is words, for example the word “tree.” “Tree” refers to trees, of course, but it is not as if the physical marks on a page inherently refer to trees. Rather, human beings who speak English use the word “tree” to refer to trees. Spanish speakers use the word “arbol” to refer to trees. But in neither case do those physical marks on the page (or sound waves in the air, in the case of spoken words) inherently mean anything. Rather, those physical phenomena are only meaningful because a human mind is representing those physical phenomena as meaningful. Thus, words are only meaningful because a human mind represents them in a meaningful way. Although we speak of the word itself as carrying meaning, this meaning has only derived intentionality. In contrast, the human mind has original intentionality because only the mind is the ultimate creator of meaningful representations. 
We can explain the meaningfulness of words in terms of thoughts, but then how do we explain the meaningfulness of the thoughts themselves? This is what philosophers are trying to explain when they investigate the representational aspect of mind.
There are many different attempts to explain what mental representation is, but we will only cursorily consider some fairly rudimentary ideas as a way of building up to a famous thought experiment that challenges a whole range of physicalist accounts of mental representation. Let’s start with a fairly simple, straightforward idea—that of mental images. Perhaps what my mind does when it represents my dog Charlie is create a mental image of Charlie. This account seems to fit our first-person experience, at least in certain cases, since many people would describe their thoughts in terms of images in their mind. But whatever a mental image is, it cannot be like a physical image, because physical images require interpretation in terms of something else. When I’m representing my dog Charlie, it can’t be that my thoughts about Charlie just are some kind of image or picture of Charlie in my head, because that picture would require a mind to interpret it! But if the mental image is supposed to be the thing that has “original intentionality,” and yet our explanation requires some other thing with original intentionality in order to interpret it, then the mental image isn’t really the thing that has original intentionality—the thing interpreting the image is. A problem looms here that threatens to drive the mental image view of mental representation into incoherence: the object in the world is represented by a mental image, but that mental image itself requires interpretation in terms of something else. It would be problematic for the mental image proponent to say that there is some other inner “understander” that interprets the mental image. For how does this inner understander understand? By virtue of another mental image in its “head”?
Such a view would create what philosophers call an infinite regress: a series of explanations that require further explanations, thus ultimately explaining nothing. The philosopher Daniel Dennett sees explanations of this sort as committing what he calls “the homuncular fallacy,” after the Latin term homunculus, which means “little man.” The problem is that if we explain the nature of the mind by, in essence, positing another inner mind, then we haven’t really explained anything, for that inner mind itself needs to be explained. It should be obvious why positing a further inner mind inside the first inner mind enters us into an infinite regress and why this is fatal to any successful explanation of the phenomenon in question—mental representation, or intentionality.
Within the cognitive sciences, one popular way of understanding the nature of human thought is to see the mind as something like a computer. A computer is a device that takes certain inputs (representations), transforms those inputs in accordance with certain rules (the program), and then produces a certain output (behavior). The idea is that the computer metaphor gives us a satisfying way of explaining what human thought and reasoning is, and does so in a way that is compatible with physicalism. The idea, popular in philosophy and cognitive science since the 1970s, is that there is a kind of language of thought which brain states instantiate and which is similar to a natural language in that it possesses both a grammar and a semantics—except that the representations in the language of thought have original intentionality, whereas the representations in natural languages (like English and Spanish) have only derived intentionality. One central question in the philosophy of mind concerns how these “words” in the language of thought get their meaning. We have seen above that these representations can’t just be mental images, and there’s a further reason why mental images don’t work for the computer metaphor of the mind: mental images don’t have syntax the way language does. You can’t create meaningful sentences by putting together a series of pictures, because there are no rules for how those pictures create a holistic meaning out of the parts. For example, how could one represent the thought Leslie wants to go out in the rain but not without an umbrella with a picture (or pictures)? How do I represent someone’s desire with a picture? Or how do I represent the negation of something with only a picture? True, there are devices that we can use within pictures, such as the “no” symbol on no-smoking signs. But such symbols are no longer functioning purely as pictures, which represent in virtue of their similarity to what they depict.
There is no pictorial similarity between the purely logical notion “not” and any picture we could draw. So whatever the words of the language of thought (that is, mental representations) are, their meaning cannot derive from a pictorial similarity to what they represent, and we need some other account. Philosophers have given many such accounts, but most attempt to understand mental representation in terms of a causal relationship between objects in the world and representations. That is, whatever types of objects cause (or would cause) certain brain states to “light up,” so to speak, are what those brain states represent. So if there’s a particular brain state that lights up any time I see (or think about) a dog, then dogs are what that brain state represents. Delving into the nuances of contemporary theories of representation is beyond the scope of this chapter, but the important point is that the language of thought idea these theories support is supposed to be compatible with physicalism as well as with the computer analogy of explaining the mind. On this account, the “words” of the language of thought have original intentionality, and thinking is just the manipulation of these “words” using certain syntactic rules (the “program”) that are hard-wired into the brain (either innately or by learning) and which are akin to the grammar of a natural language.
There is a famous objection to the computer analogy of human thought that comes from the philosopher John Searle, who thinks that it shows that human thought and understanding cannot be reduced to the kind of thing that a computer can do. Searle’s thought experiment is called the Chinese Room. Imagine that there is a room with a man inside of it. What the man does is take slips of paper that are passed into the room via a slit. The slips of paper have Chinese characters written on them.
The room also contains a giant bookshelf with many different volumes of books. Those books contain rules that tell the man, for any string of symbols he might receive, which symbols to write down in response.
The man writes out the prescribed symbols and then passes the slip back through the slit in the wall. From the perspective of the man in the room, this is what he does. Nothing more, nothing less. The man inside the room doesn’t understand what these symbols mean; they are just meaningless squiggles on a page to him. He sees the difference between the different symbols merely in terms of their shapes. However, from outside the room the Chinese speakers who are writing questions on the slips of paper and passing them through the slot come to believe that the Chinese room (or something inside it) understands Chinese and is thus intelligent.
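The man’s procedure can be sketched in a few lines of code. This is a hedged illustration, not anything Searle specifies: the rule book is modeled here as a simple lookup table, and the Chinese phrases and replies in it are invented for the example. The point is only that producing a sensible-looking answer involves no step in which the symbols are understood.

```python
# A toy version of the room's rule book: a lookup table from input
# symbol strings to output symbol strings. (The entries are invented
# for illustration; a real program would need vastly more rules.)
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天下雨。",    # "How's the weather?" -> "It's raining today."
}

def the_room(slip: str) -> str:
    """Match the input purely by the shapes of its characters and copy
    out the prescribed response. No meaning is consulted anywhere."""
    return RULE_BOOK.get(slip, "对不起，我不明白。")  # default: "Sorry, I don't understand."
```

From outside, replies like these could look fluent; from inside, every step is shape-matching and copying.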
The Chinese room is essentially a scenario in which a computer program passes the Turing Test. In a paper published in 1950, Alan Turing proposed a test for how we should determine whether or not a machine can think. Basically, the test is whether or not the machine can make a human investigator believe that the machine is a human. The human investigator is able to ask the machine any questions they can think of (which Turing imagined would be conducted via typed responses on a keyboard). Imagine what some of the questions might be. Here is one such potential question one might ask:
Rotate a capital letter “D” 90 degrees counterclockwise and place it atop a capital letter “J.” What kind of weather does this make you think of?
A computer that could pass the Turing Test would be able to answer questions such as this and thus would make a human investigator believe that the computer was actually another human being. Turing thought that if a machine could do this, we should count that machine as having intelligence. The Chinese Room thought experiment is supposed to challenge Turing’s claim that something that can pass the Turing Test is thereby intelligent. The essence of a computer is that of a syntactic machine—a machine that takes symbols as inputs, manipulates symbols in accordance with a series of rules (the program), and gives the outputs that the rules dictate. Importantly, we can understand what syntactic machines do without having to say that they interpret or understand their inputs/outputs. In fact, a syntactic machine cannot possibly understand the symbols, because there’s nothing there to understand. For example, in the case of modern-day computers, the symbols being processed are strings of 1s and 0s, which are physically instantiated in the CPU of a computer as a series of on/off voltages (that is, transistors). Note that a series of voltages is no more inherently meaningful than a series of different fluttering patterns of a flag waving in the wind, or a series of waves hitting a beach, or a series of footsteps on a busy New York City subway platform. They are merely physical patterns, nothing more, nothing less. What a computer does, in essence, is “read” these inputs and give outputs in accordance with the program. This simple theoretical (mathematical) device is called a “Turing machine,” after Alan Turing. A calculator is an example of a simple Turing machine. In contrast, a modern-day computer is an example of what is called a “universal Turing machine”—universal because it can run any number of different programs that will allow it to compute all kinds of different outputs.
In contrast, a simple calculator is only running a few simple programs—ones that correspond to the different mathematical functions the calculator has (+, −, ×, ÷). The Chinese room has all the essential parts of a computer and functions exactly as a computer does: the man “reads” the input symbols and produces output symbols in accordance with what the program dictates. If the program is sufficiently well written, then the man’s responses (the room’s output) will be able to convince someone outside the room that the room (or something inside it) understands Chinese.
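The idea of a syntactic machine can be made concrete with a minimal (non-universal) Turing machine. The three-rule table below is a toy example of my own, not from the text: the machine flips every bit on its tape and halts. Every step is a lookup on a (state, symbol) pair; at no point does anything interpret what the symbols mean.

```python
# Transition table: (state, symbol read) -> (new state, symbol to write,
# head movement). "_" marks a blank tape cell.
RULES = {
    ("scan", "0"): ("scan", "1", 1),
    ("scan", "1"): ("scan", "0", 1),
    ("scan", "_"): ("halt", "_", 0),
}

def run(tape: str) -> str:
    """Run the machine until it halts: pure symbol shuffling."""
    cells, state, head = list(tape) + ["_"], "scan", 0
    while state != "halt":
        state, cells[head], move = RULES[(state, cells[head])]
        head += move
    return "".join(cells).rstrip("_")

print(run("0110"))  # -> 1001
```

Nothing in this loop "knows" that the symbols are bits, or that they are symbols at all; the same mechanics would run if we replaced 0 and 1 with any two shapes.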
But the whole point is that there is nothing inside the room that understands Chinese. The man in the room doesn’t understand Chinese—they are just meaningless symbols to him. The written volumes don’t understand Chinese either—how could they?—books don’t understand things. Furthermore, Searle argues that the understanding of Chinese doesn’t just magically emerge from the combination of all the parts of the Chinese room: if no one of the parts of the room has any understanding of Chinese, then neither does the whole room. Thus, the Chinese room thought experiment is supposed to be a counterexample to the Turing Test: the Chinese room passes the Turing Test, but the Chinese room doesn’t understand Chinese. Rather, it just acts as if it understands Chinese. Without understanding, there can be no thought. The Chinese room, impressive as it is for passing the Turing Test, lacks any understanding and therefore is not really thinking. Likewise, a computer cannot think, because a computer is merely a syntactic machine that does not understand the inputs or the outputs; from the perspective of the computer, the strings of 1s and 0s are just meaningless symbols. The people outside the Chinese room might ascribe thought and understanding of Chinese to the room, but there is neither thought nor understanding involved. Likewise, at some point in the future, someone may finally create a computer program that passes the Turing Test, and we might think that machine has thought and understanding, but the Chinese room is supposed to show that we would be wrong to think this. No merely syntactic machine could ever think, because no merely syntactic machine could ever understand. That is the point of the Chinese room thought experiment.
We could put this point in terms of the distinction between original vs. derived intentionality: no amount of derived intentionality will ever get you original intentionality. Computers have only derived intentionality and since genuine thought requires original intentionality, it follows that computers could never think. Here is a reconstructed version of the Chinese room argument:
- Computers are merely syntactic machines.
- Therefore, computers lack original intentionality. (from 1)
- Thought requires original intentionality.
- Therefore, computers cannot think. (from 2-3)
How should we assess the Chinese room argument? One thing to say is that Searle makes a lot of simplifying assumptions in his Chinese room scenario. For example, the philosopher Daniel Dennett suggests that in order to pass the Turing Test a computer would need something on the order of 100 billion lines of code. It would take the man inside the room many lifetimes to hand-simulate that code in the way we are invited to imagine. Searle thinks that these practical kinds of considerations can be dismissed—for example, we can just imagine that the man inside the room can operate faster than the speed of light. Searle thinks that these kinds of assumptions are not problematic, for why should mere speed of operation make any difference to the theoretical point he is trying to make—which is that the merely syntactic processing of a digital computer could not achieve understanding? Dennett, on the other hand, thinks that such simplifying assumptions should alert us that there is something fishy going on with the Chinese room thought experiment. If we were really, truly imagining a computer program that could pass the Turing Test, Dennett thinks, then it wouldn’t sound nearly as absurd to say that the computer had thought.
There’s a deeper objection to the Chinese room argument. This response is sometimes referred to as the “other minds reply.” The essence of the Chinese room rebuttal of the Turing Test involves, so to speak, looking at the guts of what is going on inside of a computer. When you look at it “up close,” it certainly doesn’t seem like all of that syntactic processing adds up to intelligent thought. However, one can make exactly the same point about the human brain (something that Searle believes is undoubtedly capable of thought): the functioning of neurons, or even of whole populations of neurons in neuronal spike trains, does not look like what we think of as intelligent thought. Far from it! But of course it doesn’t follow that human brains aren’t thinking! The problem is that in both cases we are looking at the wrong level of description. In order for us to be able to “see” the thought, we must be looking in the right place.
Zooming in and looking at the mechanics of the machines up close is not going to enable us to see the thought and intelligence. Rather, we have to zoom out to the level of behavior and observe the responses in their context. Thought isn’t something we can see up close; rather, thought is something that we attribute to something whose behavior is sufficiently intelligent. Dennett suggests the following cartoon as a reductio ad absurdum of the Chinese room argument:
In the cartoon, Dennett imagines someone going inside the Chinese room to see what is going on inside the room. Once inside, she sees the man responding to the questions of the Chinese speakers outside the room. The woman tells the man (perhaps someone she knows), “I didn’t know you knew Chinese!” In response, the man explains that he doesn’t and that he is just looking up the relevant strings of Chinese characters to write in response to the inputs he receives. The woman’s interpretation of this is: “I see! You use your understanding of English in order to fake understanding Chinese!” The man’s response is: “What makes you think I understand English?” The joke is that the woman’s evidence for thinking that the man inside the room understands English is his spoken behavior. This is exactly the same kind of evidence that the Chinese speakers have about the Chinese room. So if the evidence is good enough for the woman inside the room to say that the man understands English, why is the evidence of the Chinese speakers outside the room any different? We can make the problem even more acute. Suppose that we were to look inside the brain of the man inside the room. We would see all kinds of neural activity, and then we could say, “Hey, this doesn’t look like thought; it’s just bunches of neurons sending chemical messages back and forth, and those chemical signals have no inherent meaning.” Dennett’s point is that this response makes the same kind of mistake that Searle makes in supposing a computer can’t think: in both cases, we are focusing on the wrong level of detail. Neither the innards of the brain nor the innards of a computer looks like there’s thinking going on. Rather, thinking only emerges at the behavioral level; it only emerges when we are listening to what people are saying and, more generally, observing what they are doing. This is what is called the other minds reply to the Chinese room argument.
Interlude: Interpretationism and Representation
The other minds reply points us towards a radically different account of the nature of thought and representation. A common assumption in the philosophy of mind (and one that Searle also makes) is that thought (intentionality, representation) is something to be found within the inner workings of the thinking thing, whether we are talking about human minds or artificial minds. In contrast, on the account that Dennett defends, thought is not a phenomenon to be observed at the level of the inner workings of the machine. Rather, thought is something that we attribute to people in order to understand and predict their behaviors. To be sure, the brain is a complex mechanism that causes our intelligent behaviors (as well as our unintelligent ones), but to try to look inside the brain for some language-like representation system is to look in the wrong place. Representations aren’t something we will find in the brain; they are just something that we attribute to certain kinds of intelligent things (paradigmatically human beings) in order to better understand those beings and predict their behaviors. This view of the nature of representation is called interpretationism and can be seen as a kind of instrumentalism. Instrumentalists about representation believe that representations aren’t, in the end, real things.
Rather, they are useful fictions that we attribute in order to understand and predict certain behaviors. For example, if I am playing against the computer in a game of chess, I might explain the computer’s behavior by attributing certain thoughts to it such as, “The computer moved the pawn in front of the king because it thought that I would put the king in check with my bishop and it didn’t want to be in check.” I might also attribute thoughts to the computer in order to predict what it will do next: “Since the computer would rather lose its pawn than its rook, it will move the pawn in front of the king rather than the rook.” None of this requires that there be internal representations inside the computer that correspond to the linguistic representations we attribute. The fundamental insight about representation, according to interpretationism, is that just as we merely interpret computers as having internal representations (without being committed to the idea that they actually contain those representations internally), so too we merely interpret human beings as having internal representations (without being committed to whether or not they contain those internal representations). It is useful (for the purposes of explaining behavior) to interpret humans as having internal representations, even if they don’t actually have internal representations.
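The chess example can be sketched as code. The “engine” below is a hypothetical stand-in, not a real chess program: internally it only maximizes a number, yet an observer who attributes a desire to it (“it would rather lose its pawn than its rook”) predicts its move correctly. The intentional description earns its keep even though nothing inside the machine corresponds to it.

```python
PIECE_VALUES = {"pawn": 1, "rook": 5}

def engine_move(options):
    """The opaque mechanism: pick the option with the highest score.
    No sentence-like representation of 'wanting' exists in here."""
    return max(options, key=lambda opt: opt[1])

def stance_prediction(options):
    """The observer's intentional stance: 'it wants to keep its
    valuable pieces, so it will give up the cheaper one.'"""
    return min(options, key=lambda opt: PIECE_VALUES[opt[0]])

# Two options: sacrifice the pawn (better resulting position, score 4)
# or sacrifice the rook (worse position, score 0).
options = [("pawn", 4), ("rook", 0)]
assert engine_move(options) == stance_prediction(options) == ("pawn", 4)
```

The prediction succeeds without our being committed to any internal representation of pawns, rooks, or preferences inside the engine.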
Interpretationist accounts of representation raise deep questions about where meaning and intentionality reside, if not in the brain, but we will not be able to broach those questions here. Suffice it to say that the disagreement between Searle and Dennett regarding Searle’s Chinese room thought experiment traces back to what I would argue is the most fundamental rift within the philosophy of mind: the rift between the Cartesian view of the mind, on the one hand, and the behaviorist tradition, on the other. Searle’s view of the mind, specifically his notion of “original intentionality,” traces back to a Cartesian view of the mind. On this view, the mind contains something special—something that cannot be captured merely by “matter in motion” or by any kind of physical mechanism. The mind is sui generis and is set apart from the rest of nature. For Searle, meaning and understanding have to trace back to an “original” meaner or understander, and that understander cannot be a mindless mechanism (which is why Searle thinks that computers can’t think). Although Searle himself rejects Descartes’s conclusion that the mind is nonphysical, he retains the Cartesian idea that thinking is carried out by a special, quasi-magical kind of substance. Searle thinks that this substance is the brain, an object that he thinks contains special causal powers and that cannot be replicated or copied in any other kind of physical object (for example, an artificial brain made out of metal and silicon).
Dennett’s behaviorist view of the mind sees the mind as nothing other than a complex physical mechanism that churns out intelligent behaviors, which we then classify using a special mental vocabulary—the vocabulary of “minds,” “thoughts,” “representations,” and “intentionality.” The puzzle for Dennett’s behaviorist view is: How can there be meaning and understanding without any original meaner/understander? How can there be only derived intentionality and no original intentionality?
Consciousness and the mind
Interpretationism sees the mind as a certain kind of useful fiction: we attribute representational states (thoughts) to people in virtue of their intelligent behavior, and we do so in order to explain and predict that behavior. The causes of one’s intelligent behavior are real, but the representational states that we attribute need not map neatly onto any particular brain states. Thus, there need not be any particular brain state that represents the content, “Britney Spears is a washed-up pop star,” for example.
But there is another aspect of our mental lives that seems more difficult to explain away in the way interpretationism explains away representation and intentionality. This aspect of our mind is first-person conscious experience. To borrow a term from Thomas Nagel, conscious experience refers to the “what it’s like” of our first-person experience of the world. For example, I am sitting here at my table with a blue thermos filled with coffee. The coffee has a distinctive, qualitative smell which would be difficult to describe to someone who has never smelled it before. Likewise, the blue of the thermos has a distinctive visual quality—a “what it’s like”—that is different from what it’s like to smell the coffee. These experiences—the smell of the coffee, the look of the blue—are aspects of my conscious experience, and they have a distinctive qualitative dimension—there is something it’s like to smell coffee and to see blue. This qualitative character seems in some sense to be ineffable—that is, it would be very difficult if not impossible to convey what it is like to someone who had never smelled coffee or to someone who had never seen the color blue. Imagine someone who was colorblind. How would you explain what blue was to them? Sure, you could tell them that it was the color of the ocean, but that would not convey to them the particular quality that you (someone who is not colorblind) experience when you look at a brilliant blue ocean or lake. Philosophers have coined a term for the qualitative aspects of our conscious experience: qualia. It seems that our conscious experience is real and cannot be explained away in the way that representation can. Maybe there needn’t be anything similar to sentences in my brain, but how could there not be colors, smells, feels? The feeling of stubbing your toe and the feeling of an orgasm are very different feels (thank goodness), but it seems that they are both very much real things.
That is, if neuroscientists were able to explain exactly how your brain causes you to respond to stubbing your toe, such an explanation would seem to leave something out if it neglected the feeling of the pain. From our first-person perspective, our experiences seem to be the most real things there are, so it doesn’t seem that we could explain their reality away.
Physicalists need not disagree that conscious experiences are real; they would simply claim that these experiences are ultimately just physical states of our brain. Although that might seem to be a plausible position, there are well-known problems with claiming that conscious experiences are nothing other than physical states of our brain. The problem is that it does not seem that our conscious experience could just reduce to brain states—that is, to the neurons in our brain sending lots and lots of chemical messages back and forth simultaneously. The 17th-century philosopher Gottfried Wilhelm Leibniz (1646–1716) was no brain scientist (brain science would take another 250 years to develop), but he put forward a famous objection to the idea that consciousness could be reduced to any kind of mechanism (and the brain is one giant, complex mechanism). Leibniz’s objection is sometimes referred to as “Leibniz’s mill.” In 1714, Leibniz wrote:
Moreover, we must confess that perception , and what depends on it, is inexplicable in terms of mechanical reasons , that is, through shapes and motions. If we imagine that there is a machine whose structure makes it think, sense, and have perceptions, we could conceive it enlarged, keeping the same proportions, so that we could enter into it, as one enters into a mill. Assuming that, when inspecting its interior, we will only find parts that push one another, and we will never find anything to explain a perception ( Monadology , section 17).
Leibniz uses a famous form of argument here called reductio ad absurdum: he assumes, for the sake of the argument, that thinking is a mechanical process and then shows that this assumption leads to an absurdity, from which he concludes that thinking cannot be a mechanical process. We could put Leibniz’s exact same point into the language of 21st-century neuroscience: imagine that you could enlarge the brain (in a sense, we can already do this with the help of the tools of modern neuroscience). If we were to enter into the brain (perhaps by shrinking ourselves down), we would see all kinds of physical processes going on (billions of neurons sending chemical signals back and forth). However, to observe all of these processes would not be to observe the conscious experiences of the person whose brain we were observing. That means that conscious experiences cannot reduce to physical brain mechanics. The simple point being made is that in conscious experience there exist all kinds of qualitative properties (qualia)—red, blue, the smell of coffee, the feeling of getting your back scratched—but none of these properties would be observed in observing someone’s brain. All you will find on the inside is “parts that push one another” and never the properties that appear to us in first-person conscious experience.
The philosopher David Chalmers has coined a term for the problem that Leibniz was getting at. He calls it the hard problem of consciousness and contrasts it with easy problems of consciousness. The “easy” problems of mind science involve questions about how the brain carries out functions that enable certain kinds of behaviors—functions such as discriminating stimuli, integrating information, and using the information to control behavior. These problems are far from easy in any normal sense—in fact, they are some of the most difficult problems in science. Consider, for example, how speech production occurs. How is it that I decide what exactly to say in response to a criticism someone has just made of me? The physical processes involved are numerous and include the sound waves of the person’s remark hitting my eardrum, those physical signals being carried to the brain, that information being integrated with the rest of my knowledge and, eventually, my motor cortex sending certain signals to my vocal cords that then produce the sounds, “I think you’re misunderstanding what I mean when I said…” or whatever I end up saying. We are still a long way from understanding how this process works, but it seems like the kind of problem that can be solved by doing more of the same kinds of science that we’ve been doing. In short, solving easy problems involves understanding the complex causal mechanisms of the brain. In contrast, the hard problem is the problem of explaining how physical processes in the brain give rise to first-person conscious experience. The hard problem does not seem to be the kind of problem that could be solved by simply investigating in more detail the complex causal mechanism that is the brain. Rather, it seems to be a conceptual problem: how could it be that the colors, the sounds, and the smells that constitute our first-person conscious experience of the world are nothing other than neurons firing electrical-chemical signals back and forth?
As Leibniz pointed out some three hundred years ago, the one seems to be a radically different kind of thing than the other.
In fact, it seems that a being could have all of the functioning of a normal human being and yet lack any conscious experience. There is a term for such a being: a philosophical zombie. Philosophical zombies are by definition beings that are functionally indistinguishable from you or me but that lack any conscious experience. If we assume that it’s the functioning of the brain that causes all of our intelligent behaviors, then it isn’t clear what conscious experience could possibly add to our repertoire of intelligent behaviors. Philosophical zombies can help illustrate the hard problem of consciousness, since if such creatures are theoretically possible, then consciousness doesn’t seem to reduce to any kind of brain functioning. By hypothesis, the brain of the normal human being and the brain of the philosophical zombie are identical; it’s just that the latter lacks consciousness whereas the former doesn’t. If this is possible, then consciousness does indeed seem to be quite a mysterious thing for the physicalist.
There are two other famous thought experiments that illustrate the hard problem of consciousness: Frank Jackson’s knowledge argument and Thomas Nagel’s what it’s like to be a bat argument.
Nagel’s argument against physicalism turns on a colorful example: Could we (human beings) imagine what it would be like to be a bat? Although bats are still mammals, and thus not so different than human beings phylogenetically, their experience would seem to be radically different than ours. Bats echolocate around in the darkness, they eat bugs at night, and they sleep while hanging upside down. Human beings could try to do all these things, but even if they did, they would arguably not be experiencing these activities like a bat does. And yet it seems pretty clear that bats (being mammals) have some kind of subjective experience of the world—a “what it’s like” to be a bat. The problem is that although we can figure out all kinds of physical facts about bats—how they echolocate, how they catch insects in the dark, and so on—we cannot ever know what it’s like to be a bat. For example, although we could understand enough scientifically to be able to send signals to the bat that would trick it into trying to land on what it perceived as a ledge, we could not know what it’s like for the bat to perceive an object as a ledge. That is, we could understand the causal mechanisms that make the bat do what the bat does , but that would not help us to answer the question of what it’s like to experience the world the way a bat experiences the world . Nagel notes that it is characteristic of science to study physical facts (such as how the brain works) that can be understood in a third-person kind of way. That is, anyone with the relevant training can understand a scientific fact. If you studied the physics of echolocation and also a lot of neuroscience of bat brains, you would be able to understand how a bat does what a bat does. But this understanding would seem to bring you no closer to what it’s like to be a bat—that is, to the first-person perspective of the bat. We can refer to the facts revealed in first-person conscious experience as phenomenal facts . 
Phenomenal facts are things like what it’s like to see blue or smell coffee or experience sexual pleasure…or echolocate around the world in total darkness. Phenomenal facts are qualia, to use our earlier term. Nagel’s point is that if the phenomenal facts of conscious experience are only accessible from a first-person perspective and scientific facts are always third-person, then it follows that phenomenal facts cannot be grasped scientifically. Here is a reconstruction of Nagel’s argument:
- The phenomenal facts presented in conscious experience are knowable only from the first-person (subjective) perspective.
- Physical facts can always be known from a third-person (objective) perspective.
- Nothing that is knowable only from the first-person perspective could be the same as (reduce to) something that is knowable from the third-person perspective.
- Therefore, the phenomenal facts of conscious experience are not the same as physical facts about the brain. (from 1-3)
- Therefore, physicalism is false. (from 4)
Nagel uses an interesting analogy to explain what’s wrong with physicalism—the claim that conscious states are nothing other than brain states. He imagines an ancient Greek saying that “matter is energy.” It turns out that this statement is true (Einstein’s famous E = mc²), but an ancient Greek person could not possibly have understood how it could be true. The problem is that the ancient Greek person could not have had the conceptual resources needed to understand what this statement means. Nagel claims that we are in the same position today when we say that something like “conscious states are brain states” is true. It might be true; we just cannot yet understand what that could possibly mean, because we don’t have the conceptual resources for understanding how it could be true. This conceptual problem is what Nagel is trying to make clear in the above argument. It is another way of getting at the hard problem of consciousness.
Frank Jackson’s famous knowledge argument makes a similar point. Jackson imagines a super-scientist, whom he dubs “Mary,” who knows all the physical facts about color vision. Not only is she the world’s expert on color vision; she knows all there is to know about color vision. She can explain how certain wavelengths of light strike the cones in the retina and send signals via the optic nerve to the brain. She understands how the brain interprets these signals and eventually communicates with the motor cortex, which sends signals to produce speech such as, “that rose is a brilliant shade of red.” Mary understands all the causal processes of the brain that are connected to color vision. However, Mary understands this without ever having experienced any color. Jackson imagines that this is because she has been kept in a black and white room and has only ever had access to black and white things. So the books she reads and the things she investigates of the outside world (via a black and white monitor in her black and white room) are only ever black and white, never any other color. Now what will happen when Mary is released from the room and sees color for the first time? Suppose she is released and sees a red rose. What will she say? Jackson’s claim is that Mary will be surprised because she will learn something new: she will learn what it’s like to see red. But by hypothesis, Mary already knew all the physical facts of color vision. Thus, it follows that this new phenomenal fact that Mary learns (specifically, what it’s like to see red) is not the same as the physical facts about the brain (which by hypothesis she already knows).
- Mary knows all the physical facts about color vision.
- When Mary is released from the room and sees red for the first time, she learns something new—the phenomenal fact of what it’s like to see red.
- Therefore, phenomenal facts are not physical facts. (from 1-2)
- Therefore, physicalism is false. (from 3)
The upshot of both Nagel’s and Jackson’s arguments is that the phenomenal facts of conscious experience—qualia—are not reducible to brain states. This is the hard problem of consciousness, and it is the mind-body problem that arises in particular for physicalism. The hard problem is the reason why physicalists can’t simply claim a victory over dualism by invoking Ockham’s razor. Ockham’s razor assumes that two competing explanations explain all the facts equally well but that one does so in a simpler way than the other. The problem is that if physicalism cannot explain the nature of consciousness—in particular, how brain states give rise to conscious experience—then there is something that physicalism cannot explain, and therefore physicalists cannot so simply invoke Ockham’s razor.
Two responses to the hard problem
We will consider two contemporary responses to the hard problem: David Chalmers’s panpsychism and Daniel Dennett’s eliminativism. Although both Chalmers and Dennett work within a tradition of philosophy that privileges scientific explanation and is broadly physicalist, they have two radically different ways of addressing the hard problem. Chalmers’s response accepts that consciousness is real and that solving the hard problem will require quite a radical change in how we conceptualize the world. On the other hand, Dennett’s response attempts to argue that the hard problem isn’t really a problem because it rests on a misunderstanding of the nature of consciousness. For Dennett, consciousness is a kind of illusion and isn’t ultimately real, whereas for Chalmers consciousness is the most real thing we know. The disagreement between these two philosophers returns us, once again, to the most fundamental divide within the philosophy of mind: that between Cartesians, on the one hand, and behaviorists, on the other.
To understand Chalmers’s response to the hard problem, we must first understand what he means by a “basic entity.” A basic entity is one that science posits but that cannot be further analyzed in terms of any other kind of entity. Can you think of what kinds of entities would fit this description? Or which science you would look to in order to find basic entities? If you’re thinking physics, then you’re correct. Think of an atom. Originally, atoms were thought of as the most basic building blocks of the universe; the term “atom” literally means “uncuttable” (from the Greek “a” = not + “tomos” = cut). So atoms were originally thought of as basic entities because there was nothing smaller. As we now know, this turned out to be incorrect because there were even smaller particles such as electrons, protons, quarks, and so on. But the hope is that physics will eventually discover those basic entities that cannot be reduced to anything further. Mental states are not typically thought of as basic entities because they are studied by higher-order sciences—psychology and neuroscience. So mental states, such as my perception of the red rose, are not basic entities. For example, brain states are ultimately analyzable in terms of brain chemistry, and chemistry, in turn, is ultimately analyzable in terms of physics (not that anyone would care to carry out that analysis!). But Chalmers’s radical claim is that consciousness is a basic entity. That is, the qualia—what it’s like to see red, smell coffee, and so on—that constitute our first-person conscious experience of the world cannot be further analyzed in terms of any other thing. They are what they are and nothing else. This doesn’t mean that our conscious experiences don’t correlate with the existence of certain brain states, according to Chalmers. Perhaps my experience of the smell of coffee correlates with a certain kind of brain state.
But Chalmers’s point is that that correlation is basic; the coffee-smell qualia are not the same thing as the brain state with which they might be correlated. Rather, the brain state and the conscious experience are just two radically different things that happen to be correlated. Whereas brain states reduce to further, more basic, entities, conscious states don’t. As Chalmers sees it, the science of consciousness should proceed by studying these correlations. We might discover all kinds of things about the nature of consciousness by treating the science of consciousness as irreducibly correlational. Chalmers suggests as an orienting principle the idea that consciousness emerges as a function of the “informational integration” of an organism (including artificially intelligent “organisms”). What is informational integration? In short, informational integration refers to the complexity of the organism’s control mechanism—its “brain.” Simple organisms have very few inputs from the environment, and their “brains” manipulate that information in fairly simple ways. Take an ant, for example. We pretty much understand exactly how ants work, and as far as animals go, they are pretty simple. We can basically already duplicate the level of intelligence of an ant with machines that we can build. So the informational integration of an ant’s brain is pretty low. A thermostat has some level of informational integration, too. For example, it takes in information about the ambient temperature of a room and then sends a signal to turn the furnace either on or off depending on the temperature reading. That is a very simple behavior, and the informational integration inside the “brain” of a thermostat is very simple. Chalmers’s idea is that complex consciousness like ours emerges when the informational integration is high—that is, when we are dealing with a very complex brain. The less complex the brain, the less rich the conscious experience.
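Just how thin a thermostat’s “mental life” is can be made vivid with a toy sketch. This is purely illustrative (the function name and the set point of 20 degrees are invented for the example): the device’s entire informational repertoire is one input, one comparison, and one output.

```python
# A toy model of a thermostat's entire "informational integration":
# one piece of input (the ambient temperature), one rule, one output signal.
def thermostat(ambient_temp_c: float, set_point_c: float = 20.0) -> str:
    """Return the signal sent to the furnace, given one reading."""
    return "furnace_on" if ambient_temp_c < set_point_c else "furnace_off"

print(thermostat(17.5))  # furnace_on
print(thermostat(22.0))  # furnace_off
```

Compare this single if/else with the billions of interconnected neurons in a human brain, and the intuition behind grading conscious experience by complexity of information processing becomes easier to see.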
Here is a law that Chalmers suggests could orient the scientific study of consciousness: [Graph: the complexity of conscious experience plotted as an increasing function of informational integration.]
This graph just says that as informational integration increases, so does the complexity of the associated conscious experience. Again, the conscious experience doesn’t reduce to informational integration, since that would only run headlong into the hard problem—a problem that Chalmers thinks is unsolvable.
The graph also says something else. As drawn, it looks like even information-processing systems whose informational integration is low (for example, a thermostat or a tree) also have some non-negligible level of conscious experience. That is a strange idea; no one really thinks that a thermostat is conscious, and the idea that plants might have some level of conscious experience will seem strange to most. This idea is sometimes referred to as panpsychism (“pan” = all, “psyche” = mind)—there is “mind” distributed throughout everything in the world. Panpsychism is a radical departure from traditional Western views of the mind, which see minds as the purview of animals and, on some views, of human beings alone. Chalmers’s panpsychism still draws a line between objects that process information (things like thermostats, sunflowers, and so on) and those that don’t (such as rocks), but it is still quite a radical departure from traditional Western views. It is not, however, a radical departure from all sorts of older, prescientific and indigenous views of the natural world, according to which everything in the natural world, including plants and streams, possesses some sort of spirit—a mind of some sort. In any case, Chalmers thinks that there are other interpretations of his view that don’t require the move to panpsychism. For example, perhaps conscious experience only emerges once information processing reaches a certain level of complexity. This interpretation would be more consistent with traditional Western views of the mind in the sense that one could specify that only organisms with a very complex information-processing system, such as the human brain, possess conscious experience. (Graphically, based on the above graph, this would mean the lowest level of conscious experience wouldn’t start until much higher up the y-axis.)
Daniel Dennett’s response to the hard problem fundamentally differs from Chalmers’s. Whereas Chalmers posits qualia as real aspects of our conscious experience, Dennett denies that qualia exist. Rather, Dennett thinks that consciousness is a kind of illusion foisted upon us by our brain. Dennett’s perennial favorite example for beginning to illustrate the illusion of consciousness concerns our visual field. From our perspective, the world presented to us visually looks to be unified in color and not to possess any “holes.” However, we know that this is not actually the case. The cones in the retina are scarce at the periphery and, as a result, you are not actually seeing the colors of objects at the periphery of your visual field. (You can test this by having someone hold up a new object on one side of your visual field and moving it back and forth until you are able to see the motion. Then try to guess the color of the object. If you do it correctly, you’ll be able to see the object’s motion, but you won’t have a clue as to its color.) Although it seems to us as if there is a visual field that is wholly colored, it isn’t really that way. This is the illusion of consciousness that Dennett is trying to get us to acknowledge; things are not really as they appear. There’s another aspect of this illusion of our visual field: our blind spot. The location where the optic nerve exits the retina does not convey any visual information since there are no photoreceptors there; this is known as the blind spot. There are all kinds of illustrations to reveal your blind spot. However, the important point that Dennett wants to make is that from our first-person conscious experience it never appears that there is any gap in our picture of the world. And yet we know that there is. This again is an illustration of what Dennett means by the illusion of conscious experience.
Dennett does more than simply give fun examples that illustrate the strangeness of consciousness; he has also famously attacked the idea that there are qualia. Recall that qualia are the purely qualitative aspects of our conscious experiences—for example, the smell of coffee, the feeling of a painful sunburn (as opposed to the pain of a headache), or the feeling of an orgasm. Qualia are what are supposed to create problems for the physicalist, since it doesn’t seem that purely qualitative feels could be nothing more than the buzzing of neurons in the brain. Since qualia are what create the trouble for physicalism, and since Dennett is a physicalist, one can understand why Dennett targets qualia and tries to convince us that they don’t exist.
If you’re going to argue against something’s existence, the best way to do so is first to define precisely what it is you are trying to deny. Then you argue that, as defined, such things cannot exist. This is exactly what Dennett does with qualia. He defines qualia as the qualitative aspects of our first-person conscious experience that are (a) irreducibly first-person (meaning that they are inaccessible to third-person, objective investigation) and (b) intrinsic properties of one’s conscious experience (meaning that they are what they are independent of anything else). Dennett argues that these two properties (irreducibly first-person and intrinsic) are in tension with each other—that is, there can’t be an entity that possesses both of these properties. But since both of these properties are part of the definition of qualia, it follows that qualia can’t exist—they’re like a square circle.
Change blindness is a widely studied phenomenon in cognitive psychology. Some of the demonstrations of it are quite amazing and have made it into the popular media many times over the last couple of decades. One of the most popular research paradigms used to study change blindness is called the flicker paradigm. In the flicker paradigm, two images that are the same except for some fairly obvious difference are exchanged in fairly rapid succession, with a “mask” (a black or white screen) between them. What is surprising is that it is very difficult to see even fairly large differences between the two images. So let’s suppose that you are viewing these flickering images and trying to figure out what the difference between them is, but that you haven’t yet figured it out. As Dennett notes, there are of course all kinds of changes going on in your brain as these images flicker. For example, the photoreceptors are changing with the changing images. In the case of a patch of color that differs between the two images, the cones in your retina are conveying different information for each image. Dennett asks: “Before you noticed the changing color, were your color qualia changing for that region?” The problem is that any way you answer this question spells defeat for the defender of qualia, because either way they have to give up (a) the irreducible subjectivity of qualia or (b) their intrinsicness. So suppose the answer to Dennett’s question is that your qualia were changing. In that case, you do not have any special or privileged access to your qualia, in which case they aren’t irreducibly subjective, since subjective phenomena are by definition something we alone have access to. So it seems that the defender of qualia should reject this answer. Then suppose, on the other hand, that your qualia weren’t changing. In that case, your qualia can’t change unless you notice them changing.
But that makes it look like qualia aren’t really intrinsic after all, since their reality is constituted by whether you notice them or not. And “noticings” are relational properties, not intrinsic properties. Furthermore, Dennett notes that if the existence of qualia depends on one’s ability to notice or report them, then even philosophical zombies would have qualia, since noticings/reports are behavioral or functional properties and philosophical zombies would have these by definition. So it seems that the qualia defender should reject this answer as well. But in that case, there’s no plausible answer that the qualia defender can give to Dennett’s question. Dennett’s argument has the form of a classic dilemma:
- If your qualia were changing before you noticed, then you lack privileged access to them, so they are not irreducibly subjective.
- If your qualia could not change until you noticed them, then their reality depends on your noticings, so they are not intrinsic.
- Either way, nothing can satisfy both parts of the definition of qualia.
Dennett thinks that the reason there is no good answer to the question is that the concept of qualia is actually deeply confused and should be rejected. But if we reject the existence of qualia, it seems that we reject the existence of the very thing that was supposed to have caused problems for physicalism in the first place. Qualia are a kind of illusion, and once we realize this, the only task will be to explain why we have this illusion rather than trying to accommodate qualia in our metaphysical view of the world. The latter is Chalmers’s approach, whereas the former is Dennett’s.
- True or false: One popular way of thinking about how the mind works is by analogy with how a computer works: the brain is a complex syntactic engine that uses its own kind of language—a language that has original intentionality.
- True or false: One good way of explaining how the mind understands things is to posit a little man inside the head that does the understanding.
- True or false: The mind-body problem is exactly the same problem for both physicalism and dualism.
- True or false: John Searle agrees with Alan Turing that the relevant test for whether a machine can think is the test of whether or not the machine behaves in a way that convinces us it is intelligent.
- True or false: One good reply to the Chinese Room argument is just to note that we have exactly the same behavioral evidence that other people have minds as we would of a machine that passed the Turing Test.
- True or false: According to interpretationism, mental representations are things we attribute to others in order to help us predict and explain their behaviors, and therefore it follows that mental representations must be real.
- True or false: This chapter considers two different aspects of our mental lives: mental representation (or intentionality) and consciousness. But the two really reduce to the exact same philosophical problem of mind.
- True or false: The hard problem is the problem of understanding how the brain causes intelligent behavior.
- True or false: The knowledge argument is an argument against physicalism.
- True or false: Dennett’s solution to the hard problem turns out to be the same as Chalmers’s solution.
For deeper thought
- How does the hard problem differ from the easy problems of brain science?
- If the Turing Test isn’t the best test for determining whether a machine is thinking, can you think of a better test?
- According to physics, nothing in the world is really red in the way we perceive it. Rather, redness is just a certain wavelength of light that our senses interpret in a particular way (some other creature’s sensory system might interpret that same physical phenomenon in a very different way). By the same token, redness does not exist in the brain: if you are seeing red then I cannot also see the red by looking at your brain. In this case, where is the redness if it isn’t in the world and it also isn’t in the brain? And does this prove that redness is not a physical thing, thus vindicating dualism? Why or why not?
- Could someone be in pain and yet not know it? If so, how would we be able to tell they were in pain? If not, then aren’t pain qualia real? And so wouldn’t that prove that qualia are real (if pain is)?
- According to Chalmers’s view, is it theoretically possible for a machine to be conscious? Why or why not?
- Readers who are familiar with the metaphysics of minds will notice that I have left out an important option: monism, the idea that there is ultimately only one kind of thing in the world and thus the mental and the physical do not fundamentally differ. Physicalism is one version of monism, but there are many others. Bishop George Berkeley’s idealism is a kind of monism, as is the panpsychism of Leibniz and Spinoza. I have chosen to focus on physicalism for pedagogical reasons: because of its prominence in contemporary philosophy of mind, because of its intuitive plausibility to those living in an age of neuroscience, and because the nuances of the arguments for other forms of monism are beyond the scope of this introductory treatment of the problem. ↵
- We could actually retell the Chinese room thought experiment in such a way that what the man inside the room was manipulating was strings of 1s and 0s (what is called “binary code”). The point remains the same in either case: whether the program is defined over Chinese characters or strings of 1s and 0s, from the perspective of the room, none of it has any meaning and there’s no understanding required in giving the appropriate outputs. ↵
- Nothing has yet, claims to the contrary notwithstanding. ↵
- Daniel Dennett, Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. MIT Press. 2006. ↵
Introduction to Philosophy Copyright © by Matthew Van Cleave is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.
Descartes and the Discovery of the Mind-Body Problem
Consider the human body, with everything in it, including internal and external organs and parts — the stomach, nerves and brain, arms, legs, eyes, and all the rest. Even with all this equipment, especially the sensory organs, it is surprising that we can consciously perceive things in the world that are far away from us. For example, I can open my eyes in the morning and see a cup of coffee waiting for me on the bedside table. There it is, a foot away, and I am not touching it, yet somehow it is making itself manifest to me. How does it happen that I see it? How does the visual system convey to my awareness or mind the image of the cup of coffee?
The answer is not particularly simple. Very roughly, the physical story is that light enters my eyes from the cup of coffee, and this light impinges on the two retinas at the backs of the eyes. Then, as we have learned from physiological science, the two retinas send electrical signals down the optic nerves and past the optic chiasm. These signals are conveyed to the so-called visual cortex at the back of the brain. And then there is a sort of miracle. The visual cortex becomes active, and I see the coffee cup. I am conscious of the cup, we might even say, though it is not clear what this means and how it differs from saying that I see the cup.
One minute there are just neurons firing away, and no image of the cup of coffee. The next, there it is; I see the cup of coffee, a foot away. How did my neurons contact me or my mind or consciousness, and stamp there the image of the cup of coffee for me?
It’s a mystery. That mystery is the mind-body problem.
Our mind-body problem is not just a difficulty about how the mind and body are related and how they affect one another. It is also a difficulty about how they can be related and how they can affect one another. Their characteristic properties are very different, like oil and water, which simply won’t mix, given what they are.
There is a very common view which states that the French philosopher René Descartes discovered, or invented, this problem in the 17th century. According to Descartes, matter is essentially spatial, and it has the characteristic properties of linear dimensionality. Things in space have a position, at least, and a height, a depth, and a length, or one or more of these. Mental entities, on the other hand, do not have these characteristics. We cannot say that a mind is a two-by-two-by-two-inch cube or a sphere with a two-inch radius, for example, located in a position in space inside the skull. This is not because it has some other shape in space, but because it is not characterized by space at all.
What is characteristic of a mind, Descartes claims, is that it is conscious , not that it has shape or consists of physical matter. Unlike the brain, which has physical characteristics and occupies space, it does not seem to make sense to attach spatial descriptions to it. In short, our bodies are certainly in space, and our minds are not, in the very straightforward sense that the assignation of linear dimensions and locations to them or to their contents and activities is unintelligible. That this straightforward test of physicality has survived all the philosophical changes of opinion since Descartes, almost unscathed, is remarkable.
This issue aroused considerable interest following the publication of Descartes’s 1641 treatise “Meditations on First Philosophy,” the first edition of which included both Objections to Descartes, written by a group of distinguished contemporaries, and the philosopher’s own Replies. Though we do find in the “Meditations” itself the distinction between mind and body, drawn very sharply by Descartes, in fact he makes no mention of our mind-body problem. Descartes is untroubled by the fact that, as he has described them, mind and matter are very different: one is spatial and the other not, and therefore, it would seem, one cannot act upon the other. Descartes himself writes in his Reply to one of the Objections:
The whole problem contained in such questions arises simply from a supposition that is false and cannot in any way be proved, namely that, if the soul and the body are two substances whose nature is different, this prevents them from being able to act on each other.
Descartes is surely right about this. The “nature” of a baked Alaska pudding, for instance, is very different from that of a human being, since one is a pudding and the other is a human being — but the two can “act on each other” without difficulty, for example when the human being consumes the baked Alaska pudding and the baked Alaska in return gives the human being a stomachache.
The difficulty, however, is not merely that mind and body are different. It is that they are different in such a way that their interaction is impossible because it involves a contradiction. It is the nature of bodies to be in space, and the nature of minds not to be in space, Descartes claims. For the two to interact, what is not in space must act on what is in space. Action on a body takes place at a position in space, however, where the body is. Apparently Descartes did not see this problem. It was, however, clearly stated by two of his critics, the philosophers Princess Elisabeth of Bohemia and Pierre Gassendi. They pointed out that if the soul is to affect the body, it must make contact with the body, and to do that it must be in space and have extension. In that case, the soul is physical, by Descartes’s own criterion.
In a letter dated May 1643, Princess Elisabeth wrote to Descartes,
I beg you to tell me how the human soul can determine the movement of the animal spirits in the body so as to perform voluntary acts—being as it is merely a conscious substance. For the determination of the movement seems always to come about from the moving body’s being propelled—to depend on the kind of impulse it gets from what sets it in motion, or again, on the nature and shape of this latter thing’s surface. Now the first two conditions involve contact, and the third involves that the impelling [thing] has extension; but you utterly exclude extension from your notion of soul, and contact seems to me incompatible with a thing’s being immaterial.
Propulsion and “the kind of impulse” that sets the body in motion require contact, and “the nature and shape” of the surface of the site at which contact is made with the body require extension. We need two further clarifications to grasp this passage.
The first is that when Princess Elisabeth and Descartes mention “animal spirits” (the phrase is from the ancient Greek physician and philosopher Galen) they are writing about something that plays roughly the role of signals in the nerve fibers of modern physiology. For Descartes, the animal spirits were not spirits in the sense of ghostly apparitions, but part of a theory that claimed that muscles were moved by inflation with air, the so-called balloonist theory. The animal spirits were fine streams of air that inflated the muscles. (“Animal” does not mean the beasts here, but is an adjective derived from “anima,” the soul.)
The second clarification is that when Princess Elisabeth writes that “you utterly exclude extension from your notion of soul,” she is referring to the fact that Descartes defines mind and matter in such a way that the two are mutually exclusive. Mind is consciousness, which has no extension or spatial dimension, and matter is not conscious, since it is completely defined by its spatial dimensions and location. Since mind lacks a location and spatial dimensions, Elisabeth is arguing, it cannot make contact with matter. Here we have the mind-body problem going at full throttle.
Descartes himself did not yet have the mind-body problem; he had something that amounted to a solution to the problem. It was his critics who discovered the problem, right in Descartes’s solution to it, although it is also true that it was almost forced on them by Descartes’s sharp distinction between mind and body. The distinction involved the defining characteristics or “principal attributes,” as he called them, of mind and body, which are consciousness and extension.
Though Descartes was no doubt right that very different kinds of things can interact with one another, he was not right in his account of how such different things as mind and body do in fact interact. His proposal, in “The Passions of the Soul,” his final philosophical treatise, was that they interact through the pineal gland, which is, he writes, “the principal seat of the soul” and is moved this way and that by the soul so as to move the animal spirits or streams of air from the sacs next to it. He had his reasons for choosing this organ, as the pineal gland is small, light, not bilaterally doubled, and centrally located. Still, the whole idea is a nonstarter, because the pineal gland is as physical as any other part of the body. If there is a problem about how the mind can act on the body, the same problem will exist about how the mind can act on the pineal gland, even if there is a good story to tell about the hydraulics of the “pneumatic” (or nervous) system.
We have inherited the sharp distinction between mind and body, though not exactly in Descartes’s form, but we have not inherited Descartes’s solution to the mind-body problem. So we are left with the problem, minus a solution. We see that the experiences we have, such as experiences of color, are indeed very different from the electromagnetic radiation that ultimately produces them, or from the activity of the neurons in the brain. We are bound to wonder how the uncolored radiation can produce the color, even if its effects can be followed as far as the neurons in the visual cortex. In other words, we make a sharp distinction between physics and physiology on the one hand, and psychology on the other, without a principled way to connect them. Physics consists of a set of concepts that includes mass, velocity, electron, wave, and so on, but does not include the concepts red, yellow, black, and the like. Physiology includes the concepts neuron, glial cell, visual cortex, and so on, but does not include the concept of color. In the framework of current scientific theory, “red” is a psychological term, not a physical one. Then our problem can be very generally described as the difficulty of describing the relationship between the physical and the psychological, since, as Princess Elisabeth and Gassendi realized, they possess no common relating terms.
Was there really no mind-body problem before Descartes and his debate with his critics in 1641? Of course, long before Descartes, philosophers and religious thinkers had spoken about the body and the mind or soul, and their relationship. Plato, for example, wrote a fascinating dialogue, the Phaedo, which contains arguments for the survival of the soul after death, and for its immortality. Yet the exact sense in which the soul or mind is able to be “in” the body, and also to leave it, is apparently not something that presented itself to Plato as a problem in its own right. His interest is in the fact that the soul survives death, not how, or in what sense it can be in the body. The same is true of religious thinkers. Their concern is for the human being, and perhaps for the welfare of the body, but mainly for the welfare and future of the human soul. They do not formulate a problem with the technical precision that was forced on Princess Elisabeth and Gassendi by Descartes’s neatly formulated dualism.
Something important clearly had changed in our intellectual orientation during the mid-17th century. Mechanical explanations had become the order of the day, such as Descartes’s balloonist explanation of the nervous system, and these explanations left unanswered the question of what should be said about the human mind and human consciousness from the physical and mechanical point of view.
What happens, if anything, for example, when we decide to do even such a simple thing as to lift up a cup and take a sip of coffee? The arm moves, but it is difficult to see how the thought or desire could make that happen. It is as though a ghost were to try to lift up a coffee cup. Its ghostly arm would, one supposes, simply pass through the cup without affecting it and without being able to cause it or the physical arm to go up in the air.
It would be no less remarkable if merely by thinking about it from a few feet away we could cause an ATM to dispense cash. It is no use insisting that our minds are after all not physically connected to the ATM, and that is why it is impossible to affect the ATM’s output — for there is no sense in which they are physically connected to our bodies. Our minds are not physically connected to our bodies! How could they be, if they are nonphysical? That is the point whose importance Princess Elisabeth and Gassendi saw more clearly than anyone had before them, including Descartes himself.
Jonathan Westphal is a Permanent Member of the Senior Common Room at University College, Oxford, and the author of “ The Mind-Body Problem ,” from which this article is adapted.
Organic unity theory: the mind-body problem revisited
Minnesota Institute of Psychiatry, St. Paul 55105.
- PMID: 2018155
- DOI: 10.1176/ajp.148.5.553
The purpose of this essay is to delineate the conceptual framework for psychiatry as an integrated and integrative science that unites the mental and the physical. Four basic philosophical perspectives concerning the relationship between mind and body are introduced. The biopsychosocial model, at this time the preeminent model in medical science that addresses this relationship, is examined and found to be flawed. Mental-physical identity theory is presented as the most valid philosophical approach to understanding the relationship between mind and body. Organic unity theory is then proposed as a synthesis of the biopsychosocial model and mental-physical identity theory in which the difficulties of the biopsychosocial model are resolved. Finally, some implications of organic unity theory for psychiatry are considered. 1) The conventional dichotomy between physical (organic) and mental (functional) is linguistic/conceptual rather than inherent in nature, and all events and processes involved in the etiology, pathogenesis, symptomatic manifestation, and treatment of psychiatric disorders are simultaneously biological and psychological. 2) Neuroscience requires new conceptual models to comprehend the integrated and emergent physiological processes to which psychological phenomena correspond. 3) Introspective awareness provides data that are valid for scientific inquiry and is the most direct method of knowing psychophysical events. 4) Energy currently being expended in disputes between biological and psychological psychiatry would be more productively invested in attempting to formulate the conditions under which each approach is maximally effective.
Mind-brain Identity Theory
- C. V. Borst
Department of Philosophy, University of Keele, UK
Part of the book series: Controversies in Philosophy (COIPHIL)
Table of contents (23 chapters)
Front matter
Statements of the Theory
Mind-body, not a pseudo-problem
- Herbert Feigl
Is consciousness a brain process?
- U. T. Place
Sensations and brain processes
- J. J. C. Smart
The nature of mind
- D. M. Armstrong
Initial Criticism and Clarification
Materialism as a scientific hypothesis
‘Sensations and brain processes’: A reply to J. J. C. Smart
- J. T. Stevenson
Further remarks on sensations and brain processes
- J. J. C. Smart
Smart on sensations
Brain processes and incorrigibility
Location and Leibniz’s law
Could mental states be brain processes?
- Jerome Shaffer
The identity of mind and body
- James Cornman
Shaffer on the identity of mental states and brain processes
- Robert Coburn
Mental events and the brain
Comment: ‘Mental events and the brain’
- Paul Feyerabend
Materialism and the mind—body problem
The Smart–Malcolm symposium on materialism
C. V. Borst
Book Title : Mind-brain Identity Theory
Authors : C. V. Borst
Series Title : Controversies in Philosophy
DOI : https://doi.org/10.1007/978-1-349-15364-0
Publisher : Red Globe Press London
Copyright Information : Macmillan Publishers Limited 1970
Edition Number : 1
Number of Pages : I, 261
Additional Information : Previously published under the imprint Palgrave
Topics : Philosophy of Mind , Epistemology
1000-Word Philosophy: An Introductory Anthology
Philosophy, One Thousand Words at a Time
The Mind-Body Problem: What Are Minds?
Author: Jacob Berger Category: Philosophy of Mind and Language, Metaphysics Word count: 998
We have minds. We see the world around us; we feel happiness or sorrow; we can think, doubt, believe, remember, wonder, and hope. We also have bodies, which include our brains.
But what are minds? And what (if anything) is the relationship of the mind to the body/brain—or to anything in nature?
These questions constitute the so-called “mind-body problem,” a core issue in the philosophy of mind , the area of philosophy that studies phenomena such as thought, perception, emotion, memory, agency, and consciousness.
This essay introduces some of the most influential answers to these questions.
1. Varieties of Dualism
One popular reply to the mind-body problem is dualism , which holds that the mental is fundamentally distinct from anything physical.  There are several versions of dualism.
Substance dualism holds that minds are mental substances, whereas bodies are physical substances. A substance is something that can exist on its own. Physical substances take up space and time: tables, stars, atoms, and human bodies are physical substances. Substance dualism proposes that minds are substances that think, feel, and experience, but do not take up space and could exist without bodies. The view is akin to the religious idea of immaterial and immortal souls. 
Property dualism instead holds that the mental and the physical are different types of properties. A property is a way an object can be; a brown dog has the property of being brown. But properties cannot exist without something to modify: brownness cannot exist on its own. Property dualism holds that creatures may have distinct mental and physical properties, although they perhaps cannot have mental features without also having physical ones. 
Many considerations support some form of dualism. “Conceivability arguments,” for example, claim that we can imagine examples of minds without bodies, as in cases of non-physical ghosts, or examples of bodies without minds, such as philosophical zombies : creatures that are physically just like us, but lack conscious experience. If what we can imagine is a good guide to what’s possible, then it seems some type of dualism follows. But such inferences are questionable, since arguably not everything we can imagine is possible. For instance, one might imagine proving a mathematical theorem, even if it’s actually unprovable. 
There are also reasons to doubt dualism. A well-known objection to at least substance dualism is the “problem of interaction.” It is easy to understand causal interactions between physical things that take up space and can contact one another: a baseball can break a window. But minds and bodies interact too. For example, stubbing your toe is something physical in your body that causes you to feel pain in your mind, and your mental pain causes you to physically wince. But it is unclear how these mind-body interactions could occur, if mental states do not take up space and so cannot be in contact with the body. 
Many therefore defend views that hold that the mind is related to the body insofar as both are physical. 
2. The Identity Theory
A notable view that holds the mind is physical is the identity theory, which answers the mind-body problem by claiming that mental states are identical to—or the same things as—states of our brains. Brain-scan technology reveals that mentality is tightly correlated with the firing of neurons. The identity theory simply identifies these, holding that a headache is nothing more than a pattern of nociceptor activity, just as water is nothing but H₂O.
The identity theory avoids the problem of interaction, since it’s clear how the brain can impact one’s body and vice versa. But difficulties nonetheless arise. For example, it seems our mental states are what philosophers call “multiply realizable”: different sorts of physical systems can all exhibit the same types of mental states. After all, many people believe that things that have no brains at all, such as forms of alien life or artificial intelligence, might one day not only act as though they feel pleasure or fear, but genuinely experience those things. But if so, then such states can’t be identical with patterns of neural activity.
3. Functionalism
The identity theory explains minds in terms of what they are physically made of. But if we ask, “What is a shoe?” a response in terms of physical make-up is no help since shoes are multiply realizable by leather, plastic, or wood. It is better to characterize shoes functionally: shoes are items whose function is, among other things, to protect our feet when we walk. Functionalism likewise claims that mental states should be understood in terms of their function, that is, their characteristic causes and effects.
Functionalism thereby answers the mind-body problem by maintaining that mental states are whatever states—be they bodily or otherwise—that play the relevant roles in whatever type of organism. Pains are states typically caused by bodily harms, and in turn typically cause behaviors such as wincing. Functionalism is compatible with mental states’ being nonphysical, but simplicity recommends that in humans such states are realized by brain activity. And if we one day build an artificially intelligent robot that experiences genuine pain, pains would be realized by states of its central processing unit that perform the functions of pain.
Functionalism allows for multiple realizability, but it faces problems. Many think, for example, that even if we knew all the physical and functional facts about some creatures such as bats—everything about their physiology and behavior—we still would not know what it’s like to be a bat. Only bats, it seems, can know what the bat experience of echolocation is like. But if that’s the case, then functionalism, which holds that we can understand minds wholly in terms of their functions, is false. 
Most contemporary philosophers of mind endorse some variety of dualism, identity theory, or functionalism.  But there are other theories of mind,  not to mention many versions of each of the above accounts, which have various advantages and disadvantages—too many for a short essay to explore! The mind-body problem thus remains one of the enduring puzzles of human thought. 
 For further discussion of both substance and property dualism and the arguments for and against them, see Calef (n.d.) and Robinson (2020).
 The most famous proponent of substance dualism in the history of philosophy is René Descartes; see Marc Bobro’s Descartes’ Meditations 1-3 and Descartes’ Meditations 4-6 .
 For a defense of property dualism, see, e.g., Chalmers (1996).
 For discussion of the relationship between conceivability and possibility in general, see Bob Fischer’s Modal Epistemology: Knowledge of Possibility & Necessity .
 This objection was arguably first raised for dualism by Descartes’ own student, Princess Elizabeth of Bohemia; see Princess Elizabeth and Descartes (1643-9/2019).
 It’s worth noting that some have instead endorsed idealism , the view that the mind and body are related insofar as both are actually mental . On that view, everything is a construction of perceptions and ideas–and so there are no physical bodies, at least as traditionally understood. But idealism remains a minority position. For discussion of it, see Addison Ellis’s Idealism Pt. 1: Berkeley’s Subjective Idealism .
 For a classic statement of identity theory, see, e.g., Smart (1959). For further discussion of the identity theory and the arguments for and against it, see Schneider (n.d.) and Smart (2007).
 For a classic statement of functionalism, see, e.g., Lewis (1972). For further discussion of functionalism and the arguments for and against it, see Polger (n.d) and Levin (2023).
 For this much-discussed argument, see Nagel (1974). For a similar argument against not only functionalism, but any view on which the mind is physical, see Tufan Kıymaz’s The Knowledge Argument Against Physicalism .
 See, for example, Bourget & Chalmers (n.d.), which indicates which theories in the philosophy of mind respondents are sympathetic with.
 For example, an issue faced by many theories is that it is unclear why anything physical, including suitably developed brains, would be associated with (much less be identical with) mental phenomena. If atoms don’t have minds, then it is not obvious why any collection of them would have minds either. To avoid this problem, some endorse panpsychism —the view that all physical objects, from atoms to tables, exhibit mental properties; for more, see, e.g., the essays in Goff and Moran (2022). Similarly, some endorse neutral monism —the theory that both mental and physical phenomena are properties of a more fundamental neutral substance; for more, see, e.g., the essays in Alter and Nagasawa (2015).
And other accounts take starker views of the mind. A view that was popular in the early 20th Century was behaviorism , on which there are no “inner” mental states; mental states are nothing but observable behaviors or dispositions of bodies to act. Feeling happy, for example, simply is (the disposition to perform) the act of smiling. For a classic statement of behaviorism, see Ryle (1949). And eliminative materialism maintains that there is no mind-body relationship because there simply are no minds at all: “mental states” are things we should no longer think exist like witches or phlogiston. For a statement of the view, see Churchland (1981).
The mind-body problem is also highly relevant to many other areas of philosophy and ethics. For example, views on the metaphysical issue of “personal identity”—how (and whether) we exist as the same being over time, despite the many changes that occur to us—are often informed by views on what the mind is and its relation to the body: see Chad Vance’s Personal Identity and Kristin Seemuth Whaley’s Psychological Approaches to Personal Identity: Do Memories and Consciousness Make Us Who We Are?. And whether a being has a mind—and what this means—is often thought to be highly relevant to many ethical issues: see Jonathan Spelman’s Theories of Moral Considerability: Who and What Matters Morally? and Nathan Nobis’s The Ethics of Abortion.
Alter, T. & Nagasawa, Y. (eds.) (2015). Consciousness in the physical world: Perspectives on Russellian monism . Oxford University Press.
Bourget, D. & Chalmers, D. (n.d.). Consciousness: panpsychism, dualism, eliminativism, identity theory, or functionalism? Survey2020.philpeople.org.
Calef, Scott. (n.d.). Dualism and mind. Internet Encyclopedia of Philosophy .
Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory . Oxford University Press.
Churchland, P. M. (1981). Eliminative materialism and the propositional attitudes. Journal of Philosophy, 78(2), 67–90.
Goff, P. & Moran, R. (2022). Is consciousness everywhere? Essays on panpsychism . Imprint Academic.
Levin, Janet. (2023). Functionalism. The Stanford Encyclopedia of Philosophy (Summer 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.).
Lewis, D. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy , 50(3), 249–258.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83 (4), 435–450.
Polger, Thomas. (n.d.). Functionalism. Internet Encyclopedia of Philosophy .
Princess Elizabeth and Descartes, R. (1643-9/2019). Correspondence. In Ariew, R. & Watkins, E. (eds.), Modern philosophy: An anthology of primary sources . Hackett Publishing.
Robinson, Howard. (2020). Dualism. The Stanford Encyclopedia of Philosophy (Spring 2023 Edition), Edward N. Zalta & Uri Nodelman (eds.).
Ryle, G. (1949). The concept of mind . Hutchinson.
Schneider, Steven. (n.d.) Identity theory. Internet Encyclopedia of Philosophy .
Smart, J. J. C. (2007). The mind/brain identity theory. The Stanford Encyclopedia of Philosophy (Winter 2022 Edition), Edward N. Zalta & Uri Nodelman (eds.).
Smart, J. J. C. (1959). Sensations and brain processes. The Philosophical Review , 68 (2), 141–156.
Descartes’ Meditations 1-3 and Descartes’ Meditations 4-6 by Marc Bobro
Idealism Pt. 1: Berkeley’s Subjective Idealism by Addison Ellis
The Knowledge Argument Against Physicalism by Tufan Kıymaz
Modal Epistemology: Knowledge of Possibility & Necessity by Bob Fischer
Artificial Intelligence: The Possibility of Artificial Minds by Thomas Metcalf
Personal Identity by Chad Vance
Psychological Approaches to Personal Identity: Do Memories and Consciousness Make Us Who We Are? by Kristin Seemuth Whaley
Theories of Moral Considerability: Who and What Matters Morally? by Jonathan Spelman
The Ethics of Abortion by Nathan Nobis
About the Author
Jacob Berger is an Associate Professor in the Department of Philosophy at Lycoming College. He received his Ph.D. in Philosophy with a concentration in Cognitive Science at The Graduate Center of the City University of New York in 2013. His areas of specialization are philosophy of mind and cognitive science. jfberger.wixsite.com/home