Do selves even exist?




John Ostrowick, UCT

Abstract

We always talk as if there are individual persons, distinguishable from other persons, who have an inner, private conscious life in which they are aware of their own existence and can reflect on their own personhood. Descartes’ Cogito argument is typical of this sort of position. This paper claims that whilst we have these experiences, there is no such thing as a self, per se. In particular, the paper rejects the notion of a causally efficacious self that can be held morally accountable. The paper provides a number of arguments against the reality, or at least the causal efficacy and moral responsibility, of selves: arguments from philosophy, neuroscience, and traditional concerns about the persistence of identity over time.

Introduction

There is an enormous body of literature in all traditions of philosophy and neuroscience which addresses the notion of identity.[1] This paper is not an exhaustive literature overview; it offers, instead, the following: (1) an account of what a self (identity)[2] could be and how it arises, (2) an explanation of why it may be an illusion, and (3) an account of why we think that persons have selves (identities) or characters which allow us to identify them as such.

What is a “self”?

The word “self” is a psychological term, referring to that which has our personality traits and memories. Following Dennett & Kinsbourne (1992) we will call this the “Cartesian model of the self”. Our inner life is accessible to us, and our sensory input appears to us in a unified, multi-modal display in our “mind’s eye”, as if we have an inner “stage” or “theatre”. Searle (1999) says consciousness is “essentially” a “[c]ombination of Qualitativeness, Subjectivity and Unity”. Velleman, likewise, thinks of the self as a unified “arena” (2000, p123). It is “a thing it is like something to be” (Dennett & Kinsbourne, 1992, p183). We also use the concept of a “self” as a label for what we take to be the cause of our actions (McDermott, 1992, pp217-8).

The “Cartesian model of the self” is not the same as Cartesianism, which has dualism as an additional feature (Discourse on Method, Part 4, para 2). A self doesn’t have to be a free-floating soul — it could be something else (e.g. Searle’s “fields”, 2001a, 2001b) — as long as it is particular.

[1] The author is aware of the work of Damasio, Brass, Lynn, Haggard, Hood, Eagleman, Bayne in Hill (op. cit.), White, Metzinger, et al. The author is also aware of the literature around mereology. This small selection of existing literature is used for reasons of space.

[2] Throughout this paper I will use these terms interchangeably.

Moral responsibility

Events only occur if there are suitable antecedents (Searle, 2001, p495). But antecedent causes don’t seem sufficient to explain our actions (Searle, 2001, p495). In attributing causation to a person, we are treating them as a causally efficacious system. Locke, for example, originally identified the person (or self) as a “forensic” term, identifying who is morally responsible.[3] So how would a person or self be involved in her actions, and hence be morally responsible?

A self, according to Searle (2001a), is “conscious agency plus conscious rationality” (Searle, 2001a, p511). In order to be involved in an action, a person has to choose the action on the basis of reasons (Searle, 2001, p493). And in order for the reasons for an action to be our reasons, they would have to be the reasons of our particular selves (cf. Searle, 2001a, p495, p499).

The difference between a mere motion, such as a rock rolling down a hill, and an action, such as my typing this paper, is that a self wills the action (Searle, 2001a, p492; cf. Davidson, p685 et seq.; Velleman, 2000, p125, p127). What makes us agents is that we can interpose ourselves into a causal sequence, so that the resulting events trace back to us (Velleman, 2000, p128; Searle, 2001a, pp499-500). Hence, it seems that the most important ingredient in an action, as opposed to a mere motion, is the self (you). We don’t blame rocks for rolling down hills, but we blame people for pushing rocks. Therefore, Searle concludes, we need to posit the existence of an “irreducible self” (Searle, 2001a, p501, p502) if we want to attribute moral responsibility. Why irreducible? Because we can’t reduce selves to causes or components. To do so would be to eliminate the inherently first-person nature of the self, reducing it to objective, impersonal causal sequences (Searle, 2001a, p506). Hence, if a self is not one thing, moral responsibility is threatened.

Critical discussion of the Cartesian Self

In this section, we consider philosophical problems with the Cartesian notion of self.

1. Hume on the self. Hume argues that we are nothing more than a bundle or composite of perceptions and recollections, parts constantly in flux, which annihilate when we sleep or are unaware of ourselves (Hume, Treatise, Book I, Sect. 4, paras 3-4). The only reason we believe we are single selves is because of the “transition of the mind from one object to another,” which “renders its passage as smooth as if it contemplated one continu’d object”. As Hume observes, and as I shall later argue, “memory alone acquaints us with the continuance and extent of this succession of perceptions”, and thus it is because of memory that we consider our selves to be real (Book I, Sect. 4, para 6).

[3] Locke, An Essay Concerning Human Understanding, Book 2: Of Identity and Diversity, Section 26. Also Allison, H. E. (p106).

We are nothing more than “... a republic or commonwealth... [which can] ... change its members ... without losing [its] identity.” (Book I, Sect. 4, para 20).

2. The fallacy of division and emergence. Properties of a whole are not necessarily properties of the parts. Individual neurons are not conscious. To suggest that individual neurons are conscious leads to an infinite regress: each neuron would have to have its own neurons. So, in the case of the self, subsystems or regions of the brain can’t contain the self or be conscious; what we now call the self has to be a complex property which supervenes on a range of subsystems.

Consider this analogy. No individual car constitutes a traffic jam. However, a traffic jam stops the cars from moving. So, we can argue, the properties of the component parts (cars) give rise to the properties of the whole (the “traffic jam”) through the phenomenon of emergence (Gazzaniga, 2012, pp135-136), which, though merely emergent, is causally efficacious. But this implies that if selves are emergent phenomena, they might well be causally efficacious. We’ll see more on this when we discuss rolling rocks. Let’s turn to Ryle’s argument about composition.

3. Ryle’s Category Mistake. Ryle (1949) provides us with the analogy of a university. He claims that we can’t readily state which entities represent or “are” the university, and to attempt to do so is to commit a category mistake: to misunderstand what is being asked (“Where is the university?” or “What is the university?”) (Ryle, 1949, p17). A university is more than just buildings. So, to extend the analogy, it is a mistake to ask what the self is, where it is, what it can do, or to attribute causal powers to it. The self might well lack causal powers, if it is a composite like a university. We will give further argument for this as we go along. But consider this. A university does not “decide” anything; the university council decides. Indeed, even the council per se does not decide: individuals in the council vote and compete (and, as I am arguing here, not even those individuals vote or decide; their subsystems do). If Hume is right that the self is a series of sense impressions, memories, etc., then it is fairly clear that such a collective does not stand as a plausible cause of actions. These states might incite or contribute to what we call “action”, through a complex chain of events, but they do not stand directly as the sole immediate cause.

4. Dennett’s Multiple Drafts. Dennett and Kinsbourne (1992a, 1992b, 1993) argue that there is no good reason to suppose that individual sensory inputs come together in a central “Cartesian Theatre” in the mind — the self. And in Chapter 8 of his 1993 book, Dennett explicitly says there’s no one in charge. Let’s see how he defends this position.

In their 1992a article, Dennett et al. deny that we have a single locus of perceptual consciousness (1992, p183, p185; cf. Parfit, 1984, p249). Their first objection is this. “Cartesian materialism” involves an infinite regress (1993, pp52-3). The homunculus (Latin: little man) who is looking at the Cartesian Theatre, “where it all comes together” (Dennett et al., 1992, p184), would need his own theatre if he is to be conscious. And his homunculus also has to have a theatre, and so on. This will lead to an infinite regress (1993, pp52-3). This critique is not entirely convincing, however.

If we consider the theatre itself to have the property of consciousness, rather than being a kind of “container” for the self that sees what is inside the theatre, then Dennett’s view here is a straw man.

The second problem that Dennett et al. highlight is this. Grant that we are able to recognise simultaneity. But in order to ascertain simultaneity, we seem to need a central locus of consciousness (Dennett et al., 1992, p184). Since we know we can make judgements of simultaneity accurately, it would follow that we have a central locus of consciousness. Let us then do a modest thought experiment. Take the example of a simultaneous tap on the forehead and toe. The toe-to-brain distance is longer than the forehead-to-brain distance, so one may imagine that, in order to perceive the two taps as simultaneous, one would need a “delay circuit” to keep the tap on the forehead out of consciousness until the tap on the foot “arrived”. But why should sensory input be delayed just in case something else relevant comes along later? (1992, p188). Gazzaniga offers the same thought experiment (Gazzaniga, 2012, pp127-8). It would be an evolutionary mistake: how would our brains “know” in advance to delay the forehead tap and keep it “out of consciousness” until a foot-tap arrived? The delay-circuit picture is clearly wrong. Rather, to obtain the impression of simultaneity, we’d need to do some post-processing which delivers the “final results” to consciousness. This suggests that we have at least two areas of the mind, as Freud suggested: an unconscious and a consciousness. Yet we believe that the conscious self is the “real me”.
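To get a feel for the size of the timing gap at issue, here is a rough back-of-the-envelope calculation. The conduction velocity and body distances are assumed, ballpark figures chosen purely for illustration; they are not taken from Dennett or Gazzaniga.

```python
# Illustrative arithmetic only: velocity and distances are assumed ballpark values,
# not figures from Dennett or Gazzaniga.
CONDUCTION_VELOCITY_M_PER_S = 50.0  # rough speed of tactile nerve signals
DIST_FOREHEAD_TO_BRAIN_M = 0.1      # assumed forehead-to-brain path length
DIST_TOE_TO_BRAIN_M = 1.6           # assumed toe-to-brain path length

forehead_ms = DIST_FOREHEAD_TO_BRAIN_M / CONDUCTION_VELOCITY_M_PER_S * 1000
toe_ms = DIST_TOE_TO_BRAIN_M / CONDUCTION_VELOCITY_M_PER_S * 1000

print(f"Forehead tap reaches the brain after ~{forehead_ms:.0f} ms")
print(f"Toe tap reaches the brain after ~{toe_ms:.0f} ms")
print(f"Gap a putative 'delay circuit' would have to bridge: ~{toe_ms - forehead_ms:.0f} ms")
```

On these assumed figures, the putative delay circuit would have to hold the forehead signal back by roughly 30 ms, just in case a matching toe signal was on its way, which is exactly the kind of design Dennett and Gazzaniga find implausible.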

Dennett et al. then argue that asking when “I” (myself) became aware of something is like asking when the British Empire became aware of the truce that ended the War of 1812 — because “I”, like the empire, is spread out in space and time (1992, p235). Likewise, it is meaningless, given the delay example, to say when “I” became aware of something, since experiments can show that we are aware of things even if we do not report, or cannot report, that we are aware. There is no guarantee that the system as a whole is ever apprised of anything at the same time as its parts (1992, p236). And, more importantly, it doesn’t need to be. There’s no good reason to send visual information to the auditory cortex, or vice versa. Likewise, there’s no good reason for these areas to send their data to a central place after processing. Processing once is enough. Hence, there’s no good reason to suppose that there’s centralised apperception.

Furthermore, Dennett argues, (1993, Ch. 8), given the evidence of speech deficit disorders, a self is not an ultimate judge over choices; what happens is something more like a case of competing “candidates”. He gives the examples of various forms of aphasia in which the patient struggles to find the right word (Dennett, 1993, p249 et seq.). So, just as a person has many versions of what they sense, so they have many versions of what it is that they are going to say. Dennett argues, moreover, that aphasia actually just brings normal underlying word selection processes forward. The person is simply not silencing their inner verbal processing (Dennett, 1993, p250).

Compare, now, lace-tying, which is automated, with playing chess, which isn’t. Once we know how to tie our laces, we don’t have to think about it, and the same applies to most of our ‘deliberate’ acts — they don’t involve any deliberation. This, Dennett claims, applies to all kinds of acts, not just speech acts; deliberated acts are rare (Dennett, 1993, pp251-2; 1984, p87). If there is no central self which perceives, as he argued in earlier chapters, he continues, perhaps there is no central self choosing our actions and words either (1993, pp250-2). Instead, we have a “Joycean Machine” which throws out words that get selected and assembled (in some way) into a final utterance (Dennett, 1993, pp427-9). Thus, if there is no central arena of perception or word choice, there is no central self, no “Oval Office in the brain” (Dennett, 1993, p106).

“Actions are over, done, kaput, before your brain is conscious of them.” (Gazzaniga, 2012, p112). If it were left up to us to ruminate about threats, for example, we’d not survive. It is better to jump instantly at the sound of rustling grass than to wait to find out whether it was the sound of a rattlesnake’s tail (Gazzaniga, 2012, p76). “I did not make a conscious decision to jump”. My decision to jump is justified post hoc by the claim that I saw a snake, but in reality I did not see one; it was an automatic response. Consciousness takes too long (Gazzaniga, 2012, p78). And we do not have any access to the nonconscious processing that led to the jump. Our explanation afterwards, that we jumped deliberately, is fabricated (Gazzaniga, 2012, p77).

5. Parfit. In Reasons and Persons, Parfit argues that identity, or selfhood, per se, cannot persist over time. He offers a number of thought experiments which attempt to demonstrate this.[4]

Consider Theseus’ Ship. This well-known example (to what extent is the ship still the same ship over time as its planks get replaced?) demonstrates that there is no clear threshold for identity, or for when we can say a thing or person is the same. In other words, the self (person) is not one continuous, persistent thing, but rather something that changes components over time.

Another familiar example is the teleporter in Star Trek. Presumably, when the teleporter operates, it disintegrates the crew on the Enterprise and then reassembles them on the surface of the planet. Let’s say the teleporter does not transport their original atoms, but uses atoms on the surface to reconstitute the crew. The question that arises, of course, is whether the teleporter kills the crew. Suppose the teleporter malfunctions and the originals do not get disintegrated, but remain on the ship. In this case, we see that the original crew is on the ship, and the crew on the planet are mere copies. Parfit thus argues that identity (selfhood) is not a one-to-one relationship (one identity to one body). Yet, if the planetary crew were copies, then, when the teleporter does not malfunction, and the crew are disintegrated on the ship, we have to say that the teleporter kills the crew on the ship.

We could therefore be something very different, e.g. a whole team of “selves”, coming into and going out of existence, passing the baton of mental contents from one physical entity to the next in a “relay race” (Parfit, 1984, p223). A person, then, is somewhat like a club, with members coming and going all the time (Dennett, 1993, p423; cf. Parfit, 1984, p213). And, like clubs, “selves” are ‘empty notions’: questions about their continued existence do not admit of objective answers. This is not to say that Parfit doubts that selves exist, just that there need not be any absolute answers to questions like “is this me?” (1971, p3; 1984, p214). As long as some resemblance of our organisation is preserved in a recognisable manner by an appropriate means, we have survived.

[4] The author is aware of objections to Parfit, e.g. from Ricoeur; however, the author agrees with Beck, S. (2015, personal communication) that they do not succeed.



So Parfit offers us his notion of survival. Instead of talking about identity, he urges, we should consider our survival “good enough” if we are sufficiently similar and causally connected to the later copies. The notion of identity, in short, is nonsensical. Some cases will not admit of a determinate answer, and that is because there just is no answer (1984, pp255, 260). Thus, since we can’t identify a person, physically or psychologically, as being more than just a temporal continuum,[5] the real existence of the self is in doubt. “... we need not explain this unity by ascribing these experiences to the same person, or subject of experiences” (Parfit, 1984, p251).

Ethological evidence against the Cartesian Self

Eugene Marais, an early 20th-century academic in South Africa, wrote “The Soul of the White Ant”. Marais argued that a termite hill is in fact the animal, and the termites are merely agents within it, like our blood cells or neurons, and that the appearance of organisation emerges from the behaviour of the termites within the nest. But there is no soul in a termite nest (cf. Gazzaniga, 2012, pp135-6).

“...[T]here really is no proper-self... complex systems can in fact function in what seems to be a thoroughly ‘purposeful and integrated’ way simply by having lots of subsystems doing their own thing without any central supervision. ... The behavior of a termite colony provides a wonderful example of it ... but quite uninfluenced by any master-plan.” (Dennett & Humphrey, p8). “... the strangest and most wonderful constructions ... are [those] made by ... Homo sapiens. Each normal individual of this species makes a self. Out of its brain it spins a web of words and deeds... [likewise,] so wonderful is the organisation of a termite colony that it seemed to some observers that each termite colony had to have a soul ... We now understand that its organisation is simply the result of a million semi-independent little agents, each itself an automaton, doing its thing. So wonderful is the organisation of a human self that to many observers it has seemed that each human being had a soul too: a benevolent Dictator...” (Dennett, 1993, p416). Instead, what a self is, then, is an emergent set of character traits, a regular behavioural pattern, or a dispositional tendency to behave in a certain way. “The self is in the action”, as Velleman argues.

[5] Compare the perdurantist position, in which selves are four-dimensional, existing in time. McKinnon, N. (2002). “The Endurance/Perdurance Distinction”, The Australasian Journal of Philosophy, 80:3, 288-306; Sider, T. (2001). Four-Dimensionalism. Oxford: Clarendon Press.

Neuropsychological evidence against the Cartesian Self

We now consider evidence from Northoff, Gazzaniga, and others. Because it is apparent that we have a self of some sort, we must explain how it arises (Searle, 2001, p510).

Northoff et al. locate the self in the cortical midline structures (CMS) (Northoff et al., 2004, p103). The choices of the self ultimately just have a neurological cause (Northoff et al., 2004, p104). “Anatomically the ‘core self’ is associated with the orbitofrontal and ventromedial prefrontal cortex.” (p103). But is it the CMS that controls the body?

Gazzaniga, in his book “Who’s in Charge?”, argues that instead of a “self”, there is an ‘interpreter’ in the brain which makes up stories about what the brain is doing (Gazzaniga, 2012, p95, p105, p108). The ‘interpreter’ does not control what the brain does (Gazzaniga, 2012, p8). Gazzaniga, in his experiments with split-brain or commissurotomy patients,[6] fed information to only one cerebral hemisphere of his patients, and that hemisphere would then be asked to draw what it saw. The other hemisphere would be given no information. When the uninformed hemisphere was asked why its hand drew something, instead of saying “I do not know,” the uninformed hemisphere merely made up a justification. Apart from demonstrating that the left and right brains don’t communicate as much as we’d assume, this, Gazzaniga says, demonstrates that an area on the brain’s left side, the interpreter, “makes up” justifications for the nonconscious actions that we perform (2012, pp82-83). “The ... interpreter ... has created the illusion of self and, with it, the sense we humans have agency and ‘freely’ make decisions about actions” (Gazzaniga, 2012, p105). Furthermore, as a brain’s size increases, so does the number of connections between neurons. However, at a certain size, the length of wiring required to connect every neuron to every other neuron in the vicinity becomes prohibitive and slow. Hence, in our case, whilst our neurons are indeed more arborised (multiply connected) than those of other animals, they are not connected as one would expect — to every other neuron. This results in high connectivity within smaller areas, which produce computational results that are sent onward without the details of the computation itself. In effect, parts of the cerebrum, called ‘modules’, are specialised for their tasks. There are mostly short local connections, and only a few longer connections to other subsystems. This means that subsystems have to be specialised and automated, “each doing their own thing” (Gazzaniga, 2012, p33, pp67-68).[7] Hence, when we become aware of a mental state, we’re just becoming aware of the computational results in some subsystem. Furthermore, these modules operate in parallel (ibid., p95), as Dennett said. Our brains “don’t seem to have a boss, much like the Internet does not have a boss” (Gazzaniga, 2012, p43, p44).

[6] We will see Marks’ (1981) objections to this evidence later.

[7] This is exactly the same scenario seen in computer design: there are specialised chips that process graphics or sound or network connectivity, and they share information via longer wires.
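As a toy illustration of this modular, no-central-boss picture, the sketch below has specialised modules doing their work in parallel and publishing only summaries, with an ‘interpreter’ that stitches a single first-person story together after the fact. The module names, messages and narration are invented for illustration; this is a sketch of the general idea, not a model from Gazzaniga or Dennett.

```python
# A toy sketch (invented names and messages) of specialised modules doing local
# work in parallel and sharing only their results, with an "interpreter" that
# narrates a single story after the fact.
from concurrent.futures import ThreadPoolExecutor

def visual_module(scene):
    # heavy local computation stays inside the module; only a summary leaves it
    return {"module": "vision", "result": f"saw {scene}"}

def auditory_module(sound):
    return {"module": "audition", "result": f"heard {sound}"}

def motor_module(threat_detected):
    return {"module": "motor", "result": "jumped" if threat_detected else "stood still"}

def interpreter(bulletins):
    # makes up one first-person story from whatever summaries happen to arrive
    return "I " + ", then I ".join(b["result"] for b in bulletins) + "."

with ThreadPoolExecutor() as pool:
    futures = [
        pool.submit(visual_module, "rustling grass"),
        pool.submit(auditory_module, "a rattle"),
        pool.submit(motor_module, True),
    ]
    bulletins = [f.result() for f in futures]

print(interpreter(bulletins))
# e.g. "I saw rustling grass, then I heard a rattle, then I jumped."
```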



This accounts for how we can speak, hear and see at the same time, using different modules (Raichle, Peterson, Posner in Gazzaniga, 2012, p33). Yet whilst we accept autonomous control for breathing, say, we are reluctant to accept it in the case of decisions (Gazzaniga, 2012, p41). But: “There is no ghost in the machine, no secret stuff that is you. That you that you are so proud of is a story woven together by your interpreter module to account for as much of your behaviour as it can incorporate, and it denies or rationalises the rest.” (ibid.). Crick and Koch (2005), and Koubeissi et al. (2014), have released papers showing that consciousness is disrupted if one interferes electrically with the claustrum. This could mean that the claustrum is a central neural pathway through which all data travels in the brain; or perhaps it is the seat of consciousness. Only further research can answer this.

The skeptic’s model of the self

This paper therefore proposes that the “self” is an illusion; it just is consciousness (but it does not have “will”). We might also include our memories in the “self” as a way to explain how it is that ‘the interpreter’ remembers or describes narratives about ourselves, what we have done, and so on. There is, however, nothing that ‘has’ consciousness — there is no “I” in “Cogito”.[8]

So how do we explain action? It seems reasonable to suppose that mental states like memories, sensations, etc., could provide motives for actions. This would explain, for example, why it is that, when we see food, we get up to retrieve it. However, animals that we don’t normally credit with selves, e.g. lizards, insects, etc., seek food accurately. And if that’s all it takes to explain action (because presumably we want to say that a lower animal can act), we certainly do not need a conscious self to cause actions.

However, we are left with some objections.

Objections to the skeptic’s model of the self

a. What’s the point of a self? Why should we have conscious goal-directed states if they’re not causally efficacious and informed by our other states? From an evolutionary point of view, it doesn’t make sense to suggest that selves or consciousness have no purpose (Searle, 2001, p509). It would also not make sense for an internal monologue or self-concept to be merely epiphenomenal. This is an “expensive piece of equipment” to deploy if it has no function (Ismael, 2006, p351). It would make more sense, Ismael argues, that a “self” was a late evolutionary addition to an original termite colony-like design (Ismael, 2006, p352).

[8] The Cogito is at best a tautology. Where ms = “thinking is occurring”, the Cogito would need ∃(ms) ⊃ ∃(Self), which does not hold: ∃(ms) ⊅ ∃(Self). At best we get ∃(ms) ⊃ ∃(ms).
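A minimal first-order rendering of this footnote’s point, in notation of my own rather than the paper’s (T(x): “x is a thinking event”; S(x): “x is a self”), might run as follows:

```latex
% Illustrative reconstruction; the predicates T and S are mine, not the paper's.
\[
  \exists x\, T(x) \;\not\Rightarrow\; \exists y\, S(y),
  \qquad\text{whereas at best}\qquad
  \exists x\, T(x) \;\Rightarrow\; \exists x\, T(x).
\]
```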

Response: Just because we have a feature does not mean it is evolutionarily beneficial. Consider the appendix, wisdom teeth, and biological atavisms generally.[9] Something being present in our evolutionary makeup does not entail that it is useful; it might simply not be so harmful as to cause our extinction. Epiphenomena ipso facto have no purpose (Searle, 2001, p506). Now, only higher animals seem to show self-consciousness.[10] And if that’s the case, then it seems to be true that a “self” is a late evolutionary addition. However, this does not demonstrate that self-awareness has a function. It could still be an epiphenomenon that emerges as a side-effect of having more cerebral neurons.

Consider burning your hand on a stove. While it seems prima facie that jerking your hand away is a result of the conscious apprehension of pain, it is in fact a reflex (it does not go via the brain).[11] According to Libet, awareness of a stimulus takes about 200 msec, and instructions to the hand to move take about 50-200 msec; the reflex arc, by contrast, completes well before awareness arrives. Thus, we perceive the pain and the hand movement as simultaneous, and we think that the awareness of pain caused the hand to be withdrawn [heat ➝ signal ➝ pain ➝ decision to move]; but what really happened was that the original signal caused both directly [heat ➝ signal ➝ pain] & [heat ➝ signal ➝ reflex].

b. The moral responsibility objection: If we are not a particular causal agent, and cannot be held responsible for our actions, it seems that our reactive attitudes[12] and moral practices become nonsense. This seems to pose a threat to our very way of interacting with each other. As with the question of Ultimate Responsibility (Kane, 1996, p75; 2002, p224), we need to settle on an answer as to the ultimate personal causation of an event, or why persons are morally valuable. “One of the most important roles of a self ... is as the place where the buck stops... If selves aren’t real — aren’t really real, won’t the buck just get passed on and on, around, forever? [W]e seem to be threatened with a ... bureaucracy of homunculi, who always reply, when challenged: ‘Don’t blame me, I just work here’.” (Dennett, 1993, pp429-430). “I am a character in a story my brain is making up... [but] if selves turn out to be constructs of information-processing systems, it will be hard to say why [we] are of any value... I’m just something my brain made up... [but] If people are valuable, it is not because they are imperishable souls connected to bodies only for a brief sojourn. They have to be valuable for some other reason.” (McDermott, 1992, pp217-8).

[9] M. Le Page, “When Evolution Runs Backward”, New Scientist (13/01/2007), 28-33; C. Ainsworth and M. Le Page, “Evolution’s Greatest Mistakes”, New Scientist (11/08/2007), 36-39.

[10] Gallup, G. G. Jr. (1970).

[11] This is called a ‘reflex arc.’

[12] P. Strawson, p5 et seq.

Response: Our having reactive attitudes does not establish that our actions have moral value.[13] Rather, all that reactive attitudes establish is that we naturally react to people. Indeed, our moral instincts appear to also just be about circuitry. Koenigs et al.[14] discovered that social emotions lie in the Ventromedial Prefrontal Cortex (VMPC). Persons with damage to the VMPC can report on what the “right” thing to do would be; however, in tests, such persons chose between moral dilemmas based on pragmatic rather than moral reasons. Gazzaniga argues that people start with a moral emotion, and then their ‘interpreter’ unit works backward from the emotion to justify it. Fehr et al. (in Gazzaniga, 2012, p177) found that disrupting the dorsolateral prefrontal cortex makes people more selfish under test conditions (ibid.). We are thus born with circuitry to provide us with morality (ibid., p171), selected for by evolution and our social natures (ibid., p182).

c. The rolling rock objection: Emergence and the fallacy of division. Let’s take the case of a rock rolling down a hill. We do think that rocks roll down hills in virtue of being rounded, single things. It therefore makes no intuitive sense, say, to attribute the rock’s rolling down a hill to a particular atom inside the rock. Were the atoms of a rock scattered around rather than being in a ball-like shape, they’d not roll down a hill. The same applies to a self. If we consider the self to be a loose aggregate of uncooperating components, it makes little sense to attribute causal powers to the self. But if we assume that a self is a cooperating composite, it might be causally efficacious. Hence, a self’s being composite doesn’t entail that it is not causally efficacious; indeed, like a rock rolling down a hill, a self’s being composite might be what gives it its causal efficacy.

Response: Dennett argues that even though we seem to be acting as a unit, it’s more of an internal competition (Dennett, 1993, p250). The brain state which has the “strongest” influence on the motor cortex is the one which leads to action. There is no “deciding” between the states (Dennett, 1993, p238); each pushes itself forward as a possible choice.

Secondly, is a rock even one thing? There is no principled way to draw a hard line between a rock’s being one thing and its being a network, or even a cloud, of atoms. It’s a dangerous metaphysical assumption to think that a termite colony is just a network or loose aggregate, and a self or rock is not.[15] Something’s being a “rock” or a “self” has nothing to do with the thing itself, but rather with how we label it. We need to be able to categorise and demarcate things, hence we have these terms. We will argue this point in further detail below.

Thirdly, group behaviours of creatures look orchestrated, but they’re not. Consider a football match where the crowd rushes onto the field, or a flock of geese travelling in a V-shape. Regular behaviour does not entail the existence of a central controlling self. Complex systems, e.g. weather, display regularity without there being an orchestrator (Amaral & Ottino in Gazzaniga, 2012, p72).[16]

[13] Assume moral skepticism for the sake of the argument.

[14] Koenigs, Young, Adolphs, Tranel, Cushman, Hauser & Damasio (2007).

[15] Christiaan Reynolds, personal communication, 2015.

d. Marks’ objections to commissurotomy examples. Marks (1981) dislikes the argument from commissurotomy. Marks argues these cases imply that either we are always two people (and just don’t know it), or a new person is created by commissurotomy. Both alternatives are hard to accept.

Marks objects that splitting a person’s brain does not entail a lack of psychological unity prior to the split, or even after. He claims that the singularity of selfhood persists (Marks, 1981, p32). It is only through carefully selected experiments that we can detect disruptions in personhood (p41). As long as both halves of the brain retain some communication — which they do through the lower brain — and as long as both halves retain similar characteristics, behaviours, goals, etc., the self is not really split. If a person’s behaviour postoperatively is largely consistent with his preoperative behaviour, we are obliged to explain it by a largely unified mentality. Indeed, Gazzaniga points out, commissurotomy does not halve IQ (Gazzaniga, 2012, p31). Such patients claim that they experience “no split personality, no split consciousness” (Gazzaniga, 2012, p54).

Response: This is unconvincing, as it begs the question of the unity of the self. We can’t talk about a “self” being unified, then operated on, and then still being unified, if we’ve not yet established whether it is unified preoperatively, which is the complaint of this paper.

We might, for example, have identical twins, who mostly agree, share the same attitudes, generally behave the same, and who communicate often. This does not entail that these two persons are the same person. What makes a person “the same”, as Parfit has amply demonstrated, has nothing to do with their occupying one body or having one brain; rather, it is about survival and causal continuity.

Gazzaniga reports that closer testing of his patients reveals that patients are not aware of certain things (e.g. stimuli shown to the “wrong” visual hemisphere). Further investigation shows that patients sometimes detect something, but that they cannot report on what it is that they are conscious of, under certain conditions (Gazzaniga, 2012, p56). Hence, consciousness is not quite as unified as Marks claims, since information does not always go from one side of the patient’s brain to the other. For example, a patient could be shown a bicycle, and draw it by hand, yet verbally deny seeing it. And it turns out that sensory information is processed on the side that receives the information (Gazzaniga, 2012, pp56-7). Contradicting his remarks on page 54, Gazzaniga now says, “Surgery... induce[s] a double state of consciousness” (p59). There’s a kind of “bubbling up” to consciousness, and, as Dennett claimed (Dennett, 1984, pp78-80), a competition between states as to which achieves the focus of attention (Gazzaniga, 2012, p66).

[16] The reader should be cautious, however, as Gazzaniga argues that perhaps, just as traffic is an emergent property of cars, which can cause delays for cars stuck in the traffic, so perhaps the self, as an emergent property of the mind, can control the neurological level. See Gazzaniga, 2012, pp5-8.

e. The “actions reflect a self” response. The question, for Velleman, is whether choice is reducible to causal event descriptions (Velleman, 2000, p130). Velleman suggests that this might be achieved if the agent has mental states which are “functionally equivalent to a self” (Velleman, 2000, p137). So, Velleman wishes to isolate some mental system as the thing in charge, as providing the functional equivalent of a Cartesian self. The role of a “self”, he says, is to “adjudicate conflicts of motives”. An agent is responsible for what she does, because she selects a choice by throwing her own “weight” as a self behind the choice (Velleman, 2000, p139, pp142-3; Kane, 2002, p228).

As long as we can characterise whether a self is in an action or not, or whether the action reflects some desire, value or reason of the agent’s, we can decide whether the agent was responsible. As long as her actions reflect her motives, her self is in her actions and the self is responsible (Leon, 2002b).

Response: This may be circular or question-begging. We can’t talk of a “self” existing and being in an action if we’ve not yet defined what a self is, or if it’s the very thing we’re calling into question. What makes those the reasons or goals of a self, as opposed to the brain, the body, etc.?

Moreover, “functional equivalence” (Velleman, 2000, p137) of a self does not resurrect or reify the self; it merely replaces it with something functionally equivalent. At best, this is a behaviourist claim. If we do not want this idea to reduce to plain old behaviourism, it requires hypothesising an internal “self”, which then begs the question.

f. Problems for Dennett and termites. Ismael (2006) largely agrees with Dennett but wishes to refine his model. Ismael says there are degrees to a system, and gives three models: “System 1: Self-organizing system. ... System 2: Navigating system... System 3: Dennett’s model.” (Ismael, 2006, p351). System 1, a termite colony, is entirely mechanical and unsteered. In System 2, the internal narrative or consciousness is used to guide movement, and in System 3, consciousness is merely an epiphenomenal side-effect of the automated subsystems. If either System 1 or System 3 is true of our selves, we would lack moral responsibility and free-will.

Ismael seems to think that his model — System 2 — alone would give us the kind of control we want over our actions. The analogy he uses is of a ship which discovers new ground and maps it out as it explores (Ismael, 2006, p349). This “map” then represents our consciousness, and it is necessary for navigational purposes, e.g. remembering where food is, how the world is laid out, which creatures are enemies, etc., and this, he seems to believe, requires consciousness.

Ismael then argues that the Joycean Machine, our internal monologue, demonstrates our unity of consciousness and decision-making (Ismael, 2006, pp355-7). Even if a self is like a committee, he says, it can still speak with one voice and therefore make decisions. So even though a committee is not really a single, unified thing, it seems as if such a composite entity could still be responsible and have reasons. “I speak to the world with a single voice” (Ismael, 2006, p357).

Response: On Libet’s (1985, 1989, 2001) evidence, it seems that the mental is epiphenomenal. Jeannerod (1992, p212, p213), Soon et al. (2008), Bode et al. (2011), Obhi et al. (2004) and many others support this with further and more recent evidence. Whilst I am aware of philosophers’ objections to this work, e.g. from Mele, I believe that it at least shows that actions of a certain type are non-consciously initiated, and that is sufficient, in my view, to strip the conscious self of some causal efficacy, even if there is such a thing as a self.

Regarding the “map” or System 2 model above, if voluntary actions are produced by a self’s volitions, we need to ask whether those volitions are themselves voluntary. If they are, we get an infinite regress. If, however, we recognise that voluntary choices have to eventually be explained by something non-voluntary or mechanical, to avoid this regress, we obtain the result that voluntary acts are the result of non-voluntary events (Ryle, in Dennett, 1984, p78). It may very well be some non-conscious navigational brain states and memory systems that steer our ship. If the reader doubts this point, think of the case where you’re trying to recall something and are not able to, and yet, minutes, hours or days later, it suddenly pops into your head. Where did it come from? Not conscious choice. I venture that all our planning and decision-making is done nonconsciously. We only “think” about it and “decide” once we become aware of the results of the computations.

Lastly, when a committee pronounces a decision, the decision was not, as Ismael suggests, made by the pronouncing of the decision. The pronouncing of the decision is merely the marker of a decision that has already been taken by the committee members. Likewise, a particular choice, or the verbalisation of a choice, is the result of computations and a Joycean machine, not the cause. “Are decisions voluntary? Or are they things that happen to us? ... Our decision bubbles up to consciousness from we know not where. We do not witness it being made; we witness its arrival.” (Dennett, 1984, pp78-80. Italics are Dennett’s).

Why do we believe in a self?

We now conclude with some reasons why people might persist in believing in a Cartesian self.

1. Introspective evidence: It just seems to us that we have a unified consciousness, which “has” all our sense data and memories, and which originates our decisions. This leads us to believe that there is a being — a self or soul — which has these states.

2. Induction. Perhaps what is happening is induction.[17] We generalise about the properties of things we observe in the world. Consider “cat”. The reason we know what a cat looks like is that we’ve been shown multiple instances by our parents, say, and had the word “cat” uttered at the time. Eventually, we pair the sound “cat” with the object. Hence, when we hear “cat”, we expect to see a cat, and, vice versa, if we see a cat, we think of the word “cat”.

[17] I believe that this section is the key finding of the paper.



The same applies to our actions. Eventually, after repeatedly seeing that what we do is regular, we induce that it is “our” character or self that is in the action, as Velleman and Leon argue. Identity, then, is the inductive recognition of regular behaviour patterns. The reason we recognise that some redness, or an apple, or “this is me” is the same thing as another patch of red, another apple, or “me yesterday” is that this is how our memory systems work, as Hume argued. “I” or “myself” is a type, not a token, just like “apple” is a type.
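A toy sketch of this induction idea, and of the “lossy storage of tokens creates a type” point developed in the next paragraph, is given below. The features and numbers are invented purely for illustration; this is not a claim about how memory actually encodes anything.

```python
# Toy illustration (invented features and values): "types" as lossy averages of
# remembered tokens, with a new token recognised by the nearest stored type.
import math

memories = {
    "cat":   [[0.9, 0.2], [0.8, 0.3], [0.85, 0.25]],  # remembered cat sightings (size, stripiness)
    "tiger": [[2.5, 0.9], [2.4, 0.95]],                # remembered tiger sightings
}

# Lossy storage: each type keeps only the average of its tokens, not the tokens themselves.
types = {name: [sum(f) / len(f) for f in zip(*tokens)] for name, tokens in memories.items()}

def recognise(token):
    # A new token is classified as whichever stored type it most resembles.
    return min(types, key=lambda t: math.dist(types[t], token))

print(recognise([2.6, 0.85]))  # -> "tiger": we then expect that type's properties and behaviour
```

Each stored “type” keeps only an average of the remembered tokens; a new token is then recognised by whichever stored type it most resembles, and we expect it to share that type’s properties and behaviour.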

This might not only explain how we recognise ourselves, but how we recognise objects, properties, other persons, groups, sports teams, or even nations. We see a regular set of features associated with objects (e.g. tiger = stripy, orange, dangerous, large cat), and then, by the process of memorisation, store the information about the entity, and in so doing, we create its type (“tiger”). In other words, lossy[18] memory storage of particular token events or objects creates types. Then, when we next encounter a token of that type (“That thing reminds me of a tiger”), we assume it matches the memory of it (the type, “tiger”), and expect the same properties and behaviour (“it will attack”).[19] The future resembles the past. And indeed this is how AI is currently implemented.[20] So, when it comes to ourselves, we see yet again that we’re writing a philosophy paper, and say, “Oh, I do enjoy writing philosophy papers”, because we don’t know what else to predicate “enjoys writing philosophy papers” of, apart from this body, or its mind. “Dennett argues that ... selves ... simplify description and facilitate prediction of behavior with no real correlate inside the mind. ... The ‘I’ that is supposed to be choreographing the complicated ballet of bodily motion is a fiction” (Ismael, 2006, pp345-6, my italics).

3. Willingness to blame/The Intentional Stance. As Dennett observes, we operate from an intentional stance; that is, we perceive goal-directedness in every movement. This, he argues, is a side-effect of evolution: it is better to get more false-positives about things being alive and having malevolent intent towards us than it is to get false-negatives and assume that a moving object does not have malevolent intent towards us. Hence, when something moves, we perceive it as moving intentionally. So, inasmuch as we used to think that lightning was a divine, intentionally thrown weapon, we now cling to the same logic about human bodies. When something goes wrong, someone must be responsible, and we feel righteously outraged (Dennett, 1984, p154, p159; Dennett, 1984, in Double, p81). “The illusion of ... an ultimate center arises... from our taking a good idea, the idea of a self ... and pushing it too far under the pressure of preoccupations with our [moral] responsibility. ... loath to abandon our conviction that we really do things (for which we are

[18] A computer science term referring to compression algorithms that discard surplus or repetitive information.

[19] This model of types and tokens was originally stated by Aristotle in Book Alpha of the Metaphysics, Paragraph 1.

[20] Dr. B. Rosman, personal communication, March 2015.

responsible), we exploit the cognitive ... gaps ... by filling it with a rather magical ... entity, the unmoved mover, the active self” (Dennett, 1984, pp78-80. Italics are Dennett’s).

“Man is not truly one, but truly two. I say two, because the state of my own knowledge does not pass beyond that point. Others will follow, others will outstrip me on the same lines; and I hazard the guess that man will be ultimately known for a mere polity of multifarious, incongruous and independent denizens.” — R. L. Stevenson, Jekyll and Hyde


References and Bibliography

Allison, H. E. (1977). Locke’s Theory of Personal Identity: a Re-examination. In I. C. Tipton (Ed.), Locke on Human Understanding: Selected Essays. Oxford: Oxford University Press.
Bode, S., He, A. H., Soon, C. S., Trampel, R., Turner, R., & Haynes, J.-D. (2011). Tracking the Non-Conscious Generation of Free Decisions Using Ultra-High Field fMRI. PLOS ONE, 6(6), e21612. doi:10.1371/journal.pone.0021612
Crick, F. C., & Koch, C. (2005). What Is the Function of the Claustrum? Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1458), 1271-79.
Davidson, D. (1963). Actions, Reasons and Causes. The Journal of Philosophy, LX(23).
Dennett, D. C. (1984). Elbow Room. Oxford: Oxford.
Dennett, D. C. (1993). Consciousness Explained. London: Penguin.
Dennett, D. C., & Kinsbourne, M. (1992a). Consciousness and the observer: The where and when of consciousness in the brain. Behavioural and Brain Sciences, 15:2.
Dennett, D. C., & Kinsbourne, M. (1992b). Escape from the Cartesian Theater. Behavioural and Brain Sciences, 15:2.
Descartes, R. Discourse on Method. London: Penguin.
Double, R. (1991). The Nonreality of Free-will. Oxford: Oxford.
Eagleman, D. (2012). Incognito.
Gallup, G. G. Jr. (1970). Chimpanzees: Self recognition. Science, 167(3914), 86-87. doi:10.1126/science.167.3914.86
Gazzaniga, M. S. (2012). Who’s In Charge? Free Will and the Science of the Brain. London: Constable & Robinson.
Hill, C. S. (2014). Tim Bayne on the Unity of Consciousness. Analysis, 74:3.
Hume, D. (1740). A Treatise of Human Nature. London: Penguin.
Humphrey, N., & Dennett, D. C. (1989). Speaking for our selves: an assessment of multiple personality disorder. Raritan, 9:1, 68-98.
Ismael, J. (2006). Saving the Baby: Dennett on Autobiography, Agency, and the Self. Philosophical Psychology, 19(3), 345-360.
Jeannerod, M. (1992). The where in the brain determines the when in the mind. Behavioural and Brain Sciences, 15:2.
Kane, R. (1996). The Significance of Free-will. Oxford: Oxford.
Kane, R. (Ed.) (2002). Free-will. Oxford: Blackwell.
Koenigs, M., Young, L., Adolphs, R., Tranel, D., Cushman, F., Hauser, M., & Damasio, A. (2007). Damage to the Prefrontal Cortex Increases Utilitarian Moral Judgements. Nature, 446(7138), 908-11. doi:10.1038/nature05631
Koubeissi, M. Z., Bartolomei, F., Beltagy, A., & Picard, F. (2014). Electrical Stimulation of a Small Brain Area Reversibly Disrupts Consciousness. Epilepsy & Behavior, 37, 32-35. doi:10.1016/j.yebeh.2014.05.027
Leon, M. (2002). Responsible Believers. The Monist, 85(3).
Libet, B. (1982). Brain stimulation in the study of neuronal functions for conscious sensory experiences. Human Neurobiology, 1.
Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioural and Brain Sciences, 8:4. Also see his response to critics: Theory and evidence relating cerebral processes to conscious will. Ibid.
Libet, B. (2001). Consciousness, Free Action, and the Brain. Commentary on John Searle’s Article. Journal of Consciousness Studies, 8(8).
Locke, J. (1689). An Essay Concerning Human Understanding. Available online.
Marais, E. The Soul of the Ape and the Soul of the White Ant. London: Penguin.
Marks, C. E. (1981). Commissurotomy, Consciousness, and Unity of Mind. Cambridge, Mass.: MIT Press/Bradford Books.
McDermott, D. (1992). Little ‘me’. Behavioural and Brain Sciences, 15:2.
McKinnon, N. (2002). The Endurance/Perdurance Distinction. The Australasian Journal of Philosophy, 80:3, 288-306.
Northoff, G., & Bermpohl, F. (2004). Cortical midline structures and the self. Trends in Cognitive Sciences, 8(3), 102-107. doi:10.1016/j.tics.2004.01.004
Obhi, S. S., & Haggard, P. (2004). Free Will and Free Won’t: Motor activity in the brain precedes our awareness of the intention to move, so how is it that we perceive control? American Scientist, 92(4), 358-365.
Parfit, D. (1971). Personal Identity. Philosophical Review, 80.
Parfit, D. (1984). Reasons and Persons. Oxford: Clarendon.
Ryle, G. (1949). The Concept of Mind. London: Hutchinson.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-45.
Searle, J. R. (1999). Consciousness. http://socrates.berkeley.edu/~jsearle/html/articles/consciousness.html
Searle, J. R. (2001/2001a). Free Will as a Problem in Neurobiology. Philosophy, 76(04). doi:10.1017/S0031819101000535
Searle, J. R. (2001b). Further Reply to Libet. Journal of Consciousness Studies, 8(8).
Sider, T. (2001). Four-Dimensionalism. Oxford: Clarendon Press.
Soon, C. S., Brass, M., Heinze, H.-J., & Haynes, J.-D. (2008). Non-Conscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543-545.
Strawson, P. F. (1974). Freedom and Resentment and Other Essays. London: Methuen.
Velleman, J. D. (2000). The Possibility of Practical Reason. Oxford: Clarendon.

