Two problems with “self-deception”: No “self” and no “deception”

doi:10.1017/S0140525X10002116

Robert Kurzban
Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104.
kurzban@psych.upenn.edu
http://www.psych.upenn.edu/kurzban/

Abstract:
While the idea that being wrong can be strategically advantageous in the context of social strategy is sound, the idea that there is a “self” to be deceived might not be. The modular view of the mind finesses this difficulty and is useful – perhaps necessary – for discussing the phenomena currently grouped under the term “self-deception.”

I agree with a key argument in the target article, that the phenomena discussed under the rubric of “self-deception” are best understood as strategic (Kurzban, in press; Kurzban & Aktipis 2006; 2007). For a social species like humans, representations can play roles not just in guiding behavior, but also in manipulating others (Dawkins & Krebs 1978). If, for example, incorrect representations in my head (about, e.g., my own traits) will contribute to generating representations in your head that I am a valuable social partner, then selection can act to bring about mechanisms that generate such incorrect representations, even if these representations are not the best estimate of what is true (Churchland 1987).

This is an important idea because generating true representations has frequently been viewed as the key – indeed only – job of cognition (Fodor 2000; Pears 1985). True beliefs are obviously useful for guiding adaptive behavior, so claims that evolved computational mechanisms are designed to be anything other than as accurate as possible require a powerful argument (McKay & Dennett 2009). Indeed, for mechanisms built around individual decision-making problems in which nature alone determines one’s payoff, designs that maximize expected value should be expected, because the relentless calculus of decision theory punishes any other design (Kurzban & Christner, in press). However, when manipulation is possible and a false belief can influence others, these social benefits can offset the costs, if any, of false beliefs.

Despite my broad agreement with these arguments, I have deep worries about the implicit ontological commitments lurking behind constructions that animate the discussion in the target article, such as “deceiving the self,” “convincing the self,” or “telling the self.” Because I, among others, do not think there is a plausible referent for “the self” used in this way (Dennett 1981; Humphrey & Dennett 1998; Kurzban, in press; Kurzban & Aktipis 2007; Rorty 1985), my concern is that referring to the self is at best mistaken and at worst reifies a Cartesian dualist ontology. That is, when “the self” is being convinced, what, precisely, is doing the convincing and what, precisely, is being convinced? Talk about whatever it is that is being deceived (or “controlled,” for that matter; Wegner 2005) comes perilously close to dualism, with a homuncular “self” as the thing being deceived (Kurzban, in press).

So, the first task for self-deception researchers is to purge discussions of the “self” and discuss these issues without using this term. Modularity, the idea that the mind consists of a large number of functionally specialized mechanisms (Tooby & Cosmides 1992) that can be isolated from one another (Barrett 2005; Fodor 1983), does exactly this and grants indispensable clarity. For this reason, modularity ought to play a prominent role in any discussion of the phenomena grouped under the rubric of self-deception. Modularity allows a much more coherent way to talk about self-deception and positive illusions that finesses the ontological difficulty.

Consider the modular construal of two different types of self-deception. In the context of so-called “positive illusions” (Taylor 1989), suppose that representations contained in certain modules – but not others – “leak” into the social world. For such modules, the benefits of being correct – that is, having the most accurate possible representation of what is true – must be balanced against the benefits of persuasion (sect. 9). If representations that contain information about one’s traits and likely future will be consumed by others, then errors in the favorable direction might be advantageous, offsetting the costs of error. For this reason, such representations are best understood not as illusions but as cases in which some very specific subset of modules with important effects on the social world are designed to be strategically wrong – that is, to generate representations that are not the best estimate of what is true, but rather what is valuable in the context of social games, especially persuasion.

Next, consider cases in which two mutually inconsistent representations coexist within the same head. On the modular view, the presence of mutually inconsistent representations presents no difficulties as a result of informational encapsulation (Barrett & Kurzban 2006). If one modular system guides action, then the most accurate representations possible should be expected to be retained in such systems. If another modular system interacts with the social world, then representations that will be advantageous if consumed by others should be stored there. These representations might, of course, be about the very same thing but differ in their content. As Pinker (1997) put it, “the truth is useful, so it should be registered somewhere in the mind, walled off from the parts that interact with other people” (p. 421). One part of the mind is not “deceiving” another part; these modular systems are simply operating with a certain degree of autonomy.

The modular view also makes sense of another difficulty natural language introduces into discussions of self-deception: the folk concept of “belief” (e.g., Stich 1983). If it is true that two modular systems might have representations about the very same thing, and that these two representations might be inconsistent, then it makes no sense to talk about what an agent “really,” “genuinely,” or “sincerely” believes. Instead, the predicate “believe” attaches to modular systems rather than to people or other agents (Kurzban, in press). This has the added advantage of allowing us to do away with metaphorical terms like the “level” on which something is believed (sect. 7), substituting a discussion of which representations are present in which modules. Again, this undermines the folk understanding of what it means to “believe” something, but such a move – taking belief predicates away from agents as a whole – is required on the modular view and clarifies that beliefs apply to modules, that is, to parts of people’s minds, rather than to the person as a whole.

Generally, trying to understand self-deception with the conceptual tool of evolved function is an advance. Trying to understand self-deception without the conceptual tool of modularity is needlessly limiting.