Our behaviour as agents can have a multiplicity of goals. These might be pragmatic in nature (for example, fulfilling practical goals such as being well fed). They might be psychological in nature (for example, increasing wellbeing or reducing anxiety). They might also be epistemic in nature, and have to do with the attainment of true beliefs about ourselves or the world. Epistemologists have identified different notions of epistemic attainment, and different senses in which one can fail epistemically by being doxastically irrational.
Doxastic irrationality is the irrationality of beliefs. It manifests in different ways and comprises: (a) beliefs that do not cohere with each other or violate basic principles of formal logic or probability theory; (b) beliefs that are factually erroneous; (c) beliefs that are not well supported by, or responsive to, evidence; (d) beliefs that are poorly calibrated because people assign inaccurate degrees of confidence to them; (e) beliefs that are not well integrated in people’s behaviour. In this article we are interested in the effects of some biases leading to doxastic irrationality on individual agents and groups of agents.
So, doing well epistemically can mean fulfilling epistemic goals such as: (a) having beliefs that cohere with each other and conform to basic principles of formal logic or probability theory; (b) having beliefs that are factually accurate; (c) having beliefs that are well supported by, or responsive to, evidence; (d) having well-calibrated beliefs; (e) having beliefs that are well integrated in one’s behaviour.
Agents’ behaviour may be assessed negatively when it fails to satisfy the epistemic goals above, or other epistemic goals such as attaining beliefs that encourage the exchange of information with other agents, or developing an intellectual virtue such as curiosity or honesty. But there are other reasons why a behaviour may be negatively assessed, for instance by failing to fulfil other types of goals (pragmatic or psychological). Sometimes, an instance of behaviour can succeed at fulfilling some goals and fail to fulfil others. Furthermore, new costs and benefits may emerge when the behaviour in question is assessed in the context of groups to which the agent belongs, rather than simply at the individual level.
In this paper we argue that, when assessing failures of doxastic rationality, we need an analysis that is sensitive to the multiplicity of goals of human behaviour at both the individual and the group level. [1] We argue that such an analysis can reveal some underexplored costs, as well as benefits, that can be pragmatic, psychological, and epistemic in nature. Examples of biases that may lead to doxastic irrationality include the overconfidence bias, biases about one’s own and other social groups, and optimistically biased beliefs about the self. Here we focus on these families of biases because they are, arguably, among those that have attracted the greatest attention from philosophers.
In section 1, we consider the literature on overconfidence and the different ways in which overconfidence can represent failures of doxastic rationality. We then consider whether overconfidence may have any positive effects on individuals and society. We argue that overconfidence may indeed bring benefits to overconfident agents, but remain more sceptical as to whether overconfidence brings any benefit to the social group the agent belongs to and to society at large.
In section 2, we show that the intergroup bias (the tendency to favour members of a group one belongs to) often leads to doxastic irrationality and to making irrational decisions based on false or ill-grounded beliefs. Next, we ask whether the bias has any positive effects on individuals and society, and find that it may have some psychological benefits for individuals and even groups (in terms of enhancing personal or collective self-esteem), but that the latter benefits tend to apply more to groups that are already socially dominant. However, for non-dominant groups, fostering negative outgroup perceptions of dominant groups can enhance the former’s recognition of their own oppression.
In section 3, we show that the optimism bias (the tendency to make rosier predictions about one’s own future than is warranted by the evidence or the tendency to believe that one’s future will be rosier than that of other people) is an instance of doxastic irrationality but also has well-documented positive effects on people’s wellbeing and mental health. We then ask whether the optimism bias also impacts positively on the cohesion of social groups and society at large. As the optimistic agent is described as more productive, resilient and caring, it would seem that optimistically biased beliefs are a blessing not just for the agent, but also for those the agent interacts with. However, this is only part of the story, and the effects of people’s optimism on their contribution to the community cannot be easily predicted.
An important lesson seems to emerge from our discussion of these three biases: any alleged benefits at the individual level represent only part of the story when trying to address the effects of doxastic irrationality. It is possible that benefits at the individual level also imply costs at the group level. This means that as long as discussions over costs and benefits of doxastic irrationality are committed to methodological individualism, their conclusions should be adequately qualified. Hence, we will appeal to cognitive scientists to further investigate the potential benefits of reasoning strategies leading to doxastic irrationality, considering not only their effects on individual agents, but also on groups and society at large.
1. The overconfidence bias
We will start our analysis of the costs and benefits of doxastic irrationality by referring to the literature on overconfidence, which is one of the flagship effects, or rather families of effects, discovered and discussed within the heuristics and biases research tradition (Lichtenstein et al., 1982) [2]. As it turns out, the term “overconfidence” has been deployed to refer to several phenomena, from hubris to “unskilled and unaware of it” effects (Kruger and Dunning, 1999). Recently, three definitions of overconfidence have been distinguished in the psychological literature: (i) overestimation, (ii) overplacement, and (iii) overprecision (Moore and Healy, 2008).
Overestimation is meant to refer to those cases in which a subject believes that they are better than they actually are. It thus focuses on the comparison between a person’s actual performance and their beliefs about their performance. For example, you might want to compare the number of right answers offered by a person on a particular knowledge test with the same person’s estimate of how many questions they think they answered correctly. Overplacement refers instead to those cases in which agents hold the exaggerated belief that they are better than others. It thus involves a comparison between the actual performance of a person in relation to others and their beliefs about their performance in relation to others. Finally, overprecision refers to cases in which an individual places an inflated sense of confidence in the accuracy of their beliefs. Notably, assessing the extent to which these dimensions represent different psychological constructs is a topic of growing interest (e.g., Larrick et al., 2007).
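To make these distinctions concrete, here is a minimal sketch in Python of how the three constructs might be scored for a single participant on a ten-item knowledge quiz; all variable names and numbers are invented for illustration and are not drawn from the studies cited.

```python
# Hypothetical scores for one participant on a 10-item knowledge quiz.
actual_correct = 6          # items actually answered correctly
believed_correct = 8        # items the participant believes they answered correctly
actual_percentile = 0.40    # they in fact outscored 40% of the comparison group
believed_percentile = 0.80  # they believe they outscored 80% of the group
mean_confidence = 0.95      # average confidence reported for individual answers
hit_rate = actual_correct / 10

# Overestimation: believing one did better than one actually did.
overestimation = believed_correct - actual_correct

# Overplacement: believing one did better relative to others than one actually did.
overplacement = believed_percentile - actual_percentile

# Overprecision: being more confident in one's answers than their accuracy warrants.
overprecision = mean_confidence - hit_rate

print(f"overestimation: {overestimation} items")
print(f"overplacement:  {overplacement:+.2f}")
print(f"overprecision:  {overprecision:+.2f}")
```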
1.1. The overconfidence bias and doxastic irrationality
An important reason to appreciate the differences between the three aspects of overconfidence is that they capture distinct failures of doxastic rationality. For instance, whilst cases of overplacement represent instances of beliefs that are factually erroneous, cases of overprecision seem to be better framed in terms of beliefs in which individuals are excessively confident. Consider research on overplacement: participants who fall prey to the “better-than-average” effect (Taylor, 1989) and believe that they drive better than average when this is not true display doxastic irrationality, in that their beliefs are factually erroneous. But consider now research on overprecision, which shows that participants often have poorly calibrated beliefs. Confidence in general knowledge is typically studied with questions of the following kind:
Which city has more inhabitants?
(a) Hyderabad, (b) Islamabad
How confident are you that your answer is correct?
Participants are then asked to choose what they believe to be the correct answer and to rate their confidence that the answer they offered is correct. Research has suggested that people are prone to place too much confidence in the accuracy of their beliefs (for an overview, see Lichtenstein et al., 1982).
Interestingly, whilst probabilistic beliefs are typically assessed using so-called coherence criteria such as the conjunction rule of probability (Polonioli, 2015; Arkes et al., 2016), in research on overprecision a correspondence-based method is used to assess the inaccuracy of probabilistic judgments (Newell et al., 2007, p. 81). More precisely, the calibration of subjective probabilities is measured by comparing subjective probability judgments with the corresponding objective probabilities. Experimenters count how many answers in each of the confidence categories are factually correct. For example, in all the cases where participants said, “I am 100 % confident that my answer is correct,” the relative frequency of correct answers is typically only about 80 %; in all the cases where participants said, “I am 90 % confident that my answer is correct,” the relative frequency of correct answers is only about 75 % (Lichtenstein et al., 1982).
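As an illustration of this correspondence-based method, the following minimal sketch in Python bins a set of made-up responses by stated confidence and computes, for each bin, the relative frequency of answers that were actually correct; overprecision appears wherever stated confidence exceeds that frequency. The data are purely illustrative.

```python
from collections import defaultdict

# (stated confidence, whether the answer was correct) for a set of quiz items;
# all values here are invented for illustration, not real experimental data.
responses = [
    (1.0, True), (1.0, True), (1.0, False), (1.0, True), (1.0, False),
    (0.9, True), (0.9, False), (0.9, True), (0.9, False),
    (0.7, True), (0.7, False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for confidence, correct in responses:
    totals[confidence] += 1
    hits[confidence] += int(correct)

# Overprecision shows up when stated confidence exceeds the observed hit rate.
for confidence in sorted(totals, reverse=True):
    accuracy = hits[confidence] / totals[confidence]
    print(f"stated {confidence:.0%} confident -> {accuracy:.0%} actually correct")
```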
Whilst the empirical literature on overconfidence has been taken clearly to indicate that people exhibit excessive confidence in their beliefs, some researchers have instead argued that these effects are not real after all. Consider, for instance, research on overprecision. Scholars within the framework of ecological rationality have argued that participants may be well calibrated to the probability structure of their environment. Gigerenzer (1991) argued, for instance, that many of the tests of confidence in general knowledge used in the empirical literature contain a very large number of difficult questions. In other words, although people are likely to be well calibrated when considering their normal performance on typical knowledge questions, overprecision can be documented when the experimenter asks participants to take tests that are especially difficult. On this view, if items are instead randomly selected from a reference class of objects defined by a natural environment, overprecision should diminish. Whilst a few studies support this hypothesis (e.g., Gigerenzer et al., 1991), there is also important evidence that at least a significant degree of overconfidence remains even when task difficulty is controlled for (Klayman et al., 2006).
Importantly, findings on overconfidence are still taken in the literature not only to reveal something important about our psychology, but also to account for some negative outcomes experienced by people. For instance, Griffin and Tversky write:
Overconfidence in the diagnosis of a patient, the outcome of a trial, or the projected interest rate could lead to inappropriate medical treatment, bad legal advice, and regrettable financial investments. It can be argued that people’s willingness to engage in military, legal, and other costly battles would be reduced if they had a more realistic assessment of their chances of success. We doubt that the benefits of overconfidence outweigh its costs (1992, p. 432).
Others, like Dominic Johnson (2009), have even ventured to suggest that international conflicts and wars should often be accounted for by appealing to the effect of overconfidence. But does overconfidence actually lead to negative life outcomes? As we will see in the next subsections, no simplistic and unqualified answers should be given. The answer might depend on whether we focus on the agent or rather on the group she is part of.
1.2. Effects of the overconfidence bias on the individual
Whilst overconfidence has been widely considered an impediment to individual success (Dunning et al., 2004), here we wish to address the possible benefits of the particular kind of doxastic irrationality that is exemplified by cases of overconfidence. In particular, it might be useful to consider social contexts and dynamics. There is a growing and important body of literature on various possible benefits of imperfect thoughts and cognitions (Bortolotti, 2014, 2015; Gigerenzer and Selten, 2001; Polonioli, 2014; Sullivan-Bissett, 2015). Moreover, recent work suggests that social contexts deserve special attention when considering whether cognitions that are doxastically irrational can lead to successful behaviour and hence be adaptively rational (e.g., Hertwig and Hoffrage, 2013).
But what exactly constitutes success in social worlds? Whilst in natural contexts accuracy, speed, and frugality have been presented as the key goals of cognizers, in social contexts relevant goals may encompass fairness (e.g., Fehr and Schmidt, 1999), accountability (Tetlock, 1992), unpredictability (e.g., protean behaviour: Miller, 1997), honour, pride, and face-saving (e.g., Nisbett and Cohen, 1996), equality (e.g., Messick, 1993), and self-interest. The question, at this point, is whether overconfidence could bring about any benefits and help agents achieve some of their goals.
Recent literature suggests that the effects of overconfidence might be more nuanced than previously hypothesized and that overconfidence can in fact have benefits as well as costs. According to von Hippel and Trivers (2011), overconfidence is a form of self-deception that serves the goal of interpersonal deception by convincing others that one’s enhanced self-views are not overstated, and a growing body of work is currently addressing the interpersonal consequences of overconfidence, such as its impact on peer-rated competence or status. Moreover, Vallinder and Olsson (2013) suggest that overconfidence might be beneficial from a purely epistemic point of view, protecting the inquirer from a kind of self-defeating doubt that may arise from observing a series of bad results.
Anderson et al. (2012) recently focused on overplacement and examined whether the desire for status leads to higher levels of overconfidence. Their argument was that overconfidence might help individuals attain higher social status by making them appear more competent than they are in the eyes of others, even when they in fact lack competence. Individuals’ competence is often hidden from others, and skills have to be appraised on the basis of superficial cues, such as a person’s appearance or style of speaking. On this view, high levels of overconfidence should lead the individual to exhibit more competence cues, in the same way as high degrees of justified confidence do; and, as it turns out, it is often difficult to differentiate justified from unjustified confidence. Anderson and colleagues (2012) did test some of these hypotheses, reporting that partners indeed perceived overconfident individuals as more competent. Such dynamics seem likely to lead to a cascade of psychological and pragmatic benefits for overconfident people.
But there are other findings pointing to possible benefits of overconfidence (Ronay et al., 2017). In particular, Murphy and colleagues (2015) explored other dimensions of the possible adaptiveness of overconfidence by focusing on overprecision in the context of partner choice. The idea behind their study was that good relationships are key to happiness (Zimmermann and Easterlin, 2006), and the authors sought to provide evidence for the role of overconfidence in mate choice. Here, again, the authors reasoned that traits such as intelligence, kindness, and competence are not directly visible and thus must be inferred from relevant behaviours. Given that people have access to more information about themselves than anyone else does, their assessment of their own qualities, expressed through a high level of confidence, may be a useful indicator for judging their quality as a potential partner. In brief, displaying great confidence can be beneficial, as it can signal the presence of a number of desirable traits. In line with such considerations, Murphy and colleagues (2015) indeed found that overconfident individuals were perceived as more desirable, and that individuals who project confidence (but not arrogance) benefit strongly in mate attraction.
1.3. Effects of the overconfidence bias on groups and society
In the previous section we reviewed some evidence suggesting that overconfidence can bring some benefits to the individual. Another interesting question is whether overconfidence could also benefit groups or society at large. Here, however, we wish to express some scepticism, mainly for the following two reasons. First, it seems that the possible benefits for overconfident people are greater when others in society, and especially those against whom the overconfident subjects are competing, do not display overconfidence. For instance, in various contexts such as job selection, mate choice, or competition for leadership, overconfident individuals might in fact benefit from others being under-confident or simply not displaying overconfidence. Second, and relatedly, the social contexts discussed above are competitive, and the dynamics described seem to involve some level of deception. More precisely, overconfidence would seem to bring benefits to the agent by deceiving the individuals with whom the overconfident agent interacts.
Environments and contexts that are characterized by high levels of competition and deception may be classed as “hostile” (Sterelny, 2003). To be sure, not all social dynamics should be characterized this way. Still, at least in the cases discussed above, the benefit of the overconfident agent seems to emerge from some sort of deception, and in those cases the person who is judging skills or knowledge based on the cues provided by overconfident people is being deceived. The people making the assessment are, as a consequence, worse off, not only epistemically but also pragmatically, as they acquire a set of false beliefs or impressions about the person they are interacting with.
2. The intergroup bias
Not only do people generally overestimate their own skills and knowledge, but they also have a systematic tendency to favour members of groups to which they belong over members of groups to which they do not belong (Tajfel and Wilkes, 1963; Crocker et al., 1987; Hewstone et al., 2002; Martiny-Huenger et al., 2014). This is known as the intergroup bias and it has cognitive manifestations (such as noticing particular features of some groups more readily than others); evaluative manifestations (such as the tendency to judge members of one’s own group more favourably than members of out-groups); and behavioural manifestations (such as rewarding members of one’s own group at a higher rate than out-group members). The intergroup bias can occur in relatively arbitrary groupings (such as sharing first names or birthdays, as in Burger et al., 2004) but research suggests that individuals are more susceptible when the in-group is central to their identity (Oakes et al., 1994; Branscombe and Wann, 1991), rendering instances of intergroup bias that track social identity of particular interest.
Noticing and evaluating groups is not sufficient for bias; bias occurs when the norms governing some activity are violated. For instance, at least some aspects of cognition aim at accuracy, so, to the extent that the cognitions just described constitute distorted representations, they violate this norm. They can also lead to further distortions downstream in cognition (Puddifoot, 2017). Whilst intergroup biases violate some conditions for success and good functioning, they may also contribute to achieving success in a variety of other domains, by enhancing an individual’s sense of collective self-esteem (section 2.2) and by facilitating more accurate representation of the discrimination faced by marginalized groups (section 2.3); these benefits should be considered alongside the costs of such biases in efforts to reduce them.
2.1. The intergroup bias and doxastic irrationality
The intergroup bias can give rise to doxastic irrationality because perceiving people or objects as members of a group can result in cognition that is not supported by evidence. For instance, perceiving target objects as members of groups leads individuals both (i) to believe that there is greater within-group similarity than there really is, and (ii) to believe that there is greater between-group difference than there really is.
Cognition of greater within-group similarity and between-group difference can occur at the perceptual level. For instance, subjects who are shown a set of short lines labelled ‘A’ and a set of long lines labelled ‘B’ perceive greater similarity in length between same-labelled lines and greater dissimilarity between differently-labelled lines than subjects who see the same lines without labels (Tajfel and Wilkes, 1963). Such effects also occur when evaluating beliefs and opinions (Hensley and Duval, 1976).
Perceiving people as members of other social groups alters cognition, producing in particular perceptions of out-group homogeneity along stereotypical traits. For example, people consider members of ethnic groups other than their own as highly similar to each other when considering stereotypical attributes of the groups in question (Tajfel et al., 1964; Tajfel, 1981). Settler Canadians perceive members of the indigenous population as having more group-stereotypical characteristics than they perceive their own group to have (Osgood et al., 1975). White subjects are worse at distinguishing between pictures of black people than between pictures of white people (Malpass and Kravitz, 1969).
In summary, intergroup biases lead people to over- or under-represent important social details. In these cases, cognition does not reflect reality and the available evidence, and as such, these biases can be considered doxastically irrational.
2.2. Effects of the intergroup bias on the individual
Intergroup biases have a number of effects on the individual. Beyond leading individuals to misrepresent features of social groups, when intergroup biases are activated they can mediate decision-making, resulting in behaviour that does not cohere with the individual’s other beliefs and commitments, or behaviour that might seem unreasonable to an observer. For instance, in an experiment in which participants aim to hire the best candidate for the role of police chief, choosing objectively on the basis of the candidates’ qualifications, men rate particular qualifications as more relevant to the job when they are possessed by male candidates than when they are possessed by female candidates (Uhlmann and Cohen, 2005). This leads to hiring decisions which favour applicants on the basis of their gender. If an individual’s aim is to hire the best qualified candidate, but they are influenced by factors other than qualifications, then they are not acting in accordance with their aims, demonstrating instrumental irrationality.
Other studies reveal that individuals allocate higher financial rewards to those whom they are led to believe share their recently formed aesthetic preferences than to those who seem not to (Tajfel, 1970), and more readily help those with whom they are incidentally similar, such as through sharing a birthday or first name (Burger et al., 2004). People also allocate rewards to members of their ethnic group at a higher rate than to out-group members, even when this transaction is costly to them (Whitt and Wilson, 2007). In these studies people may act for what we might think are not particularly good reasons (sharing a birthday with someone, for example, hardly seems a good reason to reward them over someone else) and so may be practically irrational as well as doxastically so.
Although intergroup biases lead to irrationality, a number of theorists argue that there are psychological benefits to developing an overly positive conception of one’s social identity which facilitate goal pursuit and good functioning in some domains. For instance, intergroup bias has been shown to enhance an individual’s feelings of self-esteem and self-worth, psychologically desirable resources associated with good functioning (Tajfel and Turner, 1986; Jelić, 2009). One way to enhance one’s conception of one’s identity is to over-represent the positive characteristics of the group(s) to which one belongs ‒ that is, to sometimes see positive features of the group that aren’t really there, or to overestimate the extent to which these features obtain. Another way to enhance identity is to over-represent negative characteristics of groups with which one does not identify ‒ that is, to sometimes see negative features that aren’t really there, or to overestimate the extent to which these features obtain, thus enhancing the concept of one’s in-group through contrast with the negatively perceived outgroup.
It is a prominent finding that individuals high in intergroup bias also tend to be high in self-esteem (Crocker et al., 1987). It is proposed that high self-esteem individuals are able to maintain a positive conception of themselves at least in part through expressing intergroup bias, because doing so bolsters the individual’s self-concept (Aberson et al., 2000). This may return certain psychological benefits. For instance, fans who strongly identify with a sports team (which is associated with greater intergroup bias) report increased frequency of positive emotions and fewer feelings of depression, alienation and other negative emotions than those who do not (Branscombe and Wann, 1991).
Others point out the importance of distinguishing between the different components of self-esteem, in particular distinguishing personal self-esteem, which has to do with our individual characteristics, from collective self-esteem, which has to do with how we see ourselves as a member of a specific social group (Luhtanen and Crocker, 1992). Research demonstrates that an individual’s sense of collective self-esteem is enhanced through intergroup bias (Jelić, 2009) and that people who are high in collective self-esteem react to threats to this form of self-esteem by favouring their in-group and derogating out-groups (Luhtanen and Crocker, 1992).
Although intergroup biases may have some negative epistemic and practical outcomes for the individual, as discussed above, they also bring some indirect psychological benefits by raising self-esteem and enhancing psychological functioning. Collective forms of self-esteem are associated with mental health and happiness (Crocker et al., 1994; Diener and Diener, 1995; Bettencourt and Dorr, 1997; Simsek, 2013). In particular, Simsek (2013) argues that personal self-esteem mediates the effects of collective self-esteem on well-being and partially on happiness. Collective self-esteem is also negatively correlated with depression in both white and African-American students (Luhtanen et al., 1991), and there is some evidence of a correlation between collective self-esteem and well-being for white and Asian participants, although not for African-American participants (Crocker et al., 1994). Individuals may also elect to reap the benefits of collective self-esteem through a phenomenon known as ‘basking in reflected glory’, such as wearing one’s team kit when one’s team has done well (Cialdini et al., 1976). So, whilst individuals exhibiting the intergroup bias may manifest irrationality, they may also reap significant psychological benefits.
2.3. Effects of the intergroup bias on groups and society
The intergroup bias has a number of obvious societal costs, thwarting some of the social goals discussed in section 1.2. Disfavouring people and denying them access to certain social goods on the basis of their social identity clearly violates fairness (Fehr and Schmidt, 1999) and equality (Messick, 1993). Because intergroup biases raise the possibility that people are treated unfairly in various intergroup exchanges, there are important consequences for the effectiveness of societal institutions that purport to operate on the basis of fairness, such as the justice system (Stammers and Bunn, 2015). Intergroup biases can also damage social cohesion and cooperation (Taylor et al., 1978, p. 799) and exacerbate intergroup conflict (Cohen and Insko, 2008), which can be detrimental to a well-functioning society. Further, because the intergroup bias tends to be greater in already dominant social groups with greater access to societal resources, it may exacerbate the unfair distribution of resources to the detriment of non-dominant groups (Sidanius et al., 2000; Pratto et al., 1993; Sidanius et al., 1991). So, one might think that in order to increase the accurate representation of groups and their tendencies, and to decrease unfairness and inequality across society, we ought to act to reduce all instances of intergroup bias. But further research reveals that the picture is more nuanced.
Various findings demonstrate that intergroup bias against a dominant group mediates a non-dominant group’s awareness of discrimination and disadvantage (Rodriguez and Gurin, 1990; Ellison and Powers, 1994; Wright and Lubensky, 2008; Saguy et al., 2009; Dixon et al., 2010). To the extent that attitudes negatively biased against a dominant group enable a marginalized group to recognise how discrimination acts against their interests, intergroup biases are advantageous. For instance, Saguy and colleagues (2009) demonstrate that a disadvantaged group that fosters negative perceptions of an advantaged group predicts with greater accuracy how the advantaged group will distribute a reward to them than a disadvantaged group that fosters positive out-group perceptions. In this case, the groups are arbitrarily assigned to advantaged and disadvantaged positions, but other studies demonstrate similar effects in real communities.
Black South Africans, for example, have suffered a long history of discrimination that still structures present society (Dixon et al., 2010, p. 404). Dixon and colleagues found that the stronger black South Africans’ negative attitudes towards white South Africans, the greater their perceptions of group discrimination. Through awareness of this discrimination, black South Africans fulfil a number of the natural and social goals discussed in section 1.2. If discrimination is occurring, then through awareness of it one meets the goal of accurately representing one’s situation. Further, such awareness may promote psychological wellbeing by vindicating the distress caused by discrimination, confirming that its basis is real. Finally, awareness of discrimination enables marginalized groups to take action to reduce it. In light of these results, and of other findings suggesting that more equal societies as a whole enjoy a variety of mental and physical health benefits (Wilkinson and Pickett, 2010), it is not straightforwardly true that reducing all instances of the intergroup bias leads to a fairer, healthier society. So, the findings that some instances of intergroup bias facilitate more accurate representations of oppression, enabling collective action towards a fairer society, should be taken into account in measures to reduce intergroup biases.
3. The optimism bias
We saw that people overestimate their own skills and knowledge and favour members of the groups they belong to. They are also optimistically biased when they evaluate behaviours, contributions, and outcomes in terms that are particularly favourable to themselves. Shelley Taylor (1989) discusses different types of so-called “positive illusions.” One has the illusion of control when one overestimates one’s capacity to control independent, external events (e.g., Langer and Roth, 1975). One experiences the better-than-average effect, or has the illusion of superiority, when one regards oneself as above average and overrates one’s performance relative to others in a variety of domains (e.g., Brown, 2012; Wolpe et al., 2014). The optimism bias is a tendency to predict that one’s future will be largely positive and will yield progress, and that negative events will not be part of one’s life (e.g., Lench and Bench, 2012).
The three classic positive illusions, and the optimism bias in particular, lead to doxastic irrationality because they give rise to beliefs that are biased, not responsive to evidence, and often also factually erroneous. But it has been argued that positive illusions have beneficial effects on the individual’s functioning, contributing in many contexts to wellbeing, mental health, and goal attainment. What we ask here is whether there are any reasons to believe that optimistically biased beliefs also have benefits for social groups and society at large.
3.1. The optimism bias and doxastic irrationality
Optimistically biased beliefs are an example of doxastic irrationality, due to the ways in which they are adopted and maintained (Jefferson et al., 2017). For instance, in the optimism bias, a person makes a prediction about whether they will get a divorce without taking into account some of the statistical evidence at their disposal, such as how frequent divorce is among people who live in the same environment. Typically, the person prioritises evidence for positive outcomes, such as having a long lasting and satisfying relationship, and dismisses or neglects evidence for negative outcomes, such as separating from their life partner (Jefferson et al., 2017).
One mechanism that allows people to arrive at and maintain unrealistically positive beliefs in the face of disconfirming evidence is systematically distorted belief updating, where people take into account desirable news to a greater extent than undesirable news. In the influential paradigm introduced by Sharot (Sharot, 2011; Sharot et al., 2011), people are asked first to provide risk estimates for negative future events. Next, they are confronted with base rates for these events. At a later stage, they are asked for another risk estimate for the same negative event. It has been shown that when people update their initial risk estimates, they tend to incorporate desirable base rate information (i.e., information suggesting that risks are lower than expected) to a greater extent than undesirable base rate information (i.e., information implying that risks are higher than expected).
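To illustrate the asymmetry just described, here is a minimal sketch in Python with invented numbers: for each trial it records how far the second risk estimate moves from the first, and then compares the average movement after desirable base-rate news with the average movement after undesirable news. The trial values are hypothetical and are not taken from the cited studies.

```python
# Each trial: (first risk estimate %, base rate %, second risk estimate %).
# All values are invented for illustration.
trials = [
    (40, 20, 25),   # desirable news (risk lower than expected): large revision
    (10, 30, 13),   # undesirable news (risk higher than expected): small revision
    (50, 35, 38),   # desirable news
    (15, 40, 18),   # undesirable news
]

desirable_updates, undesirable_updates = [], []
for first, base_rate, second in trials:
    update = abs(second - first)       # how far the estimate moved
    if base_rate < first:              # desirable: risk lower than initially feared
        desirable_updates.append(update)
    else:                              # undesirable: risk higher than initially feared
        undesirable_updates.append(update)

# Optimistically biased updating shows up as a larger mean revision after
# desirable news than after undesirable news.
print("mean revision after desirable news:  ",
      sum(desirable_updates) / len(desirable_updates))
print("mean revision after undesirable news:",
      sum(undesirable_updates) / len(undesirable_updates))
```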
Other positive illusions and self-enhancing beliefs are adopted and maintained in ways that signal either areas of ignorance or doxastic irrationality. When people overestimate their own skills, they might be incompetent, that is, they may not know how to measure their skills against the appropriate standards, and as a result adopt an overly optimistic belief about their skills (Kruger and Dunning, 1999). The person thus comes to believe that they do better in a particular domain than is warranted by the evidence.
Alternatively, the person may have the relevant competence, and thus know what it takes to be especially talented or skilful in a given domain, but neglect information about the comparison between their own performance and that of others, focusing primarily on more salient evidence about themselves. This may result in the belief that they are above average in a specific domain when that is not the case (Sedikides and Gregg, 2008).
Another case is mnemic neglect, where the person recalls praise more readily than blame, and success more readily than failure, when looking for evidence of their skills in autobiographical memory. Even when negative feedback is obtained and registered, a person may still fail to learn from it, and thus not realise that their performance could be improved (Hepper and Sedikides, 2012). This is often due to the fact that the available feedback is either incomplete or dishonest (people do not always tell the truth when they offer feedback on someone’s performance, for fear of hurting their feelings).
We saw some of the ways in which optimistically biased beliefs can be formed. Once we have adopted them, do we ever get rid of them? Optimistically biased beliefs are resistant to counterevidence, but Taylor (1989) argues that they are not fixed and can be “adjusted” depending on the context. People tend to be more optimistic about events that they know they can partially control, and less optimistic just before receiving feedback about an outcome (Sweeny, Carroll, and Shepperd, 2006). While people may give up their optimistic predictions in order to brace for bad news, they also tend to avoid situations that would cause disappointment, that is, situations in which their optimistic beliefs could be easily disproved (Armor and Taylor, 1998; Neff and Geers, 2013). Although this is evidence of some flexibility in positive illusions, it does not support the claim that people are responsive to evidence in a doxastically rational way when it comes to their optimistically biased beliefs. Rather, optimism seems to be strategically enhanced when there are fewer opportunities for it to be disconfirmed.
3.2. Effects of the optimism bias on the individual
Some have argued that positive illusions are the most promising case of adaptive misbeliefs (McKay and Dennett, 2009, p. 507); they are designed to be inaccurate because their inaccuracy carries some benefits. Positive illusions are thought to be biologically adaptive for enhancing survival and reproduction and for promoting better physical and mental health. They are also described as psychologically adaptive because they enhance wellbeing and have been correlated with mastery, motivation, productivity, resilience, and even altruistic and caring behaviour (Taylor, 1989).
Positive beliefs about the self, the world, and the future are associated with happiness, sociability, motivation, and heightened activity (Taylor, 1989, p. 203).
Self-enhancement is positively related to psychological resources (e.g., extraversion, positive reframing, optimism, mastery, planning, active coping), social resources (e.g., positive relations, family support), and psychological adjustment (e.g., purpose in life, personal growth, subjective well-being); on the other hand, self-enhancement is negatively related to psychological distress (e.g., anxiety, depression, neuroticism, hostility) (Alicke and Sedikides, 2009).
Generally, people with optimistically biased beliefs are better adjusted, feel better about themselves, are more sociable, and have a more resilient attitude towards stressful events than people without such beliefs (Campbell, Rudich, and Sedikides, 2002; Taylor et al., 2003). However, the recent literature has shown that in some contexts positive illusions can have harmful as well as beneficial consequences. For instance, optimistically biased predictions about one’s future can increase self-confidence and become self-fulfilling in some circumstances, but in other circumstances they lead one to ignore potential obstacles and take unnecessary risks. By believing that the future will be rosy, people can become complacent and be unprepared when failure ensues (Shepperd et al., 2013). This is due to optimistically biased beliefs fostering feelings of invulnerability or leading to disappointment when expectations are not met.
In particular, conflicting evidence has been gathered in the areas of romantic relationships and health promotion (for a short review see Bortolotti and Antrobus, 2015). A general tendency to expect positive outcomes (dispositional optimism), characterised as a personality trait, seems to be beneficial across the board, and especially at critical times. However, positive illusions as such can be linked to agents either coping well with existing threats and difficult situations (e.g., Yan and Bonanno, 2015; Murray et al., 1996), or feeling disappointed and disengaging from their goals as a result (e.g., Robins and Beer, 2001; Swann et al., 1994). The key is whether the optimistically biased belief supports an agent’s constructive response to the inevitable setbacks. If the motivation to pursue the desired goal is sustained, then optimistically biased beliefs have a positive impact on goal pursuit and attainment. Otherwise, they may lead the agent to stop seeing the goal as desirable and abandon its pursuit.
3.3. Effects of the optimism bias on groups and society
One often neglected question in the empirical literature on the effects of the optimism bias is whether optimistically biased beliefs benefit the individual agent only (when they benefit anyone at all) or whether they also have positive effects on social cohesion and cooperation. The question is unlikely to have a simple answer, and here we can only scratch the surface using the limited data available.
Let us consider first what sort of agent an unrealistically optimistic person can be. As we observed earlier, when positive illusions support the agent’s motivation to continue pursuing goals after challenges and setbacks, they are positively correlated with goal attainment, as they indirectly boost success. If it is true, then, that the optimistic agent is more resilient and productive, their contributions to society may be greater than the contributions of people whose motivation is not equally supported.
Let us consider next how the unrealistically optimistic person is likely to think of, and behave towards, the people she loves. Not surprisingly, people are found to have better-than-average beliefs not only about themselves, but also about their children and their romantic partners (Gagné and Lydon, 2004). So, when Rita believes that she is smarter and more attractive than average, she also believes that her partner and children are better than average in these domains, and this is likely to strengthen her commitment to her family. Murray and colleagues (1996) found that idealising one’s life partner contributes to relationship stability and satisfaction, because it leads to more constructive responses to the inevitable crises that a couple faces. Those who believe that their partners have ideal qualities strive to make the relationship work and are happy to reinterpret positively what might be experienced as a partner’s failing. Similarly, children receive better care and support from their parents, even at great personal sacrifice to the parents, if the parents are happy with them and believe that they will do well (Wenger and Fowers, 2008).
What about the effects of optimism on social groups beyond the close family group? One way to address this question would be to ask to what extent unrealistic optimists develop moral virtues that make them more likely to cooperate with others and assist people in their community. There are competing considerations relevant to how optimism affects the capacity people have to interact with, and care for, others. On the one hand, having a positive concept of herself as a person who is morally better than average also means that in at least some contexts Rita will behave in such a way as to confirm her positive concept. “As I am an exceptionally generous person, I will offer a lift to my friend who just missed her bus home.” Research on the self-concept in morally charged situations (Aronson, 1999) has shown that people desire consistency and thus are likely to behave in a way that does not disconfirm their existing beliefs. On the other hand, if Rita believes she is already an exceptionally generous person, she may be less motivated to take steps to prove her generosity and enhance her moral character more generally, becoming complacent (Blanken et al., 2015). This suggests that the effects of people’s optimism on their contribution to the community cannot be easily predicted.
It would be practically useful to know whether the illusion of moral superiority predicts certain types of moral behaviour ‒ for example, dishonesty for monetary gain. On the basis of existing research there is scope for competing predictions. Given the evidence that affirmation of moral image “licenses” subsequent immoral behavior (Blanken, van de Ven, and Zeelenberg, 2015), feeling morally superior may promote greater dishonesty. Alternatively, to the extent that people value belief-behavior consistency (Festinger, 1962), moral superiority may be associated with a greater likelihood of honest behavior (Tappin and McKay, 2017, p. 7).
***
The overconfidence bias, the intergroup bias, and the optimism bias can easily result in the adoption of beliefs that are false, badly supported by evidence, and resistant to change. Such epistemic costs are accompanied by other costs and by some benefits: people can be better off as a result of adopting biased beliefs if the beliefs contribute to their wellbeing, good functioning, and goal attainment. Much less attention has been dedicated to the question of whether biases also benefit in some way social groups and society at large, as well as the individual agent.
To address the effects of doxastic irrationality, we have examined the potential benefits and costs of three biases within a social context, based on existing data. In most cases, the answer is at best tentative, because there are reasons to regard the biases as both helpful and harmful. We hope the question will be addressed empirically and that the answer will inform current interventions aimed at enhancing the quality of people’s reasoning. Avoiding doxastic irrationality is an important goal, but we should also take into account what would be lost if a bias were eliminated, reduced, or replaced, both for the individual agent and more broadly for human societies. [3]
Keywords: social epistemology, optimism bias, methodological individualism, intergroup bias, doxastic irrationality, overconfidence bias
Published online: 04/09/2018.
https://doi.org/10.3917/rphi.183.0327
Notes
[1] Our approach is different from that of bounded rationality. Bounded rationality points to three types of constraints that might affect the rationality of agents’ decision making: (1) only limited information is available; (2) human agents only have limited computational and information processing capacity; (3) only limited time to make a decision is available. Although the constraints of bounded rationality are relevant to the analysis of doxastic biases, here we want to focus on a different phenomenon, the fact that for each decision agents make there are not only epistemic but also pragmatic and psychological goals that agents may attempt to fulfil. In some cases, the goals converge and can be attained simultaneously, but at other times the goals diverge and only some of the relevant goals can be attained.
[2] The references for the articles of this issue can be found below, p. 407.
[3] The authors acknowledge the support of the European Research Council for project PERFECT (grant agreement 616358) in the preparation of this article.