Cognitive Mechanics
How to Think About Belief
What determines belief? How do beliefs form and what circumstances influence people to change them? How can beliefs be accurately described in terms of other, pertinent psychological and neuroscientific phenomena? Although beliefs play an instrumental role in all human activities and are highly consequential for individual and societal behavior, there are no accepted mainstream or even academic answers to these straightforward questions. In the search for answers, where does one begin?
The pertinent scholarly literature is scarce and inconsistent. Indeed, belief may be unique among common psychological constructs because of the absence of any broad synthesis of its terms. To advance toward such an understanding it is necessary to borrow from a number of different approaches and disciplines, and to integrate and reconcile them. Areas such as social psychology, cognitive psychology, philosophy of mind, epistemology, linguistics and cognitive neuroscience all make contributions; yet unlike other fundamental psychological constructs, there seems to be little explicit reconciliation between these contributions for beliefs.
This review will attempt to draw from a number of different sources in the sciences and humanities to offer a description of previous treatments given to belief, to reconceptualize the matter given the new perspectives, and to identify unknowns for future study. Research on the dynamics of belief will be emphasized over semantics and statics. The best way to study something as elusive as beliefs is probably to observe them in action - during formation and change. To this end, the focus of this research is to analyze both theoretical and empirical work on the foundation and plasticity of beliefs in the hope of furthering our understanding of this highly consequential, natural phenomenon.
Psychology and related disciplines have treated the construct of belief inconsistently. Because of its ubiquitous use in folk psychology and everyday life, the construct has resisted scientific operationalization. Belief is often discussed but left undefined, and several well-received definitions are not merely inconsistent but mutually contradictory (Furinghetti & Pehkonen, 2002). There is no consensus concerning what criteria a thought must meet to qualify as a belief (Eichenbaum & Bodkin, 2000). However, certain conventions are generally adhered to in the literature. Generally, a belief is treated as a fundamental mental representation, and therefore, a basic unit of cognition.
Belief is usually held to be the psychological state in which a person holds a proposition, perception, inference, judgment or premise to be true (Green, 1971). Beliefs can be created at the time they are needed during an activity or constructed presumptively to account for a past event or to prepare for a future state of affairs. Talk of a belief necessarily supposes an entity (the believer) and a proposition (the object of belief). It also presupposes that the belief may not be well supported enough to constitute true knowledge - that it is a conjecture or hypothesis (Abelson, 1979). Belief involves conviction, possibly even devout conviction, but does not necessarily involve certainty. Moreover, beliefs can be held about common workaday concepts but are usually invoked in matters of importance or where there is a divide in credence. Thus beliefs often involve stances on consequential topics, such as morality, faith, politics, science, personal identity, history, religion, distribution of wealth, economics, or culture.
How beliefs are formulated and what protocol people use when formulating them is an issue of contention, but also one of curious speculation and wonder. There is no accepted “unified theory” of belief formation, and most researchers who endeavor to grapple with the concept write about it as if it were mostly uncharted territory. Aside from Plato’s early work on belief justification, there is only patchy common ground. In his often-cited dialogue Theaetetus, Plato asserts that reason (personal logic), evidence (empirical support), and guidance (social influence) provide the best justification for a new belief (Cornford, 2003).
Psychologists have consistently identified these three contributors - reasoning, evidence and the beliefs of others - as among the most important determinants in belief justification (Abelson, 1986b). Information about how people search for these, how they think about them and how they use them is essential to grasping belief formation and change. We will focus on the constructive contributions of Plato’s logic, evidence and social influence here but also consider what happens when these factors fail to guide belief properly.
Most people assume that their beliefs, especially when they involve matters of importance, are chosen conscientiously, with a good amount of deliberation and reasoning. This paper will consider several sources of evidence suggesting that often, even consequential beliefs are chosen with very little declarative rationale and are poorly supported by critical thinking. Several fundamental mistakes of belief formation will be examined, including the use of biases, fallacies, faulty heuristics and irrational tendencies (such as the inclination to give preferential treatment to concepts that come to mind more readily). One reason beliefs often evade systematic justification may be that much of the cognitive process of analyzing and accepting a belief occurs below the level of conscious awareness. Individuals often have insight only into mental processes that are guided by consciousness; unfortunately, people have been automating aspects of believing since early childhood, and becoming aware of these aspects can be very difficult after they are made implicit. Even into adulthood, many false beliefs probably come about, unjustified, through a questionable association between two environmental stimuli that should be interrogated further, but is not.
As associations that take place in the brain, beliefs are not exclusively the province of psychology. The neurobiology of how these associations occur, in terms of neurons, synapses, binding and neuroanatomic space, will be discussed. The cortical and conscious correlates of belief will be investigated and the existence of belief in non-human animals will be considered. Derangements in the proper neurobiology of belief often result in deranged belief. Psychologically and/or biologically impaired belief development can lead not only to false beliefs but also to the debilitating and socially damning delusions of psychosis and dementia.
Hasty or incomplete belief formulation can lead people to espouse beliefs that are unfounded. Humans seem to be hard-wired such that it is easy to become convinced of something, and even to act on this conviction, without going through pertinent intervening operations, such as citing evidentiary facts or articulating logical arguments. People routinely fail to search for and evaluate evidence, even when forming their most cherished beliefs (Tavris & Aronson, 2007). Sometimes such evaluations are clouded by snap judgments or emotions, which have the capacity to cause us to become inordinately invested in a restricted subset of concerns without considering contradictory information. “Feelings of certainty” are often responsible for premature conviction; their derivation, emotional concomitants and validity are evaluated here. Interestingly though, new research has shown that emotionally driven judgments can be constructive and even advantageous in certain situations (Gladwell, 2005). Many beliefs that are not analyzed conscientiously, or even consciously, may still be good beliefs. A large body of recent literature has shown that, when made within one’s area of expertise, snap judgments, hunches and intuitions can often lead to better answers than analytical investigations (Gigerenzer, 2007). Such beliefs are also formed much faster. Choosing what to believe, especially when one feels obligated to deliberate carefully, can be time intensive and exhausting. Clearly, belief formation involves tradeoffs among accuracy, efficiency and expediency.
When people realize that their beliefs lack logical and evidential support, they often choose to appropriate beliefs from others. This usually happens when a belief is too complex or unfamiliar - when someone does not have the skill or knowledge base to think it through on their own. Parents appear to be a substantial source of belief for their children, especially early in life. It is unclear, though, how mothers and fathers transfer their beliefs and whether children can accurately gauge their parents’ true sentiments. Surely the social process of believing varies by belief. For beliefs of low or moderate complexity, people often side with their parents or close friends. For beliefs of high complexity though, many people feel compelled to side with a specialist in the pertinent field, such as a scientist or philosopher. As we will see, people use a variety of heuristics when deciding from whom to borrow ideas. Sometimes this borrowing is unintentional. People can be unaware of the influence that others have on their beliefs. In fact, it is possible to be oblivious to the impact of persuasion, even when being persuaded coercively. Overall, it seems that many beliefs are the outcome of social pressures, a need to fit in with others, and tacit, unacknowledged expropriation.
Literature Review
In the pages that follow, we will elaborate further on the subject of belief formation and change from a wide range of perspectives. We will consider how beliefs are affected and constrained by attitudes, fallacies, heuristics, delusional thinking, intuition, neuroscience, personality, persuasion, unconscious factors and self-identity. These concerns will be traced back to the processes of belief formation and change, focusing on the cognitive aspects of belief inception, endorsement and assimilation. Where possible, we will draw inferences about belief from experimental studies and data collection efforts. At this point in the evolution of belief research, however, we are highly reliant on speculation, anecdote, personal observation and convergent validation. It is not clear how much of this information can be neatly coordinated into a unified theory of belief, but considering the existing knowledge about belief in this way should constitute a good starting point. Overall, it is clear that the study of belief change is truly multifaceted, should be intensely scrutinized and deserves much wider study.
Belief as a Construct in Psychology and Philosophy
The early psychological literature on attitudes and the age-old literature on the philosophy of knowledge have substantially contributed to the demarcation and exposition of what it means to believe. Psychologists began systematically examining beliefs in the early 20th century, mainly in the arena of social psychology (Thompson, 1992). Much of this research was actually conducted with the intent to study the volatility of attitudes and the power of persuasion, but the research was cut short. Behaviorism, with its emphasis on observable behavior and its ridicule of the study of cognitive processes, ended most of the early research on beliefs and belief systems. As new developments in cognitive psychology arose in the 1970s, interest in beliefs reemerged as behaviorist ideology dwindled (Abelson, 1979). Around this time, beliefs began to be viewed as conclusions about phenomena and their nature that both affective and logical factors impacted (Green, 1971). The study of attitudes was resurrected, and for quite a while the best place to look for research on beliefs was in the literature on attitude formation. Thereafter, the link between belief and attitude was made explicit (Underhill, 1988). Although the relationship between these two concepts has not been entirely clarified, we will consider some of the research efforts and theoretical work within this area in the section on attitudes.
Philosophical thinking on belief is much older than the psychological research, though it has historically been more insular and more exploratory. Some philosophers believe that ‘belief’ cannot be defined, is not equivalent to the content of any definite description and is difficult to describe in terms of its essential and accidental properties (Hay, 2008). Philosophy has tended to be relatively abstract and inconsistent in its treatment of beliefs, whereas in psychology, a data-driven pursuit, less is written on the definition of belief but there is more agreement as to what constitutes one (Green, 1971). Taken together, the two disciplines contribute differently, but substantially.
Both the philosophical and psychological literatures emphasize that most people distinguish what they know from what they believe, even though they consider both kinds of statements to be true (Schwitzgebel, 2006). This distinction between belief and knowledge originates from the philosophy of mind, where it is a seminal concept. Both psychologists and philosophers concur that belief systems often include a substantial amount of episodic material from personal experience, folklore, cultural doctrine or propaganda, and contain strong references to the self-concept of the believer, a feature usually left out of knowledge systems completely (Abelson, 1979). In addition, beliefs can be held with varying degrees of certitude; one can be passionate or restrained about a belief, whereas knowledge is categorical - something is known to be a fact or it is not. This difference, where only beliefs can vary in certainty, leads many beliefs to become the subjects of powerful emotional or subjective feelings. The interrelationship between beliefs and personal concerns is a potentially rich but mostly unexplored topic that will be elaborated on in the section on self-identity. Empirical research has made it clear that a person’s past, occupation, habitual activities, pride and ego all play a role in what they choose to believe (Furinghetti & Pehkonen, 2002). In fact, the involvement of concerns related to selfhood and individuality is a major factor that differentiates things that are believed from things that are known.
A knowledge system is a set of proven facts that are accepted to be true, whereas a belief system is a set of nomologically related propositions that one holds to be true but that may not have been scientifically proven or sociologically accepted. There are caveats to this though. Cognitive psychologist Robert Abelson (1979) has asserted that if every normal person of a particular culture believes in an unproven supernatural construct, then even though this might constitute a false belief system to an observing anthropologist, it would constitute a knowledge system for the members of this culture because of the unanimity of belief. This brings an interesting concept into play, namely that belief may be distinguished from knowledge either on scientific grounds or by cultural consensus. Most philosophers, though, agree that a scientifically false belief should not be considered knowledge even if it is totally sincere (Abelson, 1986a). Conversely, a truth that is not believed by anyone does not constitute knowledge, because for it to be knowledge, a person must believe or know it. Equivalently, a person must believe a belief for it to exist, even though, according to some theorists, a person may hold a specific belief but not know it until they are forced by experience to formulate the belief consciously (Hay, 2008).
There are other important facets to the relationship between knowledge and belief. Knowledge requires belief, so it is epistemically impossible to know something but not believe it. On the other hand, belief does not require knowledge, nor does knowledge about a particular belief necessarily constitute an endorsement of it (Abelson, 1979). Often statements about belief entail faith, such as a person believing in his or her favorite sports team. This has been called “belief in,” which indicates faith in something and is usually commendatory or exhortatory (I believe in the power of love). Such beliefs refer more to inner states of opinion than they do to an outer reality. Epistemology and psychology have historically been less concerned with this type of belief and more concerned with beliefs that can be formulated into subjective, personal statements on topics involving knowledge more so than faith (Hay, 2008).
Plato and Socrates made what is regarded as an important distinction between knowledge and belief, saying that knowledge is a direct perception of information about the world and that belief is the qualification we put on the accuracy of that perception. Plato, in Theaetetus, defined knowledge as “justified true belief” (Cornford, 2003). Since that time, philosophers have seemed to relish the distinction between knowledge and belief. This topic is interesting because it details how we piece our worlds together from phenomenal experiences. Epistemology - the philosophical study of how humans use knowledge to justify beliefs - is a highly influential discipline that appears particularly germane to our discussion of beliefs.
Personal Epistemology
Epistemology is the branch of philosophy concerned with the nature and scope of knowledge. Since epistemology is concerned primarily with determining what criteria conjectures must meet to constitute true knowledge, understanding it should help us to better understand beliefs. A comprehensive account of the important constructs in epistemology would be pedantic, yet a review of its foundations should help to elucidate the problems encountered by people who are trying to decide what to believe and bring us closer to an understanding of the cognitive basis of belief formation and change.
Opposing epistemological camps have helped to delineate ground rules for how to think about beliefs. These camps have taken strong, opposing positions, but in doing so they have generated and expounded upon fundamental viewpoints, most of which are not necessarily incompatible with one another. Foundationalism holds that basic statements - statements that are self-evident, self-justifying and incapable of being falsified - do exist and give justificatory support to other, derivative statements, creating a foundation for a structure of knowledge. The doctrine of Fallibilism contradicts this assertion, arguing that absolute certainty about knowledge is impossible and that all claims of knowledge, in principle, could be incorrect. This nihilistic stance, where there is thought to be no objective basis for truth, is not widely embraced but has never been satisfactorily dismissed either (BonJour, 2002). Empiricists counter that it is possible to lay a foundation for knowledge, and they insist that reports of sensation are the source and criterion of knowledge. This empirical stance holds that sensory knowledge is indubitable and can constitute epistemologically basic propositions (this will be discussed further in the section on evidence). This tradition, along with rationalism, has formed the foundation for modern science. Rationalists argue that true knowledge does exist and is gained by reason rather than by experience. Rationalism is concerned with the logical paths to knowledge, and much of this literature involves the identification of fallacies that interfere with or obfuscate logic. Here, to be reasonable, one’s rationale must not commit a fatal falsity.
In the study of logic, a fallacy is defined as a misconception resulting from incorrect reasoning in rhetoric or argumentation (Hay, 2008). Fallacies include mistakes in argument such as false dichotomy; appeal to common opinion; confusion of cause and effect; drawing the wrong conclusion; appeal to emotion; misuse of a vague expression; begging the question; false alternative; faulty analogy; omission of key evidence and use of a red herring. Importantly, fallacious arguments are thought to be used often to support belief (BonJour, 2002). Although some fallacies are specific to arguments between two people and could probably not be generalized toward an “argument” someone is having with themselves, personal beliefs are highly susceptible to common fallacious logic (Dancy, 1991). Rationalism has produced these tools of logic which can be used to assay the justification for individual beliefs.
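To see what separates a valid inference from a fallacious one, consider a minimal sketch, not drawn from the cited texts, that checks argument forms by brute-force truth tables. Modus ponens stands in for sound deduction, and affirming the consequent, a close relative of “drawing the wrong conclusion” from the list above, stands in for fallacy; the function names are hypothetical.

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'P implies Q' is false only when P is true and Q is false."""
    return (not p) or q

def valid(premises, conclusion):
    """An argument form is valid if the conclusion holds in every row of the
    truth table where all of the premises hold."""
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

# Modus ponens: from 'P implies Q' and 'P', conclude 'Q' - valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))   # True

# Affirming the consequent: from 'P implies Q' and 'Q', conclude 'P' - a fallacy.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))   # False (fails when P is false and Q is true)
```

The fallacy fails exactly where logic demands it must: there is an assignment of truth values that makes both premises true while the conclusion is false.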
Rationalism, Empiricism, Foundationalism and Fallibilism are each extreme stances that allow important insight into how beliefs are generated and supported. Commingling the messages from these schools of epistemological thought allows us to see that good logic and trustworthy evidence can combine to erect a sincere and credible belief system despite the fact that a degree of uncertainty will remain. Popular and recently derived models of epistemic decision making map out how these stances affect individuals when they are deciding what to believe. One particularly successful model, the Reflective Judgment Model, illustrates how personal epistemic reasoning can attempt to avoid fallacy and falsity.
The Reflective Judgment Model (RJM) is a theory of human decision making designed to describe the development of reasoning by detailing how epistemic assumptions change and how critical and reflective thinking skills inform belief. The model has been supported by extensive longitudinal and cross-sectional research and routinely informs the work of developmental and educational psychologists (King & Kitchener, 1994). Reflective Judgment emphasizes that many problems cannot be solved with certainty, that people know this and that they create strategies for dealing with uncertainty. As they do this, they move up through a hierarchy of many stages of proficiency that are divided into three main categories. The categories correspond to modes of reasoning which are thought to develop in an invariant sequence: prereflective reasoning, quasi-reflective reasoning and reflective reasoning.
Prereflective reasoning mediates the acquisition of beliefs through the word of an authority figure or through firsthand observation. People who use this type of reasoning do not question their beliefs and assume that they know things with complete certainty. A person who uses quasi-reflective reasoning appreciates that knowledge claims contain elements of uncertainty and uses evidence to support their beliefs but they are inconsistent, idiosyncratic and subjective in their epistemic reasoning. Reflective reasoning, on the other hand, is much more objective, is open to continuous reevaluation, is conscious of the pitfalls of fallacious reasoning and is never certain but operates on the basis of the “most reasonable” evaluations of available data. RJM provides a fine model for different degrees of experience and acumen in belief formation and change. By emphasizing the importance of comfort in the absence of certainty and openness to constant reevaluation of the same beliefs, RJM sets a high standard and gives most believers a lofty goal to aspire to.
Another popular paradigm discussed in the literature on belief formulation, the Data-oriented Belief Revision (DBR) model, is consistent with this interpretation (Paglieri, 2005). DBR operates on the assumption that data and beliefs are two separate entities. Under this model, data are snippets of information collected and arranged by an individual, and beliefs are interpretations of the arrangements of these data that have been accepted as true. According to this paradigm, and consonant with a good deal of other research perspectives, a large number of logical, emotional and cognitive-developmental determinants are thought to play roles in whether data are accepted or rejected (Paglieri, 2005). DBR is similar to RJM and other models of belief because its conceptualization of data is practically equivalent to RJM’s concept of knowledge. Other epistemological models feature various other concerns, but none brings them all together. Creating a comprehensive model of the process of belief and believing is an endeavor for the future.
Personal epistemology, a subject still being formalized, maps out how individuals conceive of and use logic, evidence and other people to assemble and fortify their belief systems. Empirical studies have shown that there are degrees of maturity and effectiveness in epistemological reasoning (Perry, 1970). This research has evaluated participants on a variety of levels corresponding to the constructs in RJM and shown that a large degree of interpersonal variability in skill with belief exists. It led its principal investigator, William Perry, to point out in his scheme of intellectual development that mature people realize that not all questions have verifiable answers, that some contentious issues are truly only a matter of opinion and that even distinguished authorities can disagree on certain topics (1970). It is clear that some statements can be proven, others can be strongly supported, others can only be bolstered, and that judicious and discerning individuals can perceive and apprehend the difference. Every believer should benefit from exposure to these enlightening epistemological considerations. Other informative doctrines of epistemology that potentially could, and perhaps should, be reconciled with the notion of belief include agnosticism, determinism, fatalism, nihilism, skepticism and solipsism. Personal epistemology will pervade the remainder of our discussion of beliefs, especially our examination of the role of evidence and logic.
Empirical Evidence and Logical Reasoning
The use of evidence and reason in guiding belief has been a topic of foremost concern in scientific methodology. For thousands of years, philosophers of science have been active in rationally examining the nature of belief derived from observational research (Bechtel, 1988). Aristotle contributed appreciably to the understanding of how data could lead to classification, theory and knowledge. His ideas on the matter were preserved and conformed to for over a millennium despite the fact that they were less than comprehensive. Aristotelian science was subject, in a haphazard way, to the rules of natural philosophy, where naturalistic observations could be analyzed using the philosophical method of one’s choosing. Since the 17th century, Francis Bacon, Rene Descartes, John Stuart Mill and the Logical Positivists have greatly improved upon the old philosophical methods of syllogism, transitive inference, metaphysics and ontology with more algorithmic methods of science. The modern scientific method espouses the view that empirical evidence is indispensable for knowledge of the world and that scientific beliefs must be justified by strong physical evidence, materialistic induction and deduction, and the systematic testing of alternative hypotheses.
Although the scientific method acts as a good model of belief epistemology, its methodology is too rigorous and exhaustive to be practical for personal beliefs. People need a quicker, more direct way to justify their beliefs. It is probably a safe bet to base one’s beliefs on the beliefs of scientists, but much scientific thought takes voluminous reading to uncover, and many things that people want to form beliefs about have not been subjected to scientific inquiry. Instead, people often rely on personal observations, the opinions of secondary sources, authority claims, social or cultural consensus and the coherence of argumentation (Irving et al., 1998). Personal observations are usually trustworthy unless the perception involved was illusory or the person attempts to generalize an observation inappropriately. Secondary source evidence such as photos, videos or reports is often credible except when it is manipulative or misleading. Authority claims and social consensus can differ, but both are taken as reliable by most people (Ross & Anderson, 1982). Logic, reason and the coherence of arguments are usually at least taken into account by people deciding what to believe. But precise logic, which involves forming premises and deducing valid conclusions from them, is laborious (Abelson, 1979). Every person probably has their own idiosyncratic methods of using logic and evidence, and these methods themselves are probably applied inconsistently.
Most people think that their unique way of justifying beliefs is valid. They assume the beliefs that they choose to espouse are those that are consistent with sensory perceptions, sociologically accepted systematizations and dedicated reasoning. These people may be sincere, and even sensible, in thinking that the manner in which they choose what to believe is logically permissible, but there is a good deal of research suggesting that most people hold a multitude of beliefs that are not supported by evidence or well-reasoned argument (Kida, 2006).
Empirical studies have examined the role of evidence and reason in guiding personal opinion and have demonstrated that they are often used inconsistently and inappropriately (Schommer, 1990). Schommer administered an epistemological questionnaire to undergraduates and found that students who oversimplify their searches for evidence tend to be overconfident in their comprehension and tend to reach oversimplified conclusions. Further, she discovered that students who frequently use irrational epistemological reasoning are more likely to reach inappropriately absolute conclusions when asked to write a concluding paragraph to a passage about scientific findings. In fact, a growing body of literature indicates that our beliefs, and our certainty in them, may be guided more strongly by emotional construals, transient motivations, subjective biases, subconscious objectives and constructs tied to self-identity (Tversky & Kahneman, 1974). This can be good or bad, depending on the belief in question.
Many researchers advocate that emotions (in the form of conditioned visceral reflexes, amygdalar responses or orbitomedial prefrontal cortex biases) can cause people to jump to accurate conclusions without employing the intervening cognitive steps (Damasio, 1994a,b). Other research in this area shows that evidence may not be necessary when it has already been gathered, when it is implicit in an emotional response, when good evidence cannot be found, or when too much evidence leads to “analysis paralysis” (Gladwell, 2005). Spontaneous decisions or snap judgments can be helpful in such situations, but this is more often true of one-time decisions than of permanent beliefs. A decision is usually particular to a situation, whereas a belief is often formed because of its utility across multiple situations. When beliefs are constitutional - when it is clear that they will guide future behavior - they should be made deliberately and not spontaneously.
The philosophical literature on “evidential belief” makes a distinction between core beliefs and dispositional beliefs. Core beliefs are propositions that have been considered or decided upon in the past; any core belief has been thought about actively at some point. A dispositional belief is a belief that someone might ascribe to if confronted with a topic they have never considered before, and therefore one they have not come to a belief about in the past (Bell et al., 2006). Dispositional beliefs have not yet had logic or evidence brought to bear on them. It is thought that dispositional beliefs, when formed, are more likely to be contrived hastily and, relative to beliefs that have some kind of precedent, are not as adequately supported. Another similar view of belief revision explains that keeping consistency among our beliefs is a basic human need and an urgent concern during belief formulation (Schick & Vaughn, 1995). Pencil-and-paper studies show that people tend to reject facts or statements that are at odds with core beliefs that they have chosen to espouse or support in the past (Schick & Vaughn, 1995). For this reason, many people will embrace evidence that supports a held belief and disregard evidence that conflicts with it - regardless of merit - in order to maintain cognitive consistency (Dancy, 1991).
The philosophical literature on belief sources has elaborated two approaches: the foundation model and the coherence model (Doyle, 1992). According to foundations theory, beliefs are maintained if they are reasonable, rational and justified, and beliefs are abandoned as an individual adopts evidence to the contrary. The coherence approach, in contrast, contends that an individual will accept a belief if it logically coheres with other closely held beliefs pertaining to the self. Some beliefs may be more important, or psychologically central, for a person than others, and so new beliefs are probably tested for coherence with these first (Pehkonen, 1994). Core beliefs are usually affected by both. The foundational and coherence models are thought to be able to coexist and lead to the following situation: the availability of rational and justified evidence combines with the personal relevance of the belief to determine certainty strength, or degree of conviction. Like DBR and RJM, these approaches can be used to inform predictions about how humans will make decisions under different evidentiary conditions (Doyle, 1992).
Mathematicians have contributed to the debate about human beliefs and have proposed prescriptive models of how a person’s belief should change in strength when they are presented with new evidence supporting or refuting it. Bayes’ Theorem has been used to describe how the strength of a rational person’s beliefs should change when they combine new evidence with previously accumulated evidence (von Winterfeldt & Edwards, 1986). In fact, the field of Decision Analysis was born in 1954 when Ward Edwards, asking participants to revise their existing beliefs after being exposed to new evidence, demonstrated that human decision makers depart greatly from the mathematical predictions of Bayes’ Theorem (Edwards, 1954). Most people were never instructed how to use evidence rationally, and we cannot expect them to operate under mathematically optimal conditions. Also, people do not normally calculate probabilities; they compare an imagined scenario employing a given belief to a scenario without the belief.
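To make the normative benchmark concrete, here is a minimal sketch of Bayesian belief revision. The prior and likelihood values are hypothetical illustrations chosen for the example, not figures from the studies cited above.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E.

    Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E), where
    P(E) = P(E|H) * P(H) + P(E|~H) * (1 - P(H)).
    """
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical believer: 70% confident in a claim, then encounters evidence
# that is twice as likely if the claim is true (0.8) as if it is false (0.4).
belief = bayes_update(0.70, p_e_given_h=0.8, p_e_given_not_h=0.4)
print(round(belief, 3))  # 0.824 - the normatively warranted revision
```

Edwards’s participants characteristically revised their estimates less than this benchmark demands, a departure that came to be known as conservatism in probability revision.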
Descartes and Spinoza had different ideas about the role evidence plays in belief. Rene Descartes described believing as involving two mental representations: one regarding the claim at stake and another that exposes this claim to assessment and scrutiny. He thought that evidence played a major role in this assessment. Importantly though, he maintained that a belief is held and analyzed objectively until the person chooses to accept or reject it (Clarke, 2006). This view dominated until Baruch Spinoza argued that in order to assess a belief we must first comprehend it, and in order to comprehend it we must accept it (Boucher, 1999).
Some functional magnetic resonance imaging (fMRI) data have supported this notion that in order to question a belief we must, at least momentarily, accept it as true (Harris et al., 2007). Others have taken this idea further and pointed out that because we must believe a belief in order to understand and analyze it, perhaps we sometimes believe falsely because we have begun, but not finished, the process of belief formation (Gilbert, 1991). Studies have shown that merely being exposed to a statement, like a leading question from an unethical lawyer, can induce belief. Other studies have shown that distraction or time pressure can make people prone to accepting a falsehood (Schick & Vaughn, 1995). Ironically, failing to bring good evidence to bear on a claim can, in some circumstances, make us more likely to believe it.
Wimmer and Perner (1983) have elaborated on Spinoza’s position and asserted that in order to analyze an incoming belief we must construct two completely separate models of the world: one in which the information is true and one in which it is false. It is not clear whether this is so, but certainly the ability to create a different model of the world can act as a frame of reference, helping us to better understand, interpret and predict the actions of someone whose beliefs differ from our own. It would be interesting to find out more about how individuals employ working memory to represent, model and test probationary beliefs. That very little research has been done here, and that few have attempted to verify or disprove these philosophical ideas, should be exciting for younger generations of researchers.
When we act on intuition, instead of employing working memory, we may be relying on evidence that was acquired in the past but is now preconscious. Certain behaviors, even ones that we are not aware of, can become routinized and automated to reflect entrenched beliefs that, in the past, were based on true evidence. For example, one might have a predilection for treating strangers kindly without having to reactivate previously held convictions about altruism. Just as good posture can be maintained by muscle memory, personality, general demeanor, belief propensity and even decision styles can be maintained unconsciously. Many beliefs that have influenced behavior in the past probably become phased out of consciousness as they are incorporated into automatic subroutines. When held accountable for explaining why they acted in a certain way, a person may not be able to invoke the original belief despite the fact that it did powerfully, albeit indirectly, influence their behavior. It is probable that young children become explicitly aware of some of the cognitive protocols involved in belief formation, but after repetition and practice in using beliefs, these formal rules become procedural and thus lost to conscious awareness. These explicit rules of belief and knowledge acquisition are effectively retained in the sense that they continue to determine belief outcomes, but, because they have been made implicit, they are unavailable for personal or even scientific scrutiny. For this reason, attempting to pry loose the integral elements of belief, especially in early life, should help us attain a comprehensive model of belief dynamics. The section on the influence of other people will consider from whom, and how, we extract evidence.
Research shows that individuals will often maintain a belief in spite of overwhelming amounts of contradicting evidence, a tendency termed “unwarranted theory perseverance.” After performing several survey studies and an extensive literature review, Anderson et al. (1980) concluded that people frequently cling to beliefs to a “considerably greater extent than is logically or normatively warranted.” Their findings and the findings of others suggest that evidence is often not weighed judiciously and that competing beliefs and counterexplanations are too often ignored or overlooked (Kida, 2006; Schick & Vaughn, 1995). The ability to guard against hasty belief has been called “source monitoring” by Marcia Johnson (Johnson, 1999). This ability is thought to be multifaceted, and proficiency is said to take experience and practice (Johnson, 1999). Skill at source monitoring is thought to be a function of a person’s awareness of, and refusal to commit, the common mistakes of belief formation.
Mistakes of Belief Formation
When forming beliefs, people use processing shortcuts, or heuristics, which work in some situations, but also lead to mistakes if they are used inflexibly. Several popular books have been written on the topic of cognitive blunders and it seems that the public has an appreciation for, or at least an interest in how to recognize and correct common mental lapses. According to Thomas Kida (2006), these mistakes include human tendencies to: prefer stories or anecdotes to statistics; be confused by superficial similarities; give preferential treatment to concepts that come to mind more readily; seek to confirm though not to question ideas; disregard alternative explanations for phenomena; accept flimsy evidence to support an extraordinary claim; underemphasize the role of chance and coincidence in shaping events; misperceive; oversimplify; and have faulty memories. Several of these mistakes are congruent with specific logical fallacies identified by philosophers. These are a somewhat arbitrary and motley grouping of blunders, but because psychologists (Kida, 2006; Tversky & Kahneman, 1982) have emphasized them routinely, we shall briefly consider each in an attempt to glimpse how beliefs go wrong.
People have a tendency to prefer stories or anecdotes to statistics. Stories are probably easier for us to understand; they seem more salient and more reliable even though they are usually less reliable than statistics garnered by intensive experimentation. Cognitive science has shown that people often find themselves in situations where it is necessary to employ statistical reasoning to solve problems or make intelligent estimates. Most people have difficulty using statistical information effectively; consequently, they will often use other “heuristics” to help solve problems. Tversky and Kahneman (1974) studied these phenomena in depth by measuring people’s performances on carefully devised assessments. They wanted to see what rationale people used to make decisions, especially decisions related to determining the relative frequency of specific events. The representativeness heuristic and the availability heuristic were two heuristics they identified as having a substantial bearing on beliefs. The representativeness heuristic is used when we judge two things as being similar only because they share prima facie characteristics, or a superficial resemblance. People using this heuristic ignore statistical rules and assume that if one concept shares a specific quality with another concept, the two concepts are sure to share many other qualities and should be categorized together (Tversky & Kahneman, 1982). This heuristic is very similar to the fallacy of “faulty analogy” mentioned earlier. Both are thought to be responsible for why many people see illusory relationships in a series of random events. In addition, when applied incorrectly, the representativeness heuristic is known to lead to the creation of damaging stereotypical beliefs (Kahneman & Tversky, 1973).
The availability heuristic is similar to, but distinct from, the representativeness heuristic. Many psychological experiments have shown that people regularly use “available,” or easily accessible, memories to make judgments about the likelihood of events. This is probably because it is natural for us to use concepts that readily spring to mind rather than complete and unbiased information. We can easily remember recent experiences or reports from friends and the news, and we often use these types of information instead of statistical information to estimate probabilities (Tversky & Kahneman, 1973). The fact that biased information is more readily available to memory causes us to discard more reliable empirical knowledge and thus leads to hasty, unexamined beliefs. Researchers have pointed out that throughout its evolutionary history, our species has gained knowledge from personal anecdotes and memorable occurrences, not from statistics or experimental studies. Many researchers believe that this partly underlies our penchant for paying close attention to information coming from a story, a personal account, or an associated experience (Shermer, 1997). This tendency has the effect of making us believe in causes that are really only partial causes, accept things that are unsubstantiated, and trust small sample sizes (Sagan, 1995).
We also seek to confirm our beliefs. People have a strong penchant for committing the confirmation bias, or positive-test strategy, in which they are prejudiced towards confirming their speculations. This is a common cognitive error that biases us toward confirming our ideas by making us seek out cases that support our hypotheses and disregard cases that question them (Shermer, 2003). This proclivity acts to reinforce existing beliefs and plays a large role in the maintenance of delusion, in attitude polarization, and in illusory correlation (Charles & Lodge, 2006; Lee & Anderson, 1982). Relatedly, the behavioral confirmation effect, also known as the self-fulfilling prophecy, occurs when a person’s expectations influence their own behavior, which can lead to disastrous decisions in organizational, military, and political contexts (Darley & Gross, 2000). These examples show how powerful expectations can be in influencing our decision-making strategies.
Expectations have even been shown to influence perceptions. When a newsflash in a small town reported that a large bear had escaped from a local zoo, the 911 switchboards lit up. People reported seeing the bear all over town, despite the fact that the bear never wandered more than 100 yards from the zoo (Harter, 1998). In a similar way, sports fans have been shown to be functionally blind to infractions committed by their own team (Hastorf & Cantril, 1954). People expecting to deduce the rules used in a video of a ball-passing game show an attentional scotoma for the appearance of a man in a gorilla suit, simply because his presence was not expected and thus was not attended to (Simons & Chabris, 1999). Other experiments with selective attention have shown that people can be functionally blind to highly salient stimuli if they are concertedly attending to other stimuli (Knudsen, 2007). These and many similar anecdotes and experimental outcomes embody the lyric, “what a fool believes… he sees.” Expectations can have powerful effects on perception, and it is thought that misguided perceptions also have the capacity to lead to false beliefs (Kida, 2006).
The hindsight bias is another, related mistake that has the potential to impinge on both memory and belief. It is common for people to recall their correct predictions but to forget the faulty ones (Fischhoff & Beyth, 1975). The dramatic fervor that fans display for their team is rekindled after a win but quickly forgotten after a loss. When a situation is playing out, an individual might throw in a quiet remark predicting a certain outcome. If the prediction fails, the remark is forgotten. If it succeeds, they can then speak vociferously about their “uncanny” prediction. Often an attempt to gain credibility, this tactic can confuse even the speaker, because it gives them an erroneous conception of probability and of their own ability to predict random events.
Many psychological models of memory impairment attempt to explain how this type of cognitive error might stem from a few different causal factors. Some psychologists think that knowledge about the outcome of an event might alter or erase previous memories related to the event before it played out (Fischhoff & Beyth, 1975). Motivational factors, and factors related to the heuristics used in recalling events, might make the original judgments or beliefs less easy to activate (Morson, 1994). To many people, the hindsight bias, much like many of the phenomena described by psychologists, seems trite or like “common sense.” This view is itself a product of the hindsight bias: the tendency to see things as obvious, but only after the fact.
Faulty memory can lead to mistakes in belief formation. Memory recall was once thought to be a highly accurate and automatic process in the sense that it ran to completion via subconscious mechanisms and was thus hard, if not impossible, to bungle. Now, recall is often conceived of as a subjective process in which people use working memory and executive functions to piece together past events. Memory recall thus involves conscious deliberation and, because of this, is open to all sorts of processing errors. Memory is often thought to be patently veridical, but when it is not - when it is reconstructive - it is fallible.
Confabulation is a common error that can be made during recollection. Confabulation is the spontaneous and unintentional narrative report of events that never happened. When confabulations involve recollection, it is the confusion of imagination with memory, or a confusion in the application (or integration) of true memories (Berrios, 1999). Confabulation is an indicator of psychosis or frank delirium but is thought to occur in a less prominent and less understood way in all people. Daniel Schacter’s (2001) book The Seven Sins of Memory points out seven common problems with memory, or its use, that can result in mistaken thinking. These involve the transience of many memories; the consequences of absent-minded thinking; the tendencies of certain memories to interfere with or block the recall of other related memories; the misattribution of source; the intrusive persistence of memories that are impertinent, unwanted or disturbing; the corruptibility of memory by suggestion; and bias. Beliefs are necessarily predicated on memories, and thus, when memory is obscured or blatantly erroneous, belief accuracy is made especially vulnerable.
It has been shown that a wide variety of memories can be falsely created, either inside or outside of a therapist’s office, through the use of suggestion, guided imagery, and hypnosis. Though these techniques do not always result in false memories, experiments suggest that a significant proportion of people will believe in and actively defend the existence of fabricated events, even after they are told that the events were false and deliberately implanted (Reyna & Lloyd, 1997). False memories involving childhood sexual abuse have gained significant attention because, even when it is clear the accused is innocent, the accuser can remain irrationally convinced to the contrary (Loftus & Ketcham, 1994). It is not just the subjects of guided imagery who believe in its efficacy: surveys indicate that most Americans believe psychologists or hypnotherapists can recover traumatic memories that were previously inaccessible or repressed, even though research does not support this (Loftus & Loftus, 1980). False memories can even be created by suggestions that are much more subtle.
Eyewitness testimonies, for instance, were once thought to be highly reliable, until cognitive psychologists were able to show that the memories these testimonies rely on are highly volatile and heavily vulnerable to contaminating information. It is worth mentioning that false testimony is thought to be a common occurrence despite the fact that the witness, who is under oath, often believes resolutely in their testimony (Loftus & Loftus, 1980). It is becoming clear that our conscious mind can come to believe things that are patently false because its reality-constructing mechanisms often act in prefabricated and obstinate ways. Inflexibility in our memory and thought has been shown to affect our ability to understand even our own intentions.
Neuroscientist Michael Gazzaniga (1998) has a paradigm that explicates why we are so susceptible to mistaken thought and how this susceptibility is intimately tied to the way the conscious mind pieces the world together. Gazzaniga formulated this paradigm after working with split-brain patients who had undergone callosotomies. These patients have had their corpus callosum severed (sagittally), effectively isolating the left and right hemispheres from each other. Gazzaniga observed the speechless right hemispheres of these patients command the left half of the body in ways that were inconsistent with the wishes of the speaking left hemisphere. One might expect that the left hemisphere would report that it could not explain these actions and that it was not responsible for them. However, Gazzaniga (1998) found that often the person would confabulate; they would make up false reasons for why the right hemisphere did what it did, as if they had been in control all along. It was disconcertingly clear that otherwise sensible people were not at all aware of this conspicuous subterfuge. This led Gazzaniga to posit that much of our immediate behavior must be mediated by unconscious, habitual, or procedural brain systems, and that we often only have the capacity to analyze our decisions after we act on them. He purports that what he calls the interpreter, the language center in the left hemisphere, does its best to provide a rationale for decisions and actions after the fact, and that this has the potential to result in blustering, duplicitous distortion (Gazzaniga, 1998). Rigorous experimentation on normal people without callosotomies has supported this conclusion, showing that spontaneous cerebral initiative to action, involving no preplanning, precedes conscious awareness of the will to act by more than 300 ms (Libet, 1985).
When we respond quickly to an environmental stimulus, the conscious mind does not have the time to be considerate and reflective. Often we act simply because we trust an intuition. Since subconscious brain modules perform these cursory actions (behaviors that can often seem complex), our conscious mind never has the opportunity to understand what was done, or why, until afterwards. Because it is not involved in the planning of many fast responses (and because much of the cortex does not have direct connections to many subconscious motor areas, such as the basal ganglia), it can only infer, from what it can gather through the senses, why the lower areas did what they did. Studies of the neuroscience of free will have shown that a person’s brain can commit to certain decisions from half a second to several seconds before the person is consciously apprised of the decision (Soon et al., 2008). Some researchers have inferred that, because our conscious selves are updated independently of the unconscious guidance mechanisms, most people may confuse the correlation of conscious experience with movement for causation (Schlinger, 2009). Not only the creation of motor movement but the immediate creation of sensory imagery - thinking itself - may be highly guided, even determined, by unconscious processing. This impels one to wonder how often our beliefs are predicated on thoughts that are invalid or uninformed attempts at explaining unconscious phenomena. At first glance this appears to trivialize the role of beliefs, as we have said that beliefs are mediated by conscious thought. Upon further inspection we remember that beliefs can become deeply ingrained and that perhaps a large amount of unconscious action reflects past conscious belief.
When we have the time to think before acting, we often employ preconceived models, or schemata, to help orient ourselves conceptually. Schemas are learned conceptual models that people impose on their experiences to aid them in information processing, decision-making, and memory (Bartlett, 1932). A schema for a certain social situation might contain the sequence of events normally associated with that situation. Our schema for visiting a friend may include calling ahead of time, greeting our friend, interacting with them, and finally thanking them. Examples of schemata include academic scripts, social worldviews, stereotypes, and archetypes. Schemas can help to make certain routines second nature and help us to develop mental representations, or “theories,” about how our world operates. We use schemas, mental frameworks for commonly occurring things, to help us organize current knowledge and to provide structure for future understanding (Bartlett, 1932). We can utilize our schemas to prod us into remembering events or hard-to-recall facts (Brewer & Treyens, 1981). They help make processing less effortful. For example, I might forget what I wore last Sunday, but remembering that I attended church might help to expedite my information search, as illustrated in the sketch below.
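As a toy illustration of this idea, a schema can be modeled as a stored event script that doubles as a retrieval cue. The schema names and steps below are hypothetical, echoing the examples in the prose; nothing here is drawn from Bartlett or Brewer and Treyens.

```python
# Toy sketch: a schema as a stored event sequence (a "script").
# Schema names and steps are hypothetical examples from the prose above.
schemas = {
    "visiting_a_friend": ["call ahead", "greet friend", "interact", "thank and leave"],
    "attending_church": ["dress formally", "drive to church", "sit through service"],
}

def cue_recall(event, query):
    """Use a schema as a retrieval cue: scan the remembered event's script
    for steps relevant to the query (e.g., what was worn last Sunday)."""
    return [step for step in schemas[event] if query in step]

print(cue_recall("attending_church", "dress"))  # ['dress formally']
```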
Using schemas incorrectly, however, can easily lead to cognitive errors. The misapplication of a schema is very similar to confabulation. Sometimes one has to think outside the box and consider the possibility of other, less normative routines coming into play in order to avoid these errors. Many people with self-limiting schemas, even those who have demonstrated insight into the questionable origin of the schema, still adhere closely to them and continue to act on them, even when it would be far easier to abandon or even temporarily ignore them (Hoffer, 2002). Since beliefs constitute habitual ways of perceiving the environment, they are, at least in many ways, comparable to schemas. It seems that, as with beliefs, the repeated utilization of a schema increases its consolidation and makes it less susceptible to disruption (Hoffer, 2002).
This list of common mistakes of belief formulation includes both conscious oversights and unconscious inadvertences. We search for meaning in the wrong places, connect the dots in the wrong ways, and adopt frames of mind that miss the big picture. The solutions to most of these mistakes appear to be common sense, yet many of us fall prey to them, and others like them, on a daily basis. It is clear that once informed, people can make efforts to resist some of these pitfalls (Tversky & Kahneman, 1982). At best, these mistakes lead to extreme views in matters of opinion; at worst, they cause people to adopt beliefs contrary to what most people know, leaving them ridiculed, pitied, or even institutionalized. Interestingly, the delusions of a person with pronounced schizophrenia or drug-induced psychosis appear to be formed under the same conditions as false beliefs.
False Beliefs and Delusions
It seems that the literature on delusions can be brought to bear informatively on the literature on beliefs and vice versa. There are important differences between delusions and normal false beliefs. Most false beliefs can be challenged, modified or brought to extinction if they prove erroneous or unsupported. Delusions, though, persevere even in the absence of support and in the face of strong contradictory evidence. The American Psychiatric Association defines a delusion as a "false belief based upon an incorrect inference about external reality," one "that is firmly sustained despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary" (APA, 1994).
It may be possible to describe delusions accurately in terms of associative learning (Miller, 1989). According to this interpretation, in a delusion one concept is linked with another under a fallacious association. Given this, the concept of extinction, the uncoupling of two formerly associated things, may be an appropriate construct to represent the resolution of a delusion (Miller, 1989). If a delusion is resolved, the explanatory, causal associations that held the delusion together are disentangled so that the concepts are no longer coactivated. This is thought to be similar to the decline of a salivary response to a bell that was formerly, but is no longer, paired with food. During associative learning, also known as conditioning, an organism learns to associate a previously neutral stimulus (such as a tone, referred to as the conditioned stimulus) with a reinforcer (such as food or an electric shock, referred to as the unconditioned stimulus). Once a dog is exposed to the ringing of a bell several times without being given food, it learns not to expect the food in this new situation (Pavlov, 1927). At first it associates the absence of food with some new (misleading) contextual cues that allow it to differentiate between the original situation, where the bell predicted food, and the new one. Eventually though, with enough unreinforced presentations, the dog learns that the bell is not associated with food, and the extinction of this association is thought to involve an inhibitory mechanism that overrides the midbrain dopamine neurons responsible for maintaining the strength of the original association (Pan et al., 2008).
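This acquisition-and-extinction dynamic is classically formalized by the Rescorla-Wagner rule, in which associative strength is nudged by a fraction of the prediction error on each trial. The Python sketch below is a minimal illustration; the learning rate and trial counts are arbitrary choices, not values from the studies cited here.

```python
# Rescorla-Wagner update: V <- V + rate * (lam - V), where V is the
# associative strength of bell->food and lam is the maximum strength the
# outcome supports (1.0 when food is delivered, 0.0 when it is withheld).
# The learning rate and trial counts are illustrative, not empirical.

def rescorla_wagner(v: float, lam: float, rate: float = 0.3) -> float:
    prediction_error = lam - v            # mismatch between outcome and expectation
    return v + rate * prediction_error

v = 0.0
for _ in range(20):                       # acquisition: bell paired with food
    v = rescorla_wagner(v, lam=1.0)
print(f"after acquisition: V = {v:.2f}")  # climbs toward 1.0

for _ in range(20):                       # extinction: bell without food
    v = rescorla_wagner(v, lam=0.0)
print(f"after extinction:  V = {v:.2f}")  # decays toward 0.0
```

Note one simplification: Rescorla-Wagner treats extinction as the unwinding of V itself, whereas the inhibitory account just described adds a competing association that merely masks the original one - a distinction that matters below, when delusions return after treatment ends.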
It is thought that in schizophrenia (a disorder marked by dopamine dysregulation), individuals are relatively insensitive to this type of extinction. They do not learn to inhibit the previously reinforced response. Delusions, however, do more than persist in the absence of confirming evidence; they also persist in the face of contradictory evidence (Rubin, 1976). When faced with clear, counterfactual indications against their delusions, the deluded often confabulate, make further erroneous suppositions or preposterously transform disconfirming information into confirming information (Joseph, 1986).
Attempting to question the delusion of a deluded person is often futile. Simply bringing up the delusion activates it, strengthens it and makes it more available in the future. This process is called reconsolidation, and it makes the two associated concepts more likely to be coactivated again in the future. In the same way that beliefs can be weakened by extinction, they can be strengthened by reconsolidation (Eichenbaum & Bodkin, 2000). Depending on how the memory for a belief is reactivated, it may be opened up to disruption or simply made salient, and thus more associable and reconsolidated. Just being reactivated may make the memory traces responsible for the false belief more stable and more likely to be activated by related memories in the future. Hence, the salience of the reactivation may matter more than whether it was confirmatory or disconfirmatory in fixing the belief. Salience probably plays a large role in determining which memory traces are reconsolidated into knowledge and, in turn, enshrined as beliefs.
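To make the salience-over-valence claim concrete, here is a deliberately crude toy model (my own caricature, not an established model from the reconsolidation literature): every sufficiently salient reactivation strengthens the trace, whether the triggering evidence confirmed the belief or contradicted it.

```python
# Toy caricature, not an established model: each reactivation strengthens
# a belief trace in proportion to the salience of the triggering event,
# regardless of whether that event was confirmatory or disconfirmatory.

def reactivate(strength: float, salience: float, gain: float = 0.2) -> float:
    return strength + gain * salience * (1.0 - strength)  # saturates at 1.0

belief = 0.50
events = [
    ("confirming anecdote",      0.9),
    ("vivid counterargument",    0.8),  # disconfirming, but highly salient
    ("dry statistical rebuttal", 0.1),  # disconfirming, and barely salient
]

for label, salience in events:
    belief = reactivate(belief, salience)
    print(f"{label:26s} -> strength {belief:.2f}")
# The vivid counterargument entrenches the trace almost as much as the
# confirming anecdote; only the low-salience rebuttal leaves it untouched.
```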
As the delusion-confirming memories become less available and the delusion becomes weaker and less salient, the deluded person experiences ambivalence between belief and disbelief. Such double-bookkeeping occurs when a delusion persists but the person does not act on it consistently (Sass, 2004). When psychotic individuals go on antipsychotic medication, the memory trace mediating belief in the delusion is not erased completely; it is merely overshadowed by extinction learning. This explains why delusions often return once medication is discontinued (Chadwick, 2001). It has also been shown that confronting deluded patients with reasons why their delusion is unrealistic often strengthens the delusional belief, not only because of reconsolidation but sometimes because the patient is so inflexible that they incorporate the inconsistent information into their delusional schema (Milton et al., 1978).
In the discussion of delusions and extreme false beliefs, two very important concepts come into play: motivational salience and prediction error. Motivational salience is a quality of objects or events that affects a person's interest in them and the person's relevant actions. Salient stimuli command attention and direct goal-driven behavior (Berridge & Robinson, 1998). People who are delusional often have a skewed sense of what is salient and might become motivated by superficial or misleadingly important things. This may be continuous with the "utilization behavior" seen in bilateral frontal lobe damage, where a patient's behavior is obligatorily linked to the most obvious "affordances" presented by the objects in their immediate environment. When the frontal damage is extensive, the patient may display the "environmental dependency syndrome," in which they have no capacity to inhibit prepotent motor programs that are procedurally linked to the presence of certain objects (Lhermitte, 1986). A delusional person with skewed motivational salience does more processing between input and output but has limited capacity to inhibit prepotent salience programs. A distorted sense of importance causes them to attend to minor, emotionally laden stimuli at the expense of the bigger picture, and to act on these impulses; unlike patients with environmental dependency, however, they have enough cognitive reserve to analyze these stimuli and formulate beliefs about them. The resulting beliefs are usually simplistic, often paranoid, and likely to be faulty: the emotionally salient aspects of the situation overpower other, often more causal, considerations and contaminate subsequent conclusions.
The second valuable concept in delusory thinking, prediction error, represents the mismatch between what we expect to experience in a given situation and what we actually encounter. Prediction error has been shown to be a fundamental parameter in associative learning models, and it often determines the strength of perceived salience (Smith et al., 2006). Predictions are inaccurate when only a limited subset of factors is considered; efforts made to reduce the mismatch result in a clearer and more accurate worldview. It is thought that prediction errors and salience are interrelated and, together, greatly affect the formation of the delusions seen in individuals with schizophrenia (Murray et al., 2008). To someone with psychosis, events that are insignificant and merely coincidental can be perceived as significant, can command attention and, after analysis, can seem to relate to each other in meaningful ways. Clearly, both false beliefs and delusions have mistaken or meretricious associations at their crux.
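The interrelation can be stated compactly: if the prediction error is the difference between outcome and expectation, salience can be treated as a gain applied to the magnitude of that error. The Python sketch below shows how an inflated gain - a loose, purely illustrative stand-in for dopamine dysregulation, not a validated disease model - makes trivial coincidences cross an attentional threshold.

```python
import random

# delta = outcome - expectation; perceived salience scales with |delta|.
# The "gain" multiplier is a loose, illustrative stand-in for dopamine
# dysregulation - an assumption for this sketch, not a validated model.

def salience(outcome: float, expectation: float, gain: float) -> float:
    delta = outcome - expectation
    return gain * abs(delta)

random.seed(1)
trivial_events = [random.gauss(0.0, 0.3) for _ in range(5)]  # mere coincidences
THRESHOLD = 1.0  # arbitrary cutoff for "commands attention"

for outcome in trivial_events:
    normal = salience(outcome, expectation=0.0, gain=1.0)
    inflated = salience(outcome, expectation=0.0, gain=5.0)
    flag = "  <- attended" if inflated > THRESHOLD else ""
    print(f"event {outcome:+.2f}: normal {normal:.2f}, inflated {inflated:.2f}{flag}")
# With normal gain, none of these events are worth attending to; with the
# inflated gain, several demand attention and invite interpretation.
```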
A person does not have to be delusional to be deluded about certain statements. Studies have shown that, when asked, most people indicate that they believe that low self-esteem is a cause of aggression, that crime in America is steadily increasing and that cosmetic implants cause major disease. They believe these things even though research indicates that they are all false and it is highly unlikely that the respondents had ever been exposed to good evidence for them (Kida, 2006). But false beliefs are not all bad. Perhaps, under certain circumstances, it can be harmless, or even beneficial, to formulate a false belief or two. The best way to test a hypothesis is to take it seriously for a while. Humans learn through trial and error, and erring sometimes teaches important life lessons, demonstrating how and why certain strategies are not preferable.
Normal people are more likely to accept a false belief if others have accepted it. Collective false beliefs, often called mass delusions, have been documented several times in the United States in just the last 50 years. In the spring of 1954, tens of thousands of people were convinced that a windshield-pitting epidemic had broken out. All scientific investigations of this phenomenon reported that no increased windshield pitting had occurred at all; because of simple suggestion - the salience changed - people were looking at their windshields, searching for pits, instead of looking through them as they usually do (Medalia & Larsen, 1958). Historically, Homo sapiens have convinced each other to believe (despite good, available evidence to the contrary) in animal spirits, astrology, ghosts, psychic powers, witches and demons. Even today, superstition, religious mythology and magical thinking play a large role in our culture and in many people's everyday lives.
Chadwick and Lowe (1994) reported that four main principles have emerged from the application of cognitive therapy to delusions: a) Belief modification should begin with the least strongly held beliefs; b) Patients should be encouraged to consider the alternative to the delusional belief rather than encouraged to try to accept the alternative immediately; c) Evidence for the belief should be challenged before the belief itself; and d) The patient should be encouraged to voice the arguments against the belief himself or herself. These principles demonstrate that the treatment of delusions is a delicate matter against which many patients exhibit high levels of reactance. They also show that it is important to undermine the foundation of a belief before attempting to topple the belief directly. It can be extremely difficult to alter beliefs even in non-delusional people. Some questionnaire work measuring plasticity in occupation-related beliefs concluded that most beliefs could not be changed in the short time span of a few hours (Harris & Daniels, 2005). It seems that the dopaminergic pressure reinforcing the associations between concepts, even in non-patients, can be very difficult to overcome. Uncovering the neuroscience of belief should help to elucidate causes and treatments for false belief, but it should also tell us much more.
The Biology of Belief
Beliefs form and change in the brain. However, not much has been said about where beliefs reside in the brain, what brain processes are responsible for them or what changes in the brain when beliefs change. That almost no literature addresses this topic forces the present author to offer speculation about the neural underpinnings of belief.
First we must consider the important question of whether different beliefs can be said to share neurological characteristics. Clearly, no two beliefs are the same, and thus the brain basis for any two beliefs must differ. Recognizing that there are many kinds of beliefs makes it clear that using the reductive method to pinpoint where and how beliefs form in the brain is a difficult task. Many singular concepts can be reduced to their component parts in the way that our brain can be reduced to individual cells or the way cells can be reduced to molecules. Not all concepts have to be singular to be broken down into their constituent parts, though. Scientists have been keen on explicating molecular and neuroscientific reductionist accounts of memories, which, like beliefs, consistently differ from one another in many ways. These individual differences have not stopped memory researchers from dissecting and classifying memories on neurological grounds, and likewise they should not impede our progress. Indeed, from a neuroscientific perspective, belief and memory overlap substantially with each other; it is just not yet entirely clear how.
Memories are recorded in the brain as alterations that modify the firing patterns between neurons. These modifications are mediated by either physical or chemical changes in cellular structures. One of the most plastic components of the neuron is the synapse, which takes advantage of protein synthesis to either increase or decrease the sensitivity of the postsynaptic neuron to the presynaptic neuron (Kandel et al., 2000). Small networks of neurons that "fire together" to create representations of things in the environment become "wired together." After they are "wired up" they comprise a stable representation of some feature in the environment that can be activated to contribute to a sensory perception or to mental imagery. The smallest and most localized of these networks code for the most basic stimulus features and are commonly called neural assemblies (Kandel et al., 2000). When a number of features held by different assemblies are coactivated, they bind together to create representations of objects and concepts (Baars & Gage, 2007). Most memories involve coactivations across large numbers of these neural assemblies, building features into complex structures. The more the assemblies responsible for a memory are coactivated together, the more entrenched the memory becomes and the stronger the affinity between the coactivated assemblies. Beliefs, like memories, must be composed of neural networks and their constituent assemblies.
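The "fire together, wire together" principle corresponds to the Hebbian learning rule, in which a connection weight grows with the coactivity of the two units it links. Below is a minimal Python sketch; the network size, learning rate and pattern-completion step are all arbitrary illustrative choices.

```python
import numpy as np

# Hebbian rule: each weight grows in proportion to the coactivity of its
# pre- and postsynaptic units. After repeated coactivation of an assembly,
# a degraded cue can reactivate the full pattern. Network size, learning
# rate and the completion rule are arbitrary illustrative choices.

rng = np.random.default_rng(0)
n = 8
pattern = (rng.random(n) > 0.5).astype(float)   # one assembly of coactive units

W = np.zeros((n, n))
for _ in range(50):                             # repeated coactivation...
    W += 0.1 * np.outer(pattern, pattern)       # ...strengthens mutual weights
np.fill_diagonal(W, 0.0)                        # no self-connections

cue = pattern.copy()
cue[: n // 2] = 0.0                             # degrade half of the cue
completed = np.maximum(cue, (W @ cue > 0).astype(float))  # pattern completion
print("stored   :", pattern)
print("cue      :", cue)
print("completed:", completed)                  # the full assembly is restored
```

The point of the sketch is the entrenchment claim in the text: every additional round of coactivation adds another increment to the weights, making the stored pattern easier to reinstate and harder to disrupt.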
Conscious, associative memories, and the networks responsible for them, are commonly thought to be etched into the synapses of neurons of the cerebral cortex (Thompson, 2005). The cortex is a wrinkled sheet of neural tissue covering the brain that (because of the distribution and plasticity of its synapses) exhibits a more profound capability for learning than any other area. The connections that form outside of the cortex, in subcortical areas of the brain, are responsible for automatic and reflexive behaviors (and perhaps even for behaviors that reflect ingrained beliefs) but probably not for belief formation or change. Activity in the cortex, especially in frontal and parietal fields, is responsible for conscious thought - the kind of thought necessary for belief dynamism. At first, like all new memories, freshly generated beliefs are stored in the hippocampus along with other contextual elements that surrounded the belief at its inception (Baars & Gage, 2007). As the information is gradually transferred from the hippocampus to other cortical areas, it becomes separated from its episodic context, making it difficult to recall how and when it was first learned (Smith & Mizumori, 2006). This phenomenon, called source amnesia, probably contributes to the difficulty of recalling whether memorable information had factual merit (Schacter et al., 1984). Again, limitations inherent in human memory retrieval impact the accuracy of beliefs. But, in much the same way that beliefs are more than knowledge, they are also more than just activated memories.
Beliefs are memories that have associative meaning relative to other memories. This associative or propositional meaning allows them their utility - their applicability in problem solving, self-directed action and day-to-day life. The prefrontal cortex (PFC), the "central executive" of the cortex, probably contributes heavily to our ability to piece together simple memories to create beliefs (Kandel et al., 2000). The PFC sits above and in front of the other brain areas and fine-tunes our actions by inhibiting, overriding and commanding posterior-cortical, subcortical and spinal areas to modify their tendencies and reflexes (Sylvester, 1993). The PFC has this ability because it is wired to receive fully processed information from a large number of different areas, giving it the perspective to make multimodal, cross-conceptual associations. It also has the ability to inhibit these other areas, allowing it to replace impulsive responses with better-informed responses and to orchestrate the efforts of separate brain modules. The cerebral cortex, guided by the PFC, is probably the only part of the brain with an overt capacity for logic and the weighing of evidence, but as we have seen, the thoughts and behaviors that it directs are often approximations that may result in faulty beliefs. Conversely, reflexive, subcortical areas such as the brain stem, the midbrain, the basal ganglia and the cerebellum operate outside of consciousness, yet they can still administer behaviors and decisions that are functional and that can appear logical (Baars & Gage, 2007). For example, in most animals the cortex is proportionally very small relative to these subcortical areas, yet animal behavior is highly functional and purposeful (Alcock, 2001). Does this mean that animals are guided by instinct and implicit learning but not belief? It can be difficult, especially in animals that cannot report on their experiences, to determine whether the response to a stimulus is mediated by conscious or unconscious brain activity. The salivation response shown by Pavlov's dogs is contingent on an association in the brain; however, it is not easy to determine whether this association is an unconscious, automatic reflex (Moscovitch et al., 2007). If the dog salivates after it hears a bell and has no insight into why, the response would be indicative of an associative memory in the animal's subcortex without a corresponding belief. A conscious, propositional association between the bell and the provisioning of food could mediate the response, though, in which case it would constitute a belief. The response could also be both reflexive and conscious, depending on the given animal's mental state.
If the response exhibited by a Pavlovian dog is mediated only by a subcortical reflex, then it does not constitute a belief. But if the response involves cortical processing, it may. And if the dog is intelligent enough to become aware of the association and to use memories of it to inform other behaviors, then the association should be seen as a true belief.
As we pointed out earlier, many human beliefs probably start unconsciously as gut feelings, acquired through classical or operant conditioning, that we later become conscious of. Like other animals, through trial and error, reward and punishment, we are conditioned by our environment to have certain tendencies. If we can become aware of these tendencies (associate the association with other associations), they are no longer simply behavioristic and can be called true beliefs. Once a belief becomes highly associated with other memories, it is reconsolidated, made salient and potentiated for use in guiding behavior. Like the simplest of animals, humans have tendencies that they never become aware of. The difference between a human and most invertebrates, though, is that humans can become aware of most of their tendencies because their attention can be directed to the high-level abstractions necessary for introspection.
Animals, which have sense organs to receive stimuli and muscles to react to them, are continuously bombarded by sensory stimuli from their environment. Over time, guided by reflexes, instincts, innate behavioral tendencies and prepared learning, they develop complex ways of interpreting the perceptions that stream through their senses. Some co-opt this process, drawing conclusions about experiences to form subjective knowledge (Greenough et al., 1987). This process uses knowledge to build knowledge. New learning interacts with, and is perceived in terms of, old learning, as no belief ever appears in isolation. Swiss psychologist Jean Piaget (1977) viewed learning in terms of two basic processes: assimilation and accommodation. He defined assimilation as the process whereby individuals interpret their environment in terms of the internalized model of the world that they have been forming since birth. Accommodation is the process of changing the internalized model to accommodate new information. These two processes were intended to apply to knowledge but can also be applied to belief. Beliefs involve a good deal of assimilation and accommodation - mental work that requires working memory, active representation and modeling, comparison of alternative scenarios and conscious, cortical deliberation.
How much consciousness - or, alternatively, how much cortex - do you need in order to have the processing power to truly believe things? There is very little research on belief in animals, although it is assumed that most animals are relatively limited in what they can believe, whereas humans, with large brains and language, are fully equipped to acquire and personally manufacture beliefs about virtually anything (Damasio, 2000). It seems clear that many vertebrates, especially mammals, are rational agents that can be understood not only from a behaviorist but also from a cognitivist perspective (Dennett, 1991).
Daniel Dennett (1998) has affirmed that many species of animals can hold beliefs - especially if one uses a liberal definition of belief. The capacity to entertain explicit beliefs and to evaluate and reflect on them, though, is probably a recently evolved innovation, rare or absent in other species. Humans alone embellish beliefs using language. Beliefs held by intelligent animals, although not implicit, are less explicit than those of humans because animals associate their beliefs with a much smaller number of concepts. For instance, an animal may recognize a belief - knowing that it used this belief in the past - but may not be equipped with the right conceptualizations or vocabulary to know how to doubt or question that belief. Humans, as we have seen, can also have trouble questioning beliefs, but humans are informally shown by others how to believe - lessons animals rarely receive.
At the outset of this section, we asked three questions about beliefs that we still have not answered sufficiently. Beliefs probably reside, like memories, within networks of neurons and (in a psychological sense) within the mental imagery that these networks create. The cortex, especially the PFC and Wernicke's and Broca's language areas, is probably responsible for the human ability to manipulate, question and be aware of beliefs. Belief change probably involves the dissociation of shared activity between the networks responsible for two previously associated memories. We have seen that this dissociation occurs more easily if the midbrain dopamine neurons that tie the association to biological drives release the associated networks from each other. Clearly, these answers are of limited practical use. Allow me to share a personal anecdote that may help shed light on these issues.
Recently I heard a rustle in the top of a tree followed by loud chirping that continued for a number of seconds. A large leaf fell from the tree, and for at least two full seconds I mistook the large leaf for the bird that was making the chirping sound. I did not have my glasses on, and I did not realize that the leaf was not the bird until I noticed that it was falling in a way that was characteristically, stereotypically leaf-like. For hundreds of milliseconds I "believed" that I was seeing the bird, despite the fact that the real bird was totally obscured by foliage the entire time.
It became apparent to me that this illusion was caused by an error in perceptual binding. The neural networks responsible for two different constructs, in two different sensory modalities, were activated. Then these two perceptions were bound together - in the jargon of "operational architectonics," they were integrated in synchronous oscillatory processing. Because they were visually striking and loud enough, the sight and sound gained privileged access to the cortex after being judged for relevance by the thalamus. At first, these stimuli were processed for content at the level of primary sensory cortex: primary visual (striate) cortex for the sight of the leaf and primary auditory cortex for the sound of the bird. Here, their spatial and temporal frequencies were given the chance to excite existing neural networks in order to determine whether their features mapped onto anything I had experienced before. These messages were passed from the primary sensory areas to secondary sensory areas, where they excited assemblies that corresponded to their unique traits, allowing more detailed identification of structure and form. Then the messages traveled from the secondary sensory areas to higher-order, more globally communicative areas.
The information was allowed to spill into the brain regions responsible for processing experiences outside of a single modality, such as the prefrontal, occipitotemporal and intraparietal cortices. These "association" or "convergence" areas, which are equipped with the right inputs to consider multisensory information, have networks that can accommodate the binding of visual with auditory stimuli. According to the neural binding hypothesis, brain areas with different neuronal assemblies fired in synchrony to unite the different features of these neuronal representations. In my opinion, these higher-order interpretations are sent back to the earlier (primary and secondary) sensory processing areas just mentioned, creating visual imagery that corresponds to the interpretations of the association areas. In other words, bottom-up mental imagery, evoked by a top-down interpretation of a bottom-up perception, caused a picture of a bird in my mind's eye to be superimposed over the falling leaf.
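The binding-by-synchrony hypothesis invoked here is often illustrated with coupled-oscillator models. In the Python sketch below, two Kuramoto-style oscillators stand in for the "chirp" and "leaf" assemblies; the frequencies and coupling constant are arbitrary illustrative values, and the sketch illustrates the hypothesis rather than providing evidence for it.

```python
import math

# Two Kuramoto-style oscillators stand in for the auditory ("chirp") and
# visual ("leaf") assemblies. Uncoupled, their phase gap drifts; coupled,
# they phase-lock - the oscillatory analogue of feature binding. All
# frequencies and the coupling constant are arbitrary illustrative values.

def phase_gap(coupling: float, t_end: float, dt: float = 1e-4) -> float:
    """Return the wrapped phase difference after simulating to t_end."""
    theta1, theta2 = 0.0, 1.0
    w1, w2 = 2 * math.pi * 40.0, 2 * math.pi * 43.3  # "gamma-band-like" rates
    for _ in range(int(t_end / dt)):
        gap = theta2 - theta1
        theta1 += (w1 + coupling * math.sin(gap)) * dt
        theta2 += (w2 - coupling * math.sin(gap)) * dt
    gap = theta2 - theta1
    return math.atan2(math.sin(gap), math.cos(gap))  # wrap to (-pi, pi]

for k in (0.0, 20.0):
    gaps = [phase_gap(k, t) for t in (1.0, 1.05, 1.1)]
    print(f"coupling {k:4.1f}: gaps {[f'{g:+.2f}' for g in gaps]}")
# With zero coupling the gap keeps drifting between samples; with coupling
# it settles to one fixed offset - the assemblies oscillate "in synchrony."
```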
I heard chirping, saw a large leaf begin to fall and failed to question whether the two stimuli might represent different entities. This perception was automatic in the sense that my brain fused the two stimuli before I could question the association consciously. After I had the time to do so (it takes tens of milliseconds for the frontal lobe to have access to the output of the sensory areas), I still did not change my immediate perception and found myself consciously expecting to see the leaf fly away. It probably was not until motion neurons, located in visual area 5, identified a familiar pattern in the motion of the leaf that I became aware that a bird would never fall as slowly and as waveringly as a leaf. This mistake in binding, a common occurrence underlying everyday mistakes, is sometimes called an illusory conjunction. It is clear that I fully believed that this leaf was a bird, and this was due to binding between stimuli that should not have been bound. This association took place automatically and had to be questioned deliberately in order to be fixed (or perhaps it was superseded by the subsequent automatic perception of the leaf’s motion).
This example can be thought of as a bottom-up belief. Incongruous sensory elements were fused or bound before higher-order areas could intercede. This can probably take place with emotional learning (such as conditioned fear) and procedural learning (such as the salivation reflex). Binding can also be controlled by higher-order association areas and the resultant associations could be thought of as top-down beliefs.
It seems rational to assume that many false beliefs occur because the wrong concepts are bound in association areas where different features can converge into one. Such perceptual illusions are rare because people become expert at perceiving physical events without error from an early age. Higher-order perceptions and representations, though - ones that are not seen but imagined - are probably much more fallible. Such cognitive perceptions consist of judgments involving many moving parts and can be extremely difficult to parse apart, collect evidence for and question systematically. These perceptions, held in association areas, involve concepts like existence, import and relative efficacy, whereas the lower-order perceptions, held in sensory areas, involve things like contour, color and timbre. Higher-order beliefs probably work much like sensory ones, and go wrong for the same reasons. Both involve perceptual elements that, after limited information is considered, are bound together to create a new, higher-order perception. Sometimes, as with a superstitious belief, these are chimerical contrivances that have no basis in the real world. Once memories that should never have been bound are bound, mental imagery corresponding to the conjunction is created. Once this imagery is analyzed and acted upon a few times, aspects of it become implicit, making it difficult to regain conscious insight into the reason for the underlying belief. Happily, with experience and concerted practice, we become accurate and proficient in the way that we conjoin higher-order concepts.
Ontology of Belief
The objective existence of belief and similarly indefinite concepts in psychology has been questioned. Ontology, the philosophical study of being, involves determining what things can be said to exist in reality and what kinds of existence there are. Some philosophers, most notably of the Platonic school, contend that to exist, something must be referred to, or referable to, by a noun. Indeed, according to some, all abstract nouns refer to existent entities (Griswold, 2001). Beliefs then, by this criterion, do exist. Other philosophers contend that nouns do not always refer to entities but often refer to collections of entities or events that do not necessarily sum to an objectively existent whole. Thus, beliefs may not be real, only nominal.
There do not seem to be any established methods for determining the existence of non-physical entities such as beliefs, minds, communities, thoughts or happiness. Beliefs certainly cannot be scrutinized or manipulated as easily as concrete, physical objects can. That a belief can be "held" but not touched tells us that we can bring some schemas to bear on beliefs but that many schemas fail to be compatible with them. When some schemas work with an abstract noun, it implicitly appears to be real. Beliefs may be defensible in some instances, but if the concept is indefensible in most - if it is incompatible with most scientific schemas - can beliefs really be said to exist? Habitual, unexamined use of the word belief probably makes people implicitly assume that beliefs are as real as any physical object. This suggests that belief can be conceived of as a patchwork of explanations and abstractions that has a place in lay discourse but very limited scientific utility. One could even go so far as to say that many common concepts, such as the self, love, attitude, consciousness, the soul and beliefs, can be seen as inadequately specified, indefensible fictions.
Many in this area of research contend that if belief is a defensible, adequately specified psychological construct then it should be possible to identify the underlying neural processes that support it (Baker, 1989). However, if beliefs are not equivalent to mental states, are incoherent or ultimately indefensible, then any attempt to identify their underlying neural substrates will fail. Much of the contemporary literature on beliefs in philosophy has been devoted to the validity of the term belief as a natural or neuroscientific phenomenon.
Jerry Fodor (1985) published well-received work supporting the notion that most people's common-sense understanding of belief is correct. This is sometimes called the "mental sentence theory," which perceives beliefs as simple statements and purports that the way people talk about beliefs in everyday life is more or less complete and scientifically valid (Baker, 1989). Three twists on this conception exist. Stephen Stich argued that our common-sense understanding of belief might not be entirely correct but that it is useful until we can devise a more scientifically accurate understanding. Paul and Patricia Churchland advocate a view called eliminativism, or eliminative materialism, which argues that the common-sense understanding of beliefs is not scientifically accurate and will eventually be replaced by a different, neuroscientifically accurate account. These philosophers of mind argue that no coherent neural basis will be found for many everyday psychological concepts such as belief, desire, or even thought. Daniel Dennett (1998) and Lynne Rudder Baker (1989) take the third position on the common-sense understanding of beliefs, in what Dennett has called the "intentional stance." Dennett says that our current conceptualization of what beliefs are is entirely wrong but that it has some redeeming value, such as its utility in generating testable hypotheses about intent, motivation and logic. It may never be completely clear, even with definitive and comprehensive knowledge of neuroscience, whether and when belief is an ontologically valid construct. Beliefs, like consciousness, carry crucial subjective aspects that science may never be able to capture or explicate. Certainly, however, the term belief is functional and instructive for learners, children especially. Imagine growing up without the concept of belief. During early cognitive development the concept of belief is instrumental in the creation of mental models concerning empathy, decision and knowledge acquisition. Even if beliefs flout ontology, at least they facilitate ontogeny.