According to the social psychologists Susan Fiske and Shelley Taylor, human beings are cognitive misers; that is, we are forever trying to conserve our cognitive energy. Given that we have a limited capacity to process information, we adopt strategies that simplify complex problems. We accomplish this by ignoring some information to reduce our cognitive load, by "overusing" other information to keep from having to search for more, or by accepting a less-than-perfect alternative because it is almost good enough. The strategies of the cognitive miser may be efficient, making fairly good use of our limited cognitive capacity to process a nearly infinite world of information, but they can also lead to serious errors and biases, especially when we select the wrong simple strategy or, in our rush to move on, ignore a vital piece of information.
Some readers may be disheartened to find that they are not as rational or as thorough in their thinking as they might have supposed. It is comforting to believe that the human mind has unlimited, even hidden, powers, as proclaimed in many pop psychology books. It is comforting to believe that one has a personal pipeline to the absolute. But it is dangerous to fail to realize that one's shortcuts can produce biases and prejudices that are far from the absolute. Unless we recognize our cognitive limitations, we cannot take steps to overcome them. For example, if we fail to realize that we often judge others on the basis of stereotypes, or that the way information is presented can distort our judgment, we are unable to take steps to correct our errors. What is worse, by not admitting that we are cognitive misers, we may come to believe that our personal perspective is the only perspective there is, and that it is thereby synonymous with Truth. As history demonstrates, it is very easy to commit acts of hatred and cruelty when one knows oneself to be absolutely right.
The Effects of Context on Social Judgment
Reference Points and Contrast Effects. Real estate agents implicitly understand one basic principle of social cognition: judgment is relative. How we evaluate and perceive an object is highly dependent on the nature of the alternatives around it, the point of reference we use to make a comparison. An object can appear to be better or worse depending on the quality of the objects with which it is compared.
Now we can see how that visit to the dilapidated house can influence our purchase. Perhaps the next house you look at is not really ideal. But compared to that last one, what an improvement! The yard and the master bedroom are bigger. The interior is in good shape. We won't need to paint it for at least three years. And the price is only slightly higher than what they were asking for that old shack. What a deal! We'll take it right away, before the owner has a chance to change his mind!
Priming and Construct Accessibility. One of the standard comedic devices on television sitcoms is the double entendre. A typical double entendre goes like this: Early in the show, the young teenage daughter tells everyone but her father that she made the school's coed softball team as the starting catcher. Meanwhile, her father finds out about a big party sponsored by some of the daughter's classmates, which promises to have "some wild goings-on" and just happens to be scheduled on the same night as the softball game. The climactic scene involves the father overhearing his "innocent" daughter telling her friend about a pitcher:
"Boy, I can hardly wait for tonight. I am so excited. I've never played with Tommy before. I love his technique. If he tries, I know he can go all the way. Tommy has wonderful stuff."
The father is outraged and storms out of the house to intercept his young daughter. The audience is "entertained" because they know what is happening: the father thinks his daughter is talking about sex when she is really discussing softball.
A study by Tory Higgins, William Rholes, and Carl Jones illustrates the role of priming in the formation of impressions of other people. In this experiment, subjects were asked to participate in two "different" research projects, one on perception and one on reading comprehension. The first experiment served to prime different trait categories: some of the subjects were asked to remember positive trait words (adventurous, self-confident, independent, and persistent), whereas the others were asked to remember negative trait words (reckless, conceited, aloof, and stubborn). Five minutes later, as part of the "reading comprehension" study, subjects read an ambiguous paragraph about a fictitious person named "Donald." The paragraph described a number of behaviors performed by Donald that could be seen as either adventurous or reckless (e.g., skydiving), self-confident or conceited (e.g., believes in his abilities), independent or aloof (e.g., doesn't rely on anyone), and persistent or stubborn (e.g., doesn't change his mind often). The subjects then described Donald in their own words and rated how desirable they considered him to be. The results showed that the priming manipulation influenced the subjects' impressions of Donald: When negative trait categories had been primed, they characterized Donald in negative terms and saw him as less desirable than when positive categories had been primed.
Let us look at priming in the mass media. Several studies have shown that there is a link between which stories the media cover and what viewers consider to be the most important issues of the day. In other words, the mass media make certain issues and concepts readily accessible and thereby set the public's political and social agenda. To take one example, in a pioneering study of an election in North Carolina, Maxwell McCombs and Donald Shaw found that the issues voters came to consider most important in the campaign coincided precisely with the amount of coverage those issues received in the local media. Similarly, the problems of drug abuse, NASA's incompetence, and nuclear energy were catapulted into the nation's consciousness by the coverage of dramatic events such as the drug-related death of basketball star Len Bias, the Challenger explosion, and the accident at Chernobyl.
In a set of ingenious experiments, the political psychologists Shanto Iyengar and Donald Kinder demonstrated the importance of priming in accounting for the relationship between repeated media exposure and issue importance. In their studies, Iyengar and Kinder actually varied the contents of news shows watched by the participants in their experiments. Specifically, they edited the evening news so that participants received a steady dose of news about a specific problem facing the nation.
The results were clear. After a week of viewing the edited programs, participants emerged from the experiment convinced that the target problem, the one primed by the extensive coverage in the shows they watched, was a more important problem for the country to solve than they had believed before viewing the shows. What is more, the research participants acted on their newfound perceptions, evaluating the current president's performance partly on the basis of how he handled the target problem. As the political scientist Bernard Cohen observed, the mass media "may not be successful much of the time in telling people what to think, but it is stunningly successful in telling its readers what to think about. . . . The world will look different to different people, depending . . . on the map that is drawn for them by the writers, editors, and publishers of the papers they read."
Framing the Decision. Another factor influencing how we construct our social world is decision framing: whether a problem or decision is presented in such a way that it appears to represent the potential for a loss or for a gain.
Stereotypic Knowledge and Expectations. One of the most important consequences of categorization is that it can invoke specific data or stereotypes that then guide our expectations. For example, each of the following words probably invokes some very specific meanings: yuppie, college professor, party girl, racist, and liberal Democrat. Once we categorize a person or an event using one of these terms (as opposed to others), we base our expectations about future interactions on the accompanying stereotypes. Suppose I go into a restaurant and classify it as a "bar" as opposed to a "fine dining establishment." I will probably think of the restaurant in different terms and act in different ways, and probably foolishly if I have mistakenly categorized the restaurant and invoked the wrong script.
An interesting study by John Darley and Paget Gross demonstrates the power of expectations created by a stereotype to influence the way we think and make judgments. In their experiment, they told four different stories about "Hannah," a fourth-grade schoolchild. After hearing one of the stories, subjects were asked to evaluate Hannah's academic abilities.
Darley and Gross found that, when subjects saw just one of the two videotapes of Hannah playing, they rated her ability as average; Hannah was just like everyone else in her class. In other words, subjects who saw these films did not use their stereotypes in making their judgments. However, when subjects also watched the film of Hannah solving achievement-test problems, the effects of the stereotypes became apparent: Subjects rated Hannah's ability as high when she appeared to come from a high socioeconomic background, and they interpreted her ambiguous performance as consistent with their judgments, evaluating the test as easier and estimating that fewer problems were solved when Hannah came from a poor background. Two lessons can be learned about stereotypes from this experiment. First, most people seem to have some knowledge of stereotype effects and an ability to control them to some extent. Second, despite this knowledge, stereotypes still guided judgment once subjects were given ambiguous performance data to interpret, information that lends a false sense of rationality to the judgment.
Our stereotype leads us to see a relationship that then seems to provide evidence that the original stereotype is true.
In-Group/Out-Group Effects. One of the most common ways of categorizing people is to divide the world into two groups: those in "my" group and those in the "out" group. For example, we often divide the world into us versus them, my school versus yours, Americans versus foreigners, my ethnic group versus yours, or those who sit at my lunch table versus the rest of you. When we divide the world into such realities, researchers have found considerable evidence for at least two consequences, which can be termed the "they-all-look-alike-to-me" effect and in-group favoritism.
In general, we tend to see members of out-groups as more similar to each other than the members of our own group. For example, Bernadette Park and Myron Rothbart asked members of three different sororities to indicate how similar members of each sorority were to each other. They found that the women perceived more similarity among the members of other sororities than among the members of their own. In other words, the women in other sororities all looked alike. One explanation for this effect is that when the subjects thought of members of their own group, they thought of them as individuals, each with a unique personality and lifestyle. On the other hand, when they thought of out-group members, they considered them in terms of the group label and stereotype and thus saw each as similar to this group identity.
(Re)Constructive Memory
What is the role of memory in social cognition? Human memory is primarily reconstructive in nature. By this I mean that we do not record a literal translation of past events, like a tape recorder or a VCR, but instead re-create many of our memories from the bits and pieces we can recall and from our notions and expectations of what should have been. Perhaps you disagree with this assertion about human memory; most people do. If you do happen to disagree, you should expect an argument from Timothy Hennis, a sergeant in the U.S. Army. The research that led to the conclusion that memory is (re)constructive probably saved Sergeant Hennis's life. Let me explain.
During the trial, two eyewitnesses placed Hennis at the scene of the crime. Chuck Barrett testified that he had seen Timothy Hennis walking in the area on the morning of the murders. Sandra Barnes testified that she had seen a man who looked like Hennis using a bank card that police had earlier identified as one stolen from the Eastburn residence. In spite of the fact that Hennis had an airtight alibi for his whereabouts on the night of the murders and there was no physical evidence (fingerprints, clothing fibers, footprints, bloodstains, hair) to link him to the scene, a jury convicted Hennis and sentenced him to death by lethal injection.
Hennis spent 845 days awaiting his execution on death row before a judge from the court of appeals ordered a new trial on the basis of a procedural technicality unrelated to the eyewitness testimony. Hennis's lawyers knew that if Hennis had any chance of overturning his conviction, they would need to attack the eyewitness testimony placing him at the scene of the crime. And it was weak evidence. Chuck Barrett had originally told police two days after the murders that the man he saw had brown hair (Hennis is blond) and was six feet tall (Hennis is much taller), and when asked to identify Hennis in a photo lineup, Barrett was uncertain of his judgment. When Sandra Barnes was first contacted by police a few weeks after the crime, she told them firmly and emphatically that she had not seen anyone at the bank machine that day. Why, then, had both of these eyewitnesses so confidently placed Hennis at the scene of the crime at the trial? Were they both liars? Probably not; their memories of the events had been leveled and sharpened, constructed and shaped, by over a year of questioning by police and lawyers.
Elizabeth Loftus, a talented cognitive psychologist, served as an expert witness at the second Hennis trial. Loftus had conducted a fascinating program of research on reconstructive memory, investigating how such "suggestive" questioning can influence memory and subsequent eyewitness testimony. In one of her experiments, Loftus showed subjects a film depicting a multiple-car accident. After the film, some of the subjects were asked, "About how fast were the cars going when they smashed into each other?" Other subjects were asked the same question, but with the word smashed replaced by the word hit. Subjects who were asked about smashing cars, as opposed to hitting cars, estimated that the cars were going significantly faster and, a week after seeing the film, were more likely to state that there was broken glass at the accident scene (even though no broken glass was shown in the film).
Leading questions can not only influence the judgment of facts (as in the case above) but can also affect memories of what happened. In another experiment, Loftus showed subjects a series of slides depicting an auto-pedestrian accident. In a critical slide, a green car drove past the accident. Immediately after viewing the slides, half of the subjects were asked, "Did the blue car that drove past the accident have a ski rack on the roof?" The remaining subjects were asked the same question but with the word blue deleted. The results showed that the subjects who were asked about the "blue" car were more likely to claim incorrectly that they had seen a blue car (even though in reality it was green). A simple question had changed their memory. In subsequent experiments, Loftus has succeeded in planting false memories of childhood experiences in the minds of young adults simply by instructing a close relative to talk about these events as fact. For example, a young man's older sister might say to him, "Remember the time when you were five years old and you got lost in the shopping mall and went into a panic, and an oldish man tried to help you? When we found you, you were holding the old man's hand and were crying." Within a few days of hearing such a story, most people will have incorporated the planted memory into their own history, will have embroidered it with details ("Oh yeah, the old man who helped me was wearing a flannel shirt"), and will be absolutely certain that it really happened, when, in fact, it didn't.
In her testimony at the Hennis trial, Loftus discussed the nature of reconstructive memory and the ways an interrogation can lead an observer to construct an imaginary scenario and then believe that it really happened. Consider the testimony of Sandra Barnes. At first she could not recall the presence of anyone at the bank teller machine. However, after listening to months of television coverage and reading a year's worth of newspaper stories about the crime, coupled with the pressure stemming from the fact that she was the only one who might have seen the real murderer, Barnes reconstructed a memory of her visit to the bank machine that included someone who looked like Hennis, in a manner similar to the subjects who recalled a blue rather than a green car in the Loftus experiment. By rehearsing this new construction repeatedly for lawyers and judges, Barnes came to accept it as fact. It is important to note that Sandra Barnes was not intentionally lying. She was simply reconstructing the event. She came to believe what she was saying. A similar story can be told about Chuck Barrett's testimony. Subsequently, the man he saw the morning of the murders was conclusively identified as another man on his way to work, not Hennis.
Fortunately for Hennis, this was not the end of the story. On retrial, with the jury now alerted to the reconstructive nature of eyewitness memory, Hennis was found not guilty.
How Conservative Is Human Cognition?
Much evidence has accrued to indicate that the confirmation bias is a common tendency in human thought. For example, in an experiment by Mark Snyder and William Swann, female college students were told that the person they were about to meet was either an extrovert (outgoing, warm, and friendly) or an introvert (reserved, cool, and aloof). They then prepared a set of questions that they would like to ask this person in order to get to know him or her. What types of questions did they wish to ask? In general, subjects sought to confirm their original hypothesis. Subjects who thought they would meet an extrovert were more likely to ask questions that confirmed that hypothesis, such as "What do you do to liven up a party?" and "In what situations are you most talkative?" Those expecting to meet an introvert were likely to ask questions such as "In what situations do you wish you could be more outgoing?" and "What things do you dislike about loud parties?"
The confirmation and hindsight biases provide support for the proposition that human cognition tends to be conservative. By conservative, I do not mean that the mind adopts a certain political orientation. When applied to social cognition, conservatism refers to the tendency to preserve that which is already established, to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses.
The Attitude-Behavior Relationship in Our Heads. How can we reconcile this body of research with our intuition that a person's beliefs are strongly related to his or her behavior? One way is to conclude that there is no relationship between attitudes and behavior, that it is all in our heads; we just imagine that people act consistently with their beliefs and attitudes. There is some support for this proposition. In the last two chapters we saw the power of the social situation to induce conformity. LaPiere's innkeepers undoubtedly faced strong social pressures to say "no" to an inquiry about admitting Chinese people; at the same time, they faced contrary pressures (to avoid making a scene) to lodge the young Chinese couple once they appeared at the hotel. Perhaps we are nothing more than creatures of our immediate social environment.
In support of the hypothesis that the perception of attitude-behavior consistency is "all in our heads" is the common tendency among people to attribute the cause of an individual's behavior to characteristics of the individual, such as personality traits and attitudes, rather than to the power of the situation itself. For example, the question "Why did little Johnny fail his homework assignment?" is often answered "Because he is stupid or lazy," ignoring such situational factors as overcrowded schools or a poor academic environment. In other words, when we see something happen to a person, we assume that it reflects the kind of person he or she is. We would like to believe that people get what they deserve and deserve what they get.
Edward Jones and his colleagues call this tendency to attribute the cause of a behavior to a corresponding characteristic of a person a correspondent inference-that is, the behavior of the person is explained in terms of an attribute or trait that is just like the behavior. Some examples include: “Sam spilled wine on the carpet because he is clumsy (not because of a momentary distraction),” and “Amy spoke sarcastically to Ted because she is a hostile person (not because she momentarily lost her temper).”
An experiment by Edward Jones and Victor Harris demonstrates that such inferences can be pervasive. In this experiment, subjects read essays either favorable or unfavorable to Fidel Castro's regime in Cuba. Subjects inferred that the essay writers personally believed what they had written, even when they knew the writers had been assigned which side of the issue to defend.
The Self-Serving Bias. The self-serving bias refers to the tendency for individuals to make dispositional attributions for their successes and situational attributions for their failures. For example, in a basketball game, if Linda sinks a difficult shot, chances are she will attribute it to her great eye and leaping ability. On the other hand, if she misses, she might claim that she was fouled or that there was a soft spot in the floor that led to a mistiming of her jump.
Automobile driving provides many opportunities for motorists to engage in the self-serving bias. For example, the following are actual written reports given by drivers involved in automobile accidents. As can be seen, the self-serving bias is much in evidence.
The telephone pole was approaching fast; I attempted to swerve out of its way, when it struck the front of my car.
An invisible car came out of nowhere, struck my vehicle, and vanished.
My car was legally parked as it backed into the other vehicle.
As I reached an intersection, a hedge sprang up obscuring my vision. I did not see the other car.
A pedestrian hit me and went under my car.
Researchers have gathered a great deal of evidence in support of the informal observation that we take credit for the good and deny the bad. For example: (a) Students who do well on an exam attribute their performance to ability and effort, whereas those who do poorly attribute it to a poor exam or bad luck; (b) gamblers perceive their successes as based on skill and their failures as flukes; (c) when married persons estimate how much of the housework each routinely does, their combined total of housework performed amounts to more than 100 percent; in other words, each person thinks he or she did more of the work than the partner thinks he or she did; (d) in general, people rate themselves more positively than others do, believing that they themselves are "better than average"; (e) two-person teams performing a skilled task accept credit for good scores but assign most of the blame for poor scores to their partner; and (f) when asked to explain why someone else dislikes them, college students take little responsibility themselves (i.e., there must be something wrong with the other person), but when told that someone else likes them, the students attribute it to their own personality. As Greenwald and Breckler note, "The presented self is (usually) too good to be true; the (too) good self is often genuinely believed."
Basically, cognitive dissonance is a state of tension that occurs whenever an individual simultaneously holds two cognitions (ideas, attitudes, beliefs, opinions) that are psychologically inconsistent. Stated differently, two cognitions are dissonant if, considering these two cognitions alone, the opposite of one follows from the other. Because the occurrence of cognitive dissonance is unpleasant, people are motivated to reduce it; this is roughly analogous to the processes involved in the induction and reduction of such drives as hunger or thirst, except that, here, the driving force arises from cognitive discomfort rather than physiological needs.
Dissonance can be reduced by changing one or both cognitions in such a way as to render them more compatible (more consonant) with each other, or by adding new cognitions that help bridge the gap between the original cognitions.
But for most people it is not easy to give up smoking. Imagine Sally, a young woman who tried to stop smoking but failed. What does she do to reduce dissonance? In all probability, she will try to work on the other cognition: "Cigarette smoking produces cancer." Sally might attempt to make light of the evidence linking cigarette smoking to cancer. For example, she might try to convince herself that the experimental evidence is inconclusive. In addition, she might seek out intelligent people who smoke and, by so doing, convince herself that if Debbie, Nicole, and Larry smoke, it can't be all that dangerous. Sally might switch to a filter-tipped brand and delude herself into believing that the filter traps the cancer-producing materials. Finally, she might enhance the value placed on smoking; that is, she might come to believe smoking is an important and highly enjoyable activity that is essential for relaxation: "I may lead a shorter life, but it will be a more enjoyable one." Similarly, she might actually try to make a virtue out of smoking by developing a romantic, devil-may-care self-image, flouting danger by smoking cigarettes. All such behavior reduces dissonance by reducing the absurdity of the notion of going out of one's way to contract cancer.
The need to reduce dissonance (the need to convince oneself that one is right or good) leads to behavior that is maladaptive and therefore irrational.
An important game between Princeton and Dartmouth provides a classic illustration. The game was rough and filled with penalties, and accusations of dirty play flew back and forth between the two campuses. Shortly after the game, a couple of psychologists, Albert Hastorf of Dartmouth and Hadley Cantril of Princeton, showed a film of the game to students at both schools and asked them to record each infraction they saw. Students at each school overwhelmingly saw the opposing team as the instigator of the rough play; it was as if the two groups of partisans had watched entirely different games.
People don't like to see or hear things that conflict with their deeply held beliefs or wishes. An ancient response to such bad news was to kill the messenger, literally. A modern-day, figurative version of killing the messenger is to blame the media for presenting material that produces the pain of dissonance. For example, when Ronald Reagan was running for president in 1980, Time published an analysis of his campaign. Subsequent angry letters to the editor vividly illustrated the widely divergent responses of his supporters, on the one hand, and his detractors, on the other. Consider the following two letters:
Lawrence Barrett’s pre-election piece on Candidate Ronald Reagan [Oct. 20] was a slick hatchet job, and you know it. You ought to be ashamed of yourselves for printing it disguised as an objective look at the man.
Your story on "The Real Ronald Reagan" did it. Why didn't you just editorially endorse him? Barrett glosses over Reagan's fatal flaws so handily that the "real" Ronald Reagan came across as the answer to all our problems.
Dissonance Reduction and Rational Behavior
I have referred to dissonance-reducing behavior as "irrational." By this I mean it is often maladaptive, in that it can prevent people from learning important facts or from finding real solutions to their problems. On the other hand, it does serve a purpose: Dissonance-reducing behavior is ego-defensive behavior; by reducing dissonance, we maintain a positive image of ourselves, an image that depicts us as good, or smart, or worthwhile.
In an experiment by Edward Jones and Rika Kohler, the investigators selected individuals who were deeply committed to a position on the issue of racial segregation; some of the subjects were in favor of segregation and others were opposed to it. These individuals were allowed to read a series of arguments on both sides of the issue. Some of these arguments were extremely sensible and plausible, and others were so implausible that they bordered on the ridiculous. Jones and Kohler were interested in determining which of the arguments people would remember best. If people were purely rational, we would expect them to remember the plausible arguments best and the implausible arguments least; why in the world would people want to keep implausible arguments in their heads? Accordingly, the rational person would rehearse and remember all the arguments that made sense and would slough off all the ridiculous arguments. What does the theory of cognitive dissonance predict? It is comforting to have all the wise people on your side and all the fools on the other side: A silly argument in favor of one's own position arouses some dissonance because it raises doubts about the wisdom of that position or the intelligence of the people who agree with it. Likewise, a plausible argument on the other side of the issue also arouses some dissonance because it raises the possibility that the other side is right. Because these arguments arouse dissonance, one tries not to think about them; that is, one might not learn them very well, or one might simply forget them. This is exactly what Jones and Kohler found. Their subjects did not remember in a rational-functional manner. They tended to remember the plausible arguments agreeing with their own position and the implausible arguments agreeing with the opposing position.
Those of us who have worked extensively with the theory of cognitive dissonance do not deny that humans are capable of rational behavior. The theory merely suggests that a good deal of our behavior is not rational, although, from the inside, it may seem sensible indeed.
An experiment by Jack Brehm demonstrates how this can come about. Posing as a marketing researcher, Brehm showed each of several women eight different appliances (a toaster, an electric coffee-maker, a sandwich grill, and the like) and asked that she rate them in terms of how attractive each appliance was to her. As a reward, each woman was told she could have one of the appliances as a gift, and she was given a choice between two of the products she had rated as equally attractive. After she chose one, it was wrapped up and given to her. Several minutes later, she was asked to rate the products again. It was found that after receiving the appliance of her choice, each woman rated the attractiveness of that appliance somewhat higher and decreased the rating of the appliance she had had a chance to own but rejected. Again, making a decision produces dissonance: Cognitions about any negative aspects of the preferred object are dissonant with having chosen it, and cognitions about the positive aspects of the unchosen object are dissonant with not having chosen it. To reduce dissonance, people cognitively spread apart the alternatives. That is, after the decision, the women in Brehm's study emphasized the positive attributes of the appliance they decided to own, while de-emphasizing its negative attributes; for the appliance they decided not to own, they emphasized its negative attributes and de-emphasized its positive attributes.
In a study conducted by Dennis Johnson and Caryl Rusbult, college students were asked to evaluate the probable success of a new computer dating service on campus. Subjects were shown pictures of individuals of the opposite sex, whom they believed to be applicants to the dating service. Subjects were then asked to rate the attractiveness of these applicants, as well as how much they believed they would enjoy a potential date with each one, a possibility that was presented in a realistic manner. The results of this study were remarkably similar to Brehm's findings about the appliances: The more heavily committed the students were to their current romantic partners, the more negative were their ratings of the attractiveness of the alternative partners presented in the study. In a subsequent experiment, Jeffry Simpson and his colleagues also found that those in committed relationships saw opposite-sex persons as less physically and sexually attractive than did those who weren't in committed relationships. In addition, Simpson showed that this effect holds only for "available others"; when presented with individuals who were somewhat older or who were of the same sex, people in committed relationships did not derogate their attractiveness. In short, no threat, no dissonance; no dissonance, no derogation.
“There was a tendency, when actions were out of line with ideas, for decision makers to align their ideas with their actions.” To take just one of many examples, consider the decision to continue to escalate the bombing of North Vietnam.
It is instructive, for instance, to compare McNamara’s highly factual, evidence-oriented summary of the case against bombing in 1966 (pp. 555-63 of the Pentagon Papers) with the Joint Chiefs’ memorandum that disputed his conclusions and called the bombing one of our “two trump cards,” while it apparently ignored all of the facts that showed the opposite. Yet it was the Joint Chiefs who prevailed.
White surmises that the reason they prevailed was that their advice was consonant with decisions already made and with certain key assumptions then operating that later proved to be erroneous.
Escalation is self-perpetuating. Once a small commitment is made, it sets the stage for ever-increasing commitments. The behavior needs to be justified, so attitudes are changed; this change in attitudes influences future decisions and behavior. The flavor of this kind of cognitive escalation is nicely captured in an analysis of the Pentagon Papers by the news magazine Time:
Yet the bureaucracy, the Pentagon Papers indicate, always demanded new options; each option was to apply more force. Each tightening of the screw created a position that must be defended; once committed, the military pressure must be maintained.
This phenomenon was demonstrated by Jonathan Freedman and Scott Fraser. They attempted to induce several homeowners to put up a huge sign in their front yards reading “Drive Carefully.” Because of the ugliness and obtrusiveness of this sign, most residents refused to put it up; only 17 percent complied. A different group of residents, however, was first “softened up” by an experimenter who “put his foot in the door” by getting them to sign a petition favoring safe driving. Because signing a petition is an easy thing to do, virtually all who were asked agreed to sign. A few weeks later, a different experimenter went to each resident with the obtrusive, ugly “Drive Carefully” sign. More than 55 percent of these residents allowed the sign to be put up on their property. Thus, when individuals commit themselves in a small way, the likelihood that they will commit themselves further in that direction is increased. This process of using small favors to encourage people to accede to larger requests has been dubbed the foot-in-the-door technique. It is effective because having done the smaller favor sets up pressure toward agreeing to do the larger favor; in effect, it provides justification in advance for complying with the large request.
Similar results were obtained by Patricia Pliner and her associates. These investigators found that 46 percent of their sample were willing to make a small donation to the Cancer Society when approached directly. A similar group of people were asked one day earlier to wear a lapel pin publicizing the fund-raising drive; when approached the next day, approximately twice as many of these people were willing to make a contribution.
Robert Knox and James Inkster simply intercepted people who were on their way to place two-dollar bets. They had already decided on their horses and were about to place their bets when the investigators asked them how certain they were that their horses would win. Because they were still on their way to the two-dollar window, their decisions were not irrevocable. The investigators collared other bettors just as they were leaving the two-dollar window, after having placed their bets, and asked them how certain they were that their horses would win. Typically, people who had just placed their bets gave their horses a much better chance of winning than did those who were about to place their bets. But, of course, nothing had changed except the finality of the decision.
Canadian voters interviewed immediately after voting were more certain that their candidates would win, and liked their candidates more, than voters interviewed immediately before they cast their votes. In short, when a decision is irrevocable, more dissonance is aroused; to reduce this dissonance, people become more certain they are right after there is nothing they can do about it.
While the irrevocability of a decision always increases dissonance and the motivation to reduce it, there are circumstances in which irrevocability is unnecessary. Let me explain with an example. Suppose you enter an automobile showroom intent on buying a new car. You’ve already priced the car you want at several dealers; you know you can purchase it for about $9,300. Lo and behold, the salesman tells you he can sell you one for $8,942. Excited by the bargain, you agree to the deal and write a check for the down payment. While the salesman takes your check to the sales manager to consummate the deal, you rub your hands in glee as you imagine yourself driving home in your shiny new car. But alas, 10 minutes later, the salesman returns with a forlorn look on his face; it seems he made a calculation error and the sales manager caught it. The price of the car is actually $9,384. You can get it cheaper elsewhere; moreover, the decision to buy is not irrevocable. And yet far more people in this situation will go ahead with the deal than if the original asking price had been $9,384, even though the reason for purchasing the car from this dealer (the bargain price) no longer exists. Indeed, Robert Cialdini, a social psychologist who temporarily joined the sales force of an automobile dealer, discovered that the strategy described above is a common and successful ploy called lowballing, or “throwing the customer a lowball.”
These speculations were put to the test by Judson Mills in an experiment with sixth-graders. Mills first measured their attitudes toward cheating. He then had them participate in a competitive exam, with prizes offered to the winners. The situation was arranged so that it was almost impossible to win without cheating; moreover, it was easy for the children to cheat in the belief that they would not be detected. As one might expect, some of the students cheated and others did not. The next day, the sixth-graders were again asked to indicate how they felt about cheating. In general, those children who had cheated became more lenient toward cheating, and those who resisted the temptation to cheat adopted a harsher attitude toward it.
The Psychology of Inadequate Justification
The people are talking with horror about the fact that the
“Oh my God,
what have I done?” he says. He is intensely uncomfortable. Put another way, he
is experiencing a great deal of dissonance. His cognition “I misled a bunch of
people; I told them a lot of things about
Suppose I wanted to effect a lasting change in your attitudes and beliefs. In that case, just the reverse is true. The smaller the external reward I give to induce you to recite the speech, the more likely it is that you will be forced to seek additional justification by convincing yourself that the things you said were actually true. This would result in an actual change in attitude, rather than mere compliance. The importance of this technique cannot be overstated. If we change our attitudes because we have made a public statement for minimal external justification, our attitude change will be relatively permanent; we are not changing our attitudes because of a reward (compliance) or because of the influence of an attractive person (identification). We are changing our attitudes because we have succeeded in convincing ourselves that our previous attitudes were incorrect. This is a very powerful form of attitude change.
Leon Festinger and J. Merrill Carlsmith asked college students to perform a very boring and repetitive series of tasks: packing spools in a tray, dumping them out, and then refilling the tray over and over, or turning rows and rows of screws a quarter turn and then going back and turning them another quarter turn. The students engaged in these activities for a full hour. The experimenter then induced them to lie about the task; specifically, he persuaded them to tell a young woman (who was waiting to participate in the experiment) that the task she would be performing was interesting and enjoyable. Some of the students were offered $20 for telling the lie; others were offered only $1. After the experiment was over, an interviewer asked the “lie-tellers” how much they had enjoyed the tasks they had performed earlier in the experiment. The results were clear-cut: Those students who had been paid $20 for lying (that is, for saying the spool packing and screw turning had been enjoyable) actually rated the activity as dull. This is not surprising; it was dull. But what about the students who had been paid only $1 for telling their fellow student the task was enjoyable? They did, indeed, rate the task as enjoyable. In other words, people who received an abundance of external justification for lying told the lie but didn’t believe it, whereas those who told the lie in the absence of a great deal of external justification did, indeed, move in the direction of believing that what they said was true.
Arthur R. Cohen induced Yale men to engage in a particularly
difficult form of counterattitudinal behavior. Cohen conducted his experiment
immediately after a student riot in which the
Philip Zimbardo and his colleagues conducted an analogous experiment in which army reservists were asked to try fried grasshoppers as part of a study allegedly about “survival” foods. For half the subjects, the request was made by a warm, friendly officer; for the other half, it was made by a cold, unfriendly officer. The reservists’ attitudes toward eating grasshoppers were measured before and after they ate them. The results were exactly as predicted above: Reservists who ate grasshoppers at the request of the unpleasant officer increased their liking for them far more than those who ate grasshoppers at the request of the pleasant officer. Thus, when sufficient external justification was present (when reservists complied with the friendly officer’s request), they experienced little need to change their attitudes toward grasshoppers. They already had a convincing explanation for why they ate them: they did it to help a “nice guy.” But reservists who complied with the unfriendly officer’s request had little external justification for their actions. As a result, they adopted more positive attitudes toward eating grasshoppers in order to rationalize their discrepant behavior.
These speculations were tested in an experiment I performed more than three decades ago in collaboration with my friend Judson Mills. In this study, college women volunteered to join a group that would be meeting regularly to discuss various aspects of the psychology of sex. The women were told that, if they wanted to join, they would first have to go through a screening test designed to ensure that all people admitted to the group could discuss sex freely and openly. This instruction served to set the stage for the initiation procedure. One-third of the women were assigned to a severe initiation procedure, which required them to recite aloud a list of obscene words. One-third underwent a mild procedure, in which they recited a list of words that were sexual but not obscene. The final one-third were admitted to the group without undergoing an initiation. Each subject was then allowed to listen in on a discussion being conducted by the members of the group she had just joined. Although the women were led to believe the discussion was a “live,” ongoing one, what they actually heard was a prerecorded tape. The taped discussion was arranged so that it was as dull and bombastic as possible. After it was over, each subject was asked to rate the discussion in terms of how much she liked it, how interesting it was, how intelligent the participants were, and so forth.
The results supported the predictions: Those subjects who underwent little or no effort to get into the group did not enjoy the discussion very much. They were able to see it as it was-a dull and boring waste of time. Those subjects who went through a severe initiation, however, succeeded in convincing themselves that the same discussion was interesting and worthwhile.
The Psychology of Inevitability
In one experiment, Jack Brehm got children to volunteer to eat a vegetable they had previously said they disliked a lot. After they had eaten the vegetable, the experimenter led half of the children to believe they could expect to eat much more of that vegetable in the future; the remaining children were not so informed. The children who were led to believe it was inevitable that they would be eating the vegetable in the future succeeded in convincing themselves the particular vegetable was not so very bad. In short, the cognition “I dislike that vegetable” is dissonant with the cognition “I will be eating that vegetable in the future.” In order to reduce the dissonance, the children came to believe the vegetable was really not as noxious as they had previously thought.