
How the Representativeness Heuristic Affects Decisions and Bias

Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."

Steven Gans, MD is board-certified in psychiatry and is an active supervisor, teacher, and mentor at Massachusetts General Hospital.


Making decisions isn't always easy, especially when we don't have all the details or the situation seems murky. When we make decisions in the face of uncertainty, we often rely on a mental shortcut known as the representativeness heuristic. It involves making judgments by comparing the current situations to concepts we already have in mind.

This shortcut can speed up the decision-making process, but it can also lead to poor choices and stereotypes.

For example, have you ever misjudged someone because they didn't 'fit' a certain image you had in mind? Maybe you assumed that someone must work in finance, accounting, or some other business-related profession based on how they dress, only to find out they're actually a musician or artist.

Because of the representativeness heuristic, you made a guess about what they do for a living based on your stereotypes about specific professional roles. 

At a Glance

The representativeness heuristic is just one type of mental shortcut that allows us to make decisions quickly in the face of uncertainty. While this enables quick thinking, it can also lead us to ignore other factors that shape events.

Fortunately, being aware of this bias and actively trying to avoid it can help. The next time you are trying to make a decision, consider how the representativeness heuristic might play a role in your thinking.


What Is the Representativeness Heuristic?

The representativeness heuristic involves estimating the likelihood of an event by comparing it to a prototype that already exists in our minds. This prototype is what we think is the most relevant or typical example of a particular event or object.

The problem is that people often overestimate the similarity between the two things they compare.

When making decisions or judgments, we often use mental shortcuts or "rules of thumb," known as heuristics. The fact is that we just don't always have the time or resources to compare all the information before we make a choice, so we use heuristics to help us reach decisions quickly and efficiently.

Sometimes these mental shortcuts can be helpful, but in other cases, they can lead to errors or cognitive biases.

History of the Representativeness Heuristic

The representativeness heuristic was first described by psychologists Amos Tversky and Daniel Kahneman during the 1970s. Like other heuristics, making judgments based on representativeness is intended to work as a type of mental shortcut, allowing us to make decisions quickly. However, it can also lead to errors.

In their classic experiment, Tversky and Kahneman gave participants a description of a person named Tom, who was orderly, detail-oriented, competent, and self-centered, with a strong moral sense. Participants were then asked to determine Tom's college major.

Based on the description provided by the researchers, many participants concluded that Tom must be an engineering major. Why? Because Tom was representative of what the participants expected from an engineering student. He fit the description, so to speak.

The study's participants ignored other clues that might have pointed them in a different direction, such as the fact that there were relatively few engineering students at their school. Based purely on probability, it would have made more sense for them to predict that Tom was majoring in a more popular subject.
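
The base-rate reasoning the participants neglected can be made concrete with Bayes' theorem. The numbers below are invented purely for illustration (they are not figures from the study): even when someone fits the engineer prototype well, a low base rate of engineering majors keeps the probability modest.

```python
def posterior_engineer(p_engineer, p_fit_given_engineer, p_fit_given_other):
    """P(engineering major | fits the prototype), by Bayes' theorem."""
    p_other = 1 - p_engineer
    p_fit = (p_engineer * p_fit_given_engineer
             + p_other * p_fit_given_other)
    return p_engineer * p_fit_given_engineer / p_fit

# Hypothetical numbers: 5% of students study engineering (the base rate),
# 60% of engineering students fit the orderly, detail-oriented prototype,
# and 20% of all other students fit it too.
p = posterior_engineer(0.05, 0.60, 0.20)
print(f"P(engineering | fits prototype) = {p:.2f}")  # 0.14
```

Even a strong prototype match leaves "engineering major" far less likely than the alternatives, which is exactly the base-rate information the participants ignored.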

Tversky and Kahneman's study demonstrated how influential the representativeness heuristic can be when making decisions and judgments.

In 2002, Kahneman was awarded the Nobel Memorial Prize in Economic Sciences for his research on factors that affect judgment and decision-making in the face of uncertainty. (Tversky was not eligible because he passed away in 1996, and the Nobel Prize is not awarded posthumously.)

What Causes the Representativeness Heuristic?

So why does representativeness play such a large role in guiding our judgments, often in the face of contrary evidence? Several factors contribute to our reliance on representativeness when making judgments. Some of these include:

Our Cognitive Resources Are Limited

While our cognitive resources are limited, we still have thousands of decisions to make every day. To make the most of what we have, we rely on heuristics. These shortcuts allow us to conserve mental resources and still make decisions quickly and efficiently.

We Categorize People and Objects

Conserving our resources by using shortcuts is one part of the explanation, but the way we categorize people and objects also plays a major role.

Making decisions based on representativeness involves comparing an object or situation to the schemas or mental prototypes we already have in mind. Such schemas are based on past learning. We can also change our existing categories based on the new things we learn.

If an existing schema doesn't adequately account for the current situation, it can lead to poor judgments.

We Overestimate the Importance of Similarity

When we make decisions based on representativeness, we may make more errors by overestimating how similar a situation is to our mental prototype. Just because an event or object is representative does not mean that what we've experienced before is likely to happen again.

In Tversky and Kahneman's famous study, people assumed that Tom was an engineering major because he fit a stereotype they might have encountered in the past. They overestimated the importance of the similarity between Tom and their mental prototype.

In this case, other sources of information were even more relevant, such as the fact that engineering students made up only a tiny portion of the student population and that the general description could fit a wide range of students from all different walks of life.

Examples of the Representativeness Heuristic

It can be helpful to examine a few examples of how the representativeness heuristic works in real life.

In the Workplace

The heuristic can affect decisions made in the workplace. In one study, for example, researchers found that managers made biased decisions more than 50% of the time, many of which were based on representativeness.

Stereotyped attitudes can have serious ramifications. Discrimination based on age, disability, parental status, race, color, and sex can also be influenced by stereotypes linked to the representativeness heuristic.

In Social Relationships

Representativeness can affect the judgments we make when meeting new people. It may lead us to form inaccurate impressions of others, such as misjudging a new acquaintance or blind date.

In Political Choices

This heuristic can also influence how people vote and the candidates they support. For example, a person might support a political candidate because they fit the mental image of someone they think is a great leader without really learning about that person's platform.

What Are the Effects of the Representativeness Heuristic?

The representativeness heuristic is pervasive and can play a major role in many real-life decisions and judgments. In many cases, this can lead to poor judgments that can have serious consequences.

Criminal Justice

Jurors may judge guilt based on how closely a defendant matches their prototype of a "guilty" suspect or how well the crime represents a specific crime category.

For example, a person accused of abducting a child for ransom may be more likely to be viewed as guilty than someone accused of kidnapping an adult for no ransom.

The representativeness heuristic is thought to play a role in racial bias in the criminal justice system. Studies have found that jurors in mock trials are more likely to hand down guilty verdicts to defendants who belong to ethnic minority groups commonly associated in the media with crime.

Such findings also play out in real-world settings—research has found that Black defendants are less likely to be offered plea bargains and receive longer, more severe sentences than White defendants who have been charged with the same crimes.

Medical Decisions

Doctors and healthcare professionals may make diagnostic and treatment decisions based on how well a patient and their symptoms match an existing prototype. Unfortunately, this can lead professionals to overestimate similarity and ignore other relevant information.

For example, a physician might rule out a relevant diagnosis because a patient does not fit their expected prototype for someone with that condition.

One study found that in 49.6% of cases, the final diagnosis matched a doctor's first diagnostic impression, suggesting that representativeness plays a role in doctors' decisions.

Interpersonal Perceptions

This heuristic can also play a role in our assessments about other people. We tend to develop ideas about how people in certain roles should behave.

In another variation of Tversky and Kahneman's famous research, they described a man named Steve as shy, withdrawn, and helpful, but with little interest in other people.

Would you think that Steve was a librarian or a farmer? Like most of us, participants immediately picked librarian, based entirely on representativeness.

A farmer, for example, might be seen as hard-working, outdoorsy, and tough. A librarian, on the other hand, might be viewed as being quiet, organized, and reserved.

Stereotypes

Because people are so prone to drawing on prototypes to guide decisions, this tendency can also lead to problems such as prejudice. The prototypes people hold can become stereotypes, which lead people to make prejudiced judgments of others.

Such stereotypes can also lead to systemic discrimination against different groups of people.

How to Avoid the Representativeness Heuristic

The representativeness heuristic isn't easy to avoid, but there are some things that you can do to help minimize its effects. This can help you make more accurate judgments in your day-to-day life. Things you can do include:

  • Becoming more aware of this tendency: Kahneman has found that when people become aware that they are using the representativeness heuristic, they can often correct themselves and make more accurate judgments.
  • Reflecting on your judgments to check for bias: As you make decisions about people or events, spend a few moments thinking about how bias might affect your choices.
  • Applying logic to problems: As you solve problems, focus on thinking through them logically. Learning more about critical thinking skills and logical fallacies can also be helpful.
  • Asking others for feedback: It can be difficult to spot the use of representativeness in your own thinking, so it can sometimes be helpful to ask other people for feedback. Explain your thinking and ask them to check for possible biases.

Kahneman D, Tversky A. On the psychology of prediction. Psychological Review. 1973;80(4):237-251. doi:10.1037/h0034747

Smith D. Psychologist wins Nobel prize. Monitor on Psychology. 2002;33(11):22.

AlKhars M, Evangelopoulos N, Pavur R, Kulkarni S. Cognitive biases resulting from the representativeness heuristic in operations management: an experimental investigation. Psychol Res Behav Manag. 2019;12:263-276. doi:10.2147/PRBM.S193092

Stolwijk S. The representativeness heuristic in political decision making. In: Oxford Research Encyclopedia of Politics. Oxford University Press; 2019. doi:10.1093/acrefore/9780190228637.013.981

Curley LJ, Munro J, Dror IE. Cognitive and human factors in legal layperson decision making: Sources of bias in juror decision making. Med Sci Law. 2022;62(3):206-215. doi:10.1177/00258024221080655

United States Sentencing Commission. Demographic differences in sentencing.

Payne VL, Crowley RS. Assessing the use of cognitive heuristic representativeness in clinical reasoning. AMIA Annu Symp Proc. 2008;2008:571-575.

Fernández‐Aguilar C, Martín‐Martín JJ, Minué Lorenzo S, Fernández Ajuria A. Use of heuristics during the clinical decision process from family care physicians in real conditions. Journal of Evaluation in Clinical Practice. 2022;28(1):135-141. doi:10.1111/jep.13608

Hinton P. Implicit stereotypes and the predictive brain: cognition and culture in "biased" person perception. Palgrave Commun. 2017;3:17086. doi:10.1057/palcomms.2017.86

Kahneman D. A perspective on judgment and choice: Mapping bounded rationality. American Psychologist. 2003;58(9):697-720. doi:10.1037/0003-066X.58.9.697

Heuristics: Definition, Examples, And How They Work

Benjamin Frimodig

Science Expert

B.A., History and Science, Harvard University

Ben Frimodig is a 2021 graduate of Harvard College, where he studied the History of Science.

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.

Every day our brains must process and respond to thousands of problems, both large and small, at a moment’s notice. It might even be overwhelming to consider the sheer volume of complex problems we regularly face in need of a quick solution.

While one might wish there was time to methodically and thoughtfully evaluate the fine details of our everyday tasks, the cognitive demands of daily life often make such processing logistically impossible.

Therefore, the brain must develop reliable shortcuts to keep up with the stimulus-rich environments we inhabit. Psychologists refer to these efficient problem-solving techniques as heuristics.

Heuristics can be thought of as general cognitive frameworks humans rely on regularly to reach a solution quickly.

For example, if a student needs to decide what subject she will study at university, her intuition will likely be drawn toward the path that she envisions as most satisfying, practical, and interesting.

She may also think back on her strengths and weaknesses in secondary school or perhaps even write out a pros and cons list to facilitate her choice.

It’s important to note that heuristics broadly apply to everyday problems, produce sound solutions, and help simplify otherwise complicated mental tasks. These are the three defining features of a heuristic.

While the concept of heuristics dates back to Ancient Greece (the term is derived from the Greek word for “to discover”), most of the information known today on the subject comes from prominent twentieth-century social scientists.

Herbert Simon’s study of a notion he called “bounded rationality” focused on decision-making under restrictive cognitive conditions, such as limited time and information.

This concept of optimizing an inherently imperfect analysis frames the contemporary study of heuristics and leads many to credit Simon as a foundational figure in the field.

Kahneman’s Theory of Decision Making

The immense contributions of psychologist Daniel Kahneman to our understanding of cognitive problem-solving deserve special attention.

As context for his theory, Kahneman put forward the estimate that an individual makes around 35,000 decisions each day! To reach these resolutions, the mind relies on either “fast” or “slow” thinking.

The fast thinking pathway (system 1) operates mostly unconsciously and aims to reach reliable decisions with as minimal cognitive strain as possible.

While system 1 relies on broad observations and quick evaluative techniques (heuristics!), system 2 (slow thinking) requires conscious, continuous attention to carefully assess the details of a given problem and logically reach a solution.

Given the sheer volume of daily decisions, it’s no surprise that around 98% of problem-solving uses system 1.

Thus, it is crucial that the human mind develops a toolbox of effective, efficient heuristics to support this fast-thinking pathway.

Heuristics vs. Algorithms

Those who’ve studied the psychology of decision-making might notice similarities between heuristics and algorithms. However, remember that these are two distinct modes of cognition.

Heuristics are methods or strategies which often lead to problem solutions but are not guaranteed to succeed.

They can be distinguished from algorithms, which are methods or procedures that will always produce a solution sooner or later.

An algorithm is a step-by-step procedure that can be reliably used to solve a specific problem. While the concept of an algorithm is most commonly used in reference to technology and mathematics, our brains rely on algorithms every day to resolve issues (Kahneman, 2011).

The important thing to remember is that algorithms are a set of mental instructions unique to specific situations, while heuristics are general rules of thumb that can help the mind process and overcome various obstacles.

For example, if you are thoughtfully reading every line of this article, you are using an algorithm.

On the other hand, if you are quickly skimming each section for important information or perhaps focusing only on sections you don’t already understand, you are using a heuristic!
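
The guaranteed-but-slow versus fast-but-fallible contrast can be made concrete with a classic toy problem: making change with coins. Below, the dynamic-programming routine is an algorithm (it always finds the minimum number of coins), while the greedy routine is a heuristic (take the largest coin that fits); the denominations are chosen to expose the difference.

```python
def exhaustive_change(coins, amount):
    """Algorithm: dynamic programming; always finds the minimum coin count."""
    best = [0] + [None] * amount
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins
                   if c <= a and best[a - c] is not None]
        best[a] = min(options) + 1 if options else None
    return best[amount]

def greedy_change(coins, amount):
    """Heuristic: always take the largest coin that fits; fast, not
    guaranteed to be optimal."""
    n = 0
    for c in sorted(coins, reverse=True):
        n += amount // c
        amount %= c
    return n if amount == 0 else None

coins = [1, 3, 4]
print(exhaustive_change(coins, 6))  # 2  (3 + 3)
print(greedy_change(coins, 6))      # 3  (4 + 1 + 1)
```

The greedy shortcut reaches an answer in one quick pass, and for many coin systems that answer is also optimal; for this one it is not, which is exactly the heuristic trade-off described above.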

Why Heuristics Are Used

Heuristic use usually occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind at the same moment

When studying heuristics, keep in mind both the benefits and unavoidable drawbacks of their application. The ubiquity of these techniques in human society makes such weaknesses especially worthy of evaluation.

More specifically, in expediting decision-making processes, heuristics also predispose us to a number of cognitive biases .

A cognitive bias is an incorrect but pervasive judgment derived from an illogical pattern of cognition. In simple terms, a cognitive bias occurs when one internalizes a subjective perception as a reliable and objective truth.

Heuristics are reliable but imperfect: in applying broad decision-making "shortcuts" to specific situations, occasional errors are inevitable and can catalyze persistent mistakes.

For example, consider the risks of faulty applications of the representativeness heuristic discussed above. While the technique encourages one to assign situations to broad categories based on superficial characteristics and past experiences for the sake of cognitive expediency, such thinking is also the basis of stereotypes and discrimination.

In practice, these errors result in the disproportionate favoring of one group and/or the oppression of other groups within a given society.

Indeed, the most impactful research relating to heuristics often centers on the connection between them and systematic discrimination.

The tradeoff between thoughtful rationality and cognitive efficiency encompasses both the benefits and pitfalls of heuristics and represents a foundational concept in psychological research.

When learning about heuristics, keep in mind their relevance to all areas of human interaction. After all, the study of social psychology is intrinsically interdisciplinary.

Many of the most important studies on heuristics relate to flawed decision-making processes in high-stakes fields like law, medicine, and politics.

Researchers often draw on a distinct set of already established heuristics in their analysis. While dozens of unique heuristics have been observed, brief descriptions of those most central to the field are included below:

Availability Heuristic

The availability heuristic describes the tendency to make choices based on information that comes to mind readily.

For example, children of divorced parents are more likely to have pessimistic views towards marriage as adults.

Importantly, this heuristic can also involve assigning more importance to recently learned information, largely because such information is easier to recall.

Representativeness Heuristic

This technique allows one to quickly assign probabilities to and predict the outcome of new scenarios using psychological prototypes derived from past experiences.

For example, juries are less likely to convict individuals who are well-groomed and wearing formal attire (under the assumption that stylish, well-kempt individuals typically do not commit crimes).

This is one of the most studied heuristics by social psychologists for its relevance to the development of stereotypes.

Scarcity Heuristic

This method of decision-making is predicated on the perception of less abundant, rarer items as inherently more valuable than more abundant items.

We rely on the scarcity heuristic when we must make a fast selection with incomplete information. For example, a student deciding between two universities may be drawn toward the option with the lower acceptance rate, assuming that this exclusivity indicates a more desirable experience.

The concept of scarcity is central to behavioral economists’ study of consumer behavior (a field that evaluates economics through the lens of human psychology).

Trial and Error

This is the most basic and perhaps most frequently cited heuristic. Trial and error can be used to solve a problem that has a discrete number of possible solutions; it involves simply attempting each option until the correct solution is identified.

For example, if an individual was putting together a jigsaw puzzle, he or she would try multiple pieces until locating a proper fit.

This technique is commonly taught in introductory psychology courses due to its simple representation of the central purpose of heuristics: the use of reliable problem-solving frameworks to reduce cognitive load.
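
The jigsaw example above is easy to sketch in code: a trial-and-error loop simply attempts each candidate until one passes the test (the pieces and the fit test here are hypothetical).

```python
def trial_and_error(candidates, fits):
    """Attempt each candidate in turn until one passes the test."""
    for candidate in candidates:
        if fits(candidate):   # try this piece...
            return candidate  # ...it fits: stop searching
    return None               # every attempt failed

# Hypothetical jigsaw gap that needs a piece 3 units wide and 2 units tall.
pieces = [(2, 2), (3, 1), (4, 2), (3, 2)]
print(trial_and_error(pieces, lambda piece: piece == (3, 2)))  # (3, 2)
```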

Anchoring and Adjustment Heuristic

Anchoring refers to the tendency to formulate expectations relating to new scenarios relative to an already ingrained piece of information.

Put simply, anchoring allows one to form reasonable estimates around uncertainties. For example, if asked to estimate the number of days in a year on Mars, many people would first call to mind the fact that Earth's year is 365 days (the "anchor") and adjust accordingly.

This tendency can also help explain the observation that ingrained information often hinders the learning of new information, a concept known as proactive inhibition.
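
A toy simulation can illustrate anchoring with insufficient adjustment. Assuming (purely for illustration) that people adjust only 20–60% of the way from the anchor toward the true value, their estimates of the Martian year (687 Earth days) stay dragged toward Earth's 365:

```python
import random

random.seed(0)

TRUE_MARS_YEAR = 687   # Earth days: the quantity being estimated
ANCHOR = 365           # Earth's year, the value that comes to mind first

def anchored_estimate(anchor, true_value, adjustment=0.4):
    """Start at the anchor and adjust only part of the way to the truth."""
    return anchor + adjustment * (true_value - anchor)

estimates = [anchored_estimate(ANCHOR, TRUE_MARS_YEAR,
                               adjustment=random.uniform(0.2, 0.6))
             for _ in range(1000)]
mean = sum(estimates) / len(estimates)
print(round(mean))  # well below the true 687, dragged toward the 365 anchor
```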

Familiarity Heuristic

This technique can be used to guide actions in cognitively demanding situations by simply reverting to previous behaviors successfully utilized under similar circumstances.

The familiarity heuristic is most useful in unfamiliar, stressful environments.

For example, a job seeker might recall behavioral standards in other high-stakes situations from her past (perhaps an important presentation at university) to guide her behavior in a job interview.

Many psychologists interpret this technique as a slightly more specific variation of the availability heuristic.

How to Make Better Decisions

Heuristics are ingrained cognitive processes utilized by all humans and can lead to various biases.

Both of these statements are established facts. However, this does not mean that the biases that heuristics produce are unavoidable. As the wide-ranging impacts of such biases on societal institutions have become a popular research topic, psychologists have emphasized techniques for reaching more sound, thoughtful and fair decisions in our daily lives.

Ironically, many of these techniques are themselves heuristics!

To focus on the key details of a given problem, one might create a mental list of explicit goals and values. To clearly identify the impacts of a choice, one should imagine its impacts one year in the future, from the perspective of all parties involved.

Most importantly, one must gain a mindful understanding of the problem-solving techniques used by our minds and the common mistakes that result. Mindfulness of these flawed yet persistent pathways allows one to quickly identify and remedy the biases (or otherwise flawed thinking) they tend to create!

Further Information

  • Shah, A. K., & Oppenheimer, D. M. (2008). Heuristics made easy: an effort-reduction framework. Psychological bulletin, 134(2), 207.
  • Marewski, J. N., & Gigerenzer, G. (2012). Heuristic decision making in medicine. Dialogues in clinical neuroscience, 14(1), 77.
  • Del Campo, C., Pauser, S., Steiner, E., & Vetschera, R. (2016). Decision making styles and the use of heuristics in decision making. Journal of Business Economics, 86(4), 389-412.

What is a heuristic in psychology?

A heuristic in psychology is a mental shortcut or rule of thumb that simplifies decision-making and problem-solving. Heuristics often speed up the process of finding a satisfactory solution, but they can also lead to cognitive biases.

Bobadilla-Suarez, S., & Love, B. C. (2017, May 29). Fast or Frugal, but Not Both: Decision Heuristics Under Time Pressure. Journal of Experimental Psychology: Learning, Memory, and Cognition .

Bowes, S. M., Ammirati, R. J., Costello, T. H., Basterfield, C., & Lilienfeld, S. O. (2020). Cognitive biases, heuristics, and logical fallacies in clinical practice: A brief field guide for practicing clinicians and supervisors. Professional Psychology: Research and Practice, 51 (5), 435–445.

Dietrich, C. (2010). “Decision Making: Factors that Influence Decision Making, Heuristics Used, and Decision Outcomes.” Inquiries Journal/Student Pulse, 2(02).

Groenewegen, A. (2021, September 1). Kahneman Fast and slow thinking: System 1 and 2 explained by Sue. SUE Behavioral Design. Retrieved March 26, 2022, from https://suebehaviouraldesign.com/kahneman-fast-slow-thinking/

Kahneman, D., Lovallo, D., & Sibony, O. (2011). Before you make that big decision .

Kahneman, D. (2011). Thinking, fast and slow . Macmillan.

Pratkanis, A. (1989). The cognitive representation of attitudes. In A. R. Pratkanis, S. J. Breckler, & A. G. Greenwald (Eds.), Attitude structure and function (pp. 71–98). Hillsdale, NJ: Erlbaum.

Simon, H.A., 1956. Rational choice and the structure of the environment. Psychological Review .

Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185 (4157), 1124–1131.

Heuristic Problem Solving: A Comprehensive Guide with 5 Examples

Advantages of Using Heuristic Problem Solving

  • Speed: Heuristics are designed to find solutions quickly, saving time in problem solving tasks. Rather than spending a lot of time analyzing every possible solution, heuristics help to narrow down the options and focus on the most promising ones.
  • Flexibility: Heuristics are not rigid, step-by-step procedures. They allow for flexibility and creativity in problem solving, leading to innovative solutions. They encourage thinking outside the box and can generate unexpected and valuable ideas.
  • Simplicity: Heuristics are often easy to understand and apply, making them accessible to anyone regardless of their expertise or background. They don’t require specialized knowledge or training, which means they can be used in various contexts and by different people.
  • Cost-effective: Because heuristics are simple and efficient, they can save time, money, and effort in finding solutions. They also don’t require expensive software or equipment, making them a cost-effective approach to problem solving.
  • Real-world applicability: Heuristics are often based on practical experience and knowledge, making them relevant to real-world situations. They can help solve complex, messy, or ill-defined problems where other problem solving methods may not be practical.
Disadvantages of Using Heuristic Problem Solving

  • Potential for errors: Heuristic problem solving relies on generalizations and assumptions, which may lead to errors or incorrect conclusions. This is especially true if the heuristic is not based on a solid understanding of the problem or the underlying principles.
  • Limited scope: Heuristic problem solving may only consider a limited number of potential solutions and may not identify the most optimal or effective solution.
  • Lack of creativity: Heuristic problem solving may rely on pre-existing solutions or approaches, limiting creativity and innovation in problem-solving.
  • Over-reliance: Heuristic problem solving may lead to over-reliance on a specific approach or heuristic, which can be problematic if the heuristic is flawed or ineffective.
  • Lack of transparency: Heuristic problem solving may not be transparent or explainable, as the decision-making process may not be explicitly articulated or understood.
Heuristic Problem Solving Examples

  • Trial and error: This heuristic involves trying different solutions to a problem and learning from mistakes until a successful solution is found. A software developer encountering a bug in their code may try other solutions and test each one until they find the one that solves the issue.
  • Working backward: This heuristic involves starting at the goal and then figuring out what steps are needed to reach that goal. For example, a project manager may begin by setting a project deadline and then work backward to determine the necessary steps and deadlines for each team member to ensure the project is completed on time.
  • Breaking a problem into smaller parts: This heuristic involves breaking down a complex problem into smaller, more manageable pieces that can be tackled individually. For example, an HR manager tasked with implementing a new employee benefits program may break the project into smaller parts, such as researching options, getting quotes from vendors, and communicating the unique benefits to employees.
  • Using analogies: This heuristic involves finding similarities between a current problem and a similar problem that has been solved before and using the solution to the previous issue to help solve the current one. For example, a salesperson struggling to close a deal may use an analogy to a successful sales pitch they made to help guide their approach to the current pitch.
  • Simplifying the problem: This heuristic involves simplifying a complex problem by ignoring details that are not necessary for solving it. This allows the problem solver to focus on the most critical aspects of the problem. For example, a customer service representative dealing with a complex issue may simplify it by breaking it down into smaller components and addressing them individually rather than simultaneously trying to solve the entire problem.


The Psychology Square


  • March 31, 2024
  • Social Psychology

Representativeness Heuristic in Psychology: A Complete Guide

Muhammad Sohail


This article explores the concept of heuristics in social cognition, focusing on the use of simple rules to make judgments and decisions under conditions of uncertainty. It delves into the representativeness heuristic as a prime example, discussing its application in various scenarios and cultural differences in its utilization. Additionally, it examines the implications of heuristic-based thinking for societal issues like road safety and climate change.

What Are Heuristics?

Heuristics, originating from the Greek word “heurískō” meaning “to find, discover,” describe the cognitive shortcuts humans employ to make decisions efficiently. These strategies are utilized across various domains by humans, animals, organizations, and machines to quickly formulate judgments and solve complex problems, often prioritizing speed over accuracy, particularly in situations of uncertainty or limited information. While heuristics offer rapid solutions, they may not always be optimal, sometimes overlooking important nuances. The historical development of heuristics, notably through the work of Herbert A. Simon, Amos Tversky, and Daniel Kahneman, has highlighted their pragmatic nature and limitations. However, the “less-is-more” effect suggests that simplified heuristic approaches can sometimes yield outcomes as accurate or more so than exhaustive analyses, emphasizing the effectiveness of heuristic decision-making in specific contexts.

Heuristics are mental shortcuts that enable individuals to make decisions and judgments quickly and efficiently, especially when faced with complex information and uncertainty. This article aims to dissect the representativeness heuristic, one of the most prevalent heuristic strategies, and its impact on social cognition. By understanding how heuristics function, we can gain insights into human decision-making processes and their implications for various aspects of society.

Heuristics serve as cognitive tools that help individuals manage information overload and conserve mental resources. Rather than engaging in exhaustive processing, people often rely on heuristics to arrive at judgments and decisions swiftly. This tendency is particularly pronounced in situations where cognitive capacity is limited or when facing high levels of stress.

The Role of Heuristics in Social Thought

In social cognition, heuristics play a significant role in shaping perceptions and guiding behavior. Individuals frequently employ heuristic strategies to navigate social interactions, interpret ambiguous information, and make predictions about others’ behavior. By relying on simple rules, people can streamline the decision-making process, albeit at the risk of occasional inaccuracies.

The Representativeness Heuristic: Judging by Resemblance

The representativeness heuristic involves making judgments or decisions based on how closely an individual or event resembles a particular prototype or category. This heuristic operates on the principle that the more similar something is to a known category, the more likely it is perceived to belong to that category.

Application of the Representativeness Heuristic

Consider a scenario where an individual encounters a new neighbor and attempts to infer their occupation based on observed characteristics. By comparing the neighbor’s traits to prototypes associated with various professions, the observer may quickly form an impression. For example, if the neighbor exhibits traits commonly associated with librarians (e.g., conservative dress, intellectual pursuits), the observer may infer that they are likely a librarian.

Factors Influencing the Representativeness Heuristic

The application of the representativeness heuristic, a cognitive shortcut used to make judgments based on similarity to prototypes or stereotypes, is influenced by various factors that shape human decision-making processes. When faced with novel stimuli or events, individuals tend to assess their representativeness based on factors such as similarity, randomness, and local representativeness.

In judging the representativeness of a new stimulus or event, individuals often focus on the degree of similarity between the stimulus and a standard or prototype. This similarity is crucial for determining whether the stimulus fits into a preexisting category or process. For example, medical beliefs often rely on the representativeness heuristic, where symptoms are expected to resemble their causes or treatments. However, this can lead to misconceptions, such as attributing ulcers to stress rather than bacteria. Even physicians may fall prey to this heuristic when diagnosing patients, judging their similarity to prototypical cases of certain disorders.

Irregularity and local representativeness also influence judgments of randomness. When faced with sequences that lack an apparent pattern, individuals are more likely to perceive them as representative of randomness, while well-ordered sequences may be deemed less random. This bias can affect various domains, such as assessing the fairness of coin tosses. Small samples are particularly susceptible to the assumption of local representativeness, where observers erroneously generalize from limited data, leading to misconceptions about the underlying distribution. For instance, a string of “heads” in a series of coin tosses may lead observers to believe the coin is biased towards “heads,” despite the small sample size.
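The small-sample intuition can be checked with a little arithmetic. As a brief sketch (assuming a fair coin), the chance of seeing at least four heads in only five flips is large enough that such a streak is weak evidence of bias:

```python
from math import comb

# Probability of at least 4 heads in 5 flips of a fair coin.
# Streaks like this occur in roughly 1 in 5 short sequences, so a small
# sample is weak evidence that the coin actually favors "heads".
p_streak = sum(comb(5, k) for k in (4, 5)) / 2**5
print(f"P(>=4 heads in 5 fair flips) = {p_streak:.4f}")  # 0.1875
```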

Limitations of the Representativeness Heuristic

While the representativeness heuristic can yield accurate judgments in many cases, it is not without its limitations. One significant drawback is its tendency to overlook base rates—the frequency with which certain events or categories occur in the population. By focusing solely on resemblance to prototypes, individuals may disregard essential statistical information, leading to flawed judgments.

Cultural Variations in Heuristic Use

Cultural factors influence the extent to which individuals rely on heuristics in decision-making. Research suggests that cultural differences exist in the utilization of heuristics, with varying degrees of emphasis placed on simplification strategies like the representativeness heuristic.

Cultural Contrasts in Heuristic Reasoning

Studies comparing Western and Asian cultures have revealed distinct patterns in heuristic use. While Westerners often exhibit a strong tendency to rely on the representativeness heuristic, Asians tend to consider a broader range of factors when making judgments. This cultural disparity can have implications for problem-solving and decision-making on global issues such as climate change.

Implications for Society

Understanding heuristic-based thinking has important implications for addressing societal challenges and promoting safety and well-being. By recognizing the influence of heuristics on behavior and decision-making, policymakers can design interventions that mitigate the negative consequences of heuristic biases.

Road Safety and Distracted Driving

The prevalence of distracted driving, particularly due to cell phone use and texting, underscores the need for interventions that address heuristic-driven behavior. Laws prohibiting these behaviors aim to counteract the heuristic tendency to prioritize immediate gratification over long-term safety concerns.

Climate Change and Cross-Cultural Collaboration

Cultural differences in heuristic reasoning can pose challenges for international cooperation on issues like climate change. Differing perceptions of causal relationships and problem-solving approaches may hinder consensus-building efforts and impede progress toward shared goals.

Heuristics serve as indispensable tools in social cognition, allowing individuals to navigate complex information environments efficiently. The representativeness heuristic, in particular, offers insights into how people make judgments based on similarity to prototypes. However, understanding the limitations of heuristics and recognizing cultural variations in their use are crucial for promoting effective decision-making and addressing societal challenges in an increasingly interconnected world. By leveraging insights from heuristic research, we can develop strategies to enhance decision-making processes and foster collaboration across diverse cultural contexts.


Why do we use similarity to gauge statistical probability?

What is the representativeness heuristic?

The representativeness heuristic is a mental shortcut that we use when estimating probabilities. When we’re trying to assess how likely a certain event is, we often make our decision by assessing how similar it is to an existing mental prototype.


Where this bias occurs


Let’s say you’re going to a concert with your friend Sarah. She also invited her two friends, John and Adam, whom you’ve never met before. You know that one is a mathematician, while the other is a musician.

When you finally meet Sarah’s friends, you notice that John wears glasses and is a bit shy, while Adam is more outgoing and dressed in a band T-shirt and ripped jeans. Without asking, you assume that John must be the mathematician and Adam must be the musician. You later discover that you were mistaken: Adam does math, and John plays music.

Thanks to the representativeness heuristic, you guessed Adam and John’s professions based on stereotypes about how people in these careers typically dress and behave. This reliance caused you to ignore better indicators of their professions, such as simply asking them what they do for a living.


Individual effects

Since we tend to rely on representativeness, we often fail to consider other types of information, causing us to make poor predictions. The representativeness heuristic is so pervasive that many researchers believe it is the foundation of several other biases that affect our processing, including the conjunction fallacy and the gambler's fallacy.

The conjunction fallacy occurs when we assume that multiple specific things are more likely to co-occur than one of them on its own. Statistically speaking, this is never the case, but the representativeness heuristic can convince us otherwise.

Take Linda, a bright philosophy graduate deeply concerned with discrimination and social justice. When given the option, we are much more likely to guess that she is both an active feminist and a bank teller, rather than just a bank teller. 6 This is because of representativeness: the fact that Linda resembles a prototypical feminist skews our ability to judge the probability of her career.
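The logic behind the Linda example can be verified with a toy calculation. The proportions below are illustrative assumptions, not data from the original study; the point is only that the conjunction can never be more probable than its single component:

```python
import random

# Toy population illustrating the conjunction rule: everyone who is both a
# bank teller AND a feminist is, by definition, also a bank teller, so the
# joint probability can never exceed the single one. Proportions are assumed.
random.seed(0)
people = [
    {"teller": random.random() < 0.05, "feminist": random.random() < 0.30}
    for _ in range(100_000)
]

p_teller = sum(p["teller"] for p in people) / len(people)
p_both = sum(p["teller"] and p["feminist"] for p in people) / len(people)

print(f"P(bank teller)         = {p_teller:.3f}")
print(f"P(teller AND feminist) = {p_both:.3f}")
assert p_both <= p_teller  # holds for any proportions whatsoever
```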

Another bias caused by the representativeness heuristic is the gambler’s fallacy, which causes people to apply long-term odds to short-term sequences. For example, in a coin toss, there is roughly a fifty-fifty chance of getting either heads or tails. This doesn’t mean that if you flip a coin twice, you’ll get heads one time and tails the other. The probability only works over long sequences, such as tossing a coin a hundred times. However, we believe that short-term odds should represent their long-term counterparts, even though this is almost never the case. 7 

As its name suggests, the gambler’s fallacy can have serious consequences for gamblers. For example, somebody may believe that their odds of winning are better if they’ve been on a short losing streak, even though it will take many more times losing to fulfill that probability.
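A quick simulation makes the independence of short-term outcomes concrete. This is a sketch using a simulated fair coin: after three tails in a row, heads is no more likely than usual.

```python
import random

# Simulate many 4-flip sequences of a fair coin and look only at sequences
# that start with three tails. The fourth flip still comes up heads about
# half the time: the coin has no memory of the streak.
random.seed(1)
fourth_flips = []
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(4)]  # True = heads
    if not any(flips[:3]):  # first three flips were all tails
        fourth_flips.append(flips[3])

p_heads_after_streak = sum(fourth_flips) / len(fourth_flips)
print(f"P(heads after three tails) = {p_heads_after_streak:.3f}")  # near 0.5
```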

Systemic effects

Our reliance on categories can easily tip over into prejudice, even if we don’t realize it. The way that mass media portrays minority groups often reinforces commonly-held stereotypes. For instance, Black men tend to be overrepresented in coverage of crime and poverty, while they are underrepresented as “talking head” experts. 9 These patterns support a narrative that Black men are violent, which even Black viewers may internalize and incorporate into their categorization.

These stereotypes from the representativeness heuristic contribute to systemic discrimination. For example, police looking for a crime suspect might focus disproportionately on Black people in their search. Their prejudices cause them to assume that a Black person is more likely to be a criminal than somebody from another group.

How it affects product

Representativeness is a valuable tool in user interface (UI) design. Digital designers intentionally incorporate symbols representing familiar categories to guide us as we navigate virtual spaces, often without us even realizing it.

For example, when we see the trash bin icon, we know we can drag our documents over to dispose of them—just as we would throw out paper documents in real life. Or when we see a floppy disk icon, we know we can click on it to save our document, just as floppy disks once stored our files. These prototypes are a good reminder of how the material world can help us make sense of the digital one when designing new products.

The representativeness heuristic and AI

Machine learning has optimized categorization by relying on statistical patterns and base rates to sort information. However, humans still succumb to the representativeness heuristic while interpreting these outputs.

For example, the healthcare system has adopted AI technology to help diagnose patients by scanning medical images and comparing them to thousands more in a dataset. Doctors may be more inclined to trust an AI’s diagnosis if the symptoms match a disease's prototypical description. However, they might dismiss AI diagnoses that do not align with these prototypes, even though the AI has been exposed to far more rare or unusual presentations of symptoms than any doctor could encounter in a career.

Why it happens

The representativeness heuristic was coined by Daniel Kahneman and Amos Tversky, two of the most influential figures in behavioral economics. The classic example they used to illustrate this bias asks the reader to consider Steve: His friends describe him as “very shy and withdrawn, invariably helpful, but with little interest in people, or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” After reading this description, do you think Steve is a librarian or a farmer? 2  

Most of us intuitively feel like Steve must be a librarian because he’s more representative of our image of a librarian than he is our image of a farmer. In reality, no evidence directly points to Steve’s career, so we rely on stereotypes to dictate our decision.

Conserving energy with categories

As with all biases, the main reason we rely on representativeness is because we have limited mental resources. Since we make thousands of daily decisions, our brains are wired to conserve as much energy as possible. This means we often rely on shortcuts to quickly judge the world around us. However, there is another reason behind why the representativeness heuristic happens, rooted in how we perceive people and objects.

We draw on prototypes to make decisions

Grouping similar things together—that is, categorizing them—is an essential part of how we make sense of the world. This might seem like a no-brainer, but categories are more fundamental than many realize. Think of all the things you encounter in a single day. Whenever we interact with people, animals, or objects, we draw on the knowledge we’ve learned about that category to know what to do.

For instance, when you go to a dog park, you might see animals in a huge range of shapes, sizes, and colors. But since you can categorize them all as “dogs,” you immediately know what to expect: they run and chase things, like getting treats, and if one of them starts growling, you should probably back away.

Without categories, every time we encountered something new, we would have to learn what it was and how it worked from scratch. Not to mention the fact that storing so much information about every separate entity would be impossible given our limited cognitive capacity. For this reason, our ability to understand and remember things about the world relies on categorization. 

On the flip side, the way we originally learned to categorize things can also affect how we perceive them.3 For example, in Russian, lighter and darker shades of blue have different names (“goluboy” and “siniy,” respectively), whereas in English, we refer to both as “blue.” Research reveals that this difference in categorization affects how people actually perceive the color blue: Russian speakers are faster at discriminating between light and dark blues compared to English speakers. 4

According to one hypothesis of categorization known as prototype theory, we use unconscious mental statistics to figure out what the “average” member of a category looks like. When we are trying to make decisions about unfamiliar things or people, we refer to this average—the prototype—as a representative example of the entire category. There is some interesting evidence to support the idea that humans are somehow able to compute “average” category members like this. For instance, people tend to find faces more attractive the closer they are to the “average” face as generated by a computer. 5

Prototypes guide our estimates about probability, just like in the example where we guessed Steve’s profession. Our prototype for librarians is probably somebody who resembles Steve quite closely—shy, neat, and nerdy—while our prototype for farmers is probably somebody more muscular, more down-to-earth, and less timid. Intuitively, we feel like Steve must be a librarian because we are bound to think in terms of categories and averages.

We overestimate the importance of similarity

The problem with the representativeness heuristic is that it doesn’t actually have anything to do with probability—and yet, we put more value on it than we do on relevant information. One such type of information is base rates: statistics revealing how common something is in the general population. For instance, in the United States there are many more farmers than there are librarians. This means that statistically speaking, it is incorrect that Steve is “more likely” to be a librarian, no matter what his personality is like or how he presents himself. 2
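A back-of-the-envelope Bayes calculation shows how base rates can swamp resemblance. The numbers below are illustrative assumptions, not real occupational statistics: suppose farmers outnumber librarians 20 to 1, and suppose Steve's description fits 40% of librarians but only 10% of farmers.

```python
# Bayes' rule with assumed, illustrative numbers: even though the description
# fits librarians four times better, the 20:1 base rate still favors farmer.
n_librarians, n_farmers = 1, 20           # assumed base-rate ratio
p_desc_lib, p_desc_farm = 0.40, 0.10      # assumed fit of the description

weight_lib = n_librarians * p_desc_lib    # 1 * 0.40 = 0.4
weight_farm = n_farmers * p_desc_farm     # 20 * 0.10 = 2.0
p_librarian = weight_lib / (weight_lib + weight_farm)

print(f"P(librarian | description) = {p_librarian:.2f}")  # 0.17
```

Under these assumptions, Steve is still far more likely to be a farmer, which is exactly the statistical information the heuristic leads us to ignore.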

Sample size is another useful type of information that we often neglect. When estimating a large population based on a sample, we want our sample to be as large as possible to give us a more complete picture. But when we focus too much on representativeness, sample size can end up being crowded out.

To illustrate this, imagine a jar filled with balls. ⅔ of the balls are one color, while ⅓ are another color. Sally draws five balls from the jar, of which four are red and one is white. James draws 20 balls, of which 12 are red and eight are white. Between Sally and James, who should feel more confident that the balls in the jar are ⅔ red and ⅓ white?

Most people say Sally has better odds of being right because the proportion of red balls she drew is larger than the proportion James drew. But this is incorrect: James drew a greater sample of balls than Sally, so he is in a better position to judge the contents of the jar. We are tempted to go for Sally’s 4:1 sample because it is more representative of the ratio we’re looking for than James’s 12:8, but this leads us to an error in our judgment.
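The jar example can be made precise with binomial likelihoods. A minimal sketch, following the setup above (the jar is either ⅔ red or ⅓ red):

```python
from math import comb

def likelihood(red, total, p_red):
    """Binomial probability of drawing `red` red balls in `total` draws."""
    return comb(total, red) * p_red**red * (1 - p_red)**(total - red)

def evidence_ratio(red, total):
    # How much more likely the draw is if the jar is 2/3 red vs 1/3 red;
    # values above 1 favor the "mostly red" hypothesis.
    return likelihood(red, total, 2/3) / likelihood(red, total, 1/3)

sally = evidence_ratio(4, 5)    # 4 red in 5 draws  -> ratio 8
james = evidence_ratio(12, 20)  # 12 red in 20 draws -> ratio 16

print(f"Sally: {sally:.0f}x, James: {james:.0f}x")
assert james > sally  # the bigger sample is the stronger evidence
```

Sally's draw is more "representative" of a 2:1 ratio, yet James's larger sample carries twice the evidential weight.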

Why it is important

Representation is essential for identification and interpretation: it lets us understand something brand new without starting from scratch. Sometimes this novelty exists within ourselves. For example, when exploring our gender or sexual identity, it may be comforting to identify with a new label to understand what we are going through. Other times, this novelty exists within others. For example, if your brother comes out as gay, you might rely on what you know about your queer friends to better grasp his experience.

However, there are two problems with only relying on strict categorization.

First, we may forget to consider uniqueness. Believe it or not, we can fall completely outside of categories—just like non-binary people, who do not feel their gender falls under any strict label. In situations like these, forcing categories onto someone might bring them further away from who they actually are, rather than guiding self-exploration.

Second, many categories have incorrect associations. Many groups are plagued with stereotypes, especially when it comes to minority ones such as LGBTQ+. This means that once we learn which category a person belongs to, we may be more likely to make wrong assumptions about them than correct ones. 

Since the representativeness heuristic encourages us to neglect uniqueness and believe incorrect associations, we must learn to do more than blindly trust categories when making predictions.

How to avoid it

Since categorization is so fundamental to our perception of the world, it is impossible to avoid the representativeness heuristic altogether. However, awareness is a good start. Considerable research demonstrates that when people become aware that they are using a heuristic, they often correct their initial judgment. 10 Pointing out others’ reliance on representativeness, and asking them to do the same for you, provides useful feedback that might help you avoid this bias.

Other researchers have tried to reduce the effects of the representativeness heuristic by encouraging people to “think like statisticians.” These nudges do seem to help, but the problem is that without an obvious cue, people forget to use their statistical knowledge; even academics are no exception. 10

Another strategy with potentially more durability is formal training in logical thinking. In one study, children trained to think more logically were more likely to avoid the conjunction fallacy. 10 With this in mind, learning more about statistics and critical thinking might help us avoid the representativeness heuristic.

How it all started

While categorization is a staple of modern psychology, sorting objects can be traced all the way back to the ancient Greek philosophers. Plato first touched on categories in his Statesman dialogue, and they became a philosophical mainstay for his student Aristotle. In his text aptly titled Categories , Aristotle aimed to sort every object of human apprehension into one of ten categories.

Prototype theory was empirically introduced by psychologist Eleanor Rosch in 1974. Up until this point, categories were thought of in all-or-nothing terms: either something belonged to a category, or it did not. Rosch’s approach recognized that members of a given category often look very different from one another, and that we tend to consider some things to be “better” category members than others. For example, when we think of the category of birds, penguins don’t seem to fit into this group as neatly as, say, a sparrow. The idea of prototypes lets us describe how we perceive certain category members as being more representative of their category than others.

At around the same time, Kahneman and Tversky introduced the concept of the representativeness heuristic as part of their research on strategies that people use to estimate probabilities in uncertain situations. Kahneman and Tversky played a pioneering role in behavioral economics, demonstrating that people make systematic errors in judgment because they rely on biased strategies, including the representativeness heuristic.

Example 1 – Representativeness and stomach ulcers

Stomach ulcers are a relatively common ailment, but they can become serious if left untreated, sometimes even resulting in fatal stomach cancer. For a long time, it was common knowledge that stomach ulcers were caused by one thing: stress. So in the 1980s, when an Australian physician named Barry Marshall suggested at a medical conference that a kind of bacteria might cause ulcers, his colleagues initially rejected the idea out of hand. 11 After being ignored, Marshall finally proved his suspicions using the only method ethically available to him: he took some bacteria from the gut of a sick patient, added it to a broth, and drank it himself. He soon developed a stomach ulcer, and other doctors were finally convinced. 12

Why did it take so long (and such an extreme measure) to persuade others of this new possibility? According to social psychologists Thomas Gilovich and Kenneth Savitsky, the answer is the representativeness heuristic. The physical sensations people experience from a stomach ulcer—burning pains and a churning stomach—are similar to what we feel when we’re experiencing stress. On an intuitive level, we feel like ulcers and stress must have some connection. In other words, stress is a representative cause of an ulcer. 11 This may have been why other medical professionals were so resistant to Marshall’s proposal.

Example 2 – Representativeness and astrology

Gilovich and Savitsky also argue that the representativeness heuristic plays a role in pseudoscientific beliefs, including astrology. In astrology, each zodiac sign is associated with specific traits. For example, Aries, a “fire sign” symbolized by the ram, is often said to be passionate, confident, impatient, and aggressive. The fact that this description meshes well with the prototypical ram is no coincidence: the personality types linked to each star sign were chosen because they represent that sign. 11 The predictions that horoscopes make, rather than foretelling the future, are reverse-engineered based on what best fits with our image of each sign.

The representativeness heuristic is a mental shortcut that we use when deciding whether an object belongs to a class. Specifically, we tend to overemphasize the similarity or difference between the object and class to help us make this decision.

Our perception of people, animals, and objects relies heavily on categorization: grouping similar things together. Within each category exists a prototype: the “average” member that best represents the category as a whole. When we use the representativeness heuristic, we compare something to our category prototype, and if they are similar, we instinctively believe there must be a connection.

Example 1 – Representativeness and stomach ulcers

When an Australian doctor discovered that a bacterium, and not stress, causes stomach ulcers, other medical professionals initially didn’t believe him because ulcers feel so similar to stress. In other words, stress is a more representative cause of an ulcer than bacteria are.

The personality types associated with each star sign in astrology are chosen because they are representative of the animal or symbol of that sign.

To avoid the representativeness heuristic, learn more about statistics and logical thinking, and ask others to point out instances where you might be relying too much on representativeness.

Related TDL articles

Why We See Gambles as Certainties

The representativeness heuristic causes many other biases, including the gambler's fallacy. This article explores the problem of gambling addiction and why it is so difficult to persuade people to stop.

TDL Perspectives: What Are Heuristics?

This interview with The Decision Lab’s Managing Director Sekoul Krastev delves into the history of heuristics, their applications in the real world, and their positive and negative effects. 

  • Bordalo, P., Coffman, K., Gennaioli, N., & Shleifer, A. (2016). Stereotypes. The Quarterly Journal of Economics, 131(4), 1753-1794.
  • Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.
  • Feldman, N. H., Griffiths, T. L., & Morgan, J. L. (2009). The influence of categories on perception: Explaining the perceptual magnet effect as optimal statistical inference. Psychological Review, 116(4), 752-782. https://doi.org/10.1037/a0017196
  • Winawer, J., Witthoft, N., Frank, M. C., Wu, L., Wade, A. R., & Boroditsky, L. (2007). Russian blues reveal effects of language on color discrimination. Proceedings of the National Academy of Sciences, 104(19), 7780-7785.
  • Radvansky, G. A. (2011). Human memory. Prentice Hall.
  • Tversky, A., & Kahneman, D. (1981). Judgments of and by representativeness (No. TR-3). Stanford University, Department of Psychology.
  • Fortune, E. E., & Goodie, A. S. (2012). Cognitive distortions as a component and treatment focus of pathological gambling: A review. Psychology of Addictive Behaviors, 26(2), 298.
  • Donaldson, L. (2017, December 19). When the media misrepresents Black men, the effects are felt in the real world. The Guardian. https://www.theguardian.com/commentisfree/2015/aug/12/media-misrepresents-black-men-effects-felt-real-world
  • Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697.
  • Gilovich, T., & Savitsky, K. (1996, March/April). Like goes with like: The role of representativeness in erroneous and pseudoscientific beliefs. The Skeptical Inquirer, 20(2), 34-30. https://www.researchgate.net/profile/Thomas_Gilovich/publication/288842297_Like_goes_with_like_The_role_of_representativeness_in_erroneous_and_pseudo-scientific_beliefs/links/5799542208ae33e89fb0c80c/Like-goes-with-like-The-role-of-representativeness-in-erroneous-and-pseudo-scientific-beliefs.pdf
  • Weintraub, P. (2010, April 8). The doctor who drank infectious broth, gave himself an ulcer, and solved a medical mystery. Discover Magazine. https://www.discovermagazine.com/health/the-doctor-who-drank-infectious-broth-gave-himself-an-ulcer-and-solved-a-medical-mystery


7.3 Problem Solving

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving and decision making

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

Problem-Solving Strategies

When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them ( Table 7.2 ). For example, a well-known strategy is trial and error . The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
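The recipe-doubling problem from earlier is itself a tiny algorithm: a fixed sequence of steps that yields the same result every time. A minimal sketch (the function and ingredient names here are invented for illustration):

```python
# An algorithm: a fixed, step-by-step procedure that produces the same
# result every time it is run on the same input.

def scale_recipe(ingredients, factor):
    """Multiply every ingredient quantity by the same factor."""
    return {name: amount * factor for name, amount in ingredients.items()}

pizza_dough = {"flour_cups": 3, "water_cups": 1, "yeast_tsp": 2}
doubled = scale_recipe(pizza_dough, 2)
print(doubled)  # {'flour_cups': 6, 'water_cups': 2, 'yeast_tsp': 4}
```

Unlike a heuristic, following these steps exactly is guaranteed to give the correct answer.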

A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
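The departure-time question can be answered by literally working backwards from the end state. A minimal sketch, assuming an extra 30-minute traffic buffer (a figure invented for illustration):

```python
# Working backwards: start from the desired end state (arrive by 3:30 PM)
# and subtract each preceding step to find the required departure time.
from datetime import datetime, timedelta

arrive_by = datetime(2024, 6, 1, 15, 30)     # 3:30 PM, at the wedding
drive_time = timedelta(hours=2, minutes=30)  # D.C. to Philadelphia, no traffic
traffic_buffer = timedelta(minutes=30)       # assumed cushion for I-95 backups

leave_by = arrive_by - drive_time - traffic_buffer
print(leave_by.strftime("%I:%M %p"))  # 12:30 PM
```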

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Everyday Connection

Solving puzzles.

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ( Figure 7.7 ) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.
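A 4×4 sudoku following these rules can also be solved mechanically with trial and error, backtracking whenever a digit leads to a dead end. The starting grid below is an arbitrary example, not the puzzle in Figure 7.7:

```python
# Backtracking solver for a 4x4 sudoku: each digit 1-4 appears exactly once
# per row, column, and 2x2 box (so each automatically sums to 10).

def valid(grid, r, c, d):
    if d in grid[r]:                              # already in this row?
        return False
    if any(grid[i][c] == d for i in range(4)):    # already in this column?
        return False
    br, bc = 2 * (r // 2), 2 * (c // 2)           # top-left of the 2x2 box
    return all(grid[br + i][bc + j] != d
               for i in range(2) for j in range(2))

def solve(grid):
    for r in range(4):
        for c in range(4):
            if grid[r][c] == 0:                   # 0 marks an empty cell
                for d in range(1, 5):
                    if valid(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0            # undo; try the next digit
                return False
    return True                                   # no empty cells: solved

puzzle = [[1, 0, 0, 0],
          [0, 0, 3, 0],
          [0, 4, 0, 0],
          [0, 0, 0, 2]]
solve(puzzle)  # fills `puzzle` in place
```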

Here is another popular type of puzzle ( Figure 7.8 ) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

Take a look at the “Puzzling Scales” logic puzzle below ( Figure 7.9 ). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

Pitfalls to Problem Solving

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but they just need to go to another doorway, instead of trying to get out through the locked doorway. A mental set is the tendency to persist in approaching a problem in a way that has worked in the past, even when it is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. Duncker (1945) conducted foundational research on functional fixedness. He created an experiment in which participants were given a candle, a book of matches, and a box of thumbtacks. They were instructed to use those items to attach the candle to the wall so that it did not drip wax onto the table below. Participants had to overcome functional fixedness to solve the problem ( Figure 7.10 ). During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

Link to Learning

Check out this Apollo 13 scene about NASA engineers overcoming functional fixedness to learn more.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know whether exposure to highly specialized tools, as occurs for individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be exploiting your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in Table 7.3 .

Watch this teacher-made music video about cognitive biases to learn more.

Were you able to determine how many marbles are needed to balance the scales in Figure 7.9 ? You need nine. Were you able to solve the problems in Figure 7.7 and Figure 7.8 ? Here are the answers ( Figure 7.11 ).


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/psychology-2e/pages/1-introduction
  • Authors: Rose M. Spielman, William J. Jenkins, Marilyn D. Lovett
  • Publisher/website: OpenStax
  • Book title: Psychology 2e
  • Publication date: Apr 22, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/psychology-2e/pages/1-introduction
  • Section URL: https://openstax.org/books/psychology-2e/pages/7-3-problem-solving

© Jan 6, 2024 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


Reviewed by Psychology Today Staff

A heuristic is a mental shortcut that allows an individual to make a decision, pass judgment, or solve a problem quickly and with minimal mental effort. While heuristics can reduce the burden of decision-making and free up limited cognitive resources, they can also be costly when they lead individuals to miss critical information or act on unjust biases.

  • Understanding Heuristics
  • Different Heuristics
  • Problems with Heuristics


As humans move throughout the world, they must process large amounts of information and make many choices with limited amounts of time. When information is missing, or an immediate decision is necessary, heuristics act as “rules of thumb” that guide behavior down the most efficient pathway.

Heuristics are not unique to humans; animals use heuristics that, though less complex, also serve to simplify decision-making and reduce cognitive load.

Generally, yes. Navigating day-to-day life requires everyone to make countless small decisions within a limited timeframe. Heuristics can help individuals save time and mental energy, freeing up cognitive resources for more complex planning and problem-solving endeavors.

The human brain and all its processes—including heuristics— developed over millions of years of evolution . Since mental shortcuts save both cognitive energy and time, they likely provided an advantage to those who relied on them.

Heuristics that were helpful to early humans may not be universally beneficial today . The familiarity heuristic, for example—in which the familiar is preferred over the unknown—could steer early humans toward foods or people that were safe, but may trigger anxiety or unfair biases in modern times.


The study of heuristics was developed by renowned psychologists Daniel Kahneman and Amos Tversky. Starting in the 1970s, Kahneman and Tversky identified several different kinds of heuristics, most notably the availability heuristic and the anchoring heuristic.

Since then, researchers have continued their work and identified many different kinds of heuristics, including:

Familiarity heuristic

Fundamental attribution error

Representativeness heuristic

Satisficing

The anchoring heuristic, or anchoring bias, occurs when someone relies more heavily on the first piece of information learned when making a choice, even if it's not the most relevant. In such cases, anchoring is likely to steer individuals wrong.

The availability heuristic describes the mental shortcut in which someone estimates whether something is likely to occur based on how readily examples come to mind. People tend to overestimate the probability of plane crashes, homicides, and shark attacks, for instance, because examples of such events are easily remembered.

People who make use of the representativeness heuristic categorize objects (or other people) based on how similar they are to known entities—assuming someone described as "quiet" is more likely to be a librarian than a politician, for instance.

Satisficing is a decision-making strategy in which the first option that satisfies certain criteria is selected, even if other, better options may exist.


Heuristics, while useful, are imperfect; if relied on too heavily, they can result in incorrect judgments or cognitive biases. Some are more likely to steer people wrong than others.

Assuming, for example, that child abductions are common because they’re frequently reported on the news—an example of the availability heuristic—may trigger unnecessary fear or overprotective parenting practices. Understanding commonly unhelpful heuristics, and identifying situations where they could affect behavior, may help individuals avoid such mental pitfalls.

Sometimes called the attribution effect or correspondence bias, the term describes a tendency to attribute others’ behavior primarily to internal factors—like personality or character—while attributing one’s own behavior more to external or situational factors.

If one person steps on the foot of another in a crowded elevator, the victim may attribute it to carelessness. If, on the other hand, they themselves step on another’s foot, they may be more likely to attribute the mistake to being jostled by someone else.

Listen to your gut, but don’t rely on it. Think through major problems methodically—by making a list of pros and cons, for instance, or consulting with people you trust. Make extra time to think through tasks where snap decisions could cause significant problems, such as catching an important flight.



17.3: The Representativeness Heuristic

  • Jason Southworth & Chris Swoyer
  • Fort Hays State University & University of Oklahoma

Mike is 6’2”, weighs over 200 lbs. (most of it muscle), lettered in two sports in college, and is highly aggressive. Which is more likely?

  • Mike is a pro football player.
  • Mike works in a bank.

Here, we are given several details about Mike; the profile includes his size, build, record as an athlete, and aggressiveness. We are then asked about the relative frequency of people with this profile who are pro football players, compared to those with the profile who are bankers.

What was your answer? There are almost certainly more bankers who fit the profile for the simple reason that there are so many more bankers than professional football players. We will return to this matter later in this chapter; the relevant point here is that Mike seems a lot more like our picture of a typical pro football player than like our typical picture of a banker. And this can lead us to conclude that he is more likely to be a pro football player.
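A quick back-of-the-envelope calculation shows why base rates dominate here. All of the numbers below are rough assumptions invented purely to illustrate the point; they are not real statistics:

```python
# Even if a much larger FRACTION of pro players fits Mike's profile,
# the far larger NUMBER of bankers means most profile-matchers are bankers.

pro_players = 1_700        # assumed count of pro football players
bankers = 2_000_000        # assumed count of bank employees

fit_rate_players = 0.50    # assumed share of players matching the profile
fit_rate_bankers = 0.01    # assumed share of bankers matching it

players_who_fit = pro_players * fit_rate_players   # 850
bankers_who_fit = bankers * fit_rate_bankers       # 20,000

p_banker_given_fit = bankers_who_fit / (bankers_who_fit + players_who_fit)
print(round(p_banker_given_fit, 3))  # 0.959
```

Under these assumed figures, someone fitting the profile is still more than twenty times as likely to be a banker as a pro player.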

Many of us made just this sort of error with Linda. Linda, you may recall, is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and she participated in antinuclear demonstrations. Based on this description, you were asked whether it is more likely that Linda is (i) a bank teller or (ii) a bank teller who is active in the feminist movement. Although the former is more likely, many people commit the conjunction fallacy and conclude that the latter is more probable.

What could lead to this mistake? Various factors probably play some role, but a major part of the story seems to be this. The description of Linda fits our profile (or stereotype) of someone active in today’s feminist movement. Linda strongly resembles (what we think of as) a typical or representative member of the movement. And because she resembles the typical or representative feminist, we think that she is very likely to be a feminist. Indeed, we may think this is so likely that we commit the conjunction fallacy.
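The conjunction fallacy violates a simple rule of probability: the chance of "A and B" can never exceed the chance of A alone. The probabilities below are arbitrary assumed values; the inequality holds no matter what numbers are used:

```python
# Conjunction rule: P(A and B) = P(A) * P(B given A) <= P(A),
# because P(B given A) can never exceed 1.

p_teller = 0.05                   # assumed P(Linda is a bank teller)
p_feminist_given_teller = 0.90    # assumed, deliberately high

p_teller_and_feminist = p_teller * p_feminist_given_teller
assert p_teller_and_feminist <= p_teller
print(round(p_teller_and_feminist, 3))  # 0.045
```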

We use the representativeness heuristic when we conclude that the more like a representative or typical member of a category something is, the more likely it is to be a member of that category. Put in slightly different words, the likelihood that x is an A depends on the degree to which x resembles your typical A. We reason like this: x seems a lot like your typical A; therefore, x probably is an A.

Sometimes this pattern of inference works, but it can also lead to very bad reasoning. For example, Linda resembles your typical feminist (or at least a stereotype of a typical feminist), so many of us conclude that she is likely to be a feminist. Mike resembles our picture of a pro football player, so many of us conclude that he probably is one. The cases differ because with Linda we go on to make a judgment about the probability of a conjunction, but with both Linda and Mike, we are misusing the representativeness heuristic.

Overreliance on the representativeness heuristic may be one of the reasons why we are tempted to commit the gambler’s fallacy. You may believe that the outcomes of flips of a given coin are random; the outcomes of later flips aren’t influenced by those of earlier flips. Then you are asked whether the sequence HTHHTHTT is more likely than HHHHTTTT. The first sequence may seem much more like our conception of a typical random outcome (one without any clear pattern), and so, we conclude that it is more likely. Here the representativeness heuristic leads us to judge things that strike us as representative or normal to be more likely than things that seem unusual.
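The arithmetic behind this is straightforward: every specific sequence of eight fair flips has exactly the same probability, however "patterned" it looks. A short check:

```python
# Any exact sequence of n fair coin flips has probability (1/2)^n,
# so HTHHTHTT and HHHHTTTT are equally likely.
from fractions import Fraction

def sequence_probability(seq):
    """Probability of observing one exact sequence of fair flips."""
    return Fraction(1, 2) ** len(seq)

assert sequence_probability("HTHHTHTT") == sequence_probability("HHHHTTTT")
print(sequence_probability("HHHHTTTT"))  # 1/256
```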

Specificity Revisited

We have seen that the more detailed and specific a description of something is, the less likely that thing is to occur. The probability of a quarter’s landing heads is 1/2, the probability of its landing heads with Washington looking north is considerably less. But as a description becomes more specific, the thing described often becomes more concrete and easier to picture, and the added detail can make something seem more like our picture of a typical member of a given group.

In Linda’s case, we add the claim that she is active in the feminist movement to the simple claim that she is a bank teller. The resulting profile resembles our conception of a typical feminist activist, and this makes it seem likely that she is a feminist activist. This in turn makes it seem more likely that she is a bank teller and a feminist activist than that she is just a bank teller. But the very detail we add makes our claim, the conjunction, less probable than the simple claim that Linda is a bank teller.

In short, if someone fits our profile (which may be just a crude stereotype) of the average, typical, or representative kidnapper, scrap-booker, or computer nerd, we are likely to weigh this fact more heavily than we should in estimating the probability that they are a kidnapper, scrap-booker, or computer nerd. This is fallacious, because in many cases there will be many people who fit the relevant profile who are not members of the group.


5.8 Biases and Errors in Thinking

5 min read • December 22, 2022

Dalia Savy

Haseung Jun

Sadiyya Holsey



Errors in Problem Solving

Because of our mental concepts and other processes, we may be biased or think of situations without an open mind. Let's discuss what those other processes are.

Fixation is thinking from only one point of view: the inability to approach a situation from different perspectives 👀 Fixation is often used interchangeably with mental set.

Functional Fixedness 

Functional fixedness is the tendency to only think of the familiar functions of an object.

An example of functional fixedness would be the candle problem. Individuals were given a box of thumbtacks, matches 🔥, and a candle 🕯️. Then they were asked to put the candle on the wall in a way that the candle wax would not drip while it was lit.

Most of the subjects were unable to solve the problem. Some tried to solve it by trying to pin the candle on the wall with a thumbtack. The successful method was to attach the box to the wall using the thumbtacks. Then, put the candle in the box to light it.

Because of functional fixedness , individuals were unsuccessful because they couldn't understand how a box 📦 can be more than just a container for something.

The following two heuristics can lead us to make poor decisions and snap judgments, which degrade our thinking.

Availability Heuristic

The availability heuristic is the tendency to judge something by the examples that come to mind most easily. When someone asks you "What is the first thing that comes to mind when you think of . . .," you are using the availability heuristic .

Rather than thinking further about a topic, you make assumptions based on the first thing that comes to your mind (the first readily available concept in your mind).

This makes us fear the wrong things. Many parents may not let their children walk to school 🏫 because the only thing they can think of is that one kid who went missing ⚠️ Because this is the very first thing that comes to their mind, they fear their children will suffer the same fate.

Therefore, we really fear what is readily in our memory.

Image Courtesy of The Decision Lab.

Representativeness Heuristic

The representativeness heuristic is when you judge something based on how well it matches your prototype. This leads us to ignore other relevant information and is the root of many stereotypes.

For example, if someone was asked to decide which of two people most likely went to an Ivy League school (a truck driver 🚚 or a professor 👩‍🏫👨‍🏫), most people would say the professor. This doesn't mean the professor actually went to an Ivy League school; it is just stereotyping based on our prototype of a person who goes to an Ivy.

There are so many different types of biases and we experience each and every one of them in our everyday lives.

Confirmation Bias 

Confirmation bias is the tendency of individuals to support or search for information that aligns with their opinions and ignore information that doesn't. This eventually leads us to be more polarized ⬅️➡️ as individuals, and is another way of experiencing fixation .

A key example is how many Republicans 🔴 watch Fox News to view a channel that confirms their political beliefs. People really dislike it when others have differing opinions, and they continue to seek out information that backs up their own beliefs.

Belief Perseverance and Belief Bias

Belief perseverance is the tendency to hold onto a belief even if it has lost its credibility. This is different from belief bias , which is the tendency for our preexisting beliefs to distort logical thinking, making logical conclusions look illogical.

Halo Effect 

The halo effect is when positive impressions of people lead to positive views about their character and personality traits. For example, if you see someone as attractive, you may think of them as having better personality traits and character, even though this isn't necessarily true.

Self-Serving Bias 

Self-serving bias is when a person attributes positive outcomes to their own doing and negative outcomes to external factors.

For example, if you do well on a test 💯 you may think it makes sense, because you did a good job of studying to prepare for the exam. But if you fail the test, you may put the blame on the teacher for not teaching all the material or for making the test too hard.

Attentional Bias 

Attentional bias is when people’s perceptions are influenced by recurring thoughts.

For example, if marine biology has been on your mind a lot lately, your conversations may include references to marine biology. You would also be more likely to notice information that relates to your thoughts (marine biology).

Actor-observer Bias

Actor-observer bias is when a person might attribute their own actions to external factors and the actions of others to internal factors.

For example, if you see someone else litter, you might think about how people are careless. But if you litter, you might say it was because there was no trash can🗑️ within sight.

Anchoring Bias 

Anchoring bias is when an individual relies heavily on the first piece of information given when making a decision. The first piece of information acts as an anchor against which all subsequent information is compared.

Hindsight Bias

Hindsight bias is when you think you knew something all along after the outcome has occurred. People overestimate their ability to have predicted a certain outcome even if it couldn't possibly have been predicted. People often say, "I knew it all along."


Framing

Framing is the way we present an issue, and it can be a very powerful persuasion tool that impacts decisions and judgments.

For example, a doctor could say one of two things about a surgery:

10% of people die 😲

90% of people survive 😌

Both statements describe the same outcome, but "10% of people die" sounds much scarier than "90% of people survive." Framing is a very important tool!


Key Terms to Review (17)

Actor-Observer Bias

Anchoring Bias

Attentional Bias

Belief Bias

Belief Perseverance

Candle Problem

Confirmation Bias

Functional Fixedness

Halo Effect

Self-Serving Bias

© 2024 Fiveable Inc. All rights reserved.


  • Review Article
  • Open access
  • Published: 17 February 2023

A brief history of heuristics: how did research on heuristics evolve?

  • Mohamad Hjeij   ORCID: orcid.org/0000-0003-4231-1395 1 &
  • Arnis Vilks 1  

Humanities and Social Sciences Communications, volume 10, Article number: 64 (2023)


Heuristics are often characterized as rules of thumb that can be used to speed up the process of decision-making. They have been examined across a wide range of fields, including economics, psychology, and computer science. However, scholars still struggle to find substantial common ground. This study provides a historical review of heuristics as a research topic before and after the emergence of the subjective expected utility (SEU) theory, emphasising the evolutionary perspective that considers heuristics as resulting from the development of the brain. We find it useful to distinguish between deliberate and automatic uses of heuristics, but point out that they can be used consciously and subconsciously. While we can trace the idea of heuristics through many centuries and fields of application, we focus on the evolution of the modern notion of heuristics through three waves of research, starting with Herbert Simon in the 1950s, who introduced the notion of bounded rationality and suggested the use of heuristics in artificial intelligence, thereby paving the way for all later research on heuristics. A breakthrough came with Daniel Kahneman and Amos Tversky in the 1970s, who analysed the biases arising from using heuristics. The resulting research programme became the subject of criticism by Gerd Gigerenzer in the 1990s, who argues that an ‘adaptive toolbox’ consisting of ‘fast-and-frugal’ heuristics can yield ‘ecologically rational’ decisions.


Introduction

Over the past 50 years, the notion of ‘heuristics’ has considerably gained attention in fields as diverse as psychology, cognitive science, decision theory, computer science, and management scholarship. While for 1970, the Scopus database finds a meagre 20 published articles with the word ‘heuristic’ in their title, the number has increased to no less than 3783 in 2021 (Scopus, 2022 ).

We take this to be evidence that many researchers in the aforementioned fields find the literature that refers to heuristics stimulating and that it gives rise to questions that deserve further enquiry. While there are some review articles on the topic of heuristics (Gigerenzer and Gaissmaier, 2011 ; Groner et al., 1983 ; Hertwig and Pachur, 2015 ; Semaan et al., 2020 ), a somewhat comprehensive and non-partisan historical review seems to be missing.

While interest in heuristics is growing, the very notion of heuristics remains elusive to the point that, e.g., Shah and Oppenheimer ( 2008 ) begin their paper with the statement: ‘The word “heuristic” has lost its meaning.’ Even if one leaves aside characterizations such as ‘rule of thumb’ or ‘mental shortcut’ and considers what Kahneman ( 2011 ) calls ‘the technical definition of heuristic,’ namely ‘a simple procedure that helps find adequate, though often imperfect, answers to difficult questions,’ one is immediately left wondering how simple it has to be, what an adequate, but imperfect, answer is, and how difficult the questions need to be, in order to classify a procedure as a heuristic. Shah and Oppenheimer conclude that ‘the term heuristic is vague enough to describe anything’.

However, one feature does distinguish heuristics from certain other, typically more elaborate procedures: heuristics are problem-solving methods that do not guarantee an optimal solution. The use of heuristics is, therefore, inevitable where no method to find an optimal solution exists or is known to the problem-solver, in particular where the problem and/or the optimality criterion is ill-defined. However, the use of heuristics may be advantageous even where the problem to be solved is well-defined and methods do exist which would guarantee an optimal solution. This is because definitions of optimality typically ignore constraints on the process of solving the problem and the costs of that process. Compared to infallible but elaborate methods, heuristics may prove to be quicker or more efficient.
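This trade-off can be made concrete with a toy travelling-salesman instance (our own illustration, not drawn from the literature reviewed here): a greedy nearest-neighbour rule is a textbook heuristic that is fast but fallible, while exhaustive enumeration guarantees the optimum at factorial cost. A minimal Python sketch, with all data and names purely illustrative:

```python
from itertools import permutations

# Symmetric distance matrix for a toy 4-city instance (illustrative data).
DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 8],
    [10, 4, 8, 0],
]

def tour_length(tour):
    """Total length of a closed tour visiting every city once."""
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def nearest_neighbour(start=0):
    """Greedy heuristic: always move to the closest unvisited city.
    Quick (O(n^2)) but offers no optimality guarantee."""
    unvisited = set(range(len(DIST))) - {start}
    tour = [start]
    while unvisited:
        tour.append(min(unvisited, key=lambda c: DIST[tour[-1]][c]))
        unvisited.remove(tour[-1])
    return tour

def brute_force():
    """Infallible but elaborate: inspect all (n-1)! closed tours."""
    return min((list((0,) + p) for p in permutations(range(1, len(DIST)))),
               key=tour_length)

heuristic = nearest_neighbour()
exact = brute_force()
# The heuristic tour can never be shorter than the true optimum, and on
# some instances it is strictly longer.
assert tour_length(heuristic) >= tour_length(exact)
```

On this particular instance the greedy rule happens to hit the optimum; larger or adversarial instances show it falling short, which is exactly the fallibility that characterizes a heuristic.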

Nevertheless, the range of what has been called heuristics is very broad. Application of a heuristic may require intuition, guessing, exploration, or experience; some heuristics are rather elaborate, others are truly shortcuts, some are described in somewhat loose terms, and others are well-defined algorithms.

One procedure of decision-making that is commonly not regarded as a heuristic is the application of the full-blown theory of subjective expected utility (SEU) in the tradition of Ramsey ( 1926 ), von Neumann and Morgenstern ( 1944 ), and Savage ( 1954 ). This theory is arguably spelling out what an ideally rational decision would be, but was already seen by Savage (p. 16) to be applicable only in what he called a ‘small world’. Quite a few approaches that have been called heuristics have been explicitly motivated by SEU imposing demands on the decision-maker, which are utterly impractical (cf., e.g., Klein, 2001 , for a discussion). As a second defining feature of the heuristics we want to consider, therefore, we take them to be procedures of decision-making that differ from the ‘gold standard’ of SEU by being practically applicable in at least a number of interesting cases. Along with SEU, we also leave aside the rules of deductive logic, such as Aristotelian syllogisms, modus ponens, modus tollens, etc. While these can also be seen as rules of decision-making, and the universal validity of some of them is not quite uncontroversial (see, e.g., Priest, 2008 , for an introduction to non-classical logic), they are widely regarded as ‘infallible’. By stark contrast, it seems characteristic for heuristics that their application may fail to yield a ‘best’ or ‘correct’ result.

By taking heuristics to be practically applicable, but fallible, procedures for problem-solving, we will also neglect the literature that focuses on the adjective ‘heuristic’ instead of on the noun. When, e.g., Suppes ( 1983 ) characterizes axiomatic analyses as ‘heuristic’, he is not suggesting any rule, but he is saying that heuristic axioms ‘seem intuitively to organize and facilitate our thinking about the subject’ (p. 82), and proceeds to give examples of both heuristic and nonheuristic axioms. It may of course be said that many fundamental equations in science, such as Newton’s force = mass*acceleration, have some heuristic value in the sense indicated by Suppes, but the research we will review is not about the property of being heuristic.

Given that heuristics can be assessed against the benchmark of SEU, one may distinguish broadly between heuristics suggested pre-SEU, i.e., before the middle of the 20th century, and the later research on heuristics that had to face the challenge of an existing theory of allegedly rational decision-making. We will review the former in the section “Deliberate heuristics—the art of invention” below, and devote sections “Herbert Simon: rationality is bounded”, “Heuristics in computer science” and “Daniel Kahneman and Amos Tversky: heuristics and biases” to the latter.

To cover the paradigmatic cases of what has been termed ‘heuristics’ in the literature, we have to take ‘problem-solving’ in a broad sense that includes decision-making and judgement, but also automatic, instinctive behaviour. We, therefore, feel that an account of research on heuristics should also review the main views on how observable behaviour patterns in humans—or maybe animals in general—can be explained. This we do in the section “Automatic heuristics: learnt or innate?”.

While our brief history cannot aim for completeness, we selected the scholars to be included based on their influence and contributions to different fields of research related to heuristics. Our focus, however, will be on the more recent research that may be said to begin with Herbert Simon.

That problem-solving according to SEU will, in general, be impractical, was clearly recognized by Herbert Simon, whose notion of bounded rationality we look at in the section “Herbert Simon: rationality is bounded”. In the section “Heuristics in computer science”, we also consider heuristics in computer science, where the motivation to use heuristics is closely related to Simon’s reasoning. In the section “Daniel Kahneman and Amos Tversky: heuristics and biases”, we turn to the heuristics identified and analysed by Kahneman and Tversky; while their assessment was primarily that the use of those heuristics often does not conform to rational decision-making, the approach by Gigerenzer and his collaborators, reviewed in the section “Gerd Gigerenzer: fast-and-frugal heuristics” below, takes a much more affirmative view on the use of heuristics. Section “Critiques” explains the limitations and critiques of the corresponding ideas. The final section “Conclusion” contains the conclusion, discussion, and avenues for future research.

The evolutionary perspective

While we focus on the history of research on heuristics, it is clear that animal behaviour patterns evolved and were shaped by evolutionary forces long before the human species emerged. Thus ‘heuristics’ in the mere sense of behaviour patterns have been used long before humans engaged in any kind of conscious reflection on decision-making, let alone systematic research. However, evolution endowed humans with brains that allow them to make decisions in ways that are quite different from animal behaviour patterns. According to Gibbons ( 2007 ), the peculiar evolution of the human brain started hundreds of thousands of years ago when ancient humans discovered fire and started cooking food, which reduced the amount of energy the body needed for digestion. This paved the way for a smaller intestinal tract and implied that the excess calories led to the development of larger tissues and eventually a larger brain. Through this organ, intelligence increased exponentially, resulting in advanced communication that allowed Homo sapiens to collaborate and form relationships that other primates at the time could not match. According to Dunbar ( 1998 ), it was in the time between 400,000 and 100,000 years ago that abilities to hunt more effectively took humans from the middle of the food chain right to the top.

It does not seem to be known when and how exactly the human brain developed the ability to reflect on decisions made consciously, but it is now widely recognized that in addition to the fast, automatic, and typically nonconscious type of decision-making that is similar to animal behaviour, humans also employ another, rather different type of decision-making that can be characterized as slow, conscious, controlled, and reflective. The former type is known as ‘System 1’ or ‘the old mind’, and the latter as ‘System 2’ or ‘the new mind’ (Evans, 2010 ; Kahneman, 2011 ), and both systems have clearly evolved side by side throughout the evolution of the human brain. According to Gigerenzer ( 2021 ), humans as well as other organisms evolved to acquire what he calls ‘embodied heuristics’, rules of thumb that can be either innate or learnt, which in turn supply the agility to respond to the lack of information by fast judgement. The ‘embodied heuristics’ use the mental capacity that includes the motor and sensory abilities that start to develop from the moment of birth.

While a detailed discussion of the ‘dual-process theories’ of the mind is beyond the scope of this paper, we find it helpful to point out that one may distinguish between ‘System 1 heuristics’ and ‘System 2 heuristics’ (Kahneman 2011 , p. 98). While some ‘rules of decision-making’ may be hard-wired into the human species by its genes and physiology, others are complicated enough that their application typically requires reflection and conscious mental effort. Upon reflection, however, the two systems are not as separate as they may seem. For example, participants in the Mental Calculation World Cup perform mathematical tasks instantly, whereas ordinary people would need a pen and paper or a calculator. Today, many people cannot multiply large numbers or calculate a square root using only a pen and paper but can easily do this using the calculator app on their smartphone. Thus, what can be done by spontaneous effortless calculation by some, may for others require the application of a more or less complicated theory.

Nevertheless, one can loosely characterize the heuristics that have been explained and recommended for more or less well-specified purposes over the course of history as System 2 or deliberate heuristics.

Deliberate heuristics—the art of invention

Throughout history, scholars have investigated methods to solve complex tasks. In this section, we review those attempts to formulate ‘operant and voluntary’ heuristics to solve demanding problems—in particular, to generate new insights or do research in more or less specified fields. Most of the heuristics in this section have been suggested before the emergence of the SEU theory and the associated modern definition of rationality, and none of them deals with the kind of decision problems that are assumed as ‘given’ in the SEU model. The reader will notice that some historical heuristics were suggested for problems that, today, may seem too general to be solved. However, through the development of such attempts, later scholars were inspired to develop a more concrete understanding of the notion of heuristics.

The Greek origin

The term heuristic originates from the Greek verb heurísko , which means to discover or find out. The Greek word heúrēka , allegedly exclaimed by Archimedes when discovering how to measure the volume of a random object through water, derives from the same verb and can be translated as I found it! (Pinheiro and McNeill, 2014 ). Heuristics can thus be said to be etymologically related to the discipline of discovery, the branch of knowledge based on investigative procedures, and are naturally associated with trial techniques, including what-if scenarios and simple trial and error.

While the term heurísko does not seem to be used in this context by Aristotle, his notion of induction ( epagôgê ) can be seen as a method to find, but not prove, true general statements and thus as a heuristic. At any rate, Aristotle considered inductive reasoning as leading to insights and as distinct from logically valid syllogisms (Smith, 2020 ).

Pappus (4th century)

While a brief, somewhat cryptic, mention of analysis and synthesis appears in Book 13 of some, but not all, editions of Euclid’s Elements, a clearer explanation of the two methods was given in the 4th century by the Greek mathematician and astronomer Pappus of Alexandria (cf. Heath, 1926 ; Polya, 1945 ; Groner et al., 1983 ). While synthesis is what today would be called deduction from known truths, analysis is a method that can be used to try and find proof. Two slightly different explanations are given by Pappus. They boil down to this: in order to find proof for a statement A, one can deduce another statement B from A, continue by deducing yet another statement C from B, and so on, until one comes upon a statement T that is known to be true. If all the inferences are convertible, the converse deductions evidently constitute a proof of A from T. While Pappus did not mention the condition that the inferences must be convertible, his second explanation of analysis makes it clear that one must be looking for deductions from A which are both necessary and sufficient for A. In Polya’s paraphrase of Pappus’ text: ‘We enquire from what antecedent the desired result could be derived; then we enquire again what could be the antecedent of that antecedent, and so on, until passing from antecedent to antecedent, we come eventually upon something already known or admittedly true.’ Analysis thus described is hardly a ‘shortcut’ or ‘rule of thumb’, but quite clearly it is a heuristic: it may help to find a proof of A, but it may also fail to do so…
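Pappus' regressive analysis survives in modern computing as what AI calls backward chaining: to establish a goal statement, look for an inference leading to it, then for one leading to that antecedent, and so on, until a known truth is reached, or the search fails. A minimal Python sketch of this reading; the rule base and all names are our own illustrative assumptions, not from Pappus:

```python
# Toy rule base: each statement maps to the antecedent(s) it can be deduced
# from; inferences are assumed convertible, as Pappus' method requires.
RULES = {
    "A": ["B"],      # A holds iff B holds
    "B": ["C"],      # B holds iff C holds
    "C": ["T"],      # C holds iff T holds
}
KNOWN_TRUE = {"T"}   # statements 'known or admittedly true'

def analyse(goal, seen=frozenset()):
    """Work backwards from `goal` to a known truth.
    Returns the chain of statements, or None if the analysis fails."""
    if goal in KNOWN_TRUE:
        return [goal]
    if goal in seen or goal not in RULES:
        return None  # like Pappus' analysis, the method may simply fail
    for antecedent in RULES[goal]:
        chain = analyse(antecedent, seen | {goal})
        if chain is not None:
            return [goal] + chain
    return None

# Analysis descends A <- B <- C <- T; reading the chain in reverse gives
# the synthesis, i.e., the proof of A from T.
print(analyse("A"))  # → ['A', 'B', 'C', 'T']
```

Because every inference is convertible, reversing the discovered chain yields a valid proof; with non-convertible rules, the reversal step would fail, which is precisely the caveat in Pappus' second explanation.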

Al-Khawarizmi (9th century)

In the 9th century, the Persian thinker Mohamad Al-Khawarizmi, who resided in Baghdad’s centre of knowledge or the House of Wisdom , used stepwise methods for problem-solving. Thus, after his name and findings, the algorithm concept was derived (Boyer, 1991 ). Although a heuristic orientation has sometimes been contrasted with an algorithmic one (Groner and Groner, 1991 ), it is worth noting that an algorithm may well serve as a heuristic—certainly in the sense of a shortcut, and also in the sense of a fallible method. After all, an algorithm may fail to produce a satisfactory result. We will return to this issue in the section “Heuristics in computer science” below.

Zairja (10th century)

Heuristic methods were created by medieval polymaths in their attempts to find solutions for the complex problems they faced—science not yet being divorced from what today would appear as theology or astrology. Perhaps the first tangible example of a heuristic based on a mechanical device was using an ancient tool called a zairja , which Arab astrologers employed before the 11th century (Ritchey, 2022 ). It was designed to reconfigure notions into ideas through randomization and resonance and thus to produce answers to questions mechanically (Link, 2010 ). The word zairja may have originated from the Persian combination zaicha-daira , which means horoscope-circle. According to Ibn Khaldoun, ‘zairja is the technique of finding out answers from questions by means of connections existing between the letters of the expressions used in the question; they imagine that these connections can form the basis for knowing the future happenings they want to know’ (Khaldun, 1967 ).

Ramon Llull (1305)

The Majorcan philosopher Ramon Llull (or Raimundus Lullus), who was exposed to the Arabic culture, used the zairja as a starting point for his ars inveniendi veritatem that was meant to complement the ars demonstrandi of medieval Scholastic logic and on which he worked from around 1270–1305 (Link, 2010 ; Llull, 1308 ; Ritchey, 2022 ) when he finished his Ars Generalis Ultima (or Ars Magna ). Llull transformed the astrological and combinatorial components of the zairja into a religious system that took the fundamental ideas of the three Abrahamic faiths of Islam, Christianity, and Judaism and analysed them through symbolic and numeric reasoning. Llull tried to broaden his theory across all fields of knowledge and combine all sciences into a single science that would address all human problems. His thoughts impacted great thinkers, such as Leibniz, and even the modern theory of computation (Fidora and Sierra, 2011 ). Llull’s approach may be considered a clear example of heuristic methods applied to complicated and even theological questions (Hertwig and Pachur, 2015 ).

Joachim Jungius (1622)

Arguably, the German mathematician and philosopher Joachim Jungius was the first to use the terminology heuretica in a call to establish a research society in 1622. Jungius distinguished between three degrees or levels of learning and cognition: empirical, epistemic, and heuristic. Those who have reached the empirical level believe that what they have learned is true because it corresponds to experience. Those who have reached the epistemic level know how to derive their knowledge from principles with rigorous evidence. But those who have reached the highest level, the heuristic level, have a method of solving unsolved problems, finding new theorems, and introducing new methods into science (Ritter et al., 2017 ).

René Descartes (1637)

In 1637, the French philosopher René Descartes published his Discourse on Method (one of the first major works not written in Latin). Descartes argued that humans could utilize mathematical reasoning as a vehicle for progress in knowledge. He proposed four simple steps to follow in problem-solving. First, to accept as true only what is indubitable. Next, divide the problem into as many smaller subproblems as possible and helpful. After that, to conduct one’s thoughts in an orderly fashion, beginning with the simplest and gradually ascending to the most complex. And finally, to make enumerations so complete that one is assured of having omitted nothing (Descartes, 1998 ). In reference to his other methods, Descartes ( 1908 ) started working on the proper heuristic rules to transform every problem, when possible, into algebraic equations, thus creating a mathesis universalis or universal science. In his unfinished ‘Rules for the Direction of the Mind’ or Regulae ad directionem ingenii , Descartes suggested 21 heuristic rules (of planned 36) for scientific research like simplifying the problem, rewriting the problem in geometrical shape, and identifying the knowns and the unknowns. Although Leibniz criticized the rules of Descartes for being too general (Leibniz, 1880 ), this treatise outlined the basis for later work on complex problems in several disciplines.
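Descartes' four rules read, to a modern eye, like the divide-and-conquer pattern of algorithm design. As a purely illustrative analogy (not anything Descartes wrote), a merge sort can be annotated with his steps:

```python
def merge_sort(xs):
    """Illustrative analogy only: Descartes' four rules mapped onto a
    classic divide-and-conquer algorithm."""
    if len(xs) <= 1:
        return xs                  # rule 1: accept only the indubitable base case
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # rule 2: divide into smaller subproblems
    right = merge_sort(xs[mid:])
    merged = []                    # rule 3: ascend from the simplest parts
    i = j = 0                      #         to the solution of the whole
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # rule 4: complete the enumeration so that nothing is omitted
    return merged + left[i:] + right[j:]

print(merge_sort([5, 1, 4, 2, 3]))  # → [1, 2, 3, 4, 5]
```

The mapping is loose, of course, but it shows why Descartes' decomposition heuristic is often cited as an ancestor of systematic problem-solving methods in computing.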

Gottfried Wilhelm Leibniz (1666)

Influenced by the ideas of Llull, Jungius, and Descartes, the Prussian–German polymath Gottfried Wilhelm Leibniz suggested an original approach to problem-solving in his Dissertatio de Arte Combinatoria , published in Leipzig in 1666. His aim was to create a new universal language into which all problems could be translated and a standard solving procedure that could be applied regardless of the type of the problem. Leibniz also defined an ars inveniendi as a method for finding new truths, distinguishing it from an ars iudicandi , a method to evaluate the validity of alleged truths. Later, in 1673, he invented the calculating machine that could execute all four arithmetic operations and thus find ‘new’ arithmetic truths (Pombo, 2002 ).

Bernard Bolzano ( 1837 )

In 1837, the Czech mathematician and philosopher Bernard Bolzano published his four-volume Wissenschaftslehre (Theory of Science). The fourth part of his theory, which he called ‘Erfindungskunst’ or the art of invention, mentions in its introductory section 322 that ‘heuristic’ is just the Greek translation. Bolzano explains that the rules he is going to state are not at all entirely new, but instead have always been used ‘by the talented’—although mostly not consciously. He then explains 13 general and 33 special rules one should follow when trying to find new truths. Among the general rules are, e.g., that one should first decide on the question one wants to answer, and the kind of answer one is looking for (section 325), or that one should choose suitable symbols to represent one’s ideas (section 334). Unlike the general rules, the special ones are meant to be helpful for special mental tasks only. E.g., in order to solve the task of finding the reason for any given truth, Bolzano advises first to analyse or dissect the truth into its parts and then use those to form truths which are simpler than the given one (section 378). Another example is Bolzano’s special rule 28, explained in section 386, which is meant to help identify the intention behind a given action. To do so, Bolzano advises exploring the agent’s beliefs about the effects of his action at the time he decided to act, and explains that this will require investigating the agent’s knowledge, his degree of attention and deliberation, any erroneous beliefs the agent may have had, and ‘many other circumstances’. Bolzano continues to point out that any effect the agent may have expected to result from his action will not be an intended one if he considered it neither as an obligation nor as advantageous. While Bolzano’s rules can hardly be considered as ‘shortcuts’, he mentions again and again that they may fail to solve the task at hand adequately (cf. Hertwig and Pachur, 2015 ; Siitonen, 2014 ).

Frank Ramsey ( 1926 )

In Ramsey’s pathbreaking paper on ‘Truth and Probability’ which laid the foundation of subjective probability theory, a final section that has received little attention in the literature is devoted to inductive logic. While he does not use the word ‘heuristic’, he characterizes induction as a ‘habit of the mind,’ explaining that he uses ‘habit in the most general possible sense to mean simply rule or the law of behaviour, including instinct,’ but also including ‘acquired rules.’ Ramsey gives the following pragmatic justification for being convinced by induction: ‘our conviction is reasonable because the world is so constituted that inductive arguments lead on the whole to true opinions,’ and states more generally that ‘we judge mental habits by whether they work, i.e., whether the opinions they lead to are for the most part true, or more often true than those which alternative habits would lead to’ (Ramsey, 1926 ). In modern terminology, Ramsey was pointing out that mental habits—such as inductive inference—may be more or less ‘ecologically rational’.

Karl Duncker ( 1935 )

Karl Duncker was a pioneer in the experimental investigation of human problem-solving. In his 1935 book Zur Psychologie des produktiven Denkens , he discussed both heuristics that help to solve problems and hindrances that may block the solution of a problem, and reported on a number of experimental findings. Among the heuristics were a situational analysis with the aim of uncovering the reasons for the gap between the status quo and the problem-solver’s goal, analysis of the goal itself, and of sacrifices the problem-solver is willing to make, of prerequisites for the solution, and several others. Among the hindrances to problem-solving was what Duncker called functional fixedness, illustrated by the famous candle problem, in which he asked the participants to fix a candle to the wall and light it without allowing the wax to drip. The available tools were a candle, matches, and a box filled with thumbtacks. The solution was to empty the box of thumbtacks, fix the empty box to the wall using the thumbtacks, put the candle in the box, and finally light the candle. Participants who were given the empty box as a separate item could solve this problem, while those given the box filled with thumbtacks struggled to find a solution. Through this experiment, Duncker illustrated an inability to think outside the box and the difficulty in using a device in a way that is different from the usual one (Glaveanu, 2019 ). Duncker emphasized that success in problem-solving depends on a complementary combination of both the internal mind and the external problem structure (cf. Groner et al., 1983 ).

George Polya ( 1945 )

The Hungarian mathematician George Polya can be aptly called the father of problem-solving in modern mathematics and education. In his 1945 book, How to Solve it , Polya writes that ‘heuristic…or ‘ ars inveniendi ’ was the name of a certain branch of study…often outlined, seldom presented in detail, and as good as forgotten today’ and he attempts to ‘revive heuristic in a modern and modest form’. According to his four principles of mathematical problem-solving, it is first necessary to understand the problem, then plan the execution, carry out the plan, and finally, reflect and search for improvement opportunities. Among the more detailed suggestions for problem-solving explained by Polya are to ask questions such as ‘can you find the solution to a similar problem?’, to use inductive reasoning and analogy, or to choose a suitable notation. Procedures inspired by Polya’s ( 1945 ) book and several later ones (e.g., Induction and Analogy in Mathematics of 1954 ) also informed the field of artificial intelligence (AI) (Hertwig and Pachur, 2015 ).

Johannes Müller (1968)

In 1968, the German scientist Johannes Müller introduced the concept of systematic heuristics while working on his postdoctoral thesis at the Chemnitz University of Technology. Systematic heuristics is a framework for improving the efficiency of intellectual work using problem-solving processes in the fields of science and technology.

The main idea of systematic heuristics is to solve repeated problems with previously validated solutions. These methods are called programmes and are gathered in a library that can be accessed by the main programme, which receives the requirements, prepares the execution plan, determines the required procedures, executes the plan, and finally evaluates the results. Müller’s team was dismissed for ideological reasons, and his programme was terminated after a few years, but his findings went on to be successfully applied in many projects across different industries (Banse and Friedrich, 2000 ).

Imre Lakatos ( 1970 )

In his ‘Methodology of Scientific Research Programmes’ that turned out to be a major contribution to the Popper–Kuhn controversy about the rationality of non-falsifiable paradigms in the natural sciences, Lakatos introduced the interesting distinction between a ‘negative heuristic’ that is given by the ‘hard core’ of a research programme and the ‘positive heuristic’ of the ‘protective belt’. While the latter suggests ways to develop the research programme further and to predict new facts, the ‘hard core’ of the research programme is treated as irrefutable ‘by the methodological decision of its protagonists: anomalies must lead to changes only in the ‘protective’ belt’ of auxiliary hypotheses. The Lakatosian notion of a negative heuristic seems to have received little attention outside of the Philosophy of Science community but may be important elsewhere: when there are too many ways to solve a complicated problem, excluding some of them from consideration may be helpful.

Gerhard Kleining ( 1982 )

The German sociologist Gerhard Kleining suggested a qualitative heuristic as the appropriate research method for qualitative social science. It is based on four principles: (1) open-mindedness of the scientist who should be ready to revise his preconceptions about the topic of study, (2) openness of the topic of study, which is initially defined only provisionally and allowed to be modified in course of the research, (3) maximal variation of the research perspective, and (4) identification of similarities within the data (Kleining, 1982 , 1995 ).

Automatic heuristics: learnt or innate?

Unlike the deliberate, and in some cases quite elaborate, heuristics reviewed above, at least some System 1 heuristics are often applied automatically, without any kind of deliberation or conscious reflection on the task that needs to be performed or the question that needs to be answered. One may view them as mere patterns of behaviour, and as such their scientific examination has been a long cumulative process through different disciplines, even though explicit reference to heuristics was not often made.

Traditionally, examining the behaviour patterns of living creatures, like any study of thoughts, feelings, or cognitive abilities, was regarded as the task of biologists. However, the birth of psychology as a separate discipline paved the way for an alternative outlook. Evolutionary psychology views human behaviour as shaped through time and experience to promote survival throughout the long history of the human struggle with nature. Accordingly, scholars have been interested in the evolution of the human brain, patterns of behaviour, and problem-solving (Buss and Kenrick, 1998).

Charles Darwin (1873)

Charles Darwin himself arguably qualifies for the title of first evolutionary psychologist, as his insights laid the foundations for a field that would continue to grow more than a century later (Ghiselin, 1973).

In 1873, Darwin claimed that the brain’s expressive and emotional capacities had probably developed in the same way as its physical traits (Baumeister and Vohs, 2007). He recognized that personal displays or expressions have a high capacity for communication between members of the same species. For example, an aggressive look signals an eagerness to fight yet leaves the recipient the option of retreating without either party being harmed. Additionally, Darwin, like his predecessor Lamarck, consistently emphasized the role of environmental factors in ‘the struggle for existence’, which could shape an organism’s traits in response to changes in its environment (Sen, 2020). The famous example of giraffes growing long necks in response to trees growing taller illustrates such a major environmental effect. Similarly, cognitive skills, including heuristics, must also have been shaped by the environment for humans to survive and reproduce.

Darwin’s ideas impacted the early advancement of brain science, psychology, and all related disciplines, including the topic of cognitive heuristics (Smulders, 2009 ).

William James (1890)

In 1890, the father of American psychology, William James, introduced the notion of evolutionary psychology in his 1200-page text The Principles of Psychology, which later became a standard reference on the subject and helped establish psychology as a science. At its core, James reasoned that many human actions demonstrate the activity of instincts: evolutionarily embedded inclinations to react to specific incentives in adaptive ways. With this idea, James added an important building block to the foundation of heuristics as a scientific topic.

A simple example of such hard-wired behaviour patterns would be a sneeze, the preprogrammed reaction of convulsive nasal expulsion of air from the lungs through the nose and mouth to remove irritants (Baumeister and Vohs, 2007 ).

Ivan Pavlov (1897)

Triggered by scientific curiosity or the instinct for research, as he called it, the first Russian Nobel laureate, Ivan Pavlov, introduced classical conditioning, which occurs when a stimulus is used that has a predictive relationship with a reinforcer, resulting in a change in response to the stimulus (Schreurs, 1989 ). This learning process was demonstrated through experiments conducted with dogs. In the experiments, a bell (a neutral stimulus) was paired with food (a potent stimulus), resulting ultimately in the dogs salivating at the ringing of the bell—a conditioned response. Pavlov’s experiments remain paradigmatic cases of the emergence of behaviour patterns through association learning.

William McDougall (1909)

At the start of the 20th century, the Anglo-American psychologist William McDougall was one of the first to write about the instinct theory of motivation. McDougall argued that instincts trigger many critical social practices. He viewed instincts as extremely sophisticated faculties in which specific provocations such as social impediments can drive a person’s state of mind in a particular direction, for example, towards a state of hatred, envy, or anger, which in turn may increase the probability of specific practices such as hostility or violence (McDougall, 2015 ).

However, in the early 1920s, McDougall’s perspective of human behaviour as driven by instincts faded remarkably, as scientists supporting the concept of behaviourism began to attract more attention with original ideas (Buss and Kenrick, 1998).

John B. Watson (1913)

The pioneer of the psychological school of behaviourism, John B. Watson, who conducted the controversial ‘Little Albert’ experiment by imposing a phobia on a child to demonstrate classical conditioning in humans (Harris, 1979), argued against the ideas of McDougall, even in public debates (Stephenson, 2003). Unlike McDougall, Watson considered the brain an empty page (the tabula rasa described by Aristotle). According to him, all personality traits and behaviours result directly from the experience accumulated from birth onwards. The story of the human mind is thus a continuous writing process shaped by surrounding events and factors. This view was supported in the following years of the 20th century by anthropologists who revealed very different social standards in different societies, and numerous social researchers argued that the wide variety of cross-cultural differences should lead to the conclusion that there is no mental content built in from birth, and that all knowledge therefore comes from individual experience or perception (Farr, 1996). In stark contrast to McDougall, Watson suggested that human intuitions and behaviour patterns are the product of a learning process that starts blank.

B. F. Skinner (1938)

Inspired by the work of Pavlov, the American psychologist B.F. Skinner took the classical conditioning approach to a more advanced level by modifying a key aspect of the process. According to Skinner, human behaviour depends on the outcome of past activities. If the outcome is bad, the action will probably not be repeated; however, if the outcome is good, the likelihood of the activity being repeated is relatively high. Skinner called this process reinforcement learning (Schacter et al., 2011). Based on reinforcement learning, Skinner also introduced the concept of operant conditioning, a type of associative learning process through which the strength of a behaviour is adjusted by reinforcement or punishment. Considering, for example, a parent’s response to a child’s behaviour, the probability of the child repeating an action will depend strongly on the parent’s reaction (Zilio, 2013). Effectively, Skinner argued that the intuitive System 1 may get edited and that a heuristic cue may become more or less ‘hard-wired’ in the subject’s brain as a stimulus leading to an automatic response.
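The reinforcement mechanism described here can be sketched in a few lines of code. The update rule, the action names, and all numbers below are illustrative assumptions, not a model taken from the source:

```python
# A minimal sketch of Skinner-style reinforcement: the propensity of an
# action rises after a rewarded outcome and falls after a punished one,
# so rewarded behaviour patterns become progressively more entrenched.
propensity = {"press_lever": 1.0, "ignore_lever": 1.0}

def reinforce(action, reward, rate=0.5):
    # Raise or lower the action's propensity, keeping it above a small floor.
    propensity[action] = max(0.1, propensity[action] + rate * reward)

# Ten trials: pressing the lever yields food (+1), ignoring it a mild cost.
for _ in range(10):
    reinforce("press_lever", reward=+1.0)
    reinforce("ignore_lever", reward=-0.1)

print(propensity["press_lever"] > propensity["ignore_lever"])  # True
```

After repeated reinforcement, the rewarded action dominates, mirroring how, in Skinner's account, a cue–response link becomes increasingly automatic.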

The DNA and its environment (1953 onwards)

Today, there seems to be wide agreement that behaviour patterns in humans and other species are to some extent ‘in the DNA’, the structure of which was discovered by Francis Crick and James Watson in 1953, but that they also to some extent depend on ‘the environment’—including the social environment in which the agent lives and has problems to solve. Today, it seems safe to say, therefore, that the methods of problem-solving that humans apply are neither completely innate nor completely the result of environmental stimuli—but rather the product of the complex interaction between genes and the environment (Lerner, 1978 ).

Herbert Simon: rationality is bounded

Herbert Simon is well known for his contributions to several fields, including economics, psychology, computer science, and management. Simon proposed a remarkable theory that led him to be awarded the Nobel Prize for Economics in 1978.

Bounded rationality and satisficing

In the mid-1950s, Simon published A Behavioural Model of Rational Choice, which focused on bounded rationality: the idea that people must make decisions with limited time, mental resources, and information (Simon, 1955). He clearly stated the triangle of limitations in every decision-making process: the availability of information, time, and cognitive ability (Bazerman and Moore, 1994). The ideas of Simon are considered an inspiring foundation for many technologies in use today.

Instead of conforming to the idea that economic behaviour can be seen as rational and dependent on all accessible data (i.e., as optimization), Simon suggested that the dynamics of decision-making were essentially ‘satisficing’, a notion synthesized from ‘satisfy’ and ‘suffice’ (Byron, 1998). During the 1940s, scholars had noticed the frequent failure of two assumptions required for ‘rational’ decision-making. The first is that the available data is rarely complete or perfect, yet people reliably make decisions on the basis of incomplete data. The second is that people do not assess every feasible option before settling on a decision. This conduct is strongly correlated with the cost of data collection, since data becomes progressively harder and costlier to accumulate. Rather than trying to find the ideal option, people choose the first acceptable or satisfactory option they find. Simon described this procedure as satisficing and concluded that the human brain in the decision-making process would, at best, exhibit restricted abilities (Barros, 2010).

Since people can neither obtain nor process all the data needed to make a completely rational decision, they use the limited data they possess to determine an outcome that is ‘good enough’—a procedure later refined into the take-the-best heuristic. Simon’s view that people are bounded by their cognitive limits is usually known as the theory of bounded rationality (cf. Gigerenzer and Selten, 2001 ).
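Satisficing is easy to express in code: scan the options in the order encountered and stop at the first one that meets an aspiration level. The apartment-rent example and the fallback rule below are illustrative assumptions, not part of Simon's formal model:

```python
def satisfice(options, utility, aspiration):
    """Return the first option whose utility meets the aspiration level.

    Unlike optimization, which scores every option and picks the maximum,
    satisficing stops searching as soon as a 'good enough' option appears.
    """
    for option in options:
        if utility(option) >= aspiration:
            return option
    # If nothing meets the aspiration level, fall back to the best seen.
    return max(options, key=utility)

# Hypothetical example: choosing an apartment by monthly rent (lower is
# better, so utility is the negated rent; the aspiration is 'at most 1000').
rents = [1400, 1250, 980, 1100, 870]
choice = satisfice(rents, utility=lambda r: -r, aspiration=-1000)
print(choice)  # 980: the first rent at or below the aspiration level
```

Note that the search stops at 980 even though 870 is cheaper: satisficing trades optimality for a shorter search.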

Herbert Simon and AI

With the cooperation of Allen Newell of the RAND Corporation, Simon attempted to create a computer simulator for human decision-making. In 1956, they created a ‘thinking’ machine called the ‘Logic Theorist’. This early smart device was a computer programme able to prove theorems in symbolic logic. It was perhaps the first man-made programme to simulate some human reasoning abilities in order to solve actual problems (Gugerty, 2006). A few years later, Simon, Newell, and J.C. Shaw proposed the General Problem Solver, or GPS, one of the earliest AI programmes. Their aim was to create a single programme that could solve all problems with the same unified algorithm. However, while the GPS was efficient with sufficiently well-structured problems like the Towers of Hanoi (a puzzle with three rods and different-sized disks to be moved), it could not solve real-life scenarios with all their complexities (A. Newell et al., 1959).

By 1965, Simon was confident that ‘machines will be capable of doing any work a man can do’ (Vardi, 2012 ). Therefore, Simon dedicated most of the remainder of his career to the advancement of machine intelligence. The results of his experiments showed that, like humans, certain computer programmes make decisions using trial-and-error and shortcut methods (Frantz, 2003 ). Quite explicitly, Simon and Newell ( 1958 , p. 7) referred to heuristics being used by both humans and intelligent machines: ‘Digital computers can perform certain heuristic problem-solving tasks for which no algorithms are available… In doing so, they use processes that are closely parallel to human problem-solving processes’.

Additionally, the importance of the environment was also clearly observed in Newell and Simon’s ( 1972 ) work:

‘Just as scissors cannot cut paper without two blades, a theory of thinking and problem-solving cannot predict behaviour unless it encompasses both an analysis of the structure of task environments and an analysis of the limits of rational adaptation to task requirements’ (p. 55).

Accordingly, the term ‘task environment’ describes the formal structure of the universe of choices and results for a specific problem. At the same time, Newell and Simon do not treat the agent and the environment as two isolated entities, but rather as highly related. Consequently, they tend to believe that agents with different cognitive abilities and choice repertoires will inhabit different task environments even though their physical surroundings and intentions might be the same (Agre and Horswill, 1997 ).

Heuristics in computer science

Computer science as a discipline may have the biggest share of deliberately applied heuristics. As heuristic problem-solving has often been contrasted with algorithmic problem-solving—even by Simon and Newell ( 1958 )—it is worth recalling that the very notion of ‘algorithm’ was clarified only in the first half of the 20th century, when Alan Turing ( 1937 ) defined what was later named ‘Turing-machine’. Basically, he defined ‘mechanical’ computation as a computation that can be done by a—stylized—machine. ‘Mechanical’ being what is also known today as algorithmic, one can say that any procedure that can be performed by a digital computer is algorithmic. Nevertheless, many of them are also heuristics because an algorithm may fail to produce an optimal solution to the problem it is meant to solve. This may be so either because the problem is ill-defined or because the computations required to produce the optimal solution may not be feasible with the available resources. If the problem is ill-defined—as it often is, e.g., in natural language processing—the algorithm that does the processing has to rely on a well-defined model that does not capture the vagueness and ambiguities of the real-life problem—a problem typically stated in natural language. If the problem is well-defined, but finding the optimal solution is not feasible, algorithms that would find it may exist ‘in principle’, but require too much time or memory to be practically implemented.

In fact, there is today a rich theory of complexity classes that distinguishes between types of (well-defined) problems according to how fast the time or memory space required to find the optimal solution increases with increasing problem size. For example, for problem types of the complexity class P, any deterministic algorithm that produces the optimal solution has a running time bounded by a polynomial function of the input size, whereas for problems of complexity class EXPTIME, the running time is bounded by an exponential function of the input size. In the jargon of computer science, problems of the latter class are considered intractable, although the input size has to become sufficiently large before the computation of the optimal solution becomes practically infeasible (cf. Harel, 2000; Hopcroft et al., 2007). Research indicates that the computational complexity of problems can also reduce the quality of human decision-making (Bossaerts and Murawski, 2017).
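The contrast between a polynomial and an exponential time bound can be made concrete with a toy step-count comparison (the bounds n³ and 2ⁿ are chosen purely for illustration):

```python
# How step counts grow for a polynomial bound (n**3) versus an exponential
# bound (2**n). For small n the exponential bound may even be smaller, but
# it soon dwarfs any polynomial, which is why EXPTIME-class problems are
# called intractable.
for n in (5, 10, 20, 40):
    print(n, n**3, 2**n)

# At n = 5 the exponential count (32) is below the cubic one (125);
# at n = 40 it is 2**40, roughly 1.1 trillion, versus 64,000.
```

The crossover, not the small-n behaviour, is what matters: past it, no hardware improvement keeps up with the exponential growth.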

Shortest path algorithms

A classic optimization problem that may serve to illustrate the issues of optimal solutions, complexity, and heuristics goes by the name of the travelling salesman problem (TSP), first introduced in 1930. Given several cities and the distance between each pair of them, the goal is to find the shortest possible path that visits every city and returns to the starting point. For a small input size, i.e., a small number of cities, the ‘brute-force’ algorithm is easy to use: write down all possible paths through all the cities, calculate their lengths, and choose the shortest. However, the number of steps required by this procedure increases rapidly with the number of cities. The TSP is today known to belong to the complexity class NP, which lies between P and EXPTIME (see Footnote 1). To solve the TSP, Jon Bentley (1982) proposed the greedy (or nearest-neighbour) algorithm, which yields an acceptable result, though not necessarily the optimal one, within a relatively short time. This approach always picks the nearest unvisited city as the next one to visit, without regard to possible later non-optimal steps. Hence, it is considered a good-enough solution with fast results. Bentley acknowledged that better solutions may exist, but argued that the greedy result approximates the optimal one. Many other heuristic algorithms have been explored since. There is no assurance that the solution found by a heuristic algorithm will be the ideal answer for the given problem, but it is acceptable and adequate (Pearl, 1984).
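A minimal sketch of the nearest-neighbour heuristic, with hypothetical city coordinates, might look as follows (this is a generic illustration, not Bentley's original code):

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP heuristic: from the current city, always visit the nearest
    unvisited city, then return to the start. Fast, but not guaranteed optimal."""
    unvisited = set(range(1, len(cities)))
    tour = [0]                         # start at city 0
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(cities, tour):
    # Total length of the closed tour, including the leg back to the start.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Hypothetical coordinates for five cities.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 1)]
tour = nearest_neighbour_tour(cities)
print(tour, round(tour_length(cities, tour), 2))  # [0, 4, 2, 3, 1] 19.72
```

The greedy tour is produced in a handful of distance comparisons, whereas a brute-force check of all (n−1)!/2 tours grows factorially with the number of cities.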

Heuristic shortest-path algorithms are utilized nowadays by GPS frameworks and self-driving vehicles to choose the best route from any point of departure to any destination (for example, the A* search algorithm). More advanced algorithms can also consider additional elements, including traffic, speed limits, and road quality, so they may yield routes that are both shortest in distance and fastest in driving time.

Computer chess

While the TSP consists of a whole set of problems which differ by the number of cities and the distances between them, determining the optimal strategy for chess is just one problem of a given size. The rules of chess make it a finite game, and Ernst Zermelo proved in 1913 that it is ‘determined’: if it were played between perfectly rational players, it would always end with the same outcome: either White always wins, or Black always wins, or it always ends with a draw (Zermelo, 1913 ). Up to the present day, it is not known which of the three is true, which points to the fact that a brute-force algorithm that would go through all possible plays of chess is practically infeasible: it would have to explore too many potential moves, and the required memory would quickly run out of space (Schaeffer et al., 2007 ). Inevitably, a chess-playing machine has to use algorithms that are ‘shortcuts’—which can be more or less intelligent.

While Simon and Newell had predicted in 1958 that within ten years the world chess champion would be a computer, it took until 1997, when a chess-playing machine developed by IBM under the name Deep Blue defeated grandmaster Garry Kasparov. Although able to analyse millions of possibilities thanks to their computing power, today’s chess-playing machines apply a heuristic approach to eliminate unlikely moves and focus on those with a high probability of defeating the opponent (Newborn, 1997).
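The kind of shortcut such machines rely on can be illustrated by depth-limited minimax: rather than searching the full game tree, the search stops at a fixed depth and scores positions with a heuristic evaluation function. The toy game and evaluation below are a generic sketch, not Deep Blue's actual algorithm:

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Depth-limited minimax: when the depth budget runs out (or no moves
    remain), fall back on a heuristic evaluation instead of searching on."""
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        return max(minimax(c, depth - 1, False, moves, evaluate) for c in children)
    return min(minimax(c, depth - 1, True, moves, evaluate) for c in children)

# Toy game: a state is a number, each move adds 1 or 2, play stops at 6 or
# above; the 'heuristic' score of a position is simply the number itself.
moves = lambda s: [s + 1, s + 2] if s < 6 else []
score = minimax(0, 4, True, moves, evaluate=lambda s: s)
print(score)  # 6
```

For chess the same skeleton applies, but the evaluation function encodes expert knowledge (material, king safety, mobility), and further heuristics such as alpha-beta pruning discard branches that provably cannot affect the result.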

Machine learning

One of the main features of machine learning is the ability of a model to predict a future outcome based on past data points. Machine learning algorithms build a knowledge base, analogous to accumulated human experience, from the previous examples in the dataset provided. From this knowledge base, the model can derive educated guesses.

A good demonstration of this is the card game Top Trumps in which the model can learn to play and keep improving to dominate the game. It does so by undertaking a learning path through a sequence of steps in which it picks two random cards from the deck and then analyses and compares them with random criteria. According to the winning result, the model iteratively updates its knowledge base in the same manner as a human, following the rule that ‘practice makes perfect.’ Hence the model will play, collect statistics, update, and iterate while becoming more accurate with each increment (Volz et al., 2016 ).

Natural language processing

In the world of language understanding, current technologies are far from perfect, but models are becoming steadily more reliable. When a search phrase is entered into the Google search engine, a background model analyses and dissects it to make sense of the search criteria. Word stemming, context analysis, the affiliation of phrases, previous searches, and autocorrect/autocomplete can be applied in a heuristic algorithm to display the most relevant results in less than a second. Heuristic methods can be utilized when creating algorithms to understand what the user is trying to express when searching for a phrase. For example, using word affiliation, an algorithm tries to narrow down the meaning of words as much as possible towards the user’s intention, particularly when a word has more than one meaning that changes with context. A search for apple pie, for instance, allows the algorithm to deduce that the user is interested in recipes and not in the technology company (Sullivan, 2002).

Search and big data

Search is a good example to appreciate the value of time, as one of the most important criteria is retrieving acceptable results in an acceptable timeframe. In a full search algorithm, especially in large datasets, retrieving optimal results can take a massive amount of time, making it necessary to apply heuristic search.

Heuristic search is a type of search algorithm that is used to find solutions to problems in a faster way than an exhaustive search. It uses specific criteria to guide the search process and focuses on more favourable areas of the search space. This can greatly reduce the number of nodes required to find a solution, especially for large or complex search trees.

Heuristic search algorithms work by evaluating the possible paths or states in a search tree and selecting the better ones to explore further. They use a heuristic function, a measure of how close a given state is to the goal state, to guide the search. This allows the algorithm to prioritize certain paths or states over others and avoid exploring areas of the search space that are unlikely to lead to a solution. The solution reached is not necessarily the best; however, a ‘good enough’ one is found within a ‘fast enough’ time. This technique is an example of a trade-off between optimality and speed (Russell et al., 2010).
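As a minimal sketch of this idea, the following best-first search on a small grid uses the Manhattan distance to the goal as its heuristic function; the grid, coordinates, and cost model are illustrative assumptions:

```python
import heapq

def astar(grid, start, goal):
    """A*-style heuristic search on a grid of 0 (free) and 1 (wall).
    Cells are expanded in order of cost-so-far plus the Manhattan-distance
    estimate, so the search is steered toward the goal rather than
    exhaustively sweeping the whole grid."""
    def h(p):  # heuristic: admissible estimate of the remaining distance
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start)]   # (priority, cost so far, cell)
    best_cost = {start: 0}
    while frontier:
        _, cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                new_cost = cost + 1
                if new_cost < best_cost.get((r, c), float("inf")):
                    best_cost[(r, c)] = new_cost
                    heapq.heappush(frontier, (new_cost + h((r, c)), new_cost, (r, c)))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # 6: the detour around the wall
```

Because the Manhattan distance never overestimates the true remaining cost on this grid, the path found here is in fact optimal; with a cruder heuristic the same skeleton would trade that guarantee for speed.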

Today, there is a rich literature on heuristic methods in computer science (Martí et al., 2018). As the problem to be solved may be the choice of a suitable heuristic algorithm, meta-heuristics have also been explored (Glover and Kochenberger, 2003), and even hyper-heuristics, which may serve to find or generate a suitable meta-heuristic (Burke et al., 2003). As Sörensen et al. (2018) point out, the term ‘metaheuristic’ may refer either to an ‘algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms’ or to a specific algorithm based on such a framework. For example, a metaheuristic to find a suitable search algorithm may be inspired by the framework of biological evolution and use its ideas of mutation, reproduction, and selection to produce a particular search algorithm. While this algorithm will still be a heuristic one, the fact that it has been generated by an evolutionary process indicates its superiority over alternatives that were eliminated in the course of that process (cf. Vikhar, 2016).

Daniel Kahneman and Amos Tversky: heuristics and biases

Inspired by the concepts of Herbert Simon, psychologists Daniel Kahneman and Amos Tversky initiated the heuristics and biases research programme in the early 1970s, which emphasized how individuals make judgements and the conditions under which those judgements may be inaccurate (Kahneman and Klein, 2009 ).

In addition, Kahneman and Tversky emphasized information processing to elaborate on how real people with limitations can decide, choose, or estimate (Kahneman, 2011 ).

The remarkable article Judgement under Uncertainty: Heuristics and Biases, published in 1974, is considered the key that opened the door wide to research on this topic, although it was and still is considered controversial (Kahneman, 2011). In their research, Kahneman and Tversky identified three types of heuristics by which probabilities are often assessed: availability, representativeness, and anchoring and adjustment. In passing, they mention that other heuristics are used to form non-probabilistic judgements; for example, the distance of an object may be assessed according to the clarity with which it is seen. Other researchers subsequently introduced further types of heuristics. However, availability, representativeness, and anchoring are still considered the fundamental heuristics for judgements under uncertainty.

Availability

According to the psychological definition, availability or accessibility is the ease with which a specific thought comes to mind or can be inferred. Many people use this type of heuristic when judging the probability of an event that may have happened or will happen in the future. Hence, people tend to overestimate the likelihood of a rare event if it easily comes to mind because it is frequently mentioned in daily discussions (Kahneman, 2011 ). For instance, individuals overestimate their probability of being victims of a terrorist attack while the real probability is negligible. However, since terrorist attacks are highly available in the media, the feeling of a personal threat from such an attack will also be highly available during our daily life (Kahneman, 2011 ).

This concept is also present in business: we remember the successful start-ups whose founders quit college for their dreams, such as Steve Jobs and Mark Zuckerberg, and ignore the thousands of ideas, start-ups, and founders that failed. This is because successful companies are considered a hot topic and receive broad media coverage, while failures do not. Similarly, broad media coverage is known to create top-of-mind awareness (TOMA) (Farris et al., 2010). Moreover, the availability heuristic has been offered as an explanation for illusory correlations, in which individuals wrongly judge two events to be related when they are not. Tversky and Kahneman explained that individuals judge relationships based on the ease of envisaging the two events together (Tversky and Kahneman, 1973).

Representativeness

The representativeness heuristic is applied when individuals assess the probability that an object belongs to a particular class or category based on how much it resembles the typical case or prototype representing this category (Tversky and Kahneman, 1974 ). Conceptually, this heuristic can be decomposed into three parts. The first one is that the ideal case or prototype of the category is considered representative of the group. The second part judges the similarity between the object and the representative prototype. The third part is that a high degree of similarity indicates a high probability that the object belongs to the category, and a low degree of similarity indicates a low probability.

While the heuristic is often applied automatically within an instant and may be compelling in many cases, Tversky and Kahneman point out that the third part of the heuristic will often lead to serious errors or, at any rate, biases.

In particular, the representativeness heuristic can give rise to what is known as the base rate fallacy. As an example, Tversky and Kahneman consider an individual named Steve, who is described as shy, withdrawn, and somewhat pedantic. People asked to assess, based on this description, whether Steve is more likely to be a librarian or a farmer invariably consider it more likely that he is a librarian, ignoring the fact that there are many more farmers than librarians, a fact that any estimate of these probabilities must take into account.

Another example concerns a taxicab involved in an accident. The data indicates that 85% of the city’s taxicabs are green and 15% blue. An eyewitness claims that the cab involved was blue. The court then tests the witness’s reliability and finds that he identifies colours correctly 80% of the time and incorrectly 20% of the time. What, then, is the probability that the cab involved was blue, given that the witness identified it as blue?

To evaluate this case correctly, people should consider both the base rate, 15% of the cabs being blue, and the witness’s accuracy rate, 80%. Of course, if the cabs were split equally between the two colours, the witness’s reliability of 80% would be the only deciding factor.

However, regardless of the colour distribution, most participants answer 80% to this enquiry. Even participants who wanted to take the base rate into account estimated a probability of more than 50%, whereas Bayesian inference gives the correct answer of 41% (Kahneman, 2011).
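The Bayesian calculation behind the 41% figure can be reproduced in a few lines:

```python
# Taxicab problem: 15% of cabs are blue, and the witness identifies colours
# correctly 80% of the time. Bayes' theorem gives P(blue | witness says blue).
p_blue = 0.15
p_green = 0.85
p_say_blue_if_blue = 0.80    # witness correct about a blue cab
p_say_blue_if_green = 0.20   # witness mistakes a green cab for blue

# Total probability that the witness says 'blue' at all.
p_say_blue = p_say_blue_if_blue * p_blue + p_say_blue_if_green * p_green

# Posterior probability that the cab really was blue.
p_blue_given_say_blue = p_say_blue_if_blue * p_blue / p_say_blue
print(round(p_blue_given_say_blue, 2))  # 0.41, far below the intuitive 0.80
```

The low base rate drags the posterior well below the witness's 80% accuracy, which is exactly the term the representativeness heuristic ignores.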

In relation to the representativeness heuristic, Kahneman (2011) illustrated the ‘conjunction fallacy’ with the following example: based only on a detailed description of a character named Linda, doctoral students in the decision science programme of the Stanford Graduate School of Business, all of whom had taken several advanced courses in probability, statistics, and decision theory, were asked to rank various other descriptions of Linda according to their probability. Even Kahneman and Tversky were surprised to find that 85% of the students ranked ‘Linda is a bank teller and active in the feminist movement’ as more likely than ‘Linda is a bank teller’.
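The rule the students' ranking violates is the conjunction rule: for any events A and B, P(A and B) ≤ P(A). A tiny enumeration over a hypothetical population, with purely illustrative counts, makes this concrete:

```python
# Hypothetical population illustrating the conjunction rule. Every feminist
# bank teller is, by definition, also a bank teller, so the conjunction
# can never be the more probable description.
population = 1000
bank_tellers = 50              # illustrative count of bank tellers
feminist_bank_tellers = 10     # necessarily a subset of the bank tellers

p_teller = bank_tellers / population
p_teller_and_feminist = feminist_bank_tellers / population

assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)  # 0.05 0.01
```

Whatever counts one plugs in, the subset relation forces the inequality; the representativeness of the conjoint description is what makes it feel more likely.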

From these and many other examples, one must conclude that even sophisticated humans use the representativeness heuristic to make probability judgements without referring to what they know about probability.

Representativeness is used not only for probability judgements but also for judgements about causality. The similarity of A and B indicates neither that A causes B nor that B causes A. Nevertheless, if A precedes B and is similar to B, it is often judged to be B’s cause.

Adjustment and anchoring

Based on Tversky and Kahneman’s interpretation, the anchor is the first number introduced in a question; it forms the centre of a range (up or down) within which the best answer is assumed to lie (Baron, 2000). This has been used and tested in several academic and real-world scenarios, including business negotiations in which parties anchor their prices to formulate the range of acceptance through which they can close the deal, deriving the ceiling and floor from the anchor. The impact is more dominant when parties lack the time to analyse actions thoroughly.

Significantly, even if the anchor is way beyond logical boundaries, it can still bias the estimated numbers by all parties without them even realizing that it does (Englich et al., 2006 ).

In one of their experiments, Tversky and Kahneman (1974) asked some participants to quickly estimate the product of the numbers from 1 to 8 and others to do so from 8 down to 1. Since the time allowed was only 5 s, they had to guess. The group that started from 1 gave a median estimate of 512, while the group that started from 8 gave a median estimate of 2250. The right answer is 40,320.
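The arithmetic itself is easy to verify: both sequences denote the same product, 8!, so only the anchoring of the first numbers seen can explain the divergent estimates.

```python
from math import factorial

# The two presentations are mathematically identical, yet the first numbers
# seen anchor the estimate: groups starting from 8 guessed far higher than
# groups starting from 1, while the true value is 40,320.
ascending = 1 * 2 * 3 * 4 * 5 * 6 * 7 * 8
descending = 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1

assert ascending == descending == factorial(8)
print(ascending)  # 40320
```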

This is perhaps the least clear-cut of the cognitive heuristics introduced by Kahneman and Tversky, as it can equally be regarded as a bias rather than a heuristic. The problem is that the mind tends to fixate on the anchor and adjust relative to it, whether it was introduced implicitly or explicitly. Some scholars even believe that this bias/heuristic is unavoidable. For instance, in one study, participants were asked whether they believed that Mahatma Gandhi died before or after the age of 9, or before or after the age of 140. Unquestionably, these anchors were recognized as unrealistic by the audience. However, when the participants were later asked to estimate Gandhi’s age at death, the group anchored to the age of 9 estimated the average age to be 50, while the group anchored to the higher value estimated it to be as high as 67 (Strack and Mussweiler, 1997).

Gerd Gigerenzer: fast-and-frugal heuristics

The German psychologist Gerd Gigerenzer is one of the most influential figures in the field of decision-making, with a particular emphasis on the use of heuristics. He has built much of his research on the theories of Herbert Simon and considers that Simon’s theory of bounded rationality was unfinished (Gigerenzer, 2015 ). As for Kahneman and Tversky’s work, Gigerenzer has a different approach and challenges their ideas with various arguments, facts, and numbers.

Gigerenzer explores how people make sense of their reality with limited time and data. Since the world around us is highly uncertain, complex, and volatile, he suggests that probability theory cannot stand as the ultimate framework and is incapable of interpreting everything, particularly when probabilities are unknown. Instead, people tend to use the effortless approach of heuristics. Gigerenzer introduced the concept of the adaptive toolbox, a collection of mental shortcuts that a person or group can draw on to solve the problem at hand (Gigerenzer, 2000). A heuristic is considered ecologically rational if it is adapted to the structure of its environment (Gigerenzer, 2015).

A daring argument of Gigerenzer, which very much opposes the heuristics and biases approach of Kahneman and Tversky, is that heuristics cannot be considered irrational or inferior to a solution by optimization or probability calculation. He explicitly argues that heuristics are not gambling shortcuts that are faster but riskier (Gigerenzer, 2008 ), but points to several situations where less is more, meaning that results from frugal heuristics, which neglect some data, were nevertheless more accurate than results achieved by seemingly more elaborate multiple regression or Bayesian methods that try to incorporate all relevant data. While researchers consider this counterintuitive since a basic rule in research seems to be that more data is always better than less, Gigerenzer points out that the less-is-more effect (abbreviated as LIME) could be confirmed by computer simulations. Without denying that in some situations, the effect of using heuristics may be biased (Gigerenzer and Todd, 1999 ), Gigerenzer emphasizes that fast-and-frugal heuristics are basic, task-oriented choice systems that are a part of the decision-maker’s toolbox, the available collection of cognitive techniques for decision-making (Goldstein and Gigerenzer, 2002 ).

Heuristics are considered economical because they are easy to execute, seek limited data, and do not include many calculations. Contrary to most traditional decision-making models followed in the social and behavioural sciences, models of fast-and-frugal heuristics portray not just the result of the process but also the process itself. They comprise three simple building blocks: the search rule that specifies how information is searched for, the stopping rule that specifies when the information search will be stopped, and finally, the decision rule that specifies how the processed information is integrated into a decision (Goldstein and Gigerenzer, 2002 ).

Rather than characterizing heuristics as rules of thumb or mental shortcuts that can cause biases and must therefore be regarded as irrational, Gigerenzer and his co-workers emphasize that fast-and-frugal heuristics are often ecologically rational, even if the conjunction of them may not even be logically consistent (Gigerenzer and Todd, 1999 ).

According to Goldstein and Gigerenzer ( 2002 ), a decision maker’s pool of mental techniques may contain logic and probability theory, but it also embraces a set of simple heuristics. It is compared to a toolbox because just as a wood saw is perfect for cutting wood but useless for cutting glass or hammering a nail into a wall, the ingredients of the adaptive toolbox are intended to tackle specific scenarios.

For instance, there are specific heuristics for choice tasks, estimation tasks, and categorization tasks. In what follows, we discuss two well-known examples of fast-and-frugal heuristics: the recognition heuristic (RH), which exploits the absence of data, and the take-the-best heuristic (TTB), which purposely disregards part of the data.

Both heuristics apply to choice tasks in which a decision-maker must judge which of two options scores higher on some quantitative criterion.

Typical examples would be inferring which of two stocks will yield the better return next month, which of two cars is more suitable for a family, or who is the better candidate for a particular job (Goldstein and Gigerenzer, 2002).

The recognition heuristic

The recognition heuristic has been examined broadly in the famous experiment on judging which of two cities has the larger population. The experiment was published in 2002, and the participants were undergraduate students: one group in the USA and one in Germany. The question was: which has more inhabitants, San Diego or San Antonio? Given the cultural difference between the groups and their levels of knowledge about American cities, one might expect the American students to achieve a higher accuracy rate than their German peers. Indeed, most German students did not even know that San Antonio is an American city (Goldstein and Gigerenzer, 2002). Surprisingly, the experimenters, Goldstein and Gigerenzer, found the opposite: 100% of the German students gave the correct answer, while the American students achieved an accuracy rate of only about 66%. Remarkably, the German students who had never heard of San Antonio produced more correct answers. Their lack of knowledge enabled them to use the recognition heuristic, which states that if one of two objects is recognized and the other is not, then one should infer that the recognized object has the higher value on the relevant criterion. The American students could not use the recognition heuristic because they were familiar with both cities. Ironically, they knew too much.
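The inference rule at the heart of the recognition heuristic is simple enough to state in a few lines of code. A minimal sketch, in which the `recognized` predicate is a hypothetical stand-in for the agent's memory:

```python
def recognition_heuristic(a, b, recognized):
    """If exactly one of two objects is recognized, infer that it has the
    higher value on the criterion; otherwise the heuristic does not apply."""
    if recognized(a) and not recognized(b):
        return a
    if recognized(b) and not recognized(a):
        return b
    return None  # both or neither recognized: fall back to other cues

# A German student in the experiment recognizes San Diego but not San Antonio:
known = {"San Diego"}
choice = recognition_heuristic("San Diego", "San Antonio", lambda city: city in known)
print(choice)  # San Diego -- which is indeed the larger city
```

The American students correspond to the `None` branch: recognizing both cities, they could not apply the rule at all.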

The recognition heuristic is a powerful tool. In many cases, it supports swift decisions because recognition is usually systematic rather than arbitrary. Useful applications include city populations, players' performance in major leagues, or writers' productivity. However, the heuristic is less effective for criteria that correlate with recognition less strongly than a city's population does, such as the age of the city's mayor or its altitude above sea level (Gigerenzer and Todd, 1999).

Take-the-best heuristic

When the recognition heuristic is not applicable because the decision-maker has enough information about both options, another important heuristic can be used that relies on cues to arrive at a decision. The take-the-best (TTB) heuristic relies only on specific cues and does not require any complex calculation. In practice, it often boils down to a one-reason decision rule, a type of heuristic in which judgements are based on a single good reason only, ignoring other cues (Gigerenzer and Gaissmaier, 2011). Under the TTB heuristic, a decision-maker evaluates the case by selecting the attributes that matter to them and sorting these cues by importance to create a hierarchy for the decision. The alternatives are then compared on the first, i.e., the most important, cue; if that cue discriminates between them, the decision is taken. Otherwise, the decision-maker moves to the next level and checks the next cue. In other words, the decision is based on the most important attribute that allows one to discriminate between the alternatives (Gigerenzer and Goldstein, 1996). Although this lexicographic preference ordering is well known from traditional economic theory, it appears there mainly as a counterexample to the existence of a real-valued utility function (Debreu, 1959). Surprisingly, however, it seems to be used in many critical situations. For example, in many airports, customs officials may decide whether a traveller is selected for a further check by looking only at the most important attributes, such as the city of departure, nationality, or luggage weight (Pachur and Marinello, 2013). Moreover, in 2012, a study explored voters' views of how US presidential candidates would deal with the single issue that voters viewed as most significant, for example, the state of the economy or foreign policy. A model based on this attribute alone picked the winner in most cases (Graefe and Armstrong, 2012).

The TTB heuristic thus has a stopping rule that applies when the search reaches a discriminating cue: if the most important cue discriminates, there is no need to search further, and only that one cue is considered. Otherwise, the next most important cue is examined. If no discriminating cue is found, the heuristic must make a random guess (Gigerenzer and Gaissmaier, 2011).
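Taken together, the search, stopping, and decision rules of TTB amount to a short lexicographic loop. A minimal sketch, with cue values and their ordering invented for illustration rather than taken from the studies cited:

```python
import random

def take_the_best(a, b, cues):
    """cues: functions ordered by validity, each mapping an alternative
    to a comparable cue value.
    Search rule:   go through the cues in order of validity.
    Stopping rule: stop at the first cue that discriminates.
    Decision rule: choose the alternative favoured by that cue."""
    for cue in cues:
        va, vb = cue(a), cue(b)
        if va != vb:                 # the cue discriminates: stop and decide
            return a if va > vb else b
    return random.choice([a, b])     # no cue discriminates: guess

# Illustrative choice task: which of two cities is larger?
city_a = {"name": "A", "capital": 1, "airport": 1}
city_b = {"name": "B", "capital": 0, "airport": 1}
cues = [lambda c: c["capital"], lambda c: c["airport"]]  # ordered by validity
print(take_the_best(city_a, city_b, cues)["name"])  # A: decided on the first cue alone
```

Note that the second cue is never inspected here; the stopping rule makes the heuristic frugal by construction.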

Empirical evidence on fast-and-frugal heuristics

Many studies have been conducted on fast-and-frugal heuristics, using analytical methods and simulations to investigate when and why heuristics yield accurate results, and using experiments and observational methods to find out whether and when people actually use fast-and-frugal heuristics (Luan et al., 2019). Structured examinations and benchmarking against standard models, for example, regression or Bayesian models, have shown that the accuracy of fast-and-frugal heuristics depends on the structure of the information environment (e.g., the distribution of cue validities and the intercorrelation of cues). In numerous situations, fast-and-frugal heuristics perform well, particularly in generalization contexts, when making predictions for new cases that have not been previously experienced. Empirical examinations show that people use fast-and-frugal heuristics under time constraints, when data is hard to obtain, or when data must be retrieved from memory. Remarkably, some studies have inspected how individuals adjust to various situations by learning. Rieskamp and Otto (2006) found that individuals apparently learn to choose the heuristic that performs best in a specific domain. In addition, Reimer and Katsikopoulos (2004) found that individuals apply fast-and-frugal heuristics when making inferences in groups.

While interest in heuristics has been increasing, part of the literature has been decidedly critical. In particular, the heuristics and biases programme introduced by Kahneman and Tversky has been the target of more than one critique (Reisberg, 2013).

The arguments run mainly in two directions. The first is that the main focus is on coherence standards such as rationality, and that the detection of biases ignores the contextual and environmental factors in which the judgements occur (B.R. Newell, 2013). The second is that notions such as availability or representativeness are vague and ill-defined, and say little about the processes underlying the judgements (Gigerenzer, 1996). For example, it has been argued that the replies in the acclaimed Linda-the-bank-teller experiment could be considered sensible rather than biased if one applies conversational or colloquial standards instead of formal probability theory (Hilton, 1995).

The argument that certain phenomena receive only vague explanations can be illustrated by the following two scenarios. People tend to believe that the opposite outcome will follow a streak of identical outcomes (e.g., that 'heads' should come next in a coin-flipping game after many consecutive 'tails'). This is called the gambler's fallacy (Barron and Leider, 2010). By contrast, the hot-hand fallacy (Gilovich et al., 1985) holds that people tend to believe that a streak of the same outcome will continue on a lucky day (e.g., when a basketball player takes a shot after a series of successful attempts). Ayton and Fischer (2004) argued that, although these two beliefs are opposites, both have been classified under the heuristic of representativeness. In both cases, a flawed conception of random events leads observers to expect a short stream of results to be representative of the whole process. In the coin-flipping scenario, people believe that a long streak of tails should not occur, so heads is predicted; in the case of the athlete, the streak of identical outcomes is expected to continue (Gilovich et al., 1985). Therefore, representativeness cannot be diagnosed without considering the expected results in advance. Moreover, the heuristic does not explain why people feel that a stream of random events should be representative of the underlying process when, in reality, it need not be (Ayton and Fischer, 2004).
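The statistical point behind both fallacies, namely that independent trials carry no memory, is easy to check numerically. A small simulation (sample size and seed are arbitrary choices):

```python
import random

random.seed(0)
# Estimate P(heads | the previous three flips were tails) for a fair coin.
flips = [random.random() < 0.5 for _ in range(200_000)]   # True = heads
after_streak = [flips[i] for i in range(3, len(flips))
                if not any(flips[i - 3:i])]               # previous three were tails
rate = sum(after_streak) / len(after_streak)
print(round(rate, 2))  # prints a value close to 0.5: a streak of tails does not make heads 'due'
```

The gambler's fallacy predicts a rate above 0.5 and the hot-hand intuition (applied to coins) one below it; independence gives neither.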

Nevertheless, the most common critique of Kahneman and Tversky is the idea that 'we cannot be that dumb': the heuristics and biases programme is said to be overly pessimistic in its assessment of average human decision-making. Moreover, humans have collectively accumulated many achievements and discoveries throughout history that would not have been possible if their capacity for adequate decision-making were so limited (Gilovich and Griffin, 2002).

Similarly, the probabilistic mental models (PMM) theory of human inference, inspired by Simon and pioneered by Gigerenzer, has also been criticized (B.R. Newell et al., 2003). Indeed, the enticing character of heuristics, being both easy to apply and efficient, has made them famous across different domains. However, it has also made them vulnerable to replications or variations of the experiments that challenge the original results. For example, Daniel Oppenheimer (2003) argues that the recognition heuristic (RH) failed to yield satisfactory results when the city-population experiment was replicated. He claims that the participants' judgements failed to obey the RH not only when there were cues stronger than mere recognition, but also in circumstances where recognition would have been the best cue available. One could reply that there are numerous methods in the adaptive toolbox and that, under certain conditions, people may prefer heuristics other than the RH. However, this reply is also questionable, since many heuristics thought to exist in the adaptive toolbox take the RH as an initial step (Gigerenzer and Todd, 1999). Hence, if individuals are not using the RH, they cannot use many of the other heuristics in the adaptive toolbox (Oppenheimer, 2003). Likewise, Newell et al. (2003) question whether fast-and-frugal heuristics accurately describe actual human behaviour. In two experiments, they challenged the take-the-best (TTB) heuristic, a building block of the PMM framework. The outcomes of these experiments, together with others, such as those of Jones et al. (2000) and Bröder (2000), suggest that the TTB heuristic is not a reliable account even in circumstances favouring its use.
In a somewhat heated debate published in Psychological Review in 1996, Gigerenzer's criticism of Kahneman and Tversky, namely that many of the so-called biases 'disappear' if frequencies rather than probabilities are assumed, was countered by Kahneman and Tversky (1996) by means of a detailed re-examination of the conjunction fallacy (the Linda problem). Gigerenzer (1996) remained unconvinced and was, in turn, blamed by Kahneman and Tversky (1996, p. 591) for merely reiterating 'his objections … without answering our main arguments'.

Our historical review has revealed a number of issues that have received little attention in the literature.

Deliberate vs. automatic heuristics

We have differentiated between deliberate and automatic heuristics, which often seem to be confused in the literature. While it is a widely shared view today that the human brain often relies heavily on the fast and effortless ‘System 1’ in decision-making, but can also use the more demanding tools of ‘System 2’, and it has been acknowledged, e.g. by Kahneman ( 2011 , p. 98), that some heuristics belong to System 1 and others to System 2, the two systems are not as clearly distinct as it may seem. In fact, the very wide range of what one may call ‘heuristics’ shows that there is a whole spectrum of fallible decision-making procedures—ranging from the probably innate problem-solving strategy of the baby that cries whenever it is hungry or has some other problem, to the most elaborate and sophisticated procedures of, e.g., Polya, Bolzano, or contemporary chess-engines. One may be tempted to characterize instinctive procedures as subconscious and sophisticated ones as conscious, but a deliberate heuristic can very well become a subconsciously applied ‘habit of the mind’ or learnt routine with experience and repetition. Vice versa, automatic, subconscious heuristics can well be raised to consciousness and be applied deliberately. E.g., the ‘inductive inference’ from tasty strawberries to the assumption that all red berries are sweet and edible may be quite automatic and subconscious in little children, but the philosophical literature on induction shows that it can be elaborated into something quite conscious. However, while the notion of consciousness may be crucial for an adequate understanding of heuristics in human cognition, for the time being, it seems to remain a philosophical mystery (Harley, 2021 ; Searle, 1997 ), and once programmed, sophisticated heuristic algorithms can be executed by automata.

The deliberate heuristics that we reviewed also illustrate that some of them can hardly be called ‘simple’, ‘shortcuts’, or ‘rules of thumb’. E.g., the heuristics of Descartes, Bolzano, or Polya each consist of a structured set of suggestions, and, e.g., ‘to devise a plan’ for a mathematical proof is certainly not a shortcut. Llull ( 1308 , p. 329), to take another example, wrote of his ‘ars magna’ that ‘the best kind of intellect can learn it in two months: one month for theory and another month for practice’.

Heuristics vs. algorithms

Our review of heuristics also allowed us to clarify the distinction between heuristics and algorithms. As evidenced by our glimpse at computer science, there are procedures that are quite obviously both an algorithm and a heuristic. Within computer science, they are in fact quite common. Algorithms of the heuristic type may be required for certain problems even though an algorithm that finds the optimal solution exists ‘in principle’—as in the case of determining the optimal strategy in chess, where the brute-force-method to enumerate all possible plays of chess is just not practically feasible. In other cases, heuristic algorithms are used because an exhaustive search, while practically feasible, would be too costly or time-consuming. Clearly, for many problems, there are also problem-solving algorithms which always do produce the optimal solution in a reasonable time frame. Given our definition of a heuristic as a fallible method, algorithms of this kind are counterexamples to the complaint that the notion has become so wide that ‘any procedure can be called a heuristic’. However, as we have seen, there are also heuristic procedures that are non-algorithmic. These may be necessary either because the problem to be solved is not sufficiently well-defined to allow for an algorithm, or because an algorithm that would solve the problem at hand, is not known or does not exist. Kleining’s qualitative heuristics is an example of non-algorithmic heuristics necessitated by the ill-defined problems of research in the social sciences, while Polya’s heuristic for solving mathematical problems is an example of the latter: an algorithm that would allow one to decide if a given mathematical conjecture is a theorem or not does not exist (cf. Davis, 1965 ).

Pre-SEU vs. post-SEU heuristics

As we noted in the introduction, the emergence of the SEU theory can be regarded as a kind of watershed for the research on heuristics, as it came to be regarded as the standard definition of rational choice. Post-SEU, fallible methods of decision-making would have to face comparison with this standard. Gigerenzer’s almost belligerent criticism of SEU shows that even today it seems difficult to discuss the pros and cons of heuristics unless one relates them to the backdrop of SEU. However, his criticism of SEU is mostly en passant and seems to assume that the SEU model requires ‘known probabilities’ (e.g., Gigerenzer, 2021 ), ignoring the fact that it is, in general, subjective probabilities, as derived from the agent’s preferences among lotteries, that the model relies on (cf. e.g., Jeffrey, 1967 or Gilboa, 2011 ). In fact, when applied to an ill-defined decision problem in, e.g., management, the SEU theory may well be regarded as a heuristic—it asks you to consider the possible consequences of the relevant set of actions, your preferences among those consequences, and the likelihood of those consequences. To the extent that one may get all of these elements wrong, SEU is a fallible method of decision-making. To be sure, it is not a fast and effortless heuristic, but our historical review of pre-SEU heuristics has illustrated that heuristics may be quite elaborate and require considerable effort and attention.
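Read as a recipe, the SEU 'heuristic' reduces to a weighted sum: list the actions, attach subjective probabilities and utilities to their consequences, and pick the action with the highest expected utility. A minimal sketch with invented numbers (every input is a subjective assumption, which is precisely where the method can go wrong):

```python
# Hypothetical management decision: launch a product now vs. wait a quarter.
# Probabilities and utilities are subjective inputs; getting any of them
# wrong is exactly how the SEU 'heuristic' can fail.
actions = {
    "launch now": [(0.6, 100), (0.4, -50)],  # (subjective probability, utility)
    "wait":       [(0.8, 60), (0.2, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # wait -- EU 48 beats the 40 of launching now
```

The computation is trivial; the effort and fallibility lie entirely in eliciting the action set, the probabilities, and the preferences.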

It is quite true, of course, that the SEU heuristic will hardly be helpful in problem-solving that is not ‘just’ decision-making. If, e.g., the problem to be solved is to find a proof for a mathematical conjecture, the set of possible actions will in general be too vast to be practically contemplated, let alone evaluated according to preferences and probabilities.

Positive vs. negative heuristics

To the extent that the study of heuristics aims at understanding how decisions are actually made, it is not only positive heuristics that need to be considered. It will also be required to investigate the conditions that may prevent the agent from adopting certain courses of action. As we saw, Lakatos used the notion of negative heuristics quite explicitly to characterize research programmes, but we also briefly reviewed Duncker's notion of 'functional fixedness' as an example of a hindrance to adequate problem-solving. A systematic study of such negative heuristics seems to be missing in the literature, and we believe it may be a helpful complement to the study of positive heuristics, which has dominated the literature we reviewed.

To the extent that heuristics are studied with the normative aim of identifying effective heuristics, it may also be useful to consider approaches that should not be taken. ‘Do not try to optimize!’ might be a negative heuristic favoured by the fast-and-frugal school of thought.

Heuristics as the product of evolution

Clearly, heuristics have always existed throughout the development of human knowledge due to the 'old mind's' evolutionary roots and the frequent necessity to apply fast and sufficiently reliable behaviour patterns. However, unlike the behaviour patterns of other animals, the methods used by humans in problem-solving are sufficiently diverse that the dual-process theory was suggested to provide some structure to the rich 'toolbox' humans can and do apply. As all our human DNA is the product of evolution, it is not only the intuitive inclinations to react to certain stimuli in a particular way that must be seen as the product of evolution, but also our ability to abstain from following our gut feelings when there is reason to do so, and to reflect on and analyse the situation before we embark on a particular course of action. Quite frequently, we experience a tension between our intuitive inclinations and our analytic mind's judgement, but both are somehow the product of evolution, our biography, and the environment. Thus, pointing out that gut feelings are an evolved capacity of the brain in no way provides an argument for their superiority over the reflective mind.

Moreover, compared to the speed of problem change in our human lifetimes, biological evolution is very slow. The evolved capacities of the human brain may have been well-adapted to the survival needs of our ancestors some 300,000 years ago, but there is little reason to believe that they are uniformly well-adapted to human problem-solving in the 21st century.

Resource-bounded and ecological rationality

Throughout our review, the reader will have noticed that many heuristics have been suggested for specific problem areas. The methods of the ancient Greeks were mainly centred on solving geometrical problems. Llull was primarily concerned with theological questions, Descartes and Leibniz pursued 'mechanical' solutions to philosophical issues, Polya suggested heuristics for mathematics, Müller for engineering, and Kleining for social science research. This already suggests that heuristics suitable for one type of problem need not be suitable for a different type. Likewise, the automatic heuristics that both the Kahneman-Tversky and the Gigerenzer schools focused on are triggered by particular tasks. Simon's observation that the success of a given heuristic depends on the environment in which it is employed is undoubtedly an important one; it has motivated Gigerenzer's notion of ecological rationality and is strikingly absent from the SEU model. If 'environment' is taken in a broad sense that includes the available resources and the cost of time and effort, the notion seems to cover what has been called resource-rational behaviour (e.g., Bhui et al., 2021).

Avenues of further research

A comprehensive study describing the current status of the research on heuristics and their relation to SEU seems to be missing and is beyond the scope of our brief historical review. Insights into their interrelationship can be expected from recent attempts at formal modelling of human cognition that take the issues of limited computational resources and context-dependence of decision-making seriously. E.g., Lieder and Griffiths ( 2020 ) do this from a Bayesian perspective, while Busemeyer et al. ( 2011 ) and Pothos and Busemeyer ( 2022 ) use a generalization of standard Kolmogorov probability theory that is also the basis of quantum mechanics and quantum computation. While it may seem at first glance that such modelling assumes even more computational power than the standard SEU model of decision-making, the computational power is not assumed on the part of the human decision-maker. Rather, the claim is that the decision-maker behaves as if s/he would solve an optimization problem under additional constraints, e.g., on computational resources. The ‘as if’ methodology that is employed here is well-known to economists (Friedman, 1953 ; Mäki, 1998 ) and also to mathematical biologists who have used Bayesian models to explain animal behaviour (McNamara et al., 2006 ; Oaten, 1977 ; Pérez-Escudero and de Polavieja, 2011 ). Evolutionary arguments might be invoked to support this methodology if a survival disadvantage can be shown to result from behaviour patterns that are not Bayesian optimal, but we are not aware of research that would substantiate such arguments. However, attempting to do so by embedding formal models of cognition in models of evolutionary game theory may be a promising avenue for further research.

NP stands for 'nondeterministic polynomial time', which indicates that a solution can be found by a nondeterministic Turing machine in a running time bounded by a polynomial function of the input size. In fact, the TSP is 'NP-hard', which means that it is at least as hard as the hardest problems in the class NP.

Agre P, Horswill I (1997) Lifeworld analysis. J Artif Intell Res 6:111–145

Ayton P, Fischer I (2004) The hot hand fallacy and the gambler's fallacy: two faces of subjective randomness. Memory Cogn 32:8

Banse G, Friedrich K (2000) Konstruieren zwischen Kunst und Wissenschaft: Idee-Entwurf-Gestaltung. Edition Sigma, Berlin

Baron J (2000) Thinking and deciding. Cambridge University Press

Barron G, Leider S (2010) The role of experience in the Gambler’s Fallacy. J Behav Decision Mak 23:1

Barros G (2010) Herbert A Simon and the concept of rationality: boundaries and procedures. Brazilian. J Political Econ 30:3

Baumeister RF, Vohs KD (2007) Encyclopedia of social psychology, vol 1. SAGE

Bazerman MH, Moore DA (1994) Judgment in managerial decision making. Wiley, New York

Bentley JL (1982) Writing efficient programs Prentice-Hall software series. Prentice-Hall

Bhui R, Lai L, Gershman S (2021) Resource-rational decision making. Curr Opin Behav Sci 41:15–21. https://doi.org/10.1016/j.cobeha.2021.02.015

Bolzano B (1837) Wissenschaftslehre. Seidelsche Buchhandlung, Sulzbach

Bossaerts P, Murawski C (2017) Computational complexity and human decision-making. Trends Cogn Sci 21(12):917–929

Boyer CB (1991) The Arabic Hegemony. A History of Mathematics. Wiley, New York

Bröder A (2000) Assessing the empirical validity of the “Take-the-best” heuristic as a model of human probabilistic inference. J Exp Psychol Learn Mem Cogn 26:5

Burke E, Kendall G, Newall J, Hart E, Ross P, Schulenburg S (2003) Hyper-heuristics: an emerging direction in modern search technology. In: Glover F, Kochenberger GA (eds) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, MA

Busemeyer JR, Pothos EM, Franco R, Trueblood JS (2011) A quantum theoretical explanation for probability judgment errors. Psychol Rev 118(2):193

Buss DM, Kenrick DT (1998) Evolutionary social psychology. In: D T Gilbert, S T Fiske, G Lindzey (eds.), The handbook of social psychology. McGraw-Hill, p. 982–1026

Byron M (1998) Satisficing and optimality. Ethics 109:1

Davis M (ed) (1965) The undecidable. Basic papers on undecidable propositions, unsolvable problems and computable functions. Raven Press, New York

Debreu G (1959) Theory of value: an axiomatic analysis of economic equilibrium. Yale University Press

Descartes R (1908) Rules for the direction of the mind. In: Adam C, Tannery P (eds) Oeuvres de Descartes, vol 10. J Vrin, Paris

Descartes R (1998) Discourse on the method for conducting one’s reason well and for seeking the truth in the sciences (1637) (trans and ed: Cress D). Hackett, Indianapolis

Dunbar RIM (1998) Grooming, gossip, and the evolution of language. Harvard University Press

Duncker K (1935) Zur Psychologie des produktiven Denkens. Springer

Englich B, Mussweiler T, Strack F (2006) Playing dice with criminal sentences: the influence of irrelevant anchors on experts’ judicial decision making. Personal Soc Psychol Bull 32:2

Evans JSB (2010) Thinking twice: two minds in one brain. Oxford University Press

Farr RM (1996) The roots of modern social psychology, 1872–1954. Blackwell Publishing

Farris PW, Bendle N, Pfeifer P, Reibstein D (2010) Marketing metrics: the definitive guide to measuring marketing performance. Pearson Education

Fidora A, Sierra C (2011) Ramon Llull, from the Ars Magna to artificial intelligence. Artificial Intelligence Research Institute, Barcelona

Frantz R (2003) Herbert Simon Artificial intelligence as a framework for understanding intuition. J Econ Psychol 24:2. https://doi.org/10.1016/S0167-4870(02)00207-6

Friedman M (1953) The methodology of positive economics. In: Friedman M (ed) Essays in positive economics. University of Chicago Press

Ghiselin MT (1973) Darwin and evolutionary psychology. Science (New York, NY) 179:4077

Gibbons A (2007) Paleoanthropology. Food for thought. Science (New York, NY) 316:5831

Gigerenzer G (1996) On narrow norms and vague heuristics: a reply to Kahneman and Tversky. Psychol Rev 103:3

Gigerenzer G (2000) Adaptive thinking: rationality in the real world. Oxford University Press, USA

Gigerenzer G (2008) Why heuristics work. Perspect Psychol Sci 3:1

Gigerenzer G (2015) Simply rational: decision making in the real world. Evol Cogn

Gigerenzer G (2021) Embodied heuristics. Front Psychol https://doi.org/10.3389/fpsyg.2021.711289

Gigerenzer G, Gaissmaier W (2011) Heuristic decision making. Annual Review of Psychology 62, p 451–482

Gigerenzer G, Goldstein DG (1996) Reasoning the fast and frugal way: models of bounded rationality. Psychol Rev 103:4

Gigerenzer G, Selten R (eds) (2001) Bounded rationality: the adaptive toolbox. MIT Press

Gigerenzer G, Todd PM (1999) Simple heuristics that make us smart. Oxford University Press, USA

Gilboa I (2011) Making better decisions. Decision theory in practice. Wiley-Blackwell

Gilovich T, Griffin D (2002) Introduction—heuristics and biases: then and now in heuristics and biases: the psychology of intuitive judgment (8). Cambridge University Press

Gilovich T, Vallone R, Tversky A (1985) The hot hand in basketball: on the misperception of random sequences. Cogn Psychol 17:3

Glaveanu VP (2019) The creativity reader. Oxford University Press

Glover F, Kochenberger GA (eds) (2003) Handbook of metaheuristics. International series in operations research & management science, vol 57. Springer, Boston, MA

Goldstein DG, Gigerenzer G (2002) Models of ecological rationality: the recognition heuristic. Psychol Rev 109:1

Graefe A, Armstrong JS (2012) Predicting elections from the most important issue: a test of the take-the-best heuristic. J Behav Decision Mak 25:1

Groner M, Groner R, Bischof WF (1983) Approaches to heuristics: a historical review. In: Groner R, Groner M, Bischof WF (eds) Methods of heuristics. Erlbaum

Groner R, Groner M (1991) Heuristische versus algorithmische Orientierung als Dimension des individuellen kognitiven Stils. In: Grawe K, Semmer N, Hänni R (Hrsg) Üher die richtige Art, Psychologie zu betreiben. Hogrefe, Göttingen

Gugerty L (2006) Newell and Simon’s logic theorist: historical background and impact on cognitive modelling. In: Proceedings of the human factors and ergonomics society annual meeting. Symposium conducted at the meeting of SAGE Publications. Sage, Los Angeles, CA

Harel D (2000) Computers Ltd: what they really can’t do. Oxford University Press

Harley TA (2021) The science of consciousness: waking, sleeping and dreaming. Cambridge University Press

Harris B (1979) Whatever happened to little Albert? Am Psychol 34:2

Heath TL (1926) The thirteen books of Euclid’s elements. Introduction to vol I, 2nd edn. Cambridge University Press

Hertwig R, Pachur T (2015) Heuristics, history of. In: International encyclopedia of the social behavioural sciences. Elsevier, pp. 829–835

Hilton DJ (1995) The social context of reasoning: conversational inference and rational judgment. Psychol Bull 118:2

Hopcroft JE, Motwani R, Ullman JD (2007) Introduction to Automata Theory, languages, and computation. Addison Wesley, Boston/San Francisco/New York

Jeffrey R (1967) The logic of decision, 2nd edn. McGraw-Hill

Jones S, Juslin P, Olsson H, Winman A (2000) Algorithm, heuristic or exemplar: Process and representation in multiple-cue judgment. In: Proceedings of the 22nd annual conference of the Cognitive Science Society. Symposium conducted at the meeting of Erlbaum, Hillsdale, NJ

Kahneman D (2011) Thinking, fast and slow. Farrar, Straus and Giroux

Kahneman D, Klein G (2009) Conditions for intuitive expertise: a failure to disagree. Am Psychol 64:6

Kahneman D, Tversky A (1996) On the reality of cognitive illusions. Psychol Rev 103(3):582–591

Khaldun I (1967) The Muqaddimah. An introduction to history (trans: Arabic by Rosenthal F). Abridged and edited by Dawood NJ. Princeton University Press

Klein G (2001) The fiction of optimization. In: Gigerenzer G, Selten R (eds) Bounded Rationality: The Adaptive Toolbox. MIT Press Editors

Kleining G (1982) Umriss zu einer Methodologie qualitativer Sozialforschung. Kölner Z Soziol Sozialpsychol 34:2

Kleining G (1995) Von der Hermeneutik zur qualitativen Heuristik. Beltz

Lakatos I (1970) Falsification and the methodology of scientific research programmes. In: Lakatos I, Musgrave A (eds) Criticism and the growth of knowledge. Cambridge University Press

Leibniz GW (1880) Die Philosophischen Schriften von GW Leibniz IV, hrsg von CI Gerhardt

Lerner RM (1978) Nature Nurture and Dynamic Interactionism. Human Development 21(1):1–20. https://doi.org/10.1159/000271572

Lieder F, Griffiths TL (2020) Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences. Vol 43, e1. Cambridge University Press

Link D (2010) Scrambling TRUTH: rotating letters as a material form of thought. Variantology 4, p. 215–266

Llull R (1308) Ars Generalis Ultima (trans: Dambergs Y). https://lullianarts.narpan.net/

Luan S, Reb J, Gigerenzer G (2019) Ecological rationality: fast-and-frugal heuristics for managerial decision-making under uncertainty. Acad Manag J 62:6

Mäki U (1998) As if. In: Davis J, Hands DW, Mäki U (ed) The handbook of economic methodology. Edward Elgar Publishing

Martí R, Pardalos P, Resende M (eds) (2018) Handbook of heuristics. Springer, Cham

McDougall W (2015) An introduction to social psychology. Psychology Press

McNamara JM, Green RF, Olsson O (2006) Bayes’ theorem and its applications in animal behaviour. Oikos 112(2):243–251. http://www.jstor.org/stable/3548663

Newborn M (1997) Kasparov versus Deep Blue: computer chess comes of age. Springer

Newell A, Shaw JC, Simon HA (1959) Report on a general problem-solving program. In: R. Oldenbourg (ed) IFIP congress. UNESCO, Paris

Newell A, Simon HA (1972) Human problem solving. Prentice-Hall, Englewood Cliffs, NJ

Newell BR (2013) Judgment under uncertainty. In: Reisberg D (ed) The Oxford handbook of cognitive psychology. Oxford University Press

Newell BR, Weston NJ, Shanks DR (2003) Empirical tests of a fast-and-frugal heuristic: not everyone “takes the best”. Organ Behav Hum Decision Processes 91:1

Oaten A (1977) Optimal foraging in patches: a case for stochasticity. Theor Popul Biol 12(3):263–285

Oppenheimer DM (2003) Not so fast! (and not so frugal!): rethinking the recognition heuristic. Cognition 90:1

Pachur T, Marinello G (2013) Expert intuitions: how to model the decision strategies of airport customs officers? Acta Psychol 144:1

Pearl J (1984) Heuristics: intelligent search strategies for computer problem solving. Addison-Wesley Longman Publishing Co, Inc

Pérez-Escudero A, de Polavieja G (2011) Collective animal behaviour from Bayesian estimation and probability matching. Nature Precedings

Pinheiro CAR, McNeill F (2014) Heuristics in analytics: a practical perspective of what influences our analytical world. Wiley Online Library

Polya G (1945) How to solve it. Princeton University Press

Polya G (1954) Induction and analogy in mathematics. Princeton University Press

Pombo O (2002) Leibniz and the encyclopaedic project. In: Actas do Congresso Internacional Ciência, Tecnologia Y Bien Comun: La atualidad de Leibniz

Pothos EM, Busemeyer JR (2022) Quantum cognition. Annu Rev Psychol 73:749–778

Priest G (2008) An introduction to non-classical logic: from if to is. Cambridge University Press

Ramsey FP (1926) Truth and probability. In: Braithwaite RB (ed) The foundations of mathematics and other logical essays. McMaster University Archive for the History of Economic Thought. https://EconPapers.repec.org/RePEc:hay:hetcha:ramsey1926

Reimer T, Katsikopoulos K (2004) The use of recognition in group decision-making. Cogn Sci 28:6

Reisberg D (ed) (2013) The Oxford handbook of cognitive psychology. Oxford University Press

Rieskamp J, Otto PE (2006) SSL: a theory of how people learn to select strategies. J Exp Psychol Gen 135:2

Ritchey T (2022) Ramon Llull and the combinatorial art. https://www.swemorph.com/amg/pdf/ars-morph-1-draft-ch-4.pdf

Ritter J, Gründer K, Gabriel G, Schepers H (2017) Historisches Wörterbuch der Philosophie online. Schwabe Verlag

Russell SJ, Norvig P, Davis E (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice-Hall series in artificial intelligence. Prentice-Hall

Savage LJ (ed) (1954) The foundations of statistics. Courier Corporation

Schacter D, Gilbert D, Wegner D (2011) Psychology, 2nd edn. Worth

Schaeffer J, Burch N, Bjornsson Y, Kishimoto A, Muller M, Lake R, Lu P, Sutphen S (2007) Checkers is solved. Science 317(5844):1518–1522

Schreurs BG (1989) Classical conditioning of model systems: a behavioural review. Psychobiology 17:2

Scopus (2022) Search “heuristics”. https://www.scopus.com/standard/marketing.uri (TITLE-ABS-KEY(heuristic) AND (LIMIT-TO (SUBJAREA,"DECI") OR LIMIT-TO (SUBJAREA,"SOCI") OR LIMIT-TO (SUBJAREA,"BUSI"))) Accessed on 16 Apr 2022

Searle JR (1997) The mystery of consciousness. Granta Books

Semaan G, Coelho J, Silva E, Fadel A, Ochi L, Maculan N (2020) A brief history of heuristics: from Bounded Rationality to Intractability. IEEE Latin Am Trans 18(11):1975–1986. https://latamt.ieeer9.org/index.php/transactions/article/view/3970/682

Sen S (2020) The environment in evolution: Darwinism and Lamarckism revisited. Harvest Volume 1(2):84–88. https://doi.org/10.2139/ssrn.3537393

Shah AK, Oppenheimer DM (2008) Heuristics made easy: an effort-reduction framework. Psychol Bull 134:2

Siitonen A (2014) Bolzano on finding out intentions behind actions. In: From the ALWS archives: a selection of papers from the International Wittgenstein Symposia in Kirchberg am Wechsel

Simon HA (1955) A behavioural model of rational choice. Q J Econ 69:1

Simon HA, Newell A (1958) Heuristic problem solving: the next advance in operations research. Oper Res 6(1):1–10. http://www.jstor.org/stable/167397

Smith R (2020) Aristotle’s logic. In: Zalta EN (ed) The Stanford encyclopedia of philosophy, 2020th edn. Metaphysics Research Lab, Stanford University

Smulders TV (2009) Darwin 200: special feature on brain evolution. Biology Letters 5(1), p. 105–107

Sörensen K, Sevaux M, Glover F (2018) A history of metaheuristics. In: Martí R, Pardalos P, Resende M (eds) Handbook of heuristics. Springer, Cham

Stephenson N (2003) Theoretical psychology: critical contributions. Captus Press

Strack F, Mussweiler T (1997) Explaining the enigmatic anchoring effect: mechanisms of selective accessibility. J Person Soc Psychol 73:3

Sullivan D (2002) How search engines work. Search Engine Watch (last updated June 26, 2001). http://www.searchenginewatch.com

Suppes P (1983) Heuristics and the axiomatic method. In: Groner R et al (ed) Methods of Heuristics. Routledge

Turing A (1937) On computable numbers, with an application to the entscheidungsproblem. Proc Lond Math Soc s2-42(1):230–265

Tversky A, Kahneman D (1973) Availability: a heuristic for judging frequency and probability. Cogn Psychol 5:2

Tversky A, Kahneman D (1974) Judgment under uncertainty: heuristics and biases. Science (New York, NY) 185:4157

Vardi MY (2012) Artificial intelligence: past and future. Commun ACM 55:1

Vikhar PA (2016) Evolutionary algorithms: a critical review and its future prospects. Paper presented at the international conference on global trends in signal processing, information computing and communication (ICGTSPICC). IEEE, pp. 261–265

Volz V, Rudolph G, Naujoks B (2016) Demonstrating the feasibility of automatic game balancing. Paper presented at the proceedings of the Genetic and Evolutionary Computation Conference, pp. 269–276

von Neumann J, Morgenstern O (1944) Theory of games and economic behaviour. Princeton University Press, Princeton

Zermelo E (1913) Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels. In: Proceedings of the fifth international congress of mathematicians. Symposium conducted at the meeting of Cambridge University Press, Cambridge. Cambridge University Press, Cambridge

Zilio D (2013) Filling the gaps: skinner on the role of neuroscience in the explanation of behavior. Behavior and Philosophy, 41, p. 33–59

Acknowledgements

We would like to extend our sincere thanks to the reviewers for their valuable time and effort in reviewing our work. Their insightful comments and suggestions have greatly improved the quality of our manuscript.

Open Access funding enabled and organized by Projekt DEAL.

Author information

Authors and affiliations

HHL Leipzig Graduate School of Management, Leipzig, Germany

Mohamad Hjeij & Arnis Vilks

Corresponding author

Correspondence to Mohamad Hjeij.

Ethics declarations

Competing interests

The authors declare no competing interests.

Ethical approval

This article does not contain any studies with human participants performed by any of the authors.

Informed consent

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Hjeij, M., Vilks, A. A brief history of heuristics: how did research on heuristics evolve? Humanit Soc Sci Commun 10, 64 (2023). https://doi.org/10.1057/s41599-023-01542-z

Received : 25 July 2022

Accepted : 30 January 2023

Published : 17 February 2023

DOI : https://doi.org/10.1057/s41599-023-01542-z

What are heuristics? Representative vs. availability heuristics

One topic that many of my psychology tutoring students get confused about is the topic of heuristics, which comes up when they study judgment and decision-making.

What is a heuristic?

A heuristic is a rule of thumb. It is a shortcut to solving a problem when you’re too lazy or overwhelmed or otherwise unable to solve it the proper way.

Here’s an example. Let’s say someone asked you: “Hey! How long is the gestational period of the African elephant?”

The proper response to this strange question would be to say, “Hmm, I don’t know. Hold on one second, let me check.” At this point, you would pull out your smartphone and Google until you stumble upon the Wikipedia page for gestational periods of various mammals. But what if you didn’t have your phone on you, or you didn’t feel like taking it out of your bag? Then you might say, “Hmm, well, the gestational period for humans is about 9 months, but elephants are bigger, so I’m gonna say…15 months?” (The correct answer is 645 days, or about 21 months).

So you would be wrong, but hey, it’s a weird question anyway, and you were kind of close. [If $10,000 or your reputation were on the line, then you’d probably take the time to Google.] This is the heuristic approach to answering the question because you used some information you already knew to make an educated guess (but still a guess!) to answer the question.

Heuristics come in all flavors, but two main types are the representativeness heuristic and the availability heuristic. Students often get these confused, but I’m going to see if I can clear up how they’re different with the use of some examples.

The Availability Heuristic

The availability heuristic is when you make a judgment about something based on how available examples are in your mind. So, this heuristic has a lot to do with your memory of specific instances and what you’ve been exposed to. Some examples:

  • Judging the population of cities (when cities are more available in your mind, like New York or Berlin, you will overestimate their populations).
  • Judging the frequency of deaths from different causes (morbid, I know). People tend to overestimate the number of deaths from, say, airplane crashes, but underestimate the number of deaths from, say, asthma. This is because people hear about deaths from airplane crashes in the news, so they can bring to mind a fair number of examples of this, but they can’t bring to mind examples of people dying from asthma. This is why reading the news can actually be misleading, since rare instances can be covered to the point of seeming commonplace.
  • One of my favorite examples: “Are there more words that begin with “r” or that have “r” as their third letter?” To answer this question, you can’t help but bring specific words to mind. Words that begin with “r” are easy to think of; words that have “r” as their third letter are harder to think of, so many people answer this question with “words that begin with ‘r’” when in fact, that’s the wrong answer.

The Representativeness Heuristic

On to representativeness . These decisions tend to be based on how similar an example is to something else (or how typical or representative the particular case in question is). In this way, representativeness is basically stereotyping. While availability has more to do with memory of specific instances, representativeness has more to do with memory of a prototype, stereotype or average. Let me try to make this clear with some examples:

  • “Linda the bank teller” – this is one of the most famous examples. It comes from the work of Kahneman and Tversky. In this problem, you are told a little bit about Linda, and then asked what her profession is likely to be. Linda is described as an avid protester who went to an all girls’ college. She is an environmentalist, politically liberal, etc. (I’m making up these details, but the information that subjects got in this study is quite similar). Basically, she’s described in such a way that you can’t help but think that she must be a feminist, because the prototype/stereotype that you have in your head is that women who are like Linda are feminists. So when people are asked if Linda is more likely to be a bank teller (working for The Man!) or a feminist bank teller, most people say the latter, even though that doesn’t make any sense, in terms of probability. In this case, people use a shortcut that involved a stereotype to answer the question, and they ignored actual likelihoods.
  • “Tom W.” – another classic example. Even when people know that psychology majors are far more common than engineering majors, they still say that Tom W. is likely to be an engineer, because he was originally described as a nerd: someone who plays video games, likes building things, and doesn’t have the highest social IQ. We think engineers tend to be like that, and that people like that tend to be engineers, so we ignore the base rates and go with a stereotype.
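To see why the “feminist bank teller” answer can’t be right, here’s a toy example in Python. The numbers are completely made up for illustration; only the inequality at the end matters, and it holds for any numbers, because every feminist bank teller is also a bank teller.

```python
# Toy population of 100 people; the trait counts are invented purely to
# illustrate the conjunction rule behind the Linda problem.
population = (
    [("teller", "feminist")] * 5    # feminist bank tellers
    + [("teller",)] * 15            # other bank tellers
    + [("feminist",)] * 30          # feminists who are not tellers
    + [()] * 50                     # neither
)

p_teller = sum("teller" in p for p in population) / len(population)
p_feminist_teller = sum(
    "teller" in p and "feminist" in p for p in population
) / len(population)

print(p_teller, p_feminist_teller)  # → 0.2 0.05
# The conjunction can never beat the single category:
assert p_feminist_teller <= p_teller
```

However you set the counts, the “bank teller” bucket contains the “feminist bank teller” bucket, so judging the conjunction as more probable is a logical error, not just an unlucky guess.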

I can see why representativeness and availability seem similar, because when you use these heuristics, you are always using information that you had in the past to make a guess. But representativeness is less about particular examples, and more about stereotypes (which are probably formed on the basis of examples, but it’s often unclear where the stereotype even originated!). Availability is about particular examples and how readily they come to mind. This is why we tend to use availability when judging the number of things, because counting examples that come to mind is one way to answer that kind of question.

Heuristics on AP or GRE Psychology Tests 

I hope that was helpful, or at least fun! Another psychology tutor tip I have for you, if you’re preparing for the AP Psych or GRE Psych tests, is that these tests tend to use examples that you probably have come across in your review already. So if you memorize which examples go with which heuristics, that’s another way to answer those questions correctly. Obviously, trying to abstract the underlying principles behind the two heuristics is a lot better, but if you’re studying for the test, definitely memorize the famous examples.

For more information about heuristics, biases and decision-making, check out Nobel Laureate Daniel Kahneman’s book Thinking, Fast and Slow.

Thinking and Intelligence

Problem Solving

OpenStaxCollege

Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

PROBLEM-SOLVING STRATEGIES

When you are presented with a problem, whether it is a complex mathematical problem or a broken printer, how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem-solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them ( [link] ). For example, a well-known strategy is trial and error . The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
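The “recipe” idea can be made concrete with a small sketch (the function name is my own) of a classic algorithm, binary search. Followed exactly, the same steps on the same input always produce the same, correct result:

```python
def binary_search(sorted_items, target):
    """Deterministic algorithm: fixed step-by-step instructions that
    always return the same, correct answer for the same input."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2            # check the middle element
        if sorted_items[mid] == target:
            return mid                   # found: return its position
        elif sorted_items[mid] < target:
            lo = mid + 1                 # discard the lower half
        else:
            hi = mid - 1                 # discard the upper half
    return -1                            # target not present

print(binary_search([2, 5, 8, 12, 16], 12))  # → 3
```

There is no judgment call anywhere in the procedure, which is exactly what distinguishes an algorithm from the heuristics discussed next.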

A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment
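By contrast with an algorithm, a heuristic trades guaranteed correctness for speed. A minimal sketch (the function name and coordinates are my own) is the nearest-neighbor rule for planning a route: always visit the closest unvisited stop next. It is fast and usually reasonable, but it carries no guarantee of producing the shortest possible route:

```python
import math

def nearest_neighbor_route(stops, start=0):
    """Heuristic: greedily visit the closest unvisited stop next.
    Quick and often decent, but not guaranteed to be optimal."""
    unvisited = set(range(len(stops))) - {start}
    route = [start]
    while unvisited:
        current = stops[route[-1]]
        # pick whichever remaining stop is nearest to where we are now
        nxt = min(unvisited, key=lambda i: math.dist(current, stops[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

print(nearest_neighbor_route([(0, 0), (5, 0), (1, 1), (0, 4)]))
```

Checking every possible route would be the algorithmic alternative, but its cost grows explosively with the number of stops, which is precisely when a heuristic like this becomes attractive.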

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
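The departure-time calculation above can be written out directly; starting from the goal state and subtracting each time cost is working backwards in miniature (the 30-minute traffic buffer is an assumed figure added for illustration):

```python
from datetime import datetime, timedelta

# Work backwards from the goal: arrive by 3:30 PM, then subtract costs.
arrival = datetime(2024, 6, 1, 15, 30)       # 3:30 PM on the wedding day
drive = timedelta(hours=2, minutes=30)       # D.C. to Philadelphia, no traffic
traffic_buffer = timedelta(minutes=30)       # assumed cushion for I-95 backups

leave_by = arrival - drive - traffic_buffer
print(leave_by.strftime("%I:%M %p"))  # → 12:30 PM
```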

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ( [link] ) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.

[Figure: a 4×4 sudoku grid with these starting digits (dots mark empty cells):
3 . . 2
. 4 1 .
. 3 2 .
4 . . 1]
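Once you have a candidate answer, checking it is itself a purely algorithmic task. The sketch below (assuming the bolded boxes are the four 2×2 quadrants) verifies any completed 4×4 grid against the stated rules, demonstrated on a filled-in grid that satisfies them:

```python
def is_valid_4x4(grid):
    """Check a completed 4x4 sudoku: each row, column, and 2x2 box
    (assumed to be the four quadrants) holds the digits 1-4 exactly
    once, so each group also sums to 10."""
    rows = [list(row) for row in grid]
    cols = [[grid[r][c] for r in range(4)] for c in range(4)]
    boxes = [[grid[r][c] for r in (br, br + 1) for c in (bc, bc + 1)]
             for br in (0, 2) for bc in (0, 2)]
    return all(sorted(group) == [1, 2, 3, 4]
               for group in rows + cols + boxes)

# A completed grid that satisfies every rule:
print(is_valid_4x4([[1, 2, 3, 4],
                    [3, 4, 1, 2],
                    [2, 1, 4, 3],
                    [4, 3, 2, 1]]))  # → True
```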

Here is another popular type of puzzle ( [link] ) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

[Figure: nine dots arranged in three rows and three columns with equal spacing, inside a square outline.]

Take a look at the “Puzzling Scales” logic puzzle below ( [link] ). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

[Figure: “Sam Loyd’s Puzzling Scales.” First scale (balanced): 3 blocks and 1 top weigh the same as 12 marbles. Second scale (balanced): 1 top weighs the same as 1 block and 8 marbles. Question: how many marbles will balance the top alone?]
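If you want to check your answer to the scales puzzle, the two balance conditions can be brute-forced in a few lines (weights are expressed in marble units, and the search range is an assumption for illustration):

```python
# The two balanced scales give two conditions (weights in marbles):
#   3 * block + top == 12     (first scale)
#   top == block + 8          (second scale)
solutions = [(block, top)
             for block in range(1, 13) for top in range(1, 13)
             if 3 * block + top == 12 and top == block + 8]
print(solutions)  # → [(1, 9)]: the top balances 9 marbles
```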

PITFALLS TO PROBLEM SOLVING

Not all problems are successfully solved, however. What challenges stop us from solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set occurs when you persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.

Check out this Apollo 13 scene where the group of NASA engineers are given the task of overcoming functional fixedness.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is that readily available to you, even though it may not be the best example to inform your decision . Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in [link] .

Please visit this site to see a clever music video that a high school teacher made to explain these and other cognitive biases to his AP psychology students.

Were you able to determine how many marbles are needed to balance the scales in [link] ? You need nine. Were you able to solve the problems in [link] and [link] ? Here are the answers ( [link] ).

The first puzzle is a Sudoku grid of 16 squares (4 rows of 4 squares) is shown. Half of the numbers were supplied to start the puzzle and are colored blue, and half have been filled in as the puzzle’s solution and are colored red. The numbers in each row of the grid, left to right, are as follows. Row 1:  blue 3, red 1, red 4, blue 2. Row 2: red 2, blue 4, blue 1, red 3. Row 3: red 1, blue 3, blue 2, red 4. Row 4: blue 4, red 2, red 3, blue 1.The second puzzle consists of 9 dots arranged in 3 rows of 3 inside of a square. The solution, four straight lines made without lifting the pencil, is shown in a red line with arrows indicating the direction of movement. In order to solve the puzzle, the lines must extend beyond the borders of the box. The four connecting lines are drawn as follows. Line 1 begins at the top left dot, proceeds through the middle and right dots of the top row, and extends to the right beyond the border of the square. Line 2 extends from the end of line 1, through the right dot of the horizontally centered row, through the middle dot of the bottom row, and beyond the square’s border ending in the space beneath the left dot of the bottom row. Line 3 extends from the end of line 2 upwards through the left dots of the bottom, middle, and top rows. Line 4 extends from the end of line 3 through the middle dot in the middle row and ends at the right dot of the bottom row.

Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics. To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision making skills.

Review Questions

A specific formula for solving a problem is called ________.

  • an algorithm
  • a heuristic
  • a mental set
  • trial and error

A mental shortcut in the form of a general problem-solving framework is called ________.

Which type of bias involves becoming fixated on a single trait of a problem?

  • anchoring bias
  • confirmation bias
  • representative bias
  • availability bias

Which type of bias involves relying on a false stereotype to make a decision?

Critical Thinking Questions

What is functional fixedness and how can overcoming it help you solve problems?

Functional fixedness occurs when you cannot see a use for an object other than the use for which it was intended. For example, if you need something to hold up a tarp in the rain, but only have a pitchfork, you must overcome your expectation that a pitchfork can only be used for garden chores before you realize that you could stick it in the ground and drape the tarp on top of it to hold it up.

How does an algorithm save you time and energy when solving a problem?

An algorithm is a proven formula for achieving a desired outcome. It saves time because if you follow it exactly, you will solve the problem without having to figure out how to solve the problem. It is a bit like not reinventing the wheel.

Personal Application Question

Which type of bias do you recognize in your own decision making processes? How has this bias affected how you’ve made decisions in the past and how can you use your awareness of it to improve your decisions making skills in the future?

Problem Solving Copyright © 2014 by OpenStaxCollege is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.


Problem-solving and decision making

Problem-solving refers to the process of reaching a goal from a present condition when the present condition does not lead directly to the goal, is far from it, or requires more complex reasoning to find the steps toward the goal.

Types of problem-solving

Problem-solving is generally considered to span two major domains: mathematical problem-solving, which involves problems that can be represented by symbols, and personal problem-solving, in which some difficulty or barrier must be overcome.

Within these domains, a number of approaches can be taken. A person may use trial and error, testing different solutions to see which works best. They may use an algorithmic approach, following a set of rules and steps to reach a correct solution. Or they may take a heuristic approach, drawing on previous experience to guide their problem-solving.


Barriers to effective problem solving 

Barriers to problem-solving exist; they can be categorized by their features and by the tasks required to overcome them.

A mental set is one barrier to problem-solving: an unconscious tendency to approach a problem in a particular way, shaped by our past experiences and habits. Functional fixedness is a special type of mental set that occurs when the intended purpose of an object hinders a person’s ability to see its other potential uses.

An unnecessary constraint is a barrier that causes people to unconsciously place boundaries on the task at hand that the problem itself does not impose.

Irrelevant information is a barrier that arises when information presented as part of a problem is unrelated or unimportant to it and will not help solve it. Typically, it detracts from the problem-solving process, as it may seem pertinent and distract people from finding the most efficient solution.

Confirmation bias is another barrier to problem-solving. It exists when a person tends to look for information that supports their idea or approach instead of considering new information that may contradict it.

Strategies for problem-solving

There are many strategies that can make solving a problem easier and more efficient. Two of them, algorithms and heuristics, are of particularly great psychological importance.

A heuristic is a rule of thumb, a strategy, or a mental shortcut that generally works for solving a problem (particularly decision-making problems). It is a practical method, one that is not a hundred per cent guaranteed to be optimal or even successful, but is sufficient for the immediate goal. Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps.

An algorithm is a set of steps for solving a problem. Unlike a heuristic, an algorithm is guaranteed to produce the correct solution; however, it may not be the most efficient way of solving the problem. Additionally, you need to know the algorithm (i.e., the complete set of steps), which is not usually realistic for the problems of daily life.
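The contrast between the two strategies can be sketched in code. In this minimal, hypothetical example (the item list and the "usual spots" are invented for illustration), the algorithmic search checks every position and is guaranteed to find the target if it is present, while the heuristic search checks only a few likely spots and may fail:

```python
def find_algorithm(items, target):
    # Algorithm: examine every position in order.
    # Guaranteed to find the target if it is present,
    # though possibly slower than a lucky shortcut.
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1


def find_heuristic(items, target, likely_spots):
    # Heuristic: check a few "usual places" first, then give up.
    # Fast when the rule of thumb holds, but not guaranteed to succeed.
    for i in likely_spots:
        if i < len(items) and items[i] == target:
            return i
    return -1


pockets = ["wallet", "phone", "keys", "receipt"]
print(find_algorithm(pockets, "keys"))          # 2
print(find_heuristic(pockets, "keys", [0, 1]))  # -1: the shortcut missed it
```

The heuristic is cheaper when it works, but only the exhaustive algorithm carries a guarantee.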

Biases can affect problem-solving ability by directing a problem-solving heuristic or algorithm based on prior experience.

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. There are several forms of bias which can inform our decision-making process and problem-solving ability:

Anchoring bias – Tendency to focus on one particular piece of information when making decisions or solving problems

Confirmation bias – Focuses on information that confirms existing beliefs


Hindsight bias – Belief that the event just experienced was predictable

Representative bias – Unintentional stereotyping of someone or something

Availability bias – Decision is based upon either an available precedent or an example that may be faulty

Belief bias – Casting judgment on issues based on what one already believes about their conclusion. A good example is belief perseverance, the tendency to hold on to pre-existing beliefs despite being presented with contradictory evidence.


Khan Academy

MCAT Official Prep (AAMC)

Sample Test P/S Section Passage 3 Question 12

Practice Exam 2 P/S Section Passage 8 Question 40

Practice Exam 2 P/S Section Passage 8 Question 42

Practice Exam 4 P/S Section Question 12

  • Problem-solving arises in two major domains: mathematical problems and personal problems

  • Barriers to problem-solving include a person’s mental set, unnecessary constraints on their thinking, and the presence of irrelevant information

  • People typically employ a number of problem-solving strategies, such as heuristics, in which a general rule of thumb guides the approach, or algorithms, in which a specific set of steps is followed that guarantees a correct solution

  • Biases can affect problem-solving ability by directing a problem-solving heuristic or algorithm based on prior experience.

Mental set: an unconscious tendency to approach a problem in a particular way

Problem: the difference between the current situation and a goal

Algorithm: problem-solving strategy characterized by a specific set of instructions

Anchoring bias: faulty heuristic in which you fixate on a single aspect of a problem to find a solution

Availability bias: faulty heuristic in which you make a decision based on information readily available to you

Confirmation bias: faulty heuristic in which you focus on information that confirms your beliefs

Functional fixedness: inability to see an object as useful for anything other than the use for which it was intended

Heuristic: mental shortcut that saves time when solving a problem

Hindsight bias: belief that the event just experienced was predictable, even though it really wasn’t

Problem-solving strategy: a method for solving problems

Representative bias: faulty heuristic in which you stereotype someone or something without a valid basis for your judgment

Working backwards: heuristic in which you begin to solve a problem by focusing on the end result


43 Problem Solving


Learning Objectives

By the end of this section, you will be able to:

  • Describe problem solving strategies
  • Define algorithm and heuristic
  • Explain some common roadblocks to effective problem solving

People face problems every day—usually, multiple problems throughout the day. Sometimes these problems are straightforward: To double a recipe for pizza dough, for example, all that is required is that each ingredient in the recipe be doubled. Sometimes, however, the problems we encounter are more complex. For example, say you have a work deadline, and you must mail a printed copy of a report to your supervisor by the end of the business day. The report is time-sensitive and must be sent overnight. You finished the report last night, but your printer will not work today. What should you do? First, you need to identify the problem and then apply a strategy for solving the problem.

PROBLEM-SOLVING STRATEGIES

When you are presented with a problem—whether it is a complex mathematical problem or a broken printer—how do you solve it? Before finding a solution to the problem, the problem must first be clearly identified. After that, one of many problem solving strategies can be applied, hopefully resulting in a solution.

A problem-solving strategy is a plan of action used to find a solution. Different strategies have different action plans associated with them ( [link] ). For example, a well-known strategy is trial and error . The old adage, “If at first you don’t succeed, try, try again” describes trial and error. In terms of your broken printer, you could try checking the ink levels, and if that doesn’t work, you could check to make sure the paper tray isn’t jammed. Or maybe the printer isn’t actually connected to your laptop. When using trial and error, you would continue to try different solutions until you solved your problem. Although trial and error is not typically one of the most time-efficient strategies, it is a commonly used one.
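The broken-printer scenario can be expressed as a simple loop over candidate fixes. This is a hypothetical sketch: the three fix functions are stand-ins invented for the example, with only the last one succeeding.

```python
# Hypothetical candidate fixes; each returns True if it solved the problem.
def check_ink():
    return False

def clear_paper_jam():
    return False

def reconnect_cable():
    return True  # the cable was unplugged all along


def trial_and_error(fixes):
    # Try each candidate fix in turn until one works.
    for fix in fixes:
        if fix():
            return fix.__name__
    return None  # no fix worked


print(trial_and_error([check_ink, clear_paper_jam, reconnect_cable]))
# reconnect_cable
```

As the text notes, this strategy is rarely the most time-efficient, but it requires no advance knowledge of which fix is correct.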

Another type of strategy is an algorithm. An algorithm is a problem-solving formula that provides you with step-by-step instructions used to achieve a desired outcome (Kahneman, 2011). You can think of an algorithm as a recipe with highly detailed instructions that produce the same result every time they are performed. Algorithms are used frequently in our everyday lives, especially in computer science. When you run a search on the Internet, search engines like Google use algorithms to decide which entries will appear first in your list of results. Facebook also uses algorithms to decide which posts to display on your newsfeed. Can you identify other situations in which algorithms are used?
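The recipe analogy can be made literal. In this minimal sketch (the ingredient names and quantities are invented), doubling a recipe is an algorithm because applying the same fixed step to every ingredient produces the correct result every time:

```python
def double_recipe(ingredients):
    # Algorithm: apply the same deterministic step (multiply by 2)
    # to every ingredient; the outcome is correct every time.
    return {name: qty * 2 for name, qty in ingredients.items()}


dough = {"flour_cups": 2.0, "water_cups": 0.75, "yeast_tsp": 1.0}
print(double_recipe(dough))
# {'flour_cups': 4.0, 'water_cups': 1.5, 'yeast_tsp': 2.0}
```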

A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A “rule of thumb” is an example of a heuristic. Such a rule saves the person time and energy when making a decision, but despite its time-saving characteristics, it is not always the best method for making a rational decision. Different types of heuristics are used in different types of situations, but the impulse to use a heuristic occurs when one of five conditions is met (Pratkanis, 1989):

  • When one is faced with too much information
  • When the time to make a decision is limited
  • When the decision to be made is unimportant
  • When there is access to very little information to use in making the decision
  • When an appropriate heuristic happens to come to mind in the same moment

Working backwards is a useful heuristic in which you begin solving the problem by focusing on the end result. Consider this example: You live in Washington, D.C. and have been invited to a wedding at 4 PM on Saturday in Philadelphia. Knowing that Interstate 95 tends to back up any day of the week, you need to plan your route and time your departure accordingly. If you want to be at the wedding service by 3:30 PM, and it takes 2.5 hours to get to Philadelphia without traffic, what time should you leave your house? You use the working backwards heuristic to plan the events of your day on a regular basis, probably without even thinking about it.
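The departure-time calculation above works backwards from the fixed end point. Here is a sketch using Python's `datetime` module; the specific date and the one-hour traffic buffer are assumptions added for the example:

```python
from datetime import datetime, timedelta

arrival = datetime(2024, 6, 15, 15, 30)      # target: 3:30 PM on the wedding day (assumed date)
drive_time = timedelta(hours=2, minutes=30)  # Washington, D.C. to Philadelphia without traffic
traffic_buffer = timedelta(hours=1)          # assumed cushion for I-95 backups

# Work backwards: subtract each leg of the trip from the end result.
departure = arrival - drive_time - traffic_buffer
print(departure.strftime("%I:%M %p"))  # 12:00 PM
```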

Another useful heuristic is the practice of accomplishing a large goal or task by breaking it into a series of smaller steps. Students often use this common method to complete a large research project or long essay for school. For example, students typically brainstorm, develop a thesis or main topic, research the chosen topic, organize their information into an outline, write a rough draft, revise and edit the rough draft, develop a final draft, organize the references list, and proofread their work before turning in the project. The large task becomes less overwhelming when it is broken down into a series of small steps.

Problem-solving abilities can improve with practice. Many people challenge themselves every day with puzzles and other mental exercises to sharpen their problem-solving skills. Sudoku puzzles appear daily in most newspapers. Typically, a sudoku puzzle is a 9×9 grid. The simple sudoku below ( [link] ) is a 4×4 grid. To solve the puzzle, fill in the empty boxes with a single digit: 1, 2, 3, or 4. Here are the rules: The numbers must total 10 in each bolded box, each row, and each column; however, each digit can only appear once in a bolded box, row, and column. Time yourself as you solve this puzzle and compare your time with a classmate.

A four column by four row Sudoku puzzle is shown. The top left cell contains the number 3. The top right cell contains the number 2. The bottom right cell contains the number 1. The bottom left cell contains the number 4. The cell at the intersection of the second row and the second column contains the number 4. The cell to the right of that contains the number 1. The cell below the cell containing the number 1 contains the number 2. The cell to the left of the cell containing the number 2 contains the number 3.
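The 4×4 grid just described can also be handed to a computer, which makes a nice contrast with human strategies: where a human solver uses heuristics, the exhaustive search below (a minimal Python sketch) is a pure algorithm. It encodes the given cells, then tries every possible assignment of digits to the blank cells until one satisfies the rules.

```python
from itertools import product

# Given cells of the 4x4 grid described above; 0 marks a blank.
grid = [
    [3, 0, 0, 2],
    [0, 4, 1, 0],
    [0, 3, 2, 0],
    [4, 0, 0, 1],
]

def valid(g):
    # Each digit 1-4 must appear exactly once in every row, column,
    # and 2x2 bolded box (which also makes each group total 10).
    rows = g
    cols = [[g[r][c] for r in range(4)] for c in range(4)]
    boxes = [[g[r][c] for r in (br, br + 1) for c in (bc, bc + 1)]
             for br in (0, 2) for bc in (0, 2)]
    return all(sorted(group) == [1, 2, 3, 4] for group in rows + cols + boxes)

blanks = [(r, c) for r in range(4) for c in range(4) if grid[r][c] == 0]

# Pure algorithm: enumerate all 4**8 = 65,536 candidate fillings.
for digits in product((1, 2, 3, 4), repeat=len(blanks)):
    for (r, c), d in zip(blanks, digits):
        grid[r][c] = d
    if valid(grid):
        break

for row in grid:
    print(row)
```

The search is guaranteed to find the solution, but it is far slower than the eliminations a practiced human solver makes.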

Here is another popular type of puzzle ( [link] ) that challenges your spatial reasoning skills. Connect all nine dots with four connecting straight lines without lifting your pencil from the paper:

A square shaped outline contains three rows and three columns of dots with equal space between them.

Take a look at the “Puzzling Scales” logic puzzle below ( [link] ). Sam Loyd, a well-known puzzle master, created and refined countless puzzles throughout his lifetime (Cyclopedia of Puzzles, n.d.).

A puzzle involving a scale is shown. At the top of the figure it reads: “Sam Loyds Puzzling Scales.” The first row of the puzzle shows a balanced scale with 3 blocks and a top on the left and 12 marbles on the right. Below this row it reads: “Since the scales now balance.” The next row of the puzzle shows a balanced scale with just the top on the left, and 1 block and 8 marbles on the right. Below this row it reads: “And balance when arranged this way.” The third row shows an unbalanced scale with the top on the left side, which is much lower than the right side. The right side is empty. Below this row it reads: “Then how many marbles will it require to balance with that top?”
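The scales puzzle reduces to two linear equations. Measuring everything in marbles, a short script (standard library only) solves them by substitution:

```python
from fractions import Fraction

# Let b = weight of one block and t = weight of the top, in marbles.
# Scale 1: 3b + t = 12
# Scale 2:      t = b + 8
# Substituting scale 2 into scale 1: 3b + (b + 8) = 12, so 4b = 4.
b = Fraction(12 - 8, 4)  # b = 1 marble
t = b + 8
print(int(t))  # 9 marbles balance the top
```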

PITFALLS TO PROBLEM SOLVING

Not all problems are successfully solved, however. What challenges stop us from successfully solving a problem? Albert Einstein once said, “Insanity is doing the same thing over and over again and expecting a different result.” Imagine a person in a room that has four doorways. One doorway that has always been open in the past is now locked. The person, accustomed to exiting the room by that particular doorway, keeps trying to get out through the same doorway even though the other three doorways are open. The person is stuck—but she just needs to go to another doorway, instead of trying to get out through the locked doorway. A mental set occurs when you persist in approaching a problem in a way that has worked in the past but is clearly not working now.

Functional fixedness is a type of mental set where you cannot perceive an object being used for something other than what it was designed for. During the Apollo 13 mission to the moon, NASA engineers at Mission Control had to overcome functional fixedness to save the lives of the astronauts aboard the spacecraft. An explosion in a module of the spacecraft damaged multiple systems. The astronauts were in danger of being poisoned by rising levels of carbon dioxide because of problems with the carbon dioxide filters. The engineers found a way for the astronauts to use spare plastic bags, tape, and air hoses to create a makeshift air filter, which saved the lives of the astronauts.


Check out this Apollo 13 scene where the group of NASA engineers are given the task of overcoming functional fixedness.

Researchers have investigated whether functional fixedness is affected by culture. In one experiment, individuals from the Shuar group in Ecuador were asked to use an object for a purpose other than that for which the object was originally intended. For example, the participants were told a story about a bear and a rabbit that were separated by a river and asked to select among various objects, including a spoon, a cup, erasers, and so on, to help the animals. The spoon was the only object long enough to span the imaginary river, but if the spoon was presented in a way that reflected its normal usage, it took participants longer to choose the spoon to solve the problem (German & Barrett, 2005). The researchers wanted to know if exposure to highly specialized tools, as occurs with individuals in industrialized nations, affects their ability to transcend functional fixedness. It was determined that functional fixedness is experienced in both industrialized and nonindustrialized cultures (German & Barrett, 2005).

In order to make good decisions, we use our knowledge and our reasoning. Often, this knowledge and reasoning is sound and solid. Sometimes, however, we are swayed by biases or by others manipulating a situation. For example, let’s say you and three friends wanted to rent a house and had a combined target budget of $1,600. The realtor shows you only very run-down houses for $1,600 and then shows you a very nice house for $2,000. Might you ask each person to pay more in rent to get the $2,000 home? Why would the realtor show you the run-down houses and the nice house? The realtor may be challenging your anchoring bias. An anchoring bias occurs when you focus on one piece of information when making a decision or solving a problem. In this case, you’re so focused on the amount of money you are willing to spend that you may not recognize what kinds of houses are available at that price point.

The confirmation bias is the tendency to focus on information that confirms your existing beliefs. For example, if you think that your professor is not very nice, you notice all of the instances of rude behavior exhibited by the professor while ignoring the countless pleasant interactions he is involved in on a daily basis. Hindsight bias leads you to believe that the event you just experienced was predictable, even though it really wasn’t. In other words, you knew all along that things would turn out the way they did. Representative bias describes a faulty way of thinking, in which you unintentionally stereotype someone or something; for example, you may assume that your professors spend their free time reading books and engaging in intellectual conversation, because the idea of them spending their time playing volleyball or visiting an amusement park does not fit in with your stereotypes of professors.

Finally, the availability heuristic is a heuristic in which you make a decision based on an example, information, or recent experience that is readily available to you, even though it may not be the best example to inform your decision. Biases tend to “preserve that which is already established—to maintain our preexisting knowledge, beliefs, attitudes, and hypotheses” (Aronson, 1995; Kahneman, 2011). These biases are summarized in [link].

Please visit this site to see a clever music video that a high school teacher made to explain these and other cognitive biases to his AP psychology students.

Were you able to determine how many marbles are needed to balance the scales in [link] ? You need nine. Were you able to solve the problems in [link] and [link] ? Here are the answers ( [link] ).

The first puzzle’s answer is a Sudoku grid of 16 squares (4 rows of 4 squares). Half of the numbers were supplied to start the puzzle and are colored blue, and half have been filled in as the puzzle’s solution and are colored red. The numbers in each row of the grid, left to right, are as follows. Row 1: blue 3, red 1, red 4, blue 2. Row 2: red 2, blue 4, blue 1, red 3. Row 3: red 1, blue 3, blue 2, red 4. Row 4: blue 4, red 2, red 3, blue 1.

The second puzzle consists of 9 dots arranged in 3 rows of 3 inside of a square. The solution, four straight lines made without lifting the pencil, is shown in a red line with arrows indicating the direction of movement. In order to solve the puzzle, the lines must extend beyond the borders of the box. The four connecting lines are drawn as follows. Line 1 begins at the top left dot, proceeds through the middle and right dots of the top row, and extends to the right beyond the border of the square. Line 2 extends from the end of line 1, through the right dot of the horizontally centered row, through the middle dot of the bottom row, and beyond the square’s border, ending in the space beneath the left dot of the bottom row. Line 3 extends from the end of line 2 upwards through the left dots of the bottom, middle, and top rows. Line 4 extends from the end of line 3 through the middle dot in the middle row and ends at the right dot of the bottom row.

Many different strategies exist for solving problems. Typical strategies include trial and error, applying algorithms, and using heuristics. To solve a large, complicated problem, it often helps to break the problem into smaller steps that can be accomplished individually, leading to an overall solution. Roadblocks to problem solving include a mental set, functional fixedness, and various biases that can cloud decision making skills.

Review Questions

A specific formula for solving a problem is called ________.

  • an algorithm
  • a heuristic
  • a mental set
  • trial and error

A mental shortcut in the form of a general problem-solving framework is called ________.

Which type of bias involves becoming fixated on a single trait of a problem?

  • anchoring bias
  • confirmation bias
  • representative bias
  • availability bias

Which type of bias involves relying on a false stereotype to make a decision?

Critical Thinking Questions

What is functional fixedness and how can overcoming it help you solve problems?

Functional fixedness occurs when you cannot see a use for an object other than the use for which it was intended. For example, if you need something to hold up a tarp in the rain, but only have a pitchfork, you must overcome your expectation that a pitchfork can only be used for garden chores before you realize that you could stick it in the ground and drape the tarp on top of it to hold it up.

How does an algorithm save you time and energy when solving a problem?

An algorithm is a proven formula for achieving a desired outcome. It saves time because if you follow it exactly, you will solve the problem without having to figure out how to solve the problem. It is a bit like not reinventing the wheel.

Personal Application Question

Which type of bias do you recognize in your own decision-making processes? How has this bias affected how you’ve made decisions in the past, and how can you use your awareness of it to improve your decision-making skills in the future?



  9. 7.3 Problem Solving

    A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. ... Representative bias describes a faulty way of thinking, ...

  10. Heuristics

    2. Next. A heuristic is a mental shortcut that allows an individual to make a decision, pass judgment, or solve a problem quickly and with minimal mental effort. While heuristics can reduce the ...

  11. 17.3: The Representativeness Heuristic

    Here the representative heuristic leads us to judge things that strike us as representative or normal to be more likely than things that seem unusual. Specificity Revisited . We have seen that the more detailed and specific a description of something is, the less likely that thing is to occur. The probability of a quarter's landing heads is 1 ...

  12. Biases and Errors in Thinking

    Representativeness Heuristic. The representativeness heuristic is when you judge something based on how they match your prototype. This leads us to ignore information and is honestly the stem of stereotypes. ... The candle problem is a cognitive performance test measuring the influence of functional fixedness on problem-solving tasks ...

  13. Representativeness Heuristic: Understanding Decision Making Bias

    We use heuristics when we make a decision or solve a problem by using a rule of thumb strategy in order to shorten the process. Representativeness- Representativeness, in terms of problem solving and decision making, refers to an existing group or set of circumstance that exists in our minds as most similar to the problem or decision at hand.

  14. A brief history of heuristics: how did research on heuristics evolve

    In relation to the representativeness heuristic, Kahnemann ... Simon HA, Newell A (1958) Heuristic problem solving: the next advance in operations research. Oper Res 6(1):1-10.

  15. What are heuristics? Representative vs. availability heuristics

    In this way, representativeness is basically stereotyping. While availability has more to do with memory of specific instances, representativeness has more to do with memory of a prototype, stereotype or average. Let me try to make this clear with some examples: "Linda the bank teller" - this is one of the most famous examples.

  16. Representativeness Heuristic: Definition & Examples

    The representativeness heuristic often occurs when we make quick, snap judgments without taking the time to consider all available information carefully. By being aware of the representativeness heuristic and using these strategies to avoid it, you can make more informed and accurate judgments in a wide range of situations. Reference

  17. 7.3 Problem-Solving

    A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. A "rule of thumb" is an example of a heuristic.

  18. Heuristic

    A heuristic (/ h j ʊ ˈ r ɪ s t ɪ k /; from Ancient Greek εὑρίσκω (heurískō) 'to find, discover'), or heuristic technique, is any approach to problem solving that employs a practical method that is not fully optimized, perfected, or rationalized, but is nevertheless sufficient for reaching an immediate, short-term goal or ...

  19. 8.2 Problem-Solving: Heuristics and Algorithms

    Algorithms. In contrast to heuristics, which can be thought of as problem-solving strategies based on educated guesses, algorithms are problem-solving strategies that use rules. Algorithms are generally a logical set of steps that, if applied correctly, should be accurate. For example, you could make a cake using heuristics — relying on your ...

  20. Problem Solving

    A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. ... Representative bias describes a faulty way of thinking, ...

  21. Problem Solving And Decision Making

    A heuristic is a rule of thumb, a strategy, or a mental shortcut that generally works for solving a problem (particularly decision-making problems). It is a practical method, one that is not a hundred per cent guaranteed to be optimal or even successful, but is sufficient for the immediate goal. Working backwards is a useful heuristic in which ...

  22. Problem Solving

    A heuristic is another type of problem solving strategy. While an algorithm must be followed exactly to produce a correct result, a heuristic is a general problem-solving framework (Tversky & Kahneman, 1974). You can think of these as mental shortcuts that are used to solve problems. ... Representative bias describes a faulty way of thinking, ...

  23. Psych 202 Ch 6 Concept Checks Flashcards

    Study with Quizlet and memorize flashcards containing terms like Explain how functional fixedness and mental set are examples of the negative impact of past experience., Explain why we tend to use heuristics and not algorithms, even though algorithms gaurantee correct solutions., Explain how the anchoring and adjustment heuristic may lead you ...