Article from the book Towards a New Enlightenment? A Transcendent Decade

Behavioral Economics: Past, Present, and Future


Its position consolidated by the award of the 2017 Nobel Prize in Economics to behavioral economist Richard Thaler, behavioral economics is enjoying a golden age. It combines economists' powerful analytical tools with rich evidence about real human behavior drawn from the other social sciences, especially psychology and sociology. This article explores the evolution of behavioral economics and some key behavioral insights about incentives and motivations; social influences, including social learning, peer pressure, and groupthink; heuristics and biases; decision-making under risk and uncertainty; present bias and procrastination; and nudging policy tools. Together these illustrate how behavioral economics provides businesses and policy makers with a rich understanding of how real people think, choose, and decide.

Today, it seems as though everyone is talking about behavioral economics. Governments are embedding behavioral insights into policy. Commercial businesses are drawing on them to inform their marketing strategies. Lessons from behavioral economics are informing relationships between employers and employees. Even in the silos of academia, applied research teams—most obviously other social scientists, but also natural scientists, from neuroscientists through to behavioral ecologists, computer scientists, and engineers—are keen to bring behavioral economists into their multidisciplinary teams so that they can connect their research with insights from behavioral economics. Why? Because behavioral economics combines a unique collection of insights from social science. It brings together economists' powerful analytical tools, traditionally applied in a restricted way to unraveling the economic incentives and motivations driving us all. But it also addresses the fundamental flaw in non-behavioral economics: its highly restrictive conception of rationality, based on the assumption that agents can easily apply mathematical tools to identify the best solutions for themselves or their businesses. Herbert Simon made some early progress in re-conceptualizing rationality in economics via his concept of "bounded rationality," that is, rationality bounded by constraints on the information available or on cognitive processing ability (Simon, 1955). Modern behavioral economists have taken this further by bringing in rich insights from psychology to capture how economic incentives and motivations are changed, often fundamentally, by psychological influences. Neither the economics nor the psychology can stand alone. Without economics, the psychology lacks analytical structure and direction, especially in describing everyday decision-making. Without the psychology, economics lacks external consistency and intuitive appeal. Together, the subjects are uniquely insightful. Together, they enable us to understand what real people think and how they choose and decide in ways that no single academic discipline has managed before, generating not only new theoretical insights but also new practical and policy insights that, at their best, have the power to change livelihoods, prosperity, and well-being across a range of dimensions.

The Past

Behavioral economics may seem to many observers to be a new thing, for better or worse. Most of the excitement about behavioral economics has bubbled up in the past ten or so years. The first milestone was the award of the 2002 Nobel Prize jointly to economic psychologist Daniel Kahneman, alongside Vernon L. Smith—an experimental economist whose insights and tools inspired behavioral economists even though experimental economics is not behavioral economics. The second was the award of the 2017 Nobel to behavioral economist Richard Thaler, who has written colorfully about his contributions in his book Misbehaving (Thaler, 2016). Thaler is most famous for his work on behavioral finance and behavioral public policy—commonly known as "nudging," named after his best-selling book with legal scholar Cass Sunstein—a book celebrating its tenth anniversary this year (Thaler and Sunstein, 2008). These thinkers have had enormous influence on modern policy—not least through advising the policy-making teams of the then US President Barack Obama and the then UK Prime Minister David Cameron. The establishment of a "nudge" unit in Cameron's Cabinet Office spawned the growth of similar units around the world—from Australia to Lebanon to Mexico, to name just a few.

The progress of behavioral economics between the two milestones of the 2002 and 2017 Nobel Prizes mirrors the emergence of behavioral economics from a largely theoretical subject into one with enormous real-world policy relevance—for public and commercial policy makers alike. It also has much to offer ordinary people in understanding some of the decision-making challenges they face. But behavioral economics is a much older discipline than these two twenty-first-century milestones might suggest. Some could argue that all economics should be about behavior if behavior is what drives choices and decision-making. Economics is the study of decisions after all. But, from the nineteenth century onward, economics started to move away from behavior as it might be richly understood in terms of the psychology of choice toward observed choices as a measure of revealed preferences. This story of preferences revealed through our choices can only be kept neat and simple if economists assume that economic decision-makers follow strict behavioral rules—specifically, that consumers aim to maximize their satisfaction and businesses aim to maximize profits. In mainstream economics, consumers and firms are assumed to do this in the best way they can by implementing mathematical rules to identify the best solutions. Modern economists, in the process of building these neat mathematical models that captured these behavioral rules, stripped out all the socio-psychological complexities of real-world decision-making.

Historically, however, and before modern economics mathematicized the analysis of choice, economists spent plenty of time thinking about how the incentives and motivations that are the stuff of economic analysis are affected by psychological influences, going all the way back to Adam Smith. Adam Smith is popularly associated with his defense of free markets in his 1776 masterpiece An Inquiry into the Nature and Causes of the Wealth of Nations, in which he advocates that the "invisible hand" of the price mechanism should be allowed to operate without government intervention. But in his 1759 masterpiece—The Theory of Moral Sentiments—he also wrote extensively about sympathy and other social emotions that drive our interactions with those around us—key insights echoed in modern behavioral economics research.

The Present

We have said a lot about where behavioral economics comes from without saying too much about what behavioral economists actually do. In understanding this more deeply, we can look at a range of themes which behavioral economists explore to illustrate the power and relevance of their insights. Behavioral economics is now an enormous literature and doing justice to it all in one chapter is impossible, but a few key themes dominate and we will focus here on those insights from behavioral economics that are most powerful and enduring in illuminating real-world decision-making problems. These include behavioral analyses of incentives/motivations; social influences; heuristics, bias, and risk; time and planning; and impacts of personality and emotions on decision-making (see Baddeley, 2017 and 2018b, for detailed surveys of these and other behavioral economics literatures).

Incentives and Motivations

As we noted above, economics is essentially about incentives and motivations—traditionally focusing on money as an incentive, for example in explaining a decision to work as a balancing act in which wages earned persuade workers to give up their leisure time. Psychologists bring a broader understanding of motivation into behavioral economics—specifically by disentangling extrinsic motivations from intrinsic motivations. Extrinsic motivations include all the rewards and punishments external to us—money is the most obvious, but physical punishments would be another example. Alongside these are intrinsic motivations—such as pride in a job done well, dutifulness, and intellectual engagement. Some of the most famous behavioral experiments were conducted by psychologist Dan Ariely and his team (see Ariely, 2008). Some of these studies show that participants' decisions to contribute to a charity or public good are partly driven by social factors: people are more generous when their donations are revealed than when information about their donations is kept private. Intrinsic motivations drive effort, often as much as, and sometimes more than, external monetary incentives. Students, for example, are often prepared to work harder for an intellectual challenge than for money. This illustrates that we are not driven just by external incentives and disincentives—whether these be money, physical rewards/punishments or social consequences. Chess, computer games, and also physical challenges associated with sport are all things that engage people's attention and enthusiasm even without monetary rewards.

Disentangling intrinsic and extrinsic motivations is not straightforward, however. There is the added complication that extrinsic incentives “crowd out” intrinsic motivations. A classic study of this was conducted by behavioral economists Uri Gneezy and Aldo Rustichini. An Israeli nursery school was struggling with the problem of parents arriving late to pick up their children so they instituted a system of fines for latecomers. The fines had a perverse effect, however, in increasing the number of late pickups by parents rather than reducing them. Gneezy and Rustichini attributed this to a crowding-out problem: introducing the fine crowded-out parents’ incentives to be dutiful in arriving on time. Instead, parents were interpreting the fine as a price: in paying a fine they were paying for a service and so it became an economic exchange in which dutifulness in arriving punctually became less relevant (Gneezy and Rustichini, 2000).

A specific set of motivations that behavioral economists have spent a lot of time exploring are the social motivations, and these are illustrated most extensively in what is possibly the most famous behavioral experimental game: the Ultimatum Game, devised by Werner Güth and colleagues (Güth et al., 1982). In the Ultimatum Game, the experimenter gives an experimental participant a sum of money to distribute—let's say they give Alice $100 and ask her to propose giving a portion of this money to a second experimental participant: Bob. Bob is instructed to respond by either accepting Alice's offer or rejecting it. If Bob rejects Alice's offer then neither of them gets anything. Standard economics predicts that Alice will be self-interested when she plays this game and will aim to make Bob the lowest offer that she thinks she can get away with. In this case she would offer Bob $1, and if Bob is similarly rational he would accept $1 because $1 is better than $0. In reality, however, in a very extensive range of studies of the Ultimatum Game—including studies across cultures, socioeconomic characteristics, and even experiments with monkeys playing the game for juice and fruit—the proposers are remarkably generous in offering much more than the equivalent of $1. On the other hand, the responders will often reject even relatively generous offers. What is going on? Behavioral economists explain these findings, and other findings from similar games, in terms of our social preferences. We do not like seeing unequal outcomes—we experience inequity aversion, and we experience it in two forms: disadvantageous inequity aversion, and advantageous inequity aversion. Disadvantageous inequity aversion is when we do not want to suffer inequity ourselves. In the Ultimatum Game, Bob will suffer disadvantageous inequity aversion when Alice makes a mean offer—and this may lead him to reject offers of relatively large amounts. On the other hand, advantageous inequity aversion is about not wanting to see others around us treated unfairly—so Alice will not make the minimum possible offer to Bob because she reasons that that would be unfair. Unsurprisingly, we worry much more about disadvantageous inequity aversion than advantageous inequity aversion, but both have been demonstrated—across a large number of experimental studies—to have a strong influence on our tendencies toward generosity.
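
One widely used way of formalizing these social preferences, drawn from the broader behavioral literature rather than from this article, is a Fehr-Schmidt-style inequity-aversion utility function. The minimal Python sketch below applies it to the responder's decision in a $100 Ultimatum Game; the parameter values are purely illustrative assumptions.

```python
# Minimal sketch (not from the article): a Fehr-Schmidt-style inequity-aversion
# utility for the responder in a $100 Ultimatum Game. Parameter values are
# illustrative assumptions, not estimates.

def responder_utility(own, other, alpha=2.0, beta=0.5):
    """Utility = own payoff, minus a penalty for disadvantageous inequity
    (weight alpha) and a smaller penalty for advantageous inequity (weight beta)."""
    disadvantage = max(other - own, 0)   # Bob gets less than Alice
    advantage = max(own - other, 0)      # Bob gets more than Alice
    return own - alpha * disadvantage - beta * advantage

pie = 100
for offer in (1, 10, 25, 40, 50):
    u_accept = responder_utility(offer, pie - offer)
    u_reject = 0.0                       # rejection leaves both with nothing
    decision = "accept" if u_accept >= u_reject else "reject"
    print(f"Offer ${offer:>2}: utility {u_accept:6.1f} -> {decision}")
```

With these assumed weights the responder turns down anything much below $40; in real experiments the threshold varies widely, but the point is simply that once payoff comparisons enter the utility function, rejecting free money can be a coherent choice.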

Queen Elizabeth II of the United Kingdom and Prince Philip, Duke of Edinburgh, during a visit to the remodeled King Edward Court Shopping Centre in Windsor, England, February 2008

Social Influences

Linking to these insights around social preferences, behavioral economists have explored some other ways in which social influences affect our decisions and choices. Broadly speaking, these social influences can be divided into informational influences and normative influences (Baddeley, 2018a). Informational influences are about how we learn from others. In situations where we do not know much or are facing a complex and uncertain series of potential outcomes, it makes sense for us to look at what others are doing, inferring that they may know better than we do about the best course of action. Economists have analyzed this phenomenon in terms of updating our estimates of probabilities—and a classic example outlined by Abhijit Banerjee is restaurant choice (Banerjee, 1992). We are new to a city—perhaps visiting as tourists—and we see two restaurants, both of which look similar but we have no way of knowing which is better. We see that one is crowded and the other is empty and—perhaps counter-intuitively—we do not pick the empty restaurant, which might be more comfortable and quieter. Instead, we pick the crowded restaurant. Why? Because we infer that all those people who have chosen the crowded restaurant ahead of the empty restaurant know what they are doing, and we follow their lead—using their actions (the restaurant they choose) as a piece of social information. We respond to these informational influences in a rational way—perhaps not the extreme form of rationality that forms the cornerstone of a lot of economics, but nonetheless sensible—the outcome of a logical reasoning process.
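
Banerjee's (1992) model is a sequential story about information cascades; the toy sketch below is a much simpler Bayesian illustration of the same logic, and its numbers (the prior and the assumed accuracy of each diner's information) are illustrative assumptions rather than anything from his paper.

```python
# Illustrative Bayesian sketch (not Banerjee's 1992 model itself): treat each
# diner already seated in a restaurant as a noisy signal that it is the better
# one, and update a 50/50 prior accordingly.

def prob_A_better(diners_in_A, diners_in_B, prior=0.5, signal_accuracy=0.6):
    """Posterior probability that restaurant A is better, assuming each observed
    diner independently picked the truly better restaurant with probability
    signal_accuracy (an illustrative assumption)."""
    p, q = signal_accuracy, 1 - signal_accuracy
    likelihood_if_A_better = (p ** diners_in_A) * (q ** diners_in_B)
    likelihood_if_B_better = (q ** diners_in_A) * (p ** diners_in_B)
    numerator = prior * likelihood_if_A_better
    return numerator / (numerator + (1 - prior) * likelihood_if_B_better)

# A crowded restaurant (12 diners) versus a nearly empty one (1 diner):
print(round(prob_A_better(12, 1), 3))   # ~0.99 -> follow the crowd
```

Even with only mildly informative signals, a dozen diners versus one is enough to make the crowded restaurant look like the overwhelmingly better bet.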

Normative social influences are less obviously rational and are about how we respond to pressures from the groups around us. In explaining these social pressures, behavioral economics draws on key insights from social psychologists, such as Stanley Milgram and Solomon Asch, and their colleagues. Stanley Milgram created controversy with his electric shock experiments (Milgram, 1963). Experimental participants were instructed by an experimenter to inflict what they thought were severe electric shocks on other people hidden from view. The participants in Milgram's experiments could still hear the people who were supposedly receiving the shocks. In fact, these people were just actors, but the experimental participants did not know this, and a significant number of the participants (not all) were prepared to inflict what they were told were life-threatening levels of shock: the actors pretended to experience severe pain, screaming and, in the worst cases, going worryingly quiet after the shocks. Milgram explained his participants' readiness to act in these apparently ruthless ways as evidence that we are susceptible to obedience to authority. We are inclined to do what we are told, especially when we confront physically and psychologically challenging scenarios. Milgram's evidence was used in part to explain some of the atrocities associated with the Holocaust—in an attempt to answer the puzzle of why so many otherwise ordinary civilians not only observed but also actively engaged in atrocities.

Another influential set of social psychology experiments that have informed behavioral economists include Solomon Asch’s experiments (Asch, 1955). He devised a line experiment to test for conformity: experimental participants were asked to look at a picture of a line and then match it with another line of the same length. This was an easy task, but Asch and his colleagues added a complication by exposing their participants to other people’s guesses. Unbeknownst to their participants, the groups deciding about the line lengths in fact included a large number of experimental confederates instructed to lie about the length of the lines. To illustrate with a simple example: imagine that twenty participants are gathered together to complete the line task but nineteen are in cahoots with the experimenter and there is only one genuine participant. If the others all came up with a stupid, wrong answer to this simple question about lines, what would the twentieth, genuine participant do? Asch and his colleagues found that many of the genuine participants (though, tellingly, not all) changed their minds away from the correct answer to give an obviously wrong answer when they saw others making a wrong guess. In other words, many participants seemed inclined to ensure that their answers conformed with the answers from the other participants in their group, without considering that these participants might be mistaken, or lying. The emotional responses of the participants were variable. Those who stuck with their original answers did so confidently. The conformists who changed their answers to fit with the group varied—some experiencing distressing self-doubt; others blaming other participants for their mistakes. Why would a person change their mind to what otherwise would seem like an obviously wrong answer? This experiment does not resolve the rational versus irrational question. It may seem irrational to give the wrong answer just because you see others getting it wrong. The Nobel Prize-winning economist Robert Shiller came up with another explanation, consistent with rational decision-making: perhaps the real participants were thinking that it is much more likely that their single decision was wrong than that nineteen others were wrong. They were balancing the probabilities and coming to the conclusion that the chances that such a large number of other people could be wrong were small and so it made sense to follow them (Shiller, 1995).
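
Shiller's point can be made concrete with a rough, purely illustrative calculation; the accuracy figure below is an assumption, not a number from Asch's or Shiller's work. If everyone in the room, including you, independently gets such an easy task right most of the time, it is vastly more likely that you alone are mistaken than that nineteen others all are.

```python
# Purely illustrative sketch of the "balance of probabilities" argument.
# Assume each person, including you, independently answers an easy perceptual
# task correctly 95% of the time (an assumed figure, not from the experiments).
p_right = 0.95
p_you_alone_wrong = 1 - p_right                       # 0.05
p_nineteen_others_all_wrong = (1 - p_right) ** 19     # ~1.9e-25
# A naive updater who assumes the others answer independently would conclude
# that the group is almost certainly right and follow it.
print(p_you_alone_wrong, p_nineteen_others_all_wrong)
```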

Social influences can be divided into informational and normative influences. The former are about how we learn from others, while the latter are about how we respond to pressures from groups around us

More generally, many of us use others’ choices and actions to guide our own choices and actions—such as in the restaurant example above. When we copy other people we are using a rule of thumb—a simple decision-making tool that helps us to navigate complex situations, especially situations characterized by information overload and choice overload. In today’s world, the ubiquity of online information and reviews is another way in which we use information about others’ choices and actions as a guide. For example, when we are buying a new computer or booking a hotel, we will find out what others have done and what others think before deciding for ourselves. In these situations, when thinking through lots of information and many choices is a cognitive challenge, it makes sense to follow others and adopt what behavioral economists would call a herding “heuristic.” Following the herd is a quick way to decide what to do. This brings us to the large and influential literature on heuristics and bias, developing out of Daniel Kahneman’s and his colleague Amos Tversky’s extensive experimental work in this field.

Heuristics, Bias, and Risk

What are heuristics? Heuristics are the quick rules of thumb we use to simplify our everyday decision-making. They often work well, but sometimes they create biases: in some situations, using heuristics leads us into systematic mistakes. The psychologist Gerd Gigerenzer makes the important observation, however, that heuristics are often good guides to decision-making because they are fast and frugal, and they work especially well if people are given simple techniques that enable them to use heuristics more effectively (Gigerenzer, 2014).

When thinking through lots of information and many choices is a cognitive challenge, it makes sense to follow others and adopt what behavioral economists would call a herding “heuristic”

If you Google behavioral bias today, you will get a long and unstructured list, so, in devising a taxonomy of heuristics and their associated biases, a good place to start is Daniel Kahneman and Amos Tversky's taxonomy of heuristics—as outlined in their 1974 Science paper (Tversky and Kahneman, 1974) and summarized for a lay audience in Kahneman (2011). Kahneman and Tversky identified three categories of heuristic, based on evidence from an extensive range of experiments they had conducted: the availability, representativeness, and anchoring and adjustment heuristics.

The availability heuristic is about using information that we can readily access—either recent events, first moments, or emotionally vivid or engaging events. Our memories of these types of highly salient information distort our perceptions of risk. A classic example is the impact that vivid and sensationalist news stories have on our choices, linking to a specific type of availability heuristic—the affect heuristic. For example, vivid accounts of terrible plane and train crashes stick in our memory leading us to avoid planes and trains when, objectively, we are far more likely to be run over by a car when crossing the road, something we do every day without thinking too hard about it. We misjudge the risk—thinking plane and train crashes are more likely than pedestrian accidents—and this is because information about plane crashes is far more available, readily accessible, and memorable for us.

The representativeness heuristic is about judgments by analogy—we judge the likelihood of different outcomes according to their similarity to things we know about already. In some of their experiments, Kahneman and Tversky asked their participants to read a person’s profile and judge the likelihood that this profile described a lawyer versus an engineer. They discovered that many of their participants judged the likelihood that a person described was a lawyer or an engineer according to how similar the profile was to their preconceptions and stereotypes about the characteristic traits of lawyers versus engineers.

Our aversion to inequity has a powerful influence on our tendency to be generous
A volunteer from the Spanish NGO Proactiva Open Arms helps an immigrant rescued thirty-two kilometers off the Libyan coast to disembark at the Italian port of Crotone in March 2017

Anchoring and adjustment is about how we make our decisions relative to a reference point. For example, when participants in Kahneman and Tversky’s experiments were asked to guess the number of African nations in the United Nations, their guesses could be manipulated by asking them first to spin a wheel to give them a number. Those who spun a lower number on the wheel also guessed a smaller number of African countries in the UN.

Another seminal contribution from Kahneman and Tversky emerges from their analyses of heuristics and bias: their own behavioral theory of risk, which they call "prospect theory" (Kahneman and Tversky, 1979). They devised prospect theory on the basis of a series of behavioral experiments which suggested some fundamental flaws in expected utility theory—economists' standard theory of risk. The differences between these two approaches to understanding risky decision-making are complex, but one of the fundamental features of expected utility theory is that it assumes that people's risk preferences are stable: if someone is a risk-taker then they are a risk-taker. They will not shift their decisions if the risky choices they are offered are framed in a different way. This contrasts with prospect theory, in which risk preferences do shift. People are more inclined, for example, to take risks to avoid losses, linking to a key insight from prospect theory: "loss aversion." Standard economics predicts that whether we are facing losses or gains, we decide in the same way according to the absolute magnitude of the impact for us. In prospect theory, however, people confront losses and gains differently—we worry much more about losses than we do about gains, and one facet of this is that we will take bigger risks to avoid losses than we will to accrue gains. This links to another key feature of prospect theory: we make decisions relative to a reference point, most often the status quo, our starting point. This feature connects directly to the anchoring and adjustment heuristic, which we explored above.
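
Formally, the heart of prospect theory is a value function defined over gains and losses relative to the reference point: concave for gains, convex and steeper for losses. The minimal sketch below illustrates the idea; the curvature and loss-aversion parameters are commonly cited later estimates, used here only for illustration rather than taken from the 1979 paper.

```python
# Minimal sketch of a prospect-theory-style value function. Outcomes are coded
# as gains or losses relative to a reference point, and losses loom larger than
# gains. Parameter values are commonly cited estimates, used purely for
# illustration.

def value(x, curvature=0.88, loss_aversion=2.25):
    """Subjective value of a change x relative to the reference point (x = 0)."""
    if x >= 0:
        return x ** curvature                       # concave over gains
    return -loss_aversion * ((-x) ** curvature)     # steeper over losses

# Losing $100 hurts roughly twice as much as gaining $100 pleases:
print(round(value(100), 1))    # ~57.5
print(round(value(-100), 1))   # ~-129.5
```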

Time and Planning

A whole other swathe of behavioral economics literature taps into some important insights about our ability to plan our choices and decisions over time. Standard economics predicts that we form stable preferences about time, as we do for risk. This means that it does not matter what time horizon we are considering. If we are impatient, we are impatient, no matter what the context. Behavioral economists overturn this understanding of how we plan and make decisions over time, building on the substantial evidence from psychological experiments that we are disproportionately impatient in the short term—we suffer from what behavioral economists call present bias. We overweight benefits and costs that come sooner relative to those that come later. For example, if we are choosing between spending on our credit card today or tomorrow and comparing this choice with spending on our credit card in a year or a year and a day, then standard economics predicts that our choices should be time consistent: if we prefer to spend today then we should prefer to spend in a year; and if we prefer to spend in a year and a day, then we should also prefer to spend tomorrow. But behavioral experiments show that we are disproportionately impatient in the short term relative to the longer term: we prefer to spend today over tomorrow, but when planning for the future we prefer to spend in a year and a day rather than in a year. We overweight immediate rewards. Behavioral economists such as David Laibson have captured this within theories of discounting that differ from the exponential discounting of standard economics—specifically in the form of hyperbolic discounting (Laibson, 1997). This is more than an academic curiosity because it has significant implications in our everyday lives—in explaining everything from procrastination to addiction. Present bias can explain why we delay actions that are costly or unpleasant. It can also explain a range of bad habits, or lack of good habits. A telling study was conducted by economists Stefano DellaVigna and Ulrike Malmendier on gym-going habits. Looking at a dataset from a real-world gym, they found that some people signed up for annual contracts and then attended the gym only a handful of times—even though they had been offered pay-as-you-go membership as an alternative (DellaVigna and Malmendier, 2006). Over the course of a year and sometimes longer, these very occasional gym-goers were effectively paying enormous sums per visit when they would not have had to if they had more accurately forecast their future behavior when they signed up for the gym. This is difficult to explain in terms of standard economic analysis, but once behavioral economists allow for present bias, this behavior becomes explicable. Gym-goers plan at the outset to go to the gym many times, but they change their plans when confronted with the immediate choice between going to the gym and another (more) enjoyable activity.
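
A small numerical sketch shows how this kind of discounting produces preference reversals. The beta-delta ("quasi-hyperbolic") form below is the version commonly associated with Laibson (1997); the reward sizes and parameter values are illustrative assumptions, and the sketch uses the standard smaller-sooner versus larger-later framing.

```python
# Illustrative sketch of present bias using beta-delta ("quasi-hyperbolic")
# discounting, the form commonly associated with Laibson (1997).
# beta < 1 penalizes everything that is not immediate; delta is ordinary
# per-period discounting. All values below are illustrative assumptions.

def discounted(amount, days, beta=0.7, delta=0.999):
    """Present value of a reward received `days` days from now."""
    if days == 0:
        return amount               # immediate rewards are not discounted
    return beta * (delta ** days) * amount

small_soon, large_later = 100, 110
# Immediate choice: $100 now vs $110 tomorrow -> take the $100 now.
print(discounted(small_soon, 0), round(discounted(large_later, 1), 1))               # 100 vs ~76.9
# The same trade-off a year out: $100 in 365 days vs $110 in 366 -> wait for $110.
print(round(discounted(small_soon, 365), 1), round(discounted(large_later, 366), 1))  # ~48.6 vs ~53.4
```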

Behavioral experiments show that we are disproportionately impatient in the short term relative to the longer term: we prefer to spend today over tomorrow, but when planning for the future we prefer to spend in a year and a day rather than in a year

Present bias can also explain why we overeat and struggle so hard to give up nicotine, alcohol, and other drugs. There are nuances too—in the way some of us deal with our tendency toward present bias. More sophisticated decision-makers realize that they suffer present bias so they bind their future selves, using what behavioral economists call commitment devices. For example, they might freeze their credit card in a block of ice to stop their future self being tempted into impulsive spending splurges. New businesses have developed around these commitment devices—including online self-tracking tools such as Beeminder. When you sign up for Beeminder, you set out your goals, and if you fail to meet those goals, Beeminder charges you for your transgression.

A key feature of present bias, and other biases, is that we are not all equally susceptible. Some of us are better at self-control than others and there is a large and growing literature on individual differences, including personality traits, and the role these play in explaining our different susceptibilities to behavioral bias. Learning lessons from psychologists, behavioral economists are using personality tests to help to explain some differences in susceptibility to biases such as present bias. One set of tests now widely used by behavioral economists is the Big Five OCEAN battery—where OCEAN stands for Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. In their analyses of the impact of individual differences on economic success, Lex Borghans, Nobel Prize-winner James Heckman, and their colleagues found, for example, that conscientiousness is a key trait strongly correlated with success in life (Borghans et al., 2008). This confirms earlier findings from psychologist Walter Mischel—famous for the Marshmallow Experiment—which studied children's capacity to resist temptation when choosing between one marshmallow now and two if they waited: those children who were better able to exert self-control in resisting temptation were also more likely to succeed later in life (Mischel, 2014).

The Future: Nudging and Beyond

All these insights from behavioral economics are now changing mainstream economics, and also having a strong impact on policy-making via nudging, as highlighted in the introduction. So, are there new horizons for behavioral economics, or do we know all we need to know? For nudging, more evidence is needed to capture how robust and scalable nudging policies really are—and there has been progress in this direction. Another key area that has been largely neglected until recently is behavioral macroeconomics. British economist John Maynard Keynes pioneered the analysis of psychological influences, particularly social conventions, in financial markets and the implications for macroeconomies more generally—see for example Keynes (1936). Some of Keynes's insights are being reimagined today, for instance by Nobel Prize-winning economists including George Akerlof and Robert Shiller (see Akerlof, 2002; and Akerlof and Shiller, 2009). These insights were complemented by American economist George Katona's macroeconomic insights, especially his analyses of consumer sentiment (Katona, 1975). Katona's influence endures through the University of Michigan's Consumer Sentiment Index—outputs from which are still widely used today (see for example Curtin, 2018). A significant hurdle for behavioral macroeconomics, however, is that it is difficult to aggregate coherently into a macroeconomic model the complexities of behavior that behavioral economists have identified in microeconomic contexts. New methodologies are coming on board, however, such as agent-based modeling and machine learning. If these new methods can be applied successfully in developing coherent behavioral macroeconomic models, then behavioral economics will generate an even more exciting and innovative range of insights in the forthcoming decade than it has in the last.
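
To give a flavor of what an agent-based approach looks like, here is a toy sketch, purely illustrative and with made-up parameters, in which agents mostly copy a randomly encountered peer and occasionally change their minds independently. Even this crude rule can generate swings in the aggregate share of optimists of the kind that representative-agent models struggle to capture.

```python
# Toy agent-based sketch (purely illustrative, made-up parameters): agents mostly
# copy a randomly encountered peer and occasionally change their minds at random.
import random

random.seed(0)
n_agents = 100
optimistic = [True] * 50 + [False] * 50   # start with sentiment evenly split
epsilon = 0.005                           # chance of an independent change of mind

for period in range(10):
    for _ in range(1000):                 # many pairwise meetings per period
        i = random.randrange(n_agents)
        if random.random() < epsilon:
            optimistic[i] = random.random() < 0.5   # idiosyncratic switch
        else:
            j = random.randrange(n_agents)
            optimistic[i] = optimistic[j]           # copy a randomly met agent
    share = sum(optimistic) / n_agents
    print(f"period {period}: share optimistic = {share:.2f}")
```

Aggregate sentiment in this toy world is an emergent property of many interacting agents rather than the choice of a single optimizing representative, which is the basic promise of agent-based behavioral macroeconomics.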

Pedestrians on a crosswalk are reflected in the facade of a mall in Tokyo’s Omotesando shopping district, March 2013

Bibliography

—Akerlof, George. 2002. “Behavioural macroeconomics and macroeconomic behavior.” American Economic Review 92(3): 411–433.
—Akerlof, George, and Shiller, Robert. 2009. Animal Spirits: How Human Psychology Drives the Economy and Why It Matters for Global Capitalism. Princeton: Princeton University Press.
—Ariely, Dan. 2008. Predictably Irrational – The Hidden Forces that Shape Our Decisions. New York: Harper Collins.
—Asch, Solomon. 1955. “Opinions and social pressure.” Scientific American 193(5): 31–35.
—Baddeley, Michelle. 2017. Behavioural Economics: A Very Short Introduction. Oxford: Oxford University Press.
—Baddeley, Michelle. 2018a. Copycats and Contrarians: Why We Follow Others… and When We Don’t. London/New Haven: Yale University Press.
—Baddeley, Michelle. 2018b. Behavioural Economics and Finance (2nd edition). Abingdon: Routledge.
—Banerjee, Abhijit. 1992. “A simple model of herd behavior.” Quarterly Journal of Economics 107(3): 797–817.
—Borghans, Lex, Duckworth, Angela Lee, Heckman, James J., and ter Weel, Bas. 2008. “The economics and psychology of personality traits.” Journal of Human Resources 43(4): 972–1059.
—Curtin, Richard. 2018. Consumer Expectations: Micro Foundations and Macro Impact. New York/Cambridge: Cambridge University Press.
—DellaVigna, Stefano, and Malmendier, Ulrike. 2006. “Paying not to go to the gym.” American Economic Review 96(3): 694–719.
—Gigerenzer, Gerd. 2014. Risk Savvy: How to Make Good Decisions. London: Penguin Books.
—Gneezy, Uri, and Rustichini, Aldo. 2000. “A fine is a price.” Journal of Legal Studies 29(1): 1–17.
—Güth, Werner, Schmittberger, Rolf, and Schwarze, Bernd. 1982. “An experimental analysis of ultimatum bargaining.” Journal of Economic Behavior and Organization 3: 367–388.
—Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Strauss and Giroux.
—Kahneman, Daniel, and Tversky, Amos. 1979. “Prospect theory – an analysis of decision under risk.” Econometrica 47(2): 263–292.
—Katona, George. 1975. Psychological Economics. New York: Elsevier.
—Keynes, John Maynard. 1936. The General Theory of Employment, Interest and Money. London: Royal Economic Society/Macmillan.
—Laibson, David. 1997. “Golden eggs and hyperbolic discounting.” Quarterly Journal of Economics 112: 443–478.
—Milgram, Stanley. 1963. “Behavioral study of obedience.” Journal of Abnormal and Social Psychology. 67: 371–378.
—Mischel, Walter. 2014. The Marshmallow Test: Why Self-Control Is the Engine of Success. New York: Little, Brown and Company.
—Shiller, Robert. 1995. “Conversation, information and herd behavior.” American Economic Review 85(2): 181–185.
—Simon, Herbert. 1955. “A behavioral model of rational choice.” Quarterly Journal of Economics 69: 99–118.
—Thaler, Richard H. 2016. Misbehaving – The Making of Behavioural Economics. London: Allen Lane.
—Thaler, Richard H., and Sunstein, Cass. 2008. Nudge – Improving Decisions About Health, Wealth and Happiness. London/New Haven: Yale University Press.
—Tversky, Amos, and Kahneman, Daniel. 1974. “Judgment under uncertainty: Heuristics and biases.” Science 185: 1124–1131.
