Customer: “Waiter, there’s a fly in my soup…”
-Sesame Street
In the classic Sesame Street skit, “There’s a Fly in the Soup,” an irate customer argues with waiter Grover about a fly in his soup, gets the runaround, and is finally served the soup of the day: “Cream of Mosquito.” It’s a feature, not a bug.
To recap, our last installment on behavioral economics discussed Tversky and Kahneman’s Prospect Theory and various heuristics that some scholars have identified as irrational, with deleterious effects on optimal decision-making.
But what if these heuristics are not a bug but a feature, ones that ultimately help rather than hurt people’s ability to make good choices? After all, evolution equipped human beings to become masters of their respective universes.
A Fly in the Ointment
If you subscribe to the classical behavioral economics paradigm (as championed by Tversky, Kahneman, and their academic heirs, such as Cass Sunstein and Richard Thaler), decisions arrived at through heuristics, or “quick and dirty” assumptions about the world, are bugs in the firmware of the human operating system. In other words, heuristic thinking is inherently short-sighted and lacking in perceptual nuance.
Not necessarily so, says Gerd Gigerenzer, director emeritus of the Center for Adaptive Behavior and Cognition at the Max Planck Institute for Human Development in Berlin. Gigerenzer harkens back to Herbert Simon’s original idea of heuristics as a satisficing mechanism: humans naturally make “good enough” decisions that tick the most important boxes in their respective priority stacks. Satisficing, Gigerenzer argues, is an adaptive feature of human nature, deeply ingrained in our ability to make “fast and frugal” decisions under conditions of limited time and information. Heuristics-based decisions are thus inextricably linked to the social and technological contexts in which they arise.

In this view, risk aversion is not a flaw that produces missed opportunities, but a strategy for avoiding absolute ruin. Moreover, base rate neglect, the tendency to ignore how rare an event or condition actually is when weighing evidence about it (plane crashes, say, or positive results for a rare disease), can be corrected through the properly contextualized presentation of information. Gigerenzer’s natural frequencies training method has, for example, been used to teach medical students the difference between a test’s sensitivity and its specificity. Similarly, Gigerenzer and colleagues have found empirical evidence that experts tend to rely on intuition while amateurs focus on detailed analysis, further complicating the suggestion that heuristic thinking is necessarily flawed.
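Gigerenzer’s point about natural frequencies is easiest to see with a worked example. The numbers below are purely illustrative (not from any study he cites): a test with 90% sensitivity and 91% specificity, applied to a condition with a 1% base rate, still produces mostly false positives. Stated as conditional probabilities this surprises people; stated as counts of people, it is almost obvious.

```python
# Natural-frequencies style: reason about counts of people, not percentages.
# All numbers are illustrative assumptions.
population = 10_000
prevalence = 0.01       # 1% of people have the condition (the base rate)
sensitivity = 0.90      # P(test positive | condition)
specificity = 0.91      # P(test negative | no condition)

sick = int(population * prevalence)                   # 100 people
healthy = population - sick                           # 9,900 people
true_positives = int(sick * sensitivity)              # 90 of the 100 sick
false_positives = round(healthy * (1 - specificity))  # 891 of the 9,900 healthy

# Of everyone who tests positive, what fraction is actually sick?
ppv = true_positives / (true_positives + false_positives)
print(f"{true_positives} of {true_positives + false_positives} "
      f"positive tests are real (PPV = {ppv:.0%})")
```

Despite the test’s impressive-sounding accuracy, fewer than one in ten positives is a true case, and the count-based framing makes that visible without any Bayesian algebra.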
Contrast this view with classical economics, which supposes maximizing, or optimal goal seeking, as the quintessential human endeavor. Maximizing entails going to great lengths to uncover ultimate goals and requires a great deal of time and energy to make decisions. Behavioral economics in the mode of Kahneman and Tversky acknowledges that satisficing is often used in lieu of maximizing, but conceives of it as inescapably erroneous. Thus, somewhat paradoxically, behavioral economics betrays its ultimate belief in rational man as the default by treating satisficing strategies as inherently deficient.
Nudging Along
But what does it look like, in practical policy terms, to treat human behavior as adaptive rather than deficient?
Case in point: the controversial Nudge Units, made famous by Thaler and Sunstein’s popular science book Nudge. First modeled after the U.K.’s Behavioral Insights Team, so-called Nudge Units have elbowed their way into top advisory roles in governments everywhere from London to Tokyo and inform policy on diverse topics such as taxation, health, and energy usage. Broadly speaking, these units are tasked with influencing consumer and citizen behavior by shaping policy contexts and “nudging” individuals toward “good” behavior. For example, your state may make organ donation the default status for your driver’s license, requiring you to check an extra box to opt out of the program. Or in the realm of energy policy, your local utility may send you home energy reports comparing your electricity use to other homes on your street, hoping to spur electron guzzlers into conformity with their more frugal neighbors.
Who Nudges the Nudgers?
A common criticism of “nudging” is that it creates paternalistic institutional structures that intervene in consumer choice. As Gigerenzer suggests, meddling in choice architecture without understanding local contexts can create harmful unintended consequences. Consider, for example, a utility using dynamic pricing with surcharges at peak hours (typically 4-10pm, when people get home from work and turn on their electronics and appliances). Now consider an elderly homeowner dependent on a CPAP machine, whose unavoidable electricity usage is consequently caught up in these pricing mechanisms. Nudging has a tendency to hurt the most vulnerable, who do not have the means to defer their consumption.
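A toy calculation makes the mechanism concrete. The rates, wattage, and hours below are all hypothetical, but they show how a flat, unavoidable load like a CPAP machine gets charged by a peak window it cannot dodge:

```python
# Hypothetical time-of-use tariff: all rates and hours are assumptions.
OFF_PEAK_RATE = 0.12   # $/kWh off-peak (assumed)
PEAK_RATE = 0.35       # $/kWh during the 4pm-10pm peak window (assumed)

cpap_kw = 0.06         # ~60 W continuous draw, a plausible CPAP figure
# Machine runs 8pm to 6am: the first two hours fall inside the peak window.
hours_used = list(range(20, 24)) + list(range(0, 6))

def daily_cost(load_kw, hours):
    """Cost of running a constant load during the given clock hours."""
    cost = 0.0
    for h in hours:
        rate = PEAK_RATE if 16 <= h < 22 else OFF_PEAK_RATE
        cost += load_kw * rate
    return cost

monthly = 30 * daily_cost(cpap_kw, hours_used)
surcharge = 30 * cpap_kw * (PEAK_RATE - OFF_PEAK_RATE) * 2  # 2 peak hrs/night
print(f"~${monthly:.2f}/month, of which ~${surcharge:.2f} is pure peak surcharge")
```

The dollar amounts here are small because only one appliance is modeled, but the structural point scales with the whole household: the surcharge lands on whatever usage cannot be shifted, and a medical device cannot be shifted.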
Another, more existential problem is that nudging may not work at all. While requiring restaurants to print calorie counts on their menus may not be quite in the same ballpark as Big Brother, is shaming people for ordering a double cheeseburger really necessary? And here is the rub: does this heavy-handed approach not also undermine the intended incentives? A recent meta-analysis of more than 200 studies of nudge-type programs found little to no effect of behavioral interventions on desired outcomes once publication bias was accounted for. Put into context, Nudge Units may be little more than taxpayer-funded exercises in policy finger-painting.
Which brings us to our next and final criticism of nudge research: it’s riddled with fraud. A lengthy 2023 piece in The New Yorker provided an in-depth look at the data fabrication scandal engulfing the work of behavioral economists Dan Ariely and Francesca Gino. Ariely and Gino’s experiments suggested that signing an honesty pledge at the top, rather than the bottom, of a form led people to report information more accurately. Their now-retracted 2012 paper made a big splash in academic and policy wonk circles, with promising implications for improving honesty on survey responses, tax returns, and the like. The problem? Independent data sleuths discovered, through detailed study of the pair’s Excel sheets, that the data had been misleadingly manipulated, and in some instances outright fabricated. Yes, a paper on dishonesty was itself a fraud. Truth is stranger than fiction in the metaverse.
The scandal surrounding Ariely and Gino kicked off wider repercussions in academia, contributing to, though certainly not the sole source of, the “replication crisis.” Behavioral economics studies have been at the center of this crisis, as experiments often involve small samples with inconsistent demographic attributes, subjectively designed batteries of survey questions, and high variability in data analysis methods. Researchers may argue that studying human decision making is inherently messy and therefore not comparable to the replication expected in certain domains of the physical sciences.
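The small-sample problem is worth making concrete. The simulation below is my own illustration with assumed numbers, not drawn from any particular study: when the true effect is modest and each experiment has only 20 subjects per group, most studies fail to reach significance, and the ones that do systematically overstate the effect. That combination is exactly the recipe for splashy findings that later fail to replicate.

```python
import random
import statistics

# Illustrative simulation of underpowered studies (assumed parameters).
random.seed(42)
TRUE_EFFECT = 0.3   # true standardized mean difference (assumed modest)
N = 20              # per-group sample size, typical of small lab studies

def one_study():
    """Run one simulated experiment; return (observed effect, significant?)."""
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(control) / N + statistics.variance(treated) / N) ** 0.5
    return diff, diff / se > 1.96   # crude one-sided z-test

results = [one_study() for _ in range(5000)]
significant = [d for d, sig in results if sig]
power = len(significant) / len(results)
print(f"share of studies reaching significance: {power:.0%}")
print(f"average effect among significant studies: "
      f"{statistics.mean(significant):.2f} (true effect: {TRUE_EFFECT})")
```

The selection effect in the last line is the key: a literature built only from the “successful” runs of such experiments will report effects far larger than reality, so exact replications are almost guaranteed to disappoint.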
The implication for policymakers, however, is that the Cream of Nudge soup is missing one important ingredient: a whole lot of salt.
Electrically yours,
K.T.