The point of this post about priming, in the context of choice architecture tools, is to make clear that while priming does work in some contexts, it is a tool that requires exceptional care to be used effectively. For an overview of what priming is, you can have a look at the Wikipedia entry. Be warned, though: that entry presents a very “pro-priming” view. In reality, priming as a tool has:
- been established on the back of studies that generally don’t replicate
- relied on very small samples (think 40, yes, that small)
- come under fire even from Daniel Kahneman, one of the giants of behavioral economics
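To get a feel for why samples of ~40 are a problem, here is a quick power sketch (a minimal, standard-library-only illustration; the “small effect” benchmark d = 0.2 and the 21-per-group split are illustrative assumptions, not figures from any particular study):

```python
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample comparison of means,
    where d is the true difference in standard-deviation units
    (normal approximation to the t-test)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)          # critical value, two-sided
    noncentrality = d * (n_per_group / 2) ** 0.5
    return z.cdf(noncentrality - z_crit)

# A 42-participant study (21 per group) chasing a small effect:
power_42 = two_sample_power(0.2, 21)
print(f"power with n=42: {power_42:.2f}")      # ≈ 0.09

# Per-group sample size needed to reach the conventional 80% power
# for that same small effect:
n = 2
while two_sample_power(0.2, n) < 0.8:
    n += 1
print(f"n per group for 80% power: {n}")       # several hundred
```

In other words, a 42-person study of a genuinely small effect would detect it less than one time in ten; when such a study nonetheless reports a significant result, that result is far more likely to be noise or an inflated fluke.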
A priming example revealed to be baseless: power poses
So, let’s go into the details a bit. First, for an overview of what some “behavioral economics” practitioners are preaching, you can read the booklet “The Power of Priming”. In it, 70 pages list priming example after priming example. A couple of pages, for instance, are devoted to “power poses” (pp 57-58). I’m picking on this specific priming example simply because it was highlighted recently by Tim Harford as a good example of an exaggerated effect.
Power poses are poses in which you adopt an expansive body posture for 2 minutes, which supposedly makes you more confident. Power poses come from a study conducted by Harvard’s Amy Cuddy et al. on 42 participants.1 The problem is that, as it stands, here is the state of the replication debate, taken from Amy Cuddy’s own Wikipedia page (with links to the publications, because you shouldn’t trust Wikipedia either…):
In 2014, Eva Ranehill and other researchers tried to replicate this experiment with a larger group of participants and a double-blind setup. Ranehill et al. found that power posing increased subjective feelings of power, but did not affect hormones or actual risk tolerance. They published their results in Psychological Science.
Carney, Cuddy, & Yap responded in the same issue of Psychological Science, with an overview of 33 published studies related to power posing, including the Ranehill et al. study. Almost all had reported significant effects of some kind. The overview noted methodological differences between their 2010 study and the Ranehill replication, which may have moderated the effects of posing.
Two researchers at the Wharton School, Simmons & Simonsohn, later shared a meta-analysis of the same 33 studies on their statistics blog. Based on the distribution of p-values reported across the studies (the ‘p-curve’), they concluded that studies so far have demonstrated little to no average effect of power posing. Their analyses will appear in Psychological Science.
In a pre-registered direct replication, Garrison et al. found that expansive (vs. contractive) body postures had either no effect on, or actually reduced, psychological states associated with power. The fact that the study was pre-registered, had a large n (over 300), used multiple measures of power (an ultimatum game, a gamble, and feelings of being powerful and in charge), and tested not only posing but also adopting a direct eye gaze increases confidence that expansive poses have no effect, or in fact a negative one, on feelings of power.
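The p-curve logic behind the Simmons & Simonsohn critique mentioned above can be sketched with a short simulation (illustrative only; the effect sizes, group sizes, and study counts are assumptions, not their actual data). When studies chase a real effect, significant p-values bunch up near zero; when they are fishing in noise, significant p-values spread evenly across the 0–.05 range:

```python
import random
from statistics import NormalDist

rng = random.Random(42)
Z = NormalDist()

def simulated_p_values(effect, n_per_group, n_studies):
    """Two-sided p-values from many simulated two-group studies
    (z-test on normal data with known unit variance)."""
    ps = []
    for _ in range(n_studies):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        mean_diff = sum(b) / n_per_group - sum(a) / n_per_group
        se = (2 / n_per_group) ** 0.5
        ps.append(2 * (1 - Z.cdf(abs(mean_diff / se))))
    return ps

def share_below_025(ps):
    """Among 'significant' results (p < .05), the share with p < .025.
    Uniform p-values under the null give ~0.5; a real effect pushes
    the curve toward small p-values, so this share rises well above 0.5."""
    sig = [p for p in ps if p < 0.05]
    return sum(p < 0.025 for p in sig) / len(sig)

print(round(share_below_025(simulated_p_values(0.0, 20, 4000)), 2))  # ≈ 0.5
print(round(share_below_025(simulated_p_values(0.8, 20, 4000)), 2))  # well above 0.5
```

A right-skewed p-curve (most significant p-values well below .025) is what a real effect produces; a flat one suggests the significant results carry little evidential value, which is what Simmons & Simonsohn concluded for the 33 power-posing studies.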
So, essentially, power poses don’t have any effect, at least as far as science can tell. And now to quote Tim Harford on the popularity of Cuddy’s initial findings:
It inspired a book, and a TED talk that has been watched 34 million times.
That doesn’t make it any less false.
Most priming studies are very problematic
If you’re not convinced that priming studies as a whole are problematic, you can do two things:
- read the start of Jason Collins’ superb post on “bad behavioral science”, which addresses priming directly (read the full post while you’re at it, as it’s excellent!)
- read “Power of Suggestion: The amazing influence of unconscious cues is among the most fascinating discoveries of our time—that is, if it’s true”, a good and easy read on priming as well
But don’t take my word (or all these others’) for it. Even Daniel Kahneman changed his position on priming. Kahneman wrote about priming very confidently in his 2011 best-seller Thinking, Fast and Slow:
When I describe priming studies to audiences, the reaction is often disbelief. This is not a surprise: System 2 believes that it is in charge and that it knows the reasons for its choices. Questions are probably cropping up in your mind as well: How is it possible for such trivial manipulations of the context to have such large effects? …
The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true.
But as the first doubts surfaced, the 2002 Economics Nobel laureate penned an open letter to the priming community in Nature. An extract:
As all of you know, of course, questions have been raised about the robustness of priming results. The storm of doubts is fed by several sources, including the recent exposure of fraudulent researchers, general concerns with replicability that affect many disciplines, multiple reported failures to replicate salient results in the priming literature, and the growing belief in the existence of a pervasive file drawer problem that undermines two methodological pillars of your field: the preference for conceptual over literal replication and the use of meta-analysis. Objective observers will point out that the problem could well be more severe in your field than in other branches of experimental psychology, because every priming study involves the invention of a new experimental situation.
For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research. Your problem is not with the few people who have actively challenged the validity of some priming results. It is with the much larger population of colleagues who in the past accepted your surprising results as facts when they were published. These people have now attached a question mark to the field, and it is your responsibility to remove it.
The issue is that the accumulation of results and replication efforts since then has only added more doubt. Among them, for example, are the replication attempts of around 100 psychology experiments by Brian Nosek et al., published in Science in 2015.
So, when in doubt, basically, doubt priming.
For practitioners, what to do?
Now the question is: if we are trying to use priming as a tool for behavior change, can we use it? If yes, how?
Priming can be used, but with caution. First, it is now widely acknowledged that even when priming does have an effect, that effect is likely very small. So don’t believe the hype that priming your users or consumers will be the miracle trick that gets you results.
Second, some priming results seem valid. The problem is sorting through the mountain of priming studies that rely on low-powered small sample results to find the priming situations that are likely to replicate in a business context.
Third, priming is much more context-dependent than generally acknowledged. When pressed on the replication issues, most researchers defend themselves by saying that priming results may not replicate if the country changes, the experimenter behaves differently, and so on. Priming does not “just work”; it takes a lot of fine-tuning. And due to publication bias, you never hear about all the studies that tried to elicit a priming effect but never found one.
So next time you hear about priming, be very, very skeptical!
References:
- Carney, Dana R.; Cuddy, Amy J. C.; Yap, Andy J. (October 2010). “Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance”. Psychological Science 21 (10): 1363–1368. doi:10.1177/0956797610383437. PMID 20855902. ↩