I have briefly outlined the typical steps of a behavioral intervention project, but this bears repeating. All stakeholders should agree on a set of 2-3 business metrics that will measure the success of any behavioral intervention at the very start of the project.
A single metric to improve is ideal; a few guardrail metrics that check no negative impact is occurring elsewhere are also very useful. Certainly no more than 3 metrics, or you risk losing focus.
These should be business metrics, not metrics that relate to behavioral change. Sure, you need to measure many behavioral metrics to evaluate how your intervention is affecting the actual behaviors of your customers or users, but those are only a means to an improvement in business metrics. Successful behavioral interventions start with clearly defined business metrics to be improved.
I know this seems obvious, but projects are often launched without a clear definition of what “success” looks like. That’s a recipe for disaster, especially for a behavioral intervention that can yield a large impact.
Case in point: reporting only behavioral metrics
This case study actually prompted this post. An excerpt:
In 2010, Disney Research wanted to see if it could nudge guests into changing their ways in the Disneyland hotel, Anaheim, California.
It tested the impact of leveraging commitment bias on hotel guests. Specifically, it asked guests to pledge and commit to reusing their towels each day during their stay. Guests were also given a Friends of the Earth pin to wear after pledging.
It tested its intervention over the course of a few weeks during which time more than 2000 guests took part.
The impact? Guests were 25% more likely to reuse their towels when they had pledged to do so and had received a Friends of the Earth pin; they also hung up over 40% more used towels compared with a control group.
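To make those reported lifts concrete, here is a minimal sketch of the arithmetic behind them. The underlying counts are hypothetical, assumed purely for illustration, since the excerpt reports only the relative figures:

```python
# Hypothetical counts, assumed for illustration -- the case study
# reports only the relative lifts, not the underlying data.
control = {"guests": 1000, "reusers": 160, "towels_hung": 400}
pledge = {"guests": 1000, "reusers": 200, "towels_hung": 560}

# Relative lift in the share of guests who reused their towels.
reuse_lift = (pledge["reusers"] / pledge["guests"]) / (
    control["reusers"] / control["guests"]
) - 1

# Relative lift in used towels hung up, per guest.
hang_lift = (pledge["towels_hung"] / pledge["guests"]) / (
    control["towels_hung"] / control["guests"]
) - 1

print(f"Reuse lift: {reuse_lift:.0%}")    # 200/160 - 1 = 25%
print(f"Hang-up lift: {hang_lift:.0%}")   # 560/400 - 1 = 40%
```

Note that both are behavioral metrics: they tell you the nudge changed behavior, not whether it helped the business.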
Well, that’s great, but it’s only part of the story, and only a set of behavioral metrics. With only this data, we just get another confirmation that commitment bias is real (which is interesting in itself, sure). Now, if you were the executive in charge of deciding whether or not to expand this policy, what would you do? Well, you can’t tell.
To decide, you would need to know, among other things, the effect of this nudge on:
- how welcome guests feel when they are asked to pledge to reuse towels: maybe they say yes but hate being asked, and resent the hotel for putting them on the spot
- the overall satisfaction of guests during and after their stay: they may act according to their pledge but do so reluctantly and have conflicted feelings about it. They may view the pledge as a constraint they now have to comply with without feeling good about it.
- the rate at which guests book another stay: taking the pledge and upholding it might have made the stay less enjoyable. The next time they book a hotel, they may remember this and avoid properties where they expect to be asked to pledge again.
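This is exactly where a primary business metric plus guardrails earns its keep. A rollout decision rule could be sketched as follows; every metric name, lift value, and threshold here is hypothetical, chosen only to illustrate the structure of the decision:

```python
# All names, lifts, and thresholds below are hypothetical illustrations,
# not Disney's actual data or decision criteria.
def should_roll_out(primary_lift, guardrails,
                    min_lift=0.02, max_guardrail_drop=-0.01):
    """Roll out only if the primary business metric improves enough
    and no guardrail metric degrades beyond the allowed tolerance."""
    if primary_lift < min_lift:
        return False
    return all(change >= max_guardrail_drop
               for change in guardrails.values())

# Example: laundry costs improved 5%, but rebooking rate fell 3%.
decision = should_roll_out(
    primary_lift=0.05,
    guardrails={"guest_satisfaction": -0.005, "rebooking_rate": -0.03},
)
print(decision)  # False -- the rebooking drop blocks the rollout
```

The point of the sketch: a strong behavioral lift can coexist with a rollout-blocking drop in a business metric, which is why the guardrails must be measured at all.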
Without this data (which I’m sure Disney gathered), behavioral metrics are no more useful than academic examples.