Beyond average thinking

Data science is changing global and local business.

One might think that immense data sets and expensive tools are required to make sense of this increasingly complex world, but that’s not true. Before we even look at data sets, we can all cultivate better data sense.

To that end, I’d encourage us all to look at one common kind of data with suspicion: the average. We all use averages on a daily basis, e.g., when we split a dinner bill or keep tabs on our favorite athletes. They can be useful. But the following examples show how this easy-to-compute shorthand can also lead us astray.

Making money online

I do a lot of work in e-commerce, in part because I find it interesting. Also, all my clients, regardless of industry, care about making money. Until there’s a zombie apocalypse, knowing how to make money within and across digital ecosystems is a valuable skill.

For a pure e-commerce business, Average Order Value (AOV) is a standard measure of financial health. It’s a good metric as far as metrics go. In addition to aggregate revenue, it gives business owners a sense of how things are going compared to last week, last year, or some other benchmark.

But an averaged view of user purchase and onsite behavior isn’t all that actionable on its own. The average tells us whether “things” are up or down, but it doesn’t tell us which specific things are working or not working.

For example, if our site sells $10 socks and $200 jackets, and the average order size this month is $100 (and that’s lower than we’d like)… do we sell more socks? More jackets? Run a promotion? Raise prices? Lower them? Refresh our advertising messages? Redesign our mobile website? Streamline our checkout process? Switch performance marketing vendors? Have an existential meeting about this on a daily basis?

Thankfully, there are easy and well-known ways to quickly get beyond the simple average. Avinash Kaushik is a master of digital marketing analytics, and he is eloquent throughout his work about the necessity of looking at user behavior at a “cohort” level: paying attention not just to the average, but to what individual groups do. If we review our site data at a cohort level, we might find we have many different kinds of shoppers who behave in statistically consistent ways: holiday gift-givers, power buyers, one-time visitors, socks-only-buyers, etc. Instead of optimizing for the average, we will have a higher impact if we prioritize, then optimize, for the individual groups.
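As a toy illustration of why the cohort view is more actionable, here is a short sketch using entirely made-up order data and cohort labels (real cohorts would come from your own analytics, not be hand-tagged like this):

```python
from statistics import mean

# Hypothetical orders tagged by shopper cohort. All cohort names and
# dollar figures are invented for illustration only.
orders = [
    ("socks-only buyer", 10), ("socks-only buyer", 20),
    ("socks-only buyer", 10), ("socks-only buyer", 10),
    ("jacket buyer", 200), ("jacket buyer", 220),
    ("holiday gift-giver", 110), ("holiday gift-giver", 90),
]

# The overall AOV lumps everyone together into one number...
overall_aov = mean(value for _, value in orders)

# ...while a cohort-level view reveals very different behaviors.
cohorts = {}
for cohort, value in orders:
    cohorts.setdefault(cohort, []).append(value)
cohort_aov = {name: mean(values) for name, values in cohorts.items()}

print(f"Overall AOV: ${overall_aov:.2f}")
for name, aov in sorted(cohort_aov.items()):
    print(f"  {name}: ${aov:.2f}")
```

The overall AOV here works out to $83.75, a number that describes no actual shopper: socks-only buyers average $12.50 and jacket buyers $210. Each cohort suggests its own distinct action, which the blended average conceals.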

Running a successful business generally

It turns out that deliberately segmenting one’s audiences at the “top of the funnel” and performing cohort analysis at the “bottom of the funnel” are essential components of business, brand, and marketing strategy generally. I’ve written about this before, most recently in the pieces “What is brand?” and “What is marketing?”.

The mechanics of doing this analysis can get quite heady. For example, see the recent strategy+business article entitled “Getting Value Propositions Right with Data and Analytics” by David Meer, which looks at the product development approaches of global CPG companies. The article has many interesting findings, but the main takeaway is simple: companies that sub-segment their audiences optimize their footprint and their impact, while those that rely on simple averages fail by degrees.

Companies of any kind or size can commit to “top-down” and “bottom-up” audience segmentation before, and sometimes without, high volumes of data or complex analysis.

Investing for the future

Managing a personal retirement plan forces you to make iterative predictions about the future: your own and that of global capital markets. Many Americans use some variation of the “Random Walk Down Wall Street” approach, with broadly diversified portfolios, dollar cost averaging, and dynamic allocations based on age and risk tolerance. With this strategy, any big meltdowns in the market will theoretically be offset by subsequent run-ups, so it will all “average out” in the end.

There are a number of risks to this otherwise-solid approach, including an implicit assumption that future markets will behave like previous ones and that market turbulence has a cyclic cause and not a structural one. But even investors who assume that normal market conditions will persist in the coming decades — e.g., no zombie apocalypse, no crash of the US Dollar — might still be surprised to learn, per this article, that average stock market returns aren’t average. Even with a well-diversified portfolio, in mathematical simulations 69.2% of investors earn less than the average and 8.9% even lose principal over a 30-year period. This is in conflict with the conventional wisdom that investment outcomes are normally distributed and long time horizons ameliorate risk. It likely also changes how any one of us might expect our investments to perform in the future.
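The reason most investors land below the average is that compounded returns are right-skewed: a few very lucky paths pull the mean well above the median. A small Monte Carlo sketch makes the effect visible (the 7% mean return and 17% volatility below are illustrative assumptions, not a market forecast, and the exact percentages will differ from the article’s):

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

YEARS = 30
TRIALS = 10_000
MEAN_RETURN = 0.07   # assumed average annual return (illustrative)
VOLATILITY = 0.17    # assumed annual standard deviation (illustrative)

def final_balance():
    """Compound one investor's randomly drawn annual returns over 30 years."""
    balance = 1.0
    for _ in range(YEARS):
        balance *= 1 + random.gauss(MEAN_RETURN, VOLATILITY)
    return balance

outcomes = [final_balance() for _ in range(TRIALS)]
average = sum(outcomes) / TRIALS
below_average = sum(1 for x in outcomes if x < average) / TRIALS
lost_money = sum(1 for x in outcomes if x < 1.0) / TRIALS

print(f"Average final balance: {average:.2f}x starting principal")
print(f"Share of investors below the average outcome: {below_average:.0%}")
print(f"Share of investors who lost principal: {lost_money:.1%}")
```

Well over half of the simulated investors finish below the average outcome, even though every one of them experienced the same “average” annual return. The average is dragged upward by a lucky minority; the typical (median) investor does worse.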

Making healthy decisions

One last example of how averages can lead us astray: at some point in our lives, we will all find ourselves reading new medical and scientific research, perhaps looking for promising experimental treatments for an illness. Scientific journals have rigorous filter criteria for publication… So, of the research reports that clear those filters, what percentage do you think are later proven to be true, in the sense that the initial results can be replicated?

You might think the number is quite high, or if you’re very conservative, you might estimate the odds as being about 50/50. But in 2005, John P. Ioannidis wrote a shocking paper asserting that the majority of published research findings are false. Bayer Labs later confirmed that only 35% of its own experiments could be replicated. That’s much worse than a coin toss.

Ioannidis’s finding is not just empirical but also mathematical. Given that a) scientists only run experiments in line with their current hypotheses, b) only successful experiments get submitted for publication, and c) research publications have clear and rigorous, yet imperfect, publication criteria, we can mathematically forecast a high rate of inaccuracy, which is indeed what we find. Nate Silver describes these two papers and their broader significance in his wonderful book The Signal and the Noise.
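The core of that mathematical argument is a Bayes-style calculation: the share of “positive” findings that are actually true depends on the prior probability that a tested hypothesis is true, the studies’ statistical power, and the significance threshold. The parameter values below are illustrative assumptions, not figures from the paper:

```python
def ppv(prior, power, alpha):
    """Positive predictive value: the probability that a statistically
    'significant' result reflects a true effect, given the prior chance
    the hypothesis is true, the study's power, and the alpha threshold."""
    true_positives = power * prior          # true hypotheses correctly detected
    false_positives = alpha * (1 - prior)   # false hypotheses passing by chance
    return true_positives / (true_positives + false_positives)

# Suppose only 1 in 10 tested hypotheses is actually true, studies have
# 80% power, and the usual 5% significance threshold applies.
print(f"{ppv(prior=0.10, power=0.80, alpha=0.05):.0%} of positive findings are true")

# Underpowered studies (common with small samples) make things worse.
print(f"{ppv(prior=0.10, power=0.35, alpha=0.05):.0%} with only 35% power")
```

Under these assumptions, roughly a third to a half of published positive findings would be false even before accounting for bias or selective reporting, which is the qualitative result Ioannidis derives far more carefully.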

Ioannidis’s detailed math is interesting for those who care to read the complete article, but the takeaway can be useful to any of us regardless. Rather than assume that publishers are bad actors or bad practitioners, we can look at individual publications or subject domains and apply a percentage likelihood of accuracy to the findings. E.g., we could estimate that any new medical treatment in pre-clinical research, despite promising experimental findings, has less than a 35% probability of being replicable, with additional completely unknown risks of side effects, complications, or limitations. This will likely affect our personal risk assessments regarding which treatments we want to pursue.


Averages can of course be useful; often they’re just what we need. But whenever you hear or see an average, here are some questions you can ask yourself:

  • Is the data normally distributed?
  • Does a “cohort” level view provide more insight?
  • What is our % confidence in the findings?
  • What is our % confidence in our confidence assessment?
  • Are there any external variables or implicit assumptions at work?

These questions might lead to some real “aha’s” — and beyond-average thinking — even before you re-crunch the numbers.