Reading Between the Lines: The Unspoken Truths of Employee Surveys
TLDR: This article covers some of the key challenges of running and interpreting employee surveys, along with specific actions companies can take to get more accurate results.
When researching this post, I couldn’t find much recent evidence about how employees respond to employee surveys in the UK. So I commissioned a poll from Survation, a British polling company I’ve worked with in the past.
This poll was paid for by my newsletter, so the full version of this article will be for paid subscribers only. Please consider subscribing - it costs less than a pint and helps me protect the time to write the newsletter.
In my last post about the Civil Service I discussed an excellent report which used a long-form discovery technique (naturalistic interviews and thematic analysis of the transcripts).
This kind of discovery work is typically how I start my work with clients. One key reason for this is that company management often gets a very incomplete picture of how things are on the ground for their staff, regardless of how close to the work they are or how much time they invest in designing and running employee surveys. This isn't their fault: detailed long-form discovery work is time- and resource-intensive, and there is often a perception that it is "fluffy and subjective".
Around this time of year, many companies launch an end-of-year employee survey, in part to meet this demand for "discovery" data. Employee surveys are easy to run and offer a "numerically precise" picture of the world, which people perceive as hard data even though it is just as subject to bias and poor reliability. I find surveys highly effective and I use them all the time, but they have some significant limitations. The aim of this post is to highlight those limitations so you can be more deliberate about designing surveys and interpreting their results.
In general, surveys are good at getting a little bit of information from a lot of people. But:
Surveys rely on self-report measures, which can be biased and/or unreliable
There are challenging trade-offs between data richness and data quality
Bias & poor reliability in self-report measures
Self-report measures have well-known limitations. "Everybody Lies" by economist and data scientist Seth Stephens-Davidowitz documents the broad evidence base showing the gap between what people say (largely measured through survey data) and what they do (based on search and social media data). Even in anonymous surveys, people feel they will be judged for giving true answers, so they skew their responses towards what they consider to be the "norm" - this is known as social desirability bias.
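To make the mechanism concrete, here is a minimal sketch of how social desirability bias can distort a survey average. All the numbers (the response distribution, the perceived norm, the strength of the pull towards it) are hypothetical assumptions for illustration, not estimates from any study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical true satisfaction on a 1-5 scale, with many staff
# genuinely dissatisfied (assumed distribution, not real data).
true_scores = rng.choice([1, 2, 3, 4, 5], size=1_000,
                         p=[0.25, 0.25, 0.20, 0.20, 0.10])

# Social desirability bias: each respondent shifts their answer partway
# towards what they perceive as the "acceptable" norm (assumed to be 4).
PERCEIVED_NORM = 4
PULL = 0.5  # assumed strength of the pull towards the norm

reported = np.clip(
    np.round(true_scores + PULL * (PERCEIVED_NORM - true_scores)), 1, 5
)

print(f"True mean satisfaction:     {true_scores.mean():.2f}")
print(f"Reported mean satisfaction: {reported.mean():.2f}")
```

Each individual answer moves only a little, but the aggregate picture shifts from "mostly unhappy" to "broadly content" - exactly the failure mode that makes "numerically precise" survey data misleading.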
An amazing new paper by a joint team from the University of Leeds and the University of Cambridge captures this complexity well. They used a "Think Aloud Protocol" to get participants to elaborate on what they were thinking about when answering survey questions. Around 1 in 4 participants appeared to have misinterpreted or misunderstood at least one survey item. Participants also struggled to answer in a way that captured their "multiple selves", and felt the surveys missed nuance. While the work didn't focus on employee surveys specifically, the complexity in the answers points to these wider challenges in self-report measures, which can further drive bias and poor reliability.
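One consequence of misinterpretation is attenuated reliability. Here is a minimal simulation, again using assumed parameters rather than figures from the paper, showing how a 25% misread rate drags down the test-retest correlation of a survey item:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_attitude = rng.normal(0, 1, n)  # latent attitude the item tries to measure

def respond(misread_rate):
    """One administration of the item: most people answer based on their
    true attitude plus some noise; a fraction misread the question and
    effectively answer at random."""
    honest = true_attitude + rng.normal(0, 0.5, n)
    misread = rng.random(n) < misread_rate
    return np.where(misread, rng.normal(0, 1, n), honest)

for rate in (0.0, 0.25):
    r = np.corrcoef(respond(rate), respond(rate))[0, 1]
    print(f"misread rate {rate:.0%}: test-retest correlation = {r:.2f}")
```

With no misreads the two administrations correlate at around 0.8; with a quarter of respondents misreading, the correlation falls to roughly 0.5, even though the underlying attitudes never changed.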
Within the world of employee survey data, the picture is even more complicated.