The myths and misunderstandings behind ineffective D&I: Basing your Diversity & Inclusion strategy on data
This post draws on Chapter 6 of Heidi R. Andersen’s Diversity Intelligence (2021), one of the most useful reads I found when getting into EDI.
Diversity strategy has to be driven by data, and for that to be possible, one of the first steps for every EDI department must be a well-designed inclusion survey. “An inclusion survey is essentially a perception gap analysis, which shows the level of inclusion that different identity groups within your company experience and how they thrive differently in this culture,” writes Heidi R. Andersen. This quote is so essential that anyone tasked with any sort of diversity & inclusion work might as well write it on a post-it note and stick it near their workstation. It should be a constant reminder that the relational, human side of EDI, which relies on soft skills, and the technical, evidence-based side are not rivals but partners.
But how do we go about gathering the data we need to build an effective EDI strategy on?
Start by being clear about the decision you want the data to inform. If your leadership asks for a survey because they read somewhere that they should measure inclusion, push back gently and ask what decisions the results should enable. Are you trying to reduce employee turnover in a particular department, make promotion processes more transparent, or test whether a recent policy change actually improved employee well-being? Naming the decision narrows the scope, keeps the survey useful, and prevents measurement from becoming a vanity exercise. Once you know the decision, design a mixed-method approach that gives you both the where and the why: a concise, validated questionnaire to produce reliable, segmentable metrics, and a set of interviews or focus group discussions to surface the lived experience behind the numbers. Quantitative items should map to core outcomes you care about, like belonging, psychological safety, perceived fairness of promotion and pay, and intent to stay, while demographic and role data let you cross-tabulate and find the perception gaps that matter. The qualitative work is equally important: it explains the anomalies and points to practical fixes you would be unlikely to guess from the numbers alone.
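To make the cross-tabulation idea concrete, here is a minimal sketch of a perception gap analysis in plain Python. The response data, group labels, and metric names are all hypothetical stand-ins, not from Andersen’s book; the point is simply that averaging each outcome per identity group and comparing the group averages is what surfaces the gaps.

```python
from statistics import mean
from collections import defaultdict

# Hypothetical survey responses: each row holds a respondent's
# identity-group label and their 1-10 scores on core outcomes.
responses = [
    {"group": "women", "belonging": 6, "psych_safety": 5, "fair_promotion": 4},
    {"group": "women", "belonging": 7, "psych_safety": 6, "fair_promotion": 5},
    {"group": "men",   "belonging": 8, "psych_safety": 8, "fair_promotion": 8},
    {"group": "men",   "belonging": 7, "psych_safety": 7, "fair_promotion": 7},
]

def perception_gaps(rows, metrics):
    """Average each metric per group, then report the spread
    (highest minus lowest group average) for each metric."""
    by_group = defaultdict(lambda: defaultdict(list))
    for row in rows:
        for m in metrics:
            by_group[row["group"]][m].append(row[m])
    averages = {g: {m: mean(scores[m]) for m in metrics}
                for g, scores in by_group.items()}
    gaps = {m: max(a[m] for a in averages.values())
               - min(a[m] for a in averages.values())
            for m in metrics}
    return averages, gaps

avgs, gaps = perception_gaps(
    responses, ["belonging", "psych_safety", "fair_promotion"]
)
# The metrics with the largest gaps flag where different groups
# experience the same culture very differently.
print(gaps)
```

In a real survey you would of course work with far more respondents and dimensions, but the logic stays the same: the gap, not the overall average, is the signal.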
One of the clearest lessons in Andersen’s chapter is how measurement exposes holes in what organisations think they know about people and motivation. A common myth we still hear is that fewer women reach senior roles because fewer women want them. Andersen describes a striking example that overturns that assumption: in a set of inclusion surveys her company, the Living Institute, ran across seven financial institutions, women scored themselves higher on leadership ambition than men did. The chapter reports that female respondents averaged 8.5 out of 10 on ambition compared with 7.6 for men, and senior women scored 9.7 versus 8.8 for senior men.
“In fact, the reality revealed over and over again in our inclusion surveys, as well as other research like a 2017 study from the Boston Consulting Group, is that women are already more ambitious than their male colleagues and this idea that they do not want leadership positions simply is not true.”
Thus, there is a better explanation for why there are fewer women compared to men in senior leadership:
“The data demonstrated that male leaders exhibited an unconscious bias against female leadership talent, which was the real reason why more women had not advanced to higher leadership positions.”
That reversal matters because it changes the remedy. If you believe the problem is a lack of ambition, you design programmes to “fix” women: confidence workshops, motivation seminars, more networking events. If the data shows ambition is present but leadership pipelines and decision‑making are biased, the right response is structural: transparent promotion criteria, sponsorship programmes that hold leaders accountable, inclusive leadership training that helps managers recognise and develop talent they might otherwise overlook. Andersen’s point is simple and practical: measurement doesn’t just validate intuition, it also reveals where your intuition is wrong and where your energy will actually move the needle.
So when you plan a D&I survey, design it to test the assumptions that matter. Ask whether leaders can see the talent in front of them, whether promotion criteria are applied consistently, and whether those who aspire to lead have equal access to sponsorship and stretch assignments. Use surveys to surface ambition and perception gaps, and use interviews to explain why those gaps exist. Then translate the findings into targeted interventions that change leader behaviour and system design rather than trying to change people who are already ready and willing.
Leaders will only fund measurement if you show how it links to business outcomes. Build a one‑page dashboard for the executive team that ties three headline inclusion metrics to concrete operational impacts: turnover cost, time to fill critical roles, and customer or product outcomes where relevant. Put a short qualitative excerpt on the page so the numbers have a human face. A single, well‑chosen story often does more to shift minds than a table of statistics.
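As a rough illustration of tying an inclusion metric to a cost figure, here is a small sketch of the kind of turnover arithmetic that might sit behind such a dashboard. All figures and the replacement-cost multiplier are hypothetical assumptions for illustration, not from the source.

```python
def turnover_cost(headcount, leavers, avg_salary, replacement_multiplier=0.5):
    """Estimate the annual attrition rate and cost for one unit.

    replacement_multiplier is an assumed fraction of salary covering
    recruiting, onboarding, and lost productivity per departure.
    """
    rate = leavers / headcount
    cost = leavers * avg_salary * replacement_multiplier
    return rate, cost

# Hypothetical department: 120 people, 18 leavers a year, 60k average salary.
rate, cost = turnover_cost(headcount=120, leavers=18, avg_salary=60_000)
print(f"Turnover: {rate:.0%}, estimated cost: {cost:,.0f} per year")
# → Turnover: 15%, estimated cost: 540,000 per year
```

Even a back-of-the-envelope figure like this, placed next to the inclusion score of the department it describes, gives executives a reason to care about a one-point drop in belonging.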
Beware the common pitfalls that turn data into theatre. Running a one-off survey without a published action plan breeds cynicism. It is hard to overstate how important it is to follow up with a clear action plan as soon as possible after the data has been collected. A D&I strategy that takes too long to see the light of day after the promises made alongside an inclusion questionnaire will more likely than not cost you employee trust and buy-in.
Another thing to keep in mind when publishing inclusion data is that over‑segmenting small populations risks identifiability, so set minimum cell sizes and combine categories where necessary to protect confidentiality. Make sure you are also not dismissing qualitative signals just because the quantitative numbers ‘look fine’; that is a fast route to missed risk, so treat open text and interviews as equal evidence. Lastly, never use the data to punish individuals. Even if lower satisfaction numbers are clearly linked to particular department heads, measurement and the publishing of data must be diagnostic and developmental, not punitive. Treat the data as an opportunity to encourage personal and professional growth, for example through offers of inclusive leadership training.
Finally, keep the human work front and centre. Measurement is a muscle you build, not a report you file. Use the data to open conversations, not to close them. Share findings with humility, invite interpretation from the groups most affected, and co‑design solutions where possible. The soft skills you already use, like listening, convening, and translating between lived experience and operational levers, are what make data useful. When you combine disciplined measurement with the relational work of securing buy‑in, EDI stops feeling like an optional extra, even to the most sceptical, and becomes a practical, measurable route to happier employees and better business.