
What Good Consultants Know and Can't Prove

Here is a situation most experienced organisational consultants will recognise, in one form or another.

You’re three weeks into an engagement. You’ve done the stakeholder interviews, sat in on the strategy sessions, reviewed the internal communications. And you know — with the quiet certainty that comes from having done this enough times — that the transformation programme you’ve been brought in to support is in serious trouble. Not because the strategy is wrong. Because the organisation doesn’t believe it.

You’ve heard it in the pauses after executives finish explaining the vision to their teams. You’ve seen it in the body language of middle managers who have watched three previous initiatives get buried with a press release and a lessons-learned document. You’ve felt it in the quality of attention in town halls — present but waiting, rather than present and invested.

You know this. You cannot prove it.

And here is where the situation becomes genuinely uncomfortable, because you are a professional who has been hired to provide rigorous, defensible analysis to a client who is paying a significant amount of money and has a board to answer to. The observation you have — this isn’t landing the way leadership thinks it’s landing — is real and important and almost certainly correct. But it is based on professional intuition accumulated over years of doing this work, and professional intuition does not survive a CFO asking “what’s the evidence?”

So you make a judgment call about how directly to say it. You find proxies — turnover data, productivity metrics, the volume of questions submitted anonymously at the all-hands. You frame the concern diplomatically. You put a slide in the deck that gestures at change readiness without asserting it too strongly. And you hope the client hears what you’re actually saying underneath the careful language.

Sometimes they do.


The consulting profession has developed sophisticated frameworks for nearly every aspect of organisational transformation work. The structured problem-solving. The stakeholder mapping. The communications planning. The change impact assessment.

What it has not developed, with comparable rigour, is a systematic method for answering the question that most determines whether a transformation succeeds or fails: what does the organisation actually believe about this, at every level, right now?

This is not for lack of trying. The interview-based diagnostic is standard in any serious engagement: gather qualitative data from a representative sample, synthesise the themes, report back. But anyone who has run these knows their limitations. Interviews surface what people are willing to say to a consultant, which is a subset of what they actually think. The sample is never as representative as it looks. The synthesis involves judgment calls that are defensible but not reproducible. And the results are a snapshot of a single moment, which is particularly unhelpful in a transformation that is itself in motion.

The focus group has similar problems, with the addition of group dynamics that systematically suppress minority views — which are often the most diagnostically important ones.

The engagement survey, as discussed elsewhere, measures the wrong thing.

What remains is the consultant’s judgment, formed from observation and experience, expressed through diplomatic indirection, and received by the client with varying degrees of willingness to hear it. This is an imperfect and fundamentally oral tradition of organisational intelligence — passed down through years of practice rather than encoded in method.

It works. Often quite well. The experienced consultant who walks into an organisation and reads the room accurately is providing genuine value that an algorithm cannot replicate, because it depends on the kind of contextual pattern recognition that takes careers to develop.

But it doesn’t scale. It isn’t reproducible. It doesn’t produce evidence that survives the CFO’s question. And when the consultant’s judgment turns out to be wrong — or, more often, when it turns out to be right but nobody acted on it because it couldn’t be proved — there is no feedback loop that improves the next engagement. The knowledge stays personal.


The consultants who are best at this work tend to develop their own informal methods. Regular informal conversations with people at multiple levels, conducted outside the formal interview structure. Attention to the language people use — not just what they say but which words they reach for and which they avoid. Sensitivity to what’s not being discussed in rooms where it should be.

These methods are real and valuable. They are also time-intensive, difficult to delegate, and produce intelligence that the consultant can act on but usually cannot present as evidence.

What they approximate, imperfectly and laboriously, is what a well-designed longitudinal assessment delivers systematically: a structured mechanism for gathering honest perspectives from multiple stakeholder groups, tracking how those perspectives shift over time, mapping where they diverge from each other, and surfacing patterns that a single observer — however experienced — would likely miss.

Not because the consultant’s judgment is wrong. Because the data makes it defensible.

The room you read in week three, the assessment quantifies in week six. The concern you expressed diplomatically in the steering committee, the data puts on a slide with error bars and a trend line. The minority view that didn’t survive the focus group gets its own protected channel in the anonymous response architecture, flagged for its divergence rather than smoothed away.
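If that sounds abstract, the mechanics are not. Here is a minimal sketch in Python of what "quantified, with error bars and a trend line" can mean in practice, assuming an invented data shape: anonymous 1-to-5 agreement scores, collected per stakeholder group across successive assessment waves. Every group name, score, and threshold below is a hypothetical illustration, not a description of any particular instrument.

```python
from math import sqrt
from statistics import mean, stdev

# Illustrative data shape only: wave -> stakeholder group -> anonymous
# 1-5 agreement scores against a statement like "I believe this
# programme will be seen through." All numbers are invented.
waves = {
    1: {"executives":      [4, 5, 4, 4],
        "middle_managers": [3, 2, 3, 2, 3],
        "front_line":      [3, 3, 2, 3]},
    2: {"executives":      [4, 4, 5, 4],
        "middle_managers": [2, 2, 3, 2, 2],
        "front_line":      [3, 2, 2, 2]},
}

DIVERGENCE = 0.75  # hypothetical cut-off, in scale points

for wave in sorted(waves):
    groups = waves[wave]
    overall = mean(s for scores in groups.values() for s in scores)
    print(f"wave {wave}: overall mean {overall:.2f}")
    for group, scores in groups.items():
        m = mean(scores)
        se = stdev(scores) / sqrt(len(scores))  # basis for error bars
        flag = "  <- divergent" if abs(m - overall) > DIVERGENCE else ""
        print(f"  {group:16s} {m:.2f} +/- {se:.2f}{flag}")

# Trend, crudely: change in each group's mean from first wave to last.
first, last = min(waves), max(waves)
for group in waves[first]:
    delta = mean(waves[last][group]) - mean(waves[first][group])
    print(f"{group}: {delta:+.2f} between wave {first} and wave {last}")
```

Nothing here is sophisticated statistics, and that is the point: the week-three read becomes a per-group number with an uncertainty attached, a flag when one group pulls away from the rest, and a direction of travel between waves.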

This is not replacing the consultant. It’s giving the consultant evidence for what they already know.


There is a version of the intelligence problem that is purely internal — the organisation gathering information about itself. But the more interesting version involves a trusted external party with the structural independence to hear things that organisations cannot say to themselves.

Employees say different things to a neutral assessment process than they say in internal surveys, for the same reason they say different things to an external consultant than they say in an all-hands. The organisational hierarchy creates social pressure toward agreement. The anonymous, externally administered assessment creates space for the actual view.

The good consultant already knows how to create that space in conversation. The question is whether the intelligence gathered in that space can be organised, quantified, tracked longitudinally, and presented in a form that survives the CFO’s question and shapes the engagement with the authority it deserves.

That’s a tools problem. It’s the tools problem, actually. And it’s been unsolved long enough that most consultants have stopped expecting a solution and built the workarounds into their practice.


Actual Intelligence builds the infrastructure for consultants who already understand the problem. The assessment doesn’t replace the judgment. It makes the judgment visible.