Mark Schenk, writing at Anecdote | If you can’t measure it…

I recently heard a presentation that mentioned the truism ‘if you can’t measure it, you can’t manage it’. It reminded me of how uncomfortable I have always been with this statement and the way it gets touted like a mantra in some organisations. If we view the functions of management as ‘plan, organise, lead, control, direct’, then both ‘measuring’ and ‘managing’ appear more appropriate in an ordered world where cause and effect are knowable. For complex situations, where cause and effect cannot be predicted with accuracy, the concepts of measure and manage aren’t sufficient for success. Measure and manage also make no allowance for emergence and tend to overlook unintended consequences. Fortunately, I think many more people recite this truism than really believe it.

I prefer to view the function of management as ‘creating the conditions that enable people to be successful’. I also much prefer the concept expressed by Albert Einstein: “Not everything that counts can be counted, and not everything that can be counted counts”.

Nor is the count the thing counted. In the best case, and presumptively, business-degreed managers had to slog through maths coursework framed by an explanation of the difference between metrics and the phenomena measured, and, crucially, of what metrics cannot measure. There’s another crucial step just beyond this: explaining that the fact that something can’t be measured accurately (or at all) does not diminish its phenomenal effects.

You can test this by asking any manager how the metrics implicate the operations of the thing measured, and how those operations hang together systematically to describe measurable differentials and contingencies unfolding over time. Note that there is, in an old-fashioned sense, a kind of Newtonian hydraulics implicit under the numbers.

The key point is that there exist other differentiated and contingent phenomena which are also implicit in the (so-called) system, and these are not accurately quantifiable. Which is to say there is, necessarily, a kind of uncertainty principle at work in systematic metrical extrapolations and systematic qualitative descriptions, because only some of the effects in a system can be, in effect, computed and qualitatively apprehended.

It follows from this that any rigorous system-awareness would also necessarily acknowledge the cognitive bias at work in any sensemaking that is overly reductive, overly deterministic, or blind to this uncertainty factor.

This opens up to something very interesting. Suppose, then, that there are tacit workarounds to this problem of fuzziness. Those workarounds would instantiate various ad hoc heuristics, and I would tentatively submit that those heuristic features are not easily measured.

Filed under Gregory Bateson, psychology
