A reader pushed back on the judgement graph essay with a question I had been careful not to ask directly: what happens when a VP’s judgements are demonstrably bad?
The judgement graph calibrates reasoning against outcomes. That is the stated purpose. But calibration cuts both ways. If the system shows which reasoning patterns produce good results, it also shows which ones don’t. And those patterns are attached to people.
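To make "calibration cuts both ways" concrete, here is a minimal sketch of the two queries involved. Everything in it is hypothetical: the record fields, the pattern labels, and the scoring rule are illustrative assumptions, not a description of any real judgement-graph implementation. The point is that the same data answers both questions.

```python
from collections import defaultdict

# Hypothetical judgement records: each links a reasoning pattern
# (a label for how a decision was justified) to an observed outcome.
# Field names and values are invented for illustration.
records = [
    {"pattern": "analogy-to-prior-deal", "author": "vp_1", "outcome": "good"},
    {"pattern": "analogy-to-prior-deal", "author": "vp_1", "outcome": "bad"},
    {"pattern": "analogy-to-prior-deal", "author": "vp_1", "outcome": "bad"},
    {"pattern": "bottom-up-forecast", "author": "analyst_1", "outcome": "good"},
    {"pattern": "bottom-up-forecast", "author": "analyst_1", "outcome": "good"},
]

def calibration(records, key):
    """Fraction of good outcomes, grouped by the given field."""
    tally = defaultdict(lambda: [0, 0])  # group -> [good, total]
    for r in records:
        tally[r[key]][1] += 1
        if r["outcome"] == "good":
            tally[r[key]][0] += 1
    return {g: good / total for g, (good, total) in tally.items()}

by_pattern = calibration(records, "pattern")  # organisational learning
by_author = calibration(records, "author")    # accountability
```

Grouping by `pattern` is the stated purpose; grouping by `author` is one line away. That proximity is the political problem.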
Most discussion of enterprise AI treats this as a technical problem. But the difficulty is political. The same system that enables organisational learning also enables accountability. A graph that says “this reasoning pattern consistently underperforms” is, in practice, a graph that says “this person’s judgement is consistently wrong.”
The essay argued that judgement graphs capture what enterprises have never systematically stored. That is true. But it sidesteps a harder question: are enterprises ready to act on what the graph reveals, especially when it reveals something about someone with authority?
The honest answer is that most are not. The Architecture of Dissent series explored the structural conditions under which organisations can surface uncomfortable truths. A judgement graph without those conditions is just a more expensive way to confirm what everyone already suspects but nobody says.
A second fair critique: the essay assumes judgement can be captured. Much senior decision-making is pattern recognition that the decision-maker cannot articulate. “I’ve seen this before and it feels wrong” is real judgement, but it does not produce a typed artifact. What the judgement graph captures is the expressible portion of reasoning. That is still far more than what enterprises capture today, which is nothing. But the gap between expressed reasoning and actual reasoning is worth being honest about.
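The gap between expressed and actual reasoning can be made visible in the shape of the artifact itself. The sketch below is an assumption, not a real schema: the class name, fields, and the `gut_flag` marker are all invented to show that tacit pattern recognition can at best be flagged, never decomposed into structured reasons.

```python
from dataclasses import dataclass, field

# Hypothetical "typed artifact" for a single judgement.
# Only the expressible portion of reasoning gets structure.
@dataclass
class JudgementRecord:
    decision: str
    stated_reasons: list = field(default_factory=list)  # expressible portion
    confidence: float = 0.5  # self-reported, 0..1
    # "I've seen this before and it feels wrong" has no structured
    # representation: the record can mark that tacit judgement was
    # involved, but cannot capture its content.
    gut_flag: bool = False

r = JudgementRecord(
    decision="decline the acquisition",
    stated_reasons=["integration cost exceeds synergy estimate"],
    confidence=0.7,
    gut_flag=True,  # the part the graph cannot see inside
)
```

The `stated_reasons` field is everything the graph can calibrate; `gut_flag` is an honest admission of the rest.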
Both critiques make the same underlying point: the hard part of judgement infrastructure is not technical. It is organisational.