April 9, 2026 · 5 min read

Your Blind Spots Are Not Accidental

The questions your writing avoids are more revealing than the ones it engages

When you ask an AI to read your writing and find your blind spots, you expect it to return something like a list of cognitive biases. Availability heuristic. Confirmation bias. Dunning-Kruger. The generic catalogue of ways humans get things wrong.

That is not what it returns.

What it returns is specific. "Your writing engages seriously with critique of institutions but consistently avoids the operational question of what would replace them." Or: "You cite empirical research heavily when it supports your frameworks and treat it as methodologically suspect when it doesn't." Or: "You engage extensively with French theory but never with the analytic tradition that critiques it, even where the critiques are directly relevant."

These are not generic biases. They are fingerprints of avoidance — and they are almost never accidental.

Why avoidance is patterned

Blind spots cluster for reasons. The most common: following a line of thinking seriously would require revising a conclusion you are not ready to revise. Not because you are intellectually dishonest — because the conclusion is structural. It underlies too much of your other thinking. To pull it out would require rebuilding too much.

The philosopher who became a thoroughgoing anti-realist in graduate school and never seriously engaged with scientific realism again. The entrepreneur who thinks systemically about everything except the organizational dynamics of their own team. The rationalist who applies Bayesian reasoning everywhere except to the question of whether Bayesian reasoning is the right tool.

In each case, you can see the avoidance from outside. From inside, it is invisible because it presents as simply "not finding those arguments interesting" or "not having gotten around to that yet."

The protection function of blind spots

There is a reason cognitive scientists describe certain belief systems as "self-sealing." If every counterargument gets reframed as evidence of the critic's confusion, or classified as addressing a different question than the one you're asking, or acknowledged in principle but never actually incorporated — the system is protecting something.

Usually it is protecting something real. A framework that has been genuinely productive. An intellectual identity that is bound up with certain positions. A community whose esteem depends on holding particular views.

The blind spot is not stupidity. It is self-preservation, operating below the threshold of awareness.

Why making them explicit helps

The value of having your blind spots named is not that you immediately overcome them. You don't. The protection function doesn't dissolve just because you can now see it operating.

The value is that they become available for examination. The thing that was happening below the threshold of awareness is now above it. You can now ask: is this avoidance load-bearing? If I followed this argument seriously, would it actually require me to revise something, or just expand something? Is the counterargument I've been avoiding actually as strong as I've been implicitly treating it?

Sometimes the answer is: yes, the avoidance was protecting something real, and it should. Sometimes the answer is: no, I've been avoiding an argument that doesn't actually threaten anything, and I have no good reason to keep avoiding it.

You can't have that internal conversation about a blind spot you haven't identified. Once it's named, the conversation is at least possible.

The asymmetry of self-knowledge

There is a persistent asymmetry in intellectual self-knowledge: we are generally much better at identifying the blind spots of people we disagree with than at identifying our own. We see clearly how someone else's avoidance is motivated, how their framework is self-sealing, how they dismiss inconvenient evidence. We apply much less precision to ourselves.

This is not hypocrisy. It is the structure of being inside a mind rather than outside it. You cannot triangulate on yourself the way you can triangulate on someone else.

What an outside reader — human or AI — can do is provide the external perspective that the structure of self-examination makes unavailable from inside. Not because they know you better than you know yourself in every respect. Because they are not inside the protective structure you have built, and can see its shape from outside.

The question worth sitting with, after you've seen your blind spots named: which of these am I ready to examine, and which am I still protecting? That answer is itself a form of self-knowledge worth having.