March 2026 · 12 min
Predictive Processing and Why Your Priors Are Likely Wrong
There is a model of human perception that most people carry around implicitly, and it goes something like this: the world is out there, the senses take it in, and the brain assembles a picture of what is happening. Perception as reception. The mind as camera.
This model is wrong. Not approximately wrong — architecturally wrong. And the ways in which it is wrong have direct, practical implications for anyone attempting to see their own situation clearly.
The Brain as Prediction Machine
The framework that has largely replaced the camera model in cognitive neuroscience goes by various names — predictive processing, predictive coding, the Bayesian brain hypothesis. The core claim, developed most rigorously by Karl Friston and extended by Andy Clark, Jakob Hohwy, and others, is this: the brain is not primarily in the business of receiving sensory data. It is primarily in the business of predicting sensory data.
At every level of the cortical hierarchy, the brain is generating predictions about what it expects to encounter — and then comparing those predictions against the signals actually arriving from the senses. What you consciously experience as perception is not raw input. It is the brain’s best current model of the world, updated only when the discrepancy between prediction and input — the prediction error — is large enough to warrant revision.
This is not a minor philosophical distinction. It means that your experience of reality is, in a very literal sense, a controlled hallucination. You are not seeing the world. You are seeing what your brain expects to see, corrected at the margins by incoming data. Most of the time, this works remarkably well. The predictions are accurate enough that you navigate the world without incident. But “accurate enough to navigate” and “accurate” are not the same thing.
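To make the loop concrete, here is a deliberately minimal sketch in Python. It is a caricature for illustration, not a model from the predictive-processing literature; the single `prior_confidence` parameter stands in for the precision-weighting the brain performs at every level of the hierarchy.

```python
# A toy predictive loop: perceive by predicting, then correct by weighted error.
# A one-dimensional caricature of predictive processing, not a cortical model.

def perceive(sensory_stream, prior_mean=0.0, prior_confidence=0.9):
    """Track a signal by prediction plus error correction.

    prior_confidence in [0, 1]: how much the system trusts its own
    prediction relative to incoming data. High confidence -> small updates.
    """
    model = prior_mean                          # the system's current best guess
    percepts = []
    for observation in sensory_stream:
        prediction = model                      # top-down: what we expect to sense
        error = observation - prediction        # bottom-up: the prediction error
        # Revise the model only by the share of error treated as signal, not noise.
        model = prediction + (1.0 - prior_confidence) * error
        percepts.append(model)                  # the experienced percept
    return percepts

signal = [0.0, 0.0, 5.0, 5.0, 5.0]              # the world suddenly changes
print(perceive(signal, prior_confidence=0.9))   # sticky: barely registers the change
print(perceive(signal, prior_confidence=0.3))   # flexible: tracks the input closely
```

Run with high confidence, the system barely registers the change in the signal; run with low confidence, it tracks the input closely. Perception, on this picture, lives on that dial.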
Why Priors Are Sticky
In the predictive processing framework, your existing beliefs and expectations are called priors. They are not neutral starting points. They are active, weighted predictions that shape what you perceive before any evidence arrives. Strong priors — beliefs held with high confidence — are particularly resistant to updating, because the system treats incoming evidence that contradicts them as noise rather than signal.
This is computationally efficient. If you have a strong model of how your kitchen is arranged, you do not need to re-examine every object each time you walk in. The prior handles it. But the same mechanism that makes you efficient in your kitchen makes you resistant to updating your picture of yourself, your relationships, and your situation — because those priors are also strong, also weighted, and also actively shaping what you perceive.
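The stickiness can be made quantitative. In the textbook Bayesian picture, a belief is a mean plus a precision (inverse variance), and evidence shifts the belief in proportion to its precision relative to the prior's. A one-dimensional conjugate-Gaussian update, simplified here for illustration, shows why a confident prior barely moves:

```python
# Conjugate Gaussian belief update: the posterior mean is a precision-weighted
# average of the prior mean and the observation. Textbook Bayes in one dimension.

def update_belief(prior_mean, prior_precision, observation, evidence_precision):
    posterior_precision = prior_precision + evidence_precision
    posterior_mean = (prior_precision * prior_mean +
                      evidence_precision * observation) / posterior_precision
    return posterior_mean, posterior_precision

# The same surprising evidence (observation = 10) against two different priors:
weak = update_belief(prior_mean=0.0, prior_precision=1.0,
                     observation=10.0, evidence_precision=1.0)
strong = update_belief(prior_mean=0.0, prior_precision=50.0,
                       observation=10.0, evidence_precision=1.0)

print(weak)    # mean moves to 5.0: the evidence counts for half
print(strong)  # mean moves to ~0.2: the evidence is nearly inert
```

Note that the strong prior is not ignoring the evidence. It is weighting the evidence exactly as its own confidence dictates, which is precisely why confidently held models of the self barely move.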
Consider how this plays out in self-knowledge. You carry a model of who you are — your capabilities, your values, your motivations. This model was not arrived at through careful empirical investigation. It was assembled over years from partial evidence, emotional experience, social feedback, and motivated reasoning. It is a prior. And like all strong priors, it resists disconfirming evidence.
When someone offers you feedback that contradicts your self-model, a prediction error is generated. But the system has several ways to minimize that error without actually updating the model. You can dismiss the source. You can reinterpret the feedback. You can acknowledge it intellectually without allowing it to alter the operative prior. These are not conscious strategies. They are the default operations of a system designed to maintain the stability of its own predictions.
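In Bayesian terms, each of these escape routes is the same move wearing different clothes: instead of revising the belief, the system revises how the evidence enters the update. The mapping below is loose and illustrative, my gloss rather than a formal model of these strategies, but it shows how dismissal and reinterpretation leave the prior untouched:

```python
# Three ways to resolve the same disconfirming feedback (observation = 10
# against a confident self-model at 0). Only the first revises the belief.

def update_belief(prior_mean, prior_precision, observation, evidence_precision):
    posterior_precision = prior_precision + evidence_precision
    return (prior_precision * prior_mean +
            evidence_precision * observation) / posterior_precision

SELF_MODEL, CONFIDENCE, FEEDBACK = 0.0, 10.0, 10.0

# Honest update: take the feedback at face value.
honest = update_belief(SELF_MODEL, CONFIDENCE, FEEDBACK, evidence_precision=5.0)

# Dismiss the source: assign the evidence almost no precision ("they don't know me").
dismissed = update_belief(SELF_MODEL, CONFIDENCE, FEEDBACK, evidence_precision=0.1)

# Reinterpret the feedback: shrink what it is taken to mean before updating.
reinterpreted = update_belief(SELF_MODEL, CONFIDENCE, FEEDBACK * 0.1,
                              evidence_precision=5.0)

print(honest)         # ~3.33: the self-model actually moves
print(dismissed)      # ~0.10: prior effectively intact
print(reinterpreted)  # ~0.33: prior effectively intact
```

All three resolutions reduce the prediction error; only one changes the model. The third strategy from above, intellectual acknowledgment without revision, is simpler still: the update never runs at all.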
Motivated Reasoning as Prediction-Error Minimization
Motivated reasoning, the tendency to reason your way to the conclusions you already want, is one of the most robust findings in cognitive science. Within the predictive processing framework, it is not a bug. It is a feature of the system’s commitment to minimizing prediction error.
When you have a strong prior about yourself — “I am a good leader,” “I am honest in my relationships,” “I made the right decision” — evidence that contradicts this prior creates metabolically expensive prediction error. The cheapest resolution, computationally, is to explain the evidence away. The more expensive resolution is to revise the prior. And the most expensive resolution of all is to revise a prior that is deeply integrated into your identity — because that revision cascades through the entire model.
This is why genuinely honest self-assessment is rare. It is not a matter of willpower or moral character. It is a matter of working against the fundamental architecture of the system you are using to do the assessment. Introspection, done alone, tends to confirm existing priors rather than challenge them — because the same system that generated the priors is the system doing the introspecting.
The Role of Dialectical Challenge
This is where the dialectical process becomes not merely useful but epistemologically necessary. If the problem is that your own cognitive system is designed to protect its priors from revision, then the solution cannot come from inside that system alone. You need an external source of prediction error — one that is calibrated enough to target the right priors and persistent enough to prevent the system from explaining the error away.
A skilled dialectical partner does exactly this. They generate prediction errors that you cannot easily dismiss, because the errors arise in real time, in conversation, through a process you are participating in. When someone asks you a question and your answer does not hold up under examination, when the internal model visibly fails to account for the evidence you yourself are presenting, the prediction error is hard to minimize without actually updating the model.
This is different from reading a book that challenges your beliefs, or from a friend telling you something you do not want to hear. Those can be dismissed. A live dialectical exchange, conducted honestly, creates a form of prediction error that is harder to dismiss because you are co-producing the evidence. The contradictions are emerging from your own statements, surfaced by someone who is paying close enough attention to notice them.
Implications
If you accept the predictive processing framework — and the empirical support for it is substantial and growing — then several things follow for anyone serious about self-knowledge:
First, your current picture of your own situation is almost certainly wrong in ways you cannot see from inside it. This is not a moral judgment. It is a structural consequence of how the system works.
Second, the standard approaches to self-improvement — reflection, journaling, goal-setting — are limited by the fact that they operate within the same system that generated the miscalibrated priors in the first place. They can be useful at the margins, but they cannot reliably surface the deep structural errors.
Third, genuine updating requires external, adversarial, sustained input — someone who can generate prediction errors that your system cannot cheaply explain away. This is what honest dialectical inquiry provides. Not advice. Not motivation. Not a plan. A sustained challenge to the accuracy of your model, conducted by someone who is not invested in your model being right.
The brain is a prediction machine. It is very good at what it does. But what it does is not the same as seeing clearly. Seeing clearly requires something the system was not designed to do on its own — and the willingness to seek it out is not weakness. It is the most rigorous form of intellectual honesty available.
References
- Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
- Clark, A. (2016). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.
- Hohwy, J. (2013). The Predictive Mind. Oxford University Press.
- Seth, A. K. (2021). Being You: A New Science of Consciousness. Dutton. [The “controlled hallucination” framing.]
- Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.