Monday, 12 January 2026

AI Is Not a Calculator – It’s a Thinking Prosthesis

I keep hearing that AI “stops us thinking”. It is usually presented as a warning, occasionally as a lament, and almost never as a properly examined claim. What it mostly reveals is a category error, combined with a deep discomfort with tools that refuse to behave like oracles.


Some people treat AI as if it were a calculator. One plus one equals two. A universal truth embedded in silicon. Trust the computer and the answer is correct. From there comes the leap: if you trust computers, you must trust AI. But that is precisely the mistake. A calculator executes deterministic rules over fixed inputs: the same sum returns the same answer every time. A large language model does not. It samples plausible continuations from statistical patterns in its training data, and the same prompt can return different answers on different runs. Treating one like the other is not sophistication. It is confusion.
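To make that distinction concrete, here is a minimal sketch in Python. The toy “model” and its word weights are invented purely for illustration; the only point is that one function is a rule and the other is a draw from a distribution.

    import random

    # Deterministic: same inputs, same output, every time.
    def calculator(a: int, b: int) -> int:
        return a + b

    # Probabilistic: a toy "model" that samples the next word from a
    # weighted distribution. The words and weights here are invented;
    # a real model learns its distribution from data.
    def toy_model(prompt: str) -> str:
        options = ["correct", "plausible", "wrong"]
        weights = [0.5, 0.3, 0.2]
        return prompt + " " + random.choices(options, weights=weights)[0]

    print(calculator(1, 1))            # always 2
    print(toy_model("The answer is"))  # varies from run to run

Run it twice. The calculator never surprises you; the toy model sometimes will. That, in miniature, is the category error.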

Used properly, AI is not a calculator at all. It is a thinking prosthesis. It does not replace cognition any more than a prosthetic limb replaces intent. It extends reach, redistributes effort, and still requires control, judgement, and constant feedback from the user. Used well, it amplifies thought. Used badly, it is clumsy, misleading, or actively harmful. That is not a flaw. It is the nature of tools.

It is also a mistake to think of AI as a sentient being with beliefs or intentions. But here is the apparent paradox: the correct way to use it is to treat it exactly as you would a fallible human debater. Not because it is conscious, but because the discipline required is the same. You challenge assumptions, test claims, ask for evidence, and push back when something does not ring true. You do not grant authority simply because something sounds fluent. You interrogate.

This is why AI is so effective at cutting through populist politics. Populism survives on vagueness, emotional pressure, and the hope that nobody calmly checks the workings. Feed a manifesto through AI in this interrogative mode and the theatre evaporates. “We will cut taxes and increase spending” turns into “which taxes, by how much, and offset how?”. “We will fix the NHS” becomes “with what workforce, on what timescale, and under what funding constraints?”. The slogans do not survive contact with basic questions of cost, legality, capacity, and sequencing. That is not ideology. It is arithmetic and law.

Crucially, this works only because AI is not being treated as an authority. It is being used as a hostile civil servant, a policy analyst who keeps asking dull, unfashionable questions until either the proposal makes sense or it collapses. The judgement still belongs to the human. The machine just refuses to be impressed.
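For the mechanically minded, that mode is nothing more than a fixed set of dull questions applied to every pledge. Here is a minimal sketch, where ask is a hypothetical placeholder for whatever model interface you actually use, and the question list is merely one plausible starting set:

    # The "hostile civil servant": the same dull questions, every pledge.
    # `ask` is a hypothetical placeholder; wire it to your model of choice.
    def ask(prompt: str) -> str:
        raise NotImplementedError("connect this to a real model interface")

    DULL_QUESTIONS = [
        "What does this cost, and over what period?",
        "Which taxes, savings, or borrowing offset it?",
        "Is it lawful under current statute and treaty obligations?",
        "What workforce and physical capacity does it assume?",
        "In what order must the steps happen, and what blocks each one?",
    ]

    def interrogate(pledge: str) -> list[str]:
        return [
            ask(f'Pledge: "{pledge}"\nQuestion: {q}\n'
                "Answer from verifiable facts only; say plainly if unknown.")
            for q in DULL_QUESTIONS
        ]

The value is not in the code. It is in refusing to vary the questions.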

For artists, thinking often does mean struggle. False starts, ambiguity, the slow and sometimes painful discovery of intent. In that context, the process is not incidental; it is the work. If a tool bypasses that struggle and delivers something fluent too early, it can hollow out meaning. That concern is legitimate. Artists are not being precious when they defend it.

The problem arises when that definition of thinking is universalised. Enter the transactional user of AI. Prompt in, answer out, job done. No interrogation, no context, no ownership of consequences. These are also the people most exercised about hallucinations and wrong answers. Of course they are. AI systems are designed to be fluent and accommodating, which nudges users towards passive consumption. That is a design issue as much as a user one. But when an answer turns out to be wrong, the reaction is telling. Instead of adjusting how the tool is used, the entire thing is dismissed. The baby is thrown out with the bathwater, as though fallibility were a scandal rather than the default condition of all knowledge.

This is often framed as concern about confirmation bias. AI, we are told, just tells you what you want to hear. But that gets the causality backwards. AI has no beliefs to defend. It mirrors the frame it is given. Ask leading questions and you will usually get agreeable answers. Reward fluency over challenge and you will get fluency. That is not ideological bias. It is deference. The crucial difference from humans is that AI does not resist contradiction in the way people often do. Ask it to argue the opposite case and it will do so immediately. Used properly, that makes it a de-biasing tool rather than a reinforcing one.
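One concrete way to use that deference against itself is to demand both sides before judging either. Another minimal sketch, with the same hypothetical ask placeholder standing in for a real model interface:

    # The "argue the opposite case" pattern, as a fixed ritual.
    # `ask` is a hypothetical placeholder; wire it to your model of choice.
    def ask(prompt: str) -> str:
        raise NotImplementedError("connect this to a real model interface")

    def argue_both_sides(claim: str) -> dict[str, str]:
        case_for = ask(
            "Make the strongest evidence-based case FOR this claim, "
            f"stating the assumptions it rests on: {claim}"
        )
        case_against = ask(
            "Now make the strongest evidence-based case AGAINST the same "
            f"claim, attacking those assumptions: {claim}"
        )
        open_questions = ask(
            "List the factual questions that would have to be settled "
            f"to choose between these arguments:\n{case_for}\n---\n{case_against}"
        )
        return {"for": case_for, "against": case_against, "open": open_questions}

The human still judges. The ritual simply makes it harder to hear only the agreeable half.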

We have been here before. Encyclopaedias were once treated as definitive, yet they were frozen in time until the next edition and often wrong. Wikipedia simply made that reality explicit. It is dynamic, corrigible, and openly human. It contains errors because humans introduce them, and it improves because humans challenge them. No one sensible claims Wikipedia “stops us thinking”. It demands thinking. AI sits much closer to that tradition than to a calculator dispensing truth.

The same lesson applies to photography. It did not destroy art, but it did disrupt it. It removed the monopoly on realism and forced a redefinition of what mattered. Some felt threatened. Others were liberated. The outcome was not cultural collapse but change. Confusing a method with meaning led critics astray then, and it does so now.

Outside the arts, this distinction matters even more. In medicine, engineering, economics, or policy, thinking is not measured by how much effort you expend generating information. It is measured by judgement, synthesis, and responsibility. Tools are used precisely so attention can move to where mistakes actually matter. That does not absolve designers of responsibility for how tools shape behaviour, but nor does it justify pretending the tools dispense certainty.

There is a final point that is often missed. Over long use, the most important thing being trained is not the AI but the user. A thinking prosthesis only works if the wearer learns how to walk with it. People who dip in occasionally expect certainty and are disappointed. Those who treat the system as something to argue with, rather than something to obey, improve.

AI does not stop thinking. It shifts where thinking happens and makes intellectual laziness more visible. Used well, it sharpens judgement. Used badly, it produces fluent nonsense. Treating it like a calculator is not trust. It is a category error, and one we have made before.

