3 Comments
Data Dynamics & ML

Great post. Thanks for articulating it clearly.

Chris Bateman

Dear John,

I appreciate the effort to expose the inner workings of LLM-based AI, although you cannot in fact expose that part of the system, only the steps taken in the inference engines, which, if I have understood this piece correctly, is what you are working on. Attempts to expose these token-manipulation steps could potentially be helpful in preventing the twin epistemic disasters of our time, namely (1) epistemic fracture and the 'anything goes' discourse of cultural madness, and (2) premature certainty and the 'mass society' censorship that goes with it.

As is to be expected from someone working in the field, you write very positively about these tools!

"Anyone who has used AI to draft an email, summarize a report, or brainstorm ideas in the last few years has already felt the colossal power and potential of this technology."

That sentence reads as upbeat, and is clearly intended to express this positivity. As a former AI researcher (it was my Master's degree), I do not view the "colossal power and potential" here primarily positively. I am reminded of the idea that atomic power would be an unalloyed 'advance' for humanity... right before it was harnessed to make an unprecedentedly powerful bomb.

You yourself raise one of the key risks when you say: "When disagreement is flattened through government control or by technological design, learning breaks down fast — and our capacity to discover truth and generate knowledge falls with it."

I couldn't agree more with this sentiment! Hence my support of FIRE.

However, there is a problem that runs far deeper than this, and the idea that we are going to advance our position with respect to knowledge, rather than accelerate the epistemic fracture, seems to me somewhat optimistic. The illusion that LLM-based AI can advance 'knowledge' is based on the conception of knowledge as propositional. This is the prevailing epistemic regime, and it has been a quietly rolling juggernaut of disaster, yet we have largely failed to recognise it as such because we no longer engage with epistemology. We just assume that we have a model that works, even as it breaks, and breaks ever more seriously!

My short work of epistemology, Wikipedia Knows Nothing, is coming out in a second edition in a few weeks' time. I'd be glad to send you a copy if you're interested in looking at the epistemic problem from a radically different perspective. 🙂

The bottom line is that the potential positive impact of these tools consists primarily of accelerated workflows for people who *already* have knowledge in such-and-such a field. The acceleration of epistemic collapse - whether by the Carnage behind Door Number 1 (fracture) or the Horror behind Door Number 2 (mass society censorship) - seems to me the far more likely outcome of the widespread adoption of these tools. They promise 'speed and intelligence', but require well-honed prior skills to operate in epistemic safety... skills which the tools themselves undermine in their current forms by offering 'lazy shortcuts' and the cognitive offloading of mindful practice.

I greatly appreciate your attempts to retrofit a panopticon into the genie's bottle. If only we could have had these sensible discussions about what we were letting out before we opened the stopper and charged on ahead regardless. But then, prudence has never been our species' strong suit.

With unlimited love,

Dr Chris Bateman