Will AI Kill Our Freedom To Think?
You can now apply for $1 million in spot grants to help AI promote free speech and free thought!
The current iteration of AI already edits our emails, sorts our inboxes, and picks the next song we listen to. But convenience is just the start. Soon, the same technology could determine which ideas ever reach your mind — or form within it.
Two possible futures lie ahead. In one, artificial intelligence becomes a shadow censor: Hidden ranking rules will throttle dissent, liability fears will chill speech, default recommendations and flattering prompts will dull our judgment, and people will stop questioning the information they're given. This is algorithmic tyranny.
In the other, AI becomes a partner in truth seeking. It will surface counterarguments, flag open questions, draw on insight far beyond any single mind, and prompt us to check the evidence and sources. Errors will be chipped away, and knowledge will grow. Our freedom to question everything will stay intact and even thrive.
The stakes couldn't be higher. AI currently guides about one-fifth of our waking hours, according to our 2024 time-use analysis. It drafts our contracts, diagnoses our diseases, and even ghostwrites our laws. The principles coded into these systems are becoming the hidden structure that shapes human thought.
Throughout history, governments have banned books, closed newspapers, and silenced critics. As Socrates discovered when sentenced to death for "corrupting the youth," questioning authority has always carried risks. AI's power to shape thought risks continuing one of humanity's oldest patterns of control.
The goal hasn't changed; the method has.
Today, the spectrum of censorship runs from obvious to subtle: China's great firewall directly blocks content to maintain party control; "fact-checking" systems apply labels with the goal of reducing misinformation; organizations make small, "safety-minded" decisions that gradually shrink what we can see; and platforms overmoderate in hopes of appearing responsible. Controversial ideas don't have to be banned; they simply vanish when algorithms, trained to "err on the side of removal," mute anything that looks risky.
The cost of idea suppression is personal. Consider a child whose asthma could improve with an off-label treatment. Even if thousands of people have used the medication successfully, an AI search may show only "approved" protocols, burying the lifesaving option. Once a few central systems become our standard for truth, people might believe no alternative is worth investigating.
From medicine to finance to politics, invisible boundaries now have the power to shape what we can know and consider. Against these evolving threats stand timeless principles we have to protect and promote.
These include three foundational ideas, articulated by the philosopher John Stuart Mill, for protecting free thought: First, admit humans make mistakes. History's abandoned "truths" — from Earth-centered astronomy to debunked racial hierarchies — prove that no authority escapes error. Second, welcome opposing views. Ideas improve only when challenged by strong counterarguments, and complex issues rarely fit a single perspective. Third, regularly question even accepted truths. Even correct beliefs lose their force unless frequently reexamined.
These three principles — what we call "Mill's Trident" — create a foundation where truth emerges through competition and testing. But this exchange needs active participants, not passive consumers. Studies show we learn better when we ask our own questions rather than just accepting answers. As Socrates taught, wisdom begins with questions that reveal what we don't know. In this exchange of ideas, the people who question most gain the deepest knowledge.
To keep the free development of thought alive in the AI age, we must translate those timeless principles into practical safeguards. Courts have the power to limit government censorship, and constitutional protections in many democracies are necessary bulwarks to defend free expression. But these legal shields were built to check governments, not to oversee private AI systems that filter what information reaches us.
Meta recently shared the weights — the raw numbers that make up the AI model — for Llama 3. This is a welcome move toward transparency, but plenty about Llama 3 still stays out of view, and even if everything were public, the eye-watering cost of the computation involved would put true replication out of reach for almost everyone. Moreover, many other leading AI systems remain completely closed, their inner workings hidden from outside scrutiny.
Open weights help, but transparency alone won't solve the problem. We also need open competition. Every AI system reflects choices about what data matters and what goals to pursue. If one model dominates, those choices set the limits of debate for everyone. We need the ability to compare models side by side, and users must be free to move their attention — and their data — between systems at will. When AI systems compete openly, we can compare them against each other in real time and more easily spot their mistakes.
To truly protect free inquiry moving forward, the principles we value must be built into the technology itself. For this reason, our organizations — the Cosmos Institute and the Foundation for Individual Rights and Expression (FIRE) — are announcing $1 million in grants to back open-source AI projects that widen the marketplace of ideas and ensure the future of AI is free.
Think of an AI challenger that pokes holes in your presuppositions and then coaches you forward; or an arena where open, swappable AI models debate in plain view in front of a live crowd; or a tamper-proof logbook that stamps every answer an AI model gives onto a public ledger, so nothing can be quietly erased and every change is visible to all. We want AI systems that help people discover, question, and debate more, not systems that invite them to stop thinking.
For us as individuals, the most important step is the simplest: Keep asking questions. The pull to let AI become an "autocomplete for life" will feel irresistible. It's up to us to push back on systems that won't show their work and to seek out the unexpected, the overlooked, and the contrarian.
A good AI should sharpen your thinking, not replace it. Your curiosity, not any algorithm, remains the most powerful force for truth.
This article was originally published in Reason on May 16, 2025.
More details on the $1 million in spot grants, and how to apply
We believe AI should empower open inquiry, not suppress it.
That’s why Cosmos Institute and FIRE are funding early-stage projects that advance truth-seeking.
Details
Grant pool: $1 million (cash + Prime Intellect compute credits)
Typical award: $1k-10k fast grants (larger amounts considered for exceptional proposals)
Community access: Connect with a vetted network of builders and thinkers at the AI × philosophy frontier—advisors, mentors, and past grantees
Duration: 90 days to develop a working prototype
Application window: Rolling review with decisions in ~3 weeks
Austin symposium (December '25): Top projects funded by fall will be invited to demo; selection is competitive and at the program’s discretion.
Join our newsletter to be updated about further rounds and opportunities.
Focus areas
1. Marketplace of ideas. Can we build systems that uphold free expression and the open contestation of ideas?
Admit mistakes. History’s abandoned “truths”—from Earth-centered astronomy to debunked racial hierarchies—prove that no authority escapes error.
Welcome opposing views. Ideas improve only when challenged by strong counterarguments, and complex issues rarely fit a single perspective.
Regularly question even accepted truths. Even correct beliefs lose their force unless frequently re-examined.
For more, see Mill’s Trident.
2. Promoting inquiry. Can we build AI tools that challenge our thinking and refine our arguments, rather than delivering pre-cooked answers?
Ask before you answer. Wisdom starts with questions that expose what you don’t yet know.
Test with opposition. Ideas sharpen only after sparring with credible counter-arguments.
Trace the reasoning. Following every step from premise to conclusion prevents blind acceptance and sparks the next round of inquiry.
For more, see The Philosophic Turn for AI Agents.
Illustrative projects
AI challenger that pokes holes in your presuppositions, then coaches you forward.
Debate arena where open, swappable AI models argue in plain view, scored by a live prediction market.
Tamper-proof logbook that stamps every answer an AI model gives onto a public ledger, so nothing can be quietly erased and every change is visible to all (a minimal sketch follows this list).
Truth-seeking eval that tests AI systems' ability to tolerate different views and make intellectual progress in an environment like Concordia.
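To make the logbook idea concrete, here is a minimal sketch in Python of an append-only, hash-chained record of model answers: each entry commits to the hash of the previous one, so quietly editing or deleting history becomes detectable. The class and field names (such as `AnswerLog`) are hypothetical illustrations, not a prescribed design; a real project would also need to publish or replicate the log (for example, on a public ledger) so that no single operator can rewrite the whole chain.

```python
import hashlib
import json
import time


class AnswerLog:
    """Append-only log of model answers; each entry commits to the previous
    entry's hash, so silently editing or deleting a record breaks the chain."""

    def __init__(self):
        self.entries = []

    def append(self, model: str, prompt: str, answer: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "timestamp": time.time(),
            "model": model,
            "prompt": prompt,
            "answer": answer,
            "prev_hash": prev_hash,
        }
        # The entry's hash covers its content plus the previous entry's hash.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry was altered or removed."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


if __name__ == "__main__":
    log = AnswerLog()
    log.append("example-model", "Is the off-label treatment ever used?", "In some cases, yes ...")
    print(log.verify())               # True
    log.entries[0]["answer"] = "No."  # simulate quiet tampering
    print(log.verify())               # False
```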
These ideas are starting points, not guidelines. If your project advances truth-seeking in a novel way, we want to see it.
Greg Lukianoff is president and CEO of the Foundation for Individual Rights and Expression (FIRE) and co-author of The Canceling of the American Mind (Simon & Schuster) and The Coddling of the American Mind (Penguin Press).
Philipp Koralus is a Cosmos Institute senior research fellow and the founding director of the Laboratory for Human-Centered AI (HAI Lab) at the University of Oxford. The views expressed do not necessarily reflect those of the university.