‘Who decides?’: The question that shatters the illusion of censorship as safety
Polling shows many Americans want AI and speech regulation…until they try to choose the regulator
Did you hear any speaker in opposition to this motion, eloquent as one of them was, to whom you would delegate the task of deciding for you what you could read? Do you know anyone? Hands up. Do you know anyone to whom you would give this job? Does anyone have a nominee? You mean there’s no one in Canada good enough to decide what I can read or hear? I had no idea.
Support for censorship is a bit like prog rock: a lot of people are intrigued by the theory, but disturbed by the reality. It's widely known that while support for freedom of speech polls very highly in the abstract, that support evaporates when respondents are asked about specific kinds of unpopular or distasteful speech. A similar, and promising, disparity appears when it comes to censorship.
When asked whether AI-generated content poses a threat to elections, many Americans nod vigorously in favor of government regulation. But the moment you ask them to imagine who will wield that power — to picture an actual bureaucrat, politician, or tech executive deciding for them what they’re allowed to say or hear — the consensus fractures.
New research on AI regulation shows this tension with precision. Last month, FIRE released results of a poll we commissioned from Morning Consult to gauge the public’s views on AI regulation. The results show that while many people say they have concerns about AI and support censorship in the abstract, they also deeply distrust government action on the issue.
This reveals a telling contradiction at the heart of our debates over speech, safety, and AI — one that hinges on an unavoidable question: “Who decides?”
When our participants were asked, “Which of the following do you think would be more harmful to the electoral process?” 60% chose “The sharing of AI-generated content,” while 40% chose “Government regulation of the use of AI-generated content.” On its face, that indicates a sizable appetite for regulation!
But what happened when we asked about a more concrete harm?
To the question, “How concerned, if at all, would you be about government regulation of election-related AI content being abused to suppress criticism of elected officials?” participants showed a far smaller appetite for government regulation of AI. Forty-five percent of respondents said they were “very” or “extremely” concerned, 69% said they were at least “moderately” concerned, and 81% overall reported being concerned to some extent.
This calls to mind a similar result from the polling we did as part of our 2024 Social Media Report. Only 21% of respondents in that poll agreed that social media should be less regulated, while 51% “somewhat” or “strongly” disagreed with that statement, a compelling signal that more people favor greater regulation of social media, not less.
But, as we’ve seen, this sentiment runs headlong into the question, “Who decides?”
We asked the same panel, “How much, if at all, do you trust social media companies to make fair decisions about what information is allowed to be posted on their platform?” and the results are unsurprising. A whole 1% said they trust the platforms “a great deal” compared to 41% who distrust the platforms “a great deal.”
That’s a big difference: 41 times as many respondents deeply distrust the platforms as deeply trust them!
When asked in our poll, a whopping 3% of participants said they “greatly trust” the government to make fair decisions about content on social media, and 41%, matching the share who “greatly” distrusted the social media platforms in the earlier question, said they “greatly” distrust the government to do the same.
So when it comes to the “Who decides?” question, what other options are left for potential censors? Likely not anyone in academia: trust in higher education has been dropping precipitously year over year, and public polling from Gallup reports similarly low levels of confidence in higher education and tech companies.
However, the argument persists. The polling shows that the public is dissatisfied with the status quo when it comes to AI, hate speech, and some of the content minors are exposed to on social media. There seems to be a widely held sense that someone needs to do something, which leads many to advocate for censorship despite the many problems it will inevitably cause. Psychologists call this phenomenon “action bias”: the feeling that a swing and a miss, or even a swing that sends you tumbling to the floor, is better than no swing at all.
Indeed, we at FIRE often see this with our advocacy. One takeaway from our research is that, when people perceive a harm as occurring right now (e.g., the perception that children are currently being harmed by social media use), they aren’t easily persuaded by appeals to vague, hypothetical future harms (e.g., “imagine how this law can be abused”). This is the case even when those hypothetical harms are historically well-grounded to the point of being inevitable.
This reality unfortunately puts free speech advocates on the back foot, because would-be censors are selling quick fixes the same way advertisers do. “Just give someone the power to decide what people can say, and it’ll solve all of our problems right now” is always going to sell better than “If you do that, it’ll be really bad for everyone in the future.” Censors’ arguments depend on people not asking, or thinking too hard about, the fatal question of “Who decides?”, to which there is no satisfactory answer. And it’s always easier not to ask tough questions than to wrestle with them.
However, our polling suggests that people do see the issue with granting these powers to someone as soon as they have to imagine a real institution or person holding that power. Once it’s more concrete than the abstract “someone should do something,” the issues with handing the power to determine what others may see, read, and hear become very apparent.
This is a signal that, as free speech advocates attempt to persuade the public that censorship is the wrong way to go, we will be more effective by being more concrete more quickly. And nothing makes things more concrete than asking the powerful question, “Who decides?”
SHOT FOR THE ROAD
If you haven’t yet seen my TED Talk, well, first of all, what are you waiting for? But there’s never been a better time than now to give it a first (or second, or hell, even a third) watch because just this week it was released on YouTube!
Censorship is great as long as it's someone else's "bad" ideas that are suppressed. The question to ask is, would you want the same power to censor in the hands of the worst actor on the other side?
I wonder how much of the support for censorship is driven by social herding; that is, not wanting to be shunned by an in-group due to "heterodox" opinions.