New KOSA, same as the old KOSA
Why this co-author of The Coddling of the American Mind opposes the Kids Online Safety Act
Back in April, I wrote about my disagreement with my friend Jonathan Haidt over some of the solutions he proposes, in his mega bestseller “The Anxious Generation,” to the problems of the phone-based childhood. It’s worth noting that I spent most of that piece agreeing with him; given that I co-authored “The Coddling of the American Mind” with him, that isn't a big surprise. One of my points of disagreement, though, is simply this: when you are trying to solve a problem that relates to expression or, as here, expressive platforms, you should exhaust every other possible solution before turning to top-down legislative fixes, which will almost always have unintended consequences.
I liken this to the doctor's advice that, when it comes to medications, you should start low and go slow. So, for example, I couldn't more avidly endorse the idea that every school in the country should be a phone-free zone. And this has nothing to do with whether phones are a profound vector for mental health issues in teens; it relies simply on the fact that they are highly distracting. I often say that teachers and administrators wouldn't have allowed us to have a fax machine, VCR, television, radio, or video game console at our desks back when I was in school, and they shouldn't allow them now just because all of those things fit into a tiny futuristic machine.
I had seen many proposals to address the problem through top-down power or legislative solutions, and, as a civil libertarian, I found most of them concerning. But I would say the newest and most worrying approach to dealing with the problem of expressive platforms is the product liability rationale. To apply it, legislators enumerate the alleged harms of an expressive platform and then use vague and broad language to achieve their regulatory goal. This exactly mirrors a now-discredited test for freedom of speech that preceded our strong interpretation of the First Amendment.
This was the “bad tendency test,” advocated by Oliver Wendell Holmes, who would later become a supporter of increased First Amendment protections. The bad tendency test essentially allowed the government to argue that if speech might have a tendency somewhere down the line to create an unlawful outcome, they could police the speech and potentially punish it.
But here’s the problem. There is no limiting principle to such an open-ended concept. You can argue that potentially any speech could have a bad tendency somewhere down the line, when you’re in the world of speculation and imagination. Which brings me to the Kids Online Safety Act, a federal legislative proposal that garnered endorsements from Elon Musk and Donald Trump Jr. over the weekend.
The misguided (and unconstitutional) Kids Online Safety Act
The Kids Online Safety Act (KOSA) seems to be made in the image of the UK’s Online Safety Act, with some largely symbolic concessions to the First Amendment, as filtered through the wisdom of lobbyists for some of the regulated companies. KOSA enumerates a list of things that are harmful to minors (e.g., eating disorders, depression, Internet addiction), then requires platforms to “exercise reasonable care” to mitigate those harms. It also requires that, if a platform knows a user is a minor, it must limit the ability of other people on the platform to communicate with that user, permit the user to control personalized recommendations, and limit design features that encourage frequent use (e.g., infinite scrolling, push notifications, badges).
This leads to five problems with KOSA that have not been addressed by this new version:
It allows the government to impose punishment for the impact of ideas, which leads to censorship.
The problem with imposing a duty of care on speech is that it allows someone (in this case, the Federal Trade Commission) to punish speakers whenever speech might have contributed to a harmful outcome. On top of creating a framework that incentivizes the government to entangle itself in private speech, this means that speakers, to protect themselves from liability, must predict how ideas will affect listeners. But people — including kids — are individuals, and they react to content differently and unpredictably. So how are platforms supposed to figure out what to do?
This version of KOSA doesn’t give any better guidance. An earlier version of KOSA rested its standard on the phrase “reasonable care.” After critics pointed out that this was too vague, the current version was adjusted to say:
A covered platform shall exercise reasonable care […] where a reasonable and prudent person would agree that such harms were reasonably foreseeable by the covered platform and would agree that the design feature is a contributing factor to such harms.
Which is the same thing, just wordier.
FIRE’s Lead Counsel for Tech Policy Ari Cohn asked a good question: If they think that’s better, what did they think “reasonable care” meant before? The fact remains that platforms won’t have any useful standard by which to judge how the content in their newsfeeds might affect any particular user. In the face of such uncertainty, the obvious and most likely course is to restrict access to potentially sensitive or controversial ideas (the majority of which are constitutionally protected) and features, to avoid doing anything that invites punishment later.
Because “harmful to minors” can only be defined after the fact, platforms will over-censor and under-innovate to avoid liability.
Right now, we regulate Internet platforms in a way that places liability for unlawful content on the party that creates it, not on the website or service that hosts or curates it. (That’s Section 230 of the otherwise ill-fated Communications Decency Act.) This approach has both encouraged the growth of existing platforms and lowered the barriers to entry for entrepreneurs starting new ones. If we hadn’t chosen to regulate platforms this way, it’s unlikely that many would have survived this long; they likely would have been sued out of existence before reaching the size that caused us to contemplate regulating them.
KOSA would totally reverse this paradigm. If the bill passes, platforms will face liability both for their “design features” and for what their users post (because posts are served, arranged, or suggested through those design features — after all, nobody would be complaining about the design features if all the content were educational). But it doesn’t end there. Platforms might also be liable for anything that, after the fact, appears possibly connected to something harmful to minors.
Existing platforms might be able to bear the weight of that kind of moderation, but even then, corporations are famously risk-averse and will likely overcorrect once in CYA mode. New or smaller platforms certainly couldn’t bear it. And the existing platforms would ossify, doing what they already do, just more of it. If KOSA can define their prior innovations (like infinite scroll) as harmful despite their not being intended as such, services that host user-generated content are going to be very circumspect about innovating (even when it comes to protecting kids).
Because who can predict how minors will react to the next innovation?
Minors have First Amendment rights.
I know this is inconvenient, but all civil liberties are an inconvenience to government regulation.
The government can’t just decide it’s going to deny 15-year-olds access to content (or design features that dictate the presentation of that content) on the basis of their age. California learned this lesson when it tried to limit access to violent video games. As the Supreme Court wrote in that case, quoting an earlier decision: “[M]inors are entitled to a significant measure of First Amendment protection, and only in relatively narrow and well-defined circumstances may government bar public dissemination of protected materials to them.” But here, the exception is not narrow. It isn’t bound by anything more than the imagination (or worse, ideological positions) of whoever sits on the FTC at any given moment.
Note that the platforms don’t need government intervention to take steps to protect children, even the steps KOSA would implement (and that Elon Musk seems to support, now that X has consulted on the current form). X could ban minors today, if it wanted to, but it hasn’t.
I’m aware of the data correlating social media use and harm to minors, and I’m far from indifferent to those harms, especially as my boys start to reach the ages where social expectations will include the use of these platforms. But constitutional law cautions us that, where the right to free expression is implicated, we should use the least restrictive means to achieve our goals. It also cautions us that the deprivation of a civil right is always harmful. We have so much more we can try before we resort to something like KOSA, starting with banning phones in schools.
When it comes to regulating speech, we should always start low and go slow.
It mandates the collection of personal data in a way that makes minors and everyone else less safe.
In a post on X, NetChoice pointed out that KOSA “contains a de-facto age verification mandate where all companies covered by the law will need to gather and retain users' personal identification to properly determine who is and is not a minor.”
KOSA says it should not be construed to require platforms to perform age verification. But it does require them to limit certain features for users who are minors and to protect those users from various harms. And failure to do so can cost a platform dearly — anyone remember the FTC’s $5 billion penalty against Facebook? The only way to ensure compliance and safety from ruin will be to determine — to a high level of certainty — which accounts belong to minors. That means collecting users' personal information. And if you collect a lot of personal information in one place, hackers will try to steal it, and they’ll eventually succeed. That’s to say nothing of how well-resourced authoritarian regimes and rogue government agents could exploit such a treasure trove of dissenters’ personal information. As the rapper-cum-philosopher int eighty reminds us, “regardless of the hardware, service, or encoding, connect it to the Internet and someone’s gonna own it.”
When a bill explicitly says that it does not require something, it’s worth asking whether that language is necessary because the cumulative effect of the bill leads to the inescapability of that thing. And while we’re on that subject…
When legislation says it doesn’t intend to violate the First Amendment, the safe bet is that it will.
The odds that KOSA will trample on our First Amendment rights are sufficiently high that its authors included a clause to tell us it isn’t intended to trample on First Amendment rights.
The so-called “savings” clause says KOSA enforcement isn’t intended to be “based upon the viewpoint of users expressed by or through any speech, expression, or information protected by the First Amendment[.]” But “savings” clauses don’t work when they leave the regulated group to decide whether to risk punishment or not.
San Francisco State University had a speech code with a savings clause that said “nothing in this Code may conflict” with regulations protecting First Amendment rights. The court that enjoined it asked rhetorically whether students were likely to risk violating the code in the hope that authorities would agree their conduct fit within the clause. It wrote: “To us, this question is self-answering — and the answer condemns to valuelessness the allegedly ‘saving’ provision[.]”
In general, the government has a bad habit of sticking a “savings” clause on anything it knows is likely to do harm to First Amendment rights, and the presence of such a clause is a good reason to deepen any existing skepticism. (We would never attempt this in our personal lives. We don’t send the IRS our tax returns and write on the outside, “nothing in this envelope is intended to constitute tax fraud.”)
Let’s learn from our allies’ mistakes
I get that people focused on the potential harms of the phone-based childhood may think I am being unreasonable in my opposition to attempts to fix the problem. But my societal role, and FIRE's, is to resist government encroachments on speech and to anticipate the unintended consequences of regulations that claim not to police speech but inevitably do. I am quite aware of the probable harms of a phone-based childhood, and I am increasingly thankful to have grown up before it. But as for the larger context, my concerns are the centralization of power, the hostility to free speech, and the policing of speech in Western Europe and the Anglosphere, all at a level I have never seen in my lifetime.
My mother is British, and when I used to visit England and drink pints, smoke cigarettes, and crack jokes in pubs, the locals would laugh at Americans for our political correctness. I even had an argument with Spiked’s Brendan O'Neill, who, back in 2010, claimed that British free speech rights are better protected precisely because Britain doesn't have a First Amendment. Well, that is certainly not panning out to be true, as Brits are routinely paid visits by the police for offensive comments they make on social media. Many are arrested, and some have even gone to jail.
As I pointed out in a previous post, the number of Brits arrested for offensive comments on social media in 2015 and 2016 exceeded the number of people arrested during America's first Red Scare, which likewise lasted two years. Canada is considering a law that would result in life imprisonment for certain speech crimes. Ireland has been considering a hate speech law that is likewise draconian. And the EU is a constant font of terrible ideas for policing speech, ideas that not only hold back European innovation but also place an almost comical level of trust in experts and centralized power to fix any and all problems posed by technology or human nature. They are bound to be disappointed, and in the process they will produce a union that is less and less free. So, yes, I am very aware of the larger context, and I think you should be too. The truth is that we are surrounded on all sides by countries that think they can police speech to achieve discrete social goals, always with the best of intentions, and with no articulable limiting principle.
Sometimes, when people point out the parade of horribles that could follow from vague and broad laws regulating speech, they are accused of engaging in the slippery slope fallacy. But when it comes to free expression, it’s not a fallacy; it's a tendency. I have called this the slippery slope tendency for years, because when you factor in concepts as vague and broad as offense or, as here, harm to minors, there is no limiting principle. Our campuses have been trying to police hateful or harmful speech since the 1980s, and, as I wrote in “The Canceling of the American Mind,” the result is the modern era of campus Cancel Culture.
And professors have internalized the lessons of Cancel Culture.
In a survey of 6,269 faculty members released this week, FIRE found that one in seven have been disciplined or threatened with discipline for their teaching, research, on-campus discussion, or off-campus speech. And 87% said they found it difficult to have an open and honest on-campus conversation about at least one of 19 hot-button topics (e.g., gun control, abortion, China).
I truly hate to disagree with Jon. I wasn’t kidding when I wrote that I genuinely love the man. But it’s my job to resist threats to freedom of speech, to resist centralization of power over speech by the government, and to point out the unintended consequences of otherwise benign-seeming laws. And under that analysis, KOSA comes nowhere near passing the test.
SHOT FOR THE ROAD
You may remember from this past weekend’s free speech update that Edison, NJ, attempted to ban all props (including American flags and the Constitution!) during town council meetings. Well, very shortly after FIRE sent a letter to the council, it did the right thing and voted to schedule a repeal of the unconstitutional ordinance, even citing FIRE’s advocacy as the reason for the quick reversal. Check out the video below for just one recent example of how FIRE’s work makes an impact! And, of course, as you’re thinking about your end-of-year giving, remember what renowned Washington Post columnist George Will once said: don’t forget to “write a check to FIRE … A terrific, litigious, scrappy group of people.”