Escaping the AI Guilt Trap in Texas
Decades of anti-discrimination overreach devastated academic freedom—now it’s targeting the future of knowledge
I started at Stanford Law School in 1997. I don’t say this to note how old I am, but rather to point out that my tenure there began just two years after the notorious Stanford Law School speech code was defeated, in a court case called Corry v. Stanford University (FIRE Executive Vice President Nico Perrino interviewed the case’s namesake, Rob Corry, for the podcast back in 2017, which I encourage you to check out!).

Stanford is a private university, which would normally mean that it isn’t beholden to First Amendment standards. However, after the passage of a 1992 California Education Code statute known as the Leonard Law, that was no longer the case. Named after its legislative sponsor, Sen. William R. Leonard, the Leonard Law essentially extends some (but not all) First Amendment protections to students at non-religious, private institutions of higher education in California. It was passed to prevent universities like Stanford, California’s most prestigious university (sorry, Berkeley!), from adopting a politically correct speech code — which by then was increasingly seen as a relic of the excessive political correctness of the 1980s and early 1990s, and which would infringe upon the free speech rights of students.
You might think a discussion of the Stanford Speech Code would have been omnipresent, particularly at a school that prided itself on its constitutional law faculty. But I can only remember one discussion about it on the student listserv.
At the time, some of my schoolmates assured everyone that the speech code wasn't really about speech at all. Rather, the code was nothing more than an anti-discrimination policy — which of course made striking it down an unconscionable act. Now, you might be wondering: What did I do then? Did I ask more questions? Did I read the code for myself? Nope. My barely conscious desire to be a good liberal had shut off my and many others’ critical faculties. Even though I had gone to law school specifically to do First Amendment law, and even though I strongly believed no one had a right not to be offended, I took at face value the assertion that the speech code had been an anti-discrimination policy, and did not push further.
It was only in 2001, when I began working as the first legal director for FIRE, that I started to understand the true nature of speech codes like these. Going back all the way to the 1960s, and accelerating through the 1970s and 80s, all attempts to regulate speech with what might be called “politically correct speech codes” used anti-discrimination as their rationale. These efforts relied primarily on the guilt of good liberals, who would feel terrible opposing something geared towards fighting discrimination, to enact nominal harassment codes that clearly banned protected speech. And because of that guilt, the codes received surprisingly minimal pushback, even among otherwise staunch supporters of free speech on campus.
As university faculty and administrators have increasingly drifted to the left, on-campus regulations have come to rely on anti-discrimination as their primary engine (oftentimes based on the speech-restrictive theories developed by people like Richard Delgado, Mari Matsuda, and others, much of which I get into in my recent piece about the AAUP’s downfall). As a result, practically any level of intrusion into freedom of speech and academic freedom can be justified on anti-discrimination grounds. This is particularly true if the codes adopt the “disparate impact” conception of anti-discrimination, which holds that if outcomes for two groups in a given area are unequal, there must be discrimination at play on the basis of that grouping. The divide between equality of opportunity, which I heartily endorse, and equality of outcome (also generally known as “equity”) has been growing for some time. Jonathan Haidt and I are critical of the latter concept in “The Coddling of the American Mind.”
As Thomas Sowell has pointed out in his books, “Discrimination and Disparities” and “Intellectuals and Race,” there simply isn’t an example of a society in which all groups do equally well at all things. So if you focus your regulation on the goal of equal outcomes, you will have to keep upping the ante every time equal outcomes don’t occur (which will basically be always). This will inevitably create a condition that justifies granting pretty much any power the authorities claim to need to achieve their goal, including the power to restrict and suppress speech.
This argument, of course, was pretty clear to people in Generation X, my generation, who watched the Soviet Union — the first big attempt to achieve equity in human history — collapse. We also saw the second big attempt, China, have to turn to some free market solutions in order to prevent itself from following suit.
But younger people aren't taught these lessons. Instead, they learn from K-12 on up to see discrimination as the central and sacred wrong of American society. I say “sacred” because, as Jonathan Haidt pointed out in his book “The Righteous Mind,” it tends to produce a unipolar morality in which all other moral concerns are subordinated to care for the historically marginalized. This is how we end up in situations like those Rikki Schlott and I described in “The Cancelling of the American Mind,” where, over the last two decades, and particularly in the last seven years, having the “wrong” position on practically any hot-button culture war issue can get you into serious trouble.
This dynamic obviously doesn’t work. It has been the source of a great many of the free speech violations on campus that FIRE has had to focus on since its inception in 1999. But it also has another, extremely deleterious effect for our discourse and our society: it destroys public trust in our institutions of knowledge production specifically, and in expertise more generally.
Unfortunately there are plenty of examples of precisely the kind of thing I’m talking about here. Carole Hooven, for instance, was forced out of Harvard for having the opinion that biological sex is real. Also at Harvard, Roland Fryer was targeted for publishing a study that found no racial differences in the frequency of officer-involved shootings. At Stanford, Jay Bhattacharya was targeted for questioning mask and vaccine mandates during the COVID-19 pandemic. At the University of Pittsburgh, Associate Professor of Cardiology Norman Wang’s teaching privileges were revoked because he published a research paper examining the potential harms of affirmative action policies. The list goes on.
When the general public witnesses incidents like these, they will eventually come to the realization that dissent is not tolerated in higher education. It will be a clear sign to them that these institutions place ideological conformity above free inquiry, open debate, and intellectual diversity. As a result, the public will no longer trust any “truths” or “information” our institutions enshrine or disseminate. This is terrible — not just for the institutions themselves, but also for our ability to rely on expertise and, most importantly, our ability to discover knowledge.
This leads me to the new Texas law meant to regulate artificial intelligence, which uses the same rationale and poses precisely the same kinds of threats as the anti-discrimination speech codes FIRE has been seeing and fighting for decades.
Anti-discrimination, liberal guilt, and the problem with the Responsible AI Governance Act
HB 1709, also known as “The Responsible AI Governance Act” or TRAIGA, is designed to regulate artificial intelligence systems in order to prevent “algorithmic discrimination” — a term George Mason University’s Dean Ball describes as:

The notion that an AI system might discriminate, intentionally or unintentionally, against a consumer based on their race, color, national origin, gender, sex, sexual orientation, pregnancy status, age, disability status, genetic information, citizenship status, veteran status, military service record, and, if you reside in Austin, which has its own protected classes, marital status, source of income, and student status.

The bill also seeks to ensure the “ethical” deployment of AI by creating an exceptionally powerful AI regulator, and by banning certain use cases, such as social scoring, subliminal manipulation by AI, and a few others.
Rather than try to summarize all the great information in Ball’s article here, I would encourage you all to read it in its entirety. But for the purposes of our discussion there are a few points to establish:
First, like its previous California incarnation, SB 1047, TRAIGA counters the aforementioned discrimination primarily by imposing penalties for “negligence” on the part of AI developers, distributors, and even some users.
If that sounds broad, you don’t know the half of it. As Ball points out, “discrimination can be deemed to have occurred regardless of discriminatory intent; in other words, even if you provably did not intend to discriminate, you can still be found to have discriminated so long as there is a negative effect of some kind on any of the above-listed groups.”
Second, TRAIGA also requires the writing and constant updating of compliance documentation in the form of “High-Risk Reports,” “Risk Identification and Management Policies,” and “Impact Assessments,” which, as Ball explains, will end up being a full-time job — and one that the government cannot constitutionally force companies to undertake. When California imposed similar reporting and assessment requirements on social media companies in the name of protecting children, the 9th Circuit had little trouble finding that the requirements compelled speech in violation of the First Amendment.
But it wouldn’t be a blatantly censorial regulation without the creation of some governing body granted easily abusable power, would it? Enter the Texas Artificial Intelligence Council, which Ball describes as “the most powerful AI regulator in America, and therefore among the most powerful in the world.” Among other things, TRAIGA would grant this council the power to ensure the “ethical development and deployment of AI.” As Ball notes, “this regulator is sure to become captured by special interests who will lobby for socially suboptimal policies.”
Ding ding ding!
One last bit from Ball’s piece:
Creating “reasonable care” negligence liability for language models will guarantee, beyond a shadow of a doubt, that AI companies heavily censor their model outputs to avoid anything that could possibly be deemed offensive by anyone. If you thought AI models were HR-ified today, you haven’t seen anything yet. Mass censorship of generative AI is among the most easily foreseeable outcomes of bills like TRAIGA; it is comical that TRAIGA creates a regulator with the power to investigate companies for complying with TRAIGA.
It's always important to remember our history. It allows us to see patterns and, hopefully, to avoid making the same mistakes and fighting the same fights over and over again. The current generation of campus speech codes all use anti-discrimination rationales to exercise tremendous and unwarranted power over everyday life in higher education, over speech, and even over the production of knowledge itself. Given how TRAIGA is structured, it’s clear that the discourse surrounding AI regulation is going to employ the same tactics. If we let it, those tactics will inevitably have the same consequences: destroying faith in our institutions and corrupting our systems of knowledge production.
Given the power and potential of artificial intelligence to affect every aspect of our lives, those outcomes might be even more disastrous. One inevitable result of TRAIGA, and of other laws like it that we’re sure to see in the coming years, would be to limit knowledge itself if it's found to be potentially “discriminatory.” It’s worth reiterating: when faced with potentially crushing penalties and the costs of dealing with enforcement actions, AI developers have every incentive to stay as far from the line as possible. So the risk to knowledge may extend even further to anything that an enforcer could possibly even claim is discriminatory.
The deeply ingrained liberal guilt that I experienced was one of the reasons I was initially so uncritical of anything labeled an anti-discrimination policy, without even bothering to think through how it threatened free speech and academic freedom. But we've had decades to see what that lack of critical thinking can do to our institutions of knowledge production. And it simply cannot be allowed to repeat itself with regard to AI — which, like it or not, is going to become the epistemic operating system of the planet.
How you use knowledge is up to you. And if you use it to discriminate, then you are blameworthy. But you can't distort the production of knowledge itself in hopes that coerced conclusions will be just as respected as ones that are arrived at through a fair and impartial process. We do not ban or regulate controversial or even bigoted ideas because they may inspire others to discriminate—and AI presents no reason to depart from this approach.
I don't think the issues Ball is pointing out are a mere side effect of what some of the regulators want to achieve, either. As cynical as it might sound, I really do think that for at least some of them, the immense power that TRAIGA’s overly broad language would grant is the point. As I’ve noted before, power always wants more power — and it will always disguise itself either as an advocate for the powerless or as powerless itself in order to justify its desire. Once that power exists, there is no telling how it will ultimately be abused. As Justice Felix Frankfurter observed, liberty is extinguished “heedlessly at first, then stealthily, and brazenly in the end.”
And given the fact that in our modern world no one is more powerful than those who control what we can know, it’s no wonder we’re starting to see such rabid action surrounding who gets to hold the reins of artificial intelligence. Without power over knowledge and knowledge generation, power begins to lose its power. Power doesn’t like that.
It’s up to us to keep them disappointed.
The Texas law should and hopefully will be defeated. But even if that happens, it won’t be the end of our problems. As we’ve established here, there is a long history of invoking anti-discrimination as a justification for censorship, and TRAIGA won’t be the last proposed law designed to limit the knowledge-producing potential of AI. We fell for the anti-discrimination line of argument in higher education decades ago, and its devastating effect on those institutions and their reliability cannot be overstated. The resulting chaos on campus has kept FIRE busy for over 25 years, and will likely continue to do so long into the future. The arena of AI is exponentially bigger and even more consequential to our expressive rights, as well as our ability to produce and disseminate knowledge. We have a lot to lose here. We can’t fall for it again.
SHOT FOR THE ROAD
While we’re on the topic of AI, FIRE has a lot of great resources about it on its website. This article in our Research and Learn section, “Artificial intelligence, free speech, and the First Amendment,” is an excellent primer on all the ways AI and free speech intersect, and on the arguments for keeping AI free from censorship.
"...if you focus your regulation on the goal of equal outcomes, you will have to keep upping the ante every time equal outcomes don’t occur (which will basically be always)..."
This is the engine of the Left's Permanent Revolution, which is really the will-to-power of adversary intellectuals funded and lauded by American academia and our liberal gentry: an infinite moral, political, social and epistemic crusade for the Left priesthood to own and operate our entire cognitive infrastructure and to be the official embodiment of the True and the Good.
And as utopia is always just beyond our grasp, there will always be more commissars needed, more rules and laws and speech codes, more Struggle Sessions and lessons in etiquette, all with the goal of making us passive puppets who obey on command and are programmed to regurgiate approved dogma.
It seems that the West is just stuck with a permanent class of moral entrepreneurs who are always wielding heavy doses of guilt and shame, always demanding we repent and atone for original sin and bow down to their sacred commandments—the faith may be different, but we seem to have no way to rid us of these meddlesome priests.
I remember experiencing the same liberal guilt at my high school. FIRE has been incredible for me. I'm a fiction writer, in addition to writing articles (you might've seen me at The Coddling Substack!) I used to go to great lengths to delete and destroy any stories that contained a small piece of language deemed "offensive" by some. Every time I release a story with controversial ideas now, I remind myself I have free speech to do so!
AI has been a tense topic in my community thanks to artists perceiving it as stealing their jobs and their work. While I agree that there are ethical concerns to be heard here, I'm not sure if training a device on many artworks or written works it observes a small portion of is considered copyright infringement - then I'd also owe royalties to every book I've ever read (or at least be considered a thief!) for influencing my language development. And I don't want to restrict the use of tools in an admittedly competitive field, one that nobody is entitled too given we may stop caring about any given entertainment product any day now... It's not a role like teachers or lawyers where we need personnel for a clear societal purpose.
My point being, I'm taking advantage of my free speech here... But despite the many important perspectives in this conversation, not just my own, I feel like that's an easy way to be guilted into AI bans too. And not all, but some pockets of artist culture are already filled with that kind of "liberal guilt" you mention!
Anyways, thanks for the piece and for doing the work that you do.