How institutional review boards threaten groundbreaking research in higher ed
Proposed reforms by FIRE and the Academic Freedom Alliance would ensure safe and ethical research practices while also guarding against ideological and bureaucratic overreach
Last month, the Academic Freedom Alliance issued guidance highlighting the threat that Institutional Review Boards (IRBs) pose to the freedom to conduct research at colleges and universities.
If you’re outside academia, you may not have even heard of IRBs (sometimes called Human Subjects Committees). But they wield consequential power over what research is pursued in our truth-seeking institutions, and, as a result, over what knowledge the wider public gets to have and benefit from. Through overreach and mission creep, the arbitrary and often viewpoint-discriminatory processes employed by IRBs can too easily be used to suppress unpopular research, preventing consequential and potentially groundbreaking scientific endeavors from getting off the ground.
What is an IRB, and why do they exist?
An IRB is a committee at a college or university that reviews and approves any kind of research involving human subjects. The idea is straightforward: the IRB wants to make sure that nobody is harmed and that participants know what they’re signing up for when participating in a study.
If this sounds reasonable, that’s because it is. In fact, IRBs were established by federal directives in the U.S. after major ethical abuses in research. One of the most infamous was the untreated syphilis study at Tuskegee between 1932 and 1972, in which the U.S. government studied the effects of syphilis on black men for decades without treating them — or even telling them what disease they had.
In 1979, the federal government issued the Belmont Report, which laid out three core principles for ethical research:
Respect for persons: People must give informed consent.
Beneficence: The benefits of research must be weighed against the risks.
Justice: The burdens and benefits of research should be shared fairly.
These reforms also drew on earlier research ethics principles reflected in the Nuremberg Code, which were written in the aftermath of the Holocaust and in response to the medical atrocities committed by Nazi doctors.
IRBs, then, became an oversight mechanism for enforcing these principles at the local institutional level. Today, any institution conducting federally regulated human subjects research is required to have an IRB, and many other institutions voluntarily maintain similar review bodies.
These safeguards serve a vital purpose. No one wants a return to the days when researchers could experiment on people without their knowledge or consent, or inflict lasting harm on participants just to attain some desired research data.
So, what went wrong?
Why IRBs have become a problem
Like many well-intentioned oversight mechanisms, over time a significant number of IRBs have become object lessons in scope creep and unchecked authority.
What started as a system for protecting people from genuine physical and psychological harm has, at many institutions, ballooned into a bureaucratic apparatus that can delay, distort, or even derail research — sometimes for reasons that have nothing to do with participant safety.
For example, many IRBs now have authority over research that poses no meaningful risk to participants. Simple surveys, or in some cases even the analysis of public records, routinely face scrutiny similar to that received by far riskier research. Aspects of studies that wouldn’t have warranted a second look a decade ago are now subject to intense review at our major institutions. In effect, bureaucratic hurdles are slowing or preventing research that has no meaningful risk of harm.
This has led some to argue that IRBs “subject researchers to petty tyranny,” and others to claim that they “have become nationwide instruments for implementing censorship” because of their ability to operate as bureaucratic gatekeepers: IRBs have the power to permit research that meets their satisfaction, and suppress data and prohibit publication for research that doesn’t. This is particularly true on hot-button, controversial issues.
Funnily enough, the expanding scope of IRBs over time may be explained in part by more ethical research being conducted at colleges and universities.
In one of several studies, psychology professor David Levari and his colleagues asked participants to play the role of an IRB reviewer. Participants read a series of research proposals and then decided whether each should be approved or rejected. With the obvious caveat that these clearly weren’t trained IRB reviewers, Levari found that as the prevalence of genuinely unethical research proposals decreased over time, reviewers did not start to rate fewer proposals as unethical. Rather, they started rejecting ethically ambiguous proposals that they would have approved earlier on. In other words, even as research practices improve, IRBs may keep finding problems by unconsciously lowering the bar for what counts as problematic.
People wielding hammers will always find more nails.
Needlessly expanding scope isn’t the only problem, however. IRBs can also be weaponized to suppress or stymie research on unpopular or hotly debated topics. As former IRB chairs Jessica Hehman and Catherine Salmon recount, there are many well-documented cases where IRB review processes have been used to thwart research for ideological reasons — or to simply protect an institution’s brand.
Consider the case of Elizabeth Loftus and Melvin Guyer. In the late 1990s, at the peak of the “Memory Wars” — the sudden popularity of, and subsequent debate over, the eventually discredited practice of recovered memory therapy — Loftus and Guyer embarked on a pivotal investigation into a Jane Doe case they believed overstated proof of recovered traumatic memory. In turn, their articles and investigation provoked prolonged institutional retaliation.
A central actor in the retaliation, it turned out, was the University of Michigan IRB, whose chair drafted a confidential memo that was leveraged to target both Loftus and Guyer. Both researchers were eventually cleared, but in the three years of investigation they — and in particular, Loftus — were subjected to the seizure of research materials, lengthy misconduct investigations, and threats of professional sanction. The shadow of these events still lingers decades later.
IRBs in some cases also appear to look out for themselves and their institutions more than those they are actually charged with protecting. In studies that have surveyed the researchers themselves — federally funded principal investigators in one, criminology researchers in others — about their views of IRBs, many reported perceiving that IRBs are more concerned about protecting themselves or the university from liability than actually protecting human subjects.
The problem isn’t altogether new either. Back in 1985, psychology professor Stephen Ceci and colleagues found tentative evidence that IRBs were more likely to block research on politically sensitive topics than similar research on less controversial topics. They argued that some IRBs were reacting not just to participant risk, but also to worries about what the research might imply politically or socially.
And that’s where the academic freedom issues begin: IRBs began drifting beyond protecting participants and toward policing ideas.
FIRE and the AFA’s proposed solutions to IRB overreach
FIRE researchers have seen IRB overreach firsthand, and in some cases experienced it personally.
In one case, an IRB delayed a project examining public records for nearly a year. We have also seen delays and punitive audits of benign survey research that asked faculty about their views on free speech and related topics.
FIRE researchers have also been on the receiving end of legal threats from faculty, as well as frivolous ethics complaints submitted by faculty to their IRBs. Just to give you a sense of how far IRB overreach can extend, these complaints were filed during the data collection phase, before anything was even analyzed. The goal was to suppress research — specifically, research questions — that these faculty members disagreed with, before any conclusions or findings could be reported.
If these patterns sound familiar, they should. IRB overreach is an integral part of what Greg and his Canceling of the American Mind co-author Rikki Schlott have referred to as the Conformity Gauntlet: the long series of ideological hurdles that independent-minded academics face at every stage of their careers, from graduate school admissions to tenure review. Central players in the Conformity Gauntlet, of course, are DEI statements, bias response teams, and secret disciplinary hearings. As we have seen, IRBs, too, can be captured by ideology and used as a checkpoint where nonconformist research or researchers can be pressured or silenced. No single hurdle is insurmountable, but when combined these barriers powerfully incentivize conformity and disincentivize open and fearless inquiry — the very thing universities should exist to protect.
FIRE has previously endorsed IRB reforms aimed at protecting academic freedom by guarding against this type of overreach. For example, when we discussed potential ways the National Institutes of Health could protect academic freedom through its grant agreements, we wrote:
NIH could help prevent the abuse of Institutional Review Boards. When IRB review is appropriate for an NIH-funded project, NIH could require that review be limited to the standards laid out in the gold-standard Belmont Report. Additionally, it could create a reporting system for abuse of IRB processes to suppress, or delay beyond reasonable timeframes, ethical research, or violate academic freedom.
The AFA’s new guidance goes further than this, endorsing ten reforms from a set of principles called the Mudd Code. Many of these proposals are similar to what FIRE has recommended in the past, and include things like transparency requirements, which would make it easier to identify if viewpoint discrimination had taken place; clarity on risk thresholds; clarification that IRB approval is not necessary for the re-evaluation of existing science; and the addition of a science advocate to the composition of IRBs themselves.
These reforms are aimed at preserving the important role of ensuring safe and ethical research practices, while also guarding against the kind of ideological and bureaucratic overreach that can too easily be used to suppress unpopular research.
While some of the AFA and Mudd Code recommendations fall outside FIRE’s narrow scope as a free-speech watchdog, we applaud them for shining a light on the threat that IRBs can pose to academic freedom. We also commend them for proposing concrete, principled reforms to address it.
Our institutions of higher education are places where difficult, groundbreaking, and important research should be done. That requires asking questions and dealing with subjects that might be uncomfortable and controversial. It also means grappling with results that may be politically or ideologically inconvenient. The truth remains true regardless of our concerns, and if we fail to discover it we will be compromising much more than we think.
SHOT FOR THE ROAD
Why does free speech have a “branding problem”?
Once an institution becomes one-sided, it stops needing free speech and starts seeing it as a threat. When people feel like they’re the majority, the temptation is to control speech instead of defend it.
It’s a slow-motion train wreck.
I discussed all of this and more with Olivia Gross at the ASU+GSV Summit: