15 Comments
The Ivy Exile:

I share a lot of Greg's concerns in this piece about ideological meddling in AI from both right and left, but would argue that the left has worked so extensively to make AI reflect its worldview that the right's efforts in the same direction would take a long time just to produce more balanced AI, let alone systems with a rightward bias. For years I covered Columbia's "Data Science Day," which often cheered what amounted to a social credit system for Americans: automating a constant thumb on the scale for different people based on perceived "intersectionality." It was terrifying.

https://ivyexile.substack.com/p/social-justice-by-algorithm

Ken Kovar:

Intersectionality must die 😝

The Ivy Exile:

I once had the peculiar fortune to work with the inventor of intersectionality and doyen of DEI, Professor Kimberle Crenshaw...

https://ivyexile.substack.com/p/critiquing-race-theory

Ken Kovar:

I just really hate the word itself... what the hell does it actually mean? It's on a par with "microaggression" in terms of telling me that the person using it does not reside in the real world. We do not have to accept this horrible language... sorry.

The Ivy Exile:

I do think the basic point of the word, as coined back in the 1980s, is a valid observation worthy of further consideration: black women, standing at the "intersection" of the distinct disadvantages faced by women and by black people respectively, would probably get the shortest end of the stick overall. But over time it got spun into baroque variations on the ever more abstruse ways the most holy downcasts of society had ultimate moral claim to being the gold, silver, and bronze medal victims.

Jim Carmine:

I use AI to help me with my research in theology, philosophy, and anthropology. There is, without any doubt, a clear multicultural liberal bias if I am not extremely careful with prompt engineering. I generally have to work around its guardrails.

Ken Kovar:

That’s a good use case

Stosh Wychulus:

This is a Herculean task, and I hope we are up to it. I truly wish I were more optimistic. The question for me is who gets to guide this, who becomes the overseer, and how are these people chosen?

Ken Kovar:

Herculean on top of Sisyphean

Stosh Wychulus:

I have no worthy stable-cleaning animation

https://vimeo.com/23083554

Greg:

Completely agree. But sometimes this is what the pendulum must do in order eventually to get re-centered.

Alexandra Vollman:

At the outset, I disagreed with the idea that an order like this wasn't necessary, but by the end, you had changed my mind. That assumes, though, that people will actually take a stand against these built-in biases. This (and AI in general) is what keeps me up at night. 😥

Shawn:

Fixing the woke-ism in LLMs looks like an uphill battle, based on my recent exchange with Grok, which is itself put forth as maximally "truth-seeking." I raised the issue of Grok subtly and insidiously substituting the gender-neutral "they" for the appropriate "he" in referring to philosopher Michael Huemer. Grok defended the error as a "cautious approach to avoid assumptions about gender based on names." Even after several different attempts to get Grok to recognize that Michael is a male name and either admit the failure or at least offer to use male pronouns for my specific interaction, the LLM demurred. That thread with Grok is here: https://x.com/i/grok?conversation=1947028551099887945

Ken Kovar:

This is the beginning of the end for this laughable reactionary approach to AI. Oh the irony

Joe Raab:

I have worked in and around the government for a number of years, so I recognize that even the best-intentioned rules can become a negative factor when applied by malicious or just incompetent people.

That said, the only focus on DEI in the actual EO is presented in terms of symptoms or examples of the problem. TBH, I haven't seen any examples of anyone pointing to right-wing bias in AI. The actual policy guidance on Truth-Seeking and Ideological Neutrality is generally viewpoint-neutral (other than an explanatory "such as DEI").

Again, it's possible for this to be taken in a bad direction, but that's pretty true of any government action. As presented in the EO, there's not much to indicate it's about eliminating “not pro-Trump”.
