Ideological bias: Trump’s new executive order bans DEI-linked ideology from government-funded models. But trading one orthodoxy for another still threatens the truth.
I share a lot of Greg's concerns in this piece about ideological meddling in AI from both right and left. But I would argue that the left has worked so extensively to make AI reflect its worldview that the right's efforts to do the same would take a long time even to achieve more balanced AI, let alone systems with a rightward bias. For years I covered Columbia's "Data Science Day," which often cheered what amounted to a social credit system for Americans: automating constant thumbs on the scale for different people based on perceived "intersectionality." It was terrifying.
https://ivyexile.substack.com/p/social-justice-by-algorithm
Intersectionality must die 😝
I once had the peculiar fortune to work with the inventor of intersectionality and doyen of DEI, Professor Kimberle Crenshaw...
https://ivyexile.substack.com/p/critiquing-race-theory
I just really hate the word itself... what does it actually mean??? It's on a par with "microaggression" in terms of telling me that this person does not reside in the real world. We do not have to accept this horrible language... sorry
I do think the word's most basic point, as coined in the 1980s, is a valid observation worthy of further consideration: that black women, sitting at the "intersection" of the distinct disadvantages faced by women and by black people respectively, would probably get the shortest end of the stick overall. But over time it got spun into baroque variations on the ever more abstruse ways the most holy downcasts of society held ultimate moral claim to the gold, silver, and bronze medals of victimhood.
I use AI to help me with my research in theology, philosophy, and anthropology. There is, without any doubt, a clear multicultural liberal bias if I am not extremely careful with prompt engineering. I generally have to work around its guardrails.
That’s a good use case
This is a Herculean task, and I hope we are up to it. I truly wish I were more optimistic. The question for me is who gets to guide this, who becomes the overseer, and how are these people chosen?
Herculean on top of Sisyphean
I have no worthy stable-cleaning animation
https://vimeo.com/23083554
Completely agree. But sometimes this is what the pendulum must do in order eventually to get re-centered.
At the outset, I thought something like this order was necessary, but by the end you had changed my mind. That assumes, though, that people will actually take a stand against these built-in biases. This (and AI in general) is what keeps me up at night. 😥
Fixing the woke-ism in LLMs looks like an uphill battle, based on my recent exchange with Grok, which is itself put forth as maximally "truth-seeking." I raised the issue of Grok subtly and insidiously substituting the gender-neutral "they" for the appropriate "he" in referring to philosopher Michael Huemer. Grok defended the error as a "cautious approach to avoid assumptions about gender based on names." Even after several different attempts to have Grok recognize that Michael is a male name and either admit the failure or at least offer to use male pronouns for my specific interaction, the LLM demurred. That thread with Grok is here - https://x.com/i/grok?conversation=1947028551099887945
This is the beginning of the end for this laughable reactionary approach to AI. Oh the irony
I have worked in and around the government for a number of years, so I recognize that even the best-intentioned rules can become a negative factor when applied by malicious or just incompetent people.
That said, the only focus on DEI in the actual EO is presented in terms of symptoms or examples of the problem. TBH, I haven't seen anyone point to examples of right-wing bias in AI. The actual policy guidance on Truth Seeking and Ideological Neutrality is generally viewpoint-neutral (other than an explanatory "such as DEI").
Again, it's possible for this to be taken in a bad direction, but that's true of pretty much any government action. As presented in the EO, there's not much to indicate it's about eliminating anything "not pro-Trump."