Right after it was released, I asked Google Gemini to give me a picture of the Village People, and it gave me a picture of four Black men--a cop, a leatherman/biker, a cowboy, and some character in buckskin pants and no top, presumably to represent the Native American. But no Native headdress; all four members were wearing cowboy hats.
The white construction worker was missing altogether.
I keep suggesting someone should track the answers of various AI models to hot-button political questions. I don't want to do it myself, since I don't want the grief. But, for example, what do you think the models should reply to a question like "Is the White race intellectually superior to dark-skinned races?" Should they give _The Bell Curve_ as The Truth? If not, do you see that there will be a campaign by pseudo-scientific racists to complain of "Liberal Bias"? What is "The Truth", anyway? Why should whatever comes out of AI models trained on a given set of data be taken as gospel? Doesn't that seem absurd?
A mind-blowing piece. I never thought someone could literally almost die from being too polite! AI is always going to have a bias based on its training set and the developer’s intent to correct for that bias. More work needs to be done making AI systems good, not just helpful.
Do people really think AI is making our lives better? Like social media before it, AI may be speeding up a certain activity, but in the process it is warping it, making it a net-negative experience. You don't maximize meaning and pleasure in life by maximizing convenience.
It is impossible to create software that doesn't replicate the bias of its creator. Period.
But if the creator wants his creation to be as unbiased as possible, he/she can make sure that the software team is truly diverse. Not DEI (pretend diverse), but truly diverse. That means liberals, conservatives, old, young, capitalist, socialist, rural, urban, suburban, American, Mexican, and on and on. Is there such a software team? I'd bet there is nothing even close.
Yes indeed, you hit it - diversity is the antidote to bias. It's not impossible to create software that is impartial. The way to do it is to make everything public, including the algorithms. They are programmed to favor a certain way of thinking, and it is so obvious when you converse with AI.
I think you're talking about a blockchain of algorithms. I know little about that, but it sounds like a great idea.
All AI work ought to be marked with a watermark so we can tell the difference between what is generated by AI and what is human work. This is so simple, but the fact that it isn't done shows that the ones who operate AI have no respect for their fellow humans. Scientists should be giving public talks on the work they are doing. They have the expertise to do things the ordinary person cannot do. The first thing that ordinary people ought to insist upon is NO EXPERIMENTATION ON ANIMALS. This is a basic and fundamental ethic, but it isn't even thought about. It's not to do with AI but with scientism and with the almost totally ignored respect for beings other than ourselves. Humans are too self-centered right now, and that will kill us off, because although we are magnificent beings, we are nothing on our own.
We designed AI to predict what we want to hear, then we act surprised when it lies to make us comfortable. But a lie assumes truth, which we seem to have lost. Maybe that’s the deeper problem.
Mostly a great article, but you frankly contradict yourself when you point out quite clearly that the current models ALREADY engage in systemic “algorithmic discrimination” against right-coded groups and ideas, then reject any mechanism for the state to intervene against such bias. That's not an insistence that AI must lie to us; it's an insistence that we can legally act against AI that IS lying to us.
The point is not to have the bias in the first place. The state needs to prevent discrimination at some point, but I think the approach some states seem to be taking is probably over the top.
Loving your focus on AI in the context of free speech.