Thursday, October 11, 2018

There Is No Such Thing As "Racist" AI

In the continuing War On Noticing, those afflicted with self-imposed cognitive dissonance have declared that AI is or can be "racist". Let's put this nonsense to rest. First: AI has no survival instinct. It is not alive. AI is concerned with one thing and one thing only: recognizing patterns and modifying its behavior to better recognize said patterns. That pattern may be how a human player plays a First Person Shooter. It may be how a cell divides when it is healthy vs. when it is not. It may be recognizing certain proteins in a biological experiment. All the AI cares about is recognizing the pattern. Period.
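To make "recognizing patterns" concrete, here's a minimal sketch of a learner, assuming NumPy and scikit-learn; the data is random and the "pattern" is arbitrary, invented for illustration. The model adjusts its weights to reduce error on whatever it's handed; it has no opinion about what the features or labels mean.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))  # features could be anything: pixels, proteins, keystrokes
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # some pattern hidden in the features

    model = LogisticRegression().fit(X, y)  # "learning" is nothing but weight adjustment
    print(model.score(X, y))                # how well it recognized the pattern

Swap the random features for game inputs or cell images and the code doesn't change; the pattern is all it sees.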

However, in today's world, recognizing patterns has become "problematic". If you notice that certain populations commit more crimes, you are racist. If you notice that certain populations do poorly on standardized tests, you are racist. If you notice that certain populations have vaginas, uteri, and ovaries, you are "transphobic". You can be fired for stating obvious shit like: "there are males and females." It is "bullying" to assert documented facts. AI couldn't care less about your attitude toward these things. If there is a pattern, there is a pattern. Here's the latest nonsense:

The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon, some of the people said.
So what would you feed this program?
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
So the data, which we've discussed on this blog repeatedly, shows that males dominate the hard sciences, including the higher end of the computing fields. This is because these fields require high IQs, and males dominate both the far right and far left tails of the IQ curve. That is, there are more utterly stupid and more stupendously brilliant men than there are women. On top of that, women's revealed preferences show that they are more inclined toward careers with more empathy and less competition. This is known stuff. It is only controversial to those suffering from self-imposed cognitive dissonance.
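The tails point is just a property of distributions with equal means and unequal variances. A quick back-of-the-envelope check, assuming SciPy and purely illustrative numbers (mean 100 for both sexes, SD 16 vs. 14; these are not measurements):

    from scipy.stats import norm

    # Equal means, modestly higher male variance -- illustrative numbers only.
    men_above = norm.sf(130, loc=100, scale=16)    # share of men above IQ 130
    women_above = norm.sf(130, loc=100, scale=14)  # share of women above IQ 130
    print(men_above / women_above)                 # men overrepresented at the right tail

    men_below = norm.cdf(70, loc=100, scale=16)    # share of men below IQ 70
    women_below = norm.cdf(70, loc=100, scale=14)  # share of women below IQ 70
    print(men_below / women_below)                 # and equally so at the left tail

With these made-up numbers, a small difference in spread produces roughly a 2-to-1 ratio at both tails even though the averages are identical.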

Thus the AI wasn't given "biased" data. The data is what it is. If it were fed data that had been manipulated to "balance" the numbers, then the data would be wrong. Why would we want to feed a neutral program wrong data?
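Here's a toy demonstration of what that manipulation does, again assuming scikit-learn, with invented numbers and nothing to do with any real hiring system. The same model fit on the raw data and on artificially "balanced" data learns different weights, and the balanced one stops matching the real base rate it was supposed to describe.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2000, 3))  # made-up applicant features
    # Skewed outcomes: far more rejections than hires, as in the real history.
    y = (X[:, 0] + rng.normal(scale=0.5, size=2000) > 0.8).astype(int)

    raw = LogisticRegression().fit(X, y)
    # class_weight="balanced" reweights the samples so both outcomes count
    # equally, i.e. it deliberately un-skews the history.
    cooked = LogisticRegression(class_weight="balanced").fit(X, y)

    print(y.mean())                              # the actual hire rate
    print(raw.predict_proba(X)[:, 1].mean())     # the raw model tracks it
    print(cooked.predict_proba(X)[:, 1].mean())  # the "balanced" model inflates it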

In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.
No, Amazon's system discerned the pattern that males were, generally speaking, the better candidates based on the data. Notice that the Amazon system did not outright reject women because they were women. If the AI had wanted to reject women outright, it would have flagged every "female" name it came across. It would have flagged every candidate who self-identified as female. Nowhere in this report does it indicate that the AI did so. Hence, the AI wasn't biased against women; it was biased against women who displayed certain patterns.
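To see how a model ends up penalizing a token like "women's" without ever being shown anyone's sex, here is a hypothetical reconstruction of the mechanism: a bag-of-words classifier over an invented toy corpus, again assuming scikit-learn. This is a sketch of the general technique, not Amazon's actual model, and every resume and label in it is made up.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented toy history: the model sees only resume text and outcomes,
    # never a name or a sex field.
    resumes = [
        "java developer chess club captain",          # hired in the toy history
        "c++ engineer robotics team lead",            # hired in the toy history
        "java developer women's chess club captain",  # not hired in the toy history
        "python intern women's coding society",       # not hired in the toy history
    ] * 50
    labels = [1, 1, 0, 0] * 50

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    clf = LogisticRegression().fit(X, labels)

    # The token "women" (CountVectorizer strips the apostrophe-s) picks up a
    # negative weight because it co-occurs with rejections in the training data.
    weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))
    print(weights["women"])  # negative

Nothing in the sketch inspects a name or a checkbox; the token gets downgraded only because it tracks the historical outcomes the model was fed.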

Notice that it downgraded persons who graduated from two all-women's colleges. Were these schools known for their STEM credentials? Wouldn't YOU want to know what these schools were so that you don't send your children there? Is it "strange" that an AI would discover that certain schools produce a pattern of sub-par candidates?

I'm not here to say that AI is perfect. The fact of the matter is that AI is imperfect, and it gets better as it refines its pattern recognition. The problem for many people infected with self-imposed cognitive dissonance is that AI will eventually see the most obvious (and not so obvious) patterns. The question is whether society will accept that these patterns exist or whether people will purposely pollute the data in order to make themselves feel better about their false beliefs.