The company’s experimental hiring tool used artificial intelligence to give job candidates scores ranging from one to five stars - much like shoppers rate products on Amazon, some of the people said.

So what would you feed this program?
That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

So the data, which we've discussed on this blog repeatedly, is that males dominate the hard sciences, including the higher end of the computing fields. This is because these fields require high IQs, and males dominate the far right and far left tails of the IQ curve. That is, there are more utterly stupid and stupendously brilliant men than there are women. On top of that, women's revealed preferences show that they are more inclined toward careers with more empathy and less competition. This is known stuff. It is only controversial to those suffering from self-imposed cognitive dissonance.

Thus the AI wasn't given "biased" data. The data is what it is. If it were fed data that was manipulated to "balance" the numbers, then the data would be wrong. Why would we want to feed a neutral program wrong data?
In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.

No, Amazon's system discerned the pattern that males were, generally speaking, the better candidates based on the data. Notice that the Amazon system did not outright reject women because they were women. If the AI wanted to reject women outright, it would have flagged every "female" name it came across. It would have flagged every candidate who self-identified as female. Nowhere in this report does it indicate that the AI did so. Hence, the AI wasn't biased against women; it was biased against women who displayed certain patterns.

Notice that it downgraded candidates who graduated from two all-women's schools. Were these schools known for their STEM credentials? Wouldn't YOU want to know what these schools were, so that you don't send your children there? Is it "strange" that an AI would discover that certain schools produce a pattern of sub-par candidates?

I'm not here to say that AI is perfect. The fact of the matter is that AI is imperfect, and it gets better as it refines its pattern recognition. The problem for many people infected with self-imposed cognitive dissonance is that AI will eventually see the most obvious (and not so obvious) patterns. The question is whether society will accept that these patterns exist, or whether it will purposely pollute the data in order to make itself feel better about its false beliefs.
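It's worth making the mechanism concrete. A resume scorer trained on historical hire/no-hire labels simply learns which tokens co-occurred with past hires. A minimal sketch (the toy resumes, labels, and log-odds scorer below are entirely made up for illustration; we don't know what model Amazon actually used) shows how a token like "women's" can pick up a negative weight purely from co-occurrence, without the model ever being given a sex field:

```python
# Hypothetical sketch: a token-level scorer fit to made-up historical
# hire labels. Any token that co-occurred mostly with non-hires in the
# training data receives a negative weight.
import math
from collections import Counter

# Toy stand-in for years of labeled resumes: (tokens, was_hired)
history = [
    ({"java", "chess", "captain"}, True),
    ({"java", "c++"}, True),
    ({"c++", "robotics"}, True),
    ({"java", "women's", "chess"}, False),
    ({"women's", "robotics"}, False),
    ({"c++"}, False),
]

def token_log_odds(history, smoothing=1.0):
    """Laplace-smoothed log-odds of 'hired' given each token."""
    hired, passed = Counter(), Counter()
    for tokens, label in history:
        (hired if label else passed).update(tokens)
    vocab = set(hired) | set(passed)
    return {t: math.log((hired[t] + smoothing) / (passed[t] + smoothing))
            for t in vocab}

weights = token_log_odds(history)

def score(tokens):
    """Sum of learned token weights; higher = 'more like past hires'."""
    return sum(weights.get(t, 0.0) for t in tokens)
```

In this toy data, "women's" appears only on resumes that were not hired, so `weights["women's"]` comes out negative, and `score({"women's", "chess", "captain"})` falls below `score({"chess", "captain"})` - the same token-penalty pattern the report describes, arising from nothing but the training labels.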