an AI resume screener had been trained on CVs of employees already at the firm, giving people extra marks if they listed “baseball” or “basketball” – hobbies that were linked to more successful staff, often men. Those who mentioned “softball” – typically women – were downgraded.
Marginalised groups often “fall through the cracks, because they have different hobbies, they went to different schools”
This is not “AI”. This is “keyword filtering”. It’s been done for decades. Why, tech writers, why must you not use your brains when writing these articles? … We aren’t going to believe a word you write if you can’t get basic facts straight.
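For contrast with what the later replies describe, here is a minimal, purely hypothetical sketch of the decades-old approach this comment is talking about: an explicit, hand-written keyword filter, where every rule is visible and someone had to type it in deliberately. The word lists and function name are invented for illustration.

```python
# Hypothetical sketch of classic keyword filtering: a human explicitly
# writes down which words boost or penalize a resume. Nothing is learned.
BOOST_WORDS = {"baseball", "basketball"}
PENALTY_WORDS = {"softball"}

def score(resume: str) -> int:
    """Score a resume by counting explicit boost/penalty keywords."""
    words = set(resume.lower().split())
    return len(words & BOOST_WORDS) - len(words & PENALTY_WORDS)

print(score("captain of the softball team"))      # penalized by a visible rule
print(score("played baseball and basketball"))    # boosted by visible rules
```

The point of the sketch: if a filter like this downgrades “softball”, it is because someone explicitly wrote that rule, which is exactly the distinction the replies below argue about.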
Ah, but the AI part comes in not knowing what the keywords are because it’s all mangled into some neural network soup.
They use their brains just fine. They know AI is clickbait gold, and that’s all that matters.
A few well-informed people get turned off by it? Who cares, they got a big chunk of readers from news aggregators.
No, it’s pretty clear that this is a result of modern “AI”… keyword filtering wouldn’t push applicants mentioning basketball/baseball up and softball down, unless HR is explicitly being sexist and classist/racist like that.
I mean, the problem certainly existed before ML & AI were being used, but this is pretty clearly the result of a poorly vetted training dataset, which is very different from keyword filtering. I don’t think HR a decade ago was adding or deducting points for resumes mentioning sports/hobbies irrelevant to the job.
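The mechanism this reply describes can be sketched in a few lines: a toy model trained on biased historical hiring data learns “softball” as a negative signal without anyone ever writing a keyword rule. Everything here is fabricated for illustration (the vocabulary, the “history” data, the tiny logistic regression); it is a sketch of the failure mode, not of any real screening system.

```python
# Hypothetical sketch: a tiny logistic regression trained on biased
# "historical hires" data. No human writes a softball rule; the model
# infers one because the hobby correlates with past hiring outcomes.
import math

VOCAB = ["baseball", "softball", "python", "excel"]

def featurize(resume: str) -> list[float]:
    """One-hot presence features over the toy vocabulary."""
    words = set(resume.lower().split())
    return [1.0 if w in words else 0.0 for w in VOCAB]

# Fabricated history: outcome tracks the hobby, not the actual skills.
history = [
    ("baseball python", 1), ("baseball excel", 1),
    ("softball python", 0), ("softball excel", 0),
] * 25

weights = [0.0] * len(VOCAB)
bias = 0.0
lr = 0.5
for _ in range(200):  # plain stochastic gradient descent
    for resume, label in history:
        x = featurize(resume)
        z = bias + sum(w * xi for w, xi in zip(weights, x))
        p = 1 / (1 + math.exp(-z))   # sigmoid
        err = p - label
        bias -= lr * err
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]

for word, w in zip(VOCAB, weights):
    print(f"{word:9s} {w:+.2f}")
```

Running this, the learned weight for “baseball” comes out positive and “softball” negative, while the skill words (which appear equally in hired and rejected examples) stay near zero; the bias is baked into the weights rather than into any inspectable keyword list, which is the “neural network soup” point made earlier in the thread.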
Why do you think you can’t use a badly trained “AI” to do keyword filtering?
You can. It’s just not new, not news.