New York City businesses that use artificial intelligence to help screen and hire job candidates now have to show that the process was free from sexism and racism.
A new law, which takes effect Wednesday, is believed to be the first of its kind in the world. Under New York’s new rule, hiring software that relies on machine learning or artificial intelligence to help employers choose preferred candidates or weed out bad ones, known as an automated employment decision tool, or AEDT, must pass an audit by an independent third party to show it’s free of racist or sexist bias.
Companies that run AI hiring software must also publish the audit results, and businesses relying on third-party AEDT software can no longer legally use programs that haven’t been audited.
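Under the rules implementing the law, those published results center on two statistics computed across sex and race/ethnicity categories: each group’s selection rate, and its impact ratio, meaning the group’s rate divided by the highest group’s rate. The snippet below is a minimal sketch of that arithmetic, assuming a simple pass/fail screening tool; the function name, input shape, and group labels are illustrative assumptions, not the law’s reporting schema.

```python
from collections import defaultdict

def impact_ratios(records):
    """Selection rate and impact ratio for each demographic category.

    `records` is an iterable of (category, selected) pairs, where
    `selected` is True if the tool advanced the candidate. The input
    shape and category names are illustrative, not the law's schema.
    """
    totals = defaultdict(int)
    advanced = defaultdict(int)
    for category, selected in records:
        totals[category] += 1
        if selected:
            advanced[category] += 1

    rates = {c: advanced[c] / totals[c] for c in totals}
    benchmark = max(rates.values())  # highest group selection rate
    # Impact ratio: each group's rate divided by the best group's rate.
    return {c: (rate, rate / benchmark) for c, rate in rates.items()}

# Toy example: the tool advances 40% of one group but only 25% of another.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)
for category, (rate, ratio) in sorted(impact_ratios(sample).items()):
    print(f"{category}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this hypothetical example, group_b’s impact ratio of roughly 0.62 is the kind of figure an audit would surface and an employer would have to publish.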
Companies are increasingly using automated tools in their hiring processes. Cathy O’Neil, the CEO of Orcaa, a consulting firm that audits hiring tools for companies seeking to comply with New York’s new law, said tools that automatically judge job candidates have become necessary in part because job seekers are using their own tools to send out huge numbers of applications.
“In the age of the internet, it’s a lot easier to apply for a job. And there are tools for candidates to streamline that process. Like ‘give us your resume and we will apply to 400 jobs,’” O’Neil said. “They get just too many applications. They have to cull the list somehow, so these algorithms do that for them.”
AI-infused hiring programs have drawn scrutiny, most notably over whether they end up exhibiting biases based on the data they’re trained on. Studies have long found that programs built on machine learning or artificial intelligence often reproduce racism, sexism and other biases.
As flashy generative AI applications like ChatGPT and Midjourney have surged in popularity, federal lawmakers and even many tech company executives have repeatedly called for regulation. But so far, there’s little sense from Congress of what that might look like.
Experts say that while the New York law is important for workers, it is still very limited. Julia Stoyanovich, a computer science professor at New York University and a founding member of the city’s Automated Decision Systems Task Force, called it an important start.
“First of all, I’m really glad the law is on the books, that there are rules now and we’re going to start enforcing them,” Stoyanovich said.
“But there are also lots of gaps. So, for example, the bias audit is very limited in terms of categories. We don’t look at age-based discrimination, for example, which in hiring is a huge deal, or disabilities,” she added.