When it comes to employment decisions, it's not just AI that may have biases
New York City’s new law meant to reduce AI-based discriminatory hiring practices is incomplete, experts say. The legislation fails to address human bias, among other loopholes.
Artificial intelligence-based tools can ease administrative burdens that plague state and local government agencies, including hiring processes. But use of the technology in the employment sector has raised concerns regarding the fairness of machine-made decisions.
In general, AI tools can help hiring managers rank job applicants based on criteria such as years of experience or previous job roles. “The ranking isn't replacing what the hiring team does, but helping make their job easier,” said Daniel Castro, director of the Center for Data Innovation. “[Hiring teams] want to spend more time interviewing versus just reading through the actual resumes themselves.”
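To make the kind of ranking Castro describes concrete, the sketch below scores applicants on simple criteria such as years of experience and prior roles. The field names, weights and scoring rule are illustrative assumptions, not drawn from any specific vendor's tool.

```python
# Hypothetical sketch of a criteria-based resume ranker; field names and
# weights are illustrative assumptions, not any real vendor's scoring rule.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    years_experience: float
    prior_roles: list[str]

# Criteria a hiring team might configure: target roles and a weight per signal.
TARGET_ROLES = {"teacher", "instructional aide"}
WEIGHTS = {"experience": 1.0, "role_match": 2.5}

def score(applicant: Applicant) -> float:
    """Combine weighted criteria into a single ranking score."""
    matches = sum(1 for role in applicant.prior_roles if role in TARGET_ROLES)
    return (WEIGHTS["experience"] * applicant.years_experience
            + WEIGHTS["role_match"] * matches)

applicants = [
    Applicant("A", years_experience=6, prior_roles=["teacher"]),
    Applicant("B", years_experience=10, prior_roles=["analyst"]),
]

# Rank highest-scoring first; the hiring team reviews the list, the tool does not decide.
for a in sorted(applicants, key=score, reverse=True):
    print(f"{a.name}: {score(a):.1f}")
```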
But AI-based hiring services may fall flat when the algorithms used to assess job candidates are biased. “There’s a risk that you could train a system to say the ideal candidate is the one we already have, and that could embed existing stereotypes [and] workforce culture into the system,” he said.
Public school systems that use hiring algorithms, for example, may inadvertently select women’s applications more frequently because women have traditionally filled teaching positions. A resume detail as small as participation on a women’s volleyball team, Castro said, could prompt an algorithm to immediately rank that candidate higher or lower depending on the criteria an organization is searching for.
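A toy model makes the proxy-feature risk visible: if past hires skewed toward one group, a seemingly innocuous resume phrase can pick up a nonzero learned weight and silently shift a candidate's score. The phrases and weights below are invented for illustration, standing in for what a model might absorb from historically skewed hiring data.

```python
# Toy illustration of a proxy feature: these weights are invented, standing in
# for what a model could learn from historically skewed hiring data.
LEARNED_WEIGHTS = {
    "classroom management": 0.8,     # plausibly job-relevant signal
    "curriculum design": 0.7,
    "women's volleyball team": 0.4,  # proxy correlated with past hires, not the job
}

def model_score(resume_text: str) -> float:
    """Sum the learned weights of every phrase found in the resume."""
    text = resume_text.lower()
    return sum(w for phrase, w in LEARNED_WEIGHTS.items() if phrase in text)

with_proxy = model_score("Curriculum design lead; women's volleyball team captain")
without = model_score("Curriculum design lead")
print(with_proxy - without)  # 0.4: the proxy alone moves the ranking
```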
To address workers’ long-standing fears of biased AI hiring systems, New York City is cracking down on automated employment decisions with a new law that requires employers to conduct annual third-party bias audits on the AI tools they use to assess job candidates.
Local Law 144, which is enforced by the city’s Department of Consumer and Worker Protection, also requires that employers make information about their audits publicly available and notify employees or job candidates that AI services are being used during the hiring or promotion process.
Bias audits must include data reflecting the organization’s selection rate of candidates for each race, ethnicity and sex category required by the U.S. Equal Employment Opportunity Commission for reporting purposes. Employers and employment agencies are prohibited from using automated employment decision tools that have not been audited within the last year.
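The audit data the law requires centers on selection rates per demographic category. Below is a minimal sketch of that calculation, along with the kind of impact-ratio comparison (each category's rate divided by the highest category's rate) used in disparate-impact analysis; the records and category labels are invented, not an official audit template.

```python
# Sketch of a selection-rate calculation per demographic category; the data
# and category labels are illustrative, not an official audit template.
from collections import defaultdict

# (category, was_selected) pairs a tool's logs might yield; values are invented.
records = [
    ("Hispanic or Latino, female", True), ("Hispanic or Latino, female", False),
    ("White, male", True), ("White, male", True), ("White, male", False),
    ("Black or African American, male", False), ("Black or African American, male", True),
]

totals: dict[str, int] = defaultdict(int)
selected: dict[str, int] = defaultdict(int)
for category, was_selected in records:
    totals[category] += 1
    selected[category] += was_selected

rates = {c: selected[c] / totals[c] for c in totals}
top = max(rates.values())  # rate of the most-selected category

for category, rate in rates.items():
    # Impact ratio compares each category's rate against the highest rate.
    print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / top:.2f}")
```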
Despite the law’s attempt to level the employment playing field, some experts and advocacy groups have expressed their disapproval of the legislation, which went into effect July 5.
So, why the skepticism toward Local Law 144?
“There’s no real criteria for [third-party audits]. It’s not like these types of audits exist, so there’s a lot of questions about how it can actually be done, whether it will be effective and if they're going to achieve what they're trying to achieve—which is obviously reducing bias in hiring and employment decisions,” Castro said.
Plus, biased humans can trump biased algorithms. Even when informed by an algorithm’s output, hiring decisions are ultimately still made by people, Castro said, “so there’s a lot of concern that [policymakers] are picking on the algorithm here, and they’re not actually getting to the root problems, which is [the] hiring biases that companies have.”
Meanwhile, other cities and states are also trying to limit discriminatory AI-enabled hiring.
The District of Columbia is considering the Stop Discrimination by Algorithms Act of 2023, legislation that aims to eliminate discriminatory practices in life-altering decisions such as employment, education and housing. Under the bill, employers would have to conduct third-party discrimination audits of their AI service providers and notify individuals about how their information is used in automated decision tools.
California also recently took a stab at regulating AI in decision-making. Earlier this year, the state legislature introduced AB 302, which would require the California Department of Technology to review state agencies’ use of high-risk automated decision systems, meaning those that affect employment, health care and criminal justice. If the bill passes, the department would also be required to assess an agency’s plan to mitigate inaccurate, discriminatory or biased decisions.
State and local policymakers can audit AI providers as much as they want, but if bias among hiring teams is not addressed, then they may be “throwing away money on these kinds of algorithmic systems that aren’t that advanced,” Castro said. He also questioned whether it’s worth devoting additional time and resources to scrutinizing such systems.
Policymakers should ensure that the regulations they establish are not too narrow, especially for evolving technologies like AI. A wait-and-see approach might be the ideal route for state and local governments, Castro said, but policymakers also should review existing employment laws and consider how they can be applied to AI hiring tools.
“Now is probably not the time to jump in with new regulations,” Castro said. “You don’t know exactly what you’re doing; you don’t know exactly where the problems are.”