A study examining the efficacy of New York City legislation targeting bias in AI hiring algorithms has found that the law has been largely ineffective in practice.
New York City Local Law 144 (LL144), enacted in 2021, took effect on January 1, 2023, with enforcement beginning in July 2023. It requires employers using automated employment decision tools (AEDTs) to commission annual audits checking for race and gender bias, publish the audit results on their websites, and disclose in job postings that such software is used in the hiring process.
The study, conducted by researchers from Cornell University, the nonprofit Consumer Reports, and the nonprofit Data & Society Research Institute, has not yet been formally published but was shared with The Register. Of 391 employers surveyed, only 18 had published the audit reports the law requires, and just 13 had posted the mandated transparency notices (11 of those 13 had also published audit reports).
According to Jacob Metcalf, a researcher at Data & Society and one of the study's authors, LL144 gives employers wide discretion to decide whether their systems fall under the law at all, leaving several routes by which they can avoid compliance.
Although the law does not require any specific response when an audit reveals discriminatory outcomes, companies using biased AEDTs may still face consequences. Metcalf noted that employers whose published audits show disparate impact expose themselves to other forms of legal action, such as costly civil lawsuits for employment discrimination.
In a second paper now in preparation, Metcalf and his colleagues examine the experiences of auditors evaluating AEDTs used by companies in New York City. Preliminary findings, reviewed by The Register, suggest that audits have uncovered instances of discrimination.
Laws resembling LL144 have been under consideration in other jurisdictions, but progress has stalled as policymakers take stock of how little effect New York City's legislation has had on AI hiring bias. Metcalf said sponsors of similar bills are reassessing their approach, with little advancement reported on comparable laws elsewhere.
The European Union's AI Act, provisionally agreed in December 2023, places AI used in recruitment in its "high risk" category, requiring thorough review both before the software enters the market and during its operation. The EU has yet to finalize the legislation, however.
Bias in AI systems is well documented; a lawsuit accusing HR software company Workday of discriminating against Black job candidates illustrates the kind of persistent problem that laws like LL144 are meant to address.
Despite its shortcomings, the researchers view LL144 as a foundation for more effective regulation. Metcalf stressed the need for continued experimentation with accountability structures: early criticisms of LL144 have been borne out, but the experience has yielded useful lessons for future enforcement.
A key takeaway, according to Metcalf, is that future laws should cover a broader range of AEDT use. LL144's abstract definition of covered usage, tools that assist or replace discretionary decision-making in employment matters, leaves too much room for interpretation. To make future laws against AEDT discrimination more effective, the researchers advocate removing such qualifications on when AI hiring algorithms are covered.
Metcalf's conclusion: any system that generates a score should fall within regulatory scope, without leaving employers discretion over whether they must comply, because that discretion undermines the accountability such laws are designed to deliver.