If you’ve worried that candidate-screening algorithms are hurting your chances of landing your dream job, Hilke Schellmann’s book “The Algorithm” may offer some solace. The investigative reporter and NYU journalism professor shows how HR departments use hiring software that not only perpetuates bias but also fails at its stated goal of selecting the best candidate.
Schellmann examines several varieties of this software, including personality assessments that analyze facial expressions, vocal tones, and social media activity. Posing as a job seeker, she tested these tools herself, from resume screeners to game-based assessments. One application deemed her a perfect fit for the position even though she had deliberately answered with nonsense spoken in German. Her Twitter activity earned praise for “consistency,” while her LinkedIn profile received a less favorable evaluation.
The revelations may make you reconsider your reliance on platforms like LinkedIn, or even tempt you to start your own business. But Schellmann also offers constructive guidance on how society can rein in biased HR systems. In an interview edited for length and clarity, she shares advice for job seekers navigating these algorithms.
When Schellmann began exploring the HR tech landscape in 2018, she was struck by how widely AI tools had already spread through the industry, signaling a significant shift in the field. Software companies tout their products as bias-free, but in reality AI can reproduce the biases present in the data it was trained on. Schellmann uncovered one case, for example, in which a resume-screening tool changed a candidate’s score when it encountered the term “African American” on a resume.
Who is accountable for AI-driven discrimination remains contested, and the legal questions are far from settled. Vendors often disclaim responsibility, since the final hiring decision rests with the organizations using their tools. Yet cases have emerged in which applicants scoring below a certain AI-assigned threshold were automatically rejected, raising questions about the fairness and efficacy of these systems.
The opacity of these algorithms makes accountability and transparency difficult in high-stakes decisions. Because many of the tools rely on deep neural networks, their outputs are hard to interpret, which further hinders efforts to correct biases ingrained in the training data.
The use of game-based personality assessments and physical-appearance analysis in hiring recalls discredited pseudosciences like phrenology and physiognomy. Schellmann points to the fallacy of inferring intrinsic qualities from external features, and to the limits of such practices for accurately judging candidates’ suitability for a role.
Job seekers can also turn AI to their own advantage, using it to strengthen application materials and prepare for interviews. Tailoring a resume’s wording to the job description and reaching out directly to hiring managers on platforms like LinkedIn can help candidates stand out in a competitive market.
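To see why tailoring resume wording to the job description can matter, consider that many screeners are believed to start from simple keyword matching. The sketch below is purely illustrative, not any vendor’s actual method; the function names and the scoring scheme are assumptions for the example.

```python
# Illustrative sketch of naive keyword-overlap scoring, the kind of
# matching resume screeners are often believed to approximate.
# Not any real vendor's algorithm.
import re

def tokenize(text):
    """Lowercase the text and extract alphabetic word tokens as a set."""
    return set(re.findall(r"[a-z]+", text.lower()))

def keyword_overlap_score(resume, job_description):
    """Return the fraction of job-description terms also found in the resume."""
    resume_terms = tokenize(resume)
    job_terms = tokenize(job_description)
    if not job_terms:
        return 0.0
    return len(job_terms & resume_terms) / len(job_terms)

job = "Seeking data analyst skilled in SQL, Python, and dashboard reporting"
resume = "Built Python dashboards and wrote SQL reporting pipelines as a data analyst"
print(keyword_overlap_score(resume, job))  # prints 0.6
```

A scorer this crude rewards echoing the job posting’s exact vocabulary, and it is brittle: here “dashboards” fails to match “dashboard.” That brittleness is one reason aligning resume phrasing with the posting can move the needle, and also one reason such tools fall short of identifying the best candidate.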
As the debate over AI ethics and accountability evolves, Schellmann calls for greater transparency and scrutiny of these tools, including independent evaluation and oversight mechanisms, to mitigate bias and ensure fair and equitable hiring.