The revolution in the hiring process is being driven by new technology. Tools like ChatGPT are making it easier for job seekers to craft resumes and cover letters that stand out. Employers, meanwhile, are increasingly turning to advanced software to sift through applications, assess applicant qualifications, and match candidates to roles.
Pre-employment screening assessments are on the rise, ranging from coding challenges and job simulations to psychometric tests that probe intangible traits such as character, attitudes, integrity, and “emotional intelligence.”
The science behind these methods has been debated for more than a century; today it is being applied in new, digital forms.
James Klusaritz has a profound understanding of these assessments. He admits that he will never view a bubble in the same light again.
Reflecting on his experience, Klusaritz shared, “It’s almost like PTSD.” Shortly after graduating from the University of Pennsylvania with a degree in economics, he considered a career in the corporate sector. But when he applied to prominent consulting firms like McKinsey & Co. and PwC, he often received a familiar automated response: an invitation to complete assessments like the bubble game, part of an evaluation developed by Pymetrics, now integrated into Harver.
The bubble game asks players to click a digital balloon to gradually inflate it, earning money with each click. Click too many times, though, and the balloon pops, wiping out the earnings. Balloons of different colors inflate at different rates, making the task increasingly intricate.
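The game’s mechanics resemble the Balloon Analogue Risk Task (BART), a classic risk-taking measure from psychology. As a rough illustration only (not Pymetrics’ actual scoring, and with invented payoff numbers), a minimal Python sketch shows the trade-off a player faces: more clicks earn more money, but only up to a hidden pop point.

```python
import random

def play_balloon(pumps, pop_at, cents_per_pump=5):
    """One balloon: earn cents per pump, but lose everything
    if the player pumps up to or past the hidden pop point."""
    if pumps >= pop_at:
        return 0                      # balloon popped; earnings lost
    return pumps * cents_per_pump     # winnings banked

def expected_earnings(strategy_pumps, max_pumps=20, trials=10_000, seed=42):
    """Average earnings for a fixed-pump strategy when the hidden
    pop point is uniform over 1..max_pumps (an assumption made
    here for illustration; the real game's rules are undisclosed)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pop_at = rng.randint(1, max_pumps)
        total += play_balloon(strategy_pumps, pop_at)
    return total / trials
```

Under these toy rules, both very cautious and very aggressive strategies earn less on average than a moderate one, which is why a player with no knowledge of the pop distribution cannot tell whether caution or boldness is being rewarded.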
Klusaritz found it challenging to discern the metrics being assessed or how to excel in the evaluation, a deliberate design choice, according to Tomas Chamorro-Premuzic, an organizational psychologist, author, and chief innovation officer at Manpower Group. These assessments aim to uncover latent potential by predicting an individual’s capability to perform tasks they haven’t encountered before or to acquire skills they haven’t yet demonstrated.
Initially utilized by the military during World War I to identify soldiers susceptible to shell shock, psychometric tests have transitioned into the business realm, albeit with practical limitations.
Chamorro-Premuzic explained, “In the past, individuals had to undergo lengthy assessments in assessment centers, involving simulations lasting hours in an office setting.” Present-day psychometric evaluations are concise and predominantly digital. For instance, the Traitify personality test, utilized by major corporations like McDonald’s, prides itself on being the world’s quickest, taking approximately 90 seconds to complete.
Heather Myers, who leads the team behind the Traitify test, emphasized the challenge of building assessments that are both brief and predictive. The test prompts users to swipe through images (for example, a driver stuck in traffic captioned “unfazed”) and indicate whether each one describes them, to gauge how well their dispositions fit the desired roles.
While Traitify adopts a mobile game format over AI for expediting psychometric evaluations, many platforms leverage AI technology. Beth Bynum from the Human Resources Research Organization highlighted the efficiency enhancement potential of AI in test administration.
However, concerns regarding transparency persist. Bynum cautioned, “The abundance of data increases the risk of unintentionally incorporating irrelevant factors into predictions.” She illustrated this with an example where an algorithm trained to differentiate huskies from wolves associated the presence of snow with wolves, leading to a flawed inference.
In the realm of employment, instances like Amazon’s AI recruitment experiment, which exhibited gender bias in candidate evaluation, underscore the need for vigilance. Ben Porr, an organizational psychologist at Harver, emphasized the importance of human oversight in algorithm calibration to mitigate biases and optimize predictive accuracy.
All platforms interviewed for this feature emphasized the involvement of scientific teams and third-party validation to ensure assessment validity, reliability, and fairness. They stressed that these tools should complement human judgment, offering standardized, evidence-based metrics to enhance the objectivity and fairness of hiring decisions.
Despite the rapid proliferation of these assessments, regulatory oversight remains limited in the United States, signaling the need for increased scrutiny.
Chamorro-Premuzic of Manpower warned that, at least in the short term, AI’s use in hiring could exacerbate inequality.
Klusaritz, who has since traded the corporate track for a comedy career on TikTok, still doesn’t know how he fared on the Pymetrics balloon test. He finds solace in making TikTok videos that resonate with his audience by humorously airing common grievances and personal pet peeves.