How Myers-Briggs and AI are misused

Say you're a job applicant with a pretty good idea of what employers want to hear. Like many companies nowadays, your prospective employer gives you a personality test as part of the hiring process. You plan to give answers that show you're enthusiastic, hardworking and a people person.

Then they put you on camera for a verbal version of the test, you frown slightly during one of your answers, and their facial-analysis software decides you're "difficult."

Sorry, next!

This is just one of many issues raised by the growing use of artificial intelligence in hiring, says the new documentary "Persona: The Dark Truth Behind Personality Tests," which premieres Thursday on HBO Max.

The film, directed by Tim Travers Hawkins, begins with the origins of the Myers-Briggs Type Indicator personality test. Created in the mid-20th century by a mother-daughter team, it sorts people along four axes: introversion/extraversion, sensing/intuition, thinking/feeling and judging/perceiving. The test, which has attracted an astrology-like cult following for its 16 four-letter "types," has evolved into an employment tool used throughout corporate America, along with successors such as the "Big Five," which measures five major personality traits: openness, conscientiousness, extraversion, agreeableness and neuroticism.

"Persona" argues that the written test contains certain baked-in biases: for example, the potential to discriminate against those unfamiliar with the kind of language or scenarios the test uses.

And, according to the film, incorporating artificial intelligence into the process makes things even more problematic.

The technology scans written applications for red-flag words and, when an on-camera interview is involved, examines applicants for facial expressions that might contradict their answers.

Four generations of Briggs Myers women.
HBO Max

Facial-analysis technology "operates on the basis of pseudoscientific 19th-century reasoning, according to which emotions and character can be standardized from facial expressions," law professor Ifeoma Ajunwa told The Post.

Ajunwa, who appears in the film, says the potential for bias is huge. "Given that automated systems are typically trained on white male faces and voices, the facial expressions or vocal tones of women and racial minorities can be misjudged. Additionally, there are privacy concerns arising from the collection of biometric data."

HireVue, a widely used recruiting firm, analyzed applicants' facial movements, word choice and voice, then ranked them against other candidates with an automatically generated "employability" score, the Washington Post reported. The company announced last month that it has discontinued the practice.

Although it maintained that "visual analysis no longer significantly added value to assessments," the move followed an outcry over the technology's potentially harmful effects.

Cathy O'Neil is a data science consultant, the author of "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" and one of the experts interviewed in "Persona." Her company, O'Neil Risk Consulting & Algorithmic Auditing (ORCAA), audited HireVue's practices following the announcement.

"No technology is inherently harmful; it's just a tool," she told The Post in an email. "Just as a sharp knife can be used to cut bread or to kill a person, facial recognition could be used to harm individuals or communities… This is especially true because people often assume technology is objective, even perfect. Blind faith in anything deeply complex and deeply opaque is always a mistake."

A typical Myers-Briggs personality test question.
HBO Max

There have been a number of legislative efforts around the use of facial-analysis algorithms in recent years, but New York City is the first to introduce a bill that would specifically regulate their use in hiring. It would require companies to disclose to applicants that they are using the technology, and to conduct an annual bias audit.

Just as a sharp knife can be used to cut bread or kill a person, facial recognition could be used to harm individuals or communities.

Data science consultant Cathy O'Neil

But Ajunwa believes this doesn't go far enough. The bill is "a necessary first step in preserving workers' civil liberties," she said. But "what we need are federal regulations that are attached to federal anti-discrimination laws and that would apply in all states, not just New York."

To those who knew Isabel Briggs Myers, seeing the test, hand in hand with AI, used to mercilessly determine whether people are "hirable" seems far from her original intention, which was to help users find their true callings.

As one of Briggs Myers' nieces puts it in the film, "I think there are ways it's being used that she would want to correct."
