3 lessons from Stanford’s Covid-19 vaccine algorithm disaster

Stanford found itself in hot water last week after deploying a faulty Covid-19 vaccine distribution algorithm. But the fiasco offers a cautionary tale that extends far beyond Stanford’s own doors – and crucial lessons as the country confronts complex decisions about who gets the vaccine, when, and why.

At the heart of the disaster is a rules-based formula designed to determine the order in which thousands of Stanford health workers should be vaccinated. The tool took into account employee-based variables such as age, job-based variables, and public health guidelines, according to MIT Technology Review. But flaws in the calculation meant that hospital administrators and other employees working from home ended up at the front of the line, while only seven of Stanford’s 1,300 medical residents made the list.

Experts told STAT that what went wrong appears to be a story of unintended consequences, the kind that often arise at the intersection of human intuition and artificial intelligence. Here are some key takeaways from the incident and the broader issues it reflects.


Blame people, not algorithms

In their first attempts to explain the problem, Stanford administrators blamed the algorithm. Despite the best intentions, they explained, the algorithm had made a mistake that people then had to answer for.

It’s a bit like blaming the hammer for missing the nail.


Experts told STAT that this is a human problem from start to finish. Saying otherwise compounds the problem by implicating algorithms in general without understanding how this one’s use went wrong.

“To me, this seems to be a case of well-meaning people who wanted to be guided by data and made an honest mistake,” said Nigam Shah, a professor of bioinformatics at Stanford. “We should use this as a learning opportunity rather than an occasion for indignation.”

Critically, Stanford’s algorithm was not powered by machine learning, in which the computer learns from data without explicit programming by humans. Rather, it was rule-based, as MIT Technology Review explained, meaning that people wrote a set of instructions that the tool simply executed.

The inevitable conclusion is that something went wrong with those instructions. But what was it? And why weren’t the issues caught and corrected before the tool was used? Those are fundamentally questions for the people involved, not the tool they used.

Julie Greicius, Stanford Medicine’s senior director of external communications, did not answer questions from STAT, including what went wrong with the algorithm, but said the university quickly revised its vaccine distribution plan to prioritize health care workers, including residents and fellows. Stanford also created a new committee that takes into account the interests of all stakeholders, she said.

“We are optimistic that all of our health care workers will receive the vaccine in the next two weeks,” Greicius added.

Beware of structural biases in the data

In building an algorithm that decides which personnel to protect first, Stanford had to decide which outcome was more important to prevent: deaths from Covid-19 or infections with the virus. Depending on that choice, the algorithm would consider the same important factors – including age, job title, and theoretical risk of Covid-19 exposure – but weigh them differently.

The algorithm appears to have been geared primarily toward preventing deaths rather than infections. For that reason, it would have given more weight to factors such as age and less weight to factors such as theoretical exposure.

Complicating matters further, the tool does not appear to have accounted for workers’ actual exposure to the virus or for changes in hospital rules and protocols during the pandemic, several experts and a Stanford physician argued.

“I think it was designed with the best of intentions,” said Jeffrey Bien, a Stanford oncologist, “but there are difficult decisions to make. If you design the algorithm to prevent as many deaths as possible, it would be different from designing it to prevent as many infections as possible.”

Take, for example, a 68-year-old senior clinician who normally cares for patients in the hospital but has been seeing patients remotely during the pandemic. Their age and normal job requirements would theoretically put the clinician at heightened risk from the virus. But given the circumstances, the clinician would have virtually no physical interaction with potential Covid-19 patients and far less exposure to the virus.

On the other hand, medical residents, fellows, and trainees would largely be considered lower risk based on their age and on their job requirements in non-pandemic times.

But the fact that these younger residents now interact with dozens of Covid-19 patients every day renders that theoretical risk beside the point. What matters far more is their real risk – the actual likelihood, given those interactions, of becoming infected with Covid-19.

However, if Stanford’s algorithm was indeed programmed primarily to prevent deaths, many front-line employees – despite their disproportionately high risk of exposure to Covid-19 – would end up at the back of the line when it came time to distribute the vaccine, because of their age.
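To make that weighting concrete, here is a minimal, hypothetical sketch of a death-focused rules-based score. The function name, weights, and normalization below are illustrative assumptions, not Stanford’s actual formula, which has not been published in full.

```python
# Hypothetical rules-based priority score -- an illustrative assumption, not Stanford's formula.
# A death-focused rule weights age heavily and theoretical exposure lightly.

def priority_score(age, theoretical_exposure, weights=(0.7, 0.3)):
    """Score vaccination priority from simple hand-written rules.

    age: employee age in years
    theoretical_exposure: 0.0-1.0 estimate of on-the-job exposure, based on job title
    weights: (age_weight, exposure_weight) -- assumed values for illustration
    """
    age_weight, exposure_weight = weights
    return age_weight * min(age / 100, 1.0) + exposure_weight * theoretical_exposure


# The 68-year-old clinician now seeing patients remotely outranks
# the 29-year-old resident staffing a Covid-19 ward:
print(priority_score(68, theoretical_exposure=0.1))   # ~0.51
print(priority_score(29, theoretical_exposure=0.9))   # ~0.47
```

Swap the weights toward exposure and the ordering flips, which is Bien’s point about how much the choice of objective matters.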

“There’s a difference between your theoretical population and the population you actually have [in the algorithm],” said Andrew Beam, an artificial intelligence expert and professor of epidemiology at the Harvard T.H. Chan School of Public Health. “You’re right to think that the elderly are at risk, but if those elderly people aren’t actually caring for Covid patients, you have to account for that, and that seems to be the fundamental mismatch here.”

Validate the algorithms before implementing them

Because this was a straightforward rules-based algorithm, Beam said, Stanford’s developers may have assumed it would produce the result they intended. After all, they understood every factor the algorithm took into account, so surely it would prioritize the right people for vaccination.

But the only way to know for sure is to test the algorithm before it is deployed.

“They could have sent out an email saying, ‘Here’s our vaccine allocation tool, would you mind entering your job, age, and level of training’ – then they could have seen very quickly what the allocation would look like,” Beam said. “It would have said, ‘Jeez, we’re going to vaccinate five of our 1,300 residents.’”
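A dry run of that kind is easy to sketch. The code below is an illustrative assumption rather than Stanford’s actual process: it applies a stand-in scoring rule to a made-up roster (the role names, ages, exposure values, and tranche size are all invented) and reports which groups land in the first allocation tranche, the sanity check Beam describes.

```python
# Hypothetical dry-run audit of a rules-based allocation tool -- illustrative only.
# The scoring rule and the roster below are made-up stand-ins, not Stanford's data.

from dataclasses import dataclass
from collections import Counter

@dataclass
class Employee:
    role: str
    age: int
    exposure: float   # 0.0-1.0 estimate of on-the-job Covid-19 exposure

def death_focused_score(e: Employee) -> float:
    # Stand-in rule: weight age heavily, actual exposure lightly.
    return 0.7 * min(e.age / 100, 1.0) + 0.3 * e.exposure

def audit_first_tranche(roster, score_fn, tranche_size):
    """Rank the roster by score and count the roles in the first allocation tranche."""
    ranked = sorted(roster, key=score_fn, reverse=True)
    return Counter(e.role for e in ranked[:tranche_size])

# Made-up roster: many young, highly exposed residents plus older, low-exposure staff.
roster = (
    [Employee("resident", 29, 0.9) for _ in range(1300)]
    + [Employee("administrator", 66, 0.05) for _ in range(900)]
    + [Employee("attending", 55, 0.6) for _ in range(800)]
)

# With the first 1,000 doses, no residents make the cut:
# Counter({'attending': 800, 'administrator': 200})
print(audit_first_tranche(roster, death_focused_score, tranche_size=1000))
```

Running a check like this against the real employee list, before any doses were scheduled, would have surfaced the skewed allocation immediately.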

Such an audit is a crucial step in the development of AI, especially in medicine, where unfairness can undermine a person’s health as well as their trust in the system of care. The insidious thing about bias is that it is so difficult for people to see or police in themselves. But AI has a way of laying it out plainly for everyone to see.

“The problem with computers,” Beam said, “is that they do exactly what you tell them to do.”
