Timnit Gebru’s departure from Google exposes a crisis in AI

This year has held many things, among them bold claims of artificial intelligence breakthroughs. Industry commentators speculated that the language-generation model GPT-3 may have achieved “artificial general intelligence,” while others lauded Alphabet subsidiary DeepMind’s protein-folding algorithm, AlphaFold, and its capacity to “transform biology.” While the basis of such claims is thinner than the effusive headlines suggest, this hasn’t done much to dampen enthusiasm across the industry, whose profits and prestige depend on AI’s proliferation.

Against this backdrop, Google fired Timnit Gebru, our dear friend and colleague and a leader in the field of artificial intelligence. She is also one of the few Black women in AI research and an unflinching advocate for bringing more BIPOC, women, and non-Western people into the field. By any measure, she excelled at the job Google hired her to do, including demonstrating racial and gender disparities in facial-analysis technologies and developing reporting guidelines for data sets and AI models. Ironically, this and her vocal advocacy for those underrepresented in AI research are also the reasons, she says, the company fired her. According to Gebru, after demanding that she and her colleagues withdraw a research paper critical of (profitable) large-scale AI systems, Google Research told her team that it had accepted her resignation, despite the fact that she had not resigned. (Google declined to comment on this story.)

Google’s appalling treatment of Gebru exposes a dual crisis in AI research. The field is dominated by an elite, primarily white male workforce, and it is controlled and funded chiefly by major industry players: Microsoft, Facebook, Amazon, IBM, and yes, Google. With Gebru’s firing, the politics of civility that had restrained the young effort to build necessary guardrails around AI was torn apart, bringing questions about the racial homogeneity of the AI workforce and the ineffectiveness of corporate diversity programs to the center of the discourse. But the situation has also made clear that, however sincere the promises of a company like Google may seem, corporate-funded research can never be divorced from the realities of power and the flows of revenue and capital.

This should concern us all. As AI proliferates in areas such as health care, criminal justice, and education, researchers and advocates are raising urgent concerns. These systems make determinations that directly shape lives, even as they are embedded in organizations structured to reinforce histories of racial discrimination. AI systems also concentrate power in the hands of those who design and use them, while obscuring responsibility (and liability) behind the veneer of complex computation. The risks are profound, and the incentives are decidedly perverse.

The current crisis exposes the structural barriers limiting our ability to build effective protections around AI systems. This is especially important because the populations subject to harm and bias from AI’s predictions and determinations are primarily BIPOC people, women, religious and gender minorities, and the poor: those who have borne the brunt of structural discrimination. Here we have a clear racialized divide between those who benefit, the corporations and the primarily white male researchers and developers, and those most likely to be harmed.

Take, for example, facial recognition technologies, which have been shown to “recognize” darker-skinned people less frequently than lighter-skinned people. This alone is alarming. But these racialized “errors” are not the only problems with facial recognition. Tawana Petty, director of organizing at Data for Black Lives, points out that these systems are disproportionately deployed in predominantly Black neighborhoods and cities, while the cities that have succeeded in banning or pushing back against the use of facial recognition are predominantly white.

Without independent, critical research that centers the perspectives and experiences of those harmed by these technologies, our ability to understand and contest the industry’s overhyped claims is significantly hampered. Google’s treatment of Gebru makes it increasingly clear where the company’s priorities lie when critical work pushes back against its business incentives. This makes it almost impossible to ensure that AI systems are accountable to the people most vulnerable to their harms.

Checks on the industry are further compromised by the close ties between technology companies and ostensibly independent academic institutions. Researchers from corporations and academia publish papers together and rub elbows at the same conferences, and some researchers even hold concurrent positions at tech companies and universities. This blurs the line between academic and corporate research and obscures the incentives underwriting such work. It also means the two groups look awfully similar: AI research in academia suffers from the same pernicious problems of racial and gender homogeneity as its corporate counterparts. Moreover, top computer science departments accept large sums of Big Tech research funding. We need only look to Big Tobacco and Big Oil for troubling templates that expose just how much influence large companies can exert over the public understanding of complex scientific issues when knowledge creation is left in their hands.