Google sparked an uproar earlier this month when it fired Timnit Gebru, the co-leader of a company research team studying the ethical implications of artificial intelligence. Google claims that it accepted her resignation, but Gebru, who is Black, says she was fired because she drew unwanted attention to the lack of diversity in Google’s workforce. She had also been at loggerheads with her supervisors over their request that she withdraw a paper she had co-authored on ethical issues associated with certain types of AI models that are central to Google’s business.
On this week’s Trend Lines podcast, WPR’s Elliot Waldman was joined by Karen Hao, senior AI reporter for the MIT Technology Review, to discuss Gebru’s ouster and its implications for the increasingly important field of AI ethics.
Listen to the full interview with Karen Hao on the Trend Lines podcast:
The following is a partial transcript of the interview. It has been lightly edited for clarity.
World Politics Review: First of all, could you tell us a little bit about Gebru and the kind of stature she has in the field of AI, given the pioneering research she has done, and how she came to work at Google in the first place?
Karen Hao: Timnit Gebru, you could say, is one of the cornerstones of the field of AI ethics. She got her doctorate in AI ethics at Stanford under the guidance of Fei-Fei Li, who is one of the pioneers of the entire field of AI. When Timnit completed her doctorate at Stanford, she joined Microsoft for a postdoctoral fellowship before joining Google, which approached her on the strength of her impressive work. Google was starting its AI ethics team and thought she would be a wonderful person to co-lead it. One of the studies she is best known for is one she co-authored with another researcher of color, Joy Buolamwini, on the algorithmic discrimination that occurs in commercial facial recognition systems.
The paper was published in 2018 and, at the time, its revelations were quite shocking, because it audited commercial facial recognition systems that were already being sold by the technology giants. The results showed that these systems, which were sold on the premise that they were extremely accurate, were in fact extremely inaccurate, especially on darker-skinned and female faces. In the two years since the paper was published, a number of events have ultimately led these tech giants to abandon or suspend the sale of their facial recognition products to police. The seed of those actions was actually planted by the paper that Timnit co-authored. So she is a very big presence in the field of AI ethics and has done a lot of groundbreaking work. She also co-founded a nonprofit organization called Black in AI, which promotes diversity in technology and in AI specifically. She is a force of nature and a well-known name in the space.
We should think about how to develop new AI systems that do not rely on this brute-force method of scraping billions and billions of sentences from the internet.
WPR: What exactly are the ethical issues that Gebru and her co-authors identified in the paper that led to her dismissal?
Hao: The paper talked about the risks of large-scale language models, which are essentially AI algorithms that are trained on an enormous amount of text. You can imagine that they are trained on all the articles that have been published on the internet – all the subreddits, the Reddit threads, the Twitter and Instagram captions – everything. They try to learn how sentences are constructed in English, and how they could then generate sentences in English. One of the reasons Google is so interested in this technology is that it helps power its search engine. In order for Google to give you relevant results when you search for a query, it needs to be able to capture or interpret the context of what you’re saying, so that if you enter three random words, it can glean the intent of what you’re looking for.
What Timnit and her co-authors point out in this paper is that this relatively recent area of research has benefits, but it also has some rather significant downsides that need to be discussed more. One of them is that these models consume an enormous amount of electricity, because they run in very large data centers. And given that we are in a global climate crisis, the field should consider that doing this research could exacerbate climate change and have downstream effects that disproportionately affect marginalized communities and developing countries. Another risk she points out is that these models are so large that they are very difficult to examine, and they also capture large swaths of the internet that are very toxic.
So they end up normalizing a lot of sexist, racist or abusive language that we don’t want to perpetuate into the future. But because these models are so hard to examine, we are not able to fully dissect the kinds of things they learn and then weed them out. Ultimately, the conclusion of the paper is that these systems have great advantages, but they also carry great risks. And, as a field, we should spend more time thinking about how we can actually develop new AI language systems that don’t rely so much on this brute-force method of just training them on billions and billions of sentences scraped from the internet.
WPR: And how did Gebru’s supervisors at Google react?
Hao: What’s interesting is that Timnit said – and this was corroborated by her former teammates – that the paper had actually been approved for a conference. This is a very standard process for her team and within the larger Google AI research team. The purpose of all research is to contribute to academic discourse, and the best way to do that is to submit it to an academic conference. They prepared this paper together with several external collaborators and submitted it to one of the premier conferences in AI ethics for next year. It had been approved by her manager and others, but then, at the last minute, she received a notification from superiors above her manager saying she had to withdraw the paper.
Very little was revealed to her as to why she had to withdraw the paper. She went on to ask many questions about who had told her to withdraw it, why she was being asked to withdraw it, and whether changes could be made to make it more acceptable for presentation. She continued to be stonewalled and received no further clarification, so she ended up sending an email just before she left for Thanksgiving vacation, saying she would not withdraw the paper unless certain conditions were met first.
Silicon Valley has a conception of how the world works, based on the disproportionate representation of a certain subset of the world – that is, white men, usually upper class.
She asked who had given the feedback and what the feedback was. She also requested meetings with several executives to explain what had happened. The way her research was treated was extremely disrespectful, and it was not the way researchers were traditionally treated at Google, so she wanted an explanation for why they had done this. And if they did not meet these conditions, she would have a frank conversation with them about a last date at Google, so that she could create a transition plan, leave the company smoothly and publish the paper outside the context of Google. Then she went on vacation, and in the middle of it, one of her direct reports texted her to say they had received an email saying that Google had accepted her resignation.
WPR: As for the issues that Gebru and her co-authors raise in their work, what does it mean for AI to have what appears to be this massive level of moral hazard, in which the communities most at risk from the impacts Gebru and her co-authors identified – environmental and other ramifications – are marginalized and often have no voice in technology, while the engineers who build these AI models are largely insulated from the risks?
Hao: I think this gets at the core of what has been an ongoing discussion in this community for the past two years, which is that Silicon Valley has a conception of how the world works based on the disproportionate representation of a particular subset of the world – that is, white men, usually upper class. The values they derive from their cross-section of lived experience have now somehow become the values that everyone must live by. But it doesn’t always work that way.
They do this cost-benefit analysis and conclude that it is worth creating these very large language models, and worth spending all that money and electricity to get the benefits of this kind of research. But that analysis is based on their values and their experience, and it may not be the same cost-benefit analysis that someone in a developing country would make, where they would rather not have to deal with the effects of climate change. This was one of the reasons why Timnit was so determined to ensure that there was more diversity at the decision-making table. If you have more people with different lived experiences, who can then analyze the impact of these technologies through their own lenses and bring their voices into the conversation, then we would probably have more technologies that don’t benefit one group at the expense of others.
Editor’s note: The photo above is available under a CC BY 2.0 license.