Almost three weeks after the sudden departure of Black artificial intelligence ethicist Timnit Gebru, more details are emerging about the new set of policy restrictions Google has placed on its research team.
After reviewing internal communications and speaking with researchers affected by the rule change, Reuters reported on Wednesday that the tech giant recently added a review process for research on "sensitive topics," and in at least three cases explicitly asked researchers to refrain from casting Google's technology in a negative light.
Under the new procedure, researchers must consult with legal, policy, and public relations teams before pursuing AI research on so-called sensitive topics, which could include face analysis and categorizations of race, gender, or political affiliation.
In one example reviewed by Reuters, researchers studying the recommendation AI used to populate user feeds on platforms like YouTube, a Google-owned property, drafted a paper detailing concerns that the technology could be used to promote "misinformation, discrimination or otherwise unfair results" and "insufficient diversity of content," and could lead to "political polarization." After review by a senior manager who instructed the researchers to strike a more positive tone, the final publication instead suggested that such systems can promote "accurate information, fairness, and diversity of content."
"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," reads an internal webpage describing the policy.
In recent weeks, and especially following the departure of Gebru, a prominent researcher who appears to have been pushed out after sounding the alarm about censorship creeping into the research process, Google has faced increased scrutiny over potential bias in its internal research division.
Four staff researchers who spoke with Reuters corroborated Gebru's claims, saying they too believe Google is beginning to interfere with critical studies of its technology's potential for harm.
"If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are inconsistent with high-quality peer review, then we face a serious problem of censorship," said Margaret Mitchell, a senior scientist at the company.
In early December, Gebru said she was fired by Google after pushing back against an order not to publish research arguing that speech-mimicking AI could put marginalized populations at a disadvantage.