Exclusive: Google promises changes to research oversight after internal revolt

(Reuters) – Alphabet Inc.'s Google will change its procedures for reviewing its scientists' work before July, according to a town hall recording heard by Reuters, part of an effort to quell internal turmoil over the integrity of its artificial intelligence (AI) research.

FILE PHOTO: The Google name is displayed outside the company’s office in London, UK, November 1, 2018. REUTERS/Toby Melville

Speaking at a staff meeting last Friday, Google Research executives said they were working to regain trust after the company fired two prominent women and rejected their work, according to an hour-long recording, the contents of which were confirmed by two sources.

Teams are already testing a questionnaire that will assess projects for risk and help scientists navigate reviews, said Maggie Johnson, the research unit’s chief operating officer. This initial change will roll out by the end of the second quarter, and most papers will not require additional vetting, she said.

Reuters reported in December that Google had introduced a “sensitive topics” review for studies involving dozens of issues, such as China or bias in its services. Internal reviewers had demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported.

Jeff Dean, the Google senior vice president who oversees the division, said on Friday that the “sensitive topics” review was “confusing” and that he had tasked senior research director Zoubin Ghahramani with clarifying the rules, according to the recording.

Ghahramani, a Cambridge University professor who joined Google in September from Uber Technologies Inc., said during the town hall, “We need to be comfortable with that discomfort” of self-critical research.

Google declined to comment on Friday’s meeting.

An internal e-mail seen by Reuters provided new detail on Google researchers’ concerns, showing exactly how Google’s legal department modified one of the three AI papers, titled “Extracting Training Data from Large Language Models.” (bit.ly/3dL0oQj)

The e-mail, dated February 8, from a co-author of the paper, Nicholas Carlini, was sent to hundreds of colleagues, seeking to draw their attention to what he called “deeply insidious” edits by the company’s lawyers.

“Let’s be clear here,” the 1,200-word e-mail said. “When we as academics write that we have a ‘concern’ or find something ‘worrying’ and a Google lawyer requires that we change it to sound nicer, this is very much Big Brother intervening.”

The required edits, according to his e-mail, included “negative-to-neutral” swaps, such as changing the word “concerns” to “considerations” and “dangers” to “risks”. Lawyers also demanded the deletion of references to Google technology, of the authors’ finding that the AI leaked copyrighted content, and of the words “breach” and “sensitive”, the e-mail said.

Carlini did not respond to requests for comment. Google, in response to questions about the e-mail, disputed the claim that its lawyers were trying to control the paper’s tone. The company said it had no issues with the topics the paper investigated, but found that some legal terms were used inaccurately, prompting a thorough edit.

RISK AUDIT

Last week, Google also appointed Marian Croak, a pioneer of internet audio technology and one of Google’s few Black vice presidents, to consolidate and manage 10 teams studying issues such as racial bias in algorithms and technology for people with disabilities.

Croak said at Friday’s meeting that it would take time to address researchers’ concerns about AI ethics and to mitigate the damage to Google’s brand.

“Please hold me fully responsible for trying to turn the situation around,” she said in the recording.

Johnson added that the AI organization is bringing in a consulting firm to conduct a wide-ranging racial-equity impact assessment. The department’s first such audit would lead to recommendations “that are going to be pretty hard,” she said.

Tensions in Dean’s division had deepened in December, after Google fired Timnit Gebru, co-lead of its ethical AI research team, following her refusal to retract a paper on language-generating AI. Gebru, who is Black, accused the company at the time of reviewing her work differently because of her identity, and of marginalizing employees from underrepresented backgrounds. Nearly 2,700 employees signed an open letter in support of Gebru. (bit.ly/3us5kj3)

During the town hall, Dean explained what kinds of scholarship the company would support.

“We want responsible research in AI and AI ethics,” Dean said, citing the study of the technology’s environmental costs as an example. But it was problematic, he said, to cite emissions data that was off by “close to a factor of one hundred” while ignoring more accurate statistics, as well as Google’s efforts to reduce emissions. Dean had previously criticized Gebru’s paper for not including important findings on environmental impact.

Gebru defended her paper’s citations. “It is a really bad look for Google to come out this defensively against a paper that has been cited by so many peer institutions,” she told Reuters.

Employees continued to post about their frustrations on Twitter over the past month as Google investigated and then fired Margaret Mitchell, the ethical AI team’s other co-lead, for moving electronic files outside the company. Mitchell said on Twitter that she had acted “to raise concerns about race and gender inequity, and to speak up about Google’s problematic firing of Dr. Gebru.”

Mitchell had contributed to the paper that led to Gebru’s departure; a version published online last month without Google affiliation listed “Shmargaret Shmitchell” as a co-author. (bit.ly/3kmXwKW)

Asked for comment, Mitchell, through her lawyer, expressed disappointment at Dean’s criticism of the paper and said her name had been removed at the company’s direction.

Reporting by Paresh Dave and Jeffrey Dastin; Editing by Jonathan Weber and Lisa Shumaker
