Google tells scientists to use “positive tone” in AI research, documents show

This year, Google moved to tighten control over its scientists’ work by launching a review of “sensitive topics,” and in at least three cases asked authors to refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.

Google’s new review procedure requires researchers to consult legal, policy and public relations teams before pursuing topics such as face and sentiment analysis and categorizations of race, gender or political affiliation, according to internal web pages explaining the policy.

“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one of the pages for research staff stated. Reuters could not determine the date of the post, though three current employees said the policy began in June.

Google declined to comment on the story.

The “sensitive topics” process adds a round of scrutiny to Google’s standard review of papers for pitfalls such as disclosing trade secrets, eight current and former employees said.

For some projects, Google officials intervened at later stages. A senior Google manager who reviewed a study on content recommendation technology shortly before publication this summer told the authors to “take great care to strike a positive tone,” according to internal correspondence read to Reuters.

The manager added: “This does not mean that we should hide from the real challenges” presented by the software.

Subsequent internal correspondence from a researcher to reviewers shows the authors “updated to remove all references to Google products.” A draft seen by Reuters had mentioned Google-owned YouTube.

Four staff researchers, including senior scientist Margaret Mitchell, said they believe Google is starting to interfere with crucial studies of potential technology harms.

“If we are researching the appropriate thing given our expertise, and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Mitchell said.

Google says on its public website that its scientists have “substantial” freedom.

Tensions between Google and some of its staff broke into view this month after the abrupt exit of scientist Timnit Gebru, who led a 12-person team with Mitchell focused on ethics in artificial intelligence (AI) software.

Gebru says Google fired her after she questioned an order not to publish research claiming that AI which mimics speech could disadvantage marginalized populations. Google said it accepted and expedited her resignation. It could not be determined whether Gebru’s work was subject to a “sensitive topics” review.

Jeff Dean, Google’s senior vice president, said in a statement this month that Gebru’s paper dwelled on potential harms without discussing efforts underway to address them.

Dean added that Google supports AI ethics scholarship and is “actively working to improve our paper review processes, because we know that too many checks and balances can become cumbersome.”

Sensitive topics

The explosion in AI research and development in the technology industry has led authorities in the US and elsewhere to propose rules for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate prejudice or erode privacy.

In recent years, Google has incorporated AI into all of its services, using the technology to interpret complex search queries, decide recommendations on YouTube, and auto-complete sentences in Gmail. Its researchers have published more than 200 papers in the last year on developing AI responsibly, among more than 1,000 projects in total, Dean said.

Studying Google services for bias is among the “sensitive topics” under the company’s new policy, according to an internal web page. Dozens of other “sensitive topics” listed include the oil industry, China, Iran, Israel, Covid-19, home security, insurance, location data, religion, self-driving vehicles, telecommunications, and systems that recommend or personalize web content.

The Google paper for which authors were told to strike a positive tone discusses recommendation AI, which services such as YouTube use to personalize users’ content feeds. A draft reviewed by Reuters included “concerns” that the technology could promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.”

The final publication instead says the systems can promote “accurate information, fairness and diversity of content.” The published version, entitled “What are you optimizing for? Aligning recommender systems with human values,” omitted credit to Google researchers. Reuters could not determine why.

A paper this month on AI for understanding a foreign language softened a reference to how the Google Translate product was making mistakes, following a request from the company’s reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was “reviewing and correcting inaccurate translations.”

For a paper published last week, a Google employee described the process as a “long haul,” involving more than 100 email exchanges between researchers and reviewers, according to internal correspondence.

The researchers found the AI could cough up personal data and copyrighted material – including a page from a “Harry Potter” novel – that had been pulled from the internet to develop the system.

A draft described how such disclosures could infringe copyrights or violate European privacy law, a person familiar with the matter said. Following the company’s reviews, the authors removed the legal risks, and Google published the paper.
