Google has tightened its grip on the research done by its staff, according to internal communications and other information obtained by Reuters.
A new “sensitive topics” review procedure has, in at least three cases, “requested authors refrain from casting its technology in a negative light.” News of Google’s new process broke after Reuters obtained internal Google communications and spoke with researchers involved in the work. At least 13 “sensitive topics” were listed in the new review policy, including bias, China, COVID-19 and Israel.
Google executives stepped in during the late stages of a research project on content recommendation technology over the summer. A senior Google manager who reviewed the research told the study’s authors to “take great care to strike a positive tone,” according to internal Google correspondence. Further discussions between researchers and reviewers showed the authors “updated to remove all references to Google products,” Reuters reported.
Reuters obtained an early draft of the study, which mentioned Google-owned YouTube. The draft also included “concerns” that content recommendation and personalization technology can promote “disinformation, discriminatory or otherwise unfair results” and “insufficient diversity of content,” as well as lead to “political polarization.”
The final published version instead said these systems can promote “accurate information, fairness, and diversity of content,” Reuters reported. It also omitted credit to Google researchers.
Google has been quick to fire controversial staffers, and did so as recently as last week. James Damore, a software engineer at Google, was fired in 2017 for writing a memo that questioned the company’s gender diversity efforts. In it he “suggested that at least some of the male-female disparity in tech could be attributed to biological differences,” according to Damore himself. “[A]nd, yes, I said that bias against women was a factor too,” he added.
Google’s new review policy asked researchers to get permission from the company’s legal, policy and public relations teams before initiating projects on topics such as “face and sentiment analysis and categorizations of race, gender or political affiliation,” according to the investigation.
Scientific studies have argued that “facial analysis software and other AI can perpetuate biases or erode privacy,” according to Reuters. Facial analysis and bias are two of the “sensitive topics” Google wishes to shy away from as it works out how best to implement its own technology.
The revealing report also indicated that four staff researchers believe Google is starting to interfere with “crucial studies of potential technology harms.” Senior Vice President of Google Research Jeff Dean said “more than 1,000 projects each year turn into published papers,” including “more than 200 publications focused on responsible AI development in the last year alone.”
Conservatives are under attack. Contact Google at (650) 253-0000, or by mail at 1600 Amphitheatre Parkway, Mountain View, CA 94043, and demand that the platform provide transparency: Companies need to design open systems so that they can be held accountable, while giving weight to privacy concerns. If you have been censored, contact us at the Media Research Center contact form, and help us hold Big Tech accountable.