The CEOs of Facebook, Twitter and Google testify before Congress about misinformation

Members of the House Energy and Commerce Committee are expected to press Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai and Twitter CEO Jack Dorsey about their platforms’ efforts to stop the spread of false claims of electoral fraud and vaccine skepticism. Opaque algorithms that prioritize user engagement and amplify misinformation could also come under scrutiny, a committee memo suggested.

Technology platforms, which already faced intense pressure to remove misinformation and foreign interference in the lead-up to the 2020 elections, have come under greater scrutiny in the months since. Even though some of the companies took new steps to counter conspiracy theories, those measures were not enough to stop President Donald Trump’s staunch supporters from storming the US Capitol.

The hearing also marks the first time the CEOs have appeared before Congress since Trump was banned or suspended from their platforms following the Capitol riot. In their prepared remarks, some of the executives address the events of January 6.

“The attack on the Capitol was a horrifying assault on our values and our democracy, and Facebook is committed to assisting law enforcement in bringing the insurrectionists to justice,” Zuckerberg’s testimony reads. But Zuckerberg also adds, “We do more to address misinformation than any other company.”

The hearing coincides with legislation under active consideration in both the House and Senate to rein in the technology industry. Some bills target the companies’ economic dominance and alleged anti-competitive practices. Others address the platforms’ approach to content moderation or data privacy. The various proposals could impose tough new requirements on technology platforms or expose them to greater legal liability in ways that could reshape the industry.

For the executives, Thursday’s session could also be their last chance to make their case in person to lawmakers before Congress commits to potentially sweeping changes to federal law.

At the heart of the coming policy battle is Section 230 of the Communications Act of 1934, the signature liability shield that grants websites legal immunity for much of the content posted by their users. Members of both parties have called for updates to the law, which has been interpreted broadly by the courts and is credited with enabling the development of the open internet.


The executives’ written testimony, released ahead of Thursday’s high-profile hearing, outlines potential areas of common ground with lawmakers and suggests areas where the companies plan to work with Congress – and areas where Big Tech may push back.

Zuckerberg plans to argue for narrowing the scope of Section 230. In his written remarks, Zuckerberg says Facebook favors a form of conditional liability, in which online platforms could be sued over user content if the companies fail to follow certain best practices established by an independent third party.

The other two CEOs do not wade into the Section 230 debate or discuss the government’s role to the same extent, but they offer their general visions for content moderation. Pichai’s testimony calls for clearer content policies and for giving users a way to appeal content decisions. Dorsey’s testimony reiterates his calls for more user-driven content moderation and for the creation of better settings and tools that let users personalize their online experience.

By now, the chief executives have considerable experience testifying before Congress. Zuckerberg and Dorsey most recently appeared before the Senate in November to discuss content moderation. Before that, Zuckerberg and Pichai testified before the House last summer on antitrust issues.

In the days leading up to Thursday’s hearing, the companies have sought to show they are acting aggressively against misinformation. Facebook said Monday that it removed 1.3 billion fake accounts last fall and that it now has more than 35,000 people working on content moderation. Twitter said this month that it will begin applying warning labels to misinformation about the coronavirus vaccines, and that repeated violations of its Covid-19 policies could lead to permanent bans. YouTube said this month that it has removed tens of thousands of videos containing Covid vaccine misinformation, and in January, after the Capitol riot, it announced it would restrict channels sharing false claims disputing the outcome of the 2020 elections.

But these claims of progress are unlikely to reassure committee members, whose memo cited several research papers indicating that misinformation and extremism remain rampant on the platforms.
