U.S. Sen. John Thune (R-S.D.), chairman of the Senate Committee on Commerce, Science, and Transportation, will convene a hearing titled “Terrorism and Social Media: #IsBigTechDoingEnough?” at 10:00 a.m. on Wednesday, January 17, 2018. The hearing will examine the steps social media platforms are taking to combat the spread of extremist propaganda over the Internet.
Witnesses:
- Ms. Monika Bickert, Head of Global Policy Management, Facebook
- Ms. Juniper Downs, Global Head of Public Policy and Government Relations, YouTube
- Mr. Carlos Monje, Director, Public Policy and Philanthropy, U.S. & Canada, Twitter
- Mr. Clinton Watts, Robert A. Fox Fellow, Foreign Policy Research Institute
This hearing will take place in the Russell Senate Office Building, Room 253. Witness testimony, opening statements, and a live video of the hearing will be available at www.commerce.senate.gov.
Chairman John Thune
I want to thank everyone for being here to examine what social media companies are doing to combat terrorism – including terrorist propaganda and terrorist recruitment efforts – online.
The positive contributions of social media platforms are well documented.
YouTube, Facebook, and Twitter, among others, help to connect people around the world, give voice to those oppressed by totalitarian regimes, and provide a forum for discussions of every political, social, scientific, and cultural stripe.
These services have thrived online because of the freedom made possible by the uniquely American guarantee of free speech, and by a light touch regulatory policy.
But, as is so often the case, enemies of our way of life have sought to take advantage of our freedoms to advance hateful causes.
Violent Islamic terrorist groups like ISIS have been particularly aggressive in seeking to radicalize and recruit over the internet and various social media platforms.
The companies that our witnesses represent have a very difficult task: preserving the environment of openness upon which their platforms have thrived, while seeking to responsibly manage and thwart the actions of those who would use their services for evil.
We are here today to explore how they are doing that, what works, and what could be improved.
Instances of Islamic terrorists using social media platforms to organize, instigate, and inspire are well documented.
For example, the killer responsible for the Orlando nightclub shooting – in which 49 innocent people were murdered, and 53 were injured – was reportedly inspired by digital material that was readily available on social media.
And this issue is not new.
Over the course of several years, YouTube hosted hundreds of videos by senior al-Qaeda recruiter Anwar al-Awlaki.
Although the company promised in 2010 to remove all videos that advocated violence, al-Awlaki’s “Call to Jihad” video, in which he advocates for western Muslims to carry out attacks at home, remained on the site for years.
In fact, a New York Times report suggested that al-Awlaki videos influenced the Fort Hood terrorist, the Boston Marathon bombers, and the terrorist attacks in San Bernardino and Orlando.
This issue is also international in scope.
In response to recent terror attacks in London, British Prime Minister Theresa May has been especially outspoken in calling on social media platforms to do more to combat the kind of radicalization that occurs online.
Last fall, for example, she was joined by other European leaders in calling upon social media companies to remove terrorist content from their sites within one to two hours after it appears.
As we’ll hear today, the companies before us are increasingly using technology to speed up their efforts to identify and neutralize the spread of terrorist content.
In a recent blog post, Facebook said that artificial intelligence now removes 99 percent of ISIS- and al-Qaeda-related terror content even before it can be flagged by a member of the community, and sometimes even before it can be seen by any users.
YouTube is also teaming up with Jigsaw, the in-house think tank of Google’s parent company Alphabet, to test a new method of counter-radicalization referred to as the “Redirect Method.”
Seeking to “redirect” or re-focus potential terrorists at an earlier stage in the radicalization process, YouTube offers users searching for specific terrorist information additional videos made specifically to deter them from becoming radicalized.
A little over a year ago, Facebook, YouTube, Microsoft, and Twitter committed to sharing a database of unique “hashes” and “digital fingerprints” of some of the most extreme terrorist-produced content used for influence or recruitment.
By cross-sharing this information, terrorist content on each of the hosts’ platforms will be more readily identified, hopefully resulting in faster and more efficient deletion of this material.
Essentially, these companies are claiming they can tag individual videos and photos and, using automation, can kick them off their platforms before they are ever seen.
We all have a vested interest in their success, and I believe this Committee has a significant role to play in overseeing the effectiveness of their efforts.
I want to thank Ms. Bickert, Ms. Downs, and Mr. Monje for being here as representatives of their companies.
To Mr. Watts, I look forward to hearing your thoughts about disrupting and defeating terrorism.
I now recognize the Ranking Member for any opening statement he may have.
Ranking Member Bill Nelson
This hearing marks the first time that the Commerce Committee has had the three largest social media companies testify before us. And their appearance is long overdue. These social media platforms – and those of many other smaller companies – have revolutionized the way Americans communicate, connect, and share information.
But, at the same time, these platforms have created a new – and stunningly effective – way for nefarious actors to attack and harm our citizens and our nation. Frankly, it is startling that today, a terrorist can be radicalized and trained to conduct attacks all through social media. And then a terrorist cell can activate that individual to conduct an attack through the internet – creating in effect a terrorist drone controlled by social media.
I look forward to hearing from our witnesses about what their companies are doing to make sure their platforms are not being exploited and manipulated by terrorists and criminals.
Using social media to radicalize and influence users is not limited to extremists. Nation states, too, are exploiting social media vulnerabilities to conduct campaigns against this nation and interfere with our democracy.
We know that Russian hackers—at Vladimir Putin’s direction—attempted to influence the 2016 presidential election through cyberattacks and spreading propaganda and disinformation through paid social media trolls and botnets on Facebook and Twitter.
And, we also know that Putin is likely to do it again.
In its January 6, 2017 assessment, the U.S. intelligence community said that Putin and his intelligence services see the election influence campaign as a success and will seek to influence future elections, right here in the United States, and abroad.
This should be a wake-up call to YouTube, Facebook, Twitter and to all Americans, regardless of party. This was an attack on the very foundation of American democracy and we must do everything in our power to see that it never happens again.
Mr. Watts, we welcome your expertise in understanding how bad actors like Russia use the internet and social media to influence not just our elections, but other aspects of American life. Everything from what we see and buy online, what we know to be true, and how we keep our families safe.
We even know that Putin is reaching deep into our government. For example, as part of the Federal Communications Commission’s (FCC’s) net neutrality proceeding, about 500,000 comments were traced to Russian IP addresses. That is both shocking and concerning – we should want to know why these comments were filed. And all of us should be very concerned about what will happen next.
In the end, I have several basic questions for our witnesses: what have we learned about how the Russians attacked us? What have social media companies done to assess this threat, both individually and collectively? What have they done to address this threat? And what more do they need to do to be ready for the future?
Mr. Clint Watts, Robert A. Fox Fellow, Foreign Policy Research Institute
Ms. Juniper Downs, Director, Public Policy and Government Relations, YouTube
Ms. Monika Bickert, Head of Product Policy and Counterterrorism, Facebook
Mr. Carlos Monje Jr., Director, Public Policy and Philanthropy, U.S. & Canada, Twitter