10:00 AM Hart Senate Office Building 216
U.S. Sen. John Thune, R-S.D., chairman of the Subcommittee on Communications, Technology, Innovation, and the Internet, will convene a hearing titled, “Optimizing for Engagement: Understanding the Use of Persuasive Technology on Internet Platforms,” at 10:00 a.m. on Tuesday, June 25, 2019. The hearing will examine how algorithmic decision-making and machine learning on internet platforms might be influencing the public. Witnesses will provide insight on ways technology companies use algorithms and machine learning to influence outcomes, and whether algorithm transparency or algorithm explanation is an appropriate policy response.
- Mr. Tristan Harris, Co-Founder and Executive Director, Center for Humane Technology
- Ms. Rashida Richardson, Director of Policy Research, AI Now Institute
- Ms. Maggie Stanphill, Director, Google User Experience, Google, Inc.
- Dr. Stephen Wolfram, Founder and Chief Executive Officer, Wolfram Research
*Witness list subject to change
This hearing will take place in the Hart Senate Office Building 216. Witness testimony, opening statements, and a live video of the hearing will be available on www.commerce.senate.gov.
Chairman John Thune
Good morning. I want to thank everyone for being here today to examine the use of persuasive technologies on internet platforms.
Each of our witnesses today has a great deal of expertise with respect to the use of artificial intelligence and algorithms more broadly, as well as in the narrower context of engagement and persuasion, and brings unique perspectives to these matters.
Your participation in this important hearing is appreciated, particularly as this Committee continues its work on crafting data privacy legislation.
I’ve convened this hearing in part to inform legislation I’m developing that would require internet platforms to give consumers the option to engage with the platform without having the experience shaped by algorithms driven by user-specific data.
Internet platforms have transformed the way we communicate and interact, and they have made incredibly positive impacts on society in ways too numerous to count.
The vast majority of content on these platforms is innocuous, and at its best, it is entertaining, educational, and beneficial to the public.
However, the powerful mechanisms behind these platforms meant to enhance engagement also have the ability – or at least the potential – to influence the thoughts and behaviors of literally billions of people.
That is one reason why there is widespread unease about the power of these platforms, and why it is important for the public to better understand how these platforms use artificial intelligence and opaque algorithms to make inferences from the reams of data about us that affect behavior and influence outcomes.
Without safeguards, such as real transparency, there is a risk that some internet platforms will seek to optimize engagement to benefit their own interests, and not necessarily to benefit the consumer’s interest.
In 2013, former Google Executive Chairman, Eric Schmidt, wrote that modern technology platforms “are even more powerful than most people realize, and our future will be profoundly altered by their adoption and successfulness in societies everywhere.”
Since that time, algorithms and artificial intelligence have rapidly become an important part of our lives, largely without us even realizing it.
As online content continues to grow, large technology companies rely increasingly on AI-powered automation to select and display content that will optimize engagement.
Unfortunately, the use of artificial intelligence and algorithms to optimize engagement can have an unintended – and possibly even dangerous – downside. In April, Bloomberg reported that YouTube has spent years chasing engagement while ignoring internal calls to address toxic videos, such as vaccination conspiracies and disturbing content aimed at children.
Earlier this month, the New York Times reported that YouTube’s automated recommendation system was found to be automatically playing a video of children playing in their backyard pool to other users who had watched sexually themed content.
That is truly troubling, and it indicates the real risks in a system that relies on algorithms and artificial intelligence to optimize for engagement.
And these are not isolated examples.
For instance, some have suggested that the so-called “filter bubble” created by social media platforms like Facebook may contribute to our political polarization by encapsulating users within their own comfort zones or echo chambers.
Congress has a role to play in ensuring companies have the freedom to innovate, but in a way that keeps consumers’ interests and wellbeing at the forefront of their progress.
While there must be a healthy dose of personal responsibility when users participate in seemingly free online services, companies should also provide greater transparency about how exactly the content we see is being filtered.
Consumers should have the option to engage with a platform without being manipulated by algorithms powered by their own personal data – especially if those algorithms are opaque to the average user.
We are convening this hearing in part to examine whether algorithmic explanation and transparency are policy options Congress should be considering.
Ultimately, my hope is that at this hearing today, we are able to better understand how internet platforms use algorithms, artificial intelligence, and machine learning to influence outcomes.
We have a very distinguished panel before us.
Today, we are joined by Tristan Harris, the co-founder of the Center for Humane Technology, Ms. Maggie Stanphill, the director of Google’s User Experience, Dr. Stephen Wolfram, founder of Wolfram Research, and Ms. Rashida Richardson, the director of policy research at the AI Now Institute.
Thank you again for your participation on this important topic.
I now recognize Ranking Member Schatz for any opening remarks he may have.