The MIT professor describes practical approaches to improving the internet.
This article was originally published on Stanford Human-Centered AI.
For Alex “Sandy” Pentland, a longtime professor of information technology at MIT, big societal questions have always been top of mind, and that focus has led to big impact. His group developed a digital health system for rural health workers in developing countries that today (with support from the Gates Foundation) guides health services for 400 million people. Another effort resulted in tools that ensure fair and unbiased social services support for 80 million children across Latin America. A third spinoff developed open-source identity and authentication mechanisms now built into most smartphones and relied on by 85% of humanity.
In 2008, Pentland began co-leading discussions at Davos that are widely recognized as the genesis of the European Union’s General Data Protection Regulation (GDPR). Today, he serves on the board of the UN Foundation’s Global Partnership for Sustainable Development Data, which uses data to track countries’ progress toward the United Nations’ 17 Sustainable Development Goals.
This spring, Pentland joined Stanford HAI’s Digital Economy Lab as a center fellow and the faculty lead for research on Digital Platforms and Society. Here he hopes to continue building a better digital ecosystem for all and to address the ways in which social media and AI are impacting democracy and society. We recently sat down with Pentland to ask him about his plans.
What do you mean by building a better digital ecosystem?
Thirty to forty years ago, we suddenly had the internet. While we’ve done many good things with it, we’ve also done some questionable things. And people are scared about what’s going to come next, such as bad actors using AI in nefarious ways; widespread misinformation altering our shared community understanding; and cyberattacks that affect our financial system. I’d like to see us build a better digital ecosystem so that we can have a thriving, creative, safe society.
What does that look like in practical terms?
There are a variety of ways we can achieve that goal. For example, courts and law enforcement need a way to uncover the real identity of online actors.
A second idea is that we need to draw a line between individual expression and mass expression. Consider, for example, influencers who have more than a million “friends” on social media. Anyone with that many people following them can make money and build a reputation while saying whatever they want. I think these overly powerful voices ought to be treated like businesses, not like individuals. They shouldn’t be able to cry “fire” in a crowded theater or tell outright lies. Those are basic standards we demand of other businesses: TV news shows and newspapers can’t publish something just to generate outrage. Digital media should be responsible for taking the same kind of care to protect the public good. If you’re going to express your ideas to a million people, then you’re a business, and you ought to be regulated like one.
We also need to reduce partisan animosity online. Currently, digital media are designed to get us to react quickly, which results in unthinking responses that in turn lead to cascades of behavior where everyone becomes outraged. We need a system that instead encourages people to communicate in ways that support democratic processes rather than tearing them apart. Large-scale experiments find that online discussions are improved when people are encouraged to reflect a little bit on what they are about to say. For example, we see less division and outrage online when we add an extra step that allows time for reflection before replying or forwarding, or add a prompt to consider what a comment will do to your reputation.
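To make that intervention concrete, here is a minimal sketch of a reflection gate in a posting pipeline. Everything in it is illustrative: the delay length, the prompt wording, and the `confirm` callback are assumptions for demonstration, not details from any real platform or from the experiments Pentland mentions.

```python
import time

REFLECTION_DELAY_SECONDS = 15  # illustrative pause; a real system would tune this experimentally

def post_with_reflection(draft: str, confirm) -> bool:
    """Hold a reply behind a short pause and a reflection prompt before it goes live.

    `confirm` is a hypothetical UI callback: it shows the draft back to the
    author with a question such as "What will this do to your reputation?"
    and returns True only if the author still wants to post after reflecting.
    """
    time.sleep(REFLECTION_DELAY_SECONDS)  # cooling-off period before the reply can be sent
    return confirm(draft)                 # post only on an explicit, post-reflection confirmation
```

The design is deliberately light-touch: the gate never blocks speech; it only inserts the pause and the prompt that, per the interview, reduce reflexive outrage in large-scale experiments.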
You have said we need to rethink the internet’s architecture. What does that mean?
We need to have new security standards. In the early days of the internet, the developers didn’t include important security and digital identity features because users were mostly government employees and university faculty. But today, everyone is on the internet and that means bad actors have an opportunity to do all sorts of damaging things. Nations that don’t like us can disrupt our cyber world through distributed attacks, bots, and troll farms. People can spread mis- and disinformation on social media without reprisal. And these behaviors destroy our ability to discuss things meaningfully with each other and to make rational decisions.
In some cases, fixing the problem will require changing subtle little things deep in the guts of the internet. As an example, if someone is producing 50,000 tweets a day, that’s a bot, not a human. That’s an obvious case, but there are other things we can do to find bots more efficiently, determine when foreign nations are interfering in elections, and better deal with ransomware and cyberattacks. The problems we have now evolved because the architecture of the internet was never completed, and maybe the time has finally come to finish the job.
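As an illustration of that posting-rate test, here is a minimal sketch in Python. The data model and threshold are assumptions for demonstration only; real bot detection combines many more behavioral signals than raw volume.

```python
from collections import Counter

# The 50,000-posts-a-day figure is the obvious case from the interview;
# production detectors rely on many additional behavioral signals.
DAILY_POST_THRESHOLD = 50_000

def flag_probable_bots(daily_posts: list[tuple[str, str]]) -> set[str]:
    """Flag accounts whose one-day posting volume no human could sustain.

    `daily_posts` is assumed to be (account_id, post_id) pairs collected over
    a single day; this data model is illustrative, not any platform's API.
    """
    counts = Counter(account for account, _ in daily_posts)
    return {account for account, n in counts.items() if n >= DAILY_POST_THRESHOLD}
```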
So, at the Stanford Digital Economy Lab, we’re going to try out various fixes experimentally to see what sorts of economic and social incentives work and then hopefully make change happen.
While at Stanford, you’re also joining a team of researchers, including Condoleezza Rice, Erik Brynjolfsson, and Nate Persily, who are working on a series of essays dubbed the “Digitalist Papers.” Tell me about that.
The Digitalist Papers will be modeled after the Federalist Papers, a series of 85 essays written by Alexander Hamilton, James Madison, and John Jay in 1787–88 arguing for the ratification of the U.S. Constitution. They made the case for creating a country by design rather than by accident or force.
Today, we have the internet, smartphones, and AI, so perhaps there’s a better form of governance that we can design – something that’s more transparent, more accountable, and perhaps wiser. And so, for the Digitalist Papers, we’re assembling experts from around the world from a variety of fields – economics, politics, law, technology – to write essays about how the intersection of technology with each of these fields might lead to better governance.
We’re hopeful that putting these essays out in the world will change the terms of the discussion and shift what people believe they should be working toward.
We’ve been talking about improving the digital ecosystem in general. Do you have particular thoughts about how AI currently plays – or will play – a role in our digital ecosystem?
First of all, AI is not new. The first AIs in the 1960s were logic engines. And then came expert systems, and then came collaborative filtering. All of these are pervasive today and have had some negative effects, from centralizing data like never before to allowing for a surveillance society.
So, we should think about what the current wave of AI is going to do before it really takes off. And it’s not artificial general intelligence, or AGI, that worries me. It’s that AI is becoming pervasive in so many parts of our lives, including our medical system, our transportation system, and our schooling system. It’s going to be everywhere, just like the previous waves of AI were. And we need to make sure that it’s prosocial.
To me, AI has always been and continues to be a way of finding and using patterns in data. So, if you want to control AI, you have to control the data it feeds on, which means demanding privacy rights and ownership rights over data. Data are the food of AI; without control over that food, AI will just run amok.
What is it that drives you and keeps you doing this work?
I think that developing a humanistic digital infrastructure is one of the best things a person can do right now. If I can help create a world that is human-centered and that harnesses all these new digital tools and AIs for the good of society, that would be about the best thing I could do with my life, simply because the work is so transformative.