These are the presentations of Professors Alex “Sandy” Pentland and Lily Tsai of MIT at the BGF Conference on April 30, 2024. They discussed using AI in constructive, ethical, and democratic ways to build a better world and civic life.
Alex “Sandy” Pentland
From my perspective, AI is another wave in a long chain of technological advances. The physical world and evolution have shaped humans into a social species that collaborates in groups of around 150 people, Dunbar’s number. Traditionally, these tribes would come together into clans or larger groupings of up to about 1,500 people. We often talk about social capital, which comes in two types: bonding capital within smaller groups, which establishes cultural norms, and bridging capital between larger groups, which fosters innovation and interaction and moves society forward. However, these interactions are limited by sheer numbers, physical distance, and communication barriers.
Human history is a testament to our efforts to transcend these physical limits. The early Sumerians used marks to count sheep and convey information over distances. The Egyptians used papyrus spreadsheets to build the pyramids, much like the tools we use today. Later inventions, like the double-entry bookkeeping popularized by the Medici, enabled error correction and fraud detection in trade. Printing, radio, TV, and the internet all aimed to transcend our physical limitations, but they also centralized power, marginalizing communities and creating inefficiencies. Variation between communities is necessary to discover better ways of operating and to remain robust to unexpected changes.
The history of AI reflects similar patterns. In the 1950s, AI focused on optimal resource allocation, the approach foundational to the Soviet planned economy, which was flawed because it depends on good data and clearly defined objectives. Despite its limitations, optimal resource allocation remains a common computation today. The next wave was expert systems, which codified human rules into software, bringing efficiency but also hollowing out local communities and local control. In the 2000s, data harvesting and maximal estimation produced the feedback mechanisms in social media that are responsible for misinformation and societal distress.
Today, AI is evolving to analyze everything online, raising copyright issues and concerns about uniformity. Many countries worry, for instance, that current AI primarily reflects English and American values, or Chinese values, with little choice in between. To address these challenges, we are editing a series of volumes, akin to The Federalist Papers, that envision a future with AI.
Lily Tsai
I’m going to switch gears from discussing the governance of AI to exploring governance with AI. I want to share some insights from the work that Sandy and I have been doing on what pluralist societies might need from digital civic infrastructure and the potential for digitally mediated civic engagement.
As Sandy mentioned, humans developed in an era when collective decision-making was limited to small groups with similar concerns. As societies grew and needed to coordinate across larger distances, it became necessary to send representatives. However, representatives, even those democratically elected, don’t always share the interests of their constituents. When they diverge, opportunities for corruption and elite capture can arise.
With technologies that enable large numbers of people to communicate and make decisions on the same platform, we now have new opportunities for digitally enabled direct democracy at scale. Quantitative experiments, sometimes involving tens of millions of individuals, have examined scaling inclusiveness and efficiency in decision-making via digital networks. These studies suggest that large networks of non-experts can make practical and productive decisions and engage in collective action.
Some might warn against technology that could further reinforce the nationalization of politics. In the United States, citizens have increasingly turned away from local community issues, such as students skipping school or empty storefronts on Main Street, and become fixated on national politics. This shift has produced partisan mega-identities, in which a single vote can signal not only a person’s partisan preference but also their religion, race, ethnicity, gender, neighborhood, and even favorite grocery store.
Polarization, which didn’t arise solely because of social media, plays out on platforms that increase the speed and scale of interaction, ramping up the emotional intensity of confrontation. Some people retreat to spaces with like-minded individuals, while others get drawn into conflicts, neither of which is conducive to negotiating disagreements or reaching compromises.
One potential solution is to revitalize place-based identities and engagement. However, focusing only on local politics might inadvertently reinforce local political blocs. Instead, we should aim to break up these blocs with cross-cutting cleavages and bridging social capital to connect people across localities. Studies suggest that connections between diverse groups and communities are a major source of innovation and change.
Unfortunately, we’ve seen a decline in institutions like churches and fraternal organizations that once facilitated these connections. Our research team is working on building new kinds of intermediating digital spaces that provide perspective, moderation, and focus on shared, rather than personal, problems. These spaces can accommodate discussion and deliberation at a large, even national, scale.
Can we design digital civic infrastructure to enable direct democracy for a national public while also dampening polarization? To achieve this, we need to address several problems. First, how do we get people to want to engage in civic participation, given the decades-long decline in engagement? Second, how do we encourage people to understand and consider the needs and concerns of others when making decisions that affect everyone?
We believe we need to create new kinds of mediated civic engagement that make people more comfortable and curious about engaging with difficult public issues. Alexis de Tocqueville often praised town meetings as schools for teaching people how to use and enjoy liberty. Today, however, we no longer attend town meetings at the same rates, and when we do, we don’t enjoy them. Many online spaces for public discussion are even worse, with people or bots yelling at each other and inciting virtual mobs.
To make engagement in collective discussions and decision-making tolerable or even enjoyable, we need to design digital platforms that allow for a measure of safety and autonomy. In physical spaces, like balconies on apartment buildings overlooking the street, people can engage with public events at a distance, deciding whether to get involved. Similarly, digital platforms should enable reserved sociability, where people can observe and engage at their own pace.
The urbanist Jane Jacobs noted that the diversity of city life is wonderful when it brings people together without forcing them to be too close. When people are pressed too close, they tend to withdraw into their private spaces. Online platforms for discussion and deliberation could provide the same benefits by allowing people to dip their toes in first and gradually wade into discussions as their interest grows.
Two examples of online platforms that illustrate this kind of digital intermediation are the School of Possibilities, an AI-enabled platform for public engagement on school reform piloted in Romania, and the Pol.is platform, widely used for public deliberation. Our research team is building on these examples to design and test features that enable reserved sociability and civic engagement.
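To give a concrete sense of how a platform like Pol.is maps a large discussion, here is a minimal sketch, not the actual Pol.is implementation. Participants vote agree/disagree/pass on short statements, and the resulting vote matrix is projected and clustered so that opinion groups, and the statements they share, become visible. The data, cluster count, and library choices below are illustrative assumptions.

```python
# Minimal sketch of Pol.is-style opinion mapping (illustrative only, not the
# actual Pol.is codebase). Participants vote on statements: 1 = agree,
# -1 = disagree, 0 = pass/unseen. We project the vote matrix to 2D and
# cluster it to reveal opinion groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy data: 200 participants x 12 statements (real deliberations are far larger).
votes = rng.choice([-1, 0, 1], size=(200, 12))

# Reduce each participant's vote vector to two dimensions for a visual map.
coords = PCA(n_components=2).fit_transform(votes)
print("first participants' map positions:", np.round(coords[:3], 2))

# Group participants with similar voting patterns into opinion clusters.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(votes)

# For each statement, report agreement within every cluster; statements with
# high agreement in all clusters are candidates for common ground.
for s in range(votes.shape[1]):
    per_group = [(votes[labels == g, s] == 1).mean() for g in range(3)]
    print(f"statement {s}: agreement by group = {np.round(per_group, 2)}")
```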
However, digital innovations also raise questions about mitigating potential risks and harms. In a recent paper published in MIT’s AI Impact Series, our team discussed the importance of integrating generative AI into platforms in ways that uphold fundamental democratic commitments. We don’t want AI chatbots nudging people towards agreement just because they can write in friendlier or more authoritative voices. The principles of agency and respect must be upheld.
Generative AI can speed up consensus by suggesting policy statements and solutions that are likely to win agreement. While this can be helpful, we don’t want that efficiency to crowd out the emergence of new ideas and creative solutions, and participants must not become so dependent on AI that they stop engaging critically with the issues and with other viewpoints.
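One way to make “likely to be agreed upon” concrete without simply trusting the model’s tone is to score candidate statements by their weakest support across opinion groups, so a statement only ranks highly if every group tends to agree with it. The sketch below assumes a vote matrix and group labels like those in the earlier sketch; the minimum-per-group rule is our illustrative choice, not a published specification of any particular platform.

```python
# Illustrative "bridging" score: rank candidate statements by their lowest
# agreement rate across opinion groups, so a statement only scores well if
# every group tends to agree with it. (Our illustrative rule, not a standard.)
import numpy as np

def bridging_scores(votes: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """votes: participants x statements in {-1, 0, 1}; labels: group id per participant."""
    groups = np.unique(labels)
    # agreement[g, s] = fraction of group g that agrees with statement s
    agreement = np.array([(votes[labels == g] == 1).mean(axis=0) for g in groups])
    return agreement.min(axis=0)  # weakest-group agreement per statement

# Example with the toy data from the previous sketch:
# scores = bridging_scores(votes, labels)
# print(np.argsort(scores)[::-1][:3])  # three most broadly supported statements
```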
Moreover, we need to guard against over-censorship or differential censorship, which can silence certain viewpoints. AI models can have biases due to algorithmic design decisions and training data, and external entities can manipulate discussions. Protecting and preserving minority interests and views is crucial.
Successful engagement infrastructure must ensure identity authentication, confirming that participants are real people entitled to engage. This can be achieved without compromising anonymity, allowing for secure and reserved civic engagement.
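As a rough illustration of how authentication and anonymity can be separated, here is a simplified sketch, not our team’s actual design: a registrar checks that a participant is a real, eligible person and issues an opaque signed token, and the discussion platform verifies the token without ever learning who the participant is. Production systems would use blind signatures or anonymous credentials so that even the registrar cannot link tokens back to people.

```python
# Simplified sketch of identity-checked but anonymous participation.
# A registrar verifies eligibility and issues a random token with an HMAC tag;
# the platform checks the tag without ever seeing the participant's identity.
# (Illustrative only: real systems would use blind signatures or anonymous
# credentials so the registrar cannot link tokens back to people either.)
import hmac, hashlib, secrets

REGISTRAR_KEY = secrets.token_bytes(32)  # shared by the registrar and the platform

def issue_token(identity_is_verified: bool):
    """Registrar side: after verifying a real person, mint an opaque credential."""
    if not identity_is_verified:
        return None
    token = secrets.token_hex(16)  # random value, not derived from the person's identity
    tag = hmac.new(REGISTRAR_KEY, token.encode(), hashlib.sha256).hexdigest()
    return token, tag

def admit_participant(token: str, tag: str) -> bool:
    """Platform side: admit anyone holding a valid credential, identity unknown."""
    expected = hmac.new(REGISTRAR_KEY, token.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

credential = issue_token(identity_is_verified=True)
if credential:
    print(admit_participant(*credential))  # True: verified, yet anonymous to the platform
```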
Digital intermediation can also help focus on public issues, avoiding emotional reactions and personal biases. Platforms can provide summaries and visualizations of discussions, allowing users to engage from different vantage points and decide when and how to participate.
Finally, digital platforms for civic engagement should tap into our natural curiosity and playfulness, encouraging people-watching and eavesdropping in a way that strengthens social bonds. These platforms can improve both local and national public discussion and deliberation.
In summary, we need to develop digital technologies that offer new opportunities for civic engagement while upholding democratic principles. By making people feel safe and protected and giving them control over their engagement, we can help them fully use and enjoy their liberty.