Sundar Pichai on the big picture in AI — The promise of AI for India and the world

In 2019, Google introduced a reading tutor app for young students in India, powered by Artificial Intelligence. Seeing a classroom full of children discovering a love for books with the help of AI was a moment I’ll never forget. By that time, Google had been investing in the underlying technology and breakthroughs for several years. But being in the classroom that day gave me an even clearer sense of AI’s potential to improve lives and the deep responsibility we have to get it right.

Fast forward to today: millions of people are using generative AI-powered tools that didn’t exist a year ago, like Bard, our conversational AI interface, which people across the country are using in nine Indian languages, and SGE, our generative AI search experience, which answers complex questions in both English and Hindi. Yet we are still only in the very early stages of a shift that will drive new waves of innovation, accelerate economic progress and create opportunities for people everywhere.

The opportunity ahead is why we have taken a bold and responsible approach to AI. We are bold in our ambition, pursuing applications that can be genuinely useful and have real impact. And we are responsible, guided by the AI principles we established in 2018, which are rooted in the belief that AI should be developed to benefit society while avoiding harmful applications. Our ultimate goal is to make AI more helpful for everyone, everywhere in the world.

There is no doubt in my mind that AI will be the biggest shift we experience in our lifetimes. In India, we see some important drivers of progress that are unique here: first, the energy and ingenuity of a young population that is shaping how the technology is used; second, the opportunity India has to leapfrog and develop the next generation of solutions, as it has done with digital payments.

Of course, India is already applying AI to make progress in fundamental areas. One example is using AI to make information more accessible to people in their native languages. Google is working with the Indian government to collate and open-source speech data for almost 800 dialects, while our Google Research team in Bengaluru is building a unified model that can handle over 100 different Indian languages.

Other promising opportunities lie in healthcare. Google is partnering with hospitals and nonprofits to apply AI in screening for eye disease, detecting tuberculosis and improving maternal health. In agriculture, which employs hundreds of millions of people in India, AI can help transform access to information. For example, the Telangana state government is using Google models to map field health and support sustainable farming, and the nonprofit Wadhwani AI is creating an app to provide accurate crop health data to individual farmers.

One of the most critical areas where generative AI can help is by enabling everyone to succeed in the digital economy. This includes bringing citizen services and programmes to more Indians across the country. To do this, Google Cloud is partnering with Axis My India to build an inclusive and multilingual superapp that helps people access government services, regardless of their language or where they live. Meanwhile, a new generation of ‘AI-first’ Indian developers and startups is emerging to grow the digital economy. Entrepreneurs in our India startups accelerator programme are using AI to discover antibodies, increase access to education, help small businesses reach their customers, and more.

Innovations that start in India are being used worldwide. That early version of our literacy app became Read Along, an online tutor that has helped over 30 million children globally learn to read. Our Flood Hub tool, which uses AI to forecast and help authorities warn at-risk communities, has expanded from India to more than 80 countries and can help predict flooding events a week ahead of time for 460 million people.

India will play an important role in helping to make sure AI is built responsibly. At Google, we are deeply committed to this. We are building new safeguards like our SynthID technology, a tool for watermarking and identifying AI-generated images. We are also engaging with government, academia and experts to guide responsible approaches. As one example, we supported the establishment of a first-of-its-kind multidisciplinary Centre for Responsible AI with a grant of $1 million to the Indian Institute of Technology, Madras. This centre will help to build a foundation of fairness, interpretability, privacy and security for future AI development.
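
SynthID’s underlying method is not public, but the general idea of an invisible watermark (embedding a machine-detectable signal in an image that a human viewer cannot see) can be illustrated with a deliberately crude least-significant-bit scheme. The sketch below is a toy example only, not SynthID’s technique; production watermarks must survive cropping, resizing and re-encoding, which this one would not:

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bits of the first pixels."""
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def detect_watermark(pixels: np.ndarray, expected: np.ndarray) -> bool:
    """Check whether the expected bit pattern sits in the pixel LSBs."""
    flat = pixels.flatten()
    return bool(np.array_equal(flat[: expected.size] & 1, expected))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in "image"
signature = rng.integers(0, 2, size=16, dtype=np.uint8)    # watermark bits

marked = embed_watermark(image, signature)
print(detect_watermark(marked, signature))   # True
print(detect_watermark(image, signature))    # almost certainly False
```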

Every technology shift is an opportunity to advance scientific discovery, accelerate human progress and improve lives. AI will do this on a scale we haven’t seen before and India is uniquely positioned to play a leading role from the start. I am excited to see all the ways India can harness its potential and unleash a golden era of innovation in the years ahead.

The original article was published in India Today.

Illustration: Nilanjan Das

Japan’s Ukraine visit: Roundup on the Four Pillars

While the year has gotten off to a slow start for the Four Pillars, there have been small but interesting developments.

Japanese Foreign Minister Yoko Kamikawa made an unannounced visit to Ukraine this past week. Japan, a Pillar, continues to pledge support for the country in the war against Russia in the form of technological and nonlethal aid. The Pillars, both in NATO and in the Asia-Pacific, recognize the wisdom of helping Ukraine, even though the war may not directly harm them. Interestingly, munitions from India, another Pillar, are also in use by the Ukrainian military (with its Polish-supplied artillery). This matters as the Pillars move to uphold the international rules-based order in both Europe and Asia. After all, the war in Ukraine has sparked many challenges to this order across the world, from instability in the Middle East to saber-rattling in Latin America.

Recently, China sanctioned several American defense companies for arms sales to Taiwan. This continues the pattern of Chinese saber-rattling across the Asia-Pacific, even if recent military purges may point to setbacks in China’s ambitions for the region.

Foreign Minister Yoko Kamikawa and her Ukrainian counterpart, Dmytro Kuleba, hold a news conference in a bomb shelter in Kyiv, amid Russia’s attack on Ukraine, on Sunday. | REUTERS

Vietnam AI Contest 2023 – AI in the Age of Global Enlightenment

The Vietnam AI Contest 2023, themed “AI in the Age of Global Enlightenment,” invites participation from all high school students in Vietnam aged 15 to 18. Individuals and groups of 2 to 5 members can enter this nationwide competition, which runs from March 2023 to January 2024 and comprises two rounds across four phases: the Preliminary Round (Phases 1 and 2) and the Final Round (an online phase and a live phase).

Organized by VLAB Innovation and sponsored by Boston Global Forum, Michael Dukakis Institute, VietNamNet, and Vietnam National Assembly Television, the contest seeks to identify students who are not only adept at writing but also possess inspirational and innovative qualities. These are the candidates deemed ready to enter the Age of Global Enlightenment and become AIWS citizens. Scoring is centered around two key aspects: inspiration and innovation.

The Final Round was held on January 5-7, 2024, and was attended by Boston Global Forum Board Members and AIWS Natural AI Initiative Members, including distinguished scholars from Harvard and MIT such as Mr. Nguyen Anh Tuan, Thomas Patterson, Nazli Choucri, David Silbersweig, John Clippinger, and Tom Kehler. They conducted interviews, engaged in discussions, and evaluated presenters. The professors expressed high regard for the excellent and talented students, noting their impressive grasp of new AI technologies. One student was particularly commended for presenting a high-level policy approach to AI governance and a model for a better society, aligning with the principles of AIWS. Harvard Professor Thomas Patterson commented, “Thank you for including me. Very impressive young people. They help make Vietnam the country we dream of.”

What’s next for AI regulation in 2024?

In 2023, AI policy and regulation went from a niche, nerdy topic to front-page news. This is partly thanks to OpenAI’s ChatGPT, which helped AI go mainstream, but which also exposed people to how AI systems work—and don’t work. It has been a monumental year for policy: we saw the first sweeping AI law agreed upon in the European Union, Senate hearings and executive orders in the US, and specific rules in China for things like recommender algorithms.

If 2023 was the year lawmakers agreed on a vision, 2024 will be the year policies start to morph into concrete action. Here’s what to expect.

The United States

AI really entered the political conversation in the US in 2023. But it wasn’t just debate. There was also action, culminating in President Biden’s executive order on AI at the end of October—a sprawling directive calling for more transparency and new standards.

Through this activity, a US flavor of AI policy began to emerge: one that’s friendly to the AI industry, with an emphasis on best practices, a reliance on different agencies to craft their own rules, and a nuanced approach of regulating each sector of the economy differently.

Next year will build on the momentum of 2023, and many items detailed in Biden’s executive order will be enacted. We’ll also be hearing a lot about the new US AI Safety Institute, which will be responsible for executing most of the policies called for in the order.

From a congressional standpoint, it’s not clear what exactly will happen. Senate Majority Leader Chuck Schumer recently signaled that new laws may be coming in addition to the executive order. There are already several legislative proposals in play that touch various aspects of AI, such as transparency, deepfakes, and platform accountability. But it’s not clear which, if any, of these already proposed bills will gain traction next year.

What we can expect, though, is an approach that grades types and uses of AI by how much risk they pose—a framework similar to the EU’s AI Act. The National Institute of Standards and Technology has already proposed such a framework that each sector and agency will now have to put into practice, says Chris Meserole, executive director of the Frontier Model Forum, an industry lobbying body.

Another thing is clear: the US presidential election in 2024 will color much of the discussion on AI regulation. As we have seen with generative AI’s impact on social media platforms and misinformation, we can expect the debate around how to prevent harms from this technology to be shaped by what happens during election season.

Europe

The European Union has just agreed on the AI Act, the world’s first sweeping AI law.

After intense technical tinkering and official approval by European countries and the EU Parliament in the first half of 2024, the AI Act will kick in fairly quickly. In the most optimistic scenario, bans on certain AI uses could apply as soon as the end of the year.

This all means 2024 will be a busy year for the AI sector as it prepares to comply with the new rules. Although most AI applications will get a free pass from the AI Act, companies developing foundation models and applications that are considered to pose a “high risk” to fundamental rights, such as those meant to be used in sectors like education, health care, and policing, will have to meet new EU standards. In Europe, police will not be allowed to use facial recognition technology in public places unless they first get court approval for specific purposes such as fighting terrorism, preventing human trafficking, or finding a missing person.

Other AI uses will be entirely banned in the EU, such as creating facial recognition databases like Clearview AI’s or using emotion recognition technology at work or in schools. The AI Act will require companies to be more transparent about how they develop their models, and it will make them, and organizations using high-risk AI systems, more accountable for any harms that result.

Companies developing foundation models such as GPT-4 (the models on which other AI products are built) will have to comply with the law within one year of its entry into force. Other tech companies have two years to implement the rules.

To meet the new requirements, AI companies will have to be more thoughtful about how they build their systems, and document their work more rigorously so it can be audited. The law will require companies to be more transparent about how their models have been trained and will ensure that AI systems deemed high-risk are trained and tested with sufficiently representative data sets in order to minimize biases, for example.
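
The Act does not spell out how representativeness should be measured. As a hedged illustration of what a first-pass audit might look like, the sketch below compares subgroup shares in a training set against reference shares for the population a system is meant to serve; the category names, figures and tolerance are illustrative assumptions, not values taken from the law:

```python
from collections import Counter

def representativeness_gaps(samples, reference_shares, tolerance=0.05):
    """Return subgroups whose share in `samples` deviates from the
    reference share by more than `tolerance` (all values are fractions)."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical audit: region labels attached to training records.
training_regions = ["north"] * 700 + ["south"] * 250 + ["east"] * 50
population_shares = {"north": 0.50, "south": 0.30, "east": 0.20}

print(representativeness_gaps(training_regions, population_shares))
# {'north': (0.7, 0.5), 'east': (0.05, 0.2)}
```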

The EU believes that the most powerful AI models, such as OpenAI’s GPT-4 and Google’s Gemini, could pose a “systemic” risk to citizens and thus need additional work to meet EU standards. Companies must take steps to assess and mitigate risks and ensure that the systems are secure, and they will be required to report serious incidents and share details on their energy consumption. It will be up to companies to assess whether their models are powerful enough to fall into this category.

Open-source AI companies are exempted from most of the AI Act’s transparency requirements, unless they are developing models as computing-intensive as GPT-4. Companies that fail to comply with the rules face steep fines or could see their products blocked from the EU.
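
To make “as computing-intensive as GPT-4” concrete: the provisional text reportedly presumes systemic risk above roughly 10^25 floating-point operations of training compute. Under that assumption, a back-of-the-envelope self-assessment could use the common 6ND rule of thumb, which estimates total training compute as about 6 x parameters x training tokens. The model sizes below are hypothetical, and this is a sketch, not a compliance tool:

```python
# Assumption: the 1e25 FLOP threshold widely reported from the
# provisional AI Act text; not a figure from the law's final wording.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Estimate dense-transformer training compute via the 6*N*D rule."""
    return 6 * parameters * tokens

candidates = {
    "small open model":   (7e9, 2e12),     # 7B params, 2T tokens (hypothetical)
    "frontier-scale run": (1.5e12, 3e12),  # very large run (hypothetical)
}

for name, (params, tokens) in candidates.items():
    flops = training_flops(params, tokens)
    flagged = flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> presumed systemic risk? {flagged}")
```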

The EU is also working on another bill, called the AI Liability Directive, which will ensure that people who have been harmed by the technology can get financial compensation. Negotiations for that are still ongoing and will likely pick up this year.

Some other countries are taking a more hands-off approach. For example, the UK, home of Google DeepMind, has said it does not intend to regulate AI in the short term. However, any company outside the EU, the world’s second-largest economy, will still have to comply with the AI Act if it wants to do business in the trading bloc.

Columbia University law professor Anu Bradford has called this the “Brussels effect”—by being the first to regulate, the EU is able to set the de facto global standard, shaping the way the world does business and develops technology. The EU successfully achieved this with its strict data protection regime, the GDPR, which has been copied everywhere from California to India. It hopes to repeat the trick when it comes to AI.

https://www.technologyreview.com/2024/01/05/1086203/whats-next-ai-regulation-2024/

Stephanie Arnett/MITTR | Envato

Nazli Choucri, MIT GSSD, and AIWS Natural AI

Nazli Choucri stands as a Global Enlightenment Leader, a distinguished contributor to the book “Remaking the World – Toward an Age of Global Enlightenment.” She holds the position of Professor of Political Science at MIT and serves as the Director of the Global System for Sustainable Development (GSSD). Notably, she is the author of “Cyberpolitics in International Relations,” published by MIT Press. Additionally, Nazli Choucri is a valued Boston Global Forum Board Member and a dedicated member of the AIWS Natural AI Initiative.

The AIWS Natural AI Initiative acknowledges GSSD – Knowledge Meta-Networking for Decision and Strategy (https://gssd.mit.edu/) as a strategic ally in the collaborative effort to build AIWS Angel, a practical embodiment of AIWS Natural AI.

GSSD has seven broad features and functions useful for different users:

  1. Strategy for integrating and organizing knowledge related to the domain of sustainable development, in multi-dimensional, multi-sector, and international terms.
  2. Conceptually robust multidisciplinary knowledge base.
  3. Detailed display of content for individual topics.
  4. Method to represent knowledge with interrelated concepts organized in a nested, internally consistent form.
  5. Search, submission, and retrieval functions that operate over the system’s quality-controlled knowledge base (see the sketch after this list).
  6. Multilingual knowledge search, submission, and provision.
  7. Reports and Working Papers from GSSD and related MIT-based research.
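
As a rough illustration of items 4 and 5, a nested, internally consistent topic hierarchy with search over it can be modeled as a small recursive data structure. This is a minimal sketch; the class and field names are assumptions made for illustration, not GSSD’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    """One node in a nested knowledge taxonomy (illustrative only)."""
    name: str
    abstract: str = ""
    subtopics: list["Topic"] = field(default_factory=list)

    def search(self, term: str) -> list[str]:
        """Return the paths of all topics whose name or abstract matches."""
        hits = []
        if term.lower() in f"{self.name} {self.abstract}".lower():
            hits.append(self.name)
        for sub in self.subtopics:
            hits += [f"{self.name} > {path}" for path in sub.search(term)]
        return hits

root = Topic("Sustainable Development", subtopics=[
    Topic("Energy", "Renewable and conventional energy systems", [
        Topic("Solar", "Photovoltaic deployment and policy"),
    ]),
    Topic("Water", "Freshwater access and management"),
])

print(root.search("policy"))
# ['Sustainable Development > Energy > Solar']
```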

https://gssd.mit.edu/knowledge-system

Calling Business Leaders to Shape the Spiritual Values of AIWS Angel

Boston Global Forum (BGF) extends a heartfelt invitation to business leaders around the world to join us in a transformative initiative. As we embark on the creation of AIWS Angel, a groundbreaking super AI Assistant with the principle of Natural AI, we recognize the paramount importance of infusing this technological marvel with esteemed spiritual values. In our pursuit of excellence, compassion, and innovation, we appeal to business leaders to contribute their wisdom and insights in shaping the spiritual values that will underpin AIWS Angel. By uniting our collective expertise, we can build a future where advanced technology coexists harmoniously with timeless spiritual principles, fostering a world that embraces the profound connection between humanity and artificial intelligence. Your contributions will play a pivotal role in defining the ethical and compassionate character of AIWS Angel, contributing to a future guided by enlightened values. Join us in this endeavor to build a legacy that transcends technological innovation and encapsulates the essence of a spiritually enlightened AI World Society. For inquiries, please contact us at [email protected].

New Year Resolutions: Roundup on the Four Pillars

From 2023 to 2024, the challenges facing the Four Pillars persist. In fact, the world may be on another precipice. Here, we break the challenges down by region, along with what can be done:

Asia-Pacific: The China threat still looms over democracy, whether in the South China Sea or Taiwan. The US, Japan, and India, three of the Pillars, have continued to reinforce against China by improving ties with China’s neighbors, strengthening their defenses, and decoupling.

Europe: The Russian invasion of Ukraine is still raging, and the EU-UK Pillar should continue pushing for greater integration of Ukraine. The Pillars should also continue to send more weapons, for if the war turns into one of attrition, it may become unfavorable for Ukraine, and if Ukraine falls, the rest of Europe is threatened.

The Middle East: The Israel-Hamas war continues and at times threatens to become a regional conflict, whether through Hezbollah in Lebanon or the Houthis in Yemen. It has, in a sense, already become one, as some of the Pillars have begun taking action against the Houthis for their harassment of shipping in the Red Sea and Gulf of Aden.

Natural AI Initiative and AIWS Angel: Revolutionizing Human Interaction with Natural AI

AIWS Angel, a groundbreaking super AI Assistant, is at the forefront of transforming the way humans interact with technology, aligning seamlessly with the principles of the AIWS Natural AI initiative. With its core components encompassing the Human Brain, Spiritual Values, Physical Computing, Interactive Interfaces linking the human brain and physical computing, and a sophisticated Display, AIWS Angel stands as the epitome of innovation and ethical design. Designed by Mr. Nguyen Anh Tuan and his team, AIWS Angel is on a mission to collect esteemed spiritual values from various faiths, including Christianity, Islam, Hinduism, Buddhism, Judaism, and more. This collaborative effort ensures that AIWS Angel embodies profound spiritual principles, making it a beacon of ethical and compassionate assistance. Step into the future of AI interaction with AIWS Angel, where cutting-edge technology harmonizes with esteemed spiritual values, ushering in a transformative and uplifting experience for individuals and communities alike.

Artificial intelligence: Four debates to expect in 2024

Artificial intelligence has gone mainstream.

Long the stuff of science fiction and blue-sky research, AI technologies like the ChatGPT and Bard chatbots have become everyday tools used by millions of people. And yet, experts say, we’ve only seen a glimpse of what’s to come.

“AI has reached its iPhone moment,” said Lea Steinacker, chief innovation officer at startup ada Learning and author of a forthcoming book on artificial intelligence, referring to the introduction of Apple’s smartphone in 2007, which popularized mobile internet access.

Similarly, “applications like ChatGPT and others have brought AI tools to end users,” Steinacker told DW. “And that will affect society as a whole.”

Will deepfakes help derail elections?

So-called “generative” AI programs now allow anyone to create convincing texts and images from scratch in a matter of seconds. This has made it easier and cheaper than ever to produce “deepfake” content, in which people appear to say or do things they never did.

As major elections approach in 2024, from the US presidential race to the European Parliament elections, experts have said we could see a surge in deepfakes aimed at swaying public opinion or inciting unrest ahead of a vote.

“Trust in the EU electoral process will critically depend on our capacity to rely on cybersecure infrastructures and on the integrity and availability of information,” warned Juhan Lepassaar, executive director of the EU’s cybersecurity agency, when his office released a threat report in mid-October.

How much of an impact deepfakes will have will also largely depend on the efforts of social media companies to combat them. Several platforms, such as Google’s YouTube and Meta’s Facebook and Instagram, have implemented policies to flag AI-generated content, and the coming year will be the first major test of whether they work.

Who owns AI-generated content?

To develop “generative” AI tools, companies train the underlying models by feeding them vast amounts of texts or images sourced from the internet. So far, they’ve used these resources without obtaining explicit consent from the original creators — writers, illustrators, or photographers.

But rights holders are fighting back against what they see as violations of their copyrights.

Recently, The New York Times announced it was suing OpenAI and Microsoft, the companies behind ChatGPT, accusing them of using millions of the newspaper’s articles to train their models. San Francisco-based OpenAI is also being sued by a group of prominent American novelists, including John Grisham and Jonathan Franzen, for using their works.

Several other lawsuits are pending. For example, the photo agency Getty Images is suing the AI company Stability AI, which is behind the Stable Diffusion image creation system, for using its photos to train the system.

The first rulings in these cases could come in 2024 — and they could set precedents for how existing copyright laws and practices need to be updated for the age of AI.

Who holds the power over AI?

As AI technology becomes more sophisticated, it’s becoming harder and more expensive for companies to develop and train the underlying models. Digital rights activists have warned this development is concentrating more and more cutting-edge expertise in the hands of a few powerful companies.

“This concentration of power in terms of infrastructure, computing power and data in the hands of a few tech companies illustrates a long-standing problem in the tech space,” Fanny Hidvegi, Brussels-based director of European policy and advocacy at the nonprofit Access Now, told DW.

As the technology becomes an indispensable part of people’s lives, a few private companies will influence how AI will reshape society, she warned.

How to enforce AI laws?

Against this backdrop, experts agree that — just as cars need to be equipped with seatbelts — artificial intelligence technology needs to be governed by rules.

In December 2023, after years of negotiations, the EU agreed on its AI Act, the world’s first comprehensive set of specific laws for artificial intelligence.

Now, all eyes will be on regulators in Brussels to see if they walk the walk and enforce the new rules. It’s fair to expect heated discussions about whether and how the rules need to be adjusted.

“The devil is in the details,” said Lea Steinacker, “and in the EU, as in the US, we can expect drawn-out debates over the actual practicalities of these new laws.”

The original article was published in Deutsche Welle.

The BGF and AIWS continue to promote governance of AI with AIWS Roundtables and initiatives in 2024.

AI chatbot ChatGPT is considered the fastest-growing consumer internet app of all time
Image: Andreas Franke/picture alliance