AIWS History of AI House at AIP

The Boston Global Forum is proud to announce its collaboration with the Army Innovation Park (AIP) in Nha Trang, Vietnam, to establish the History of AI House as part of the AIWS Initiative. This groundbreaking project aims to curate and present the history of artificial intelligence, showcasing significant achievements, events, and distinguished innovators and leaders in the field. Initiated in 2020, the History of AI program under AIWS continuously updates its repository of achievements and events, which are accessible both online at AIWS.city and physically at AIP’s History of AI House. Additionally, BGF and AIP will co-organize distinguished lectures at the History of AI House. The History of AI House will be inaugurated on March 15, 2024, coinciding with a conference co-organized by the Telecommunications University and the Khanh Hoa Union of Science & Technology Associations (KUSTA) to commemorate the 100th anniversary of Nha Trang city (1924-2024), whose bay is renowned as one of the most beautiful in the world.

Two years since the beginning of Russia’s invasion of Ukraine: Roundup on the Four Pillars

Minh Nguyen is the Editor of the Boston Global Forum and a Shinzo Abe Initiative Fellow. She writes the Four Pillars column in the BGF Weekly newsletter.

It has now been two years since the beginning of Russia’s invasion of Ukraine. Looking back, this event clearly delineated the new challenges the Four Pillars and the liberal rules-based order would have to contend with in the coming years. That Ukraine is still standing after what was announced as a three-day military operation is a testament to the will of its people, but the Pillars should not drip-feed aid to Ukraine or let domestic squabbles get in the way of helping. While sanctions have isolated Russia from the broader global economy, there is still no substitute for military aid. Ukraine fatigue is real, but it should be remembered that a Ukrainian defeat would signal to certain powers that they can violate the global order.

This week, the Hungarian parliament is finally set to vote on Sweden’s accession to NATO, having dragged its feet even after Turkey’s acquiescence earlier this year. Hungary remains the last holdout in the unanimous ratification needed for Sweden’s membership.

Revisiting the ongoing Israel-Hamas war: ceasefire talks have broken down and a deal remains uncertain, while Israel continues its heavy-handed approach in Gaza. The IDF is now conducting operations in Rafah, the southernmost city in the strip, and PM Netanyahu has stated that he would seek open-ended control of a demilitarized Gaza. The US and Europe, two of the Pillars, are understandably backing Israel in the war, but they still need to thread the needle between humanitarian concerns, Israel’s security, and a Palestinian state.

In Asia, although Japan, a Pillar, has fallen into a recession, signs of a stronger economy are emerging through a rally in the Nikkei and, more interestingly, increased semiconductor production. As TSMC diversifies beyond Taiwan to the US and Japan as a hedge against an invasion from the mainland, Japan is now benefiting from its TSMC plant starting production, even more so than the US.

Image: Gleb Garanich/REUTERS

Boston Global Forum supports Army Innovation Park (AIP) of Vietnam in researching and developing AI

Nguyen Anh Tuan, CEO of the Boston Global Forum, attended the Inauguration Ceremony of AIP on February 25, 2024, in Nha Trang, Vietnam, as an honored guest. The Boston Global Forum is committed to supporting AIP in researching and developing AIWS Angel, a super AI Assistant with concepts of natural AI. Additionally, BGF is dedicated to establishing the History of AI House and deploying the AIWS Leadership Program at AIP.

Vietnamese Deputy Minister of Defense Pham Hoai Nam, along with leaders of the Ministry of Defense, Secretary of the Khanh Hoa Provincial Party Committee Nguyen Hai Ninh, and other leaders of Khanh Hoa province, attended and spoke in support of AIP.

President of the Telecommunications University Le Xuan Hung highlighted:

“In 2018, leaders of the Ministry of Defense invested in building the Army Innovation Park. The overall structure of the Center includes three six-storey buildings with a total floor area of more than 25,000 m², comprising office and conference areas, a training area, a research and development area, a data center area, a guest house area, and other supporting works, forming a complex symbolizing creativity and development. The site area of AIP is 30,000 square meters. AIP’s information technology infrastructure and data centers are funded by the Government of India.

I thank the leaders of the Boston Global Forum, Khanh Hoa Agarwood Company, businesses, and investors for always trusting, accompanying, cooperating with, and helping to build the ever-growing Center.”

In particular, the Telecommunications University respectfully remembers the contributions of the late Senior Lieutenant General Nguyen Chi Vinh, former Deputy Minister of Defense, who laid the foundation for AIP and directed its construction and development from the first day of its establishment.

With the highest determination, AIP pledges to its leaders to focus on the effective use of its facilities; to implement financial autonomy for regular operations and development investment; and to build AIP into a trusted center with a strong reputation domestically, regionally, and internationally.

Vietnamese Deputy Minister of Defense Pham Hoai Nam noted:

“Leaders of the Ministry of Defense always believe in, support, and create the best conditions for AIP to continuously develop and become a Military Innovation Park of high stature, standing, and prestige domestically, regionally, and internationally.”

US and China agree to map out framework for developing AI responsibly

AP: Michael Dwyer, file

The original article was published on ABC News Australia.

The world’s two powerhouse nations have finally agreed to sit down and discuss their concerns around the expanding power and reach of artificial intelligence (AI) after years of lobbying from officials and experts.

Both Beijing and Washington have been wary of giving their adversary an advantage by limiting their own research and capabilities, but observers have long expressed concern that the existential risks of such an approach are far too high.

“The capacity of AI to induce risks that could potentially result in human extinction or irrevocable civilisational collapse cannot be overstated,” AI policy and ethics experts warned last year.

While a date hasn’t been set, it’s expected that the US and China will meet in the next few months to work on a framework for the responsible development of AI.

As they eye the next wave of advanced tech with potentially conflicting motivations and goals, here’s a look at what each side wants, what regulations are in place, and the risks they may contend with.

What are the main concerns?

The rise of AI has fed a host of concerns.

They include fears it could be used to disrupt the democratic process, turbocharge fraud, and cause widespread job losses, and then there are the obvious worries around military applications.

The rapid growth of generative artificial intelligence, which can create text, images and video in seconds in response to prompts, has heightened fears that the new technology could be used to sway major elections this year, as more than half of the world’s population head to the polls.

It’s already being used to meddle in politics and even convince people not to vote.

In January, a robocall using fake audio of US President Joe Biden circulated to New Hampshire voters, urging them to stay home during the state’s presidential primary election.

For Samantha Hoffman, a leading analyst on China’s national security strategy and emerging technology, the potential for AI to dupe the public and even subvert political processes is among the greatest risks.

“Things like the interest in generative AI and collection of things like language, data, images, sound — anything related to the generation of potentially fake images and text and so on,” she told the ABC.

“If you can influence the way that people think and perceive information, it helps the [government] stay ahead of a crisis or conflict.

“If you lose in the information domain — that’s one of the most critical domains — you might have already lost the battle.”

Meanwhile, in a recent Brookings Institution report, “A roadmap for a US-China AI dialogue”, authors Ryan Hass and Graham Webster argued that any discussion about AI frameworks needs to focus on three key areas: “Military uses of AI, enabling positive cooperation, and keeping focused on the realm of the possible.”

For military applications, they said the challenge was not about promising not to use AI on the battlefield but to “begin building boundaries and common expectations around acceptable military uses of automation”.

What’s the current state of play?

Last year, a report from the Australian Strategic Policy Institute found China was beating the US in 37 of 44 technologies likely to propel innovation, growth and military power.

They include AI, robotics, biotechnology, advanced manufacturing, and quantum technology.

The US leads innovation in only seven technologies — including quantum computing and vaccines — and ranks second to China in most other categories.

The Biden administration has taken drastic steps to slow China’s AI development.

It has passed laws to restrict China’s access to critical technology, and is also spending more than $US200 billion ($306 billion) to regain its lead in manufacturing semiconductor chips.

Dr Hoffman said that would slow down some of China’s development.

“But, it’s not going to stop,” Dr Hoffman told the ABC.

Hence the need for the talks, which will build on a channel for consultation on artificial intelligence announced in November after US President Joe Biden and Chinese President Xi Jinping met in California.

What regulations are in place?

Regulations and potential controls are still taking shape.

In November, the US and more than a dozen other countries, with the notable exception of China, unveiled a 20-page non-binding agreement carrying general recommendations on AI.

The agreement covered topics including monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers.

But it didn’t mention things like the appropriate uses of AI, or how the data that feeds these models is gathered.

At a global AI safety summit in the UK in November, Wu Zhaohui, China’s vice minister of science and technology, said Beijing was ready to increase collaboration on AI safety to help build an “international mechanism, broadening participation, and a governance framework based on wide consensus delivering benefits to the people”.

“Building a community with a shared future for mankind,” Mr Wu said, according to an official event translation.

More than 25 countries present at the summit, including the US and China, signed the “Bletchley Declaration”, under which they will work together and establish a common approach to oversight.

But despite the platitudes from both sides, many AI policy and ethics experts maintain that it’s yet to be seen whether Beijing and Washington and their respective militaries can demonstrate a shared commitment to common interests or global safety.

The US is set to launch an AI safety institute, where developers of AI systems that pose risks to US national security, the economy, public health or safety will have to share the results of safety tests with the government.

Meanwhile, China has already blacklisted some information sources from being used to train AI.

The banned information covers things that are censored on the Chinese internet, including “advocating terrorism” or violence, as well as “overthrowing the socialist system”, “damaging the country’s image”, and “undermining national unity and social stability”, China’s National Information Security Standardisation Committee said.

Beijing also requires any mass-market AI product to be cleared before it is released.

What do experts hope talks will achieve?

As the two superpowers compete for AI dominance, experts have warned of the increasing importance for common ground on AI safety to be established, given how little either country knows about their counterpart’s approach to AI.

In recent weeks, it’s been revealed that Beijing and Washington are preparing for bilateral talks “this spring” (autumn in Australia).

While the final parameters for the talks are yet to be announced, given the wide applications of AI, they could cover “potentially everything”, Dr Hoffman said.

Basically, AI can be adapted for use in so many applications that it’s hard to think of areas that won’t be affected, from high-tech future weapons and drones used on battlefields to everyday tasks.

“It covers everything from healthcare applications to autonomous weapons, things like facial recognition to things like ChatGPT,” she said.

One thing that has become clear already is that China and the US are not pursuing the same goals with AI, Dr Hoffman told the ABC, as the development of AI currently plays into their respective national strategies.

“They’re really talking about replacing the existing world order,” Dr Hoffman explained.

But, because it will be almost impossible for either China or the US to continue their technological advancements independent of each other, there’s one thing Dr Hoffman believes both sides will want to discuss.

“It’s about finding the most responsible ways to manage risk,” Dr Hoffman said.

AI needs lots of information, so developing standards for “data sharing vetted by both governments could be immensely powerful”, the authors of the Brookings Institution report wrote.

Using the talks to raise other concerns, even ones that seem related, like the US blocks on China’s access to critical technologies, “would push the dialogue into a cul-de-sac”, the report added.

That said, even if the talks remain general in nature and don’t lead to any concrete agreement, experts and policymakers agree that a pledged willingness to cooperate is a much better scenario than not talking and continuing to develop AI frameworks covertly in isolation.

Yasuhide Nakayama speaks at the FirstPost Defense Summit panel on AI

Former Japanese Minister of Defence Yasuhide Nakayama, a BGF Global Enlightenment Leader, spoke at the “Will AI shape how wars are fought? What role will artificial intelligence play in the future of warfare?” panel at the FirstPost Defense Summit 2024 in India.

The panel took a deep dive into the path forward for AI in military operations and the risks it poses. Superintelligent machines are no longer confined to the realm of science fiction. Artificial intelligence is developing rapidly, but without guardrails it can do more harm than good. As countries rush to ensure they don’t fall behind in the AI race, the panel discussed how these nonhuman entities could determine results on the battlefield.

Logo of AIWS Angel

The logo of AIWS Angel embodies the essence of innovation, compassion, and human-machine synergy at the heart of the AI World Society Initiative. With its distinctive design, the logo symbolizes the transformative potential of AI in fostering a more harmonious and enlightened world.

The logo of AIWS Angel encapsulates the timeless aspirations of humanity. Since ancient times, humans have harbored dreams of divine beings capable of rescuing mankind from adversity. These dreams materialized in the form of powerful gods such as Apollo, Aphrodite, and Athena in Greek mythology. Similarly, cultures around the globe have projected their hopes and aspirations onto mighty deities endowed with extraordinary powers and noble missions, as depicted in folklore, fairy tales, and mythology. These divine figures symbolize humanity’s relentless pursuit of discovery, the determination to surmount obstacles, and the desire for a life of ever-increasing richness and beauty.

The AIWS Angel logo incorporates the image of a Bald Eagle descending to the earth, symbolizing support for humanity. It represents the convergence of human ingenuity and technological innovation, working in harmony to advance the well-being of society. The logo serves as a beacon of hope, guiding humanity towards a future where artificial intelligence is harnessed for the greater good, facilitating progress, enlightenment, and the realization of our dreams.

Munich Security Conference 2024: Roundup on the Four Pillars

Minh Nguyen is the Editor of the Boston Global Forum and a Shinzo Abe Initiative Fellow. She writes the Four Pillars column in the BGF Weekly newsletter.

The Munich Security Conference, an annual conference focused on security and defense issues around the world, took place over the past week. Top issues discussed were the Russo-Ukrainian war and the war in Gaza. In addition, delegations discussed issues in the Indo-Pacific and Africa.

It is clear that the Four Pillars, or at least those focused on the war in Ukraine, should continue restoring NATO’s capacity: increasing defense spending to the 2% target and sustaining ammunition and artillery production for Ukraine. If Ukraine fails, the rest of Europe is in danger, as is the broader global order in Asia and elsewhere. Europe must continue ramping up its defense.

However, participants’ sentiments coming out of the conference were not positive. Trump’s comments on NATO members, the death of Alexei Navalny, and Ukraine’s loss of Avdiivka signal a grimmer portent for the Pillars. These are challenges the Pillars will have to grapple with not just in the coming months but in the coming years. Still, some remain optimistic about Ukraine’s chances of winning the war.

Ursula von der Leyen, the current President of the European Commission and a recipient of the AIWS Peace and Security Award, has announced that she is seeking a second term. Her current tenure has weathered the pandemic and the invasion of Ukraine, but it has also brought renewed federalism and a stronger EU.

Ukrainian President Volodymyr Zelensky and U.S. Vice President Kamala Harris smile at the end of a press conference at the Munich Security Conference in Munich on Feb. 17. Tobias Schwarz/AFP via Getty Images

Boston Global Forum to unveil the Special Report on Peace and Security Solutions at the Shinzo Abe Initiative Conference

In a significant development aimed at addressing global conflict and instability, the Boston Global Forum (BGF) has announced its forthcoming unveiling of the Shinzo Abe Initiative Special Report on Solutions for Peace and Security in Conflict and War Areas. This report will be presented and discussed at the Shinzo Abe Initiative Conference in Tokyo on March 28, 2024.

The Shinzo Abe Initiative Conference serves as a platform for fostering dialogue and collaboration among policymakers, scholars, and thought leaders on pressing global challenges. At this year’s conference, BGF will present the findings and recommendations of the Special Report, which promises to offer innovative insights into resolving conflicts and promoting peace in war-torn regions worldwide.

The Special Report is the culmination of extensive research, analysis, and expert input from leading figures in international relations, conflict resolution, and peacebuilding. It provides a comprehensive examination of key conflict zones and war-torn regions across the globe, identifying the underlying causes, dynamics, and humanitarian impacts of ongoing conflicts.

The unveiling and discussion of the Shinzo Abe Initiative Special Report are expected to catalyze meaningful dialogue and action towards achieving sustainable peace and security in conflict-affected areas. Participants at the conference will have the opportunity to engage with the report’s findings, exchange perspectives, and explore collaborative strategies for addressing the complex challenges of conflict resolution and peacebuilding.

As the global community grapples with the urgent need for effective solutions to mitigate conflict and promote stability, the Shinzo Abe Initiative Special Report stands as a beacon of hope and a call to action for concerted efforts towards a more peaceful and secure world.

The Japanese Minister for Foreign Affairs speaks at the Shinzo Abe Initiative Conference on April 5, 2023, in Tokyo

Tech giants pledge action against deceptive AI in elections

A voter leaves a polling booth at St. Anthony Community Center during the presidential primary election, Tuesday, Jan. 23, 2024, in Manchester, N.H. (AP Photo/Michael Dwyer)

Tech giants including Microsoft, Meta, Google, Amazon, X, OpenAI and TikTok unveiled an agreement on Friday aimed at mitigating the risk that artificial intelligence will disrupt elections in 2024.

The tech industry “accord” takes aim at AI-generated images, video and audio that could deceive voters about candidates, election officials and the voting process. But it stops short of calling for an outright ban on such content.

And while the agreement is a show of unity for platforms with billions of collective users, it largely outlines initiatives that are already underway, such as efforts to detect and label AI-generated content.

Fears over how AI could be used to mislead voters and maliciously misrepresent those running for office are escalating in a year that will see millions of people around the world head to the polls. Apparent AI-generated audio has already been used to impersonate President Biden discouraging Democrats from voting in New Hampshire’s January primary and to purportedly show a leading candidate claiming to rig the vote in Slovakia’s September election.

“The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes,” the text of the accord says. “We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders.”

The companies rolled out the agreement at the Munich Security Conference, an annual gathering of heads of state, intelligence and military officials and diplomats dubbed the “Davos of Defense.”

The agreement is a voluntary set of principles and commitments from the tech companies. It includes developing technology to watermark, detect and label realistic content that’s been created with AI; assessing the models that underlie AI software to identify risks for abuse; and supporting efforts to educate the public about AI. The agreement does not spell out how the commitments will be enforced.

Work on the accord began at the start of the year and the final agreement came together in just six weeks. Its broad scope lacks the specific, enforceable measures many tech critics have pushed for, but likely reflects the challenge of getting 20 different companies on board in such a short timeframe.

Microsoft president Brad Smith said the show of industry unity is itself an accomplishment.

“We all want and need to innovate. We want and need to compete with each other. We want and need to grow our businesses,” he said. “But it’s also just indispensable that we adhere to a high level of responsibility, that we acknowledge and address the problems that are very real, including to democracy.”

The 20 companies signing the agreement include those that make tools for generating AI content, including OpenAI, Anthropic and Adobe. It’s also been signed by Eleven Labs, whose voice cloning technology researchers believe was behind the fake Biden audio. Platforms that distribute content including Facebook and Instagram owner Meta, TikTok and X, the company formerly known as Twitter, also signed.

Nick Clegg, Meta’s president of global affairs, said coming together as an industry, on top of work the companies are already doing, was necessary because of the scale of the threat AI poses.

“All our defenses are only as strong against the deceptive use of AI during elections as our collective efforts,” he said. “Generative AI content doesn’t just stay within the silo of one platform. It moves across the internet at great speed from one platform to the next.”

With its focus on transparency, education, and detecting and labeling deceptive AI content rather than removing it, the agreement reflects the tech industry’s hesitance to more aggressively police political content.

Critics on the right have mounted a pressure campaign in Congress and the courts against social media platform policies and partnerships with government agencies and academics aimed at tamping down election-related falsehoods. As a result, some tech companies have backed off those efforts. In particular, misinformation, propaganda and hate speech have all surged on X since Elon Musk’s takeover, according to researchers.

Microsoft’s Smith said the agreement makes a clear distinction between free speech, which the companies are committed to protecting, and deceptive content.

“Let every person speak for themself. Let groups stand together and speak out on the issues they care about. But don’t deceive or fundamentally defraud the public by trying to put in someone’s mouth words that were never spoken,” he said. “That’s not, in our view, free expression. That’s called fraud and deceit.”

New wrinkle to an old problem

Just how disruptive a force AI will be this election cycle remains an open and unanswerable question.

Some experts fear the risk is hard to overstate.

“The power afforded by new technologies that can be used by adversaries — it’s going to be awful,” said Joe Kiniry, chief scientist of the open-source election technology company Free & Fair. “I don’t think we can do science-fiction writing right now that’s going to approach some of the things we’re going to see over the next year.”

But election officials and the federal government maintain the effect will be more muted.

The Cybersecurity and Infrastructure Security Agency, the arm of the Department of Homeland Security tasked with election security, said in a recent report that generative AI capabilities “will likely not introduce new risks, but they may amplify existing risks to election infrastructure” like disinformation about voting processes and cybersecurity threats.

AI dominated many conversations at a conference of state secretaries of state earlier this month in Washington, D.C. Election officials were quick to note they’ve been fighting against misinformation about their processes for years, so in many ways AI’s recent advance is just an evolution of something they are already familiar with.

“AI needs to be exposed for the amplifier that it is, not the great mysterious, world-changing, calamity-inducing monstrosity that some people are making it out to be,” said Adrian Fontes, the Democratic secretary of state of Arizona. “It is a tool by which bad messages can spread, but it’s also a tool by which great efficiencies can be discovered.”

While some election experts fear the risk of AI is hard to overstate, others, like Arizona Secretary of State Adrian Fontes, say it’s not the “calamity-inducing monstrosity that some people are making it out to be.” Ross D. Franklin/AP

One specific worry that came up frequently was how difficult it is to encourage the public to be skeptical of what they see online without having that skepticism turn into a broader distrust of, and disengagement from, all information.

Officials expect, for instance, that candidates will claim more and more that true information is AI-generated, a phenomenon known as the liar’s dividend.

“It will become easier to claim anything is fake,” Adriana Stephan, an election security analyst with CISA, said during a panel about AI at the conference.

Regulators are eyeing guardrails too

Many of the signatories to the new tech accord have already announced efforts that fall under the areas the agreement covers. Meta, TikTok and Google require users to disclose when they post realistic AI-generated content. TikTok has banned AI fakes of public figures when they’re used for political or commercial endorsements. OpenAI doesn’t allow its tools to be used for political campaigning, creating chatbots impersonating candidates, or discouraging people from voting.

Last week Meta said it will start labeling images created with leading AI tools in the coming months, using invisible markers the industry is developing. Meta also requires advertisers to disclose the use of AI in ads about elections, politics and social issues, and bars political advertisers from using the company’s own generative AI tools to make ads.

Efforts to identify and label AI-generated audio and video are more nascent, even as they have already been used to mislead voters, as in New Hampshire.

But even as tech companies respond to pressure over how their products could be misused, they are also pushing ahead with even more advanced technology. On Thursday, OpenAI announced a tool that generates realistic videos up to a minute long from simple text prompts.

The moves by companies to voluntarily rein in the use of AI come as regulators are grappling with how to set guardrails on the new technology.

European lawmakers are poised in April to approve the Artificial Intelligence Act, a sweeping set of rules billed as the world’s first comprehensive AI law.

In the U.S., a range of proposed federal laws regulating the technology, including banning deceptive deepfakes in elections and creating a new agency to oversee AI, haven’t gained much traction. States are moving faster, with lawmakers in some 32 states introducing bills to regulate deepfakes in elections since the beginning of this year, according to the progressive advocacy group Public Citizen.

Critics of Silicon Valley say that while AI is amplifying existing threats in elections, the risks presented by technology are broader than the companies’ newest tools.

“The leading tech-related threat to this year’s elections, however, stems not from the creation of content with AI but from a more familiar source: the distribution of false, hateful, and violent content via social media platforms,” researchers at the New York University Stern Center for Business and Human Rights wrote in a report this week criticizing content moderation changes made at Meta, Google and X.