Alex Pentland’s ideas at BGF Conference on Global Enlightenment Economy

On January 27, 2023, the Boston Global Forum organized the BGF conference on Global Enlightenment Economy.

MIT Professor Alex Pentland, a Distinguished Contributor to “Remaking the World – Toward an Age of Global Enlightenment”, was a keynote speaker.

In his talk, Pentland discussed the importance of a data commons for achieving the United Nations Centennial Initiative and the UN Sustainable Development Goals. The data commons he envisions is a census of the world, not just of a single country, covering people, economics, social conditions, inequality, and pollution. Progress on the data commons has been slow, but sharing cross-border trade data could address an estimated trillion dollars in tax avoidance. Pentland also emphasized modern, data-driven social and tax programs built on open-source, shared software. He cited Estonia’s blockchain-based transparency system for VAT, which increased revenue by 15% and raised citizen satisfaction even more. A data commons and common protocols for cross-border trade, he argued, are essential for reducing tax avoidance and improving our knowledge of communities.
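The transparency idea behind the Estonia example can be illustrated with a simple hash chain: each VAT record is hashed together with the previous entry’s hash, so any later alteration of the ledger is detectable by anyone who recomputes the chain. The sketch below is purely illustrative; the field names and structure are hypothetical and do not describe Estonia’s actual system.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a VAT record together with the previous entry's hash,
    so any later alteration breaks the chain."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list, record: dict) -> None:
    """Append a record, chaining it to the last entry's hash."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "hash": record_hash(record, prev)})

def verify(ledger: list) -> bool:
    """Recompute every hash in order; one tampered record fails the check."""
    prev = "0" * 64
    for entry in ledger:
        if entry["hash"] != record_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"seller": "A", "buyer": "B", "vat_eur": 200})
append(ledger, {"seller": "B", "buyer": "C", "vat_eur": 350})
assert verify(ledger)

# Tampering with an earlier record is immediately detectable.
ledger[0]["record"]["vat_eur"] = 20
assert not verify(ledger)
```

Because verification needs only the records and their hashes, tax authorities, auditors, and citizens can all check the same ledger independently, which is the transparency property Pentland highlights.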

What Rights Do Human Creators Have When Their Work Is Used to Train an AI System?

Bao Tran, The Representative of Boston Global Forum in Silicon Valley, USA

To ensure that human creators’ rights are protected as artificial intelligence is developed and deployed, there must be a clear understanding of the rights and responsibilities attached to such systems. It is also important to create an oversight body that is independent of public authorities and tasked with promoting knowledge and understanding of AI and human rights, such as the Global Alliance for Digital Governance.

 

Do’s

When human creators work with an AI system, the end results can be impressive. But how should that collaboration work? Best practices in the field suggest a few things to keep in mind.

First, know what you’re working with. The better you can identify the needs and priorities of the people who will be affected, the better off you’ll be.

You’ll also want to be careful with the data you feed an AI model. Large volumes of poorly curated data can produce the wrong kind of results and, in some cases, create ethical dilemmas. Using AI to generate content can also open up new copyright and legal complications.

To illustrate this, consider the clothing company Stitch Fix, which recently began experimenting with DALL-E 2, a program that uses machine learning to generate images from text descriptions, such as illustrations of clothing items.

Another way to think about this is to ask who the author is. While a human supplies the prompt, it is a complex machine learning algorithm that actually generates the image.

It’s not always easy to make the case for a purely AI-generated image, but there are some perks: such images can be more diverse than human-produced versions, and sometimes a bit higher in quality.

Ultimately, the most important thing to remember is that a model will only perform the tasks it’s trained for. So if you’re building an AI system, give users plenty of input. Not only will this make the system more intuitive, it will also make the process more transparent.

Other tips include focusing on the most effective results and letting the system do its job. As AI continues to grow in power, it’s important to think about how to mitigate its pitfalls; intellectual property law can help here.

Lastly, the best thing to do is to make a well-constructed, comprehensive AI policy that covers all aspects of the program. From the data used to train the software to the way in which the results are delivered, it’s important to understand what’s at stake and to find a way to manage it.

Don’ts

Whether the system is fully automated or augments a human, it’s clear that artificial intelligence is pervading our daily lives. As the technology advances and the algorithms become more complex, developers need to focus on whether their efforts actually deliver results, and often those results can’t be measured in a single number.

Consider automated decision-making: the European Union has proposed requiring companies to provide a formal explanation of their automated decisions. Other examples include natural language processing and analytics systems such as IBM’s Watson, which won the US quiz show Jeopardy! These technologies are not without drawbacks: haphazardly deployed facial recognition systems have led to disproportionate misidentification of Black people.

There are also notable exceptions, such as Google’s AI strategy: the company is making efforts to help artists create transformative works that can be manipulated and repurposed. Similarly, DeviantArt recently announced guidelines for generative-art submissions and released a tool called DreamUp, along with labeling that helps users make a more informed decision about whether a particular piece of artwork is computer generated.

Another noteworthy piece of software is Playform, a platform for creating AI-generated art. Many users have reported interesting results, such as signature gestures and signature colors emerging in the output, although many of these effects are not well explained, which is a shame.

Perhaps it’s time to stop relying on computers to do the dirty work and start looking to artists and designers to create new and innovative works of art. If you’re an artist, a growing set of AI-based resources is free and open to the public; you can even mint your own NFTs from the models. With this technology at your fingertips, you can create an interactive experience akin to making art by hand.

 

Promote knowledge and understanding of AI and human rights

Promoting knowledge and understanding of AI and human rights is a key aspect of the international community’s efforts to ensure that human rights are not undermined by the technology. It is crucial to identify and mitigate the risks the technology poses, while supporting its use to promote sustainable development and to respond to difficult problems.

In the past few years, the international community has engaged in consultations with experts from across the globe. This has led to a number of concrete recommendations. These recommendations focus on 10 areas of action to address the impact of AI.

In the context of humanitarian aid, AI is likely to play an important role. However, it is also likely to pose unique risks to human rights. As such, it is imperative that it is used in a way that is respectful to the human rights of affected persons and to the environment.

AI and human rights education should reach a broad spectrum of audiences. In addition to reaching those with limited IT literacy, it is important to make sure that all participants in the design, implementation, and use of AI systems are informed about the risks and vulnerabilities of the technology.

Self-assessment of an AI system’s potential human rights impacts should be carried out prior to development and implementation. This assessment should be based on a thorough review of the system’s purpose, the context in which it is intended to operate, and its potential human rights implications.

A meaningful external review of an AI system should be conducted at regular intervals. This should include an evaluation of the algorithms behind the system, the way in which the decision-makers influence inputs, and the human control over the system. Such an assessment should take place in an open and transparent manner.

Relevant oversight bodies should be established to monitor the impact of AI on human rights and to investigate violations of human rights. They should have adequate resources and appropriate training to carry out their duties.

In order to operationalize the human rights framework, there is a need for capacity building in both the public and NGO sectors. This is especially necessary to address the challenges of human rights oversight in the context of AI.

 

Oversight bodies should be independent of public authorities

If you are thinking about investing in artificial intelligence, or if you are a creative individual or company, you probably want to know what legal protection you can expect for your work. Copyright law has been a major factor in protecting the intellectual property of creative works, but artificial intelligence may prove an exception. Some argue that output created by an AI system trained on human works may be considered free of copyright, which could mean it would be freely distributed.

This may be a good thing for the public at large, because it would allow individuals and companies to keep investing in the technology. However, it would also allow the system to be easily manipulated, which could make the technology less accurate and effective. Moreover, a work free of copyright can be used by anyone without paying the author, and that would be a bad deal for the companies that create such works.

Despite this potential benefit, the copyright model still seems the best way to protect the rights of the creators of an artificial intelligence system. If the work can be freely distributed, it could be argued that the creator is unprotected and that originality standards are being eroded. It is also important to note that the current trend of lowering the thresholds for originality could have negative consequences for those who create works.

 

The Global Alliance for Digital Governance (GADG), established through a collaboration of the Boston Global Forum and the World Leadership Alliance-Club de Madrid at the Policy Lab on September 7-9, 2021, is an excellent instrument for this mission.

Happy Lunar New Year 2023

As we begin the new Lunar Year in 2023, the Boston Global Forum team would like to extend our warmest greetings and best wishes for a prosperous and successful year ahead.

This past year has been one of great change and adaptation, and as we look to the future, we must continue to work together to address the pressing challenges facing our world.

As the CEO of the Boston Global Forum, I am honored to lead an organization that is at the forefront of addressing this issue through our AI World Society (AIWS). Our goal is to create an Age of Global Enlightenment, where individuals and societies are empowered to live in innovation and higher understanding, and responsible use of AI is a key component of this vision.

I encourage each and every one of you to take an active role in creating an Age of Global Enlightenment by participating in the AIWS Actions and contributing your expertise, insights, and ideas. Together, we can shape the future of AI for the betterment of humanity.

As we celebrate the Lunar New Year, let us also celebrate the new opportunities and possibilities it brings. Together, we can create a brighter and more enlightened future for all.

Wishing you a happy and prosperous New Year in 2023!

 

Sincerely,

Nguyen Anh Tuan

CEO, Boston Global Forum

Japanese former Defense Minister Yasuhide Nakayama named the coordinator of the Shinzo Abe Initiative for Peace and Security

On January 20, 2023, the Boston Global Forum announced that Yasuhide Nakayama has been named the coordinator of the Shinzo Abe Initiative for Peace and Security, an initiative focused on promoting peace and security in the region and around the world.

Yasuhide Nakayama, as a former Defense Minister of Japan, brings a wealth of experience and knowledge in the field of international security and defense. His appointment as coordinator of the initiative indicates the significance and importance of Japan’s role in promoting peace and security in the region and the world.

The Shinzo Abe Initiative for Peace and Security, under the leadership of Yasuhide Nakayama, aims to address the most pressing security challenges facing the world today through dialogue, research, and policy recommendations. The initiative will convene experts, policymakers, and leaders from around the world to discuss and develop solutions to these challenges, with the goal of creating a more stable and peaceful world.

Deeply concerned about “smart deterrence”: China’s AI-warfare plan for Taiwan

Chinese military experts are reportedly exploring “smart deterrence” concepts, marking a significant evolution in China’s use of artificial intelligence (AI) and other emerging technologies from tactical and operational military levels to influence strategic-level decision-making.

The South China Morning Post reported that China could become a leader in so-called “intelligent warfare”, drawing on advanced technologies such as AI, cloud computing, big data analytics, and cyber offense and defense.

It is important to note that any potential military use of AI raises significant ethical and legal concerns, and it is important for governments and international organizations to address these issues through open and transparent dialogue.

The Global Alliance for Digital Governance is deeply concerned and will discuss this issue with leaders.

https://asiatimes.com/2023/01/smart-deterrence-chinas-ai-warfare-plan-for-taiwan/

https://www.scmp.com/news/china/military/article/3206780/call-pla-use-ai-smart-deterrence-against-us-over-taiwan