Bao Tran, Representative of the Boston Global Forum in Silicon Valley, USA
To ensure that human creators’ rights are protected as artificial intelligence is developed and deployed, there must be a clear understanding of the rights and responsibilities such systems carry. It is also important to create an oversight body that is independent of public authorities and tasked with promoting knowledge and understanding of AI and human rights, such as the Global Alliance for Digital Governance.
Do’s
When human creators work with an AI system, the end results can be impressive. But how does the collaboration actually work? Best practices in the field suggest a few things to keep in mind.
First, know what you’re working with. The better you can identify the needs and priorities of the people who will be affected by the system, the better off you’ll be.
Be careful with the data you feed an AI model, too. Volume alone is no guarantee of quality: a large but poorly curated dataset can produce the wrong kind of results, and in some cases create ethical dilemmas. Using AI to generate content can also open up new copyright and legal complications.
To illustrate this, consider the following example. The retailer Stitch Fix recently started experimenting with DALL-E 2, OpenAI’s text-to-image model, which applies machine learning techniques to a written description and creates an illustration of a clothing item.
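For readers curious about the mechanics, here is a minimal, hypothetical sketch of such a text-to-image request using OpenAI’s Python SDK; the prompt, model choice, and surrounding code are illustrative assumptions, not Stitch Fix’s actual pipeline.

```python
# Minimal sketch: generating a clothing-item illustration with DALL-E 2
# via OpenAI's Python SDK (openai >= 1.0). The prompt text is illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.images.generate(
    model="dall-e-2",
    prompt="Product photo of a casual linen summer blazer on a plain background",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # URL of the generated image
```

The entire interaction is a text prompt in and an image URL out, which is exactly why the question of where the “creation” happens becomes hard to answer.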
Another way to think about this is to imagine a hypothetical robot illustrator. Whether or not you call the system “artificial intelligence,” the image is generated by a complex machine learning algorithm rather than a human hand, and that is precisely what raises the authorship question.
Although the case for a purely AI-generated image is not always easy to make, there are perks. For instance, the outputs can be more diverse than the human-produced version, and there is a chance they are of slightly better quality.
Ultimately, the most important thing to remember is that a model will only perform the feats it’s trained for. So, if you’re building an AI system, give users meaningful input into its design. Not only will this make your AI system more intuitive, it will also make the process more transparent.
Other tips include focusing on the results that actually matter and letting the system do the job it was built for. As AI continues to grow in power, think about how to mitigate its pitfalls; intellectual property law can be one useful tool.
Lastly, the best thing to do is write a well-constructed, comprehensive AI policy that covers all aspects of the program, from the data used to train the software to the way the results are delivered. It’s important to understand what’s at stake and to find a way to manage it.
Don’ts
Whether through fully automated systems or tools that augment human work, artificial intelligence is pervading our daily lives. But as the technology advances and the algorithms grow more complex, developers need to focus on how their efforts actually deliver results. And much of the time, those results can’t be measured by a single number.
One example is automated decision-making: the European Union has proposed requiring companies to provide a formal explanation of their automated decisions. Other examples include natural language processing and analytics systems such as IBM’s Watson, which won the US quiz show Jeopardy!. These technologies are not without their own drawbacks: facial recognition systems, often haphazardly deployed, misidentify Black people at disproportionately high rates.
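The EU proposal does not prescribe how such explanations should be produced. One widely used technique is to measure which input features drive a model’s decisions; the sketch below applies scikit-learn’s permutation importance to a hypothetical classifier, with the dataset and model chosen purely for illustration.

```python
# Minimal sketch of one common explanation technique: permutation feature
# importance. The dataset and model here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# the features whose shuffling hurts most are the ones driving decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

An explanation like this is only a starting point: it says which features mattered on average, not why a specific person’s case was decided the way it was.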
There are also notable exceptions, such as Google’s AI strategy: the company is making efforts to help artists create transformative works, such as images that can be manipulated and repurposed for other uses. Similarly, DeviantArt recently announced guidelines for generative art submissions and, in the process, released a text-to-image tool called DreamUp, whose output is labeled so viewers can make a more informed judgment about whether a particular piece of artwork is computer generated.
Another noteworthy piece of software is Playform, a cloud-based platform that lets artists train generative models on their own work without writing code. Many users have reported interesting results, such as the emergence of signature gestures and signature colors. Somewhat surprisingly, many of these effects are not well explained, which is a shame.
Perhaps, rather than relying on computers to do the dirty work, it’s time to look to artists and designers to use these tools to create new and innovative works of art. If you’re an artist, a growing number of AI-based resources are free and open to the public; some even let you mint your own NFTs from the models. With this technology at your fingertips, you can create interactive experiences akin to art made by hand.
Promote knowledge and understanding of AI and human rights
Promoting knowledge and understanding of AI and human rights is a key part of the international community’s effort to ensure that the technology does not undermine human rights. It is crucial to identify and mitigate the risks the technology poses while supporting its use to promote sustainable development and to respond to difficult problems.
In the past few years, the international community has engaged in consultations with experts from across the globe. This has led to a number of concrete recommendations. These recommendations focus on 10 areas of action to address the impact of AI.
In the context of humanitarian aid, AI is likely to play an important role. However, it is also likely to pose unique risks to human rights. It is therefore imperative that it be used in a way that respects the human rights of affected persons and the environment.
AI and human rights education should reach a broad spectrum of audiences. In addition to reaching those with limited IT literacy, it is important to make sure that all participants in the design, implementation, and use of AI systems are informed about the risks and vulnerabilities of the technology.
Self-assessment of an AI system’s potential human rights impacts should be carried out prior to development and implementation. This assessment should be based on a thorough review of the system’s purpose, the context in which it is intended to operate, and its potential human rights implications.
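One way to make such a self-assessment operational is to record it as structured data that must be completed before development begins. Below is a minimal, hypothetical sketch in Python; the field names and the gating rule are illustrative assumptions, not a recognized standard.

```python
# Hypothetical sketch: a pre-development human rights self-assessment
# encoded as structured data. Field names are illustrative, not a standard.
from dataclasses import dataclass, field

@dataclass
class HumanRightsSelfAssessment:
    system_purpose: str
    operating_context: str
    affected_groups: list[str]
    potential_impacts: list[str]          # e.g. privacy, non-discrimination
    mitigations: list[str] = field(default_factory=list)

    def ready_to_proceed(self) -> bool:
        # Crude gate: every identified impact needs at least one mitigation.
        return bool(self.potential_impacts) and \
            len(self.mitigations) >= len(self.potential_impacts)

assessment = HumanRightsSelfAssessment(
    system_purpose="Triage incoming requests for humanitarian aid",
    operating_context="Low-connectivity crisis regions",
    affected_groups=["displaced persons", "aid workers"],
    potential_impacts=["privacy", "non-discrimination"],
    mitigations=["data minimization", "pre-deployment bias audit"],
)
print(assessment.ready_to_proceed())  # True
```

Even a crude checklist like this forces the review of purpose, context, and impacts to happen before any model code is written.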
A meaningful external review of an AI system should be conducted at regular intervals. This should include an evaluation of the algorithms behind the system, how decision-makers influence its inputs, and the degree of human control over the system. Such an assessment should take place in an open and transparent manner.
Relevant oversight bodies should be established to monitor the impact of AI on human rights and to investigate violations of human rights. They should have adequate resources and appropriate training to carry out their duties.
In order to operationalize the human rights framework, there is a need for capacity building in both the public and NGO sectors. This is especially necessary to address the challenges of human rights oversight in the context of AI.
Oversight bodies should be independent of public authorities
If you are thinking about investing in artificial intelligence, or if you are a creative individual or company, you probably want to know what legal protection you can expect for your work. Copyright law has long been a major factor in protecting the intellectual property in creative works. However, there may be an exception for artificial intelligence: some argue that a work created by an AI system, even one trained on human works, may be considered free of copyright, which would mean the work could be freely distributed.
This may be a good thing for the public at large, because it would allow individuals and companies to keep investing in the technology. But it cuts both ways. A work free of copyright can be used by anyone, without payment or permission from its creator, which is a bad deal for the companies that produce such work; it would also leave the underlying systems easier to manipulate, which could make the technology less accurate and effective.
Despite this potential benefit, the copyright model still seems the best way to protect the rights of the creators of an artificial intelligence system. If the work can be freely distributed, it could be argued that the creator is not protected and that originality standards are being violated. It is also worth noting that the current trend of lowering the thresholds for originality could have negative consequences for those who create works.
The Global Alliance for Digital Governance (GADG), established through a collaboration of the Boston Global Forum and the World Leadership Alliance-Club de Madrid at the Policy Lab on September 7-9, 2021, is an excellent instrument for this mission.