Recently, an Op-Ed staff editor and writer at The New York Times was invited to a talk with Yuval Noah Harari, philosopher and internationally bestselling author of “Sapiens” and “Homo Deus,” to discuss the vision of the future of humankind presented in his latest work, “21 Lessons for the 21st Century.”
In this vision, humankind will let machines and robots take over everyday tasks while humans live as “gods.”
Before that time arrives, however, many problems must be confronted in AI development. Whereas nuclear weapons and arms races can be monitored and restrained, an AI weapon such as a “killer robot” could be developed in secret by one nation without the knowledge of others. It is difficult to tell when a new AI is being developed and whether it might pose a danger, so a certain level of trust needs to be built between nations on a global scale.
In this interview, Harari engages in a broad-ranging discussion about human nature and the human condition, past, present, and future. A number of his points underscore the importance of the AIWS initiative. He observes that recent advances in science suggest humans do not, strictly speaking, have full free will; that humans learn through stories; that those stories do not necessarily correspond to reality, and people suffer as a result; that there are now techniques to “hack” the human mind and influence behavior; and that the development of AI, while offering many wonderful possibilities, sets the stage for manipulation and outcomes that are difficult to predict. He concludes that there is a pressing need for efforts to oversee the ethical development of artificial intelligence, and that these efforts must be global to be effective. AIWS is well positioned to continue playing a facilitating role in this important context.