Are We Having An Ethical Crisis in Computing?

With the Fourth Industrial Revolution, we stand on the brink of a technological revolution that will fundamentally change our lives. Much of that is thanks to disruptive innovations in the computing field, in particular AI. At the same time, according to world-renowned Professor Moshe Y. Vardi of Rice University, a Fellow of the Association for Computing Machinery and the American Mathematical Society, the field seems to face a public image crisis, viewed by many as a crisis of trust.

Indeed, strong concerns about ethics in computing have been voiced in the media and press. Many colleges are also hurriedly integrating ethics into their computing curricula. The narrative is that what ails tech today is a deficit of ethics, and the remedy, therefore, is an injection of ethics.

However, is this crisis real, and is our current response the right solution? Professor Vardi raised these questions in an interesting op-ed he penned in last month's issue of Communications of the ACM. He compared today's computing to the early 20th century's automobile manufacturing. The solution to automobile crashes is not ethics training for drivers, but public policy, which makes transportation safety a public priority. Similarly, he argues that the current crisis in computing is not an ethics crisis; it is a public policy crisis, and we need proper policies!

This aligns well with the vision and goals of AI World Society (AIWS). AIWS was founded by the Michael Dukakis Institute for Innovation and Leadership precisely to promote ethical norms and practices in the development and use of AI. We recognized the importance of ethics guidelines at the policy level and recently published a comprehensive report on AI ethics; link below:

https://bostonglobalforum.org/bgf2022/2018/12/aiws-report-about-ai-ethics/

A Stanford-Led Engineering Team Unveils the Prototype for a Computer-on-a-Chip

Researchers at Stanford University and CEA-Leti unveiled the computer-on-a-chip prototype on February 19 at the International Solid State Circuits Conference in San Francisco.

It is the world's first circuit integrating multiple-bit non-volatile memory (NVM) technology, called Resistive RAM (RRAM), with silicon computing units, as well as new memory resiliency features that provide 2.3 times the capacity of existing RRAM. Its target applications include energy-efficient, smart-sensor nodes to support Artificial Intelligence on the Internet of Things, or "Edge AI".

The research innovation is to unite memory and processing on one chip, which is faster and more efficient than passing data back and forth between separate chips, as is the case today. This innovation also has a strong impact on Edge AI applications through energy savings, which is aligned with the AI Practices Index designed by AI World Society (AIWS).

Even now, the way the prototype combines memory and processing could be incorporated into the chips found in smartphones and other mobile devices, and in an optimistic future it could power Edge AI across Internet of Everything applications. "This is what engineers do," said Subhasish Mitra, a Stanford professor of electrical engineering and computer science, demonstrating the kind of strong design ethics that AIWS also promotes.

The US threatens to stop sharing intelligence with allies if they use Huawei

“If a country adopts this [Huawei equipment] and puts it in some of their critical information systems, we won’t be able to share information with them, we won’t be able to work alongside them,” Pompeo said during an interview with Fox Business on Thursday Feb 21, 2019.

It’s got a lot to do with the role of 5G and whether China could use security back doors to exert undue control over a nation’s digital infrastructure via Huawei’s equipment. Confusingly, on the same day as Pompeo’s comments, President Donald Trump tweeted that he wanted the US to win in 5G development “through competition, not by blocking out currently more advanced technologies.”

In an interview with the BBC, Huawei founder Ren Zhengfei said the company has never installed back doors into its technology and never would do so, even if required to by Chinese law.

But a big problem for the Chinese government and Chinese businesses now is that what they say and what they do differ, so people do not trust them.

How can the Chinese government and businesses recover the people's trust? They should be more open, promote transparency, and decentralize power, sharing it with the Chinese people, who would then have a real say in supervising the Chinese government. Openness, transparency, and accountability by the government are among the criteria of the AI World Society (AIWS) Ethics Index, announced on December 3, 2018 by the Michael Dukakis Institute for Leadership and Innovation.

How AlphaZero has rewritten the rules of game play on its own

David Silver's latest creation, AlphaZero, learns to play board games including Go, chess, and shogi by practicing against itself. Through millions of practice games, AlphaZero discovers strategies that it took humans millennia to develop.
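AlphaZero's full training loop combines deep neural networks with Monte Carlo tree search and is far beyond what can be shown here, but the core self-play idea can be sketched in a drastically simplified form. Below is a toy, assumed-for-illustration example: tabular value learning through self-play on a small game of Nim (take 1-3 stones; whoever takes the last stone wins). None of the names or parameters come from DeepMind's system.

```python
import random

random.seed(0)

def legal_moves(stones):
    """In this toy Nim variant you may take 1-3 stones; taking the last stone wins."""
    return [a for a in (1, 2, 3) if a <= stones]

def train_self_play(start=10, episodes=20000, alpha=0.5, eps=0.2):
    """One table of action values is shared by both sides: the agent literally
    plays against itself and learns only from the win/loss signal at the end."""
    Q = {}
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            acts = legal_moves(stones)
            if random.random() < eps:            # explore a random move
                a = random.choice(acts)
            else:                                # exploit current knowledge
                a = max(acts, key=lambda x: Q.get((stones, x), 0.0))
            history.append((stones, a))
            stones -= a
        # The last mover took the final stone and wins (+1); credit alternates
        # sign as we walk back through the moves of the two "players".
        reward = 1.0
        for state, action in reversed(history):
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward - old)
            reward = -reward
    return Q

Q = train_self_play()
# Optimal Nim play always leaves the opponent a multiple of 4 stones.
policy = {s: max(legal_moves(s), key=lambda a: Q.get((s, a), 0.0)) for s in (5, 6, 7)}
print(policy)
```

Starting from zero knowledge, the learned policy rediscovers the classic winning strategy (leave a multiple of 4) purely from the outcomes of its own games, which is the essence of what self-play training does at a vastly larger scale in AlphaZero.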

So could AI one day solve problems that human minds never could?

When you have something learning by itself, building up its own knowledge completely from scratch, that is almost the essence of creativity.

AlphaZero has to figure out everything for itself. Every single step is a creative leap. Those insights are creative because they weren’t given to it by humans. And those leaps continue until it is something that is beyond our abilities and has the potential to amaze us.

In the future, AI systems may reason at the level of the human brain, with intelligent machines operating like human beings, so we need to organize a society that incorporates them into our lives. The Michael Dukakis Institute created the AI World Society (AIWS) Initiative to build that society.

Enterprise AI: Data Analytics, Data Science and Machine Learning

Jay Boisseau, Ph.D., and Lucas Wilson, Ph.D., Artificial Intelligence (AI) strategists and researchers at Dell EMC, described some fundamental technologies and processes that enable enterprises to apply AI to transform their businesses.

In the article, they articulated and elaborated the main building blocks of AI, including data analytics, data science, and data engineering, as well as machine learning and deep learning. These techniques are key enablers for drawing insights from data about the present and making predictions about the future, which are essential for enterprise applications such as predictive maintenance in manufacturing, sales forecasting in retail, and medical diagnosis in healthcare.
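The kind of sales forecasting mentioned above can be illustrated with a minimal sketch, assuming synthetic data: a line is fitted to hypothetical monthly sales figures by least squares, and the next month is extrapolated. The numbers are invented for illustration and do not come from the article.

```python
import numpy as np

# Hypothetical monthly sales figures (units sold) for a retail store
sales = np.array([120, 135, 148, 160, 171, 185, 198, 210], dtype=float)
months = np.arange(len(sales), dtype=float)

# Fit a trend line sales ≈ a * month + b by least squares (degree-1 polyfit)
a, b = np.polyfit(months, sales, 1)

# Extrapolate the trend to forecast the next month
forecast = a * len(sales) + b
print(round(forecast, 1))
```

Real enterprise pipelines would add data engineering (cleaning, feature extraction) and far richer models, but the present/future split the authors describe — insight from historical data, prediction beyond it — is already visible in this tiny example.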

The development of AI changes the ground rules for decision-making in both enterprise and society, a theme also developed and promoted by AI World Society (AIWS). In particular, AI is a disruptive technology that can transform our society and quality of life through a wide range of applications: stronger customer relationships, smarter business decisions, shorter processes, and better products and services brought to market.

Chancellor Angela Merkel gave a speech on security at the Munich Conference

On Saturday Feb 16, in Munich, German Chancellor Angela Merkel addressed the security conference with several critiques of U.S. foreign policy – and received a sustained standing ovation.

At the Global Cybersecurity Day Conference on December 12, 2015, the Boston Global Forum presented the World Leader for Peace, Security, and Development Award to Chancellor Angela Merkel. This annual award honors leaders who have made outstanding contributions to peace, security, and cybersecurity in the world.

Cedric Villani builds AI Strategy for France and EU

Cedric Villani, a mathematician and politician who won the Fields Medal, will lead a team to build a meaningful artificial intelligence strategy for France and Europe. Here are some key points of the strategy:

Building a Data-Focused Economic Policy

AI heavyweights such as China and the US are advancing at an extraordinary pace. France and Europe will not earn their place on the world AI stage by creating a "European Google"; rather, they should design their own tailored model.

Advancing Agile and Enabling Research

French academic research is at the cutting edge of global work in mathematics and artificial intelligence; however, the country's scientific progress does not always translate into concrete industrial and economic applications. The country suffers a brain drain toward the US heavyweights, and its training capacity in AI and data science falls short of its needs.

Anticipating the Effects of AI on the Future of Work and the Labor Market

Work is undergoing huge changes, yet we are not fully prepared to address them. There are significant uncertainties about the effects of artificial intelligence, automation, and robotics, especially on job creation and destruction. In any case, it looks increasingly certain that most companies and organizations will be broadly reshaped.

Artificial Intelligence Working for a More Ecological Economy

Carving out a significant role for artificial intelligence also means addressing its sustainability, particularly from an ecological point of view. This does not simply mean considering the use of AI in our ecological transition, but designing natively ecological AI and using it to tackle the impact of human activity on the environment.

Ethical Considerations of AI

If we want to develop AI technologies that conform to our values and social norms, then it is important to rally the research community, public authorities, industry, entrepreneurs, and civil society organizations. The mission has put forward some modest recommendations that could lay the foundations for the ethical development of AI and advance the discussion of this issue within society at large.

Inclusive and Diverse AI

Artificial intelligence must not become another way of shutting out parts of the population. AI will create opportunities for innovation and for the improvement of societies and individuals, so these opportunities must benefit everyone across the board.

Every country needs to abide by moral and legal rules when developing this field, and the world also requires international policies, conventions, and regulations to ensure unity and global consensus in developing AI. Calling on the leaders of nations to build a treaty on the exploitation and development of AI for peace is what the Michael Dukakis Institute (MDI) is actively pursuing through Layer 4 and Layer 5 of the 7-layer AIWS Model.

(Full version of the France AI Strategy: https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf)

AI is reinventing the way we invent

David Rotman, editor at MIT Technology Review, raises the point:

The biggest impact of artificial intelligence will be to help humans make discoveries we couldn’t make on our own.

Perhaps an AlphaGo-like breakthrough could help the growing armies of researchers poring over ever-expanding scientific data?

The idea is to infuse artificial intelligence and automation into all the steps of materials research and drug discovery.

OpenAI announces "Language Models are Unsupervised Multitask Learners"

An impressive new language AI writes product reviews and news articles. Its creators are worried about misuse.

The OpenAI team demonstrated that we could get those results from an “unsupervised” AI — meaning the system learned from reading 8 million Internet articles, not from being explicitly trained for the tasks.
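The real system is a large neural network trained on millions of web pages, but the meaning of "unsupervised" here can be illustrated with a toy sketch, assuming a tiny invented corpus: the model below learns word-to-word transition counts from raw text alone, with no labels and no task-specific training, and can then predict likely continuations.

```python
from collections import defaultdict

# A tiny raw-text corpus (invented for illustration) -- no labels anywhere
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training" is just counting which word follows which
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Predict the continuation seen most often after `word` in the corpus."""
    followers = counts[word]
    return max(followers, key=followers.get)

print(most_likely_next("sat"))  # "on" -- seen twice after "sat"
```

Scaled up from bigram counts to a deep network over 8 million articles, this same label-free prediction objective is what lets the OpenAI model pick up many tasks it was never explicitly trained for.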

The AIWS model includes measures to reduce and monitor the harm, misuse, and slander that AI can unleash on the world.