How can AI be trustworthy?

Nov 25, 2018 | News

Since its emergence, AI has benefited us greatly in many areas, yet there is also persistent skepticism toward it, especially about how biased AI algorithms can be. This concern has opened many discussions of AI ethics that seek common ground across diverse cultures.

However, biases in AI are unlikely to disappear completely, since the same biases exist without AI. These discussions should therefore focus on how to make AI trustworthy.

In fact, all the opinions we hold about AI are themselves biased. The media tends to favor shocking stories, so we usually hear more about bad examples than good ones. The press cultivates fear when trust is what we need. Instead of only trying to prevent AI from being unfair, we should learn how to help it make the right decisions.

Another key aspect of building trust is transparency. For instance, firms can take part in the AI Ethics Challenge initiated by the AI Steering Group of Finland's Ministry of Economic Affairs and Employment. The challenge encourages them to write down their ethical codes for AI development.

Besides, it is essential that there are state laws governing how and why an AI is developed. Companies should be active in the discussion and keep their regulators informed about the progress they have made.

In general, the current development of AI is not transparent enough to earn people's trust. With rules and norms, such as those the AIWS is working on, and ethical frameworks, such as those Michael Dukakis is building with the AIWS Initiative, we can take a step closer to transparency and ethics in AI development.