Public access to Internet giants’ mapping data should be mandated

The Open Data Institute (ODI) recently published a report on the UK’s geospatial data, addressing both its opportunities and its challenges. The report emphasizes the value of map data to autonomous technology.

The report, carried out in the UK, suggests that publishing the map data owned by Apple, Google, and Uber would lead to major developments in technologies such as autonomous cars and drones.

So far, Internet giants such as Google and Uber have collected huge amounts of geospatial data, including addresses and boundaries, through the services they provide. However, this data has not been accessible to the public. The ODI argues that it should be considered part of the “national infrastructure”.

The data could fuel a transport boom worth approximately £11 billion, including improved access to schools and hospitals for remote areas and advancements in commercial satellites, connected cars, drones, and more.

Despite the considerable benefits, mandating access to the data is difficult, as these firms have no incentive to give up their competitive advantage. The ODI recommends political support from the government to make it possible in the near future.

Sharing geospatial data will help governments deliver public services. The sixth layer of the AIWS 7-layer Model, developed by the Michael Dukakis Institute, addresses how to apply AI effectively to public services and policy-making.

AI World Strategies Theater: Use Cases for AI and High-Performance Computing

The AI World Conference and Expo on “Accelerating Innovation in the Enterprise”, December 3-5, 2018 in Boston, focuses on the state of the practice of AI in the enterprise. The three-day conference and exposition is designed for business and technology executives who want to learn about innovative implementations of AI in the enterprise through case studies and peer networking.

As enterprises begin to scale AI pilot programs, which often incorporate deep learning (DL), machine learning (ML), and natural language processing (NLP), across the enterprise, the need for a high-performance compute and storage environment becomes clear. High-performance computing (HPC) environments, with their massive processing capability and low-latency access to data, are seen by many observers as a clear complementary technology to AI. This panel workshop will cover the following sub-topics:

  • What are the top AI use cases that are likely to be powered by HPC systems now, and in the future?
  • What technological and operational challenges exist with deploying HPC systems that can support AI pilot programs or full-scale rollouts?
  • What operational models are enterprises using to deploy AI technology that is powered by HPC?
  • What security and regulatory issues need to be considered when using HPC systems to power AI use cases?

The panel will be moderated by Keith Kirkpatrick, Principal Analyst at Tractica, and will feature the following panelists:

  • Christopher Carothers, PhD, Technology & Academic Advisor, Research & Development, Lucd
  • Gary Tyreman, CEO, Univa
  • Margrit Betke, PhD, Professor of Computer Science, Boston University

The Michael Dukakis Institute for Leadership and Innovation is an international sponsor of the event and is collaborating with AI World to publish reports and programs on AI-Government, including the AIWS Index and AIWS Products.

Please join Governor Michael Dukakis, honorary advisory board member and featured guest speaker, on Tuesday, December 4 at 8:55 am, along with thousands of global business executives at AI World.

For more information about the event, visit aiworld.com.

To register and receive a $200 discount on a 2- or 3-day conference registration, click here and enter priority code 186800MDI.

To receive a complimentary pass to attend the expo, click here and enter priority code 186800XMDI.

How can AI be trustworthy?

Ever since the emergence of AI, even as it has benefited us greatly in many areas, doubts have persisted, especially about how biased AI algorithms can be. This concern has opened many discussions of AI ethics seeking common ground across diverse cultures.

However, biases in AI are unlikely to disappear completely, since the same biases exist without AI. These discussions should instead focus on how to make AI trustworthy.

In fact, all the opinions we hold about AI are biased. The media tends to favor shocking stories, so we usually hear more about bad examples than good ones. The press cultivates fear when trust is what we need. Instead of merely trying to prevent unfairness, we should learn how to help AI make the right decisions.

Another key aspect of building trust is transparency. For instance, firms can take part in the AI Ethics challenge initiated by the AI Steering Group of Finland’s Ministry of Economic Affairs and Employment. The challenge encourages them to write down their ethical codes for AI development.

Besides, it is essential that there are state laws governing how and why an AI is developed. Companies should be active in the discussion and keep their regulators informed about the achievements they have made.

In general, the current development of AI is not transparent enough to earn people’s trust. With rules and regulations, which the AIWS is working on, and ethical frameworks, which Michael Dukakis is building through the AIWS Initiative, we can take a step closer to transparency and ethics in AI development.

Deep learning’s world champion is concerned about the future of AI

“I think we’re better off thinking about how to both build smarter machines and make sure AI is used for the well-being of as many people as possible,” Professor Yoshua Bengio said in an interview with MIT Technology Review’s senior editor for AI, Will Knight.

Yoshua Bengio has been a well-known expert in deep learning since the days when it was merely an academic curiosity. He is a professor at the University of Montreal and, in 2016, co-founded Element AI, a successful company that helps firms explore AI applications.

In the interview with Will Knight, Bengio shared his view that AI will be a game changer that improves people’s lives everywhere. Yet he raised a few caveats about how AI should be developed.

Firstly, AI research risks a concentration of power, as big companies tend to become more and more powerful, and “it’s dangerous to have too much power concentrated in a few hands,” said Professor Bengio.

Secondly, there is a need to establish laws and treaties to prevent lethal uses of AI, and to set up defensive systems against them, since some countries may secretly develop AI weapons.

Furthermore, to achieve human-level AI that satisfies human needs in the long term, we need a long-term strategy as well as investment, and machine learning will be the foundation for a major leap in AI innovation. “We need to be able to extend it to do things like reasoning, learning causality and exploring the world,” said Professor Bengio. As machines are not able to project themselves into different circumstances, they need models. Hence, we need machines that can discover these causal models, enabling them to learn from their mistakes.

To avoid harm, AI research needs to follow a certain set of rules that keeps AI development under control. This is what the AIWS 7-layer Model is developing.

Mikko Hypponen on IoT, AI in Cybersecurity and Privacy

Over the years, machine learning has taken major leaps and become the basis for many security technologies. As for the risks, it can also be used by attackers in the future. Mikko Hypponen shared his opinions on how to prepare to protect users’ privacy and internet safety.

People are now trying to automate their homes by connecting appliances to the Internet of Things (IoT). However, this leaves the home exposed to attackers. To keep it safe from attack, “Put your IoT devices in a separate network segment. Do not allow any connectivity from the IoT segment to your production network. Keep their firmware up-to-date. Change the default credentials. Read the manuals,” said Mikko Hypponen.
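Hypponen’s segmentation advice can be sketched as firewall rules on a home router. This is a minimal illustration only: the interface names (`br-iot`, `br-lan`, `wan0`) and address ranges are hypothetical examples, not part of his remarks, and a real setup would depend on your router’s firmware.

```shell
# Illustrative iptables rules isolating a hypothetical IoT segment (br-iot)
# from the main home network (br-lan). Interface names are assumptions.

# Permit replies to connections that the main network initiated toward IoT devices.
iptables -A FORWARD -i br-iot -o br-lan -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Block everything else from the IoT segment into the main network.
iptables -A FORWARD -i br-iot -o br-lan -j DROP

# Let the main network reach IoT devices (e.g. for management).
iptables -A FORWARD -i br-lan -o br-iot -j ACCEPT

# Let IoT devices reach the internet via the WAN interface for updates.
iptables -A FORWARD -i br-iot -o wan0 -j ACCEPT
```

Rule order matters here: the ESTABLISHED/RELATED exception must precede the DROP rule, otherwise replies from IoT devices back to the main network would also be blocked.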

In regard to online privacy, he suggested compartmentalizing your online life: separate your online identity from your real-world identity. For instance, name your Reddit account something other than your real name, and keep your personal information secure by not revealing it on the internet at all.

Mikko Hypponen is a cybersecurity expert and columnist and the Chief Research Officer at F-Secure. In 2015, the Boston Global Forum and the Michael Dukakis Institute honored him as a Practitioner in Cybersecurity for his work and active contribution to the public’s knowledge of cybersecurity. He has worked in computer security for over 20 years and has fought the biggest malware outbreaks on the net.