Why Apple And Microsoft Are Moving AI To The Edge

Feb 2, 2020 | News

Artificial intelligence (AI) has traditionally been deployed in the cloud, because AI algorithms crunch massive amounts of data and consume significant computing resources. But AI doesn't only live in the cloud. In many situations, AI-based data crunching and decisions need to happen locally, on devices close to the edge of the network.

AI at the edge allows mission-critical and time-sensitive decisions to be made faster, more reliably, and with greater security. The rush to push AI to the edge is being fueled by the rapid growth of smart devices at the edge of the network: smartphones, smartwatches, and sensors placed on machines and infrastructure. In January 2020, Apple spent $200 million to acquire Xnor.ai, a Seattle-based AI startup focused on low-power machine learning software and hardware. Microsoft offers a comprehensive toolkit called Azure IoT Edge that allows AI workloads to be moved to the edge of the network.
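To make the pattern concrete, here is a minimal, purely illustrative Python sketch of what "AI at the edge" typically means in practice: the model runs on the device, time-critical decisions are taken locally, and only summary events travel upstream. Every name here (read_sensor, local_model, send_to_cloud, ANOMALY_THRESHOLD) is a hypothetical stand-in, not Apple's or Microsoft's API.

```python
import random
import time

# Hypothetical cutoff; in a real deployment this would be tuned per device.
ANOMALY_THRESHOLD = 0.8

def read_sensor() -> float:
    """Stand-in for a real sensor read (vibration, temperature, etc.)."""
    return random.random()

def local_model(reading: float) -> float:
    """Stand-in for a compact on-device model (e.g., a quantized network).
    Returns an anomaly score in [0, 1]; identity keeps the sketch self-contained."""
    return reading

def send_to_cloud(payload: dict) -> None:
    """Stand-in for an uplink call; in practice this might be MQTT or HTTPS."""
    print(f"uplink: {payload}")

def control_loop() -> None:
    for _ in range(5):
        score = local_model(read_sensor())  # inference happens on-device
        if score > ANOMALY_THRESHOLD:
            # Time-sensitive decision made locally, with no cloud round trip.
            print(f"local actuation triggered (score={score:.2f})")
            # Only the noteworthy event leaves the device, saving bandwidth
            # and keeping raw sensor data local.
            send_to_cloud({"event": "anomaly", "score": round(score, 2)})
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```

The design choice this sketch illustrates is the one driving the acquisitions above: the latency-sensitive and privacy-sensitive work stays on the device, while the cloud sees only aggregated or exceptional data.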

Will AI continue to move to the edge? What are the benefits and drawbacks of AI at the edge versus AI in the cloud? To understand what the future holds for AI at the edge, it helps to look back at the history of computing and how the pendulum has swung between centralized and decentralized intelligence across four computing paradigms.

To support AI technology and applications, the Artificial Intelligence World Society Innovation Network (AIWS-IN) created the AIWS Young Leaders program, which includes Young Leaders and Experts from Australia, Austria, Belgium, Britain, Canada, Denmark, Estonia, France, Finland, Germany, Greece, India, Italy, Japan, Latvia, the Netherlands, New Zealand, Norway, Poland, Portugal, Russia, Spain, Sweden, Switzerland, the United States, and Vietnam.

The original article can be found here.