Machine learning remains one of the toughest skills to acquire. The domain is as vast and complex as computer science itself. Developers must learn new languages, algorithms, frameworks, and tools from an extremely diverse and fragmented ecosystem. They also need to learn how to train models in the cloud and how to optimize those models for integration with a variety of environments and platforms.
The complexity multiplies when we attempt to take models to the edge. Each model has to be converted to take advantage of the underlying CPU and GPU architecture. Mainstream inference platforms with accelerators, such as NVIDIA Jetson, Intel Movidius, and Google Edge TPU, use different optimization techniques to run models at the edge. Developers need to learn the nuts and bolts of both the hardware and software stacks just to run a simple AI-enabled application at the edge.
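One optimization technique shared by most of these edge runtimes is post-training quantization: replacing 32-bit float weights with small integers plus a scale and zero point, which shrinks the model and lets it run on integer hardware. The sketch below is an illustrative, minimal version of 8-bit affine quantization; the function names and the toy weight values are assumptions, not any vendor's actual API.

```python
# Minimal sketch of 8-bit affine quantization, the core trick edge runtimes
# (e.g. TensorFlow Lite, TensorRT, OpenVINO) apply to compress models.
# All names and values here are illustrative.

def quantize(weights, num_bits=8):
    """Map float weights to unsigned integers via a scale and zero point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (qmax - qmin) or 1.0  # guard constant tensors
    zero_point = round(qmin - lo / scale)
    # Round each weight to the nearest integer step, clamped to [qmin, qmax].
    q = [min(qmax, max(qmin, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, 0.0, 0.5, 2.3]          # toy float32 weights
q, scale, zp = quantize(weights)          # 8-bit integers + metadata
approx = dequantize(q, scale, zp)         # lossy reconstruction
```

Each float is recovered to within half a quantization step, which is why accuracy usually drops only slightly while storage falls by 4x and inference can use fast integer arithmetic.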
While the top cloud vendors are busy turning their platforms into the preferred training environments for deep learning, startups such as Xnor.ai are moving fast to simplify the integration of AI with edge devices and offline applications. The development of AI at the edge also helps expand AI to IoT devices and applications that support people's well-being and happiness in daily life, a goal also promoted by the AI World Society (AIWS) and the Michael Dukakis Institute (MDI).