Deep learning is about to get easier — and more widespread

Aug 3, 2019 | News

Deep learning algorithms often require millions of training examples to perform their tasks accurately. But many companies and organizations don't have access to such large caches of annotated data to train their models (getting millions of pictures of cats is hard enough; how do you get millions of properly annotated customer profiles, or, in a health care setting, millions of annotated heart failure events?). On top of that, in many domains data is fragmented and scattered, requiring tremendous effort and funding to consolidate and clean for AI training. In other fields, data is subject to privacy laws and other regulations, which may put it out of reach of AI engineers.

This is why AI researchers have been under pressure over the last few years to find workarounds for the enormous data requirements of deep learning. And it's why there has been so much interest in recent months as several promising solutions have emerged: hybrid AI models, few-shot and one-shot learning, and generating training data with GANs. The first two reduce the amount of training data a model needs, while the third allows organizations to create their own training examples. These solutions also align with AI World Society (AIWS) evaluative criteria, including data collection methodology and hybrid algorithms, which promote openness and transparency in the development and use of constructive AI for human values.
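
To make the third idea concrete, the minimal sketch below shows how synthetic training examples might be drawn from an already-trained GAN generator and later mixed with a small real dataset. The generator architecture, latent_dim, n_synthetic, and feature sizes here are illustrative assumptions, not details from the article.

```python
# Illustrative sketch (assumptions only): sampling synthetic training
# examples from a GAN generator that has already been trained on the
# limited real data that is available.
import torch
import torch.nn as nn

latent_dim = 64        # size of the random noise vector fed to the generator (assumed)
n_synthetic = 10_000   # number of synthetic training examples to create (assumed)

# Stand-in generator: maps random noise to feature vectors shaped like real data.
# In practice this network would have been trained adversarially against a discriminator.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128),
    nn.ReLU(),
    nn.Linear(128, 32),  # 32 features per synthetic example (assumed)
)
generator.eval()

with torch.no_grad():
    noise = torch.randn(n_synthetic, latent_dim)
    synthetic_examples = generator(noise)  # shape: (10000, 32)

# The synthetic examples can then be combined with the small real dataset
# before training a downstream deep learning model.
print(synthetic_examples.shape)
```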

The original article can be found here.