Adopting AI in Health Care Will Be Slow and Difficult

Artificial intelligence (AI), including machine learning (ML), presents exciting opportunities to transform health care and the life sciences. It offers tantalizing prospects for swifter, more accurate clinical decision making and amplified R&D capabilities. However, open issues around regulation and clinical relevance remain, leaving both technology developers and potential investors grappling with how to overcome today's barriers to adoption, compliance, and implementation.

Here are key obstacles to consider and how to handle them:

Developing regulatory frameworks. In 2017, the FDA released its Digital Health Innovation Action Plan to clarify the agency's role in advancing safe and effective digital health technologies and to address key provisions of the 21st Century Cures Act.

Achieving FDA approval. To account for the FDA's shifting oversight and approval processes, software developers must carefully think through how best to design and roll out their product so it is well positioned for FDA approval, especially if the software falls under the agency's "higher risk" category.

AI is a black box. Beyond regulatory ambiguity, the black-box nature of many AI applications, and the clinician distrust it breeds, poses another key challenge to their adoption in the clinical setting.

Lower hurdles in life sciences. While AI's application in the clinical care setting still faces many challenges, the barriers to adoption are lower for specific life sciences use cases. For instance, ML is an exceptional tool for matching patients to clinical trials, for drug discovery, and for identifying effective therapies.

But whether in a life sciences capacity or the clinical care setting, the fact remains that many stakeholders stand to be affected by AI's proliferation in health care and life sciences. Obstacles certainly exist for AI's wider adoption, from regulatory uncertainty to lack of trust to the dearth of validated use cases. But the opportunities the technology presents to change the standard of care, improve efficiencies, and help clinicians make more informed decisions are worth the effort to overcome them. These challenges and opportunities are also reflected in AI ethics and practice indexes, which aim to establish standards and frameworks for the constructive use of AI in society through collaboration among different organizations and entities.

The original article can be found here.