Top 3 Barriers to Entry for AI in Health Care

Scaling AI | Kelci Miclaus

When many people hear the phrase “AI in health care,” they picture doctors being replaced by robots or machines. That is hardly a positive connotation on either the patient or the physician side of the equation, and it is also far from a realistic goal or measure of success. The potential for AI in health care is far more diverse and, in many ways, more complex.

From virtual care delivery, digital health driven by wearables and connected medical devices, and intelligent automation of administrative processes, to enhanced cognitive capabilities in point-of-care decision systems and real-world data that bridges the gap between payers, providers, and drug manufacturers, the use cases are wider and deeper than many believe. That doesn’t mean the industry (like so many others) isn’t facing challenges in getting started, however. Here are the top three barriers to entry for AI in health care.


1. Establishing Trust

Trust in AI systems is a global issue that certainly goes beyond health care — in fact, 33% of US CEOs cite employee trust as one of the greatest barriers to AI adoption. It’s particularly pertinent here, though, because trust is such an essential component of health care at every level, with or without AI.

A factor that is particularly relevant to health care AI is bias and model fairness. Unfairness often originates at the very source of any AI project: the data. Building thoughtful implementations of AI that don’t propagate the bias in that data and deepen health disparities, especially for underrepresented or underserved groups, is critical — a simple way to make this concrete is to measure how a model’s outputs differ across groups, as in the sketch below.
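To make “model fairness” tangible, here is a deliberately simplified sketch (hypothetical data, invented column names) of one common check: comparing a model’s positive prediction rates across groups, sometimes called demographic parity.

```python
# Illustrative fairness check on a hypothetical model's output.
# "group" is a protected attribute; "prediction" is a binary model decision
# (e.g., whether a patient was flagged for a care-management program).
import pandas as pd

df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "prediction": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Positive prediction rate per group: a large gap can signal that the model
# (or the data it was trained on) treats groups differently.
rates = df.groupby("group")["prediction"].mean()
print(rates)  # group A: 0.75, group B: 0.25

print("demographic parity gap:", rates.max() - rates.min())  # 0.5
```

A real fairness review would go much further (multiple metrics, statistical significance, clinical context), but even a simple per-group breakdown like this is often enough to surface problems early.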

So what can be done? There are two things to think about, and the first is education. Executives need to take steps to ensure that everyone working in the space, at all levels, understands the basics of AI. The technology must move from something distant and scary to something accessible and unambiguous. 79% of health care professionals under age 40 already see digital health technologies as key to better patient outcomes, and data science and AI are the delivery mechanisms for this.

The second is explainability. As AI systems get built and used more widely, they must be white-box, meaning that people need to understand how they work and be able to interpret the results (yes, this is achievable even with deep learning, computer vision models included). Today, many tools (including Dataiku) provide explainability features as well as features to combat bias, ensuring that users understand model outputs and have a hand in increasing trust and eliminating unfairness.
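For a sense of what white-box tooling looks like in practice, here is a minimal sketch using the open-source shap library on a public tabular dataset; it is illustrative only, not a description of Dataiku’s specific feature set:

```python
# Minimal explainability sketch: SHAP values attribute each prediction to
# individual input features, so reviewers can see *why* a case was scored
# the way it was rather than trusting a black box.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes one contribution per feature per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank features by mean absolute contribution across the dataset.
shap.summary_plot(shap_values, X, plot_type="bar")
```

The same idea extends (with different explainer types) to deep learning models, including the computer vision case mentioned above.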

→ Machine Learning Basics: Get the Full Illustrated Guidebook

→ Get the Ebook: Black-Box vs. Explainable AI — How to Reduce Business Risk

2. Selecting the Right Use Cases

As mentioned in the introduction, there are so many possibilities for AI in health care that it can be overwhelming to get started. Is it better to tackle low-hanging fruit for short-term value, or get started on moonshots that will really differentiate the organization?

The answer is, of course, both. At the same time, use cases for AI in health care need to balance the traditional success metrics of administrative AI use cases with the domain-specific KPIs of value-based care (for which it can be harder to prove a clear return on investment, or ROI). Whether low-hanging fruit or moonshot, an ideal AI project will have clear and compelling answers to each of these questions:

  • WHO will this project benefit?
  • HOW will it specifically improve experience or outcomes, and HOW can this be measured?
  • WHY is using AI for this purpose better than existing processes?
  • WHAT is the upside if it succeeds, and WHAT are the consequences if it fails?
  • WHERE will the data come from, and does it already exist?
  • WHEN should an initial working prototype and, subsequently, a final solution in production be delivered?

For initial use case ideas, at Dataiku we always recommend that our customers look at the parts of their organizations that rely heavily on manual processes and relatively unsophisticated tools. In fact, according to Health IT Analytics, 46% of health care professionals under age 40 see value in AI for reducing inefficiencies in administrative work.

A few key examples for the health care space are staffing and resource planning (including front-line worker churn prevention), AI-assisted coding in medical claims billing, patient risk and care delivery models, and medical imaging classification to improve clinical decision support systems.

The bottom line? Use cases for AI in health care need to be laser-focused on improving process inefficiencies, increasing productivity, reducing risk, underscoring value-based care, and augmenting the cognitive capabilities of physicians (not on replacing them or removing humans from the loop).

→ Get the Full Ebook: Defining a Successful AI Project

3. Data Volume (& Privacy)

Having lots of data is a requirement for building AI systems, but is there such a thing as too much data? In health care, one could argue the data is as much a challenge as an asset. According to RBC Capital Markets, approximately 30% of the world’s data being generated is health care data — an estimated 270 GB for every human on earth, much of it unstructured. And our digital device interactions will only grow, with an ever greater emphasis on health signals. All of this data brings three distinct barriers. The first is simply handling it all efficiently, with enough interoperability to ensure data integrity that is fit for purpose.

The second is using all of it to train models and create actionable insights while also respecting legal, regulatory, and privacy constraints. To make matters more complex, even the privacy component has two branches: the privacy, security, and protection of the data needed to fuel AI (think data de-identification, privacy-preserving AI, and federated data systems), and the privacy and security of deployed AI systems themselves (think cybersecurity of AI-driven medical devices).

Under emerging data privacy regulation (GDPR, CCPA, etc.), working directly with personal data becomes extremely limited, and anonymizing data correctly is incredibly difficult (not to mention resource-intensive). So what other options are there for working with data in an increasingly regulated world?

That’s where an AI platform (like Dataiku) can come in. In general, data teams and non-data teams alike need an AI platform for a variety of reasons, including pure efficiency. But critically, one of the biggest advantages is compliance with data regulations. AI platforms allow for:

  • Data cataloging, documentation, and clear data lineage — that is, they allow data teams and leaders to trace (and often see at a glance) which data source is used in each project.
  • Access restriction and control, including separation by team, by role, and by purpose of analysis or data use.
  • Easier data minimization, including clear separation between projects as well as some built-in help for anonymization and pseudonymization, so that only data relevant to the specific purpose is processed, minimizing risk (see the sketch after this list).
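As a minimal sketch of what pseudonymization can look like (standard library only; the key handling and field names are simplified for illustration, and real deployments need proper key management plus a broader de-identification review):

```python
# Pseudonymization sketch: replace a direct identifier with a keyed hash so
# records stay joinable across datasets without exposing the raw identifier.
import hashlib
import hmac
import os

SECRET_KEY = os.environ["PSEUDONYM_KEY"].encode()  # never hard-code the key

def pseudonymize(identifier: str) -> str:
    """Deterministically map an identifier (e.g., a patient ID) to a token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-0042317", "age_band": "60-69", "dx_code": "E11.9"}
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)  # same input always yields the same token, so joins still work
```

Note that pseudonymized data is generally still personal data under regulations like GDPR; it reduces risk, but it does not eliminate the compliance obligation.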

→ Executing Data Privacy-Compliant Data Projects: A Guide for Data Teams

The third barrier within data volume is the caveat that, even with a lot of data, for some critical use cases in the health care space we don’t have a lot of labeled data. In computer vision, for example, we have tons of medical images but very few that are robustly labeled for training models. Labeling data for computer vision use cases is often highly resource-intensive, requiring specialized knowledge or outsourced labor.

While AI with little or no labeled data is technically possible using techniques like transfer learning and unsupervised learning, most machine learning models today use supervised learning, which does require labeled data. Once again, while not a magic bullet, technology can help. For example, with Dataiku’s built-in, collaborative managed labeling system, data annotation teams can efficiently generate large quantities of high-quality labeled data for machine learning purposes.
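To show why transfer learning matters when labeled examples are scarce, here is an illustrative sketch (hypothetical binary task, dummy tensors standing in for real scans) that adapts an ImageNet-pretrained ResNet so that only a small new classification head has to learn from labeled medical images:

```python
# Transfer learning sketch: reuse weights learned on millions of natural
# images so a small labeled medical-imaging dataset only has to train the
# final classification layer.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh 2-class head (e.g., finding / no finding).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are optimized.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch; real code would stream
# labeled, preprocessed scans through a DataLoader.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

The design point: the fewer parameters that must be learned from scratch, the fewer expensive, expert-labeled images are needed.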

Building Up AI Governance to Break Down Barriers

As you have read here, the biggest challenges for AI in health care are not inherently technological; rather, they are people- and process-based. However, technology can still play a role in breaking them down. For example, building a strong AI Governance program is an important component of making all three barriers to entry covered here easier to overcome.

AI Governance is much wider than data governance. An AI Governance framework enforces organizational priorities through standardized rules, processes, and requirements that shape how AI is designed, developed, and deployed. It’s easy to see how this kind of oversight can help build trust and ensure data privacy while also (when done right) allowing enough flexibility for AI projects to thrive.

→ How to Build Governance Plans & Workflows in Dataiku

It’s important to emphasize that when it comes to AI, there is no silver bullet. But having the right tools (including a platform like Dataiku) to build trust with white-box AI, manage an end-to-end governance program, keep track of and work with massive amounts of protected data, upskill people to work with data in their day-to-day jobs, and execute more quickly on use cases is certainly a viable starting point.
