Jim Ritchie

February 21, 2023

AI in healthcare is just like driving a car…

The framing of the use and value of artificial intelligence in healthcare is often polarised. Descriptions range from tools that enable and support clinical decision making to transformational technologies akin to medical artificial general intelligence. For clinicians, this increases the challenge of evaluating and adopting new tools. Compounding the problem are organisational adoption frameworks that lack real-world meaning. These frameworks describe phases of AI maturity from awareness, to routine operational use, to the elusive “transformative change” (where AI drives decision making and touches all parts of a healthcare system). The extreme promises and the limited meaning of adoption frameworks make it difficult for organisations to understand how these new technologies should be considered. A problem from both perspectives!

In attempts to address this, there are well-constructed papers that focus on the product lifecycle (from development, to implementation, to long-term governance of AI) as well as work that considers AI in different phases of clinical research. These papers make sensible recommendations such as clearly defining the problems to be solved, developing models within known regulations, validating outputs, clinically testing and then safely implementing within established legal frameworks. None of these suggestions are bad or wrong, but the risk to progress is that all solutions are seen as requiring the same level of scrutiny (and that’s before variation in legalities is even considered).

So is there a ‘risk first’ model? In 2014 the International Medical Device Regulators Forum (IMDRF) proposed four categories for software as a medical device. These were constructed using a matrix of the severity of the clinical condition and the significance of the information being provided to the user.
[Image: the IMDRF risk categorisation matrix for software as a medical device]
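To make the matrix concrete, here is a minimal sketch of that categorisation expressed as a simple lookup. The category assignments follow my reading of the 2014 IMDRF framework (the significance of the information crossed with the state of the healthcare situation), and the function name is purely illustrative, so check against the original document before relying on it.

```python
# Illustrative sketch only: the IMDRF SaMD risk categorisation as a lookup table.
# Category assignments reflect my reading of the 2014 framework and should be
# verified against the source document.

SAMD_CATEGORY = {
    # (significance of information, state of healthcare situation): category
    ("treat or diagnose",          "critical"):    "IV",
    ("treat or diagnose",          "serious"):     "III",
    ("treat or diagnose",          "non-serious"): "II",
    ("drive clinical management",  "critical"):    "III",
    ("drive clinical management",  "serious"):     "II",
    ("drive clinical management",  "non-serious"): "I",
    ("inform clinical management", "critical"):    "II",
    ("inform clinical management", "serious"):     "I",
    ("inform clinical management", "non-serious"): "I",
}

def samd_category(significance: str, situation: str) -> str:
    """Return the IMDRF category (I-IV) for a piece of software as a medical device."""
    return SAMD_CATEGORY[(significance.lower(), situation.lower())]

# Example: software that drives clinical management of a serious condition
print(samd_category("drive clinical management", "serious"))  # -> "II"
```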

In the UK the MHRA have developed guidance along these lines, describing where different technologies do or do not meet the definition of a medical device. However, a recent comparative analysis demonstrates that international approaches remain inconsistent and that challenges remain in decoding some vendors’ product descriptions. Subjectivity creates uncertainty, with one digital technology being defined as both

software ... indicated for use as an aid in the diagnostic evaluation of snoring in patients

And

software [that] employs neural network algorithms to analyse sleep breathing sounds caused by airway collapse during apnea 

There’s quite a difference between the two, and it makes consideration of impact and risk more difficult!

For conceptual framing, I’m a huge fan of @drhughharvey and his ideas around how we can better describe digital therapeutics. Building on some of this thought process, I feel there is value in defining and communicating different levels of clinician substitution using AI and other technologies. Many people will be familiar with the different levels of driving automation, which give a clear threshold at which responsibility for monitoring the driving environment passes from human to machine.



Could this model be applied to healthcare, allowing differential assessments and risk frameworks to be applied, or used to add more information to health app quality labels? As a first pass of thinking (with a rough structural sketch after the levels below):

Level 0 - No clinical automation 
This is representative of the current state of care delivery in most of the world. Although there may be processes or systems that support staff, e.g. drug interaction warnings, these are advisory, do not take any decisions and are only there to help and guide.


Level 1 - Clinician assistance
These are more advanced forms of clinical decision support that create unstructured break-glass moments during care delivery (not based on simple logical models), or help synthesise complex clinical information sets into structured summaries. All actions and tasks still remain in the control of the clinician, with the system enabling more efficient working, not making or suggesting any specific diagnoses, treatments or actions.


Level 2 - Advanced clinician assistance / partial clinical automation
This is the lowest level of care automation, where tools can prompt clinical teams to consider possibilities but all decisions are made by trained clinicians. Diagnoses or areas of clinical interest may be suggested to the user, but all responsibility for review and action remains with the clinician.


Level 3 - Context-specific clinician substitution
As tools progress, solutions are able to reliably identify defined pathologies or care needs, e.g. condition-specific identification on imaging, and pass this information to the next responsible clinician without the need for routine human review and confirmation. Here AI begins to be accountable for specific part(s) of care.


Level 4 - Thematic clinician substitution
At this point technologies are able either to replace a broad-based clinical skill set (e.g. reviewing imaging for all possible pathologies on a chest X-ray) or to plan and manage end-to-end care pathways for specific conditions (from diagnosis to treatment). The possibility for human override remains, but this is optional and not required in most circumstances.


Level 5 - Full clinical replacement
Finally, care is planned and delivered without human input. Patients engage with technologies as they would a trained clinician today. Symptoms are shared; investigations planned, scheduled and reviewed; diagnoses made and treatment plans enacted without any form of human input or oversight.
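As flagged above, here is a rough sketch of how these levels could be attached to a tool as structured metadata, for example to feed a differential risk assessment or a health app quality label. The enum names, the ToolLabel fields and the point at which responsibility is assumed to pass from clinician to system are my own first-pass assumptions, not anything defined in an existing standard.

```python
# Illustrative only: the clinical automation levels described above as structured
# metadata for a tool. Names and thresholds are assumptions, not a standard.

from dataclasses import dataclass
from enum import IntEnum

class ClinicalAutomationLevel(IntEnum):
    NO_CLINICAL_AUTOMATION = 0          # advisory prompts only
    CLINICIAN_ASSISTANCE = 1            # synthesis and summaries, no suggestions
    PARTIAL_CLINICAL_AUTOMATION = 2     # suggestions made, clinician decides
    CONTEXT_SPECIFIC_SUBSTITUTION = 3   # defined findings without routine review
    THEMATIC_SUBSTITUTION = 4           # broad skill set or end-to-end pathway
    FULL_CLINICAL_REPLACEMENT = 5       # no human input or oversight

@dataclass
class ToolLabel:
    name: str
    level: ClinicalAutomationLevel

    @property
    def clinician_holds_responsibility(self) -> bool:
        # Assumption in this framing: responsibility begins to pass to the
        # system at Level 3, mirroring the driving automation threshold.
        return self.level <= ClinicalAutomationLevel.PARTIAL_CLINICAL_AUTOMATION

# Hypothetical example tool
label = ToolLabel("chest X-ray triage tool", ClinicalAutomationLevel.CONTEXT_SPECIFIC_SUBSTITUTION)
print(label.level.name, label.clinician_holds_responsibility)  # CONTEXT_SPECIFIC_SUBSTITUTION False
```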


I’m sure there is lots of scope to improve these ideas, and as always I’m really interested to hear the views of others.

 @chorltonjim