
Machine-learning AI—What makes it different?

December 1, 2022


Systems that simply imitate humans are not new to the medical field, of course. Since the advent of commercial transistors in the 1960s, computational medical devices have increasingly mimicked human behavior and actions. Automatic blood pressure monitors (sphygmomanometers) imitate the actions of trained clinicians in detecting and reporting the Korotkoff sounds that signify systolic and diastolic blood pressures. Portable defibrillators evaluate heart waveforms to determine when defibrillation is necessary and can then act to deliver the needed defibrillation.

Devices like these, by supplementing or in some instances replacing direct clinician involvement, have already expanded the availability of care outside of healthcare facilities, to homes and workplaces, as well as to areas and regions where trained clinicians are rare or absent. Such technologies, however, do not act independently of human reasoning, but instead utilize previously validated clinical protocols to diagnose medical conditions or deliver therapy. They do not “think” for themselves in the sense of understanding, making judgments, or solving problems[1]; rather, they are static rules-based systems,[2] programmed to produce specific outputs based on the values of received inputs.

While such systems can be very sophisticated, the rules they employ are static: they are not created or modified by the systems themselves. Their algorithms are developed from documented and approved clinical research and then validated to produce expected (i.e., predictable) results. In this respect, rules-based AI systems differ from the computational and electronic medical devices in use since the 1960s only in their complexity.
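To make the distinction concrete, a rules-based system can be sketched as a fixed mapping from inputs to outputs. The sketch below is illustrative only; the thresholds are hypothetical placeholders, not clinically validated values:

```python
def classify_blood_pressure(systolic: int, diastolic: int) -> str:
    """Rules-based classifier with fixed, pre-programmed thresholds.

    The cut-offs here are illustrative placeholders only; a real device
    would use clinically validated values. The key point is that these
    rules never change at runtime, no matter how many readings the
    system processes.
    """
    if systolic >= 140 or diastolic >= 90:
        return "high"
    if systolic < 90 or diastolic < 60:
        return "low"
    return "normal"

print(classify_blood_pressure(150, 95))   # high
print(classify_blood_pressure(120, 80))   # normal
```

However many readings such a system evaluates, its behavior is entirely determined by the rules it was shipped with; changing them requires external reprogramming and revalidation.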

There are other types of AI that utilize large data sets and complex statistical methodologies to discover new relationships between inputs, actions, and outcomes. These data-driven or machine-learning systems[3] are not explicitly programmed to produce predetermined outputs; they are heuristic, with the ability to learn and make judgments. In short, machine-learning AI systems, unlike simple rules-based systems, are cognitive in some sense and can modify their outputs accordingly. For the purposes of this blog post, we separate data-driven/machine-learning AI into two groups: locked models, which cannot change without external intervention, and continuous learning (or adaptive) models, which modify their outputs automatically in real time. In reality, there are likely to be several levels of change control for AI, ranging from traditional, well-understood processes to accelerated ones that may require additional controls.

The more sophisticated of these data-driven systems (i.e., super-intelligent AI) can surpass human cognition in their ability to process enormous and complicated data sets and engage in higher levels of abstraction. Utilizing multiple layers of statistical analysis and deep learning/neural networks, these systems act as black boxes,[4] producing protocols and algorithms for diagnosis or therapy that are not readily understandable by clinicians or explicable to patients.

Data-driven machine learning AI systems can be further divided into locked models and continuous learning models:

  • Locked models[5] employ algorithms that are developed using training data and machine learning, which are then fixed so neither the internal algorithms nor system outputs change automatically (though changes can be accommodated in a stepwise manner).
  • Continuous learning models (or adaptive models)[6] utilize newly received data to test the assumptions that underlie their operations in real-world use and, when potential improvements are identified, the systems are programmed to automatically modify internal algorithms and update external outputs.
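The locked/adaptive distinction can be sketched in code. This is a toy illustration under stated assumptions: the "model" is just a decision threshold computed as a running mean, standing in for the far more complex parameters of a real learned system:

```python
class LockedModel:
    """Toy locked model: a threshold is learned once from training
    data and then frozen. Any change requires an explicit, external
    retraining step (the stepwise updates described above)."""

    def __init__(self, training_data: list[float]):
        # "Training" here is simply taking the mean of the data.
        self.threshold = sum(training_data) / len(training_data)

    def predict(self, value: float) -> bool:
        return value > self.threshold


class AdaptiveModel(LockedModel):
    """Toy continuous learning model: the same decision rule, but the
    threshold is updated automatically as each new observation
    arrives, so behavior drifts with real-world use."""

    def __init__(self, training_data: list[float]):
        super().__init__(training_data)
        self.n = len(training_data)

    def predict(self, value: float) -> bool:
        result = value > self.threshold
        # Fold the new observation into the running mean in real time.
        self.n += 1
        self.threshold += (value - self.threshold) / self.n
        return result
```

After predicting on new inputs, the locked model's threshold is unchanged, while the adaptive model's has shifted; that difference is exactly what makes change control, oversight, and validation harder for adaptive systems.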

The special characteristics of machine-learning and deep-learning AI systems differentiate them from rules-based systems and more traditional medical devices in specific ways. First, they learn: these systems not only treat patients but can assess the results of treatment, both for individuals and across populations, and make predictions about how to improve treatment and achieve better patient outcomes. Second, they are capable of autonomy: some of these systems have the potential to change (and presumably improve) processes and outputs without direct clinical oversight or traditional validation. Third, because of their sophisticated computational abilities, the predictions these systems develop may, to some degree, be inexplicable to patients and clinicians. Combined, these characteristics blur the essential nature of the devices themselves, changing them from simple tools used under the direction of clinicians into systems capable of making autonomous clinical judgments about treatment.

 


[1]   One definition of “Think” in the Cambridge Dictionary is “to use one’s mind to understand.”

[2]  Daniels et al., Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care (White Paper), Duke Margolis Center for Health Policy, 2019, p. 10.

[3]  Ibid. For the purposes of this blog post, the terms data-driven and machine-learning are synonymous, as are the terms continuous learning and adaptive models.

[4]  The metaphor of “black box” is used widely and with different connotations, but with respect to AI, we are not simply talking about a lack of visibility with respect to mechanisms or calculations, but also to the inscrutability of the basic rationale for performance.

[5]  The term “locked” with respect to AI has been defined as “a function/model that was developed through data-based AI methods, but does not update itself in real-time (although supplemental updates can be made to the software on a regular basis).” [Source: Duke, Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care]. A “locked” data-driven algorithm, even if externally validated, is not a rules-based algorithm, because that locked AI algorithm is not based on current, rules-based medical knowledge.

[6] Duke Margolis Center for Health Policy, 2019, p. 12.




BSI enables people and organizations to perform better. We share knowledge, innovation and best practice to make excellence a habit – all over the world, every day.

© Copyright nasscom. All Rights Reserved.