
The Promise and Peril of AI in Healthcare

September 4, 2024


Artificial intelligence and machine learning tools hold great promise for improving healthcare, but also pose risks that must be addressed, according to experts. 

The use of AI in healthcare is accelerating, with studies showing it can effectively diagnose some chronic illnesses, boost staff productivity, enhance quality of care, and optimize resources. AI is already being deployed to help diagnose patients, aid drug development, improve doctor-patient communication, and transcribe medical records.  

Because large datasets such as medical images lend themselves well to machine learning, AI has been most successful at diagnosing conditions that require visual pattern recognition. For instance, an AI system developed by Google to detect diabetic retinopathy has provided quick diagnoses, second opinions for ophthalmologists, earlier detection, and increased access to care. More recently, Stanford researchers created an algorithm that reviews x-rays and can detect 14 diseases in seconds.

AI chatbots and assistants can also upgrade patient experiences by helping to find doctors, book appointments, and answer questions. For providers, AI can identify treatment plans, clinical tools, and medications more swiftly. It is also being used to document patient visits almost instantly, boosting efficiency and reducing frustration with paperwork. Some hospitals and doctors are leveraging AI to confirm insurance and prior authorizations too, decreasing unpaid claims.

However, a recent Pew Research poll found that 60% of Americans would be uneasy if their provider relied on AI for diagnoses or treatments. A further 57% felt it would diminish the doctor-patient relationship, while only 38% believed it would improve outcomes.

Beyond effectiveness, potential racial and gender bias in algorithms is concerning. Some studies have uncovered race-based inconsistencies and limitations from the scarcity of women's and minority health data. 

A May Deloitte report called for reassessing clinical algorithms to ensure all patients get requisite care. It advised examining if and how race is used in algorithms and whether its inclusion is justified. Deloitte also spotlighted longstanding problems collecting and employing race and ethnicity data in healthcare, with CDC data showing these demographics unavailable for nearly 40% of COVID-19 cases and vaccines.

The AMA has outlined principles for healthcare AI, stressing the use of representative population data and addressing explicit and implicit biases. It promotes augmented over fully autonomous AI.

Regulators are also scrutinizing possible discrimination. Last year, the California Attorney General requested information from 30 hospital CEOs about identifying and tackling racial and ethnic biases in commercial tools, initiating an investigation into discriminatory impacts of healthcare algorithms. 

In contrast, among Americans who perceive racial and ethnic bias in healthcare, 51% told Pew that bias and unfair treatment would decline with AI.

Safeguarding the private health data used to develop and run AI is another major worry. Training algorithms requires access to huge data sets, while deployment risks exposing that data if it is retained or if a third-party vendor suffers a breach.

Though many AI applications originate in academic centers, private sector partnerships are often indispensable for commercialization. But these have sometimes led to poor privacy protections, lack of patient control over data usage, and incomplete disclosure of impacts.  

Studies have also shown AI can re-identify individuals from anonymized health repositories. Some AI can even infer non-health facts about people.

Healthcare's susceptibility to data theft makes it especially vulnerable. IBM Security reported healthcare endures the costliest data breaches, averaging $10.93 million. While most privacy initiatives have come from states, only a few have enacted healthcare AI-specific laws so far.

With awareness and diligence, AI's benefits for patients and providers could be immense. But all involved must remain cognizant of potential biases and privacy pitfalls to realize AI's full promise in healthcare. 


This article was first published on the CSM Blog as: The Promise and Peril of AI in Healthcare




CSM Tech provides transformative IT solutions and services for governments and industries large and small. As a CMMI Level 5 company, CSM emphasizes quality of delivery and customer satisfaction. With about two decades of experience delivering solutions and more than 500 employees, CSM has developed a comprehensive portfolio of products, solutions, and smart consulting services, and has achieved several unique distinctions as the first mover in unexplored business opportunities.
