
Regulation and standardization of AI in healthcare


As AI technology advances, regulators may consider multiple approaches to addressing the safety and effectiveness of AI in healthcare, including how international standards and other best practices currently support the regulation of medical software, along with the differences and gaps that will need to be addressed for AI solutions. A key aspect will be the need to generate real-world clinical evidence for AI throughout its life cycle, and the potential need for additional clinical evidence to support adaptive systems.

Over the last ten years, regulatory guidance and international standards have emerged for software, whether as a standalone medical device or incorporated into a physical device. These provide requirements and guidance for software manufacturers to demonstrate compliance with medical device regulations and to place their products on the market.

However, AI potentially introduces new risks[1] that are not addressed within the current portfolio of standards and guidance for software. Different approaches will be required to ensure the safety and performance of AI solutions placed on the market. As these new approaches are defined, the current regulatory landscape for software should be considered a good starting point.

In Europe, the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) include several generic requirements that can apply to software. These consist of the following:

  • general obligations of manufacturers, such as risk management, clinical performance evaluation, quality management, technical documentation, unique device identification, postmarket surveillance and corrective actions;
  • requirements regarding design and manufacture, including construction of devices, interaction with the environment, diagnostic and measuring functions, active and connected devices; and
  • information supplied with the device, such as labelling and instructions for use.

In addition, the EU regulations contain requirements that are specific to software. These include avoidance of negative interactions between software and the IT environment, and requirements for electronic programmable systems.

In the U.S., the FDA recently published a discussion paper[2] proposing a regulatory framework for modifications to AI/machine learning-based software as a medical device (SaMD). It is based on practices from current FDA premarket programs, including the 510(k), De Novo, and Premarket Approval (PMA) pathways. It draws on risk categorization principles from the International Medical Device Regulators Forum (IMDRF), the FDA benefit-risk framework, the risk management principles in the FDA's software modifications guidance, and the Total Product Life Cycle (TPLC) approach from the FDA Digital Health Pre-Cert program.

Elsewhere, other countries are beginning to develop and publish papers relating to regulatory guidance. In China, the National Medical Products Administration (NMPA)[3] has produced a guideline for aided decision-making medical device software using deep learning techniques. Japanese and South Korean regulatory bodies have also published guidance for AI in healthcare.

[1] See clause 15 of Machine learning AI in medical devices: adapting regulatory frameworks and standards to ensure safety and performance https://pages.bsigroup.com/l/35972/2020-05-06/2dkr8q4

[2] Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)

[3] https://chinameddevice.com/china-cfda-ai-software-guideline/



BSI enables people and organizations to perform better. We share knowledge, innovation and best practice to make excellence a habit – all over the world, every day.
