AI in Healthcare: Navigating Regulatory Pathways in Europe and the USA

Updated: 14 Oct 2024
Ivan Sinapov, Technical Copywriter at XTATIC HEALTH

Over the past couple of years, AI (artificial intelligence) has become involved in nearly every aspect of society, from computer-operated machinery to applications that can diagnose medical conditions.

AI is now woven into our everyday lives, and nowhere more so than in the healthcare industry. In 2021, the AI healthcare market was worth over $11 billion worldwide and was forecast to reach around $188 billion by 2030. (1)

Given the multitude of applications of this technology, attention has also turned to regulation: rules that allow further improvement through artificial intelligence while safeguarding personal information and people as a whole.

The two major international actors that have introduced regulations to set things in order are the European Union and the United States. There are, however, key differences between the two in how they manage technology in the medical industry, which this article takes a closer look at.

 

What are artificial intelligence and machine learning?

Artificial intelligence

The concept has been defined as the science and engineering of making intelligent machines and intelligent computer programs. Such programs use models and techniques that take in vast amounts of raw information, process it, and derive the answers that are needed. In healthcare, AI has been used in medical devices, diagnostics, imaging, and even the generation of treatment plans.

Machine learning

Machine learning is a technique through which artificial intelligence can be trained to learn from data and act on it. Such a model can be designed to stay static, so that its behaviour does not change with the information it is fed, or to adapt as it processes new information.

There are many examples of machine learning in use, especially in healthcare. One such example is imaging software that scans a patient’s skin and, based on the prior images it was trained on, determines whether the patient may have skin cancer.
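
As a rough illustration of how such a classifier is trained and queried, here is a minimal sketch in Python using scikit-learn. The data is synthetic and the feature extraction is glossed over, so this is an assumption-laden toy rather than a clinical diagnostic pipeline.

# Minimal sketch of a supervised image classifier for skin-lesion triage.
# The data below is synthetic; a real system would use curated, labelled
# dermoscopy images and a far more rigorous validation protocol.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

# Stand-in for pre-processed image features (e.g. colour and texture descriptors).
X = rng.normal(size=(500, 64))        # 500 "images", 64 features each
y = rng.integers(0, 2, size=500)      # 0 = benign, 1 = suspicious (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)           # "learn from prior images"

# For a new patient image (here, one synthetic feature vector), flag it for review.
new_case = rng.normal(size=(1, 64))
print("Flag for dermatologist review:", bool(model.predict(new_case)[0]))
print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))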

Another technique through which AI can be of great service in the healthcare industry is deep learning, a subset of ML in which multi-layered artificial neural networks are trained on larger amounts of data, making them capable of more complex tasks.

NLP (natural language processing) is a method whose main goal is to interpret human language, whether spoken or written. In healthcare, it is used to interpret documentation, clinical notes, research, and reports.
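
As a toy illustration of the idea, the snippet below runs a simple dictionary lookup over a made-up clinical note. Real clinical NLP relies on trained language models and standard vocabularies such as SNOMED CT; the note text and term list here are invented.

import re

# Toy medical vocabulary; a production system would map terms to a standard ontology.
VOCAB = {
    "shortness of breath": "Dyspnea (symptom)",
    "hypertension": "Hypertension (disorder)",
    "metformin": "Metformin (medication)",
}

note = (
    "Patient reports shortness of breath on exertion. "
    "History of hypertension. Currently taking metformin 500 mg daily."
)

def extract_concepts(text: str) -> list[str]:
    """Return the vocabulary concepts mentioned in a free-text clinical note."""
    lowered = text.lower()
    return [
        concept
        for phrase, concept in VOCAB.items()
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)
    ]

print(extract_concepts(note))
# -> ['Dyspnea (symptom)', 'Hypertension (disorder)', 'Metformin (medication)']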


The rise of AI in diagnostics

The rise of artificial intelligence in diagnostics has the potential to bring drastic change to the healthcare industry. AI technologies, such as machine learning and deep learning algorithms, can analyze large amounts of medical data, including imaging scans, lab results, and patient records, to assist healthcare professionals in making more accurate and efficient diagnoses.

AI can aid in the early detection of diseases, provide personalized treatment recommendations, and improve patient outcomes. It can help healthcare providers identify patterns and trends in medical data that may be difficult for humans to discern, leading to faster and more precise diagnoses.

By automating certain diagnostic processes, AI can also help alleviate the burden on healthcare professionals, allowing them to focus more on patient care and complex cases. This can lead to improved efficiency in healthcare delivery, reduced healthcare costs, and better resource utilization. 

However, the integration of AI in diagnostics also presents challenges and considerations. Ensuring the accuracy and reliability of AI algorithms is crucial, as errors or biases in the data or algorithms could have serious consequences for patients. Data privacy and security are also important considerations, as AI relies on vast amounts of sensitive patient information.

Aspects of regulation to take into account

Regulatory frameworks and ethical guidelines need to be developed to govern the use of AI in diagnostics, ensuring transparency, accountability, and patient safety.

Collaboration between healthcare professionals, data scientists, regulatory bodies, and technology developers is essential to harnessing the full potential of AI in diagnostics while addressing the associated ethical, legal, and social implications. 

Here are some of the aspects of regulation that need to be considered:

  • Approval and clearance. AI-based diagnostic systems may need to undergo regulatory approval or clearance processes before they can be used in clinical practice. Regulatory bodies, such as the FDA in the United States or notified bodies operating under the Medical Device Regulation (MDR) in the European Union, assess the safety, effectiveness, and quality of these systems.
  • Validation and verification. AI algorithms used in diagnostics must undergo rigorous validation and verification processes to ensure their accuracy and reliability. Regulatory frameworks may require evidence of algorithm performance, clinical validation studies, and robust data management practices.
  • Data privacy and security. The use of AI in diagnostics involves handling and processing large amounts of sensitive patient data. Regulatory implications include compliance with data protection regulations, such as the General Data Protection Regulation (GDPR) in the EU, to safeguard patient privacy and ensure secure data storage and transmission.
  • Transparency and explainability. AI algorithms used in diagnostics should be transparent and explainable to address concerns regarding the black-box nature of AI systems. Regulatory frameworks may require documentation of algorithmic decision-making processes, providing insights into how decisions are reached.
  • Ethical considerations. The ethical implications of AI in diagnostics are important regulatory considerations. These include fairness, non-discrimination, accountability, and the responsible use of AI technologies to avoid biases and ensure equitable access to healthcare services.
  • Post-market surveillance. Continuous monitoring and assessment of AI-based diagnostic systems may be required to detect and address any adverse events, algorithmic biases, or performance issues that arise after deployment.

AI/ML-based SaMD (Software as a Medical Device) regulations

Before we take a better look at what exactly the FDA has proposed as regulatory frameworks for the subject matter, we must first better understand what SaMD actually means. 

The definition of software as a medical device was given by the International Medical Device Regulators Forum (IMDRF) as follows: “software intended to be used for one or more medical purposes that performs these purposes without being part of a hardware medical device.” (2)

The technology of software as a medical device can be used across a multitude of different platforms, including ready-made platforms, medical device platforms, and even virtual networks. It has already found a lot of use in many medical facilities around the world and continues to grow in popularity. 

The question remains, however: how has the FDA proposed to deal with the regulatory aspects of managing such a technology? What steps need to be taken to assure patient safety and the security the industry is obligated to provide?

It is not too difficult to imagine how artificial intelligence and machine learning can be incorporated into the process of healthcare and treating patients. Because these technologies can process large amounts of information, software developers can readily apply them to the medical processes that need them.

The manufacturers of medical devices are already using these technologies to further improve patient care and workflow in facilities. The FDA’s Center for Devices and Radiological Health (CDRH) is therefore considering proposing a total product lifecycle (TPLC)-based regulatory framework.

Such a framework would allow for real-time modifications and adaptability for these medical devices without limiting their effectiveness or risking their safety. 

The FDA’s approach to regulation

The FDA usually chooses an appropriate premarket pathway through which it can safely analyze and review medical devices. Such pathways include De Novo classification, premarket notification (510(k)) clearance, and premarket approval (PMA).

The FDA has the authority to evaluate and approve modifications to medical devices, including software as a medical device, if the modification carries a significant risk to patients. This means that any changes made to medical devices or their software that could potentially impact patient safety must undergo review and clearance by the FDA. 

The traditional manner in which the FDA regulates medical devices was not designed for adaptive artificial intelligence and machine learning technologies. Keeping devices dependable as they are changed and improved through AI and ML may actually require more of them to go through the FDA’s premarket review.

Defining intended use

The IMDRF’s method of risk categorization for AI- and ML-based software in medical devices is based entirely on intended use, similar to the FDA’s traditional approaches.

The two main factors the IMDRF considers key to defining the intended use are the following:

  • The importance of the information provided by the software as a medical device (SaMD) in healthcare decision-making is determined by its intended purpose, whether it is meant to be used for treatment, diagnosis, driving clinical management, or providing information to support clinical management.
  • The specific healthcare situation or condition, including the intended user, targeted disease or condition, and population for which the SaMD is intended, are taken into account. These factors help categorize the SaMD’s applicability in healthcare, whether it is relevant for critical, serious, or non-serious healthcare situations or conditions.

When both factors are taken into consideration, the intended use of the AI/ML-based SaMD can be categorized from the lowest risk (I) to the highest (IV). Each of the two factors is divided into three levels.

The significance of the information provided by the SaMD to healthcare decisions distinguishes between treating or diagnosing, driving clinical management, and informing clinical management. The state of the healthcare situation or condition, in turn, is described as non-serious, serious, or critical.
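
One way to picture how the two factors combine is to encode the categorization as a lookup table, as in the illustrative Python sketch below. The mapping reflects the commonly cited IMDRF risk matrix, but the IMDRF framework document itself remains the authoritative source.

# Illustrative encoding of the IMDRF SaMD risk categories (I = lowest, IV = highest).
# Keys: (state of the healthcare situation, significance of the information provided).
CATEGORY = {
    ("critical", "treat or diagnose"): "IV",
    ("critical", "drive clinical management"): "III",
    ("critical", "inform clinical management"): "II",
    ("serious", "treat or diagnose"): "III",
    ("serious", "drive clinical management"): "II",
    ("serious", "inform clinical management"): "I",
    ("non-serious", "treat or diagnose"): "II",
    ("non-serious", "drive clinical management"): "I",
    ("non-serious", "inform clinical management"): "I",
}

def samd_category(situation: str, significance: str) -> str:
    """Look up the SaMD risk category for a given intended use."""
    return CATEGORY[(situation.lower(), significance.lower())]

# Software that diagnoses a critical condition sits in the highest category...
print(samd_category("critical", "treat or diagnose"))              # -> "IV"
# ...while software that merely informs care for a non-serious one sits in the lowest.
print(samd_category("non-serious", "inform clinical management"))  # -> "I"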


Algorithms

AI/ML-based software as a medical device (SaMD) can use one of two types of algorithm: a locked algorithm, which produces consistent results given the same input, or an adaptive algorithm, which learns and changes its responses over time. Locked algorithms use fixed functions, while adaptive algorithms change through defined learning processes, potentially resulting in different outputs.

The adaptation process can address various clinical aspects, optimizing performance based on local patient populations, user preferences, additional data, or changes in intended use. The process involves learning and updating stages, with the algorithm adjusting its behavior through the analysis of new data and deploying updated versions.
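
The contrast between the two can be sketched in code: a locked model keeps its parameters fixed after release, while an adaptive model folds new, verified data into controlled updates and is redeployed as a new version. The classes and update policy below are illustrative assumptions, not a prescribed FDA mechanism.

import numpy as np
from sklearn.linear_model import SGDClassifier

class LockedModel:
    """Locked algorithm: parameters are frozen at release; same input, same output."""
    def __init__(self, fitted_estimator):
        self.estimator = fitted_estimator

    def predict(self, X):
        return self.estimator.predict(X)      # never retrained after deployment

class AdaptiveModel:
    """Adaptive algorithm: learns from new data through a defined update process."""
    def __init__(self, estimator):
        self.estimator = estimator
        self.version = 1

    def predict(self, X):
        return self.estimator.predict(X)

    def update(self, X_new, y_new):
        # In practice an update would pass a validation gate before being released.
        self.estimator.partial_fit(X_new, y_new)
        self.version += 1                      # deployed as a new, documented version

# Synthetic data standing in for clinical inputs and labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)

locked = LockedModel(SGDClassifier().fit(X, y))

adaptive = AdaptiveModel(SGDClassifier())
adaptive.estimator.partial_fit(X, y, classes=np.array([0, 1]))
adaptive.update(rng.normal(size=(50, 5)), rng.integers(0, 2, size=50))
print("Adaptive model version after one update:", adaptive.version)   # -> 2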

While AI/ML-based software as a medical device (SaMD) can vary from locked to continuously adaptive algorithms, there are shared considerations for managing data, re-training, and evaluating performance across the entire spectrum. 

The thoroughness of performance evaluation relies on factors like testing methods, the quality and relevance of the dataset used, and the training methods employed by the algorithm. Reliable algorithms typically necessitate ample, high-quality training data that is accurately labeled. 

Similarly, a consistent set of principles can guide the assurance of function and performance confidence to users through suitable validation, transparency, and post-modification claims.

How is artificial intelligence handled in the EU?

The European Union is already addressing the implications of AI and the potential issues that may arise from it, both by applying pre-existing legal frameworks and by creating new regulations such as the proposed AI Act.

The EU General Data Protection Regulation (GDPR) is a means through which artificial intelligence is partly, but not completely, regulated and monitored. It governs the personal data of patients being treated by medical professionals and healthcare providers, whereas AI in the industry is driven more by innovation.

Innovation in healthcare is usually conducted far more through clinical trials than directly in patient treatment. Although the GDPR protects patient data, it does not fully cover the information of research participants.

What the proposed AI Act introduces is a risk-based way to handle the consequences of medical AI, grounded in the key principles of ethical AI. The GDPR and the AI Act have to be combined and synchronized so that the goals each puts forward can be achieved together.

The Act has to complement GDPR in order for it to have an effect on healthcare services and research. Although the policy may not be passed for a couple of years, a lot of what it is trying to achieve will be carried out by then.

GDPR and AI systems

The GDPR places restrictions on automated decision-making (ADM) and the processing of health data, except in cases where patient consent is obtained or for public interest purposes. 

These regulations can pose significant limitations on the use of health data with AI systems for ADM. However, the GDPR also encourages innovation and technological advancements in scientific research, providing broad exemptions for such activities. 

While the GDPR addresses the regulation of AI systems to some extent by focusing on the processing of personal data and safeguarding individuals against automated decision-making, it does not offer comprehensive protection against AI systems as a whole. 

Consequently, AI regulation has emerged as a prominent policy concern in the EU. The EU has transitioned from a non-binding guideline-based approach to a legislative approach, proposing the AI Act to establish a new regulatory framework for AI development and application in the EU. 

The proposed AI Act aims to provide a technology-neutral definition of AI systems within EU law and introduce a classification system that assigns different requirements and obligations based on a risk-based approach.
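
To make the risk-based idea concrete, the sketch below summarises the four risk tiers commonly described for the AI Act and the broad obligations attached to each. It is an illustrative paraphrase, not legal advice; the legal text of the Act is the authoritative source.

# Illustrative summary of the AI Act's risk-based tiers; the legal text of the
# Act defines the authoritative categories and obligations.
RISK_TIERS = {
    "unacceptable": "Prohibited practices, such as certain manipulative or social-scoring systems.",
    "high": "Strict obligations: risk management, data governance, technical documentation, "
            "human oversight, and conformity assessment before market entry.",
    "limited": "Transparency obligations, such as disclosing that the user is interacting with AI.",
    "minimal": "No specific obligations beyond existing law; voluntary codes of conduct may apply.",
}

def obligations_for(tier: str) -> str:
    """Return the broad obligations attached to a given AI Act risk tier."""
    return RISK_TIERS[tier.lower()]

# Many medical AI systems are expected to land in the high-risk tier, because they
# are (or are safety components of) devices already subject to third-party review.
print(obligations_for("high"))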

Key challenges in balancing regulation and safety

The increasing integration of AI in healthcare presents key challenges for regulatory bodies as they strive to balance innovation and safety. One of the challenges is keeping pace with rapid technological advancements. 

AI technologies are evolving at a fast pace, making it difficult for regulatory frameworks to keep up with the latest developments. The dynamic nature of AI algorithms and their continuous learning capabilities add complexity to the regulatory process.

Another challenge is ensuring the safety and efficacy of AI-based healthcare solutions. The unique characteristics of AI, such as the ability to process large amounts of data and make autonomous decisions, raise concerns about patient safety and the potential for algorithmic bias. 

Regulatory bodies need to establish robust frameworks for evaluating the performance, reliability, and safety of AI algorithms to protect patients and ensure that these technologies meet the necessary standards.

Additionally, the global nature of AI in healthcare poses challenges for regulatory harmonization. Different countries have varying regulatory frameworks and standards, which can create complexities when deploying AI technologies across borders. Regulatory bodies must collaborate and establish international guidelines and standards to foster consistency and ensure patient safety and data privacy in a global healthcare landscape.

Moreover, the need for transparency and explainability in AI algorithms is a significant challenge. AI models often operate as black boxes, making it difficult to understand the underlying decision-making process. Regulatory bodies must address the issue of algorithmic transparency and ensure that AI systems are accountable, explainable, and auditable.

Another key challenge when dealing with AI in the healthcare industry is promoting inclusiveness and equity. AI systems should not discriminate based on factors such as age, sex, gender, income, race, ethnicity, sexual orientation, ability, or any other characteristics protected under human rights codes. Inclusiveness means that AI systems should be accessible and beneficial to all, regardless of personal characteristics.

How do leading healthcare AI firms navigate regulatory approvals?

Given how much Artificial Intelligence in healthcare has evolved and changed the whole industry, it is normal for different companies to pioneer innovation. There are many startups already working with AI to create revolutionary technologies for the healthcare world. 

Arterys 

Arterys, a pioneering healthcare company, achieved a significant milestone in 2017 by obtaining clearance from the FDA for using deep learning and cloud technologies in clinical services. With a focus on simplifying the diagnosis of heart defects in newborns and children, Arterys sought to address the challenge of reading and analyzing the large output files produced by the 4D Flow MRI imaging technology.

Traditional image-archiving servers in hospitals were unable to handle the size of the 4D Flow images effectively. To overcome this limitation, Arterys leveraged cloud computing infrastructure to deliver the 4D Flow images directly to hospital radiologists through a web browser. 

This innovative approach enabled radiologists to access the images seamlessly and make timely treatment decisions that could potentially save lives.

Recognizing the need for further automation, Arterys combined deep learning algorithms with cloud computing GPUs to develop a solution for automating the measurement of heart ventricles. By harnessing the power of artificial intelligence, the company eliminated the manual calculation process previously performed by providers. 

This advancement allowed for automatic and accurate measurements of ventricles, enhancing diagnostic efficiency and accuracy. Furthermore, Arterys received their eighth FDA clearance, allowing them to launch a new application.

Butterfly Network

Ultrasounds play a crucial role in diagnosing various conditions, including blood clots, gallstones, and cancerous tumors. However, the high cost of advanced ultrasound machines, which can exceed $100,000, creates a significant barrier for underserved communities worldwide, limiting their access to medical imaging. (3)

To address this issue, Butterfly Network has developed an innovative solution by introducing the world’s first hand-held whole-body imager. These compact probes, designed to be attached to smartphones, enable imaging capabilities that can be transported to even the most remote locations. 

Complementing this technology, artificial intelligence (AI) algorithms are employed to interpret the imaging results with a level of accuracy comparable to that of human clinicians.

The world of regulating personal data with respect to new technology can be difficult to navigate simply because of the sheer amount of information out there. Many healthcare organizations and companies are keen to know what the future holds for artificial intelligence, especially where it is involved in healthcare.


Thankfully, the people and bodies responsible for these regulations are taking the necessary precautions and steps to ensure the safety of this technology.

 

____________________________________________________________________________

Sources:
  • In 2021, the AI in healthcare market was worth over 11 billion U.S. dollars worldwide, with a forecast for the market to reach around 188 billion U.S. dollars by 2030. (1)
  • “software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device.” (2)
  • The cost of new ultrasound machines ranges from $20,000 to $200,000 USD or even higher if it’s a high-end machine. (3)
 

Ivan Sinapov

Ivan is a Technical Copywriter with extensive experience in the field of medical technology and software development. He specializes in translating complex technical concepts into clear and engaging content tailored for both industry professionals and broader audiences.
