The AI Act: Controlling the Risks of Employing AI in the Pharmaceutical and Medical Device Industry
Authors: Noga Yifrach-Stav, M.A., Ofer Yifrach-Stav, M.Sc., Ph.D., and Charles Campbell, Ph.D.
We are facing a new era. The AI Act was passed on March 13, 2024, making the European Parliament the first major governing body in the world to set clear rules regulating the use of artificial intelligence (AI). These regulations will govern the 27 EU member states, as well as global entities operating in the EU, and will come into force in May 2024.[1]
Artificial intelligence (AI) is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings, such as the ability to reason, discover meaning, generalize, or learn from experience.[2]
In recent years, we have seen an increase in the use of AI across industries,[3] from agriculture and transportation, through business operations, education, and public safety, to healthcare. Many of us are also familiar with AI in day-to-day applications, such as autonomous cars, predictive text, virtual assistants (e.g., Siri and Alexa), and smart-home systems. According to Facebook’s chief technology officer Mike Schroepfer, “The power of AI technology is it can solve problems that scale to the whole planet,” such as climate change and food insecurity.[4]
Inevitably, the use of AI in the pharmaceutical and medical device industry is also progressing in giant leaps, to the extent of being considered as having “the potential to transform the pharmaceutical industry.”[5] For example, AI is employed in research and drug discovery,[6] clinical study design,[7] wearable medical devices, and the recruitment and management of patients in clinical studies. Within drug development specifically, AI can be used to identify new targets, unravel target-disease relationships, select drug candidates, predict protein structures, design and optimize molecular compounds, elucidate disease mechanisms, discover prognostic and predictive biomarkers, analyze biometric data from wearable devices and imaging, and advance precision medicine.
There are many benefits to the use of AI in these fields. AI can be applied, for example, to the study of a specific type of molecule: Sanofi S.A. has harnessed AI to study small-molecule candidates for oncology and immunology.[8] Another use of AI in the development of new drugs is the prediction of toxicity arising from the interaction of molecules in new compounds.[9]
In clinical studies, the use of chatbots in recruitment and patient support can reduce costs and enhance patient satisfaction and compliance. AI does not merely replace humans in performing various functions; it can also carry out data analysis and decision-making tasks faster and with higher precision. Ulfa et al. write: “The significant advantage of AI is that it is substantially more [sic] better than people in investigating information and it can dissect huge number of information that would regularly not fit into any of the ordinary PCs.”[10]
For example, AI software has been designed to respond to emergency calls, aiming to identify instances of cardiac arrest during calls with greater speed and accuracy than human medical dispatchers.[11] Another example is medical decision-support systems, such as the Watson for Oncology (WFO) platform, an artificial intelligence cognitive computing system. WFO, developed by IBM together with a group of oncologists from Memorial Sloan Kettering Cancer Center (MSK), facilitates the decision-making process traditionally performed by the treating doctor and matches cancer patients with an optimal chemotherapy treatment.[12,13] The efficiency of this AI-based system is such that it can reportedly diagnose breast cancer in 60 seconds.[14]
Another example from the pharmaceutical industry is an AI-based system that identifies and analyzes Good Manufacturing Practices (GMP) deviations, improving a manufacturer’s ability to foresee deviations and prevent their occurrence.[15] AI can likewise support the development of new drugs: various AI-based tools can predict the properties of chemical compounds. For instance, machine learning techniques use the large datasets generated during earlier compound-optimization campaigns to train models that predict the properties of new candidates, as sketched below.
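To make this concrete, here is a minimal, hypothetical sketch of such a property-prediction workflow in Python. The four molecular descriptors, the synthetic dataset, and the predicted property are illustrative placeholders standing in for real historical optimization data; this is a sketch of the train-then-predict pattern, not any particular vendor’s pipeline.

```python
# Hypothetical sketch: learn a structure-property relationship from past
# compound-optimization data, then predict the property for new candidates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)

# Synthetic stand-in for historical data: each row holds descriptors computed
# for a previously screened compound (e.g. molecular weight, logP, H-bond
# donors/acceptors); the target is a measured property such as solubility.
n_compounds = 500
X = rng.normal(size=(n_compounds, 4))           # 4 illustrative descriptors
true_weights = np.array([0.8, -1.2, 0.5, 0.3])  # hidden structure-property law
y = X @ true_weights + rng.normal(scale=0.1, size=n_compounds)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Evaluate on held-out compounds the model has never seen.
print(f"R^2 on held-out compounds: {r2_score(y_test, model.predict(X_test)):.2f}")
```

In practice the descriptors would be computed from compound structures and the target would be a measured assay value, but the pattern of fitting on past campaigns and predicting for new candidates is the same.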
Despite the obvious advantages of AI in the pharmaceutical and medical device industry, there are imminent risks that must be addressed by manufacturers as well as regulators. Even ChatGPT, itself an AI system, acknowledges such risks. According to ChatGPT 3.5 (Figure 1), “Employing artificial intelligence (AI) in the pharmaceutical and medical device industry poses several risks. One significant concern is the potential for biased or poor-quality data used to train AI algorithms. Inaccurate or biased data can lead to flawed predictions and decisions, compromising patient safety and treatment efficacy. Furthermore, the complexity and opacity of AI models make them challenging to interpret, raising concerns about the reliability and trustworthiness of AI-driven systems. Additionally, ensuring regulatory compliance with healthcare regulations such as HIPAA and GDPR is crucial to protect patient privacy and data security. Integrating AI technologies into clinical workflows also presents challenges, as healthcare professionals may resist change and require adequate training to effectively utilize AI-driven tools. Ethical considerations, such as algorithmic bias and fairness, must be carefully addressed to prevent exacerbating disparities in healthcare delivery and outcomes. Finally, overreliance on AI without appropriate human oversight can lead to errors and adverse events, highlighting the importance of maintaining human judgment in patient care decisions. Overall, navigating these risks requires careful consideration and collaboration among stakeholders to ensure the responsible and ethical use of AI in the pharmaceutical and medical device industry.”
To control potential risks, the European Commission introduced the initial EU regulatory framework for AI in April 2021, evaluating and categorizing AI systems based on the potential risks they present to users across various applications. Needless to say, the higher the risk an application poses, the stricter the regulation. In June 2023, the European Parliament adopted its negotiating position on the AI Act, the world’s first comprehensive rules regulating artificial intelligence. Beyond mitigating potential risks to health, safety, and human rights, the AI Act also intends to protect other core values of European society, such as the rule of law, democracy, and the protection of the environment.[16]
Potential risks associated with AI in Europe – Classification of risk levels
The European Parliament is involved in creating policies to strengthen Europe’s capacity in digital technologies. As part of the EU’s digital transformation, meaning the integration of digital technologies by companies and the impact of those technologies on society,[17] the EU encourages the growing employment of AI tools to improve people’s lives. When it comes to AI, the main goal is to ensure that AI systems are “safe, transparent, traceable, non-discriminatory and environmentally friendly.”[18]
The legal framework of the AI Act will apply to any AI system available on the EU market, whether it originates within the EU or from an external source, with the exception of providers of free and open-source models. It is important to note that during research, development, and prototyping activities, AI systems are not required to comply with the AI Act, since these activities precede release on the market.
The classification of risk is determined by the intended function of the AI system, taking into consideration existing EU product safety legislation (Figure 2). With the adoption of the AI Act, AI systems presenting ‘unacceptable’ risk will be banned. This includes applications that perform biometric categorization or social scoring based on sensitive characteristics such as religion, beliefs, political affiliation, or sexual orientation; scrape people’s pictures from the internet to build facial-recognition databases; exploit vulnerable groups; or manipulate people in a way that denies their free will, for example, voice-activated toys that encourage children to engage in dangerous behaviours.
At the next level of risk are applications with the potential to adversely impact people’s safety or fundamental rights; these are classified as high-risk. This category includes, for example, AI systems used for education or vocational training, management systems, employment, law enforcement, legal services, and migration and asylum control. It also covers AI-operated devices, such as elevators, toys, airplanes, cars, and medical devices. Providers of high-risk AI systems will be required to comply with specific obligations, such as implementing quality and risk management systems and adhering to requirements for documentation, traceability, transparency, and cybersecurity. Another key component of the regulation of high-risk applications is the requirement of human oversight.
The next category, posing a lower level of risk, comprises applications with a specific transparency risk. This includes AI systems that pose a risk of manipulation, such as chatbots. The requirements for this category pertain to transparency: users must be informed that they are interacting with AI. The final category covers systems that present minimal risk, such as spam filters, which are not subject to restrictions beyond existing relevant legislation, such as the General Data Protection Regulation (GDPR). The four tiers are illustrated schematically in the sketch below.
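As a purely illustrative aid, the following Python sketch models the Act’s four-tier triage. The keyword lists and the classify function are hypothetical simplifications invented for this article; actual classification under the Act is a legal determination based on its annexes, not a keyword match.

```python
# Hypothetical, simplified triage of an AI system into the AI Act's four tiers.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "conformity assessment required"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Illustrative keyword sets only; the Act's actual criteria are far richer.
PROHIBITED_USES = {"social scoring", "biometric categorization"}
HIGH_RISK_USES = {"medical device", "recruitment", "law enforcement"}
TRANSPARENCY_USES = {"chatbot"}

def classify(intended_use: str) -> RiskTier:
    use = intended_use.lower()
    if any(term in use for term in PROHIBITED_USES):
        return RiskTier.UNACCEPTABLE
    if any(term in use for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    if any(term in use for term in TRANSPARENCY_USES):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("AI-driven medical device for arrhythmia detection"))  # RiskTier.HIGH
print(classify("Customer-support chatbot"))                           # RiskTier.LIMITED
print(classify("Spam filter"))                                        # RiskTier.MINIMAL
```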
General-purpose AI (GPAI) models, and those that pose systemic risk, such as ChatGPT and DALL-E, are subject to specific rules in the new AI Act: safeguards must be implemented to ensure that illicit content cannot be generated; summaries of the copyrighted data used to train the model must be published; and content generated by AI must be disclosed as such.
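To illustrate the disclosure obligation, here is a hypothetical sketch of how a provider might attach an “AI-generated” label to model output. The envelope format and field names are invented for illustration and do not reflect any official labeling standard.

```python
# Hypothetical sketch: wrap generated content in a disclosure envelope so that
# downstream consumers can tell it was produced by an AI system.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> str:
    disclosure = {
        "ai_generated": True,                              # the core disclosure
        "model": model_name,                               # which system produced it
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # Ship the disclosure alongside the content, e.g. as a JSON envelope.
    return json.dumps({"content": text, "disclosure": disclosure}, indent=2)

print(label_generated_content("Sample generated summary...", "example-gpai-model"))
```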
An additional important aspect of the AI Act is that citizens will be able to file complaints if they have been harmed by an AI system. Moreover, once a complaint is received, the citizen will be able to receive an explanation of why the AI system made the decision it did.
Providers of high-risk AI systems: a new process
The new regulations mandate that developers of high-risk AI systems undergo a conformity assessment to ensure compliance with the new AI requirements. For medical devices, for instance, this assessment involves a notified body. Only after demonstrating that the device is safe and conforms to the requirements can the developer register the AI system in the EU database. As a last step before marketing, a declaration of conformity must be signed. All approved AI systems must bear the CE marking. The systems then remain under market surveillance, with human oversight and monitoring, and serious incidents and malfunctions must be reported. The sequence is sketched below.
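The route to market described above can be read as an ordered checklist. The following Python sketch is a hypothetical model of that sequence; the step names paraphrase the process described in this article and are not official terminology.

```python
# Hypothetical checklist model of the high-risk conformity route.
from dataclasses import dataclass, field

STEPS = [
    "conformity assessment (notified body for medical devices)",
    "registration in the EU database",
    "signed declaration of conformity",
    "CE marking affixed",
    "post-market surveillance and incident reporting in place",
]

@dataclass
class HighRiskAISystem:
    name: str
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        assert step in STEPS, f"unknown step: {step}"
        self.completed.add(step)

    def ready_to_market(self) -> bool:
        # Marketing requires every pre-market step; surveillance (the last
        # step) continues after the system is placed on the market.
        return all(s in self.completed for s in STEPS[:4])

device = HighRiskAISystem("AI triage software")
for step in STEPS[:4]:
    device.complete(step)
print(device.ready_to_market())  # True
```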
US policy regarding AI
On October 30, 2023, United States (US) President Joe Biden issued an Executive Order that aims to establish “new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”[19] In addition, the White House has released a “Blueprint for an AI Bill of Rights” that identifies “five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence”: Safe and Effective Systems (protection from unsafe or ineffective systems); Algorithmic Discrimination Protections (protection from discrimination by algorithms); Data Privacy (protection from abusive data practices and control over the use of your data); Notice and Explanation (transparency about when automated systems are being used and how they contribute to outcomes that affect the user); and Human Alternatives, Consideration, and Fallback (the ability to opt out and to have access to a person who can consider and remedy any problems encountered).[20]
The US Food and Drug Administration (FDA), through the cooperative action of the Center for Biologics Evaluation and Research (CBER), the Center for Drug Evaluation and Research (CDER), the Center for Devices and Radiological Health (CDRH), and the Office of Combination Products (OCP), plans to “Advance the responsible use of AI for medical products. This entails building regulatory approaches that, to the extent feasible, can be applied across various medical products and uses within the health care delivery system.” To that end, numerous actions will be taken regarding how AI is used in medical products, organized into four areas of focus: (1) foster collaboration to safeguard public health; (2) advance the development of regulatory approaches that support innovation; (3) promote the development of harmonized standards, guidelines, best practices, and tools; and (4) support research related to the evaluation and monitoring of AI performance.[21]
Canada’s AIDA
In Canada, the field of artificial intelligence is to be governed by the Artificial Intelligence and Data Act (AIDA). While there is still no legislation regulating AI in Canada, AIDA proposes an approach focused on building on existing Canadian consumer protection and human rights law (such as the Canada Consumer Product Safety Act, the Food and Drugs Act, the Canadian Human Rights Act and corresponding provincial laws, and the Criminal Code, among others); ensuring that policy and enforcement move together as the technology evolves; and prohibiting reckless and malicious uses of AI. AIDA’s framework is similar to the EU AI Act in that it proposes a risk-based approach, with criteria for “high-impact AI systems” (equivalent to the EU’s high-risk category) defined in regulations aimed at “protecting the interests of the Canadian public, while avoiding imposing an undue burden on the Canadian AI ecosystem.”[22]
AI in Health Care – Balancing Innovation and Responsibility
In the coming years, we will undoubtedly continue to see the rise of AI applications across the healthcare landscape, driving innovation in lifesaving practices and technologies. Key to the development of this nascent industry is the adoption of practical legislation and regulation to ensure that AI is incorporated responsibly, with safeguards in place so that patients have access to safe and effective products; are protected against algorithm-based discrimination; have assurance that the privacy of their data will be secured; have a clear understanding that they are interacting with AI and what the implications are; and are able to seek alternative, non-AI-based care. Regulators must therefore continue their efforts, in a rapidly advancing landscape, to strike a balance between innovation and responsible regulation.
About the Authors
Ofer Yifrach-Stav has 18+ years of experience in the pharmaceutical, biotechnology and medical device industry, focusing on compliance, quality assurance and validation aspects. He has a BSc in Biotechnology Engineering, an MSc in Environmental Engineering, and a PhD in Computer Science. Ofer is a certified ISO Lead Auditor in ISO 9001:2015, ISO 13485:2016, and ISO 27001:2022.
Charles Campbell has 8+ years of experience in the pharmaceutical and medical device industries, focusing on quality assurance, validation, and research and development aspects. He has a BSc in Biochemistry and a PhD in Cellular and Molecular Medicine, focusing on developmental signaling pathways.
YS Consulting is a dynamic consulting firm offering solid experience and strategic guidance to biotechnology, medical device, pharmaceutical, and cosmetic companies worldwide. We provide a comprehensive range of high-quality services, from ensuring compliance and developing robust quality management systems (QMS) to validation, training, inspection readiness, and project management, delivering forward-thinking strategies that help our clients navigate complex regulatory landscapes.
References
1. Heikkilä, M. “The AI Act is done. Here’s what will (and won’t) change.” MIT Technology Review, 19 Mar. 2024. https://www.technologyreview.com/2024/03/19/1089919/the-ai-act-is-done-heres-what-will-and-wont-change/. Accessed 26 Mar. 2024.
2. Copeland, B.J. “Artificial intelligence.” Encyclopedia Britannica, 6 Mar. 2024. https://www.britannica.com/technology/artificial-intelligence. Accessed 10 Mar. 2024.
3. Castro, D. and New, J., 2016. The promise of artificial intelligence. Center for Data Innovation, 115(10), pp. 32-35.
4. Knight, W. “Could AI Solve the World’s Biggest Problems?” MIT Technology Review. https://www.technologyreview.com/s/545416/could-ai-solve-theworlds-biggest-problems/.
5. Henstock, P., 2021. Artificial intelligence in pharma: positive trends but more investment needed to drive a transformation. Archives of Pharmacology and Therapeutics, 2(2), pp. 24-28.
6. Paul, D., Sanap, G., Shenoy, S., Kalyane, D., Kalia, K. and Tekade, R.K., 2021. Artificial intelligence in drug discovery and development. Drug Discovery Today, 26(1), p. 80.
7. Harrer, S., Shah, P., Antony, B. and Hu, J., 2019. Artificial intelligence for clinical trial design. Trends in Pharmacological Sciences, 40(8), pp. 577-591.
8. Sanofi [Press Release]. Exscientia and Sanofi establish strategic research collaboration to develop AI-driven pipeline of precision engineered medicines. 7 Jan. 2022. https://www.sanofi.com/en/media-room/press-releases/2022/2022-01-07-06-00-00-2362917.
9. Öztürk, H., Özgür, A. and Ozkirimli, E., 2018. DeepDTA: deep drug–target binding affinity prediction. Bioinformatics, 34(17), pp. i821-i829.
10. Ulfa, A.M., Afandi Saputra, Y. and Nguyen, P.T., 2019. Role of artificial intelligence in pharma science. Journal of Critical Reviews, 7(1), p. 2020.
11. Blomberg, S.N., Folke, F., Ersbøll, A.K., Christensen, H.C., Torp-Pedersen, C., Sayre, M.R., Counts, C.R. and Lippert, F.K., 2019. Machine learning as a supportive tool to recognize cardiac arrest in emergency calls. Resuscitation, 138, pp. 322-329.
12. Jie, Z., Zhiying, Z. and Li, L., 2021. A meta-analysis of Watson for Oncology in clinical application. Scientific Reports, 11(1), p. 5792.
13. Tupasela, A. and Di Nucci, E., 2020. Concordance as evidence in the Watson for Oncology decision-support system. AI & Society, 35, pp. 811-818.
14. Mishra, V., 2018. Artificial intelligence: the beginning of a new era in pharmacy profession. Asian Journal of Pharmaceutics (AJP), 12(02).
15. Guo, W., 2023. Exploring the Value of AI Technology in Optimizing and Implementing Supply Chain Data for Pharmaceutical Companies. Innovation in Science and Technology, 2(3), pp. 1-6.
16. European Commission. Press Release – Artificial Intelligence – Questions and Answers. 12 Dec. 2023. https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683.
17. European Parliament Directorate General for Communication. Shaping the digital transformation: EU strategy explained. 2023. https://www.europarl.europa.eu/topics/en/article/20210414STO02010/shaping-the-digital-transformation-eu-strategy-explained.
18. European Parliament. EU AI Act: first regulation on artificial intelligence. 8 June 2023.
19. The White House. FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Briefing Room, Statements and Releases. 30 Oct. 2023.
20. The White House, Office of Science and Technology Policy. Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.
21. Food and Drug Administration. Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together.
22. Innovation, Science and Economic Development Canada. The Artificial Intelligence and Data Act (AIDA) – Companion document.