Charting the Course Through the New EU Artificial Intelligence Act: A Guide for Healthcare and Life Science Industries

On June 14, 2023, the European Parliament took a significant step toward regulating artificial intelligence (AI), endorsing its stance on the draft EU Artificial Intelligence Act (EU AI Act).  This all-encompassing regulatory framework for AI is poised to profoundly impact companies globally, especially those operating in the healthcare and life science sectors.  With additional regulations anticipated for US-based companies, it is paramount for executives and investors in these industries to understand the potential implications and prepare for the forthcoming changes.

General Data Protection Regulation (GDPR) and AI

AI has been a subject of intense discussion across the Atlantic, with misconceptions about the technology and the applicability of existing laws to AI systems.  Within the European Union (EU) and European Economic Area (EEA), the General Data Protection Regulation (GDPR) already governs products and services leveraging AI technologies. Companies must factor in the GDPR when employing AI systems to gather or process the personal data of individuals residing in the EU and EEA.  Certain EU data protection authorities (DPAs) have interpreted the GDPR as applicable to specific AI systems and services, leading to actions such as a temporary ban on ChatGPT in Italy.  The European Data Protection Board (EDPB) has set up a task force to examine the privacy implications associated with ChatGPT, although it has yet to yield any concrete outcomes.

The EU AI Act: A Comprehensive Approach to AI

The European Commission is adopting a holistic approach to AI.  The draft EU AI Act endorsed in the recent vote includes more stringent requirements for generative AI services, like ChatGPT, and an expansion of the scope of “high-risk” scenarios.  The EU AI Act would require all users of “high-risk AI systems” to conduct a detailed “fundamental rights impact assessment,” akin to the data protection impact assessments mandated under the GDPR.  The AI Act allows the use of these systems but requires compliance with various new regulations, including comprehensive testing, proper documentation of data quality, and an accountability framework outlining human oversight of the relevant AI system.

Possible “High-Risk” Scenarios

High-risk scenarios in healthcare and life sciences typically refer to situations where the use of AI systems could potentially lead to significant harm to people's health, safety, or fundamental rights. Here are a few examples:

  • Medical Diagnostics and Treatment Recommendations: AI systems used to diagnose diseases or recommend treatments based on patient data are considered high-risk because errors or biases in these systems could lead to incorrect diagnoses or inappropriate treatments, potentially causing harm to patients.

  • Clinical Decision Support Systems: These are AI systems used to assist healthcare professionals in making clinical decisions. If these systems provide incorrect or misleading information, it could lead to inappropriate care decisions.

  • Patient Monitoring Systems: AI systems used to monitor patients' vital signs or other health indicators can be considered high-risk. If these systems fail to accurately detect or alert healthcare providers about a change in a patient's condition, it could result in delayed or missed treatment.

  • Drug Discovery and Development: AI is increasingly being used in the discovery and development of new drugs. If an AI system makes an error in this process, it could lead to the development of ineffective or harmful drugs.

  • Genetic Testing and Personalized Medicine: AI systems used in genetic testing and personalized medicine could also be considered high-risk. Errors or biases in these systems could lead to incorrect health risk assessments or inappropriate treatment recommendations.

  • Robotic Surgery: AI systems used in robotic surgery could be considered high-risk. If these systems malfunction or make errors during surgery, it could result in harm to the patient.

It is important to note that the classification of an AI system as high-risk doesn't necessarily mean that the system is inherently dangerous or harmful. Instead, it means that due to the potential risks associated with the system's use, it needs to be subject to stricter regulatory oversight to ensure its safety and effectiveness.

Implications for Healthcare and Life Science Companies

The EU AI Act has significant implications for healthcare and life science companies, particularly in the following areas:

  • Regulatory Compliance: Companies will need to ensure their AI systems comply with these new rules.  This could involve significant changes to their AI development and deployment processes, including ensuring human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.

  • Risk Assessment and Mitigation: Companies will need to assess and mitigate the risks their AI systems pose.  This could involve changes to their risk management processes and potentially developing new tools and methodologies for risk assessment and mitigation.

  • Transparency and Accountability: Companies must be transparent about their use of AI and accountable for its impacts.  This could involve changes to their data management and reporting processes and potentially developing new tools and methodologies for transparency and accountability.

  • Innovation and Testing: Promoting regulatory sandboxes could allow companies to test their AI systems in real-life environments before they are deployed.  This could help to identify and address potential issues early in the development process.

  • Patient Rights and Trust: The new rules could help build trust in AI systems among patients and the public.  This could lead to increased AI adoption in healthcare and life sciences.

  • High-Risk AI Systems: If a company's AI system falls under the high-risk category, it will need to adhere to stricter regulations.  This could impact AI systems used in critical areas of healthcare and life sciences, such as diagnosis, treatment recommendations, and patient monitoring.

  • Research and Development: The exemptions for research activities and AI components provided under open-source licenses could benefit AI research and development companies.  This could potentially lead to increased innovation in the field of AI in healthcare and the life sciences.

The Global Landscape of AI Regulation

The EU AI Act signifies a substantial shift in the global landscape of AI regulation.  The Act's broad regulatory scope covers many types of AI systems and provides for the adoption of rules enforcing the EU's principles.  If the EU AI Act survives in its current form, or close to it, the EU would take the strictest approach to regulating AI systems of the three major jurisdictions.  In contrast, the United Kingdom and the United States are currently conducting further studies into the issues, leveraging existing authorities, and providing nonbinding guidance on the development and use of AI products and services rather than moving to adopt a comprehensive regulatory regime.  The EU AI Act would also apply to developers, service providers, and businesses located outside the EU when the output produced by their AI systems is used in the EU, or when such systems collect or process personal data of individuals located in the EU or EEA.

Looking Ahead

The EU AI Act is not yet final.  The European Parliament, the European Commission, and the EU Council (representing the Member States) will now work to reconcile their respective versions of the Act, mediating any conflicts.  Once reconciled, the provisional text will return to the European Parliament for ratification.  While it is difficult to predict how long this process will take, it is possible that a provisional EU AI Act could return to the European Parliament by December 2023.  The effective date of the legislation is still subject to negotiation, but the Act will likely take effect two years after it is formally adopted into law.  As the landscape of AI regulation continues to evolve, healthcare and life science companies must stay informed and prepared to navigate these changes.  The EU AI Act represents a significant step towards a more regulated future for AI, with far-reaching implications for companies operating in this space.  By understanding and preparing for these changes, companies can remain compliant, competitive, and at the forefront of AI innovation.
