November 13, 2024

Uniting Global AI Regulatory Frameworks: Predictions & Opportunities

Advisory

Life sciences companies are certainly not alone when it comes to questions about how, when and where to use artificial intelligence. The uncertainty surrounding the direction of global AI regulation, however, poses a distinct challenge to a complex industry that is already highly regulated.

Using AI to navigate regulatory and compliance matters is already underway to varying degrees at some life sciences businesses, but a world of possibilities lies ahead. How those possibilities unfold will be guided, in part, by AI regulatory frameworks that are still being shaped and debated. The answers become more urgent when considering that AI could potentially allow practitioners in the industry to:

  1. Diagnose using AI-enabled imaging or digital tools
  2. Treat using a sophisticated AI algorithm
  3. Enroll participants in AI-designed and monitored studies
  4. Transition patients to approved products using an AI-customized treatment plan
  5. Monitor patients via wearable sensors or implants incorporating AI

Any one of the above healthcare developments would have profound implications for the life sciences industry and for the patients it serves. The regulatory approaches that countries take will shape how likely these medical breakthroughs are to be realized.

No One Blueprint for AI Regulation

Governments in Europe and North America broadly agree on the catalog of risks presented by AI but are pursuing divergent paths when it comes to addressing them — creating a complex and evolving global regulatory environment that poses challenges for life sciences companies.

To better understand what’s driving the survey results, consider the developing, complex landscape that life sciences companies currently face. The U.S. has taken a largely sectoral approach to AI regulation at the federal level, applying regulators’ existing statutes to new technologies, such as the Federal Trade Commission’s authority to take action against companies using AI to engage in discriminatory practices or the FDA’s oversight of AI tools classified as medical devices.

Questions to Ask Before Deploying AI

As with any program, planning and preparation are key to successful AI implementation. Life sciences companies looking to use AI tools should first ask themselves:

  • Do we have an AI policy and governance structure in place?
  • Do we have a system for conducting thorough risk assessments when procuring AI tools, including diligence on both the vendor and the tool?
  • Have we implemented robust data privacy and security measures?
  • Have we cleaned any datasets to be used in training the AI system?
  • Have we updated our procurement templates to reflect AI-specific challenges?
  • Have we trained both procurement personnel and end users on the use of AI tools?
  • Is there any need or requirement to tell customers, stakeholders, or investors about our proposed use of AI tools?

The U.S. Approach to AI Regulation

The Biden administration took small steps toward horizontal regulation, developing and applying policy frameworks across agencies’ efforts. In 2023, it laid out an initial vision for the emerging AI industry in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

By mandating a set of minimum evaluation, monitoring, and risk-mitigation practices for use in the federal government, the Biden administration attempted to use the federal example and procurement policy to foster responsible AI deployment and development in the private sector as well. To that end, the executive order also called for various agencies to undertake rulemakings and other inquiries related to AI. President-elect Donald Trump has vowed to change direction, with the 2024 Republican Platform stating that “[w]e will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”

Congress, and especially the Senate, undertook a crash course on AI and its implications in 2023. Drawing on months of forums, briefings, and listening sessions, the Bipartisan Senate AI Working Group released “Driving U.S. Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States Senate” in May 2024, emphasizing support for innovation and offering a more limited approach to regulation.

The House of Representatives similarly established a bipartisan Task Force on Artificial Intelligence to develop a legislative framework for that chamber. In the interim, the House Republican leadership came out against prohibitions on algorithmic discrimination that had been included in the bipartisan privacy bill negotiated between the chairs of the House Energy and Commerce Committee and the Senate Commerce, Science, and Transportation Committee. Accordingly, it seems unlikely the House Task Force will propose significant regulation when it presents its report, probably later this year.

A Look at States’ Approach to AI Regulation

While the prospects for significant congressional action on regulation remain dim, U.S. state governments have been a hive of activity. Almost half of states have consumer privacy laws that regulate automated decision-making (ADM). A few state (and local) statutes regulate particular AI applications (e.g., Illinois’s law on using AI to evaluate video interviews in hiring). Colorado has enacted the first U.S. statute generally regulating AI — it targets discrimination in AI systems that make “consequential” decisions about individuals, including with respect to healthcare services. California also recently adopted laws on disclosure and detection of AI-generated content and on disclosures about the data used to train AI systems. A wide range of AI legislation remains under consideration in state capitols, with additional measures likely to be adopted over the next few years.

The EU AI Act: A Wider, Hybrid Strategy

In contrast to the U.S.’s light-touch and sectoral approach, the EU has chosen to regulate AI intensively and horizontally, with broad measures covering the entire economy. As a starting point, AI systems using personal data must comply with the General Data Protection Regulation (GDPR), which has a number of provisions addressing AI development and deployment.

On top of the GDPR, the EU’s AI Act, which came into force this year, takes a hybrid approach, following the template of EU product-safety legislation with a risk-based approach to regulation while also aiming to protect fundamental rights. It has a wide reach. In addition to European developers and deployers of AI systems, the AI Act applies to non-EU developers that want to market their models and systems inside the EU — and even to non-EU deployers of AI systems from which the output is sent into the EU.

A small set of practices that pose an unacceptable risk to fundamental rights is prohibited outright, such as expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage, and biometric categorization based on certain sensitive or protected attributes.

The AI Act prescriptively regulates AI systems used in high-risk use cases such as safety components for various types of regulated products and — unless there is no significant risk to health, safety, or fundamental rights — various applications of biometrics; employment decisions; and access to, or eligibility for, healthcare and other essential services and benefits.

Even non-high-risk AI systems must comply with various transparency requirements. For example, systems intended to interact with people must make it obvious that they are AI, and generative AI outputs must be identified as such.

Most of these obligations fall on the provider (i.e., the developer) of the AI system, not the deployer. But a deployer will be treated as the provider if it puts its name or trademark on a high-risk AI system already on the market, substantially modifies an existing high-risk AI system, or modifies a non-high-risk AI system in a way that makes it high risk.

Paradoxically, for a highly prescriptive piece of legislation, the AI Act is in many ways unfinished. A lot of details have been left to implementing and delegated acts by the European Commission; guidelines and other guidance from the European Commission, EU AI Office, EU member states’ authorities, and other bodies; and technical standards to be adopted by the European standards-setting bodies that will provide safe harbors for compliance by providers of high-risk AI systems and general-purpose AI models.

The EU’s revised Product Liability Directive will require member states to subject AI systems and other software to their product liability laws. In addition, the EU’s co-legislators are considering an AI Liability Directive to clarify how these product liability laws will apply to AI systems.

Differing AI Regulatory Frameworks Pose Challenges

As in the EU, the UK’s GDPR already governs significant aspects of the development and deployment of AI systems that use personal data. But, as in the U.S., the UK is primarily taking a sectoral approach to AI regulation, with existing regulators applying high-level principles to AI use in their domains. The Starmer government seems likely to continue its predecessor’s principal focus on the safety of frontier AI models.

These differences are only the tip of the iceberg, as other countries and even states within countries adopt their own AI frameworks.

  • China has adopted a raft of measures that aggregate into relatively comprehensive AI regulation. These measures combine the consumer and worker protections common to Western AI regulation with provisions to maintain social stability and party control. 
  • A number of countries have privacy laws like the GDPR that cover ADM, and their regulators regularly provide guidance on how their privacy laws apply to the development and deployment of AI.
  • Brazil and Canada are among the countries seriously considering AI legislation.
  • Japan may follow suit, as the ruling Liberal Democratic Party issued a 2023 white paper suggesting that AI regulation may be necessary.
  • Other jurisdictions like Singapore continue to believe that only soft-law guidance on AI governance is necessary.

Global AI Regulation Is A Work in Progress

International regulatory consensus is unlikely to materialize any time soon, especially since it took months for just the G7 nations to agree to high-level AI principles last year.

The current landscape of privacy laws, particularly in the U.S. but elsewhere as well, illustrates how complicated the regulatory framework for AI could become for multinational companies to navigate. The rapid development of privacy statutes and regulations over the past decade has produced inconsistent requirements across jurisdictions, creating potential confusion for businesses and consumers alike and diverting resources into compliance that companies could otherwise have invested in new products and services.

However, even if the legal regimes remain very different, international standards could help to bridge the differences. The International Organization for Standardization has already issued standards on AI management systems (ISO/IEC 42001) and guidance on AI risk management (ISO/IEC 23894). They will not suffice for compliance with the EU AI Act, but they are a start. Major jurisdictions may eventually recognize the same standards as consistent with their own laws, which would harmonize requirements for global businesses. Until then, however, companies will have to decide how much regulatory dissonance they can handle.

Our AI in Life Sciences Report Outlines the Data and Opportunities That Make Clear Global Regulation So Urgent.