Virtual and Digital Health Digest
This digest covers key virtual and digital health regulatory and public policy developments during May and early June 2024 from the United States, United Kingdom, and European Union.
In this issue, you will find the following:
U.S. News
- FDA Regulatory Updates
- Health Care Fraud and Abuse Updates
- Corporate Transactions Updates
- Provider Reimbursement Updates
- Policy Updates
- Privacy and AI Updates
U.S. Featured Content
On May 10, 2024, the Centers for Medicare & Medicaid Services (CMS) published the final Medicaid Managed Care Rule, which aims to improve access, quality, and health outcomes for Medicaid and Children’s Health Insurance Program (CHIP) managed care enrollees. The rule establishes maximum appointment wait time standards, with annual secret shopper surveys to ensure compliance, as well as standards addressing the evolving role of telehealth in managed care plans. The final rule does not implement the same network adequacy standards used for Medicare Advantage (MA) plans, as discussed in our April 2024 digest.
EU and UK News
EU/UK Featured Content
Artificial intelligence (AI) safety has been in focus over the past month, including with the publication of the Interim International Scientific Report on the Safety of Advanced AI. International collaboration in this area is increasing: world leaders met at the AI Summit in Seoul, and the UK government recently announced a collaboration on AI safety with Canada, supplementing its existing commitment with France. Further, the UK launched Inspect, an AI safety evaluations platform available to the global community. In the meantime, the EU has established an AI Office to oversee the implementation of the AI Act, and the Medicines and Healthcare products Regulatory Agency (MHRA) has launched its AI Airlock to address novel challenges in the regulation of artificial intelligence as a medical device (AIaMD).
U.S. News
FDA Regulatory Updates
FDA Issues Additional Guiding Principles for Transparency of Machine Learning-Enabled Medical Devices. On June 13, 2024, the U.S. Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s MHRA identified further guiding principles for transparency of machine learning-enabled medical devices (MLMDs). Building on the 10 guiding principles jointly identified in 2021 for good machine learning practice, the transparency guiding principles focus on providing users and health care providers with relevant information at timely intervals to support safe and effective use of MLMDs and enhance patient-centered care. The principles consider relevant audiences, relevant information, placement of information, timing, methods to support transparency, and the motivation for transparency. Implementing these principles is important to ensure access to safe, effective, and high-quality machine learning technologies as they rapidly evolve in the medical field.
CDER Establishes Program on Use of AI for Pharmacovigilance. On June 11, 2024, FDA’s Center for Drug Evaluation and Research (CDER) announced that it had established an Emerging Drug Safety Technology Meeting (EDSTM) program that provides eligible parties with the opportunity to meet with CDER regarding their research, development, and use of AI and other emerging technologies in pharmacovigilance (PV). As described by CDER, the goals of the meeting program are to facilitate discussion of, and mutual learning about, the pharmaceutical industry’s application of these technologies to PV. CDER makes clear that “[t]he EDSTM program is not an avenue to seek regulatory advice on compliance with pharmacovigilance regulations” and is instead meant to “help CDER consider providing regulatory advice on specific technologies to facilitate their adoption when appropriate.” The EDSTM program is open to applicants with at least one approved application regulated by CDER, as well as to other relevant parties supporting industry’s PV activities (e.g., academia, contract research organizations/pharmacovigilance vendors, software developers), that develop, leverage, or intend to leverage AI or other emerging technologies that can be used to satisfy the post-marketing reporting requirements in 21 CFR 314.80, 314.98, and 600.80.
During the initial phase of the program, FDA will grant EDSTM requests quarterly, for a total of up to nine participants in a 12-month period. The first EDSTM request submission deadline is October 1, 2024. CDER plans to respond to meeting requests with a decision to grant, deny, or defer an EDSTM within 45 days after the submission deadline.
FDA Updates List of AI/ML-Enabled Devices. On May 13, 2024, the FDA updated its publicly available list of AI/Machine Learning (ML)-enabled medical devices to add 191 devices to the list. The FDA explained that “[o]f those newly added to the list, 151 are devices with final decision dates between August 1, 2023 and March 31, 2024, and 40 are devices from prior periods identified through a further refinement of methods used to generate this list.” With this update, the AI/ML device list now includes 882 devices.
AdvaMed Urges the FDA To Revise Its Guidance on CDS Software. In a letter dated May 6, 2024, the Advanced Medical Technology Association (AdvaMed) responded to Representative Ami Bera’s request for information on the current state of AI in health care. Among other things, AdvaMed advocated for the FDA to revise its guidance on clinical decision support (CDS) software to better align with the 21st Century Cures Act. AdvaMed specifically suggested that the FDA should “ensure AI-based CDS that produces a single output, such as a recommendation for a particular treatment option that is consistent with common treatment guidelines, wouldn’t de facto be regulated as a medical device.” In a final guidance issued in September 2022, FDA interpreted the Cures Act non-device CDS exemption such that software that provides a specific preventive, diagnostic, or treatment output would generally fail to meet the definition of a non-device CDS function.
FDA Clears Baby Monitoring System for Marketing. On May 3, 2024, Masimo received 510(k) clearance for Masimo Stork™, its over-the-counter baby monitoring system that provides alarms to parents or other caregivers for use with healthy babies 0-18 months of age. As described by Masimo in its May 6 press release, Stork™ monitors a baby’s oxygen saturation level, pulse rate, and skin temperature. It also alerts caregivers if a baby’s readings fall outside of preset ranges.
Health Care Fraud and Abuse Updates
DOJ and State Attorneys General Persist in Pursuing Medically Unnecessary Telemedicine Schemes Across the U.S. On May 7, 2024, Daniel Hurt was sentenced to 10 years in prison and ordered to pay more than US$97 million in restitution for his role in three separate health care fraud and illegal kickback schemes. Two of the three cases involved telehealth. In the first telehealth-related case, Hurt and his co-conspirators worked alongside patient recruiters to solicit patients with health insurance. Patient recruiters would generate prescriptions with the patients’ information for a limited selection of compounded medications whose formulations were created or altered to receive the maximum possible reimbursement from insurance companies. These prescriptions would then be referred to a telemedicine service and sent to a pharmacy owned by Hurt and his co-conspirators, which received thousands of medically unnecessary prescriptions. After the prescriptions were filled, the pharmacy would bill patients’ insurance plans thousands of dollars for these expensive compounded medications. Hurt received over US$4.2 million as a result of this scheme.
In the second telehealth-related case, Hurt and co-conspirators, including those associated with so-called marketing entities, obtained thousands of cancer genomic (CGx) testing samples from Medicare beneficiaries across the country. Marketers used targeted campaigns to induce beneficiaries to submit CGx specimens via cheek swabs sent to their homes or provided at purported “health fairs” held throughout the United States. Hurt funneled these specimens to Ellwood City Medical Center (ECMC), a hospital in Ellwood City, Pennsylvania, that allegedly billed Medicare for the tests even though ECMC did not possess validated equipment to conduct any CGx testing. To justify Medicare reimbursement for the CGx testing, Hurt and co-conspirators acquired CGx prescriptions from telemedicine physicians despite the fact that the doctors did not conduct proper telemedicine visits, did not treat Medicare beneficiaries for cancer or symptoms of cancer, and did not use the test results in the treatment of the beneficiaries. Medicare ultimately suffered a loss of more than US$25 million from the scheme.
In separate matters, Jamie P. McNamara and John M. Spivey were charged on May 14, 2024, for their roles in a scheme to defraud Medicare by billing for cancer genetic testing and cardiovascular genetic testing that was ineligible for Medicare reimbursement because it was medically unnecessary and procured through the payment of illegal kickbacks and bribes. McNamara and Spivey operated multiple laboratories that acquired doctors’ orders for genetic testing from call centers and telemarketers that allegedly used aggressive telemarketing campaigns to convince Medicare beneficiaries to agree to receive genetic testing. Telemedicine doctors, who were not the beneficiaries’ treating physicians, allegedly signed the orders for genetic testing but did not have any type of consultation with the beneficiaries and did not follow up with the beneficiaries after the testing was performed. In total, the government estimated that the laboratories operated by McNamara and Spivey submitted over US$174 million in false and fraudulent claims to Medicare for genetic testing and received over US$55 million in reimbursements.
Plaintiffs Challenge California’s Telehealth Licensure Rule in Recent Lawsuit. On May 16, 2024, two plaintiffs, a patient and her radiation oncologist, filed a complaint against the head of California’s medical licensing board, challenging the state’s telemedicine regulations, which require doctors who treat or consult with patients in California to also be licensed in California. Shellye Horowitz, who resides in California, suffers from hemophilia A and uses telehealth visits with her radiation oncologist, Dr. Sean McBride, who resides and is licensed in New York, to obtain specialty care. California law mandates that “any person who practices … any system or mode of treating the sick or afflicted in [California] … without having at the time of doing so a valid, unrevoked, or unsuspended certificate … is guilty of a public offense, punishable by a fine not exceeding ten thousand dollars ($10,000), [or] by imprisonment.” Cal. Bus. & Prof. Code § 2052(a). The lawsuit argues, in part, that California’s licensure rule burdens interstate specialty medical practices and violates the U.S. Constitution’s Dormant Commerce Clause, which prohibits states from enacting laws that discriminate against interstate commerce or excessively burden it in relation to any putative local benefits. This case is still ongoing.
Corporate Transactions Updates
Do You Have AI? Investment in health care AI continues to dominate conversations and the market, with over one in four venture dollars flowing to companies focused on development and real-world implementation of this technology. This continues the upward trajectory of venture capital investment in this subsector of the health care industry over the past couple of years, with Silicon Valley Bank predicting that investment will top US$11 billion in 2024, compared to last year’s US$7.2 billion. Another positive sign for continued growth comes from an increasing number of new funds focused on health care AI that have raised substantial dollars in 2024. Current health care AI investment cuts across the spectrum, with dollars deployed for drug discovery, personalized medicine, and clinical support, as well as administrative functions including virtual assistants, clinical scribes, and revenue cycle efficiencies. Illustrating the health of this sector, Tempus AI, one of the early darlings of precision medicine leveraging large clinical and molecular data sets, is finally set to IPO this year, having filed its prospectus with the SEC in May. Tempus AI hopes to raise US$410.7 million at a target valuation of US$6.1 billion.
Provider Reimbursement Updates
CMS Releases Medicaid Managed Care Rule. On May 10, 2024, CMS published a final rule that aims to improve access to care, quality, and health outcomes for Medicaid and Children’s Health Insurance Program managed care enrollees. 89 Fed. Reg. 41002. Among other updates, the final rule establishes maximum wait times for certain appointments, including routine primary care services and outpatient mental health and substance use disorder services. The rule also requires states to conduct annual secret shopper surveys to ensure compliance with appointment wait time standards.
In issuing these new standards, CMS addressed the evolving role of telehealth in federal health care programs. Appointments offered via telehealth will only count towards compliance with appointment wait time standards if the provider also offers in-person appointments. Moreover, telehealth visits offered during secret shopper surveys must be separately identified in survey results. Id. at 41276 (42 C.F.R. § 438.68(f)(2)(ii)). In support of this policy, the agency explained that while “increased reliance on telehealth can and should be part of the solution to address access deficiencies,” managed care plans cannot rely solely on telehealth to meet enrollees’ care needs. Id. at 41026.
Notably, this approach differs from the network adequacy standards used for Medicare Advantage plans. As we covered in the April 2024 digest, MA plans can receive a 10-percentage-point credit towards the percentage of beneficiaries that reside within required time and distance standards when the plans contract with telehealth providers of certain specialty types, even if such providers do not also offer in-person appointments. CMS declined to adopt this approach for Medicaid managed care plans, noting that the time and distance standards applicable to MA plans are “substantially different” from appointment wait time standards. Id.
Policy Updates
House Ways & Means Committee Passes Telehealth Legislation. On May 8, 2024, the House Ways & Means Committee passed six health care bills related to telehealth and rural health issues, including legislation that would extend COVID-era flexibilities for Medicare’s telehealth coverage through 2030. The legislative package contains pharmacy benefit manager (PBM) reforms as offsets, which would prohibit PBMs and their affiliates from deriving income for covered Part D drugs based on a manufacturer’s price for the drug (“delinking”).
Senate Leader Schumer Releases AI Roadmap. On May 14, 2024, Senate Majority Leader Chuck Schumer (D-NY) released the Senate AI Working Group’s bipartisan roadmap for the development of AI policy. The roadmap outlines policy priorities the working group identified through the Senate’s nine AI Insight Forums, which convened over 150 leaders from the private sector, academia, the nonprofit space, government, and more to address pressing AI policy issues. While the roadmap calls for increasing federal investment in AI to US$32 billion annually and endorses several pieces of existing AI legislation, Leader Schumer’s announcement does not include a timeline or plan for implementation by the end of the year.
Bipartisan House Members Send Letter Urging Expanded Beneficiary Access to Digital Therapeutics. On May 31, 2024, Representatives Kevin Hern (R-OK), Mike Thompson (D-CA), August Pfluger (R-TX), and Doris Matsui (D-CA) sent a letter encouraging CMS to expand Medicare coverage and reimbursement for digital therapeutics under existing benefit categories. CMS officials reportedly said that the agency is considering ways to incorporate digital therapeutics into the 2025 physician fee schedule, including the possibility of coverage for “digital cognitive behavioral therapy.”
Privacy and AI Updates
House Energy and Commerce Subcommittee Advances Federal Privacy Bill. On May 23, 2024, the Innovation, Data, and Commerce Subcommittee of the House Energy and Commerce Committee approved an updated version of the American Privacy Rights Act, the bipartisan privacy bill originally released in draft form in early April. The revised draft contains several notable new provisions, including one permitting individuals to choose to have humans, rather than certain algorithms, reach consequential business decisions about them, unless honoring such a choice would be technologically impractical or prohibitively costly. The revised bill also includes a requirement that large data holders that use such algorithms to make consequential decisions conduct (or engage an independent auditor to conduct) an impact assessment of such use and submit the assessment to the National Telecommunications and Information Administration. With respect to personal health information privacy mandates, the new version of the bill adds an exemption from the bill’s applicability for entities that are subject to (1) the regulations related to the protection of human subjects under 45 C.F.R. Part 46 and (2) “[r]egulations and agreements related to information collected as part of human subjects research pursuant to the good clinical practice guidelines issued by The International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use; the protection of human subjects under 21 C.F.R. Parts 50 and 56, or personal data used or shared in research … conducted in accordance with applicable law.” This new exemption (which would extend only to actions taken within the scope of the referenced existing regulations and requirements) provides comfort that the bill, if enacted as currently written, would largely leave intact the existing framework for privacy in the context of human subjects research. However, the revised bill retains the original version’s preservation of any state law provisions “that protect the privacy of health information, healthcare information, medical information, medical records, HIV status, or HIV testing,” meaning that the bill would not preempt state privacy regulation over health data. With the subcommittee’s approval, the bill now advances to the full Energy and Commerce Committee for consideration.
Bipartisan Congressional Committee Focuses on New AI Regulation Bill. On June 4, 2024, the House and Senate Joint Economic Committee held a hearing on AI governance and economic growth, which focused on the proposed AI Research Innovation and Accountability Act (S. 3312), a bipartisan bill introduced by Senators Amy Klobuchar (D-MN), John Thune (R-SD), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Lujan (D-NM).
The bill would impose different standards on developers and users of so-called “critical-impact” AI systems, on the one hand, and “high-impact” AI systems, on the other. As defined in the bill, “critical-impact” AI systems are those that implicate critical infrastructure, criminal justice, national security, or individuals’ biometric data. “High-impact” AI systems are those “developed with the intended purpose of making decisions that have a legal or similarly significant effect on the access of an individual” to, among other things, health care or insurance. Digital health AI-based tools that, for example, make a diagnostic assessment to determine a patient’s course of treatment, or decide whether to grant or deny health insurance coverage, could be “high-impact” AI systems under the bill.
Developers of high-impact systems would have to submit transparency reports to the U.S. Department of Commerce describing their design and safety plans for those systems before implementing them and annually thereafter. The reports would have to include a description of how each high-impact AI system will be used and with what types of data, as well as of the potential impacts of the AI system. The Department of Commerce would have authority to impose fines on any high-impact AI system provider that failed to provide sufficiently robust or accurate transparency reports, as well as to prohibit deployment of that provider’s systems.
Although the advancement of the AI Research Innovation and Accountability Act is uncertain, the hearing underscored the growing recognition of the power and risks of AI and the challenges in regulating it. One witness at the June 4 hearing, a senior researcher at the Department of Energy, testified about the capacity of AI to analyze vast quantities of data and thereby to help rapidly advance scientific knowledge. Another witness, a Johns Hopkins University medical professor, testified about the potential for AI to improve health care delivery while reducing costs by making health care services more accessible and enhancing the productivity of providers.
EU and UK News
Regulatory Updates
Council of the European Union Adopts the AI Act. On May 21, 2024, the Council of the European Union formally adopted the Artificial Intelligence Act (AI Act). Following a lengthy negotiation period since the initial proposal by the European Commission (EC) in April 2021, the legislative process for the world’s first binding law on AI is nearing its conclusion. For further details on the negotiations surrounding the text of the AI Act, see our January 2023 Advisory, and our April 2024 digest for details on the agreed provisions of the AI Act.
The AI Act will become law 20 days after its publication in the EU’s Official Journal, and will apply two years after that, with some exceptions for specific provisions.
The EC also put forward the AI Pact, a voluntary initiative intended to encourage companies to comply with the requirements of the AI Act ahead of its full implementation.
Establishment of the European AI Office. On May 29, 2024, the EC established the Artificial Intelligence Office, following the adoption of the Commission Decision establishing the AI Office on February 14, 2024, as mentioned in our April 2024 digest. The AI Office will be responsible for:
- Ensuring the coherent implementation of the AI Act: supporting the governance bodies in EU Member States and directly enforcing the rules for general-purpose AI models
- Coordinating the drawing up of state-of-the-art codes of practice: conducting testing and evaluation of general-purpose AI models, requesting information, and applying sanctions
- Promoting an innovative EU ecosystem for trustworthy AI: enabling access to AI sandboxes and real-world testing
- Ensuring a strategic, coherent, and effective European approach on AI at the international level
The AI Office is currently:
- Preparing guidelines on the AI system definition and on the prohibitions
- Getting ready to coordinate the drawing up of codes of practice for the obligations for general-purpose AI models
- Overseeing the AI Pact, which allows companies to engage with the EC and stakeholders regarding the implementation of the requirements of the AI Act ahead of its application
The first meeting of the AI Office is expected at the end of June 2024.
Council of the European Union Adopts the Extension to IVDR Transition Periods and Accelerated Launch of Eudamed. On May 30, 2024, the Council of the European Union formally adopted the regulation amending the Medical Devices Regulation (EU) 2017/745 and the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR) to extend the transition provisions for certain in vitro diagnostic medical devices under the IVDR, allow for a gradual roll-out of Eudamed so that certain modules will be mandatory from late 2025, and include a notification obligation in case of interruption of supply of a critical device. The details are discussed in our February 2024 digest and in our February 2024 blog post. The regulation will enter into force following publication in the EU’s Official Journal.
Launch of MHRA AI Airlock Regulatory Sandbox. On May 9, 2024, the MHRA launched the AI Airlock, a new regulatory sandbox for AIaMDs. The aim of the AI Airlock is to identify the regulatory challenges posed by standalone AIaMD. The MHRA has created a platform through which regulators, manufacturers, and other relevant stakeholders can bring their expertise and work collaboratively to understand and mitigate novel risks associated with these products. A small number of real-world AIaMD products will be assessed to identify possible regulatory issues that could arise when AIaMD products are used for direct clinical purposes within the National Health Service (NHS). The AI Airlock follows the regulatory sandbox model, but is described as differing from other regulatory sandboxes due to the collaboration between the MHRA, Department of Health and Social Care (DHSC), NHS AI Lab, NHS England, and UK Approved Bodies. You can read more in our May 2024 blog post.
Launch of UK AI Safety Evaluations Platform, Inspect. On May 10, 2024, the UK’s Department for Science, Innovation and Technology (DSIT) and the AI Safety Institute launched a new AI safety testing platform called Inspect. Inspect is a software library that enables innovators to assess specific capabilities of their technologies (for example, core knowledge, ability to reason, and autonomous capabilities) and then generates a score based on the results. The platform is open source and available to the global AI community, with the aim of enhancing the consistency of safety evaluations of AI models across the world.
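To give a flavor of how such an evaluation is put together, the sketch below uses Inspect’s published Python interface. It is a minimal illustration only: the task name, the single sample question, and the model identifier are hypothetical placeholders, and exact parameter names may vary between versions of the library.

```python
# A minimal sketch of an Inspect evaluation, assuming the open-source
# inspect-ai package (pip install inspect-ai). The task name, sample
# data, and model identifier below are illustrative placeholders.
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import match
from inspect_ai.solver import generate

@task
def core_knowledge():
    # A one-sample dataset for demonstration; real safety evaluations
    # run over much larger, curated datasets.
    return Task(
        dataset=[Sample(input="What is the capital of France?", target="Paris")],
        solver=generate(),  # ask the model under test to respond
        scorer=match(),     # score the response against the target
    )

# Run from the command line against a model of choice, for example:
#   inspect eval core_knowledge.py --model openai/gpt-4o
```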
MHRA Publishes Proposals for International Recognition of Medical Devices. On May 21, 2024, the MHRA proposed the adoption of a procedure to recognize the approvals and certifications of medical devices from certain international regulators. The aim is to facilitate faster access to medical devices for patients in Great Britain, to avoid duplicative assessments of devices in the UK, and to allow the MHRA to focus resources on innovative devices that may be excluded from the proposed scheme. In particular, the following are excluded:
- Software as a Medical Device (SaMD) (including AIaMD) products that do not satisfy the MHRA’s intended purpose guidelines
- SaMD (including AIaMD) products approved via a route that relies on equivalence to a predicate device (e.g., the U.S. 510(k) pathway)
The relevant regions from which certificates and approvals would potentially be recognized are Australia, Canada, European Economic Area countries, and the U.S. To benefit from the scheme, the device must meet certain eligibility criteria, including that the labelling and packaging are in English and that the device is the same in “all aspects” as that originally approved or certified by the recognized regulator. The proposed regime provides four different access routes; the applicable route will depend on, for example, the classification of the device.
You can read more about this topic in our blog post from May 2024.
UK Leading the Way on AI Safety — Interim International Scientific Report on the Safety of Advanced AI Published. The interim International Scientific Report on the Safety of Advanced AI was published on May 17, 2024. Commissioned by the UK government, this independent research report was produced by experts from more than 30 countries, the EU, and the UN. It focuses on general-purpose AI, setting out its capabilities, its risks, and how those risks may be mitigated. Publication came ahead of the AI Summit in Seoul (as mentioned in our May 2024 digest), hosted jointly by the UK and South Korea, where world leaders subscribed to the Seoul Declaration, which commits signatories to cooperation and collaboration on thresholds for significant AI risks and on safety testing.
Through commissioning the report and jointly hosting the AI Summit, the UK is positioning itself as a leader in AI safety. The UK also recently announced a collaboration with Canada under which the two countries will undertake joint research in AI safety. This is in addition to the UK’s collaboration with France, as discussed in our March 2024 digest.
Meanwhile, leading AI developers agreed to the Frontier AI Safety Commitments, under which they will publish risk assessments of their cutting-edge AI ahead of the AI Action Summit in France.
Report on AI Governance Published. On May 28, 2024, the House of Commons Science, Innovation and Technology Committee published its report on AI governance. The committee recommends that the government be ready to introduce specific legislation in this area in case regulators’ current powers and the voluntary codes of practice prove ineffective. This will depend on whether the sectoral regulators, such as the MHRA, can implement the government’s overarching principles and keep pace with innovation. The report urges the government to assess whether sectoral regulators’ powers are sufficient to address the risks of AI, and recommends examining how enforcement can be coordinated among sectoral regulators, identifying any lack of clarity or gaps in powers. It also urges the government to ensure that regulators have sufficient resources to investigate and enforce in relation to the development of AI. The MHRA, together with various other health authorities and oversight bodies, submitted joint evidence that was taken into account in the report; they submitted that AI governance in health is generally strong, although different opinions can emerge on issues that cut across multiple bodies’ remits.
Research Report and Updates on the MHRA-NICE Partnership Into Digital Mental Health Technologies. On May 3, 2024, the MHRA published a research report on the public’s perspectives on the benefits, risks, and applicability of digital mental health technologies (DMHT). The report was commissioned by the MHRA and the National Institute for Health and Care Excellence (NICE) as part of a joint three-year partnership, funded by the Wellcome Trust, looking to inform the future regulatory and evaluation framework for DMHT.
On May 7, 2024, the MHRA also published an update on other aspects of the DMHT partnership with NICE. The MHRA reports that it has concluded its work on mapping out the landscape of available DMHTs and their key characteristics and on exploring the key challenges for DMHT across the regulatory and evaluation pathway. This work has led to the development of a conceptual framework for categorizing DMHTs and clearer proposals for how DMHTs qualify as SaMD. It has been submitted for publication and sets up future work to consider the classification of DMHTs as SaMD, as well as clinical evidence and post-market surveillance requirements. Those interested can register with the MHRA to receive future updates on the project.
Private Members’ Bill on AI Regulation Has Been Dropped. In previous digests, we described how the Artificial Intelligence (Regulation) Private Members’ Bill was progressing through the House of Lords, having had its second reading. The bill sought to place AI regulatory principles on a statutory footing and establish a central AI authority to oversee the regulatory approach. This approach differed from that proposed by the UK government, under which core regulatory principles will instead be set out in guidance and applied by existing regulatory authorities in their individual sectors. Although the bill passed its third reading in the House of Lords and was sent to the House of Commons on May 10, 2024 for scrutiny, it has now been dropped due to the announcement of the general election on July 4, 2024. It will be interesting to see whether a similar proposal is put forward, or whether the current government’s more flexible approach to the regulation of AI will change, when the new government is formed.
Privacy Updates
Publication of the ICO’s Strategic Approach to AI. On May 1, 2024, the Information Commissioner’s Office (ICO) published its strategic approach to AI. Like the MHRA’s strategic approach, which was discussed in last month’s digest, this was in response to the February 1, 2024 letter from the Secretaries of State for DSIT and DHSC. The ICO explains that the principles outlined in the government’s white paper already largely mirror the data protection principles that the ICO regulates, and sets out the work it has already done, and plans to do, to implement these principles:
- Publication of guidance. The ICO has published a range of guidance on how data protection law applies to AI: AI and data protection, automated decision-making and profiling, explaining decisions made with AI, and an AI and data protection toolkit. It also tracks the latest developments; for example, it published a report on the impact of neurotechnologies and neurodata on privacy and is holding a consultation series on generative AI. The ICO plans to update its guidance on AI and data protection and on automated decision-making in spring 2025 to incorporate the Data Protection and Digital Information Bill once the bill has passed.
- Provision of advice and support. The ICO offers advice services for AI innovators through its regulatory sandbox, innovation advice service, innovation hub, and consensual audits. It is currently participating in a pilot of the AI and Digital Hub, which allows innovators to put complex questions to multiple regulators simultaneously, and it will be testing new regulatory sandbox projects in the coming months, such as personalized AI for those affected by cancer.
- Regulatory action. The ICO uses its enforcement powers to promote compliance and safeguard the public.
Finally, the ICO explains how it collaborates with other regulators, government, standards bodies, and international partners to promote regulatory coherence.
UK Government Calls for Views on New Voluntary Cyber Security Codes of Practice and the Development of a Global AI Security Standard. On May 15, 2024, the UK government announced two new voluntary codes of practice in the cyber security space: the AI Cyber Security Code of Practice and the Code of Practice for Software Vendors. These codes supplement others already in use, such as the Code of Practice for app store operators and developers. The new codes are intended to help AI and software developers improve cyber security by encouraging them to ensure that their products can withstand attempts at hacking, tampering, and sabotage. In addition, the AI Cyber Security Code of Practice sets out measures that can be taken by various entities across the supply chain to improve the security of AI products. It aims to increase confidence among AI users across a broad range of industries, and it is hoped that, in turn, this will boost efficiency and encourage economic growth.
The AI Cyber Security Code of Practice is meant to open up discussion with a wide range of stakeholders, including industry, and is intended to form the foundation for an eventual global standard on AI security. The government welcomes views on the codes, and on its intention to develop a global AI security standard, until August 9, 2024.
Reimbursement Updates
Consultation Open on Fast-Track MedTech Funding. On May 23, 2024, NICE and the NHS announced proposals to allow MedTech developers to gain access to NHS funding under a new fast-track route for clinically and cost-effective products. This would allow the NHS to introduce “game-changing products” recommended by NICE on a large scale. The new pathway aims to ensure that patients can benefit from the best products, devices, digital technologies, or diagnostic innovations, and to provide greater certainty for MedTech developers. The pathway has been developed according to five guiding principles:
1. It is developed in coordination with NICE, focusing on high-impact products
2. It should support existing and emerging technologies
3. It will include a mechanism for automatic identification of funding for technologies that are clinically and cost-effective to support their adoption on the NHS
4. It should enable change in clinical practices and services
5. It should support bias identification and mitigation
The consultation is open for feedback from patients, clinicians, academics, and industry until August 15, 2024.
IP Updates
UK Intellectual Property Office Releases Updated Guidance on the Examination of Patents Involving Artificial Neural Networks. We reported on the developments in Emotional Perception AI Ltd v. Comptroller-General of Patents, Designs and Trade Marks [2023] EWHC 2948 (Ch) in our December 2023 and February 2024 digests. There, we highlighted for readers and developers of digital health products using artificial neural networks (ANNs) the shift in approach in the UK Intellectual Property Office (UKIPO) Manual of Patent Practice to ensure that examiners do not reject inventions using ANNs under the “program for a computer” exclusion to patentability.
Since then, on May 7, 2024, the UKIPO has updated its guidance on the examination of patent applications relating to artificial intelligence inventions (the AI Guidance). The AI Guidance summarizes the UKIPO’s position on when an AI invention makes a technical contribution and when it is excluded from patent protection.
To reflect Emotional Perception, the AI Guidance confirms that an invention involving an ANN (whether implemented in hardware or software) is not a computer program as such and therefore is not excluded from patent protection for lack of technical contribution. However, we infer from the tone and content of the AI Guidance that the UKIPO is not in agreement with the outcome in Emotional Perception. The AI Guidance notes that examiners are encouraged to consider whether other exclusions to patentability might apply instead.
Emotional Perception has, for now, provided the opportunity for a broader scope of inventions using ANNs to be patentable, but this may be short-lived if the Court of Appeal reverses the decision of the High Court. The Court of Appeal heard the appeal on May 14-15, 2024, and a decision is likely to be provided before the end of July 2024.
The following individuals contributed to this Newsletter:
Amanda Cassidy is employed as a senior health policy advisor at Arnold & Porter’s Washington, D.C. office. Amanda is not admitted to the practice of law.
Eugenia Pierson is employed as a senior health policy advisor at Arnold & Porter’s Washington, D.C. office. Eugenia is not admitted to the practice of law.
Sonja Nesbit is employed as a senior policy advisor at Arnold & Porter’s Washington, D.C. office. Sonja is not admitted to the practice of law.
Mickayla Stogsdill is employed as a senior policy specialist at Arnold & Porter’s Washington, D.C. office. Mickayla is not admitted to the practice of law.
Katie Brown is employed as a policy advisor at Arnold & Porter’s Washington, D.C. office. Katie is not admitted to the practice of law.
© Arnold & Porter Kaye Scholer LLP 2024 All Rights Reserved. This Newsletter is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.