Virtual and Digital Health Digest
This digest covers key virtual and digital health regulatory and public policy developments during February and early March 2025 from the United States, United Kingdom, and European Union.
In this issue, you will find the following:
U.S. News
- Health Care Fraud and Abuse Updates
- Corporate Transactions Updates
- Provider Reimbursement Updates
- Privacy and AI Updates
- Policy Updates
U.S. Featured Content
House Energy & Commerce Chair Brett Guthrie (R-KY) and Vice Chair John Joyce (R-PA) announced the creation of a data privacy working group, which plans to develop a “national data privacy standard” and “framework for legislation” in the 119th Congress. On February 21, 2025, the group released a Request for Information to explore federal policy issues related to data privacy and cybersecurity, inviting public responses by April 7, 2025.
EU and UK News
EU/UK Featured Content
Artificial intelligence (AI) has been the focus this month, with certain aspects of the EU AI Act now in force and key guidance published by the European Commission. In addition, the much-criticized AI Liability Directive has been withdrawn by the European Commission. In the UK, the government published its AI Opportunities Action Plan, setting out a proportionate, flexible regulatory approach towards AI, and the Medicines and Healthcare products Regulatory Agency (MHRA) hosted an Innovation Showcase demonstrating how it is using digital technologies and AI throughout the regulatory lifecycle.
U.S. News
Health Care Fraud and Abuse Updates
Missouri Physician Pleads Guilty to False Statements to Medicare. On February 20, 2025, Dr. Jerry Bruggeman pleaded guilty to making false statements relating to a health care matter. A 2020 investigation by the Office of Inspector General for the U.S. Department of Health and Human Services revealed that Dr. Bruggeman received $29,440 in compensation from a telehealth company for orders he signed through an online portal between January 2018 and April 2019. The telehealth company had hired Dr. Bruggeman to “review” and sign orders for cancer genetic testing, pharmacogenetic testing, and durable medical equipment (DME), even though he never interacted with the patients before signing the forms.
Health Care Software and Service Company Vice President Pleads Guilty to $1 Billion Health Care Fraud Conspiracy. On February 20, 2025, Gregory Schreck of Kansas pleaded guilty to operating a fraudulent internet platform that generated false doctors’ orders for medically unnecessary orthotic braces, pain creams, and other items. The online platform operated by Schreck and his co-conspirators connected pharmacies, DME suppliers, and marketers with telemedicine companies that would accept kickbacks in exchange for signed doctors’ orders transmitted through the platform. The doctors’ orders generated by the platform falsely represented that a doctor had examined and treated the patient when, in reality, orders were made without regard to medical necessity and based only on a brief phone call. The conspiracy resulted in more than $1 billion billed to Medicare and other insurers.
Marketing and Durable Medical Equipment Company Owner Convicted for Role in $100 Million Medicare Fraud Scheme. On March 6, 2025, after a month-long jury trial, Raheel Naviwala was convicted of conspiracy to commit health care fraud and wire fraud, one count of health care fraud, conspiracy to violate the Anti-Kickback Statute, and three counts of violating the Anti-Kickback Statute. Naviwala and his co-conspirators purchased lists of Medicare patients’ information and hired telemarketers to convince patients to get orthotic braces. Naviwala and his co-conspirators then paid telemedicine doctors to sign pre-filled prescriptions for braces without speaking to the patients or assessing medical necessity. Naviwala subsequently sold the signed prescriptions to DME companies that could bill Medicare and other federal health care programs. To conceal the fraud, Naviwala and his co-conspirators signed contracts that falsely represented that Naviwala was billing the DME companies for marketing or consulting services.
Corporate Transactions Updates
Powerful Investors Back AI-Powered Digital Health Platforms for Doctors. Prominent investors in the health and technology sphere, including Sequoia Capital and IVP, are betting big on AI-powered digital platforms designed to improve decision-making and reduce the medical documentation burden for clinicians.
In late February and early March 2025, Sequoia Capital led Series A funding rounds for two separate AI platforms for doctors, Freed and OpenEvidence. OpenEvidence, a rapidly growing platform that provides real-time assistance for doctors making critical care decisions, announced on February 19, 2025, that it secured $75 million in Series A funding from Sequoia Capital. OpenEvidence plans to use the funding to “continue building the most trusted AI platform for doctors and other medical professionals in the world.” Freed, an AI platform built by a former Facebook engineer who saw his wife’s challenges as a doctor burdened by onerous medical documentation requirements, provides technology to reduce the time and energy spent on medical documentation. On March 5, 2025, Freed announced it received its first institutional capital, a $30 million funding round led by Sequoia Capital, and unveiled new features including pre-charting and specialty-specific notes.
On February 23, 2025, Abridge, a leader in AI-powered clinical documentation whose enterprise-grade technology converts patient-clinician conversations into structured clinical notes in real time with electronic medical record integrations, announced a $150 million Series C investment led by Lightspeed Venture Partners. The Series C round, which comes only four months after the company’s $30 million Series B round, had participation from several other investors, including IVP, Spark Capital, and CVS Health Ventures. Abridge plans to use the new capital to improve its existing product lines and develop additional medical documentation-assisting capabilities.
Provider Reimbursement Updates
Drug Enforcement Administration Delays Final Rules on Telehealth Prescribing. As we covered in our February 2025 Digest, the U.S. Drug Enforcement Administration (DEA) issued two final rules in the last days of the Biden administration. First, the agency issued a final rule authorizing the telehealth prescription of a six-month supply of buprenorphine, a Schedule III narcotic, for use in the treatment of opioid use disorder. Second, the agency issued a final rule authorizing U.S. Department of Veterans Affairs (VA) practitioners to prescribe controlled substances via telehealth to VA patients without conducting an in-person medical evaluation, provided that another VA practitioner has, at any time, previously conducted an in-person medical evaluation.
The final rules were initially scheduled to become effective on February 18, 2025. Last month, however, the DEA delayed the effective date to March 21, 2025, in accordance with the Trump administration’s Presidential Memorandum titled “Regulatory Freeze Pending Review.” On March 20, 2025, the DEA further delayed the effective date to December 31, 2025. The agency noted that the new effective date will not impact the ability of practitioners covered by the final rules to prescribe controlled substances via telehealth, as such practitioners are covered by COVID-19-era telehealth flexibilities that have been extended through December 31, 2025.
Privacy and AI Updates
Virginia Legislature Passes Legislation to Regulate High-Risk AI Systems. On February 20, 2025, the Virginia legislature passed a bill to regulate “high-risk” AI systems. The Virginia High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) would regulate the development and deployment of “high-risk artificial intelligence systems,” which it defines as AI systems “specifically intended to autonomously make, or be a substantial factor in making, a consequential decision.” The bill broadly defines an artificial intelligence system as “any machine learning-based system that, for any explicit or implicit objective, infers from the inputs such system receives how to generate outputs, including content, decisions, predictions, and recommendations, that can influence physical or virtual environments,” and defines a “consequential decision” as a decision that has a “material legal, or similarly significant, effect on the provision or denial to any consumer” of a key status or service, including access to health care services or insurance.
If the bill is signed into law, both developers and deployers of high-risk AI systems will be required to demonstrate that they have analyzed and mitigated the risks posed by their high-risk AI systems. Deployers of such systems, for example, would have to (1) implement a risk management program for the high-risk system, (2) complete an impact assessment of the high-risk system, (3) notify consumers using the system that they are interacting with an AI system (if applicable), (4) notify consumers of specified items if the high-risk system makes an adverse consequential decision concerning the consumer, and (5) make available a statement summarizing how the deployer manages any reasonably foreseeable risk of algorithmic discrimination caused by the system.
The bill is enforceable only by the state Attorney General. Each violation could result in civil penalties of up to $1,000, plus reasonable attorney fees, expenses, and costs, with penalties of up to $10,000 per willful violation. If enacted, the law will take effect on July 1, 2026.
Policy Updates
Congress Passes Continuing Resolution (CR); Extends Medicare Telehealth Flexibilities. On March 14, 2025, President Trump signed into law the “Full-Year Continuing Appropriations and Extensions Act, 2025” (H.R. 1968), which passed the House in a nearly party-line vote of 217-213 and the Senate in a vote of 54-46, funding the federal government through September 30, 2025. While congressional Democrats initially were unified in opposition to the CR, Senate Minority Leader Chuck Schumer (D-NY) and nine other Democratic Senators changed their position to avert a shutdown. The CR extends key health provisions, including Medicare telehealth flexibilities, through September 30, 2025.
Dr. Oz Promotes Artificial Intelligence During CMS Nomination Hearing. On March 14, 2025, the Senate Finance Committee held a nomination hearing for Dr. Mehmet Oz to be the next Centers for Medicare and Medicaid Services (CMS) Administrator. Dr. Oz said he hopes to harness the power of AI to automate Medicare Advantage’s prior authorization process and to reduce the number of procedures subject to prior authorization from around 5,500 to 1,000.
HELP Committee Holds Nomination Hearing for U.S. Food and Drug Administration (FDA) Commissioner. On March 6, 2025, the Senate Health, Education, Labor, and Pensions (HELP) Committee held its nomination hearing for Marty Makary, M.D., M.P.H., to be the next FDA Commissioner. In his opening statement, Senate HELP Chair Bill Cassidy, M.D. (R-LA) stated that, while the FDA is regarded around the world as the gold standard in safeguarding public health, the agency faces significant challenges. He urged the Trump administration to cut red tape across the government and examine innovative ways to address bottlenecks in the FDA’s review process, such as utilizing AI and other innovative technologies to improve efficiency and accelerate drug discovery.
House Republicans Launch Data Privacy Working Group. On February 12, 2025, House Energy & Commerce Chair Brett Guthrie (R-KY) and Vice Chair John Joyce (R-PA) announced the creation of a data privacy working group, which plans to develop a “national data privacy standard” and “framework for legislation” in the 119th Congress. On February 21, 2025, Reps. Guthrie and Joyce released a Request for Information to explore federal policy issues related to data privacy and cybersecurity, inviting public responses to PrivacyWorkingGroup@mail.house.gov by April 7, 2025. House AI Task Force Co-Chairs Jay Obernolte (R-CA) and Ted Lieu (D-CA) recently spoke to various reporters about Congress’ future work to regulate AI and the need to address “bad actors” in the market, with Republicans preferring an “incremental” regulatory approach.
EU and UK News
Regulatory Updates
First Provisions of the EU AI Act Now Apply. The first provisions of the EU Artificial Intelligence Act (EU AI Act) are now in effect. These include the definition of what qualifies as an AI system, the AI literacy obligation (which requires companies developing, placing on the market, or using AI systems to ensure that users have a sufficient level of AI literacy), and the prohibited AI use cases under the EU AI Act. The remaining provisions of the EU AI Act will apply in accordance with the Act’s transition timelines.
European Commission Publishes Guidelines Aimed at Companies Developing or Using AI Systems. The guidelines provide clarity on the definition of AI systems and on the prohibited AI practices under the EU AI Act. On the definition, the guidelines clarify that only technologies that learn, reason, or adjust intelligently qualify as AI systems; traditional software, such as simple prediction models or basic data processing software, is excluded. On prohibited AI practices, the guidelines provide concrete examples of what does and does not qualify as a prohibited practice. They also outline the responsibilities of companies in relation to prohibited AI practices. Both sets of guidelines have yet to be formally adopted by the European Commission.
UK Government Publishes AI Opportunities Action Plan (the Report). The Report sets out a proportionate, flexible regulatory approach towards AI. While life sciences is not its main focus, the Report recognizes that ineffective regulation could reduce AI uptake in sectors such as the medical sector, and features a number of examples of how AI can be used in health care. The Report recommends that the government appoint AI Sector Champions in key industries, including life sciences, to collaborate with industry and government on AI adoption plans, and that all regulators (including the MHRA) publish annual reports on how they have enabled AI-driven innovation and growth.
MHRA Innovation Showcase. Earlier this month, the MHRA hosted an Innovation Showcase demonstrating its work across the spectrum of innovation, with a focus on AI prototypes in the regulatory lifecycle. A number of use cases within the MHRA were demonstrated, highlighting a “prove by doing” approach to innovation. AI featured heavily in the use cases, including a generative AI assistant that responds to questions about the British Pharmacopoeia, the use of AI to assist the assessment of clinical trial applications, and the use of AI to identify online sellers of counterfeit medicinal products. The MHRA intends to expand this approach into other areas where innovation and AI can deliver productivity gains.
UK Online Pharmacies Must Strengthen Safeguards for Supply of Medicines Via Telehealth Services. The UK’s General Pharmaceutical Council published new guidance for registered pharmacies providing pharmacy services at a distance, including on the internet. The guidance introduces enhanced safety measures whereby prescribers must take additional steps to ensure the information that a person provides in order to obtain medicines from an online pharmacy is accurate. Notably, medicines categorized as “high-risk” should not be prescribed based on an online questionnaire alone. See our February BioSlice Blog for more information.
Liability Updates
European Commission Withdraws AI Liability Directive (AILD), as Confirmed in the European Commission 2025 Work Program and Annexes. Proposed by the European Commission in September 2022, the AILD aimed to ensure broader protection for damage caused by AI systems. It faced considerable criticism from members of the European Parliament, as well as from industry, who argued it would add unnecessary regulatory burden. In particular, the EU AI Act and the new Product Liability Directive already set out a framework for the regulation of AI and provide redress for those who may suffer harm from AI; a further set of overlapping rules was seen as duplicative and a potential source of confusion.
The following individuals contributed to this Newsletter:
Eugenia Pierson is employed as a senior health policy advisor at Arnold & Porter’s Washington, D.C. office. Eugenia is not admitted to the practice of law.
Amanda Cassidy is employed as a senior health policy advisor at Arnold & Porter’s Washington, D.C. office. Amanda is not admitted to the practice of law.
Sonja Nesbit is employed as a senior policy advisor at Arnold & Porter’s Washington, D.C. office. Sonja is not admitted to the practice of law.
Mickayla Stogsdill is employed as a senior policy specialist at Arnold & Porter’s Washington, D.C. office. Mickayla is not admitted to the practice of law.
© Arnold & Porter Kaye Scholer LLP 2025 All Rights Reserved. This Newsletter is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.