[Image: President Joe Biden at a desk in the Oval Office.]

What’s on the Horizon for Health and Biotech with the AI Executive Order

By Adithi Iyer

Last month, President Biden signed an Executive Order mobilizing an all-hands-on-deck approach to the cross-sector regulation of artificial intelligence (AI). One such sector (mentioned, by my count, 33 times) is health care. This is perhaps unsurprising: the health sector touches almost every other aspect of American life, and it continues to intersect heavily with technological developments. AI is particularly paradigm-shifting here, as the technology dramatically accelerates existing capabilities in analytics, diagnostics, and treatment development. This Executive Order is, therefore, as important a development for health care practitioners and researchers as it is for legal experts. Here are some intriguing takeaways:

Security-Driven Synthetic Biology Regulations Could Affect Drug Discovery Models

It is unsurprising that the White House prioritizes national security measures in acting to regulate AI. But it is certainly eye-catching to see biological security risks join the list. The EO includes biotechnology among its examples of “pressing security risks,” and the Secretary of Commerce, with guidance from the National Institute of Standards and Technology, is charged with enforcing detailed reporting requirements for AI used to develop biological outputs that could create security risks.

Reporting requirements may affect a burgeoning field of AI-mediated drug discovery enterprises, as well as existing companies seeking to adopt the technology. Machine learning is highly valuable in the drug development space because it can search enormous biological and chemical spaces far faster than laboratory screening. Companies that leverage this technology can identify both the “problem proteins” (target molecules) that power diseases and the molecules that can bind to and neutralize these targets (usually, the drug or biologic) in a much shorter time and at much lower cost. To do this, however, the machine learning models in drug discovery applications require a large amount of biological data, usually protein and DNA sequences. That makes drug discovery models quite similar to the ones that the White House deems a security risk. The EO cites synthetic biology as a potential biosecurity risk, likely out of fear that similarly large biological databases could be used to engineer and release synthetic pathogens and toxins.
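To make that data dependence concrete, here is a minimal, deliberately toy sketch in Python. The sequences, the kmers and overlap_score helpers, and the scoring rule are all illustrative inventions, nothing like a production model; the point is only the screen-and-rank pattern that real drug discovery models perform at vastly greater scale over huge sequence databases.

```python
# A deliberately toy illustration, not any company's actual pipeline:
# rank hypothetical candidate binder sequences against a target motif
# by crude k-mer overlap. Real drug discovery models are deep neural
# networks trained on millions of protein and DNA sequences; this only
# shows, in miniature, why such models consume large sequence datasets.

def kmers(seq: str, k: int = 3) -> set:
    """Return the set of length-k subsequences of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def overlap_score(target: str, candidate: str, k: int = 3) -> float:
    """Crude similarity: the fraction of the target's k-mers that also
    appear in the candidate sequence."""
    t, c = kmers(target, k), kmers(candidate, k)
    return len(t & c) / len(t) if t else 0.0

# Hypothetical target pocket motif and candidate peptides (illustrative).
target_motif = "GAVLKDEQH"
candidates = ["AVLKDE", "GGGPPP", "KDEQHW", "GAVLYY"]

for cand in sorted(candidates, key=lambda s: overlap_score(target_motif, s),
                   reverse=True):
    print(cand, round(overlap_score(target_motif, cand), 2))
```

The dual-use worry follows directly from this shape: a model that learns which sequences bind and neutralize a target can, in principle, also learn which sequences cause harm, which is why large biological datasets draw the EO’s attention.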

Those similarities will likely bring drug discovery into the White House’s orbit. The EO sets compute-based capacity and “size” cutoffs for heightened monitoring (initially, models trained with more than 10^26 computing operations, and a far lower 10^23-operation threshold for models trained primarily on biological sequence data), cutoffs that undoubtedly cover many of the Big Tech-powered AI models we know already have drug discovery applications and uses. Drug developers may feel the incidental effects of these requirements, not least because the more recent AI tools in drug discovery generate and model protein sequences to identify target molecules of interest.

These specifications and guidelines will add requirements and limits on the capabilities of big models, but they could also affect smaller and mid-size startups (despite the EO’s calls for increased research support and FTC action to get small businesses up to speed). Increased accountability for AI developers is certainly important, but another potential direction, further downstream of the AI tool itself, might be restricting personnel access to these tools and their outputs and closely protecting the information the models generate, especially when the software is connected to the internet. Either way, we’ll have to wait and see how the market responds, and how the competitive field is shaped by new requirements and new costs.

Keep an Eye on the HHS AI Task Force

One of the most directly impactful measures for health care is the White House’s directive to the Department of Health and Human Services (HHS) to form an AI Task Force by January 2024 to better understand, monitor, and enforce AI safety in health care applications. The wide-reaching directive tasks the group with building out the principles in the White House’s 2022 Blueprint for an AI Bill of Rights, prioritizing patient safety, quality, and the protection of rights.

Any one of the areas of focus in the Task Force’s regulatory action plan will no doubt have major consequences. But perhaps chief among them, and mentioned repeatedly throughout the EO, is the issue of AI-facilitated discrimination in the health care context. The White House directs HHS to create a comprehensive strategy for monitoring the outcomes and quality of AI-enabled health care tools in particular. This vigilance is well-placed: there is no shortage of evidence that such tools, trained on data that encodes biases from historic and systemic discrimination, can further entrench inequitable patient care and health outcomes. Specific regulatory guidance, at least, is sorely needed. Understanding and reforming algorithmic decision-making will be essential to undoing that encoded bias, if that is fully possible. And, very likely, the AI Bill of Rights’ “Human Alternatives, Consideration, and Fallback” principle will mean more human (provider and patient) intervention in decisions generated using these models.

Because so much of the proposed action in AI regulation involves monitoring, the role of data (especially sensitive data, as in the health care context) in this ecosystem cannot be overstated. The HHS Task Force’s directive to develop measures for protecting personally identifiable data in health care may prove an additionally interesting development. Throughout, the EO references the importance of privacy protections undergirding the cross-agency action it envisions. Central to this effort is the White House’s commitment to funding, producing, and implementing privacy-enhancing technologies (PETs). Because health information is particularly sensitive to security risks, and because its breach or compromise inflicts especially personal harms, PETs will likely be of increasingly high value and use in the health care setting. Of course, AI-powered PETs are valuable not just for data protection, but also for enhancing analytic capabilities: PETs in the health care setting may be able to use medical records and other health data to facilitate de-identified public health data sharing and improve diagnostics. Overall, a push toward de-identified health data sharing and use can add a human-led, practical check on the unsettling implications of AI-scale capabilities applied to highly personal information, and on the reality of diminishing anonymity in personal data.
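For a concrete, heavily simplified sense of what one PET looks like in practice, the sketch below releases an aggregate statistic from hypothetical health records using differential privacy, a common privacy-enhancing technique. The dp_count helper, the records, and the epsilon value are illustrative assumptions on my part, not anything the EO itself prescribes.

```python
import random

def dp_count(records, predicate, epsilon=1.0):
    """Release a count with Laplace noise calibrated to a sensitivity of 1
    (adding or removing one patient changes the true count by at most 1)."""
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(scale=1/epsilon) as a random sign times an exponential.
    noise = random.choice([-1, 1]) * random.expovariate(epsilon)
    return true_count + noise

# Hypothetical, already de-identified records (illustrative only).
records = [
    {"age": 64, "dx": "diabetes"},
    {"age": 51, "dx": "hypertension"},
    {"age": 70, "dx": "diabetes"},
]

# Noisy count of diabetes diagnoses; smaller epsilon means more privacy.
print(dp_count(records, lambda r: r["dx"] == "diabetes", epsilon=0.5))
```

The design trade-off is the one the EO gestures at: smaller epsilon values add more noise and hence more privacy at some cost to accuracy, and real deployments pair techniques like this with federated learning, secure enclaves, and formal privacy budgets.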

Sweeping Changes and Watching What’s Next 

The EO’s renewed push for Congress to pass federal legislation formalizing data protections will certainly have big ripples in health care and biotechnology. Whether such a statute would include entire subsections for the health care context, if not a companion or separate bill altogether, is less a question of if than of when. Other questions are far less settled: is now too soon for sweeping AI regulations? Some companies seem to think so, while others think that the EO alone is not enough without meaningful congressional action. Either way, next steps should take care to avoid rewarding the highly resourced few at the expense of competition, and should encourage coordinated action to ensure essential protections in privacy and health security as they relate to AI. Ultimately, this EO leaves more questions than answers, but the sector should be on notice for what’s to come.

Adithi Iyer

Adithi Iyer is a law student (J.D. 2025). Her research interests include cell biology, regenerative medicine, and the law. She previously worked in biomedical research, health care analytics, and tech policy. At the law school, Adithi is interested in examining how emerging biotechnologies intersect with privacy and legal/ethical rights. She also serves on the editorial board of the Harvard Journal of Law and Technology (JOLT).
