The US Food and Drug Administration (FDA) wants to accelerate AI deployment across its centers. FDA Commissioner Martin A. Makary has announced an aggressive timeline to scale up AI use by June 30, 2025, placing a big bet on the technology to transform the US drug approval process.
However, rapid AI deployment at the FDA raises important questions about whether innovation can be balanced with oversight.
Strategic leadership: FDA names its first AI chief
The foundation for the FDA's ambitious AI deployment was laid with the appointment of Jeremy Walsh as the agency's first-ever chief AI officer. Walsh previously led enterprise-scale technology deployments across federal health and intelligence agencies, having spent 14 years as Chief Technologist at government contractor Booz Allen Hamilton.
His appointment, announced shortly before the May 8th rollout announcement, signals the agency's serious commitment to technological transformation. The timing is notable: Walsh's arrival coincided with workforce cuts at the FDA, including the loss of key technical talent.
Among the losses was Sridhar Mantha, former director of strategic programs at the Center for Drug Evaluation and Research (CDER). Ironically, Mantha is now working with Walsh to coordinate the agency-wide deployment.
Pilot Program: Impressive results, limited details
Driving the rapid AI deployment is the reported success of a pilot program for AI-assisted scientific review. Commissioner Makary said he was "blown away by the success of the first AI-assisted scientific review pilot," claiming the technology allowed one scientist to perform a scientific review task in minutes that had previously taken three days.
However, the scope, rigor, and results of the pilot remain unpublished.
The agency has not published detailed reports on the pilot's methodology, validation procedures, or the specific use cases tested. That lack of transparency is concerning given the high-stakes nature of drug evaluation.
When asked for more details, the FDA said that additional information and updates on the initiative would be published in June. For an agency responsible for protecting public health through rigorous scientific review, the absence of published pilot data raises questions about the evidence base behind such an aggressive timeline.
Industry perspective: cautious optimism meets concern
The pharmaceutical industry's response to the FDA's AI deployment reflects a mix of optimism and anxiety. Drug makers have long sought a faster approval process. Makary himself has asked: "Why does it take over 10 years for a new drug to come to market?"
One industry statement welcomed the move while urging caution: "AI is still developing, but leveraging it requires a thoughtful, risk-based approach with patients at the center. We are pleased to see the FDA take concrete action to harness the possibilities of AI."
However, industry experts have raised practical concerns. Mike Hinckle, an FDA compliance expert at K&L Gates, highlighted a key issue: pharmaceutical companies will want to know how their submitted data is protected.
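To make Hinckle's concern concrete, here is a minimal, hypothetical sketch of one safeguard companies might ask about: redacting confidential identifiers from a submission excerpt before any text reaches an external model. The patterns and placeholder names are invented for illustration; real submission handling would require validated controls far beyond regex redaction.

```python
import re

# Hypothetical patterns a sponsor might treat as confidential in a
# submission excerpt; invented for illustration, not an FDA scheme.
CONFIDENTIAL_PATTERNS = {
    "batch_id": re.compile(r"\bBATCH-\d{6}\b"),
    "compound_code": re.compile(r"\b[A-Z]{2,4}-\d{3,5}\b"),
}

def redact(text: str) -> str:
    """Replace confidential tokens with placeholders before the text
    leaves a controlled environment (e.g. for an external review model)."""
    for label, pattern in CONFIDENTIAL_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

excerpt = "Stability data for AB-1234 (BATCH-004417) met acceptance criteria."
print(redact(excerpt))
# Stability data for [COMPOUND_CODE REDACTED] ([BATCH_ID REDACTED]) met acceptance criteria.
```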
The FDA is also reported to have been in discussions with OpenAI about a project called cderGPT, a name that appears to reference CDER, the Center for Drug Evaluation and Research.
Expert warnings: the rush vs. rigor debate
Leading experts in the field have expressed concern about the pace of the deployment. Speaking to Axios, Eric Topol, founder of the Scripps Research Translational Institute, identified key gaps in transparency, including questions about which models are being used, what data they were trained on, and what inputs are being provided for fine-tuning.
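One way to picture the transparency Topol is calling for is a simple disclosure record along the lines of a model card. The sketch below is hypothetical: the field names are not an FDA schema, and the "undisclosed" values mark exactly the gaps critics describe.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    """A minimal disclosure record answering the questions raised above:
    which model, trained on what data, fine-tuned with which inputs.
    Field names are illustrative, not an FDA schema."""
    model_name: str
    base_model: str
    training_data_sources: list
    fine_tuning_inputs: list
    intended_use: str
    validation_summary: str

card = ModelCard(
    model_name="review-assistant-pilot",   # hypothetical
    base_model="undisclosed",              # the gap critics point to
    training_data_sources=["undisclosed"],
    fine_tuning_inputs=["undisclosed"],
    intended_use="drafting portions of scientific reviews",
    validation_summary="no public report at the time of writing",
)
print(json.dumps(asdict(card), indent=2))
```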
Former FDA commissioner Robert Califf struck a more balanced tone. His comments reflect a broader sentiment among experts who support AI integration but question whether the June 30 deadline allows enough time for proper validation and safeguards to be implemented.
Rafael Rosengarten of the Alliance for AI in Healthcare supports automation but emphasizes the need for governance, arguing that policy guidance is needed on what data is used to train AI models and what level of model performance is acceptable.
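Rosengarten's point about acceptable model performance can be made concrete with a pre-deployment acceptance gate. The sketch below is a hypothetical illustration: the metric names and thresholds are assumptions, not FDA policy.

```python
# Hypothetical acceptance thresholds; a real policy would declare these
# in advance and validate how each metric is measured.
ACCEPTANCE_THRESHOLDS = {
    "sensitivity": 0.95,                    # e.g. flagging adverse-event signals
    "specificity": 0.90,
    "agreement_with_human_reviewers": 0.85,
}

def passes_gate(measured: dict) -> bool:
    """Return True only if every declared metric meets its threshold."""
    failures = {
        name: (measured.get(name, 0.0), limit)
        for name, limit in ACCEPTANCE_THRESHOLDS.items()
        if measured.get(name, 0.0) < limit
    }
    for name, (value, limit) in failures.items():
        print(f"FAIL {name}: {value:.2f} < required {limit:.2f}")
    return not failures

# Example: a model that scores well in isolation but diverges from
# human reviewers would be blocked from deployment.
print(passes_gate({"sensitivity": 0.97, "specificity": 0.93,
                   "agreement_with_human_reviewers": 0.78}))
```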
Political context: Trump's deregulatory AI vision
The FDA's AI deployment must be understood in the broader context of the Trump administration's approach to AI governance. Trump's overhaul of federal policy, which replaced Biden-era guardrails with an emphasis on speed and international dominance in the technology, has turned government agencies into a testing ground.
The administration explicitly prioritizes innovation over precaution. Vice President JD Vance outlined four key AI policy priorities, including encouraging "pro-growth AI policies" rather than "overregulating the AI sector," and acting to avoid "an overly precautionary regulatory regime."
That philosophy is clearly shaping how the FDA approaches its AI deployment. With Elon Musk leading the charge under an "AI-first" banner, critics warn that rushed rollouts across agencies could compromise data security, automate critical decisions, and put Americans at risk.
Safeguards and governance: what's missing?
The FDA has committed to maintaining strict information security and to acting in accordance with FDA policies, but specific details about safeguards remain sparse. The agency frames AI as a tool that supports human expertise rather than replacing it, one that can enhance regulatory rigor by helping to predict toxicity and adverse events. This offers some reassurance, but little specificity.
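To illustrate the "supports rather than replaces" framing, here is a minimal, hypothetical sketch of decision-support triage in which a model score only routes a case to a human reviewer and never issues a decision itself. The function, identifiers, and cutoff are invented for illustration.

```python
# Hypothetical triage: the model score affects only queue priority;
# every outcome still ends with a human reviewer.
def triage_submission(case_id: str, toxicity_risk_score: float) -> str:
    """Route a case based on a model's toxicity-risk score (0.0 to 1.0)."""
    if toxicity_risk_score >= 0.5:   # illustrative cutoff, not FDA policy
        return f"{case_id}: priority human review (score {toxicity_risk_score:.2f})"
    return f"{case_id}: standard human review queue (score {toxicity_risk_score:.2f})"

print(triage_submission("CASE-0001", 0.72))  # hypothetical identifier
print(triage_submission("CASE-0002", 0.12))
```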
The absence of a public governance framework for the agency's internal processes stands in contrast to the FDA's guidance to industry.
The agency previously issued draft guidance to pharmaceutical companies with recommendations on using AI to support regulatory decision-making about the safety, efficacy, or quality of drugs and biological products. That draft guidance was informed by feedback from more than 800 external comments and by the agency's experience with over 500 drug submissions involving AI components since 2016.
The wider AI landscape: federal agencies as test sites
The FDA initiative is part of a larger wave of federal AI adoption. The General Services Administration (GSA) is piloting AI chatbots to automate routine tasks, and the Social Security Administration plans to use AI software to transcribe applicant hearings.
However, GSA officials noted that their tool was developed over 18 months, a stark contrast with the FDA's accelerated timeline of just a few weeks at the time of writing.
The rapid federal adoption reflects the Trump administration's belief that America is well positioned to maintain global dominance in AI and that the federal government must harness the benefits of American innovation. The administration also maintains the importance of strong protections for Americans' privacy, civil rights, and civil liberties.
Innovation at a crossroads
The FDA's ambitious timeline embodies a fundamental tension between technological promise and regulatory responsibility. While AI offers clear benefits in automating tedious tasks, the rush to implementation raises important questions about the erosion of transparency, accountability, and scientific rigor.
The June 30th deadline will test whether the agency can maintain the public trust that has long been its cornerstone. Success will require more than technical capability; it will require evidence that oversight has not been sacrificed for speed.
The FDA's AI deployment represents a pivotal moment in drug regulation. The outcome will determine whether rapid AI adoption enhances public health protections or serves as a cautionary tale about prioritizing efficiency over safety in matters of life and death. The stakes could not be higher.
Want to learn more about AI and big data from industry leaders? Check out the AI & Big Data Expo taking place in Amsterdam, California, and London. This comprehensive event is co-located with other leading events including the Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Check out other upcoming Enterprise Technology events and webinars with TechForge here.