VA's AI Tools Lack Patient Safety Oversight, Watchdog Warns

The seal is seen at the Department of Veterans Affairs building in Washington, June 21, 2013. (Charles Dharapak/AP)

Artificial intelligence chatbots help Department of Veterans Affairs doctors document patient visits and make clinical decisions. But according to a report released Jan. 15 by VA's inspector general, no formal system tracks whether these tools put veterans at risk.

The Jan. 15 preliminary advisory memorandum from VA's Office of Inspector General identified what it calls "a potential patient safety risk" in how the Veterans Health Administration deploys generative AI chat tools in clinical settings. The watchdog found that VHA authorizes two AI systems for use with patient health information — VA GPT and Microsoft 365 Copilot Chat — without coordination with the National Center for Patient Safety.

"VHA does not have a formal mechanism to identify, track or resolve risks associated with generative AI," the OIG report states. The lack of oversight means no feedback loop exists to detect patterns related to patient safety or improve the quality of AI-assisted clinical care.

How VA Doctors Use AI

Clinicians at VA medical centers provide AI chatbots with clinical information and prompts. The systems generate text based on that input, which doctors can then copy into electronic health records. These tools are designed to reduce documentation burden and support medical decision-making.

VA GPT is an internal tool developed by the department specifically for VA staff. Microsoft 365 Copilot Chat is a commercial product available to all VA employees. According to VA's compliance plan for Office of Management and Budget guidance, VA GPT currently has approximately 100,000 users and is estimated to save each user between two and three hours per week.

Both tools depend on user prompts and lack access to web search, so their knowledge bases may not be current. That limitation becomes significant when doctors turn to these systems for up-to-date clinical guidance.


The Oversight Gap

The inspector general's review revealed that VHA's AI efforts for health care operate through what the report describes as "an informal collaboration" between the acting director of VA's National AI Institute and the chief AI officer within VA's Office of Information and Technology.

These officials did not coordinate with the National Center for Patient Safety when authorizing AI chat tools for clinical use, according to interviews conducted by the OIG. That conflicts with VHA Directive 1050.01, which establishes that the Office of Quality Management and the National Center for Patient Safety must "establish and provide operational oversight of VHA quality programs and VHA patient safety programs."

A joint bulletin issued by VA's National AI Institute and Office of Information and Technology acknowledges that generative AI "introduces new risks and unknown consequences that can have a significantly negative impact on the privacy and safety of Veterans." Yet no standardized process exists to manage those risks in clinical applications.

Why AI Errors Matter in Health Care

Generative AI systems can produce inaccurate outputs. Research published in npj Digital Medicine in May 2025 examined AI-generated medical summaries and found that these tools can omit relevant data or generate false information, errors that could affect diagnoses and treatment decisions.

When a doctor uses an AI chatbot to summarize a patient's medical history or suggest treatment options, any inaccuracy becomes part of the patient's care. If the AI omits a relevant drug allergy or mischaracterizes symptoms, the clinician might make decisions based on incomplete or incorrect information.

The OIG report emphasizes this concern: "The OIG is concerned about VHA's ability to promote and safeguard patient safety without a standardized process for managing AI-related risks. Moreover, not having a process precludes a feedback loop and a means to detect patterns that could improve the safety and quality of AI chat tools used in clinical settings."


VA's Broader AI Expansion

The oversight gap comes as VA rapidly expands its use of artificial intelligence. According to a July 2025 Government Accountability Office report, VA listed 229 AI use cases in operation as of 2024, up from prior years. These applications range from advanced medical devices to predictive algorithms designed to identify veterans at high risk of suicide.

VA's September 2025 AI strategy document outlines ambitious plans for AI-assisted clinical documentation, surveillance for health status changes, automated eligibility determination for benefits programs, and AI-enhanced customer support systems. The strategy emphasizes that VA is building infrastructure to support "fast, responsible adoption of common AI tooling."

VA has developed internal guidance for generative AI use, published in July 2023 and updated regularly. The guidance states that VA staff are responsible for reviewing AI-generated content for accuracy before use and that existing security and privacy policies apply. The department also began rolling out role-based AI training for all employees in April 2024.

What Comes Next

The OIG's review remains ongoing. Because this was a preliminary advisory memorandum, the inspector general did not issue formal recommendations. The office plans to continue engaging with VHA leaders and will include a comprehensive analysis of this finding, along with any additional findings, in a final report.

In a statement to news outlets, VA press secretary Pete Kasperowicz emphasized that "VA clinicians only use AI as a support tool, and decisions about patient care are always made by the appropriate VA staff."

The inspector general's decision to release preliminary findings before completing its full review signals the urgency of the concern. "Given the critical nature of the issue," the report states, "the OIG is broadly sharing this preliminary finding so that VHA leaders are aware of this risk to patient safety."

The Wider Context

VA's challenges mirror those facing agencies across the federal government. The July 2025 GAO report found that generative AI use cases across 11 federal agencies increased ninefold between 2023 and 2024. Agency officials consistently cited challenges including difficulty complying with existing federal policies, insufficient technical resources and budget, and the need to maintain appropriate use policies.

A separate analysis from cybersecurity firm Kiteworks found that just 10 percent of governments globally have centralized AI governance, with one-third lacking dedicated AI controls and 76 percent lacking automated mechanisms to shut down high-risk AI systems.

For veterans receiving care at VA facilities, the implications are straightforward. The AI tools that doctors use to document visits and support clinical decisions operate without the formal safety oversight that applies to other aspects of health care delivery. 

