

Chairman Rosendale Delivers Opening Statement at VA Subcommittee on Technology Modernization Hearing

WASHINGTON, D.C. - Today, Congressman Rosendale (MT-02), Chairman of the House Committee on Veterans’ Affairs Subcommittee on Technology Modernization, delivered the following remarks at a hearing titled “The Future of Data Privacy and Artificial Intelligence at VA.”

The hearing examines how artificial intelligence will impact data privacy at the VA and how Congress can build a more reliable, secure, and efficient VA amidst the rise of AI.

Chairman Rosendale’s remarks as prepared for delivery:

“Good afternoon, the Subcommittee will come to order. I want to welcome our witnesses to today’s hearing examining how the Brave New World of artificial intelligence will impact data privacy at the VA. This is the Subcommittee’s third privacy hearing. We take this subject very seriously.

Veterans entrust the VA with data on every aspect of their lives—often more information than any other government agency or company possesses. Yet the VA struggles at every level to comply with the law and keep veterans’ health, personal, and financial information secure. Data breaches happen every few months, and they have taken many different forms. We have seen mass errors by a contractor mailing letters to the wrong veterans. We have seen employees lose or steal records and send files beyond the VA network where their ultimate destination is unknown. We have also seen companies gain access to veterans’ data under false pretenses.

No successful, large-scale cyberattack on the VA has been disclosed in several years. But we also know the Department is the target of thousands of attacks every day. It remains a constant risk. The VA can be the target and at fault—sometimes both in the very same data breach. No organization can prevent every breach, but in many of these incidents, VA officials did not realize that veterans’ information had been mishandled until well after the fact. In these situations, time is critical. The only way to step in before veterans’ data makes its way from unwitting recipients to criminals is to move fast. Employees reported most of the breaches we will discuss today, and I commend them for that.

The examples I just described are a significant problem and put veterans in a precarious position, but they represent the Stone Age compared to the privacy risks posed by artificial intelligence. Much has been said about AI here on Capitol Hill. Unfortunately, I think most of it can be characterized as utopian or apocalyptic. The AI companies and their emissaries want us to focus on speculative, civilizational threats rather than the practical problems that are right in front of us.

AI has been with us for several years in different forms, but it is quickly becoming ubiquitous. The VA is accustomed to operating as an island. That has many downsides, but in research and technology, it can actually be beneficial for protecting private information. But the AI business model is moving quickly and overtaking the island. AI is being embedded into all sorts of software, dual-use AI models are proliferating, and narrow AI applications are broadening. In other words, the days of putting one data set into an AI model that only does one thing are ending, and the VA has thousands of contractors and partner companies that access veterans’ health and personal data today. Controlling how they apply AI will be extremely difficult.

Without a doubt, I think the VA is using AI for some admirable purposes. Applying machine learning to analyze medical images can save lives by recognizing indicators of illnesses that the most experienced doctor may miss. Chatbots for customer service can be helpful if done well, and the VA has a lot of catching up to do. Sophisticated automation can clean up VA’s troves of disorganized administrative data in hours, whereas employees have been struggling with it for years. On the other hand, using AI to predict clinical outcomes or mental health problems may be powerful, but it presents a host of ethical problems. Even if the VA manages to prevent bias, the imposition on civil liberties cannot be ignored.

My goal here is to learn more about what the VA is already doing with AI, and how our witnesses plan to adapt the Department’s old-fashioned processes as the technology evolves around them. I appreciate our witnesses being here to explain all that.”

Witnesses included:

Mr. Charles Worthington, Chief Technology Officer, U.S. Department of Veterans Affairs

Gil Alterovitz, Ph.D., Director, VA National Artificial Intelligence Institute, U.S. Department of Veterans Affairs

Mr. John Oswalt, Deputy Chief Information Officer, Office of Freedom of Information Act, U.S. Department of Veterans Affairs

Ms. Stephania Griffin, Director, Information Access and Privacy Office, U.S. Department of Veterans Affairs

Ms. Shane Tews, Nonresident Senior Fellow, American Enterprise Institute