The Year of AI: Learning With Machines to Improve Veteran Health Care
We have a tradition at Federal Practitioner: the December editorial usually features some version of the “best and worst” of the last 12 months in government health care. As we close out a difficult year, I instead offer a cautionary yet promising story that epitomizes both risk and benefit.
In some quarters, 2024 has been the year of artificial intelligence (AI).2 In science fiction, superhuman machines like the Terminator are often associated with apocalyptic threats, and we forget the positive models of human-technology interaction, such as the protective robot in Lost in Space. While AI is not yet as advanced as what has been depicted on screen, it is inextricably interwoven into the daily fabric of our lives. Almost any website you visit for business or pleasure has a chatbot waiting to help (or frustrate) you. Most of us have Alexa, Siri, or another digital assistant organizing our homes and schedules. When I Google “everyday uses of artificial intelligence,” it is AI itself that responds with an overview.
Medicine is not immune. Renowned physician and scientist Eric Topol, MD, suggests that AI represents a “fourth industrial revolution in medicine” that can dramatically improve health care.3 The US Department of Veterans Affairs (VA) has been at the forefront of this new space.4 The story recounted below encapsulates both the enormous benefits AI can bring to health care and the vigilance we must exercise to anticipate and mitigate risk if this is to be an overall positive transition.
The story begins with a key element of AI change—the machine learning predictive algorithm. In this case, the algorithm was designed to predict—and thereby prevent—the top public health priority in federal practice: suicide. The Recovery Engagement and Coordination for Health-Veterans Enhanced Treatment (REACH VET) program was launched in 2017 to assist in identifying the top 0.1% of veterans at the highest risk for suicide.5
At least at this stage of AI in medicine, the safest and most ethical efforts come from collaborations between health care professionals and AI developers that maximize the very different strengths of each partner. REACH VET is an exemplar of this kind of teamwork. The algorithm analyzes more than 60 variables to identify veterans at high risk for suicide; the results are communicated to a REACH VET program coordinator, who notifies the practitioner responsible for the veteran’s care so they can implement evidence-based suicide prevention strategies.5
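To make that workflow concrete, here is a minimal illustrative sketch of a generic “score, rank, notify” pipeline of the kind the public REACH VET descriptions suggest. Everything in it is a hypothetical stand-in: the synthetic data, the model choice, and the notification step; the VA’s actual variables and weights are not public.

```python
# Illustrative sketch only: a generic "score, rank, notify" pipeline.
# Synthetic features, model choice, and notification step are hypothetical
# stand-ins, not the VA's actual REACH VET system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic records: one row per patient, one column per clinical or
# administrative variable (a real model would use > 60 such variables).
n_patients, n_features = 100_000, 60
X = rng.normal(size=(n_patients, n_features))
y = rng.binomial(1, 0.001, size=n_patients)  # rare outcome, ~0.1% base rate

model = LogisticRegression(max_iter=1000).fit(X, y)
risk = model.predict_proba(X)[:, 1]  # estimated risk for each patient

# Flag the top 0.1% of predicted risk for program-coordinator review.
cutoff = np.quantile(risk, 0.999)
flagged = np.flatnonzero(risk >= cutoff)

for patient_id in flagged[:5]:
    # A deployed system would route these cases to the coordinator, who
    # alerts the treating practitioner; here we simply print the first few.
    print(f"patient {patient_id}: predicted risk {risk[patient_id]:.4f}")
```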
In 2021, VA researchers published a study of 173,313 veterans that used a triple differences design to compare outcomes before and after entry into the program. Program participation was associated with more outpatient visits and documented safety plans, and with fewer emergency department visits, inpatient mental health admissions, and recorded suicide attempts.6
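For readers unfamiliar with the design, triple differences (difference-in-difference-in-differences) layers a third contrast on top of the usual before/after and treated/comparison comparisons, netting out both secular trends and group-specific trends unrelated to the program. Schematically, in generic textbook notation rather than the study’s exact specification:

$$
\hat{\delta}_{DDD} = \left(\Delta\bar{Y}_{\text{treated}} - \Delta\bar{Y}_{\text{comparison}}\right)_{\text{group}=1} - \left(\Delta\bar{Y}_{\text{treated}} - \Delta\bar{Y}_{\text{comparison}}\right)_{\text{group}=0},
\qquad
\Delta\bar{Y} = \bar{Y}_{\text{post}} - \bar{Y}_{\text{pre}}.
$$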
A US Government Accountability Office analysis found that “REACH VET had identified veterans who had not been identified through other methods.”7 This was not just an example of AI hype: as a relatively rare and statistically complicated phenomenon, suicide is notoriously difficult to predict and model. Machine learning algorithms like REACH VET’s have unprecedented potential to assist and augment suicide prevention.8
In 2023, veteran service organizations and journalists raised concerns that the algorithm was biased and ignored critical risk factors that put some veterans at increased risk. Based on their analysis, they claimed that the algorithm did not account for risk factors uniquely associated with women veterans, namely military sexual trauma and intimate partner violence.9 Women are the most rapidly growing VA patient population, yet too often they encounter health care disparities, harassment, and stigmatization when seeking care. The congressional Veterans Affairs committees investigated and introduced legislation to update the algorithm.10
VA experts dispute these claims, and a computer science PhD may be required to understand the debate. But as the history of medicine has shown us, every treatment and procedure has benefits and risks. No matter how bright and shiny the technology initially appears, a soft scientific underbelly emerges sooner or later. As with REACH VET, algorithmic bias is often discovered during deployment, when the logic of the laboratory encounters the unpredictable variety of humankind.11 Frequently, those problems are not solely or even primarily technical ones: the data mirror society and reflect its biases.
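A first-pass check that can surface this kind of bias during deployment is straightforward in principle: compare how well the model’s flags capture true cases across subgroups. Below is a minimal sketch with hypothetical column names for the flag, the outcome, and the subgroup; none of this is the VA’s actual audit code.

```python
# Minimal subgroup audit sketch: does the model's high-risk flag capture
# true cases equally well across subgroups? All column names hypothetical.
import pandas as pd

def sensitivity_by_group(df: pd.DataFrame, flag: str, outcome: str, group: str) -> pd.Series:
    """Among patients with the outcome, the share who were flagged, per subgroup."""
    cases = df[df[outcome] == 1]
    return cases.groupby(group)[flag].mean()

# Toy example: markedly lower sensitivity for women than men would mean the
# model is missing at-risk women -- the pattern critics alleged in REACH VET.
df = pd.DataFrame({
    "flagged": [1, 0, 1, 0, 0, 1, 0, 0],
    "attempt": [1, 1, 1, 1, 1, 1, 1, 1],
    "gender":  ["M", "M", "M", "M", "F", "F", "F", "F"],
})
print(sensitivity_by_group(df, "flagged", "attempt", "gender"))
# gender
# F    0.25
# M    0.50
```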
For learning organizations like the VA and the US Department of Defense (DoD), the criticisms of REACH VET signal the need to engage in continuous performance improvement. AI requires human trainers and supervisors who teach the machines and continuously revise and update their lesson plans. The most recent VA data show that in 2021, 6392 veterans died by suicide.12 In congressional testimony, VA leaders reported that as of May 2024, REACH VET was operating in 28 VA facilities and had identified 6700 high-risk veterans.13 REACH VET can save veterans’ lives, which is the sine qua non for our federal health care systems.
The algorithm should be improved to identify all at-risk veterans so they receive lifesaving interventions. Every veteran’s life is sacred, and an algorithm that may prevent suicide must be continuously improved. That is why our representatives did not propose to ban REACH VET or force an AI winter on the VA and DoD. Instead, they called for an update to the algorithm, underscoring the value of machine learning for suicide prediction and prevention.
The epigraph from one of the top AI ethicists and scientists in the world makes the point that AI is not the moral agent here: it is fallible humans who must keep learning along with machines. That is why, at the end of 2024, VA experts are revising the algorithm so REACH VET can help prevent even more veteran suicides in 2025 and beyond.14
1. Waikar S. Health care’s AI future: a conversation with Fei-Fei Li and Andrew Ng. Stanford HAI. May 10, 2021. Accessed November 13, 2024. https://hai.stanford.edu/news/health-cares-ai-future-conversation-fei-fei-li-and-andrew-ng
2. Johnson E. 2023 was the year of AI hype—2024 is the year of AI practicality. Forbes. April 2, 2024. Accessed November 13, 2024. https://www.forbes.com/councils/forbestechcouncil/2024/04/02/2023-was-the-year-of-ai-hype-2024-is-the-year-of-ai-practicality/
3. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books; 2019.
4. Perlis R. The VA was an early adopter of artificial intelligence to improve care—here’s what they learned. JAMA. 2024;332(17):1411-1414. doi:10.1001/jama.2024.20563
5. VA REACH VET initiative helps save veterans’ lives [press release]. US Department of Veterans Affairs. April 3, 2017. Accessed November 13, 2024. https://news.va.gov/36714/va-reach-vet-initiative-helps-save-veterans-lives/
6. McCarthy JF, Cooper SA, Dent KR, et al. Evaluation of the Recovery Engagement and Coordination for Health-Veterans Enhanced Treatment (REACH VET) suicide risk modeling clinical program in the Veterans Health Administration. JAMA Netw Open. 2021;4(10):e2129900. doi:10.1001/jamanetworkopen.2021.29900
7. US Government Accountability Office. Veteran suicide: VA efforts to identify veterans at risk through analysis of health record information. September 14, 2022. Accessed November 13, 2024. https://www.gao.gov/products/gao-22-105165
8. Pigoni A, Delvecchio G, Turtulici N, et al. Machine learning and the prediction of suicide in psychiatric populations: a systematic review. Transl Psychiatry. 2024;14(1):140. doi:10.1038/s41398-024-02852-9
9. Glantz A. VA veteran suicide prevention algorithm favors men. Military.com. May 23, 2024. Accessed November 13, 2024. https://www.military.com/daily-news/2024/05/23/vas-veteran-suicide-prevention-algorithm-favors-men.html
10. BRAVE Act of 2024, S.5210, 118th Congress (2024). https://www.congress.gov/bill/118th-congress/senate-bill/5210/text
11. Ratwani RM, Sutton K, Galarraga JE. Addressing algorithmic bias in health care. JAMA. 2024;332(13):1051-1052. doi:10.1001/jama.2024.1348
12. US Department of Veterans Affairs, Office of Mental Health and Suicide Prevention. 2023 national veteran suicide prevention annual report. November 2023. Accessed November 13, 2024. https://www.mentalhealth.va.gov/docs/data-sheets/2023/2023-National-Veteran-Suicide-Prevention-Annual-Report-FINAL-508.pdf
13. House Committee on Veterans’ Affairs. Health Chairwoman Miller-Meeks opens Iowa field hearing on breakthroughs in VA healthcare. May 13, 2024. Accessed November 13, 2024. https://veterans.house.gov/news/documentsingle.aspx?DocumentID=6452
14. Graham E. VA is updating its AI suicide risk model to reach more women. Nextgov/FCW. October 18, 2024. Accessed November 13, 2024. https://www.nextgov.com/artificial-intelligence/2024/10/va-updating-its-ai-suicide-risk-model-reach-more-women/400377/