Gastric Cancer Prevention: New AGA Update Reflects Latest High-Risk Screening and Surveillance Advice

Clinicians can help reduce gastric cancer incidence and mortality in high-risk groups through endoscopic screening and surveillance of precancerous conditions, such as gastric intestinal metaplasia (GIM), according to a new clinical practice update from AGA.

The update supports additional gastric guidance published so far in 2025, including a clinical guideline on the diagnosis and management of gastric premalignant conditions (GPMC) from the American College of Gastroenterology (ACG) and upper GI endoscopy quality indicators from ACG and the American Society for Gastrointestinal Endoscopy (ASGE).

“The synergy of these three publications coming out at the same time helps us to finally establish surveillance of high-risk gastric conditions in practice, as we do in the colon and esophagus,” said Douglas R. Morgan, MD, professor of medicine in gastroenterology and hepatology and director of Global Health programs in gastroenterology at the University of Alabama at Birmingham.

Dr. Douglas R. Morgan

Morgan, who wasn’t involved with the AGA update, served as lead author for the ACG guideline and co-author of the ACG-ASGE quality indicators. He also co-authored the 2024 ACG clinical guideline on treating Helicobacter pylori infection, which has implications for gastric cancer.

“The AGA and ACG updates provide detail, while the QI document is an enforcer with medical, legal, and reimbursement implications,” he said. “We have an alignment of the stars with this overdue move toward concrete surveillance for high-risk lesions in the stomach.”

The clinical practice update was published in Gastroenterology.

 

Gastric Cancer Screening

Gastric cancer remains a leading cause of preventable cancer and mortality in certain US populations, the authors wrote. The top ways to reduce mortality include primary prevention, particularly by eradicating H pylori, and secondary prevention through screening and surveillance.

High-risk groups in the United States should be considered for gastric cancer screening, including first-generation immigrants from high-incidence regions and potentially other non-White racial and ethnic groups, those with a family history of gastric cancer in a first-degree relative, and those with certain hereditary GI polyposis or hereditary cancer syndromes.

Endoscopy remains the best test for screening or surveillance of high-risk groups, the authors wrote, since it allows for direct visualization to endoscopically stage the mucosa, identify any concerning areas of neoplasia, and enable biopsies. Both endoscopic and histologic staging are key for risk stratification and surveillance decisions.

In particular, clinicians should use a high-definition white light endoscopy system with image enhancement, gastric mucosal cleansing, and insufflation to see the mucosa. As part of this, clinicians should allow for adequate visual inspection time, photodocumentation, and a systematic biopsy protocol for mucosal staging, where appropriate.

In addition, clinicians should consider H pylori eradication as an essential adjunct to endoscopic screening, the authors wrote. Opportunistic screening for H pylori should be considered in high-risk groups, and familial-based testing should be considered among adult household members of patients who test positive for H pylori.

 

Endoscopic Biopsy and Diagnosis

In patients with suspected gastric atrophy — with or without GIM — gastric biopsies should be obtained with a systematic approach, the authors wrote. Clinicians should take a minimum of five biopsies, sampling from the antrum/incisura and corpus.

Endoscopists should work with their pathologists on consistent documentation of histologic risk-stratification parameters when atrophic gastritis is diagnosed, the authors wrote. To inform clinical decision-making, this should include documentation of the presence or absence of H pylori infection, severity of atrophy or metaplasia, and histologic subtyping of GIM.

Although GIM and dysplasia are endoscopically detectable, these findings often go undiagnosed when endoscopists aren’t familiar with the characteristic visual features, the authors wrote. More training is needed, especially in the US, and although artificial intelligence tools appear promising for detecting early gastric neoplasia, data remain too preliminary to recommend routine use, the authors added.

Since indefinite and low-grade dysplasia can be difficult to identify by endoscopy and to diagnose accurately on histopathology, all dysplasia should be confirmed by an experienced gastrointestinal pathologist, the authors wrote. Clinicians should refer patients with visible or nonvisible dysplasia to an endoscopist or center with expertise in gastric neoplasia.

 

Endoscopic Management and Surveillance

If an index screening endoscopy doesn’t identify atrophy, GIM, or neoplasia, ongoing screening should be based on a patient’s risk factors and preferences. If the patient has a family history or multiple risk factors, ongoing screening should be considered. However, the optimal screening intervals in these scenarios aren’t well-defined.

Patients with confirmed gastric atrophy should undergo risk stratification, the authors wrote. Those with severe atrophic gastritis or multifocal/incomplete GIM would likely benefit from endoscopic surveillance, particularly if they have other risk factors such as family history. Surveillance should be considered every 3 years, though shorter intervals may be advisable for those with multiple risk factors such as severe GIM.

Patients with high-grade dysplasia or early gastric cancer should undergo endoscopic submucosal dissection (ESD), with the goal of en bloc, R0 resection to enable accurate pathologic staging and the intent to cure. Eradicating active H pylori infection is essential — but shouldn’t delay endoscopic intervention, the authors wrote.

In addition, patients with a history of successfully resected gastric dysplasia or cancer should undergo endoscopic surveillance. Although post-ESD surveillance intervals have been suggested in other recent AGA clinical practice updates, additional data are needed, particularly for US recommendations, the authors wrote.

Although type 1 gastric carcinoids in patients with atrophic gastritis are typically indolent, especially if less than 1 cm, endoscopists may consider resecting them and should resect lesions between 1 and 2 cm. Patients with lesions over 2 cm should undergo cross-sectional imaging and be referred for surgical resection, given the risk for metastasis.

 

Patient-Centered Approach

The guideline authors suggested thinking about screening and surveillance on a patient-level basis. For instance, only those who are fit for endoscopic or potentially surgical treatment should undergo gastric cancer screening and continued surveillance of GPMC, they wrote. If a person is no longer fit for endoscopic or surgical treatment, whether due to limited life expectancy or other comorbidities, then screening should be stopped.

In addition, to achieve health equity, clinicians should take a personalized approach to assess a patient’s risk for gastric cancer and determine whether to pursue screening and surveillance, the authors wrote. Modifiable risk factors — such as tobacco use, high-salt and processed food diets, and lack of health care — should also be addressed, since most of these risk factors disproportionately affect high-risk patients and represent healthcare disparities, they added.

Dr. Hashem El-Serag

“This update provides clinicians with a framework for understanding the natural history and epidemiology of gastric polyps, as well as guidance on best practices for the endoscopic detection and classification of gastric polyps, best practices for the endoscopic resection of gastric polyps, and best practices for endoscopic surveillance following resection,” said Hashem El-Serag, MD, professor and chair of medicine at the Baylor College of Medicine and director of the Texas Medical Center Digestive Diseases Center in Houston.

El-Serag, who wasn’t involved with the clinical practice update, has researched and published on consensus around the diagnosis and management of GIM.

“Stomach polyps are commonly found during routine endoscopic procedures. They are mostly asymptomatic and incidental, and therefore, clinicians may not be prepared ahead of time on how to deal with them,” he said. “The appropriate management requires proper identification and sampling of the polyp features and the uninvolved gastric mucosa, as well as a clear understanding of the risk factors and prognosis. Recent changes in the epidemiology and endoscopic management of gastric polyps makes this update timely and important.”

The update received no particular funding. The authors disclosed receiving grant support, having consultant relationships with, and serving in advisory roles for numerous pharmaceutical, biomedical, and biotechnology firms. Morgan and El-Serag reported having no relevant disclosures.

A version of this article appeared on Medscape.com.

Computer-Aided Colonoscopy Not Ready for Prime Time: AGA Clinical Practice Guideline

An AGA multidisciplinary panel has reached the conclusion that no recommendation can be made for or against the use of computer-aided detection (CADe)–assisted colonoscopy for colorectal cancer (CRC), the third most common cause of cancer mortality in the United States.

The systematic data review is a collaboration between AGA and The BMJ’s MAGIC Rapid Recommendations. The BMJ issued a separate recommendation against CADe shortly after the AGA guideline was published.

The review, led by Shahnaz S. Sultan, MD, MHSc, AGAF, of the Division of Gastroenterology, Hepatology, and Nutrition at the University of Minnesota, Minneapolis, and recently published in Gastroenterology, found only very low-certainty GRADE-based evidence for several critical long-term outcomes, both desirable and undesirable. These included the following: 11 fewer CRCs per 10,000 individuals and two fewer CRC deaths per 10,000 individuals, an increased burden of more intensive surveillance colonoscopies (635 more per 10,000 individuals), and cost and resource implications.

Dr. Shahnaz S. Sultan

This technology did, however, yield an 8% (95% CI, 6-10) absolute increase in the adenoma detection rate (ADR) and a 2% (95% CI, 0-4) increase in the detection rate of advanced adenomas and/or sessile serrated lesions. “How this translates into a reduction in CRC incidence or death is where we were uncertain,” Sultan said. “Our best effort at trying to translate the ADR and other endoscopy outcomes to CRC incidence and CRC death relied on the modeling study, which included a lot of assumptions, which also contributed to our overall lower certainty.”

The systematic review and meta-analysis included 41 randomized controlled trials with more than 32,108 participants who underwent CADe-assisted colonoscopy. This technology was associated with a higher polyp detection rate than standard colonoscopy: 56.1% vs 47.9% (relative risk [RR], 1.22; 95% CI, 1.15-1.28). It also had a higher ADR: 44.8% vs 37.4% (RR, 1.22; 95% CI, 1.16-1.29).

But although CADe-assisted colonoscopy may increase ADR, it carries a risk for overdiagnosis, as most polyps detected during colonoscopy are diminutive (< 5 mm) and of low malignant potential, the panel noted. Approximately 25% of lesions are missed at colonoscopy. More than 15 million colonoscopies are performed annually in the United States, but studies have demonstrated variable quality of colonoscopies across key quality indicators.

“Artificial intelligence [AI] is revolutionizing medicine and healthcare in the field of GI [gastroenterology], and CADe in colonoscopy has been brought to commercialization,” Sultan told GI & Hepatology News. “Unlike many areas of endoscopic research where we often have a finite number of clinical trial data, CADe-assisted colonoscopy intervention has been studied in over 44 randomized controlled trials and numerous nonrandomized, real-world studies. The question of whether or not to adopt this intervention at a health system or practice level is an important question that was prioritized to be addressed as guidance was needed.”

Commenting on the guideline but not involved in its formulation, Larry S. Kim, MD, MBA, AGAF, a gastroenterologist at South Denver Gastroenterology in Denver, Colorado, said his practice group has used the GI Genius AI system in its affiliated hospitals but has so far chosen not to implement the technology at its endoscopy centers. “At the hospital, our physicians have the ability to utilize the system for select patients or not at all,” he told GI & Hepatology News.

Dr. Larry S. Kim

The fact that The BMJ reached a different conclusion based on the same data, evidence-grading system, and microsimulation, Kim added, “highlights the point that when evidence for benefit is uncertain, underlying values are critical.” In declining to make a recommendation, the AGA panel balanced the benefit of improved detection of potentially precancerous adenomas vs increased resource utilization in the face of unclear benefit. “With different priorities, other bodies could reasonably decide to recommend either for or against CADe.”

 

The Future

According to Sultan, gastroenterologists need a better understanding of patient values and preferences and the value placed on increased adenoma detection, which may also lead to more lifetime colonoscopies without reducing the risk for CRC. “We need better intermediate- and long-term data on the impact of adenoma detection on interval cancers and CRC incidence,” she said. “We need data on detection of polyps that are more clinically significant such as those 6-10 mm in size, as well as serrated sessile lesions. We also need to understand at the population or health system level what the impact is on resources, cost, and access.”

Ultimately, the living guideline underscores the trade-off between desirable and undesirable effects and the limitations of current evidence to support a recommendation; as an iterative AI application, CADe will need further validation and better training to improve.

With the anticipated improvement in software accuracy as AI machine learning reads increasing numbers of images, Sultan added, “the next version of the software may perform better, especially for polyps that are more clinically significant or for flat sessile serrated polyps, which are harder to detect. We plan to revisit the question in the next year or two and potentially revise the guideline.”

These guidelines were fully funded by the AGA Institute with no funding from any outside agency or industry.

Sultan is supported by the US Food and Drug Administration. Co-authors Shazia Mehmood Siddique, Dennis L. Shung, and Benjamin Lebwohl are supported by grants from the National Institute of Diabetes and Digestive and Kidney Diseases. Theodore R. Levin is supported by the Permanente Medical Group Delivery Science and Applied Research Program. Cesare Hassan is a consultant for Fujifilm and Olympus. Peter S. Liang reported doing research work for Freenome and advisory board work for Guardant Health and Natera.

Kim is the AGA president-elect. He disclosed no competing interests relevant to his comments.

A version of this article appeared on Medscape.com.

Elemental Diet Eases Symptoms in Microbiome Gastro Disorders

Short-term adherence to a palatable elemental diet (PED) significantly improved symptoms and the gut microbiota in adults with microbiome-driven gastrointestinal disorders, according to a new study.

Dr. Ali Rezaie

“Elemental diets have long shown promise for treating gastrointestinal disorders like Crohn’s disease, eosinophilic esophagitis, SIBO (small intestinal bacterial overgrowth), and IMO (intestinal methanogen overgrowth), but poor palatability has limited their use,” lead author Ali Rezaie, MD, medical director of the Gastrointestinal (GI) Motility Program and director of Bioinformatics at Cedars-Sinai Medical Center, Los Angeles, told GI & Hepatology News.

Elemental diets are specialized formulas tailored to meet an individual’s specific nutritional needs and daily requirements for vitamins, minerals, fat, free amino acids, and carbohydrates.

In SIBO and IMO specifically, only about half the patients respond to antibiotics, and many require repeat treatments, which underscores the need for effective nonantibiotic alternatives, said Rezaie. “This is the first prospective trial using a PED, aiming to make this approach both viable and accessible for patients,” he noted.

 

Assessing a Novel Diet in IMO and SIBO

In the study, which was recently published in Clinical Gastroenterology and Hepatology, Rezaie and colleagues enrolled 30 adults with IMO (40%), SIBO (20%), or both (40%). The mean participant age was 45 years, and 63% were women.

All participants completed 2 weeks of a PED, transitioned to 2-3 days of a bland diet, and then resumed their regular diets for 2 weeks.

The diet consisted of multiple 300-calorie packets, adjusted for individual caloric needs. Participants could consume additional packets for hunger but were prohibited from eating other foods. There was no restriction on water intake.

The primary endpoint was changes in the stool microbiome after the PED and reintroduction of regular food. Secondary endpoints included lactulose breath test normalization to determine bacterial overgrowth in the gut, symptom response, and adverse events.

Researchers collected 29 stool samples at baseline, 27 post-PED, and 27 at study conclusion (2 weeks post-diet).

 

Key Outcomes

Although the stool samples’ alpha diversity decreased after the PED, the difference was not statistically significant at the end of the study. However, 30 bacterial families showed significant differences in relative abundance post-PED.

Daily symptom severity improved significantly during the second week of the diet compared with baseline, with reduction in abdominal discomfort, bloating, distention, constipation, and flatulence. Further significant improvements in measures such as abdominal pain, diarrhea, fatigue, urgency, and brain fog were observed after reintroducing regular food.

“We observed 73% breath test normalization and 83% global symptom relief — with 100% adherence and tolerance to 2 weeks of exclusive PED,” Rezaie told GI & Hepatology News. No serious adverse events occurred during the study, he added.

Lactulose breath test normalization rates post-PED were 58% in patients with IMO, 100% in patients with SIBO, and 75% in those with both conditions.

The extent of patient response to PED was notable, given that 83% had failed prior treatments, Rezaie said.

“While we expected benefit based on palatability improvements and prior retrospective data, the rapid reduction in methane and hydrogen gas — and the sustained microbiome modulation even after reintroducing a regular diet — exceeded expectations,” he said. A significant reduction in visceral fat was another novel finding.

“This study reinforces the power of diet as a therapeutic tool,” Rezaie said, adding that the results show that elemental diets can be palatable, thereby improving patient adherence, tolerance, and, eventually, effectiveness. This is particularly valuable for patients with SIBO and IMO who do not tolerate or respond to antibiotics, prefer nonpharmacologic options, or experience recurrent symptoms after antibiotic treatment.

 

Limitations and Next Steps

Study limitations included the lack of a placebo group with a sham diet, the short follow-up after reintroducing a regular diet, and the inability to assess microbial gene function.

However, the results support the safety, tolerance, and benefit of a PED in patients with IMO/SIBO. Personalized dietary interventions that support the growth of beneficial bacteria may be an effective approach to treating these disorders, Rezaie and colleagues noted in their publication.

Although the current study is a promising first step, longer-term studies are needed to evaluate the durability of microbiome and symptom improvements, Rezaie said.

 

Making the Most of Microbiome Manipulation

Elemental diets may help modulate the gut microbiome while reducing immune activation, making them attractive for microbiome-targeted gastrointestinal therapies, Jatin Roper, MD, a gastroenterologist at Duke University, Durham, North Carolina, told GI & Hepatology News.

Dr. Jatin Roper

“Antibiotics are only effective in half of SIBO cases and often require retreatment, so better therapies are needed,” said Roper, who was not affiliated with the study. He added that its findings confirmed the researchers’ hypothesis that a PED can be both safe and effective in patients with SIBO.

Roper noted the 83% symptom improvement as the study’s most unexpected and encouraging finding, as it represents a substantial improvement compared with standard antibiotic therapy. “It is also surprising that the tolerance rate of the elemental diet in this study was 100%,” he said.

However, diet palatability remains a major barrier in real-world practice.

“Adherence rates are likely to be far lower than in trials in which patients are closely monitored, and this challenge will not be easily overcome,” he added.

The study’s limitations, including the lack of metagenomic analysis and a placebo group, are important to address in future research, Roper said. In particular, controlled trials of elemental diets are needed to determine whether microbiome changes are directly responsible for symptom improvement.

The study was supported in part by Good LFE and the John and Geraldine Cusenza Foundation. Rezaie disclosed serving as a consultant/speaker for Bausch Health and having equity in Dieta Health, Gemelli Biotech, and Good LFE. Roper had no financial conflicts to disclose.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

Wearable Devices May Predict IBD Flares Weeks in Advance


Wearable devices like the Apple Watch and Fitbit may help identify and predict inflammatory bowel disease (IBD) flares, and even distinguish between inflammatory and purely symptomatic episodes, according to investigators.

These findings suggest that widely used consumer wearables could support long-term monitoring of IBD and other chronic inflammatory conditions, lead author Robert P. Hirten, MD, of Icahn School of Medicine at Mount Sinai, New York, and colleagues reported.

 

Dr. Robert P. Hirten

“Wearable devices are an increasingly accepted tool for monitoring health and disease,” the investigators wrote in Gastroenterology. “They are frequently used in non–inflammatory-based diseases for remote patient monitoring, allowing individuals to be monitored outside of the clinical setting, which has resulted in improved outcomes in multiple disease states.”

Progress has been slower for inflammatory conditions, the investigators noted, despite interest from both providers and patients. Prior studies have explored activity and sleep tracking, as well as sweat-based biomarkers, as potential tools for monitoring IBD.

Hirten and colleagues took a novel approach, focusing on physiologic changes driven by autonomic nervous system dysfunction — a hallmark of chronic inflammation. Conditions like IBD are associated with reduced parasympathetic activity and increased sympathetic tone, which in turn affect heart rate and heart rate variability. Heart rate tends to rise during flares, while heart rate variability decreases.
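
For readers who want a concrete sense of these metrics, the short sketch below computes heart rate and two common time-domain heart rate variability measures (SDNN and RMSSD) from a series of beat-to-beat (RR) intervals. The interval values are invented for illustration and are not data from the study; wearables derive comparable numbers from their optical or ECG sensors.

```python
import numpy as np

# Hypothetical beat-to-beat (RR) intervals in milliseconds; real values would
# come from a wearable's photoplethysmography or ECG stream (not study data).
rr_ms = np.array([812, 798, 830, 845, 790, 805, 820, 815, 840, 800], dtype=float)

heart_rate_bpm = 60_000.0 / rr_ms.mean()        # mean heart rate from mean RR interval
sdnn = rr_ms.std(ddof=1)                        # SDNN: SD of all RR intervals
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))   # RMSSD: root mean square of successive differences

print(f"HR {heart_rate_bpm:.0f} bpm, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms")
```

In the framework described above, a flare would be expected to push heart rate up and the SDNN/RMSSD values down.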

Their prospective cohort study included 309 adults with Crohn’s disease (n = 196) or ulcerative colitis (n = 113). Participants used their own or a study-provided Apple Watch, Fitbit, or Oura Ring to passively collect physiological data, including heart rate, resting heart rate, heart rate variability, and step count. A subset of Apple Watch users also contributed oxygen saturation data.

Participants also completed daily symptom surveys using a custom smartphone app and reported laboratory values such as C-reactive protein, erythrocyte sedimentation rate, and fecal calprotectin, as part of routine care. These data were used to identify symptomatic and inflammatory flare periods.

Over a mean follow-up of about 7 months, the physiological data consistently distinguished both types of flares from periods of remission. Heart rate variability dropped significantly during flares, while heart rate and resting heart rate increased. Step counts decreased during inflammatory flares but not during symptom-only flares. Oxygen saturation stayed mostly the same, except for a slight drop seen in participants with Crohn’s disease.

These physiological changes could be detected as early as 7 weeks before a flare. Predictive models that combined multiple metrics — heart rate variability, heart rate, resting heart rate, and step count — were highly accurate, with F1 scores as high as 0.90 for predicting inflammatory flares and 0.83 for predicting symptomatic flares.
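
The F1 score used to summarize model accuracy is the harmonic mean of precision and recall. The minimal sketch below scores a hypothetical flare classifier with scikit-learn; the labels are made up for illustration and do not come from the study.

```python
from sklearn.metrics import f1_score, precision_score, recall_score

# Hypothetical week-level labels: 1 = flare, 0 = remission (not study data).
y_true = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0, 1, 0, 1, 1]   # output of a hypothetical model

precision = precision_score(y_true, y_pred)
recall = recall_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)              # harmonic mean of precision and recall
print(f"precision {precision:.2f}, recall {recall:.2f}, F1 {f1:.2f}")
```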

In addition, wearable data helped differentiate between flares caused by active inflammation and those driven by symptoms alone. Even when symptoms were similar, heart rate variability, heart rate, and resting heart rate were significantly higher when inflammation was present—suggesting wearable devices may help address the common mismatch between symptoms and actual disease activity in IBD.

“These findings support the further evaluation of wearable devices in the monitoring of IBD,” the investigators concluded.

The study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases and Ms. Jenny Steingart. The investigators disclosed additional relationships with Agomab, Lilly, Merck, and others.

 

Key Takeaways

Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”

“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”

Dr. Dana J. Lukin



In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.

“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”

Still, Lukin predicted challenges with widespread adoption.

“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”

He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care. 

“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”

Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.

Low-Quality Food Environments Increase MASLD-related Mortality


US counties with limited access to healthy food (food deserts) or a high density of unhealthy food outlets (food swamps) have higher mortality rates from metabolic dysfunction–associated steatotic liver disease (MASLD), according to investigators.

These findings highlight the importance of addressing disparities in food environments and social determinants of health to help reduce MASLD-related mortality, lead author Annette Paik, MD, of Inova Health System, Falls Church, Virginia, and colleagues reported.

“Recent studies indicate that food swamps and deserts, as surrogates for food insecurity, are linked to poor glycemic control and higher adult obesity rates,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Understanding the intersection of these factors with sociodemographic and clinical variables offers insights into MASLD-related outcomes, including mortality.”

To this end, the present study examined the association between food environments and MASLD-related mortality across more than 2,195 US counties. County-level mortality data were obtained from the CDC WONDER database (2016-2020) and linked to food environment data from the US Department of Agriculture Food Environment Atlas using Federal Information Processing Standards (FIPS) codes. Food deserts were defined as low-income areas with limited access to grocery stores, while food swamps were characterized by a predominance of unhealthy food outlets relative to healthy ones.

Additional data on obesity, type 2 diabetes (T2D), and nine social determinants of health were obtained from CDC PLACES and other publicly available datasets. Counties were stratified into quartiles based on MASLD-related mortality rates. Population-weighted mixed-effects linear regression models were used to evaluate associations between food environment exposures and MASLD mortality, adjusting for region, rural-urban status, age, sex, race, insurance coverage, chronic disease prevalence, SNAP participation, and access to exercise facilities.
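
As a rough illustration of this type of county-level analysis (quartile stratification plus a population-weighted regression), the sketch below builds a synthetic county table and fits a weighted linear model. In the actual study, county files were linked on FIPS codes and a mixed-effects specification was used; the synthetic data, column names, and simple weighted least squares shown here are stand-in assumptions rather than the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical counties; real data would come from CDC WONDER and the USDA Atlas

counties = pd.DataFrame({
    "fips": [f"{i:05d}" for i in range(n)],      # county FIPS code (the usual join key)
    "food_desert_pct": rng.uniform(0, 40, n),    # hypothetical food-desert exposure
    "food_swamp_pct": rng.uniform(40, 90, n),    # hypothetical food-swamp exposure
    "population": rng.integers(5_000, 500_000, n),
})
# Hypothetical outcome loosely related to the exposures, for illustration only.
counties["masld_mortality_rate"] = (
    2 + 0.05 * counties["food_desert_pct"] + 0.02 * counties["food_swamp_pct"]
    + rng.normal(0, 1, n)
)

# Stratify counties into quartiles of MASLD-related mortality.
counties["mortality_q"] = pd.qcut(counties["masld_mortality_rate"], 4,
                                  labels=["Q1", "Q2", "Q3", "Q4"])

# Population-weighted linear model (WLS stands in for the paper's
# population-weighted mixed-effects specification).
X = sm.add_constant(counties[["food_desert_pct", "food_swamp_pct"]])
fit = sm.WLS(counties["masld_mortality_rate"], X,
             weights=counties["population"]).fit()
print(fit.params)
```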

Counties with the worst food environments had significantly higher MASLD-related mortality, even after adjusting for clinical and sociodemographic factors. Compared with counties in the lowest quartile of MASLD mortality, those in the highest quartile had a greater proportion of food deserts (22.3% vs 14.9%; P < .001) and food swamps (73.1% vs 65.7%; P < .001). They also had a significantly higher prevalence of obesity (40.5% vs 32.5%), type 2 diabetes (15.8% vs 11.4%), and physical inactivity (33.7% vs 24.9%).

Demographically, counties with higher MASLD mortality had significantly larger proportions of Black and Hispanic residents, and were more likely to be rural and located in the South. These counties also had significantly lower median household incomes, higher poverty rates, fewer adults with a college education, lower access to exercise opportunities, greater SNAP participation, less broadband access, and more uninsured adults.

In multivariable regression models, both food deserts and food swamps remained independently associated with MASLD mortality. Counties in the highest quartile of food desert exposure had a 14.5% higher MASLD mortality rate, compared with the lowest quartile (P = .001), and those in the highest quartile for food swamp exposure had a 13.9% higher mortality rate (P = .005).

Type 2 diabetes, physical inactivity, and lack of health insurance were also independently associated with increased MASLD-related mortality. 

“Implementing public health interventions that address the specific environmental factors of each county can help US policymakers promote access to healthy, culturally appropriate food choices at affordable prices and reduce the consumption of poor-quality food,” the investigators wrote. “Moreover, improving access to parks and exercise facilities can further enhance the impact of healthy nutrition. These strategies could help curb the growing epidemic of metabolic diseases, including MASLD and related mortality.”

This study was supported by King Faisal Specialist Hospital & Research Center, the Global NASH Council, Center for Outcomes Research in Liver Diseases, and the Beatty Liver and Obesity Research Fund, Inova Health System. The investigators disclosed no conflicts of interest.
 

National Policy Changes Needed Urgently

A healthy lifestyle continues to be foundational to the management of metabolic dysfunction–associated steatotic liver disease (MASLD). Poor diet quality is a risk factor for developing MASLD in the US general population. Food deserts and food swamps are both symptoms of socioeconomic hardship, characterized by limited access to healthy food (as described by the US Department of Agriculture Dietary Guidelines for Americans) owing to the absence of grocery stores and supermarkets. Food swamps, in addition, have abundant access to unhealthy, energy-dense yet nutritionally sparse (EDYNS) foods.

Dr. Niharika Samala

The article by Paik et al shows that food deserts and food swamps are associated not only with the burden of MASLD in the United States but also with MASLD-related mortality. The counties with the highest MASLD-related mortality had higher rates of food swamps and food deserts, poverty, unemployment, household crowding, lack of broadband internet access, and lack of high school education; had more elderly and Hispanic residents; and were more likely to be located in the South.

MASLD appears to have origins in the dark underbelly of socioeconomic hardship, which might preclude many of our patients from complying with lifestyle changes. Policy changes are urgently needed at a national level, from increasing incentives to establish grocery stores in food deserts to limiting the proportion of EDYNS foods in grocery stores and requiring conspicuous labeling of EDYNS foods by the Food and Drug Administration. At the individual practice level, clinicians can support patients with MASLD by providing access to a dietitian and educational materials and, where possible, using applications that encourage healthy dietary habits, empowering patients to choose healthy food options.

Niharika Samala, MD, is assistant professor of medicine, associate program director of the GI Fellowship, and director of the IUH MASLD/NAFLD Clinic at the Indiana University School of Medicine, Indianapolis. She reported no relevant conflicts of interest.

Infrequent HDV Testing Raises Concern for Worse Liver Outcomes


Only 1 in 6 US veterans with chronic hepatitis B (CHB) is tested for hepatitis D virus (HDV)—a coinfection associated with significantly higher risks of cirrhosis and hepatic decompensation—according to new findings.

The low testing rate suggests limited awareness of HDV-associated risks in patients with CHB and underscores the need for earlier testing and diagnosis, lead author Robert J. Wong, MD, of Stanford University School of Medicine, Stanford, California, and colleagues reported.

Dr. Robert J. Wong



“Data among US populations are lacking to describe the epidemiology and long-term outcomes of patients with CHB and concurrent HDV infection,” the investigators wrote in Gastro Hep Advances (2025 Oct. doi: 10.1016/j.gastha.2024.10.015).

Prior studies have found that only 6% to 19% of patients with CHB get tested for HDV, and among those tested, the prevalence is relatively low—between 2% and 4.6%. Although relatively uncommon, HDV carries a substantial clinical and economic burden, Dr. Wong and colleagues noted, highlighting the importance of clinical awareness and accurate epidemiologic data.

The present study analyzed data from the Veterans Affairs (VA) Corporate Data Warehouse between 2010 and 2023. Adults with CHB were identified based on laboratory-confirmed markers and ICD-9/10 codes. HDV testing (anti-HDV antibody and HDV RNA) was assessed, and predictors of testing were evaluated using multivariable logistic regression.

To examine liver-related outcomes, patients who tested positive for HDV were propensity score–matched 1:2 with CHB patients who tested negative. Matching accounted for age, sex, race/ethnicity, HBeAg status, antiviral treatment, HCV and HIV coinfection, diabetes, and alcohol use. Patients with cirrhosis or hepatocellular carcinoma (HCC) at baseline were excluded. Incidence of cirrhosis, hepatic decompensation, and HCC was estimated using competing risks Nelson-Aalen methods.
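
Propensity score matching of this kind can be illustrated schematically: estimate each patient's probability of being HDV positive from the covariates, then pair each positive patient with the two negative patients whose scores are closest. The sketch below uses synthetic data and a simple nearest-neighbor matcher as illustrative assumptions; it is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1000  # hypothetical CHB patients (not study data)

df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "hcv": rng.integers(0, 2, n),
    "drug_use": rng.integers(0, 2, n),
    "cirrhosis": rng.integers(0, 2, n),
})
# Hypothetical HDV status, loosely related to the covariates above.
logit = -4 + 0.02 * df["age"] + 1.2 * df["hcv"] + 0.8 * df["drug_use"]
df["hdv_pos"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

covars = ["age", "hcv", "drug_use", "cirrhosis"]

# Step 1: propensity score = estimated P(HDV positive | covariates).
ps_model = LogisticRegression(max_iter=1000).fit(df[covars], df["hdv_pos"])
df["ps"] = ps_model.predict_proba(df[covars])[:, 1]

# Step 2: match each HDV-positive patient to the 2 nearest HDV-negative
# patients on the propensity score (with replacement, for simplicity).
cases = df[df["hdv_pos"] == 1]
controls = df[df["hdv_pos"] == 0]
nn = NearestNeighbors(n_neighbors=2).fit(controls[["ps"]])
_, idx = nn.kneighbors(cases[["ps"]])
matched = pd.concat([cases, controls.iloc[idx.ravel()]])

# After matching, mean propensity scores in the two groups should be similar.
print(matched.groupby("hdv_pos")["ps"].mean())
```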

Among 27,548 veterans with CHB, only 16.1% underwent HDV testing. Of those tested, 3.25% were HDV positive. Testing rates were higher among patients who were HBeAg positive, on antiviral therapy, or identified as Asian or Pacific Islander.

Conversely, testing was significantly less common among patients with high-risk alcohol use, past or current drug use, cirrhosis at diagnosis, or HCV coinfection. In contrast, HIV coinfection was associated with increased odds of being tested.

Among those tested, HDV positivity was more likely in patients with HCV coinfection, cirrhosis, or a history of drug use. On multivariable analysis, these factors were independent predictors of HDV positivity.

In the matched cohort of 71 HDV-positive patients and 140 HDV-negative controls, the incidence of cirrhosis was more than 3-fold higher in HDV-positive patients (4.39 vs 1.30 per 100,000 person-years; P < .01), and hepatic decompensation was over 5 times more common (2.18 vs 0.41 per 100,000 person-years; P = .01). There was also a non-significant trend toward increased HCC risk in the HDV group.
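
For context, incidence rates like these are event counts divided by accumulated follow-up time, scaled to a convenient denominator. The toy calculation below shows the arithmetic with invented numbers, not the study's counts.

```python
# Toy incidence-rate calculation (invented numbers, not study data).
events = 12                # e.g., new cirrhosis diagnoses observed in a group
person_years = 550.0       # total follow-up time accumulated by that group
rate_per_100k = events / person_years * 100_000
print(f"{rate_per_100k:.2f} events per 100,000 person-years")
```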

“These findings align with existing studies and confirm that among a predominantly non-Asian US cohort of CHB patients, presence of concurrent HDV is associated with more severe liver disease progression,” the investigators wrote. “These observations, taken together with the low rates of HDV testing overall and particularly among high-risk individuals, emphasizes the need for greater awareness and novel strategies on how to improve HDV testing and diagnosis, particularly given that novel HDV therapies are on the near horizon.”

The study was supported by Gilead. The investigators disclosed additional relationships with Exact Sciences, GSK, Novo Nordisk, and others.

Timely Testing Using Reflex Tools

Hepatitis D virus (HDV) is an RNA “sub-virus” that infects patients with co-existing hepatitis B virus (HBV) infections. HDV infection currently affects approximately 15-20 million people worldwide but is an orphan disease in the United States with fewer than 100,000 individuals infected today.

Dr. Robert G. Gish

Those with HDV have a 70% lifetime risk of hepatocellular carcinoma (HCC), cirrhosis, liver failure, death, or liver transplant. But there are no current treatments in the US that are Food and Drug Administration (FDA)-approved for the treatment of HDV, and only one therapy in the European Union with full approval by the European Medicines Agency.

Despite HDV's severity and the limited treatment options, screening for HDV remains severely inadequate, often limited to sequential testing of individuals considered high risk. HDV screening would benefit from a revamped approach in which testing automatically reflexes at HBV diagnosis: patients positive for hepatitis B surface antigen (HBsAg) proceed to total anti-HDV antibody testing and are then double reflexed to HDV RNA polymerase chain reaction (PCR) quantitation. This is especially true in Veterans Affairs (VA) hospitals and clinics, where Wong and colleagues found very low rates of HDV testing among a national cohort of US veterans with chronic HBV.

This study highlights the importance of timely HDV testing using reflex tools to improve diagnosis and HDV treatment, reducing long-term risks of liver-related morbidity and mortality.

Robert G. Gish, MD, AGAF, is principal at Robert G Gish Consultants LLC, clinical professor of medicine at Loma Linda University, Loma Linda, Calif., and medical director of the Hepatitis B Foundation. His complete list of disclosures can be found at www.robertgish.com/about.


Timely Testing Using Reflex Tools

Only 1 in 6 US veterans with chronic hepatitis B (CHB) is tested for hepatitis D virus (HDV)—a coinfection associated with significantly higher risks of cirrhosis and hepatic decompensation—according to new findings.

The low testing rate suggests limited awareness of HDV-associated risks in patients with CHB and underscores the need for earlier testing and diagnosis, lead author Robert J. Wong, MD, of Stanford University School of Medicine, Stanford, California, and colleagues reported.

Dr. Robert J. Wong



“Data among US populations are lacking to describe the epidemiology and long-term outcomes of patients with CHB and concurrent HDV infection,” the investigators wrote in Gastro Hep Advances (2025 Oct. doi: 10.1016/j.gastha.2024.10.015).

Prior studies have found that only 6% to 19% of patients with CHB get tested for HDV, and among those tested, the prevalence is relatively low—between 2% and 4.6%. Although relatively uncommon, HDV carries a substantial clinical and economic burden, Dr. Wong and colleagues noted, highlighting the importance of clinical awareness and accurate epidemiologic data.

The present study analyzed data from the Veterans Affairs (VA) Corporate Data Warehouse between 2010 and 2023. Adults with CHB were identified based on laboratory-confirmed markers and ICD-9/10 codes. HDV testing (anti-HDV antibody and HDV RNA) was assessed, and predictors of testing were evaluated using multivariable logistic regression.

To examine liver-related outcomes, patients who tested positive for HDV were propensity score–matched 1:2 with CHB patients who tested negative. Matching accounted for age, sex, race/ethnicity, HBeAg status, antiviral treatment, HCV and HIV coinfection, diabetes, and alcohol use. Patients with cirrhosis or hepatocellular carcinoma (HCC) at baseline were excluded. Incidence of cirrhosis, hepatic decompensation, and HCC was estimated using competing-risks Nelson-Aalen methods.
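For readers unfamiliar with the method, the Nelson-Aalen estimator simply accumulates, at each observed event time, the number of events divided by the number of patients still at risk. The sketch below illustrates that calculation in Python on hypothetical follow-up data; it is not the study's code, and it omits the competing-risks handling (for example, accounting for death) that the investigators applied.

```python
import numpy as np

def nelson_aalen(durations, events):
    """Nelson-Aalen cumulative hazard estimate at each distinct event time.

    durations : follow-up time for each patient
    events    : 1 if the event of interest occurred, 0 if censored
    """
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=int)
    event_times = np.sort(np.unique(durations[events == 1]))
    cumulative_hazard, h = [], 0.0
    for t in event_times:
        at_risk = np.sum(durations >= t)               # patients still under observation at t
        d = np.sum((durations == t) & (events == 1))   # events occurring at t
        h += d / at_risk
        cumulative_hazard.append((t, h))
    return cumulative_hazard

# Hypothetical follow-up data (years), for illustration only
times  = [1.0, 2.5, 2.5, 3.0, 4.0, 4.5, 5.0]
status = [1,   0,   1,   1,   0,   0,   1]    # 1 = developed cirrhosis, 0 = censored
for t, h in nelson_aalen(times, status):
    print(f"t = {t:.1f} y  cumulative hazard = {h:.3f}")
```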

Among 27,548 veterans with CHB, only 16.1% underwent HDV testing. Of those tested, 3.25% were HDV positive. Testing rates were higher among patients who were HBeAg positive, on antiviral therapy, or identified as Asian or Pacific Islander.

Conversely, testing was significantly less common among patients with high-risk alcohol use, past or current drug use, cirrhosis at diagnosis, or HCV coinfection. In contrast, HIV coinfection was associated with increased odds of being tested.

Among those tested, HDV positivity was more likely in patients with HCV coinfection, cirrhosis, or a history of drug use. On multivariable analysis, these factors were independent predictors of HDV positivity.

In the matched cohort of 71 HDV-positive patients and 140 HDV-negative controls, the incidence of cirrhosis was more than 3-fold higher in HDV-positive patients (4.39 vs 1.30 per 100,000 person-years; P < .01), and hepatic decompensation was over 5 times more common (2.18 vs 0.41 per 100,000 person-years; P = .01). There was also a nonsignificant trend toward increased HCC risk in the HDV group.
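The fold differences quoted above follow directly from the reported incidence figures; a quick arithmetic check:

```python
# Reported incidence rates, HDV-positive vs HDV-negative (same person-year denominator)
cirrhosis_pos, cirrhosis_neg = 4.39, 1.30
decomp_pos, decomp_neg = 2.18, 0.41

print(f"Cirrhosis rate ratio: {cirrhosis_pos / cirrhosis_neg:.1f}")    # ~3.4, i.e., more than 3-fold
print(f"Decompensation rate ratio: {decomp_pos / decomp_neg:.1f}")     # ~5.3, i.e., over 5 times
```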

“These findings align with existing studies and confirm that among a predominantly non-Asian US cohort of CHB patients, presence of concurrent HDV is associated with more severe liver disease progression,” the investigators wrote. “These observations, taken together with the low rates of HDV testing overall and particularly among high-risk individuals, emphasizes the need for greater awareness and novel strategies on how to improve HDV testing and diagnosis, particularly given that novel HDV therapies are on the near horizon.”

The study was supported by Gilead. The investigators disclosed additional relationships with Exact Sciences, GSK, Novo Nordisk, and others.



FROM GASTRO HEP ADVANCES


Intensive Nutrition Therapy Improves Outcomes in Alcohol-Related ACLF

Article Type
Changed

A recent study supports the importance of intensive nutrition therapy in managing patients with alcohol-related acute-on-chronic liver failure (ACLF).

In a randomized controlled trial, compared with standard care, dietitian-supported, intensive nutritional therapy improved survival, reduced frailty, and lowered hospitalization rates in men with alcohol-related ACLF.

The study, performed by a team from the Postgraduate Institute of Medical Education and Research, Chandigarh, India, was published in Clinical Gastroenterology and Hepatology.

ACLF related to alcohol use is associated with poor outcomes due to poor nutritional intake and frailty. Frail patients with ACLF face higher morbidity, mortality, and hospitalization rates than their nonfrail counterparts. However, research on the role of structured nutritional interventions in improving these outcomes is limited.

Patal Giri, MBBS, MD, and colleagues enrolled 70 men with alcohol-related ACLF and frailty (liver frailty index [LFI] > 4.5) in a single-center, open-label study. Half were randomly allocated to an intervention group receiving outpatient intensive nutrition therapy (OINT) plus standard medical treatment (SMT) and half to a control group receiving SMT alone for 3 months.

The intervention group received a monitored high-calorie, high-protein, salt-restricted diet prescribed by a dedicated senior liver dietitian. The control group received standard nutritional recommendations and was managed for ACLF-associated complications, without intervention or guidance from the study team.

After 3 months of follow-up, overall survival (the primary outcome) was significantly higher in the OINT group than in the control group (91.4% vs 57.1%), “suggesting that the improvement in nutrition status is associated with better survival,” the study team noted. Three patients died in the OINT group vs 15 in the SMT group.
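With 70 patients randomized evenly between the two arms, the reported survival percentages follow directly from the death counts; a quick check:

```python
# 70 men randomized 1:1, so 35 patients per arm
per_arm = 70 // 2
oint_deaths, smt_deaths = 3, 15

print(f"OINT survival: {1 - oint_deaths / per_arm:.1%}")   # 91.4%
print(f"SMT survival:  {1 - smt_deaths / per_arm:.1%}")    # 57.1%
```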

OINT also led to a significant improvement in frailty, with LFI scores decreasing by an average of 0.93 in the intervention group vs 0.33 in the control group; 97% of patients improved from frail to prefrail status in the OINT group, whereas only 20% of patients improved in the SMT group.

The mean change in LFI of 0.93 with OINT is “well above the substantially clinically important difference” (change of 0.8) established in a previous study, the authors noted.

Significant improvements in weight and body mass index were also observed in the OINT group relative to the control group.

Liver disease severity, including model for end-stage liver disease (MELD) scores, showed greater improvement in the OINT group than in the control group (−8.7 vs −6.3 points from baseline to 3 months).

During the follow-up period, fewer patients in the intervention group than in the control group required a hospital stay (17% vs 45.7%).

Limitations of the study include the single-center design and the short follow-up period of 3 months, which limits long-term outcome assessment. Further, the study included only patients meeting Asian Pacific Association for the Study of the Liver (APASL) criteria for ACLF, which do not capture patients with organ failure as defined by the European Association for the Study of the Liver–Chronic Liver Failure (EASL-CLIF) Consortium criteria. Patients with more severe disease (MELD score > 30 or AARC score > 10) were also excluded.

Despite these limitations, the authors said their study showed that “dietician-monitored goal-directed nutrition therapy is very important in the management of patients with alcohol-related ACLF along with SMT.”

 

Confirmatory Data 

Reached for comment, Katherine Patton, MEd, RD, a registered dietitian with the Center for Human Nutrition at Cleveland Clinic, Cleveland, Ohio, said it’s well known that the ACLF patient population has a “very high rate of morbidity and mortality and their quality of life tends to be poor due to their frailty. It is also fairly well-known that proper nutrition therapy can improve outcomes, however barriers to adequate nutrition include decreased appetite, nausea, pain, altered taste, and early satiety from ascites.”

“Hepatologists are likely stressing the importance of adequate protein energy intake and doctors may refer patients to an outpatient dietitian, but it is up to the patient to make that appointment and act on the recommendations,” Patton told GI & Hepatology News.

“If a dietitian works in the same clinic as the hepatologist and patients can be referred and seen the same day, this is ideal. During a hospital admission, protein/calorie intake can be more closely monitored and encouraged by a multi-disciplinary team,” Patton explained.

She cautioned that “the average patient is not familiar with how to apply general calorie and protein goals to their everyday eating habits. This study amplifies the role of a dietitian and what consistent education and resources can do to improve a patient’s quality of life and survival.”

This study had no specific funding. The authors have declared no relevant conflicts of interest. Patton had no relevant disclosures.

A version of this article appeared on Medscape.com.




FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Better Prep, Better Scope: Task Force Updates Colonoscopy Bowel Prep Advice

Article Type
Changed

The US Multi-Society Task Force on Colorectal Cancer (CRC) has updated its 2014 guidance for optimizing the adequacy of bowel preparation for colonoscopy.

The latest consensus recommendations emphasize the importance of verbal and written patient education, refine diet restrictions, update optimal purgative regimens, and advise tracking bowel prep adequacy rates at both the individual endoscopist and unit levels.

“Colorectal cancer remains the second most common cause of cancer death in the United States, and colonoscopy is considered the gold standard for evaluating the colon, including assessing causes of colon-related signs or symptoms and the detection of precancerous lesions. It is well recognized that the adequacy of bowel preparation is essential for optimal colonoscopy performance,” the task force wrote.

 

Choice of Prep, Dosing and Timing, and Dietary Restrictions 

When choosing bowel preparation regimens, the task force recommends considering the individual’s medical history, medications, and, when available, the adequacy of bowel preparation reported from prior colonoscopies. Other considerations include patient preference, associated additional costs to the patient, and ease in obtaining and consuming any purgatives or adjuncts.

Dr. Brian Jacobson

In terms of timing and dose, the task force now “suggests that lower-volume bowel preparation regimens, such as those that rely on only 2 liters of fluid compared to the traditional 4L, are acceptable options for individuals considered unlikely to have an inadequate bowel preparation. This assumes that the purgative is taken in a split-dose fashion (half the evening prior to colonoscopy and half the morning of the colonoscopy),” co–lead author Brian C. Jacobson, MD, MPH, AGAF, with Massachusetts General Hospital and Harvard Medical School, both in Boston, said in an interview.

The task force also states that a same-day bowel preparation regimen for afternoon, but not morning, colonoscopy is a “reasonable alternative to the now-common split-dose regimen,” Jacobson said.

The group did not find any one bowel preparation purgative to be better than the others, although table 7 in the document details the characteristics of commonly used prep regimens, including their side effects and contraindications.

Recommendations regarding dietary modifications depend upon the patient’s risk for inadequate bowel prep. For patients at low risk for inadequate bowel prep, the task force recommends limiting dietary restrictions to the day before a colonoscopy, relying on either clear liquids or low-fiber/low-residue diets for the early and midday meals. Table 5 in the document provides a list of low-residue foods and sample meals.

The task force also suggests the adjunctive use of oral simethicone (≥ 320 mg) to bowel prep as a way to potentially improve visualization, although they acknowledge that further research is needed.

How might these updated consensus recommendations change current clinical practice? 

Jacobson said: “Some physicians may try to identify individuals who will do just as well with a more patient-friendly, easily tolerated bowel preparation regimen, including less stringent dietary restrictions leading up to colonoscopy.” 

He noted that the task force prefers the term “guidance” to “guidelines.”

 

New Quality Benchmark 

The task force recommends documenting bowel prep quality in the endoscopy report after all washing and suctioning have been completed, using reliably understood descriptors that communicate the adequacy of the preparation.

They recommend the term “adequate bowel preparation” be used to indicate that standard screening or surveillance intervals can be assigned based on the findings of the colonoscopy.

Additionally, the task force recommends that endoscopy units and individual endoscopists track and aim for ≥ 90% adequacy rates in bowel preparation — up from the 85% benchmark contained in the prior recommendations.
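As a rough illustration of what such tracking could look like, the sketch below tallies per-endoscopist adequacy rates from a list of hypothetical procedure records and flags anyone below the 90% benchmark; in practice, these fields would be pulled from the unit's endoscopy reporting system.

```python
from collections import defaultdict

BENCHMARK = 0.90

# Hypothetical procedure records: (endoscopist, prep_adequate)
procedures = [
    ("Endoscopist A", True), ("Endoscopist A", True), ("Endoscopist A", False),
    ("Endoscopist B", True), ("Endoscopist B", True), ("Endoscopist B", True),
]

tallies = defaultdict(lambda: [0, 0])   # name -> [adequate count, total count]
for name, adequate in procedures:
    tallies[name][1] += 1
    if adequate:
        tallies[name][0] += 1

for name, (adequate, total) in tallies.items():
    rate = adequate / total
    status = "meets" if rate >= BENCHMARK else "below"
    print(f"{name}: {rate:.0%} adequacy ({status} the 90% benchmark)")
```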

Jacobson told this news organization it’s “currently unknown” how many individual endoscopists and endoscopy units track and meet the 90% benchmark at present.

David Johnson, MD, professor of medicine and chief of gastroenterology at Eastern Virginia Medical School, Norfolk, who wasn’t on the task force, said endoscopy units and providers “need to be accountable and should be tracking this quality metric.”

Johnson noted that bowel prep inadequacy has “intrinsic costs,” impacting lesion detection, CRC incidence, and patient outcomes. Inadequate prep leads to “increased risk for morbidity, mortality, longer appointment and wait times for rescheduling, and negative connotations that may deter patients from returning.”

 

Dr. Brian Sullivan

Brian Sullivan, MD, MHS, assistant professor of medicine, division of gastroenterology, Duke University School of Medicine, Durham, North Carolina, who wasn’t on the task force, said the recommendation to target a 90% or higher bowel preparation adequacy rate is “appreciated.”

“This benchmark encourages practices to standardize measurement, tracking, and reporting of preparation quality at both the individual and unit levels. Specifically, it should motivate providers to critically evaluate their interpretation of preparation quality and ensure adequate cleansing before making determinations,” Sullivan said in an interview.

“At the unit level, this metric can identify whether there are opportunities for quality improvement, such as by implementing evidence-based initiatives (provided in the guidance) to enhance outpatient preparation processes,” Sullivan noted.

The task force emphasized that the majority of consensus recommendations focus on individuals at average risk for inadequate bowel prep. Patients at high risk for inadequate bowel prep (eg, diabetes, constipation, opioid use) should receive tailored instructions, including a more extended dietary prep and high-volume purgatives.

 

‘Timely and Important’ Updates

Sullivan said the updated consensus recommendations on optimizing bowel preparation quality for colonoscopy are both “timely and important.” 

“Clear guidance facilitates dissemination and adoption, promoting flexible yet evidence-based approaches that enhance patient and provider satisfaction while potentially improving CRC prevention outcomes. For instance, surveys reveal that some practices still do not utilize split-dose bowel preparation, which is proven to improve preparation quality, particularly for the right-side of the colon. This gap underscores the need for standardized guidance to ensure high-quality colonoscopy and effective CRC screening,” Sullivan said.

He also noted that the inclusion of lower-volume bowel prep regimens and less intensive dietary modifications for selected patients is a “welcome update.”

“These options can improve patient adherence and satisfaction, which are critical not only for the quality of the index exam but also for ensuring patients return for future screenings, thereby supporting long-term CRC prevention efforts,” Sullivan said.

The task force includes representatives from the American Gastroenterological Association, the American College of Gastroenterology, and the American Society for Gastrointestinal Endoscopy.

The consensus document was published online in the three societies’ respective scientific journals — Gastroenterology, the American Journal of Gastroenterology, and Gastrointestinal Endoscopy.

This research had no financial support. Jacobson is a consultant for Curis and Guardant Health. Sullivan had no disclosures. Johnson is an adviser to ISOThrive and a past president of the American College of Gastroenterology.

A version of this article first appeared on Medscape.com.




FROM GASTROENTEROLOGY


Safety Profile of GLP-1s ‘Reassuring’ in Upper Endoscopy

Article Type
Changed

Glucagon-like peptide-1 receptor agonists (GLP-1RAs) are associated with retained gastric contents and aborted procedures among patients undergoing upper endoscopy, according to a meta-analysis of more than 80,000 patients.

Safety profiles, however, were comparable across groups, suggesting that prolonged fasting may be a sufficient management strategy, instead of withholding GLP-1RAs, lead author Antonio Facciorusso, MD, PhD, of the University of Foggia, Italy, and colleagues reported.

“The impact of GLP-1RAs on slowing gastric motility has raised concerns in patients undergoing endoscopic procedures, particularly upper endoscopies,” the investigators wrote in Clinical Gastroenterology and Hepatology. “This is due to the perceived risk of aspiration of retained gastric contents in sedated patients and the decreased visibility of the gastric mucosa, which can reduce the diagnostic yield of the examination.”

The American Society of Anesthesiologists (ASA) recommends withholding GLP-1RAs before procedures or surgery, whereas AGA suggests an individualized approach, citing limited supporting data. 

A previous meta-analysis reported that GLP-1RAs mildly delayed gastric emptying, but clinical relevance remained unclear. 

The present meta-analysis aimed to clarify this uncertainty by analyzing 13 retrospective studies that involved 84,065 patients undergoing upper endoscopy. Outcomes were compared among GLP-1RA users vs non-users, including rates of retained gastric contents, aborted procedures, and adverse events. 

Patients on GLP-1RAs had significantly higher rates of retained gastric contents than non-users (odds ratio [OR], 5.56), a finding that held steady (OR, 4.20) after adjusting for age, sex, diabetes, body mass index, and other therapies. 

GLP-1RAs were also associated with an increased likelihood of aborted procedures (OR, 5.13; 1% vs 0.3%) and a higher need for repeat endoscopies (OR, 2.19; 1% vs 2%); however, Facciorusso and colleagues noted that these events, in absolute terms, were relatively uncommon.
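For readers who want the arithmetic behind these figures, the sketch below computes a crude odds ratio from a single 2 × 2 table using hypothetical counts; the pooled estimates above come from meta-analysis across studies, with the 4.20 figure additionally adjusted for covariates, so they will not match a simple crude calculation like this one.

```python
def odds_ratio(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Crude odds ratio from a 2x2 table: (a/b) / (c/d)."""
    a = exposed_events
    b = exposed_total - exposed_events
    c = unexposed_events
    d = unexposed_total - unexposed_events
    return (a / b) / (c / d)

# Hypothetical counts chosen only to illustrate the arithmetic (not the study data):
# retained gastric contents in 55 of 1,000 GLP-1RA users vs 105 of 10,000 non-users
print(round(odds_ratio(55, 1_000, 105, 10_000), 2))   # ≈ 5.48
```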

“The rate of aborted and repeat procedures in the included studies was low,” the investigators wrote. “This meant that only for every 110 patients undergoing upper endoscopy while in GLP-1RA therapy would we observe an aborted procedure and only for every 120 patients would we need to repeat the procedure.”

The overall safety profile of GLP-1RAs in the context of upper endoscopy remained largely reassuring, they added. Specifically, rates of bronchial aspiration were not significantly different between users and non-users. What’s more, no single study reported a statistically significant increase in major complications, including pulmonary adverse events, among GLP-1RA users. 

According to Facciorusso and colleagues, these findings suggest that retained gastric contents do not appear to substantially heighten the risk of serious harm, though further prospective studies are needed.

“Our comprehensive analysis indicates that, while the use of GLP-1RA results in higher rates of [retained gastric contents], the actual clinical impact appears to be limited,” they wrote. “Therefore, there is no strong evidence to support the routine discontinuation of the drug before upper endoscopy procedures.”

Instead, they supported the AGA task force’s recommendation for an individualized approach that avoids withholding GLP-1RAs unnecessarily, calling this “the best compromise.”

“Prolonging the duration of fasting for solids could represent the optimal approach in these patients, although this strategy requires further evaluation,” the investigators concluded.

The investigators disclosed no conflicts of interest.



Publications
Publications
Topics
Article Type
Sections
Article Source

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Gate On Date
Un-Gate On Date
Use ProPublica
CFC Schedule Remove Status
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
survey writer start date

Two Cystic Duct Stents Appear Better Than One


Placing two cystic duct stents instead of one during endoscopic transpapillary gallbladder drainage (ETGBD) is associated with a lower rate of unplanned reintervention, according to a retrospective multicenter study.

These findings suggest that endoscopists should prioritize dual stent placement when feasible, and consider adding a second stent in patients who previously received a single stent, James D. Haddad, MD, of the University of Texas Southwestern, Dallas, and colleagues reported.

 

Dr. James D. Haddad

The American Gastroenterological Association (AGA) has recognized the role of endoscopic drainage in managing acute cholecystitis in high-risk patients, but specific guidance on optimal technique and follow-up remains limited, the investigators wrote in Techniques and Innovations in Gastrointestinal Endoscopy.

“Despite accumulating data and increased interest in this technique, clear guidance on the ideal strategy for ETGBD is lacking,” Dr. Haddad and colleagues wrote. “For example, the optimal size, number, and follow-up of cystic duct stents for patients undergoing ETGBD has not been well established.”

To address this knowledge gap, the investigators analyzed data from 75 patients at five academic medical centers who had undergone ETGBD between June 2013 and October 2022. Patients were divided into two groups based on whether they received one or two cystic duct stents. 

The primary outcome was clinical success, defined as symptom resolution without requiring another drainage procedure. Secondary outcomes included technical success (defined as successful stent placement), along with rates of adverse events and unplanned reinterventions. 

Out of the 75 patients, 59 received a single stent, while 16 received dual stents. The median follow-up time was 407 days overall, with a longer follow-up in the single-stent group (433 days), compared with the double-stent group (118 days).

Clinical success was reported in 81.3% of cases, while technical success was achieved in 88.2% of cases. 

Patients who received two stents had significantly lower rates of unplanned reintervention, compared with those who received a single stent (0% vs 25.4%; P = .02). The median time to unplanned reintervention in the single-stent group was 210 days.
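For readers who want to see how such a comparison can be tested, the sketch below back-calculates approximate counts from the reported percentages (0 of 16 dual-stent vs 15 of 59 single-stent patients, since 15/59 ≈ 25.4%) and runs a Fisher’s exact test. The authors’ actual statistical method and exact counts are not specified, so treat this as illustrative only.

```python
# Illustrative significance test for the unplanned-reintervention comparison.
# Counts are back-calculated from the reported percentages (0% of 16 vs 25.4% of 59);
# the study's own test and exact counts may differ.
from scipy.stats import fisher_exact

#                 [reintervention, no reintervention]
dual_stent   = [0, 16]
single_stent = [15, 44]   # 15/59 ~= 25.4%

_, p_value = fisher_exact([dual_stent, single_stent])
print(f"two-sided Fisher's exact P = {p_value:.3f}")  # ~.03 on these reconstructed counts
```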

Use of a 7 French stent was strongly associated with placement of two stents (odds ratio [OR], 15.5; P = .01). Similarly, patients with a prior percutaneous cholecystostomy tube were significantly more likely to have two stents placed (OR, 10.8; P = .001).

Adverse events were uncommon, with an overall rate of 6.7% and no statistically significant difference between groups. Post-endoscopic retrograde cholangiopancreatography pancreatitis was the most common adverse event, occurring in two patients in the single-stent group and one patient in the double-stent group. There were no reported cases of cystic duct or gallbladder perforation.

“In conclusion,” the investigators wrote, “ETGBD with dual transpapillary gallbladder stenting is associated with a lower rate of unplanned reinterventions, compared with that with single stenting, and has a low rate of adverse events. Endoscopists performing ETGBD should consider planned exchange of solitary transpapillary gallbladder stents or interval ERCP for reattempted placement of a second stent if placement of two stents is not possible at the index ERCP.”

The investigators disclosed relationships with Boston Scientific, Motus GI, and ConMed.







 
