Wearable Devices May Predict IBD Flares Weeks in Advance


Wearable devices like the Apple Watch and Fitbit may help identify and predict inflammatory bowel disease (IBD) flares, and even distinguish between inflammatory and purely symptomatic episodes, according to investigators.

These findings suggest that widely used consumer wearables could support long-term monitoring of IBD and other chronic inflammatory conditions, lead author Robert P. Hirten, MD, of Icahn School of Medicine at Mount Sinai, New York, and colleagues reported.

“Wearable devices are an increasingly accepted tool for monitoring health and disease,” the investigators wrote in Gastroenterology. “They are frequently used in non–inflammatory-based diseases for remote patient monitoring, allowing individuals to be monitored outside of the clinical setting, which has resulted in improved outcomes in multiple disease states.”

Progress has been slower for inflammatory conditions, the investigators noted, despite interest from both providers and patients. Prior studies have explored activity and sleep tracking, or sweat-based biomarkers, as potential tools for monitoring IBD. 

Hirten and colleagues took a novel approach, focusing on physiologic changes driven by autonomic nervous system dysfunction — a hallmark of chronic inflammation. Conditions like IBD are associated with reduced parasympathetic activity and increased sympathetic tone, which in turn affect heart rate and heart rate variability. Heart rate tends to rise during flares, while heart rate variability decreases.
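
To make the reported signals concrete, here is a minimal sketch of how heart rate and heart rate variability can be derived from beat-to-beat (RR) intervals. RMSSD is one common HRV metric, used here purely for illustration; the article does not specify which HRV measure the study computed, and all values below are invented.

```python
import numpy as np

def hr_and_rmssd(rr_ms):
    """Mean heart rate (bpm) and RMSSD heart rate variability (ms)
    from a series of beat-to-beat RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    mean_hr = 60_000.0 / rr.mean()   # beats per minute
    diffs = np.diff(rr)              # successive RR differences
    rmssd = float(np.sqrt(np.mean(diffs ** 2)))
    return mean_hr, rmssd

# Shorter, more uniform RR intervals -> higher heart rate, lower HRV,
# the flare pattern described above (all values are made up).
print(hr_and_rmssd([820, 810, 840, 800, 830]))  # remission-like
print(hr_and_rmssd([690, 695, 688, 692, 690]))  # flare-like
```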

Their prospective cohort study included 309 adults with Crohn’s disease (n = 196) or ulcerative colitis (n = 113). Participants used their own or a study-provided Apple Watch, Fitbit, or Oura Ring to passively collect physiological data, including heart rate, resting heart rate, heart rate variability, and step count. A subset of Apple Watch users also contributed oxygen saturation data.

Participants also completed daily symptom surveys using a custom smartphone app and reported laboratory values such as C-reactive protein, erythrocyte sedimentation rate, and fecal calprotectin as part of routine care. These data were used to identify symptomatic and inflammatory flare periods.

Over a mean follow-up of about 7 months, the physiological data consistently distinguished both types of flares from periods of remission. Heart rate variability dropped significantly during flares, while heart rate and resting heart rate increased. Step counts decreased during inflammatory flares but not during symptom-only flares. Oxygen saturation stayed mostly the same, except for a slight drop seen in participants with Crohn’s disease.

These physiological changes could be detected as early as 7 weeks before a flare. Predictive models that combined multiple metrics — heart rate variability, heart rate, resting heart rate, and step count — were highly accurate, with F1 scores as high as 0.90 for predicting inflammatory flares and 0.83 for predicting symptomatic flares.
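
For readers unfamiliar with the accuracy metric, the F1 score is the harmonic mean of precision and recall; a quick sketch with invented counts shows what a score of 0.90 implies.

```python
def f1_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)  # fraction of predicted flares that were real
    recall = tp / (tp + fn)     # fraction of real flares that were predicted
    return 2 * precision * recall / (precision + recall)

# An F1 of 0.90 requires precision and recall to be jointly high:
print(f1_score(tp=90, fp=10, fn=10))  # 0.9
```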

In addition, wearable data helped differentiate between flares caused by active inflammation and those driven by symptoms alone. Even when symptoms were similar, heart rate variability, heart rate, and resting heart rate were significantly higher when inflammation was present—suggesting wearable devices may help address the common mismatch between symptoms and actual disease activity in IBD.

“These findings support the further evaluation of wearable devices in the monitoring of IBD,” the investigators concluded.

The study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases and Ms. Jenny Steingart. The investigators disclosed additional relationships with Agomab, Lilly, Merck, and others.

 

Key Takeaways

Dana J. Lukin, MD, PhD, AGAF, of New York-Presbyterian Hospital/Weill Cornell Medicine, New York City, described the study by Hirten et al as “provocative.”

“While the data require a machine learning approach to transform the recorded values into predictive algorithms, it is intriguing that routinely recorded information from smart devices can be used in a manner to inform disease activity,” Lukin said in an interview. “Furthermore, the use of continuously recorded physiological data in this study likely reflects longitudinal health status more accurately than cross-sectional use of patient-reported outcomes or episodic biomarker testing.”

In addition to offering potentially higher accuracy than conventional monitoring, the remote strategy is also more convenient, he noted.

“The use of these devices is likely easier to adhere to than the use of other contemporary monitoring strategies involving the collection of stool or blood samples,” Lukin said. “It may become possible to passively monitor a larger number of patients at risk for flares remotely,” especially given that “almost half of Americans utilize wearables, such as the Apple Watch, Oura Ring, and Fitbit.”

Still, Lukin predicted challenges with widespread adoption.

“More than half of Americans do not routinely [use these devices],” Lukin said. “Cost, access to internet and smartphones, and adoption of new technology may all be barriers to more widespread use.”

He suggested that the present study offers proof of concept, but more prospective data are needed to demonstrate how this type of remote monitoring might improve real-world IBD care. 

“Potential studies will assess change in healthcare utilization, corticosteroids, surgery, and clinical flare activity with the use of these data,” Lukin said. “As we learn more about how to handle the large amount of data generated by these devices, our algorithms can be refined to make a feasible platform for practices to employ in routine care.”

Lukin disclosed relationships with Boehringer Ingelheim, Takeda, Vedanta, and others.

FROM GASTROENTEROLOGY


Low-Quality Food Environments Increase MASLD-related Mortality


US counties with limited access to healthy food (food deserts) or a high density of unhealthy food outlets (food swamps) have higher mortality rates from metabolic dysfunction–associated steatotic liver disease (MASLD), according to investigators.

These findings highlight the importance of addressing disparities in food environments and social determinants of health to help reduce MASLD-related mortality, lead author Annette Paik, MD, of Inova Health System, Falls Church, Virginia, and colleagues reported.

“Recent studies indicate that food swamps and deserts, as surrogates for food insecurity, are linked to poor glycemic control and higher adult obesity rates,” the investigators wrote in Clinical Gastroenterology and Hepatology. “Understanding the intersection of these factors with sociodemographic and clinical variables offers insights into MASLD-related outcomes, including mortality.”

To this end, the present study examined the association between food environments and MASLD-related mortality across 2,195 US counties. County-level mortality data were obtained from the CDC WONDER database (2016-2020) and linked to food environment data from the US Department of Agriculture Food Environment Atlas using Federal Information Processing Standards (FIPS) codes. Food deserts were defined as low-income areas with limited access to grocery stores, while food swamps were characterized by a predominance of unhealthy food outlets relative to healthy ones.

Additional data on obesity, type 2 diabetes (T2D), and nine social determinants of health were obtained from CDC PLACES and other publicly available datasets. Counties were stratified into quartiles based on MASLD-related mortality rates. Population-weighted mixed-effects linear regression models were used to evaluate associations between food environment exposures and MASLD mortality, adjusting for region, rural-urban status, age, sex, race, insurance coverage, chronic disease prevalence, SNAP participation, and access to exercise facilities.
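
The quartile stratification step is simple to reproduce. Below is a minimal sketch on a toy county table; the column names and values are hypothetical, and the study's population-weighted mixed-effects models are not reproduced here.

```python
import pandas as pd

# Hypothetical county-level table; FIPS codes and rates are stand-ins.
counties = pd.DataFrame({
    "fips": ["01001", "01003", "01005", "01007"],
    "masld_mortality": [2.1, 3.8, 5.6, 7.2],  # deaths per 100,000
})

# Stratify counties into quartiles of MASLD-related mortality (Q1-Q4),
# the grouping used before comparing food-environment exposures.
counties["mortality_quartile"] = pd.qcut(
    counties["masld_mortality"], q=4, labels=["Q1", "Q2", "Q3", "Q4"]
)
print(counties)
```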

Counties with the worst food environments had significantly higher MASLD-related mortality, even after adjusting for clinical and sociodemographic factors. Compared with counties in the lowest quartile of MASLD mortality, those in the highest quartile had a greater proportion of food deserts (22.3% vs 14.9%; P < .001) and food swamps (73.1% vs 65.7%; P < .001). They also had a significantly higher prevalence of obesity (40.5% vs 32.5%), type 2 diabetes (15.8% vs 11.4%), and physical inactivity (33.7% vs 24.9%).

Demographically, counties with higher MASLD mortality had significantly larger proportions of Black and Hispanic residents, and were more likely to be rural and located in the South. These counties also had significantly lower median household incomes, higher poverty rates, fewer adults with a college education, lower access to exercise opportunities, greater SNAP participation, less broadband access, and more uninsured adults.

In multivariable regression models, both food deserts and food swamps remained independently associated with MASLD mortality. Counties in the highest quartile of food desert exposure had a 14.5% higher MASLD mortality rate, compared with the lowest quartile (P = .001), and those in the highest quartile for food swamp exposure had a 13.9% higher mortality rate (P = .005).

Type 2 diabetes, physical inactivity, and lack of health insurance were also independently associated with increased MASLD-related mortality. 

“Implementing public health interventions that address the specific environmental factors of each county can help US policymakers promote access to healthy, culturally appropriate food choices at affordable prices and reduce the consumption of poor-quality food,” the investigators wrote. “Moreover, improving access to parks and exercise facilities can further enhance the impact of healthy nutrition. These strategies could help curb the growing epidemic of metabolic diseases, including MASLD and related mortality.”

This study was supported by King Faisal Specialist Hospital & Research Center, the Global NASH Council, Center for Outcomes Research in Liver Diseases, and the Beatty Liver and Obesity Research Fund, Inova Health System. The investigators disclosed no conflicts of interest.
 

National Policy Changes Needed Urgently

A healthy lifestyle continues to be foundational to the management of metabolic dysfunction–associated steatotic liver disease (MASLD). Poor diet quality is a risk factor for developing MASLD in the US general population. Food deserts and food swamps are both symptoms of socioeconomic hardship: food deserts are characterized by limited access to healthy food (as described by the US Department of Agriculture Dietary Guidelines for Americans) owing to the absence of grocery stores/supermarkets, whereas food swamps suffer from abundant access to unhealthy, energy-dense, yet nutritionally sparse (EDYNS) foods.

The article by Paik et al shows that food deserts and food swamps are associated not only with the burden of MASLD in the United States but also with MASLD-related mortality. Counties with the highest MASLD-related mortality had more food swamps and food deserts, higher rates of poverty, unemployment, and household crowding, less broadband internet access, lower educational attainment, and more elderly and Hispanic residents, and they were more likely to be located in the South.

MASLD appears to have origins in the dark underbelly of socioeconomic hardship that might preclude many of our patients from complying with lifestyle changes. Policy changes are urgently needed at a national level, from increasing incentives to establish grocery stores in food deserts to limiting the proportion of EDYNS foods in grocery stores and requiring conspicuous labeling of EDYNS foods by the Food and Drug Administration. At the individual practice level, clinicians can support patients with MASLD by providing a dietitian and educational material in the clinic and, where possible, by using applications that encourage healthy dietary habits, empowering patients to choose healthy food options.

Niharika Samala, MD, is assistant professor of medicine, associate program director of the GI Fellowship, and director of the IUH MASLD/NAFLD Clinic at the Indiana University School of Medicine, Indianapolis. She reported no relevant conflicts of interest.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Infrequent HDV Testing Raises Concern for Worse Liver Outcomes


Only 1 in 6 US veterans with chronic hepatitis B (CHB) is tested for hepatitis D virus (HDV)—a coinfection associated with significantly higher risks of cirrhosis and hepatic decompensation—according to new findings.

The low testing rate suggests limited awareness of HDV-associated risks in patients with CHB and underscores the need for earlier testing and diagnosis, lead author Robert J. Wong, MD, of Stanford University School of Medicine, Stanford, California, and colleagues reported.

“Data among US populations are lacking to describe the epidemiology and long-term outcomes of patients with CHB and concurrent HDV infection,” the investigators wrote in Gastro Hep Advances (2025 Oct. doi: 10.1016/j.gastha.2024.10.015).

Prior studies have found that only 6% to 19% of patients with CHB get tested for HDV, and among those tested, the prevalence is relatively low—between 2% and 4.6%. Although relatively uncommon, HDV carries a substantial clinical and economic burden, Dr. Wong and colleagues noted, highlighting the importance of clinical awareness and accurate epidemiologic data.

The present study analyzed data from the Veterans Affairs (VA) Corporate Data Warehouse between 2010 and 2023. Adults with CHB were identified based on laboratory-confirmed markers and ICD-9/10 codes. HDV testing (anti-HDV antibody and HDV RNA) was assessed, and predictors of testing were evaluated using multivariable logistic regression.
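
A multivariable logistic regression of this kind can be sketched as follows. The covariates and simulated data are illustrative stand-ins, not the study's actual variables or effect sizes.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1_000
X = np.column_stack([
    rng.integers(0, 2, n),  # HBeAg positive (stand-in covariate)
    rng.integers(0, 2, n),  # on antiviral therapy
    rng.integers(0, 2, n),  # HCV coinfection
])
# Simulate "was HDV testing ordered?" with some effects baked in.
logits = -1.6 + 0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.5 * X[:, 2]
tested = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = sm.Logit(tested, sm.add_constant(X)).fit(disp=0)
print(np.exp(model.params[1:]))  # adjusted odds ratios per covariate
```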

To examine liver-related outcomes, patients who tested positive for HDV were propensity score–matched 1:2 with CHB patients who tested negative. Matching accounted for age, sex, race/ethnicity, HBeAg status, antiviral treatment, HCV and HIV coinfection, diabetes, and alcohol use. Patients with cirrhosis or hepatocellular carcinoma (HCC) at baseline were excluded. Incidence of cirrhosis, hepatic decompensation, and HCC was estimated using competing risks Nelson-Aalen methods.
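
The 1:2 propensity score matching can be illustrated with a greedy nearest-neighbor sketch on simulated data; the study's actual matching algorithm may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))        # stand-ins for age, sex, coinfection...
hdv_pos = rng.binomial(1, 0.2, 300)  # 1 = HDV positive, 0 = HDV negative

# Step 1: propensity score = P(HDV positive | covariates).
ps = LogisticRegression().fit(X, hdv_pos).predict_proba(X)[:, 1]

# Step 2: greedily match each HDV-positive patient to the 2 HDV-negative
# patients with the closest propensity scores, without replacement.
pool = {j: ps[j] for j in np.where(hdv_pos == 0)[0]}
matches = {}
for i in np.where(hdv_pos == 1)[0]:
    nearest = sorted(pool, key=lambda j: abs(pool[j] - ps[i]))[:2]
    matches[i] = nearest
    for j in nearest:
        del pool[j]  # each control is used at most once

print(len(matches), "cases matched 1:2")
```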

Among 27,548 veterans with CHB, only 16.1% underwent HDV testing. Of those tested, 3.25% were HDV positive. Testing rates were higher among patients who were HBeAg positive, on antiviral therapy, or identified as Asian or Pacific Islander.

Conversely, testing was significantly less common among patients with high-risk alcohol use, past or current drug use, cirrhosis at diagnosis, or HCV coinfection. In contrast, HIV coinfection was associated with increased odds of being tested.

Among those tested, HDV positivity was more likely in patients with HCV coinfection, cirrhosis, or a history of drug use. On multivariable analysis, these factors were independent predictors of HDV positivity.

In the matched cohort of 71 HDV-positive patients and 140 HDV-negative controls, the incidence of cirrhosis was more than 3-fold higher in HDV-positive patients (4.39 vs 1.30 per 100,000 person-years; P < .01), and hepatic decompensation was over 5 times more common (2.18 vs 0.41 per 100,000 person-years; P = .01). There was also a nonsignificant trend toward increased HCC risk in the HDV group.
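
The incidence figures are crude rates per unit of person-time. As a worked example, the counts below are invented, chosen only so the resulting rates land near those reported.

```python
def incidence_rate(events, person_years, per=100_000):
    """Crude incidence: events per `per` person-years of follow-up."""
    return events / person_years * per

# Invented counts (not the study's data) that roughly reproduce the
# reported cirrhosis rates and their ~3.4-fold ratio.
hdv_pos = incidence_rate(events=12, person_years=273_000)
hdv_neg = incidence_rate(events=9, person_years=692_000)
print(round(hdv_pos, 2), round(hdv_neg, 2), round(hdv_pos / hdv_neg, 1))
```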

“These findings align with existing studies and confirm that among a predominantly non-Asian US cohort of CHB patients, presence of concurrent HDV is associated with more severe liver disease progression,” the investigators wrote. “These observations, taken together with the low rates of HDV testing overall and particularly among high-risk individuals, emphasizes the need for greater awareness and novel strategies on how to improve HDV testing and diagnosis, particularly given that novel HDV therapies are on the near horizon.”

The study was supported by Gilead. The investigators disclosed additional relationships with Exact Sciences, GSK, Novo Nordisk, and others.

Timely Testing Using Reflex Tools

Hepatitis D virus (HDV) is an RNA “sub-virus” that infects patients with co-existing hepatitis B virus (HBV) infections. HDV infection currently affects approximately 15-20 million people worldwide but is an orphan disease in the United States with fewer than 100,000 individuals infected today.

Those with HDV have a 70% lifetime risk of hepatocellular carcinoma (HCC), cirrhosis, liver failure, death, or liver transplant. Yet no therapy for HDV is currently approved by the Food and Drug Administration (FDA) in the US, and only one therapy in the European Union carries full approval from the European Medicines Agency.

Despite HDV severity and limited treatment options, screening for HDV remains severely inadequate, often relying on sequential testing of only those individuals at high risk. HDV screening would benefit from a revamped approach that automatically reflexes testing when individuals are diagnosed with HBV: a positive hepatitis B surface antigen (HBsAg) result reflexes to total anti-HDV antibody testing, which in turn double reflexes to HDV-RNA polymerase chain reaction (PCR) quantitation. This is especially true in the Veterans Administration (VA)’s hospitals and clinics, where Wong and colleagues found very low rates of HDV testing among a national cohort of US veterans with chronic HBV.

This study highlights the importance of timely HDV testing using reflex tools to improve diagnosis and HDV treatment, reducing long-term risks of liver-related morbidity and mortality.

Robert G. Gish, MD, AGAF, is principal at Robert G Gish Consultants LLC, clinical professor of medicine at Loma Linda University, Loma Linda, Calif., and medical director of the Hepatitis B Foundation. His complete list of disclosures can be found at www.robertgish.com/about.

FROM GASTRO HEP ADVANCES


Safety Profile of GLP-1s ‘Reassuring’ in Upper Endoscopy


Glucagon-like peptide-1 receptor agonists (GLP-1RAs) are associated with retained gastric contents and aborted procedures among patients undergoing upper endoscopy, according to a meta-analysis of more than 80,000 patients.

Safety profiles, however, were comparable across groups, suggesting that prolonged fasting, rather than withholding GLP-1RAs, may be a sufficient management strategy, lead author Antonio Facciorusso, MD, PhD, of the University of Foggia, Italy, and colleagues reported.

“The impact of GLP-1RAs on slowing gastric motility has raised concerns in patients undergoing endoscopic procedures, particularly upper endoscopies,” the investigators wrote in Clinical Gastroenterology and Hepatology. “This is due to the perceived risk of aspiration of retained gastric contents in sedated patients and the decreased visibility of the gastric mucosa, which can reduce the diagnostic yield of the examination.”

The American Society of Anesthesiologists (ASA) recommends withholding GLP-1RAs before procedures or surgery, whereas AGA suggests an individualized approach, citing limited supporting data. 

A previous meta-analysis reported that GLP-1RAs mildly delayed gastric emptying, but clinical relevance remained unclear. 

The present meta-analysis aimed to clarify this uncertainty by analyzing 13 retrospective studies that involved 84,065 patients undergoing upper endoscopy. Outcomes were compared among GLP-1RA users vs non-users, including rates of retained gastric contents, aborted procedures, and adverse events. 

Patients on GLP-1RAs had significantly higher rates of retained gastric contents than non-users (odds ratio [OR], 5.56), a finding that held steady (OR, 4.20) after adjusting for age, sex, diabetes, body mass index, and other therapies. 
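
As context for how such an odds ratio is computed, here is a minimal sketch using an entirely hypothetical 2×2 table; the study's adjusted estimate (OR, 4.20) came from multivariable modeling, which this does not attempt to reproduce:

```python
# Unadjusted odds ratio from a 2x2 contingency table: OR = (a/b) / (c/d).
# The counts below are hypothetical and chosen only for illustration.
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """a/b = events/non-events among exposed; c/d = the same among controls."""
    return (a / b) / (c / d)

# Hypothetical: 50 of 1,000 GLP-1RA users vs 10 of 1,000 non-users with
# retained gastric contents.
print(f"Unadjusted OR: {odds_ratio(50, 950, 10, 990):.2f}")  # 5.21
```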

GLP-1RAs were also associated with an increased likelihood of aborted procedures (OR, 5.13; 1% vs 0.3%) and a higher need for repeat endoscopies (OR, 2.19; 1% vs 2%); however, Facciorusso and colleagues noted that these events, in absolute terms, were relatively uncommon.

“The rate of aborted and repeat procedures in the included studies was low,” the investigators wrote. “This meant that only for every 110 patients undergoing upper endoscopy while in GLP-1RA therapy would we observe an aborted procedure and only for every 120 patients would we need to repeat the procedure.”
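
The "one per N patients" framing is simply the reciprocal of the absolute event rate; a small arithmetic sketch follows (the rates here are the rounded figures from the text, so the output only approximates the authors' 110 and 120):

```python
# N = 1 / rate: a 1% event rate implies roughly one event per 100 patients.
def one_event_per(rate: float) -> float:
    return 1 / rate

print(one_event_per(0.01))   # 100.0 -- aborted procedures at the reported ~1%
print(one_event_per(0.009))  # ~111 -- an unrounded rate of about 0.9% would
                             # match the authors' "one per 110 patients"
```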

The overall safety profile of GLP-1RAs in the context of upper endoscopy remained largely reassuring, they added. Specifically, rates of bronchial aspiration were not significantly different between users and non-users. What’s more, no single study reported a statistically significant increase in major complications, including pulmonary adverse events, among GLP-1RA users. 

According to Facciorusso and colleagues, these findings suggest that retained gastric contents do not appear to substantially heighten the risk of serious harm, though further prospective studies are needed.

“Our comprehensive analysis indicates that, while the use of GLP-1RA results in higher rates of [retained gastric contents], the actual clinical impact appears to be limited,” they wrote. “Therefore, there is no strong evidence to support the routine discontinuation of the drug before upper endoscopy procedures.”

Instead, they supported the AGA task force’s recommendation for an individualized approach that avoids withholding GLP-1RAs unnecessarily, calling this “the best compromise.”

“Prolonging the duration of fasting for solids could represent the optimal approach in these patients, although this strategy requires further evaluation,” the investigators concluded.

The investigators disclosed no conflicts of interest.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Two Cystic Duct Stents Appear Better Than One

Article Type
Changed
Tue, 03/11/2025 - 16:13

Placing two cystic duct stents instead of one during endoscopic transpapillary gallbladder drainage (ETGBD) is associated with a lower rate of unplanned reintervention, according to a retrospective multicenter study.

These findings suggest that endoscopists should prioritize dual stent placement when feasible, and consider adding a second stent in patients who previously received a single stent, James D. Haddad, MD, of the University of Texas Southwestern, Dallas, and colleagues reported.

 

Dr. James D. Haddad

The American Gastroenterological Association (AGA) has recognized the role of endoscopic drainage in managing acute cholecystitis in high-risk patients, but specific guidance on optimal technique and follow-up remains unclear, the investigators wrote in Techniques and Innovations in Gastrointestinal Endoscopy.

“Despite accumulating data and increased interest in this technique, clear guidance on the ideal strategy for ETGBD is lacking,” Dr. Haddad and colleagues wrote. “For example, the optimal size, number, and follow-up of cystic duct stents for patients undergoing ETGBD has not been well established.”

To address this knowledge gap, the investigators analyzed data from 75 patients at five academic medical centers who had undergone ETGBD between June 2013 and October 2022. Patients were divided into two groups based on whether they received one or two cystic duct stents. 

The primary outcome was clinical success, defined as symptom resolution without requiring another drainage procedure. Secondary outcomes included technical success (defined as successful stent placement), along with rates of adverse events and unplanned reinterventions. 

Out of the 75 patients, 59 received a single stent, while 16 received dual stents. The median follow-up time was 407 days overall, with a longer follow-up in the single-stent group (433 days), compared with the double-stent group (118 days).

Clinical success was reported in 81.3% of cases, while technical success was achieved in 88.2% of cases. 

Patients who received two stents had significantly lower rates of unplanned reintervention, compared with those who received a single stent (0% vs 25.4%; P = .02). The median time to unplanned reintervention in the single-stent group was 210 days.
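
For the statistically inclined, this group comparison can be approximated with Fisher's exact test. The counts below are reconstructed from the reported percentages (25.4% of 59 single-stent patients ≈ 15 events; 0 of 16 dual-stent patients), so this is an approximation rather than a replication of the authors' analysis:

```python
# Fisher's exact test on counts reconstructed from the reported percentages.
from scipy.stats import fisher_exact

table = [[0, 16],   # dual-stent group: reinterventions, no reintervention
         [15, 44]]  # single-stent group: reinterventions, no reintervention
_, p_value = fisher_exact(table)
print(f"Two-sided P = {p_value:.3f}")  # ~.03 with these reconstructed counts;
                                       # the paper reports P = .02
```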

Use of a 7 French stent was strongly associated with placement of two stents (odds ratio [OR], 15.5; P = .01). Similarly, patients with a prior percutaneous cholecystostomy tube were significantly more likely to have two stents placed (OR, 10.8; P = .001).

Adverse events were uncommon, with an overall rate of 6.7% that did not differ significantly between groups. Post-endoscopic retrograde cholangiopancreatography pancreatitis was the most common adverse event, occurring in two patients in the single-stent group and one patient in the double-stent group. There were no reported cases of cystic duct or gallbladder perforation.

“In conclusion,” the investigators wrote, “ETGBD with dual transpapillary gallbladder stenting is associated with a lower rate of unplanned reinterventions, compared with that with single stenting, and has a low rate of adverse events. Endoscopists performing ETGBD should consider planned exchange of solitary transpapillary gallbladder stents or interval ERCP for reattempted placement of a second stent if placement of two stents is not possible at the index ERCP.”

The investigators disclosed relationships with Boston Scientific, Motus GI, and ConMed.

FROM TECHNIQUES AND INNOVATIONS IN GASTROINTESTINAL ENDOSCOPY


Circulating Proteins Predict Crohn’s Disease Years in Advance

From Treatment to Prevention
Article Type
Changed
Tue, 03/11/2025 - 16:01

Circulating blood proteins could enable early identification of Crohn’s disease (CD) years before clinical signs, according to investigators.

The 29-protein biosignature, which was validated across multiple independent cohorts, could potentially open doors to new preclinical interventions, lead author Olle Grännö, MD, of Örebro University in Sweden, and colleagues reported. 

“Predictive biomarkers of future clinical onset of active inflammatory bowel disease could detect the disease during ‘a window of opportunity’ when the immune dysregulation is potentially reversible,” the investigators wrote in Gastroenterology.

Preclinical biomarker screening has proven effective in other immune-mediated diseases, such as type 1 diabetes, where risk stratification using autoantibodies enabled early intervention that delayed disease onset, they noted. 

Previous studies suggested similar potential for inflammatory bowel disease (IBD) via predictive autoantibodies and serum proteins, although the accuracy of these markers was not validated in external cohorts. The present study aimed to fill this validation gap.

First, the investigators measured 178 plasma proteins in blood samples taken from 312 individuals before they were diagnosed with IBD. Using machine learning, Dr. Grännö and colleagues compared these samples with those from matched controls who remained free of IBD through follow-up. This process revealed the 29-protein signature. 

In the same discovery cohort, the panel of 29 proteins differentiated preclinical CD cases from controls with an area under the curve (AUC) of 0.85. The signature was then validated in an independent preclinical cohort of CD patients, with an AUC of 0.87. 

While accuracy increased in proximity to clinical disease onset, the model was still highly predictive up to 16 years before CD diagnosis, at which time the AUC was 0.82. The panel showed perfect performance among newly diagnosed CD patients, with an AUC of 1.0, supporting clinical relevance.
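
As a reminder of what these AUC figures measure, the area under the receiver operating characteristic curve equals the probability that a randomly chosen case receives a higher risk score than a randomly chosen control. A minimal sketch on synthetic data follows (the labels and scores below are invented, not the study's protein measurements):

```python
# AUC on toy data: the fraction of case-control pairs ranked correctly.
from sklearn.metrics import roc_auc_score

y_true = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = later diagnosed with CD (synthetic)
y_score = [0.9, 0.8, 0.6, 0.4, 0.5, 0.3, 0.2, 0.1]  # risk scores (synthetic)
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")  # 0.94: 15 of 16 pairs
                                                      # ranked correctly
```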

Dr. Olle Grännö (left) and Dr. Jonas Halfvarson are, respectively, the lead and principal authors of a study demonstrating how circulating blood proteins could enable early identification of Crohn's disease.



Predictive power was statistically significant but less compelling among individuals with preclinical ulcerative colitis (UC). In this IBD subgroup, AUC for the discovery and validation cohorts was 0.77 and 0.67, respectively, while newly diagnosed patients had an AUC of 0.95.

“In preclinical samples, downregulated (but not upregulated) proteins related to gut barrier integrity and macrophage functionality correlated with time to diagnosis of CD,” Dr. Grännö and colleagues wrote. “Contrarily, all proteins associated with preclinical UC were upregulated, and only one protein marker correlated with the time to diagnosis.”

These findings suggest that disruptions in gut barrier integrity and macrophage function precede clinical CD onset, they explained, potentially serving as an early signal of inflammation-driven intestinal damage. In contrast, the preclinical UC signature primarily involved upregulated inflammatory markers.

Dr. Grännö and colleagues also examined the influence of genetic and environmental factors by comparing preclinical IBD signatures in unrelated and related twin pairs. 

The CD biosignature had an AUC of 0.89 when comparing individuals with preclinical CD to matched external (unrelated) healthy twins. Predictive ability dropped significantly (AUC = 0.58) when comparing CD cases to their own healthy twin siblings, suggesting that genetic and shared environmental factors have a “predominant influence” on protein dysregulation. 

In contrast, AUC among unrelated vs related twin controls was more similar for UC, at 0.76 and 0.64, respectively, indicating “a limited impact” of genetic and environmental factors on the protein signature.

Altogether, this study reinforces the concept of a long preclinical phase in CD, and highlights the potential for early detection and intervention, according to the investigators.

“The long preclinical period in CD endorses the adoption of early preventive strategies (e.g., diet alterations and medication) to potentially attenuate disease progression and improve the natural history of CD,” they concluded.

This study was funded by the Swedish Research Council, the Swedish Foundation for Strategic Research, the Örebro University Hospital Research Foundation, and others. The investigators disclosed relationships with Pfizer, Janssen, AbbVie, and others.


Preclinical biomarker discovery for inflammatory bowel diseases (IBD) is now a key area of study, aiming to identify the earliest stages of disease development and find opportunities for early intervention. The study by Grännö and colleagues taps into this area and provides a significant advancement in the early detection of Crohn’s disease (CD) with a validated 29-plasma protein biomarker signature.

With an AUC of up to 0.87 in preclinical CD cases, and 0.82 as early as 16 years before diagnosis, these findings strongly support the notion that CD has a prolonged preclinical phase detectable many years in advance. Importantly, the identified protein signatures also shed light on distinct pathophysiological mechanisms between CD and ulcerative colitis (UC), with CD characterized by early disruptions in gut barrier integrity and macrophage function, while UC was more marked by upregulated inflammatory markers.

For clinical practitioners, these findings have strong transformative potential. Pending further validation in larger cohorts and broader clinical accessibility, preclinical biomarker screening could become a routine tool for risk stratification in at-risk individuals, such as those with a strong family history or genetic predisposition. This could enable early interventions, including dietary modifications and potentially prophylactic therapies, to delay or even prevent disease onset. Given that similar approaches have proven effective in type 1 diabetes, applying this strategy to IBD could significantly alter disease progression and patient outcomes.

Challenges remain before implementation in clinical practice could be realized. Standardized thresholds for risk assessment, cost-effectiveness analyses, and potential therapeutic strategies tailored to biomarker-positive individuals require further exploration. However, this study provides important data needed for a paradigm shift in IBD management — one that moves from reactive treatment to proactive prevention.

Arno R. Bourgonje, MD, PhD, is a postdoctoral fellow at the Division of Gastroenterology, Icahn School of Medicine at Mount Sinai, New York, and at the University Medical Center Groningen in Groningen, the Netherlands. He is involved in the European INTERCEPT consortium, which is focused on prediction and prevention of IBD. He reported no conflicts of interest.

FROM GASTROENTEROLOGY


New Risk Score Might Improve HCC Surveillance Among Cirrhosis Patients

Key Takeaways
Article Type
Changed
Fri, 02/07/2025 - 16:25

A newly validated risk stratification tool could potentially improve hepatocellular carcinoma (HCC) surveillance among patients with cirrhosis, according to a recent phase 3 biomarker validation study.

The Prognostic Liver Secretome Signature with Alpha-Fetoprotein plus Age, Male Sex, Albumin-Bilirubin, and Platelets (PAaM) score integrates both molecular and clinical variables to effectively classify cirrhosis patients by their risk of developing HCC, potentially sparing low-risk patients from unnecessary surveillance, lead author Naoto Fujiwara, MD, PhD, of the University of Texas Southwestern Medical Center, Dallas, and colleagues reported.

“Hepatocellular carcinoma risk stratification is an urgent unmet need for cost-effective screening and early detection in patients with cirrhosis,” the investigators wrote in Gastroenterology. “This study represents the largest and first phase 3 biomarker validation study that establishes an integrative molecular/clinical score, PAaM, for HCC risk stratification.” 

The PAaM score combines an 8-protein prognostic liver secretome signature with traditional clinical variables, including alpha-fetoprotein (AFP) levels, age, sex, albumin-bilirubin levels, and platelet counts. The score stratifies patients into high-, intermediate-, and low-risk categories.
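
To make the stratification logic concrete, here is a purely illustrative sketch of a three-tier classifier of the kind PAaM implements. The composite score and cutoffs are hypothetical; the published score is built from the 8-protein signature plus the clinical variables listed above, with its own validated thresholds:

```python
# Hypothetical three-tier risk stratification (illustrative cutoffs only).
def risk_tier(composite_score: float,
              low_cutoff: float = 0.3,
              high_cutoff: float = 0.7) -> str:
    """Map a composite risk score in [0, 1] to a surveillance tier."""
    if composite_score >= high_cutoff:
        return "high"
    if composite_score >= low_cutoff:
        return "intermediate"
    return "low"

print(risk_tier(0.82))  # "high" -- candidate for intensified screening
print(risk_tier(0.15))  # "low" -- candidate for de-intensified screening
```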

The PAaM score was validated using 2 independent prospective cohorts in the United States: the statewide Texas Hepatocellular Carcinoma Consortium (THCCC) and the nationwide Hepatocellular Carcinoma Early Detection Strategy (HEDS). Across both cohorts, 3,484 patients with cirrhosis were followed over time to assess the development of HCC.

In the Texas cohort, comprising 2,156 patients with cirrhosis, PAaM classified 19% of patients as high risk, 42% as intermediate risk, and 39% as low risk. The annual incidence of HCC was significantly different across these groups, with high-risk patients experiencing a 5.3% incidence rate, versus 2.7% for intermediate-risk patients and 0.6% for low-risk patients (P < .001). Compared with those in the low-risk group, high-risk patients had a sub-distribution hazard ratio (sHR) of 7.51 for developing HCC, while intermediate-risk patients had an sHR of 4.20.
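
To translate these annual incidence figures into a horizon clinicians may find more intuitive, here is a back-of-envelope sketch converting them to approximate 5-year cumulative risks, assuming a constant annual rate and ignoring competing risks (which the study's sub-distribution hazard ratios do account for):

```python
# Approximate cumulative risk from a constant annual incidence rate.
def cumulative_risk(annual_rate: float, years: int) -> float:
    return 1 - (1 - annual_rate) ** years

for group, rate in [("high", 0.053), ("intermediate", 0.027), ("low", 0.006)]:
    print(f"{group}: ~{cumulative_risk(rate, 5):.1%} over 5 years")
# high: ~23.8%, intermediate: ~12.8%, low: ~3.0%
```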

In the nationwide HEDS cohort, which included 1,328 patients, PAaM similarly stratified 15% of participants as high risk, 41% as intermediate risk, and 44% as low risk. Annual HCC incidence rates were 6.2%, 1.8%, and 0.8% for high-, intermediate-, and low-risk patients, respectively (P < .001). Among these patients, sub-distribution hazard ratios for HCC were 6.54 for high-risk patients and 1.77 for intermediate-risk patients, again underscoring the tool’s potential to identify individuals at elevated risk of developing HCC.

The PAaM score outperformed existing models like the aMAP score and the PLSec-AFP molecular marker alone, with consistent superiority across a diverse range of cirrhosis etiologies, including metabolic dysfunction–associated steatotic liver disease (MASLD), alcohol-associated liver disease (ALD), and cured hepatitis C virus (HCV) infection. 

Based on these findings, high-risk patients might benefit from more intensive screening strategies, Fujiwara and colleagues suggested, while intermediate-risk patients could continue with semi-annual ultrasound-based screening. Of note, low-risk patients—comprising about 40% of the study population—could potentially avoid frequent screenings, thus reducing healthcare costs and minimizing unnecessary interventions.

“This represents a significant step toward the clinical translation of an individual risk-based HCC screening strategy to improve early HCC detection and reduce HCC mortality,” the investigators concluded.

This study was supported by the National Cancer Institute, Veterans Affairs, the Japan Society for the Promotion of Science, and others. The investigators disclosed additional relationships with Boston Scientific, Sirtex, Bayer, and others.


Nancy S. Reau, MD, AGAF, of RUSH University in Chicago, highlighted both the promise and challenges of the PAaM score for HCC risk stratification, emphasizing that current liver cancer screening strategies remain inadequate, with only about 25% of patients receiving guideline-recommended surveillance.

Dr. Nancy S. Reau

“An easy-to-apply cost effective tool could significantly improve screening strategies, which should lead to earlier identification of liver cancer—at a time when curative treatment options are available,” Reau said. 

PAaM, however, may be impractical for routine use.

“A tool that classifies people into 3 different screening strategies and requires longitudinal applications and re-classification could add complexity,” she explained, predicting that “clinicians aren’t going to use it correctly.”

Reau was particularly concerned about the need for repeated assessments over time. 

“People change,” she said. “A low-risk categorization by PAaM at the age of 40 may no longer be relevant at 50 or 60 as liver disease progresses.” 

Although the tool is “exciting,” Reau suggested that it is also “premature” until appropriate reclassification intervals are understood. 

She also noted that some patients still develop HCC despite being considered low risk, including cases of HCC that develop in non-cirrhotic HCV infection or MASLD.

Beyond these clinical considerations, Reau pointed out several barriers to implementing PAaM in routine practice, starting with the under-recognition of cirrhosis. Even if patients are identified, ensuring that both clinicians and patients adhere to screening recommendations remains a challenge. 

Finally, financial considerations may pose obstacles. 

“If some payers cover the tool and others do not, it will be very difficult to implement,” Reau concluded.

Reau reported no conflicts of interest.

Publications
Topics
Sections
Body

Nancy S. Reau, MD, AGAF, of RUSH University in Chicago, highlighted both the promise and challenges of the PAaM score for HCC risk stratification, emphasizing that current liver cancer screening strategies remain inadequate, with only about 25% of patients receiving guideline-recommended surveillance.

Dr. Nancy S. Reau

“An easy-to-apply cost effective tool could significantly improve screening strategies, which should lead to earlier identification of liver cancer—at a time when curative treatment options are available,” Reau said. 

PAaM, however, may be impractical for routine use.

“A tool that classifies people into 3 different screening strategies and requires longitudinal applications and re-classification could add complexity,” she explained, predicting that “clinicians aren’t going to use it correctly.

Reau was particularly concerned about the need for repeated assessments over time. 

“People change,” she said. “A low-risk categorization by PAaM at the age of 40 may no longer be relevant at 50 or 60 as liver disease progresses.” 

Although the tool is “exciting,” Reau suggested that it is also “premature” until appropriate reclassification intervals are understood. 

She also noted that some patients still develop HCC despite being considered low risk, including cases of HCC that develop in non-cirrhotic HCV infection or MASLD.

Beyond the above clinical considerations, Dr. Reau pointed out several barriers to implementing PAaM in routine practice, starting with the under-recognition of cirrhosis. Even if patients are identified, ensuring both clinicians and patients adhere to screening recommendations remains a challenge. 

Finally, financial considerations may pose obstacles. 

“If some payers cover the tool and others do not, it will be very difficult to implement,” Dr. Reau concluded.

Reau reported no conflicts of interest.

Body

Nancy S. Reau, MD, AGAF, of RUSH University in Chicago, highlighted both the promise and challenges of the PAaM score for HCC risk stratification, emphasizing that current liver cancer screening strategies remain inadequate, with only about 25% of patients receiving guideline-recommended surveillance.

Dr. Nancy S. Reau

“An easy-to-apply cost effective tool could significantly improve screening strategies, which should lead to earlier identification of liver cancer—at a time when curative treatment options are available,” Reau said. 

PAaM, however, may be impractical for routine use.

“A tool that classifies people into 3 different screening strategies and requires longitudinal applications and re-classification could add complexity,” she explained, predicting that “clinicians aren’t going to use it correctly.

Reau was particularly concerned about the need for repeated assessments over time. 

“People change,” she said. “A low-risk categorization by PAaM at the age of 40 may no longer be relevant at 50 or 60 as liver disease progresses.” 

Although the tool is “exciting,” Reau suggested that it is also “premature” until appropriate reclassification intervals are understood. 

She also noted that some patients still develop HCC despite being considered low risk, including cases of HCC that develop in non-cirrhotic HCV infection or MASLD.

Beyond the above clinical considerations, Dr. Reau pointed out several barriers to implementing PAaM in routine practice, starting with the under-recognition of cirrhosis. Even if patients are identified, ensuring both clinicians and patients adhere to screening recommendations remains a challenge. 

Finally, financial considerations may pose obstacles. 

“If some payers cover the tool and others do not, it will be very difficult to implement,” Dr. Reau concluded.

Reau reported no conflicts of interest.

Title
Key Takeaways
Key Takeaways

A newly validated risk stratification tool could potentially improve hepatocellular carcinoma (HCC) surveillance among patients with cirrhosis, based to a recent phase 3 biomarker validation study.

The Prognostic Liver Secretome Signature with Alpha-Fetoprotein plus Age, Male Sex, Albumin-Bilirubin, and Platelets (PAaM) score integrates both molecular and clinical variables to effectively classify cirrhosis patients by their risk of developing HCC, potentially sparing low-risk patients from unnecessary surveillance, lead author Naoto Fujiwara, MD, PhD, of the University of Texas Southwestern Medical Center, Dallas, and colleagues reported.

“Hepatocellular carcinoma risk stratification is an urgent unmet need for cost-effective screening and early detection in patients with cirrhosis,” the investigators wrote in Gastroenterology. “This study represents the largest and first phase 3 biomarker validation study that establishes an integrative molecular/clinical score, PAaM, for HCC risk stratification.” 

The PAaM score combines an 8-protein prognostic liver secretome signature with traditional clinical variables, including alpha-fetoprotein (AFP) levels, age, sex, albumin-bilirubin levels, and platelet counts. The score stratifies patients into high-, intermediate-, and low-risk categories.

The PAaM score was validated using 2 independent prospective cohorts in the United States: the statewide Texas Hepatocellular Carcinoma Consortium (THCCC) and the nationwide Hepatocellular Carcinoma Early Detection Strategy (HEDS). Across both cohorts, 3,484 patients with cirrhosis were followed over time to assess the development of HCC.

In the Texas cohort, comprising 2,156 patients with cirrhosis, PAaM classified 19% of patients as high risk, 42% as intermediate risk, and 39% as low risk. The annual incidence of HCC was significantly different across these groups, with high-risk patients experiencing a 5.3% incidence rate, versus 2.7% for intermediate-risk patients and 0.6% for low-risk patients (P less than .001). Compared with those in the low-risk group, high-risk patients had sub-distribution hazard ratio (sHR) of 7.51 for developing HCC, while intermediate-risk patients had an sHR of 4.20.

In the nationwide HEDS cohort, which included 1,328 patients, PAaM similarly stratified 15% of participants as high risk, 41% as intermediate risk, and 44% as low risk. Annual HCC incidence rates were 6.2%, 1.8%, and 0.8% for high-, intermediate-, and low-risk patients, respectively (P less than .001). Among these patients, sub-distribution hazard ratios for HCC were 6.54 for high-risk patients and 1.77 for intermediate-risk patients, again underscoring the tool’s potential to identify individuals at elevated risk of developing HCC.

The PAaM score outperformed existing models like the aMAP score and the PLSec-AFP molecular marker alone, with consistent superiority across a diverse range of cirrhosis etiologies, including metabolic dysfunction–associated steatotic liver disease (MASLD), alcohol-associated liver disease (ALD), and cured hepatitis C virus (HCV) infection. 

Based on these findings, high-risk patients might benefit from more intensive screening strategies, Fujiwara and colleagues suggested, while intermediate-risk patients could continue with semi-annual ultrasound-based screening. Of note, low-risk patients—comprising about 40% of the study population—could potentially avoid frequent screenings, thus reducing healthcare costs and minimizing unnecessary interventions.

“This represents a significant step toward the clinical translation of an individual risk-based HCC screening strategy to improve early HCC detection and reduce HCC mortality,” the investigators concluded. This study was supported by the National Cancer Institute, Veterans Affairs, the Japan Society for the Promotion of Science, and others. The investigators disclosed additional relationships with Boston Scientific, Sirtex, Bayer, and others.

Random Biopsy Improves IBD Dysplasia Detection, With Caveats

Incremental Value of Random Biopsies Questioned?
Article Type
Changed
Fri, 01/31/2025 - 15:29

Random biopsy during colonoscopy improves dysplasia detection among patients with inflammatory bowel disease (IBD), but the level of benefit depends on equipment and disease characteristics, according to a recent review and meta-analysis.

Random biopsies collected in studies after 2011 provided limited additional yield, suggesting that high-definition equipment alone may be sufficient to achieve a high detection rate, lead author Li Gao, MD, of Air Force Medical University, Xi’an, China, and colleagues reported. In contrast, patients with primary sclerosing cholangitis (PSC) consistently benefited from random biopsy, offering clearer support for use in this subgroup.

“Random biopsy has been proposed as a strategy that may detect dysplastic lesions that cannot be identified endoscopically, thus minimizing the occurrence of missed colitis-associated dysplasia during colonoscopy,” the investigators wrote in Clinical Gastroenterology and Hepatology. “However, the role of random biopsies in colonoscopic surveillance for patients with IBD remains a topic of ongoing debate.”

The SCENIC guidelines remain inconclusive on the role of random biopsy in IBD surveillance, the investigators noted, while other guidelines recommend random biopsy with high-definition white light endoscopy, but not chromoendoscopy. 

The present meta-analysis aimed to characterize the impact of random biopsy on dysplasia detection. The investigators aggregated prospective and retrospective studies published in English through September 2023, all of which compared random biopsy with other surveillance techniques and reported the proportion of dysplasia detected exclusively through random biopsy. 

“To the best of our knowledge, this systematic review and meta-analysis was the first comprehensive summary of the additional yield of random biopsy during colorectal cancer surveillance in patients with IBD,” Dr. Gao and colleagues noted.

The final dataset comprised 37 studies with 9,051 patients undergoing colorectal cancer surveillance for IBD. Patients had diverse baseline characteristics, including different proportions of ulcerative colitis and Crohn’s disease, as well as varying prevalence of PSC, a known risk factor for colorectal neoplasia.

The pooled additional yield of random biopsy was 10.34% in per-patient analysis and 16.20% in per-lesion analysis, meaning that approximately 1 in 10 patients and 1 in 6 lesions were detected exclusively through random biopsy. Despite these benefits, detection rates were relatively low: 1.31% per patient and 2.82% per lesion.
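Because “additional yield” and “detection rate” answer different questions, a toy calculation may help; the counts below are invented to mirror the pooled percentages, not drawn from the meta-analysis itself.

```python
# Illustrative arithmetic only; counts are made up to echo the pooled figures.
patients_surveilled = 10_000
dysplasia_any_method = 1_000   # patients with dysplasia found by any technique
dysplasia_random_only = 103    # of those, found exclusively on random biopsy
dysplasia_on_random = 131      # patients positive on random biopsy at all

additional_yield = dysplasia_random_only / dysplasia_any_method  # ~10.3% of detected cases
detection_rate = dysplasia_on_random / patients_surveilled       # ~1.31% of everyone surveilled
print(f"{additional_yield:.2%}, {detection_rate:.2%}")
```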

Subgroup analyses showed a decline in random biopsy additional yield over time. Studies conducted before 2011 reported an additional yield of 14.43% in per-patient analysis, compared to just 0.42% in studies conducted after 2011. This decline coincided with the widespread adoption of high-definition endoscopy.

PSC status strongly influenced detection rates throughout the study period. In patients without PSC (0%-10% PSC prevalence), the additional yield of random biopsy was 4.83% in per-patient analysis and 11.23% in per-lesion analysis. In studies where all patients had PSC, the additional yield increased dramatically to 56.05% and 45.22%, respectively.

“These findings highlight the incremental benefits of random biopsy and provide valuable insights into the management of endoscopic surveillance in patients with IBD,” the investigators wrote. “Considering the decreased additional yields in studies initiated after 2011, and the influence of PSC, endoscopy centers lacking full high-definition equipment should consider incorporating random biopsy in the standard colonoscopy surveillance for IBD patients, especially in those with PSC.” This study was supported by the National Key R&D Program of China, the Key Research and Development Program of Shaanxi Province, and the Nanchang High-Level Scientific and Technological Innovation Talents “Double Hundred Plan” project. The investigators disclosed no conflicts of interest.

Patients with inflammatory bowel disease (IBD) with colonic involvement are at a two- to threefold increased risk of colorectal cancer (CRC) compared with the general population. The development and progression of dysplasia in these patients do not follow the typical adenoma-carcinoma sequence; rather, patients with IBD at increased risk of CRC may have field cancerization changes. Historically, these mucosal changes have been difficult to visualize endoscopically, at least with standard-definition endoscopes. As a result, systematic, four-quadrant, random biopsies — 8 in each segment of the colon, totaling 32 biopsies — are recommended for dysplasia detection, and the practice has been widely adopted. Over time, significant advances in the management of IBD, including improved colonoscopic resolution, adjunct surveillance techniques, attention to the quality of colonoscopic exams, and evolving treatments and treatment targets, have reduced the risk of CRC in patients with IBD. Accordingly, the value of random biopsies for dysplasia surveillance in patients with colonic IBD has been questioned.

Dr. Siddharth Singh

In this context, the systematic review and meta-analysis from Gao and colleagues provides critical insights into the yield of random biopsies for dysplasia surveillance in patients with IBD. Through a detailed analysis of 37 studies published between 2003 and 2023, covering 9,051 patients who underwent dysplasia surveillance with random biopsies, they ascertained the incremental yield of random biopsies. Overall, 1.3% of patients who underwent random biopsies were found to have dysplasia. Of patients with dysplasia, 1 in 10 were detected only on random biopsies; on per-lesion analysis, 1 in 6 dysplastic lesions were detected only on random biopsies. Interestingly, this yield varied markedly by era, a surrogate for the quality of colonoscopy. In studies fully enrolled and published before 2011 (with the majority of patients recruited in the 1990s to early 2000s), the per-patient incremental yield of random biopsies was 14%; this dropped precipitously to 0.4% in studies published after 2011 (with the majority recruited in the late 2000s to 2010s). The incremental yield remained markedly high in studies with a high proportion of patients with primary sclerosing cholangitis (PSC), a condition consistently associated with a four- to sixfold higher risk of CRC in patients with IBD.

These findings lend support to the notion that improvements in endoscopy equipment, with wide adoption of high-definition white-light colonoscopes and an emphasis on the quality of endoscopic examination, may be enabling detection of previously “invisible” dysplastic lesions, leading to a markedly lower incremental yield of random biopsies in the current era. This questions the utility of routinely collecting 32 random biopsies during a surveillance exam for a patient with IBD at increased risk of CRC (as long as a thorough, high-quality exam is performed), though subpopulations such as patients with PSC may still benefit. Large ongoing trials comparing the yield of targeted biopsies vs targeted plus random biopsies in patients with IBD undergoing dysplasia surveillance with high-definition colonoscopes will help to definitively address this question.

Siddharth Singh, MD, MS, is associate professor of medicine and director of the UCSD IBD Center in the division of gastroenterology, University of California, San Diego. He declares no conflicts of interest relative to this article.

Hispanic Patients Face Disparities in MASLD

Article Type
Changed
Fri, 01/17/2025 - 00:04

Hispanic adults in the US experience a significantly higher risk of metabolic dysfunction–associated steatotic liver disease (MASLD) and metabolic dysfunction–associated steatohepatitis (MASH) compared with non-Hispanic adults, according to a new systematic review and meta-analysis.

These findings underscore worsening health disparities in MASLD management and outcomes in this patient population, Kaleb Tesfai, BS, of the University of California, San Diego, and colleagues reported.

Previously, a 2018 meta-analysis found that Hispanic individuals had a higher MASLD prevalence than non-Hispanic White and Black individuals, along with an elevated relative risk of MASH. 

“In the setting of the evolving obesity epidemic, prevalence of MASLD has increased and characteristics of patient populations of interest have changed since the time of this prior meta-analysis,” Mr. Tesfai and colleagues wrote in Clinical Gastroenterology and Hepatology. “Importantly, MASH has become a leading indication for liver transplant, thereby impacting long-term clinical outcomes. As such, accurate, updated prevalence rates and relative risk estimates of MASLD, MASH, advanced fibrosis/cirrhosis, and clinical outcomes for Hispanic adults in the US remain poorly characterized.”

The present meta-analysis focused specifically on Hispanic adults in the United States, comparing their disease prevalence, severity, and risk with those of non-Hispanic adults. Twenty-two studies, conducted between January 1, 2010, and December 31, 2023, were included, comprising 756,088 participants, of whom 62,072 were Hispanic. 

Study eligibility required reported data on the prevalence of MASLD, MASH, or advanced fibrosis, as well as racial or ethnic subgroup analyses. Studies were excluded if they did not use validated diagnostic methods, such as liver biopsy or imaging, or if they lacked stratification by Hispanic ethnicity. Prevalence estimates and relative risks were calculated using random-effects models. The analysis also accounted for potential confounding factors, including demographic characteristics, metabolic comorbidities, and social determinants of health (SDOH).
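As a sketch of how such pooling works, the snippet below implements DerSimonian-Laird random-effects pooling of study prevalences on the logit scale, a standard approach for proportions; the per-study counts are hypothetical and not taken from this meta-analysis.

```python
# DerSimonian-Laird random-effects pooling of prevalences (logit scale).
import math

def pool_prevalence(events, totals):
    """Return (pooled prevalence, tau^2) from per-study event counts."""
    y, v = [], []
    for e, n in zip(events, totals):
        p = e / n
        y.append(math.log(p / (1 - p)))   # logit of the study prevalence
        v.append(1 / e + 1 / (n - e))     # approximate variance of the logit
    w = [1 / vi for vi in v]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)          # between-study variance
    w_re = [1 / (vi + tau2) for vi in v]             # random-effects weights
    pooled_logit = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1 / (1 + math.exp(-pooled_logit)), tau2

# Hypothetical per-study MASLD counts among Hispanic participants:
prev, tau2 = pool_prevalence([120, 300, 55], [280, 800, 150])
print(f"pooled prevalence: {prev:.1%} (tau^2 = {tau2:.3f})")
```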

The pooled prevalence of MASLD among Hispanic adults was 41% (95% CI, 30%-52%), compared with 27% in non-Hispanic populations, reflecting a relative risk (RR) of 1.50 (95% CI, 1.32-1.69). For MASH, the pooled prevalence among Hispanic adults with MASLD was 61% (95% CI, 39%-82%), with an RR of 1.42 (95% CI, 1.04-1.93), compared with non-Hispanic adults.

“Our systematic review and meta-analysis highlights the worsening health disparities experienced by Hispanic adults in the US, with significant increase in the relative risk of MASLD and MASH in contemporary cohorts compared with prior estimates,” the investigators wrote. 

Despite these elevated risks for MASLD and MASH, advanced fibrosis and cirrhosis did not show statistically significant differences between Hispanic and non-Hispanic populations. 

The study also characterized the relationship between SDOH and the detected health disparities. Adjusting for factors such as income, education, and health care access eliminated the independent association between Hispanic ethnicity and MASLD risk, suggesting that these structural inequities play a meaningful role in disease disparities. 

Still, genetic factors, including PNPLA3 and TM6SF2 risk alleles, were identified as contributors to the higher disease burden in Hispanic populations.

Mr. Tesfai and colleagues called for prospective studies with standardized outcome definitions to better understand risks of advanced fibrosis and cirrhosis, as well as long-term clinical outcomes. In addition, they recommended further investigation of SDOH to determine optimal intervention targets.

“Public health initiatives focused on increasing screening for MASLD and MASH and enhancing health care delivery for this high-risk group are critically needed to optimize health outcomes for Hispanic adults in the US,” they concluded. This study was supported by various institutes at the National Institutes of Health, Gilead Sciences, and the SDSU-UCSD CREATE Partnership. The investigators disclosed additional relationships with Eli Lilly, Galmed, Pfizer, and others.

Liver Stiffness Measurement Predicts Long-Term Outcomes in Pediatric Biliary Atresia

A Valuable New Tool
Article Type
Changed
Fri, 01/10/2025 - 12:13

Liver stiffness measurement (LSM) using vibration-controlled transient elastography (VCTE) predicts long-term outcomes among pediatric patients with biliary atresia, according to investigators.

These findings suggest that LSM may serve as a noninvasive tool for risk stratification and treatment planning in this population, reported lead author Jean P. Molleston, MD, of Indiana University School of Medicine, Indianapolis, and colleagues.

Dr. Jean P. Molleston

“Biliary atresia is frequently complicated by hepatic fibrosis with progression to cirrhosis and portal hypertension manifested by ascites, hepatopulmonary syndrome, and variceal bleeding,” the investigators wrote in Gastroenterology. “The ability to predict these outcomes can inform clinical decision-making.”

To this end, VCTE has been gaining increasing support in the pediatric setting.

“Advantages of VCTE over liver biopsy include convenience, cost, sampling bias, and risk,” the investigators wrote. “VCTE potentially allows (1) fibrosis estimation, (2) prediction of portal hypertension complications/survival, and (3) ability to noninvasively monitor liver stiffness as a fibrosis surrogate.”

The present multicenter study aimed to gauge the prognostic utility of VCTE among 254 patients, aged 21 years or younger, with biliary atresia. All patients had a valid baseline LSM, plus longitudinal clinical and laboratory data drawn from studies by the Childhood Liver Disease Research Network (ChiLDReN). Liver stiffness was assessed noninvasively with FibroScan devices, adhering to protocols that required at least 10 valid measurements and a variability of less than 30%.
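A minimal sketch of that validity rule follows; the study’s exact variability metric is not stated here, so the common FibroScan criterion of interquartile range divided by median is an assumption.

```python
# Hedged sketch of the exam-validity rule described above: >= 10 valid
# readings and variability < 30% (assumed here to mean IQR/median).
import statistics

def lsm_exam_is_valid(readings_kpa: list) -> bool:
    if len(readings_kpa) < 10:
        return False
    med = statistics.median(readings_kpa)
    q1, _, q3 = statistics.quantiles(readings_kpa, n=4)  # quartiles
    return (q3 - q1) / med < 0.30

print(lsm_exam_is_valid([6.8, 7.1, 7.4, 6.9, 7.0, 7.3, 7.2, 6.7, 7.5, 7.0]))  # True
```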

The primary outcomes were survival with native liver (SNL), defined as the time to liver transplantation or death, and a composite measure of liver-related events, including the first occurrence of transplantation, death, ascites, variceal bleeding, or hepatopulmonary syndrome. Secondary outcomes focused on the trajectory of platelet decline, a marker of disease progression. The study also explored the relationship between baseline LSM and conventional biomarkers, including platelet count, albumin, and bilirubin.

LSM was a strong predictor of long-term outcomes. Specifically, Kaplan-Meier analysis showed significant differences in 5-year SNL across LSM strata (P < .001). Children with LSM less than 10 kPa had excellent 5-year SNL rates, those with LSM of 10 to less than 15 kPa had a rate of 88.9% (95% CI, 75.1%-95.3%), and those with LSM of at least 15 kPa exhibited substantially lower 5-year SNL (58.9%; 95% CI, 46.0%-69.7%).

Similarly, event-free survival (EFS) rates declined as LSM values increased (P < .001). Participants with LSM less than 10 kPa had a 5-year EFS rate of 92.2%, versus 61.2% for those with LSM of at least 15 kPa.

LSM also predicted platelet decline. For every twofold increase in baseline LSM, platelet counts declined by an additional 4,000/mm³ per year (P < .001). This association was illustrated through predicted trajectories for participants with LSM values of 4, 7, 12, 18, and 42 kPa, corresponding to different percentiles of disease severity.
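To see what that slope relationship implies, here is a back-of-the-envelope projection under a simple linear model; the reference LSM, the slope at that reference, and the starting platelet count are all assumptions for illustration.

```python
# Each doubling of baseline LSM steepens annual platelet decline by 4,000/mm^3.
import math

REF_LSM_KPA = 7.0           # assumed reference stiffness
SLOPE_AT_REF = 0.0          # assumed annual change at the reference (per mm^3)
EXTRA_PER_DOUBLING = -4_000

def predicted_platelets(lsm_kpa, baseline, years):
    slope = SLOPE_AT_REF + EXTRA_PER_DOUBLING * math.log2(lsm_kpa / REF_LSM_KPA)
    return baseline + slope * years

# LSM 28 kPa is two doublings above 7 kPa: 250,000 - 2 * 4,000 * 5 = 210,000
print(predicted_platelets(28.0, 250_000, years=5))
```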

Cox proportional hazards analysis indicated that a twofold increase in LSM was associated with a hazard ratio of 3.3 (P < .001) for liver transplant or death. While LSM had good discrimination on its own (C statistic = 0.83), it did not significantly improve predictive accuracy when added to models based on platelet count, albumin, and bilirubin.
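Because the hazard ratio is reported per doubling of LSM, the implied hazard scales multiplicatively in log2 of stiffness; a minimal sketch, assuming an arbitrary 7-kPa reference, shows how it compounds.

```python
# HR = 3.3 per doubling of LSM under proportional hazards in log2(LSM).
import math

HR_PER_DOUBLING = 3.3

def hazard_ratio(lsm_kpa, ref_kpa=7.0):
    return HR_PER_DOUBLING ** math.log2(lsm_kpa / ref_kpa)

print(hazard_ratio(14.0))  # 3.3, one doubling
print(hazard_ratio(28.0))  # ~10.9, two doublings (3.3 squared)
```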

“This noninvasive measurement could potentially be used to predict natural history, stratify patients for clinical trials, plan interventions, and provide anticipatory guidance,” Molleston and colleagues concluded. This study was supported by grants from the National Institute of Diabetes and Digestive and Kidney Diseases; the National Institutes of Health; the Childhood Liver Disease Research Network; and others. The investigators disclosed no conflicts of interest.

Dr. Aaron Bennett

Grading liver stiffness using elastography is a widely utilized tool in adult populations, and its application is expanding in pediatric hepatology clinics. Clinicians incorporate liver stiffness measurements (LSM) alongside clinical findings and biochemical markers to noninvasively assess the degree of hepatic fibrosis and cirrhosis. Molleston and colleagues leveraged the robust data from the National Institute of Diabetes and Digestive and Kidney Diseases–supported network ChiLDReN and found that LSM in children with biliary atresia (BA) correlate with the progression to complications associated with portal hypertension and liver transplantation. While these findings are not unexpected, this compelling investigation accomplishes the important function of validating the utility of elastography in this cohort.

Prognosticating the timeline of complications stemming from biliary atresia is a central tenet of pediatric hepatology. Helping families understand what the future may hold for their child is critical in fostering long-term relationships between clinicians and caregivers. Furthermore, establishing clear expectations regarding follow-up care and monitoring is beneficial for both providers and patients. Of particular importance is minimizing the need for invasive procedures, such as liver biopsy, which, while relatively safe, remains burdensome and is rarely used to assess fibrosis in BA.

Dr. Elizabeth B. Rand

Pediatric hepatologists already consider multiple factors — including age at hepatoportoenterostomy, subsequent clearance of cholestasis, exam findings such as splenomegaly, and platelet count — to predict the clinical course of infants with BA. The addition of a data-driven approach to interpreting liver stiffness measurements represents a valuable new tool in this expanding repertoire, offering an encouraging prospect for both providers and families navigating the complexities of pediatric liver disease.

Aaron Bennett, MD, is a fellow in the Division of Gastroenterology, Hepatology and Nutrition at Children’s Hospital of Philadelphia in Pennsylvania. Elizabeth B. Rand, MD, is the medical director of the Liver Transplant Program, director of the Gastroenterology Fellowship Program, and director of the Advanced Transplant Hepatology Program at Children’s Hospital of Philadelphia.

Publications
Topics
Sections
Body
Dr. Aaron Bennett

Grading liver stiffness using elastography is a widely utilized tool in adult populations, and its application is expanding in pediatric hepatology clinics. Clinicians incorporate liver stiffness measurements (LSM) alongside clinical findings and biochemical markers to noninvasively assess the degree of hepatic fibrosis and cirrhosis. Molleston and colleagues leveraged the robust data from the National Institute of Diabetes and Digestive and Kidney Diseases–supported network ChiLDReN and found that LSM in children with biliary atresia (BA) correlate with the progression to complications associated with portal hypertension and liver transplantation. While these findings are not unexpected, this compelling investigation accomplishes the important function of validating the utility of elastography in this cohort.

Prognosticating the timeline of complications stemming from biliary atresia is a central tenet of pediatric hepatology. Helping families understand what the future may hold for their child is critical in fostering long-term relationships between clinicians and caregivers. Furthermore, establishing clear expectations regarding follow-up care and monitoring is beneficial for both providers and patients. Of particular importance is minimizing the need for invasive procedures, such as liver biopsy, which, while relatively safe, remains burdensome and is rarely used to assess fibrosis in BA.

Dr. Elizabeth B. Rand

Pediatric hepatologists already consider multiple factors — including age at hepatoportoenterostomy, subsequent clearance of cholestasis, exam findings such as splenomegaly, and platelet count — to predict the clinical course of infants with BA. The addition of a data-driven approach to interpreting liver stiffness measurements represents a valuable new tool in this expanding repertoire, offering an encouraging prospect for both providers and families navigating the complexities of pediatric liver disease.

Aaron Bennett, MD, is a fellow in the Division of Gastroenterology, Hepatology and Nutrition at Children’s Hospital of Philadelphia in Pennsylvania. Elizabeth B. Rand, MD, is the medical director of the Liver Transplant Program, director of the Gastroenterology Fellowship Program, and director of the Advanced Transplant Hepatology Program at Children’s Hospital of Philadelphia.

Body
Dr. Aaron Bennett

Grading liver stiffness using elastography is a widely utilized tool in adult populations, and its application is expanding in pediatric hepatology clinics. Clinicians incorporate liver stiffness measurements (LSM) alongside clinical findings and biochemical markers to noninvasively assess the degree of hepatic fibrosis and cirrhosis. Molleston and colleagues leveraged the robust data from the National Institute of Diabetes and Digestive and Kidney Diseases–supported network ChiLDReN and found that LSM in children with biliary atresia (BA) correlate with the progression to complications associated with portal hypertension and liver transplantation. While these findings are not unexpected, this compelling investigation accomplishes the important function of validating the utility of elastography in this cohort.

Prognosticating the timeline of complications stemming from biliary atresia is a central tenet of pediatric hepatology. Helping families understand what the future may hold for their child is critical in fostering long-term relationships between clinicians and caregivers. Furthermore, establishing clear expectations regarding follow-up care and monitoring is beneficial for both providers and patients. Of particular importance is minimizing the need for invasive procedures, such as liver biopsy, which, while relatively safe, remains burdensome and is rarely used to assess fibrosis in BA.

Dr. Elizabeth B. Rand

Pediatric hepatologists already consider multiple factors — including age at hepatoportoenterostomy, subsequent clearance of cholestasis, exam findings such as splenomegaly, and platelet count — to predict the clinical course of infants with BA. The addition of a data-driven approach to interpreting liver stiffness measurements represents a valuable new tool in this expanding repertoire, offering an encouraging prospect for both providers and families navigating the complexities of pediatric liver disease.

Aaron Bennett, MD, is a fellow in the Division of Gastroenterology, Hepatology and Nutrition at Children’s Hospital of Philadelphia in Pennsylvania. Elizabeth B. Rand, MD, is the medical director of the Liver Transplant Program, director of the Gastroenterology Fellowship Program, and director of the Advanced Transplant Hepatology Program at Children’s Hospital of Philadelphia.

Title
A Valuable New Tool
A Valuable New Tool

Liver stiffness measurement (LSM) using vibration-controlled transient elastography (VCTE) predicts long-term outcomes among pediatric patients with biliary atresia, according to investigators.

These findings suggest that LSM may serve as a noninvasive tool for risk stratification and treatment planning in this population, reported lead author Jean P. Molleston, MD, of Indiana University School of Medicine, Indianapolis, and colleagues.

 

Dr. Jean P. Molleston

“Biliary atresia is frequently complicated by hepatic fibrosis with progression to cirrhosis and portal hypertension manifested by ascites, hepatopulmonary syndrome, and variceal bleeding,” the investigators wrote in Gastroenterology. “The ability to predict these outcomes can inform clinical decision-making.”

To this end, VCTE has been gaining increasing support in the pediatric setting.

“Advantages of VCTE over liver biopsy include convenience, cost, sampling bias, and risk,” the investigators wrote. “VCTE potentially allows (1) fibrosis estimation, (2) prediction of portal hypertension complications/survival, and (3) ability to noninvasively monitor liver stiffness as a fibrosis surrogate.”

The present multicenter study aimed to gauge the prognostic utility of VCTE among 254 patients, aged 21 years or younger, with biliary atresia. All patients had a valid baseline LSM, plus longitudinal clinical and laboratory data drawn from studies by the Childhood Liver Disease Research Network (ChiLDReN). Liver stiffness was assessed noninvasively with FibroScan devices, adhering to protocols that required at least 10 valid measurements and a variability of less than 30%.

The primary outcomes were survival with native liver (SNL), defined as the time to liver transplantation or death, and a composite measure of liver-related events, including the first occurrence of transplantation, death, ascites, variceal bleeding, or hepatopulmonary syndrome. Secondary outcomes focused on the trajectory of platelet decline, a marker of disease progression. The study also explored the relationship between baseline LSM and conventional biomarkers, including platelet count, albumin, and bilirubin.

LSM was a strong predictor of long-term outcomes. Specifically, Kaplan-Meier analysis showed significant differences in 5-year SNL across LSM strata (P < .001). Children with LSM values less than 10 kPa had excellent 5-year SNL rates (LSM 10 to < 15 kPa, 88.9%; 95% CI, 75.1-95.3%), while those with LSM of at least 15 kPa exhibited substantially lower 5-year SNL (58.9%; 95% CI, 46.0-69.7%).

Similarly, event-free survival (EFS) rates declined as LSM values increased (P < .001). Participants with LSM less than 10 kPa had a 5-year EFS rate of 92.2% versus with 61.2% for those with LSM of at least 15 kPa.

LSM also predicted platelet decline. For every twofold increase in baseline LSM, platelet counts declined by an additional 4,000/mm3 per year (P < .001). This association was illustrated through predicted trajectories for participants with LSM values of 4, 7, 12, 18, and 42 kPa, corresponding to different percentiles of disease severity.

Cox proportional hazards analysis indicated that a twofold increase in LSM was associated with a hazard ratio of 3.3 (P < .001) for liver transplant or death. While LSM had good discrimination on its own (C statistic = 0.83), it did not significantly improve predictive accuracy when added to models based on platelet count, albumin, and bilirubin.
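
Under the Cox model's log-linear assumption, a per-doubling hazard ratio can be rescaled to any fold change: the hazard ratio for a k-fold increase in LSM is 3.3 raised to the power log2(k). A short worked example:

import math

HR_PER_DOUBLING = 3.3  # reported hazard ratio per twofold LSM increase

def hazard_ratio(fold_change: float) -> float:
    """Hazard ratio implied for an arbitrary fold change in LSM,
    assuming the Cox model's log-linear dose-response."""
    return HR_PER_DOUBLING ** math.log2(fold_change)

print(hazard_ratio(2.0))  # 3.3   (one doubling, by definition)
print(hazard_ratio(4.0))  # ~10.9 (two doublings: 3.3 ** 2)
print(hazard_ratio(1.5))  # ~2.0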

“This noninvasive measurement could potentially be used to predict natural history, stratify patients for clinical trials, plan interventions, and provide anticipatory guidance,” Molleston and colleagues concluded. This study was supported by grants from the National Institute of Diabetes and Digestive and Kidney Diseases; the National Institutes of Health; the Childhood Liver Disease Research Network; and others. The investigators disclosed no conflicts of interest.


FROM GASTROENTEROLOGY