MASLD Healthcare Costs Climbing Fast in Canada
Healthcare costs associated with metabolic dysfunction–associated steatotic liver disease (MASLD) in Canada are projected to climb steeply over the coming decades, according to a new study.
The expected surge reflects the growing prevalence of MASLD and its associated conditions, emphasizing the necessity for a comprehensive approach to address this escalating public health issue, reported lead author K. Ally Memedovich, BHSc, of the University of Calgary in Alberta, Canada, and colleagues.
“The costs associated with the management of MASLD in Canada remain unknown but have been estimated as being very high,” the investigators wrote in Gastro Hep Advances. “Specifically, in one study from the United States, the healthcare costs and utilization of those with MASLD was nearly double that of patients without MASLD but with similar health status. This difference was largely due to increases in imaging, hospitalization, liver fibrosis assessment, laboratory tests, and outpatient visits.”
Although projections are available to estimate the future prevalence of MASLD in Canada, no models are available to predict the growing national economic burden, prompting the present study.
Memedovich and colleagues analyzed healthcare usage data from 6,358 patients diagnosed with MASLD in Calgary from 2018 to 2020. Using provincial administrative data, they calculated both liver-specific and total healthcare costs associated with different stages of liver fibrosis, ranging from F0/F1 (minimal fibrosis) to F4 (advanced fibrosis or cirrhosis).
The patients’ liver fibrosis stages were determined using liver stiffness measurements obtained through shear wave elastography. Average annual cost per patient was then calculated for each fibrosis stage by analyzing hospitalizations, ambulatory care, and physician claims data.
The annual average liver-specific cost per patient increased with severity of liver fibrosis; costs for patients with fibrosis stages F0/F1, F2, F3, and F4 were C$7.02, C$35.30, C$60.46, and C$72.55, respectively. By 2050, liver-specific healthcare costs are projected to increase by C$51 million, reaching C$136 million Canada-wide.
Total healthcare costs were markedly higher; annual costs for patients with fibrosis stages F0/F1, F2, F3, and F4 were C$397.90, C$781.53, C$2,881.84, and C$1,598.82, respectively. As a result, total healthcare costs are expected to rise by nearly C$2 billion, contributing to a Canadian healthcare burden of C$5.81 billion annually by 2050.
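To make the arithmetic behind such projections concrete, the sketch below combines the stage-specific per-patient costs reported above with projected patient counts to produce a national annual figure. The per-patient costs are those from the study; the 2050 patient counts are hypothetical placeholders, not values from the authors’ prevalence model.

```python
# Minimal sketch of a stage-weighted cost projection.
# Per-patient annual costs (C$) by fibrosis stage, as reported in the study.
liver_specific_cost = {"F0/F1": 7.02, "F2": 35.30, "F3": 60.46, "F4": 72.55}
total_cost = {"F0/F1": 397.90, "F2": 781.53, "F3": 2881.84, "F4": 1598.82}

# Hypothetical projected numbers of Canadians with MASLD in each stage by 2050;
# these counts are illustrative only and are not taken from the study.
projected_patients_2050 = {"F0/F1": 7_000_000, "F2": 800_000, "F3": 300_000, "F4": 150_000}

def national_annual_cost(cost_per_patient: dict, patients: dict) -> float:
    """Sum stage-specific per-patient costs weighted by projected patient counts."""
    return sum(cost_per_patient[stage] * patients[stage] for stage in cost_per_patient)

print(f"Liver-specific: C${national_annual_cost(liver_specific_cost, projected_patients_2050):,.0f}")
print(f"Total:          C${national_annual_cost(total_cost, projected_patients_2050):,.0f}")
```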
The study revealed that over 90% of the healthcare costs for MASLD patients were attributed not to liver disease itself but to the management of associated comorbidities such as diabetes, hypertension, mental illness, and obesity. For instance, diabetes was the most common reason for physician visits among MASLD patients, accounting for 65.2% of cases. One study limitation was the exclusion of patients with decompensated cirrhosis, liver cancer, or a prior liver transplant because of their low prevalence in this cohort, which may have contributed to the low liver-specific healthcare costs.
Memedovich and colleagues noted that chronic diseases account for approximately C$68 billion annually in direct healthcare costs in Canada, representing around 58% of total healthcare expenditures. Estimates suggest that a 1% annual reduction in chronic disease prevalence could save C$107 billion over the course of 20 years.
“Therefore, an approach that focuses on preventing and managing chronic diseases overall is needed to reduce the burden of MASLD on the healthcare system,” they wrote. This study was funded by LiveRx via an Alberta Innovates grant. The investigators disclosed relationships with Gilead, Abbott, GSK, and others.
Metabolic dysfunction–associated steatotic liver disease (MASLD) is the most common chronic liver disease, and its clinical burden is expected to mirror the rising rates of obesity and diabetes over the next couple of decades. The cost analysis by Memedovich and colleagues provides a timely report on the healthcare burden of MASLD in Canada. Although based on Canadian data, their results are generalizable to other healthcare systems.
The authors found that nearly 98% of total healthcare costs of patients with MASLD were not specifically related to liver treatment, but rather linked to the management of patients’ cardiometabolic comorbidities. Projection estimates based on this cohort suggest a steep rise in total healthcare costs over the coming decades, reflecting increasing rates of comorbidities, with the largest changes expected in the advanced fibrosis group. These findings highlight the need for early recognition of MASLD, followed by a collaborative effort to manage MASLD in conjunction with its associated cardiometabolic comorbidities.
As rates of obesity, diabetes, and MASLD continue to rise, there is an urgent need for a global strategy for MASLD management that focuses on both prevention and treatment. Public health strategies are needed to increase awareness of, and focus on, the treatment and prevention of the cardiometabolic risk factors that appear to be the main drivers of healthcare costs among patients with MASLD. A concerted effort is needed from providers, both primary care and specialists, for early recognition and treatment of MASLD. Such a public health response, combined with recent advances in pharmacotherapy for weight loss and metabolic dysfunction–associated steatohepatitis, may alter the projected costs and hopefully decrease the disease burden associated with advanced MASLD.
Akshay Shetty, MD, is assistant professor of medicine and surgery at the David Geffen School of Medicine, University of California, Los Angeles. He has no conflicts of interest to declare. Sammy Saab, MD, MPH, AGAF, is professor of medicine and surgery at the David Geffen School of Medicine at UCLA. He is on the speakers bureau for AbbVie, Gilead, Eisai, Intercept, Ipsen, Salix, Mallinckrodt, and Takeda, and has been a consultant for Gilead, Ipsen, Mallinckrodt, Madrigal, and Orphalan.
FROM GASTRO HEP ADVANCES
Vonoprazan Offers PPI Alternative for Heartburn with Non-Erosive Reflux
Vonoprazan, a potassium-competitive acid blocker, reduced heartburn more effectively than placebo in patients with non-erosive reflux disease (NERD), according to investigators.
Benefits of vonoprazan were seen as soon as the first day of treatment and persisted through the 20-week extension period, lead author Loren Laine, MD, AGAF, of Yale School of Medicine, New Haven, Connecticut, and colleagues reported.
“A potential alternative to PPI therapy is a potassium-competitive acid blocker, a new class of antisecretory agents that provide more potent inhibition of gastric acid secretion than PPIs,” the investigators wrote in Clinical Gastroenterology and Hepatology.
While a small observational study found that 18 out of 26 patients (69%) with PPI-resistant NERD had improved symptoms with vonoprazan, subsequent randomized trials in Japan failed to meet their primary endpoints, Laine and colleagues noted. The present randomized trial was therefore conducted to determine how vonoprazan might help a US patient population.
The study involved 772 patients who reported heartburn at least 4 days per week during screening, but without erosive esophagitis on endoscopy. Participants were randomized into three groups: placebo, vonoprazan 10 mg, or vonoprazan 20 mg. These regimens were administered for 4 weeks, followed by a 20-week extension, in which placebo patients were rerandomized to receive one of the two vonoprazan dose levels.
The primary endpoint was the percentage of days without daytime or nighttime heartburn (24-hour heartburn-free days) during the initial 4-week treatment period. The secondary endpoint, assessed during the same timeframe, was percentage of days without need for a rescue antacid.
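As a rough illustration of how these endpoints could be computed from daily symptom diaries, the sketch below calculates the percentage of 24-hour heartburn-free days and of antacid-free days for one hypothetical patient; the diary structure and field names are invented for illustration and are not taken from the trial.

```python
# Minimal sketch: percentage of heartburn-free and antacid-free days
# from a hypothetical 28-day (4-week) daily symptom diary.
from dataclasses import dataclass

@dataclass
class DiaryDay:
    daytime_heartburn: bool
    nighttime_heartburn: bool
    used_rescue_antacid: bool

def pct_heartburn_free(diary: list) -> float:
    """A 24-hour heartburn-free day has neither daytime nor nighttime heartburn."""
    free = sum(1 for d in diary if not d.daytime_heartburn and not d.nighttime_heartburn)
    return 100 * free / len(diary)

def pct_antacid_free(diary: list) -> float:
    free = sum(1 for d in diary if not d.used_rescue_antacid)
    return 100 * free / len(diary)

# Hypothetical 28-day diary with heartburn and rescue-antacid use on some days.
diary = [DiaryDay(i % 3 == 0, i % 5 == 0, i % 4 == 0) for i in range(28)]
print(f"24-hour heartburn-free days: {pct_heartburn_free(diary):.1f}%")
print(f"Days without rescue antacid: {pct_antacid_free(diary):.1f}%")
```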
In the 4-week placebo-controlled period, patients treated with vonoprazan 10 mg and 20 mg showed a significant improvement in heartburn-free days, compared with placebo. The percentage of 24-hour heartburn-free days was 27.7% in the placebo group vs 44.8% in the 10-mg vonoprazan group (least squares mean difference 17.1%; P < .0001) and 44.4% in the 20-mg vonoprazan group (least squares mean difference 16.7%; P < .0001).
Benefits of vonoprazan were seen as early as the first day of treatment, with 8.3% and 11.6% more patients in the 10-mg and 20-mg groups, respectively, experiencing a heartburn-free day, compared with placebo. By day 2, these differences increased to 18.1% and 23.2%, respectively.
The percentage of days without rescue antacid use was also significantly higher in both vonoprazan groups. Patients in the 10 mg and 20 mg groups had 63.3% and 61.2% of days without antacid use, respectively, compared with 47.6% in the placebo group (P < .0001 for both comparisons).
These benefits persisted throughout the 20-week extension period, with similar percentages of heartburn-free days across all groups. Mean percentages of 24-hour heartburn-free days ranged from 61% to 63% in the extension phase, while median percentages spanned 76%-79%.
Adverse events were infrequent and comparable across all groups. The most common adverse event was nausea, occurring slightly more frequently in the vonoprazan groups (2.3% in the 10-mg group and 3.1% in the 20-mg group) vs placebo (0.4%). Serious adverse events were rare and were deemed unrelated to treatment. No new safety signals were identified during the 20-week extension period. Increases in serum gastrin levels, a marker of acid suppression, returned to near baseline after discontinuation of vonoprazan.
“In conclusion, the potassium-competitive acid blocker vonoprazan was efficacious in reducing heartburn symptoms in patients with NERD, with the benefit appearing to begin as early as the first day of therapy,” Laine and colleagues wrote.
In July 2024, the Food and Drug Administration approved vonoprazan for treating heartburn in patients with nonerosive gastroesophageal reflux disease. This study was funded by Phathom Pharmaceuticals. The investigators disclosed additional relationships with Takeda, Medtronic, Carnot, and others.
Proton pump inhibitors (PPIs) have revolutionized the treatment of gastroesophageal reflux disease (GERD). One might ask what the reason would be to challenge this giant of the pharmacopeia with another medication for GERD.
In this important study by Laine et al, vonoprazan is expectedly efficacious in treating nonerosive GERD (NERD), but notably less so when compared with the authors’ trial in erosive GERD. This is not surprising owing to the multiple and common acid-independent etiologies of NERD, such as esophageal hypersensitivity; the high placebo response supports this. Two notable results, however, merit emphasis as potential advantages over PPIs.
First, vonoprazan is effective on day 1 of therapy, eliminating the need for loading. Second, nocturnal reflux, a purer form of GERD, is better controlled with a morning dose of vonoprazan, mitigating nocturnal acid breakthrough and the need for twice-daily PPI dosing and/or addition of an H2 antagonist. These results by no means advocate for replacement of PPIs with PCABs, but they at least suggest specific populations of GERD patients who may benefit from PCAB use. The study also indirectly emphasizes the need for careful selection: NERD patients whose GERD symptoms are predominantly caused by increased esophageal acid exposure are the most appropriate candidates. The ultimate answer as to where vonoprazan will be used in our practice is evolving.
David Katzka, MD, is based in the Division of Digestive and Liver Diseases, Columbia University Medical Center, New York City. He has received research support from Takeda, Sanofi, and Regeneron. He is also an associate editor for GI & Hepatology News.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Celiac Screening in Kids Appears Cost-Effective
Mass screening and active case finding for celiac disease (CD) in children appear to be highly cost-effective strategies, according to investigators.
If these screening strategies are deemed feasible by clinicians and patients, then implementation in routine care is needed, lead author Jan Heijdra Suasnabar, MSc, of Leiden University Medical Centre in the Netherlands, and colleagues reported.
“Cohort studies have shown that CD likely develops early in life and can be easily diagnosed by detection of CD-specific antibodies against the enzyme tissue transglutaminase type 2 (IgA-TG2),” the investigators wrote in Gastroenterology.
Despite the ease of diagnosis, as few as one in five cases of CD are detected using current clinical strategies, meaning many cases are diagnosed years after symptom onset.
“Such high rates of missed/delayed diagnoses have been attributed to CD’s varied and nonspecific symptoms, lack of awareness, and the resource-intensive process necessary to establish the diagnosis,” Heijdra Suasnabar and colleagues wrote. “From an economic perspective, the burden of CD translates into substantial excess healthcare and societal costs.”
These practice gaps prompted the present study, which explored the long-term cost effectiveness of mass CD screening and active case finding among pediatric patients.
The investigators employed a model-based cost-effectiveness analysis with a hypothetical cohort representing all children with CD in the Netherlands. Iterations of this model evaluated long-term costs as these children moved through the healthcare system along various CD detection strategies.
The first strategy was based on the current Dutch approach, which is the same as that in the United States: Patients are only evaluated for CD if they present with symptoms that prompt suspicion of disease. Based on data from population-based studies, the model assumed that approximately one in three cases would be detected using this strategy.
The second strategy involved mass screening using IgA-TG2 point-of-care testing (sensitivity, 0.94; specificity, 0.944) via youth health care clinics, regardless of symptoms.
The third strategy, called “active case finding,” represented something of an intermediate approach, in which children with at least 1 CD-related symptom underwent point-of-care antibody testing.
For both mass screening and active case finding strategies, a positive antibody test was followed with confirmatory diagnostic testing.
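To illustrate why confirmatory testing matters in a mass-screening setting, the short calculation below applies the reported sensitivity (0.94) and specificity (0.944) of the point-of-care test to an assumed population prevalence of roughly 1%, the figure cited in the accompanying commentary; the prevalence is an assumption for illustration, not a parameter taken from the authors’ model.

```python
# Minimal sketch: positive predictive value (PPV) of IgA-TG2 point-of-care
# screening under an assumed ~1% population prevalence of celiac disease.
sensitivity = 0.94     # reported in the study
specificity = 0.944    # reported in the study
prevalence = 0.01      # assumed for illustration (~1% of the population)

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)
ppv = true_positives / (true_positives + false_positives)

print(f"PPV at {prevalence:.0%} prevalence: {ppv:.1%}")  # roughly 1 in 7 positives
# The modest PPV is why a positive antibody screen is followed by
# confirmatory diagnostic testing before a CD diagnosis is made.
```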
Compared with the current clinical approach, mass screening added 7.46 quality-adjusted life-years (QALYs) per CD patient at an increased cost of €28,635 per CD patient. Active case finding gained 4.33 QALYs per CD patient while incurring an additional cost of €15,585 per CD patient.
Based on a willingness-to-pay threshold of €20,000 per QALY, the investigators deemed both strategies “highly cost effective,” compared with current standard of care. Some of these costs were offset by “substantial” reductions in productivity losses, they noted, including CD-related absences from work and school.
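As a back-of-the-envelope check of that conclusion, the sketch below divides the reported incremental costs by the incremental QALYs and compares the resulting ratios with the stated €20,000-per-QALY threshold; this simplified calculation is illustrative and does not reproduce the authors’ full probabilistic model.

```python
# Incremental cost-effectiveness ratios (ICERs) from the reported per-patient figures.
WILLINGNESS_TO_PAY = 20_000  # € per QALY, the threshold used in the study

strategies = {
    "Mass screening":      {"extra_cost": 28_635, "extra_qalys": 7.46},
    "Active case finding": {"extra_cost": 15_585, "extra_qalys": 4.33},
}

for name, s in strategies.items():
    icer = s["extra_cost"] / s["extra_qalys"]  # € per QALY gained vs current care
    verdict = "cost effective" if icer < WILLINGNESS_TO_PAY else "not cost effective"
    print(f"{name}: ICER ≈ €{icer:,.0f}/QALY -> {verdict}")
# Both ratios come out under €4,000 per QALY, well below the €20,000 threshold.
```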
“Our results illustrate how an earlier detection of CD through screening or case finding, although more costly, leads to improved health outcomes and a reduction in disease burden, compared with current care,” Heijdra Suasnabar and colleagues wrote.
Their concluding remarks highlighted the conservative scenarios built into their model, and suggested that their findings offer solid evidence for implementing new CD-testing strategies.
“If found to be feasible and acceptable by clinicians and patients, these strategies should be implemented in the Netherlands,” they wrote.This study was supported by the Netherlands Organization for Health Research and Development. The investigators disclosed no conflicts of interest.
Celiac disease (CD) is common, affecting about 1% of the population, but it remains underdiagnosed because of its heterogeneous presentation and limited provider awareness. Most cases are detected only after patients develop gastrointestinal symptoms or laboratory abnormalities.
In this cost-effectiveness analysis, Suasnabar and colleagues demonstrate that screening children for celiac disease would be highly cost-effective relative to the current practice of clinical detection. They modeled point-of-care testing using tissue transglutaminase IgA in all 3-year-old children in the Netherlands. While both mass screening and case finding (via a standardized questionnaire) would increase healthcare costs relative to current care, both strategies would improve quality of life (QoL), reduce long-term complications (such as osteoporosis and non-Hodgkin lymphoma), and minimize productivity losses in individuals with CD. In sensitivity analyses accounting for uncertainty in QoL inputs and in the utility of diagnosing and treating asymptomatic CD, each screening strategy remained well below accepted willingness-to-pay thresholds.
John B. Doyle, MD, is a gastroenterology fellow in the Division of Digestive and Liver Diseases at Columbia University Medical Center, New York City. Benjamin Lebwohl, MD, MS, AGAF, is professor of medicine and epidemiology at Columbia University Medical Center and director of clinical research at The Celiac Disease Center at Columbia. They have no conflicts of interest to declare.
FROM GASTROENTEROLOGY
Low Follow-up of Abnormal Urine Proteinuria Dipstick Tests in Primary Care
Only 1 in 15 urine dipstick tests showing proteinuria in the primary care setting are followed up with albuminuria quantification testing, according to investigators.
These findings expose a broad gap in screening for chronic kidney disease (CKD), which is especially concerning since newer kidney-protecting agents are more effective when prescribed earlier in the disease course, reported lead author Yunwen Xu, PhD, of Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, and colleagues.
“Evidence-based prescription of renin-angiotensin system inhibitors, glucagon-like peptide-1 receptor (GLP-1) agonists, sodium-glucose cotransporter 2 (SGLT2) inhibitors, and nonsteroidal mineralocorticoid receptor antagonists (nsMRAs) relies on the level of albuminuria,” the investigators wrote in Annals of Internal Medicine.
“Although urine albumin-creatinine ratio (ACR) is the most accurate method for quantifying albuminuria, dipstick urinalysis tests are inexpensive and are often used as an initial screening test, with guidelines recommending follow-up ACR testing if the protein dipstick test result is abnormal.”
Despite this guidance, real-world follow-up rates have been unknown, prompting the present study.
Real-World Data Show a Low Follow-up Rate
Dr. Xu and colleagues analyzed data from 1 million patients in 33 health systems who underwent urine dipstick testing in a primary care setting.
Across this population, 13% of patients had proteinuria, but only 6.7% underwent follow-up albuminuria quantification testing within the next year. ACR was the most common method (86%).
Likelihood of follow-up increased slightly with the level of proteinuria detected; however, absolute differences were marginal, with a 3+ result yielding a follow-up rate of just 8%, compared with 7.3% for a 2+ result and 6.3% for a 1+ result. When albuminuria quantification tests were conducted, 1+, 2+, and 3+ dipstick results were associated with albuminuria rates of 36.3%, 53.0%, and 64.9%, respectively.
Patients with diabetes had the highest follow-up rate, at 16.6%, vs 3.8% for those without diabetes.
Reasons for Low Follow-up Unclear
The dataset did not include information about reasons for ordering urinalyses, whether primary care providers knew about the abnormal dipstick tests, or awareness of guideline recommendations.
“I think they know it should be done,” said principal investigator Alexander R. Chang, MD, associate professor in the department of nephrology and population health sciences at Geisinger Health, Danville, Pennsylvania.
He suggested that real-time awareness issues, especially within electronic health record (EHR) systems, could explain the low follow-up rates. Blood test abnormalities are often flagged in red in EHRs, he said in an interview, but urine dipstick results typically remain in plain black and white.
“So, then it sort of requires that extra cognitive step to kind of look at that [result], and say, okay, that is pretty abnormal; I should do something about that,” he said.
Neil S. Skolnik, MD, a primary care physician at Jefferson Health, Abington, Pennsylvania, was surprised by the findings. “If you get a urinalysis and there’s protein, normally you follow up,” Dr. Skolnik said in an interview. “I have a feeling that there’s something we’re not seeing here about what’s going on. It is hard to imagine that in only 1 out of 15 times that proteinuria is identified, is there any follow-up. I really don’t have a good explanation.”
Renee Marie Betancourt, MD, associate professor and vice chair of diversity, equity, and inclusion in the Department of Family Medicine and Community Health at the University of Pennsylvania Perelman School of Medicine, Philadelphia, said it is hard to draw conclusions from the available data, but agreed that low visibility of results could be partially to blame.
“The chart doesn’t tell me [a urine dipstick result] is abnormal,” Dr. Betancourt said in an interview. “The chart just reports it, agnostic of normal or abnormal.”
Beyond issues with visibility, Dr. Betancourt described how primary care physicians are often so flooded with other concerns that a positive dipstick test can become a low priority, particularly among patients with CKD, who typically have other health issues.
“I oftentimes spend the majority of my visit on the patient’s concerns, and sometimes, beyond their concerns, I have concerns, and [a urine dipstick result] might not make it to the top of the list,” she said.
EHR-Based Interventions Might Help Improve Follow-up
Dr. Chang suggested that improved visibility of dipstick results could help, or possibly EHR-integrated clinical decision tools.
Dr. Betancourt and colleagues at Penn Medicine are actively working on such a solution. Their EHR-based intervention is aimed at identifying and managing patients with CKD. The present design, slated for pilot testing at one or two primary care clinics beginning in January 2025, depends upon estimated glomerular filtration rate (eGFR) to flag CKD patients, with ACR testing recommended yearly to predict disease progression.
Although urine dipstick findings are not currently a part of this software pathway, the findings from the present study might influence future strategy.
“I’m going to take this to our collaborators and ask about opportunities to ... encourage providers to be more active with dipsticks,” Dr. Betancourt said.
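As a purely hypothetical illustration of the kind of EHR-integrated decision support being discussed, the sketch below flags patients whose most recent dipstick shows 1+ or greater protein and who have no albumin-creatinine ratio (ACR) result within the past year; the record structure and field names are invented and do not reflect the Penn Medicine intervention or any particular EHR system.

```python
# Hypothetical clinical decision-support rule: recommend ACR follow-up when a
# dipstick shows >=1+ protein and no ACR result exists within the past year.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class PatientRecord:
    dipstick_protein: str                 # e.g. "negative", "trace", "1+", "2+", "3+"
    dipstick_date: date
    acr_dates: list = field(default_factory=list)  # dates of prior ACR results

def needs_acr_follow_up(rec: PatientRecord, today: date) -> bool:
    abnormal = rec.dipstick_protein in {"1+", "2+", "3+"}
    recent_acr = any(today - d <= timedelta(days=365) for d in rec.acr_dates)
    return abnormal and not recent_acr

# Example: 2+ proteinuria two weeks ago, last ACR more than a year old -> flag.
rec = PatientRecord("2+", date(2024, 11, 1), acr_dates=[date(2023, 5, 20)])
print(needs_acr_follow_up(rec, today=date(2024, 11, 15)))  # True
```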
Newer Medications Are Effective, but Prescribing Challenges Remain
Ideally, CKD screening improvements will unlock a greater goal: prescribing kidney-protecting medications to patients who need them — as soon as they need them.
Here might lie the real knowledge gap among experienced primary care physicians, Dr. Chang suggested. “In the past, there wasn’t quite as much that you could do about having proteinuria,” he said. “But now we have lots more medications ... it’s not just tracking that they have a bad prognostic factor. [Proteinuria is] actually something that we can act upon.”
Who exactly should be prescribing these kidney-protecting medications, however, remains contested, as agents like GLP-1 agonists and SGLT2 inhibitors yield benefits across specialties, including nephrology, cardiology, and endocrinology.
“Everyone’s going to have to work together,” Dr. Chang said. “You can’t really put it all on the [primary care physician] to quarterback everything.”
And, regardless of who throws the ball, a touchdown is not guaranteed.
Dr. Betancourt called out the high cost of these newer drugs and described how some of her patients, already facing multiple health inequities, are left without.
“I have patients who cannot fill these medications because the copay is too high,” she said. “Just last week I received a message from a patient who stopped taking his SGLT2 inhibitor because the cost was too high ... it was over $300 per month.”
This study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health. The authors’ conflicts of interest are available in the original paper. Dr. Skolnik and Dr. Betancourt reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM ANNALS OF INTERNAL MEDICINE
Severe Maternal Morbidity Three Times Higher in Surrogate Gestational Carriers
Gestational carriers face a significantly higher risk for severe maternal morbidity and other pregnancy complications than those conceiving naturally or via in vitro fertilization (IVF), according to a recent Canadian study.
These findings suggest that more work is needed to ensure careful selection of gestational carriers, reported lead author Maria P. Velez, MD, PhD, of McGill University, Montreal, Quebec, Canada, and colleagues.
“Although a gestational carrier should ideally be a healthy person, with a demonstrated low-risk obstetric history, it is not clear whether this occurs in practice,” the investigators wrote in Annals of Internal Medicine. “Moreover, the risk for maternal and neonatal adversity is largely unknown in this group.”
Study Compared Gestational Carriage With IVF and Unassisted Conception
To address these knowledge gaps, Dr. Velez and colleagues conducted a population-based cohort study in Ontario using linked administrative datasets. All singleton births at more than 20 weeks’ gestation with mothers aged 18-50 years were included from April 2012 to March 2021. Multifetal pregnancies were excluded, as were women with a history of infertility diagnosis without fertility treatment, and those who underwent intrauterine insemination or ovulation induction.
Outcomes were compared across three groups: unassisted conception, IVF, and gestational carriage. The primary maternal outcome was severe maternal morbidity, defined by a validated composite of 41 unique indicators. The primary infant outcome was severe neonatal morbidity, comprising 19 unique indicators.
Secondary outcomes were hypertensive disorders, elective cesarean delivery, emergent cesarean delivery, preterm birth at less than 37 weeks, preterm birth at less than 32 weeks, and postpartum hemorrhage.
Logistic regression analysis adjusted for a range of covariates, including age, obesity, tobacco/drug dependence, chronic hypertension, and others. The final dataset included 846,124 births by unassisted conception (97.6%), 16,087 by IVF (1.8%), and 806 by gestational carriage (0.1%).
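As a rough illustration of what an adjusted comparison like this can look like in code, the sketch below fits a logistic regression of severe maternal morbidity on conception group plus a few covariates. The column names and input file are hypothetical, and the actual analysis used weighted models on linked administrative data, so this is a simplified stand-in rather than the authors' method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis table; column names are invented for illustration.
# severe_mm: 1/0 outcome; conception: "unassisted", "ivf", or "carrier".
df = pd.read_csv("births.csv")

model = smf.logit(
    "severe_mm ~ C(conception, Treatment(reference='unassisted'))"
    " + age + obesity + smoking + chronic_htn",
    data=df,
).fit()

# Exponentiated coefficients are odds ratios; they approximate relative risks
# only because severe maternal morbidity is uncommon in every group.
print(np.exp(model.params).round(2))
```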
The weighted relative risk (wRR) for severe maternal morbidity was more than three times higher in gestational carriers than in those conceiving naturally (wRR, 3.30; 95% CI, 2.59-4.20) and 86% higher than in those conceiving via IVF (wRR, 1.86; 95% CI, 1.36-2.55). These relative risks correspond to absolute risks of 2.3%, 4.3%, and 7.8% for unassisted, IVF, and surrogate pregnancies, respectively.
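A quick back-of-the-envelope check shows how those absolute risks line up with the reported ratios; the crude ratios below differ slightly from the published figures because the latter are weighted estimates.

```python
# Absolute risks of severe maternal morbidity reported in the study.
unassisted, ivf, carrier = 0.023, 0.043, 0.078

print(round(carrier / unassisted, 2))  # ~3.39, close to the weighted RR of 3.30
print(round(carrier / ivf, 2))         # ~1.81, close to the weighted RR of 1.86
```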
Moreover, compared with those conceiving naturally, surrogates were 75% more likely to have hypertensive disorders, 79% more likely to have preterm birth at less than 37 weeks, and almost three times as likely to have postpartum hemorrhage.
These same three secondary outcomes were also significantly more common when comparing surrogate with IVF pregnancies, albeit to a lesser degree. In contrast, surrogate pregnancies were associated with a 21% lower risk for elective cesarean delivery than IVF pregnancies (wRR, 0.79; 95% CI, 0.68-0.93).
Severe neonatal morbidity was not significantly different between the groups. These findings add to a mixed body of evidence surrounding both maternal and neonatal outcomes with gestational carriers, according to the investigators.
“Prior small studies [by Söderström-Anttila et al. and Swanson et al.] reported varying risks for preterm birth in singleton gestational carriage pregnancies, whereas a recent large US registry reported no increased risk for preterm birth compared with IVF, after accounting for multifetal pregnancy,” they wrote. “This study excluded multifetal pregnancies, a common occurrence after IVF, with reported higher risks for adverse outcomes. Accordingly, adverse maternal and newborn outcomes may have been underestimated herein.”
Causes of Worse Outcomes Remain Unclear
While the present findings suggest greater maternal morbidity among surrogates, potential causes of these adverse outcomes remain unclear.
The investigators suggested that implantation of a nonautologous embryo could be playing a role, as oocyte donation has been linked with an increased risk for hypertensive disorders of pregnancy.
“We don’t know exactly why that can happen,” Dr. Velez said in an interview. “Maybe that embryo can be associated with an immunological response that could be associated with higher morbidity during pregnancy. We need, however, other studies that can continue testing that hypothesis.”
In the meantime, more care is needed in surrogate selection, according to Dr. Velez.
“In our study, we found that there were patients, for example, who had more than three prior C-sections, which is one of the contraindications for gestational carriers, and patients who had more than five [prior] pregnancies, which is also another limitation in the guidelines for choosing these patients,” she said. “Definitely we need to be more vigilant when we accept these gestational carriers.”
But improving surrogate selection may be easier said than done.
The quantitative thresholds cited by Dr. Velez come from the American Society for Reproductive Medicine guidelines. Alternative guidance documents from the Canadian Fertility and Andrology Society and American College of Obstetricians and Gynecologists are less prescriptive; instead, they offer qualitative recommendations concerning obstetric history and risk assessment.
And then there is the regulatory specter looming over the entire field, evidenced by the many times that these publications cite ethical and legal considerations — far more than the average medical guidance document — when making clinical decisions related to surrogacy.
Present Study Offers Much-Needed Data in Understudied Field
According to Kate Swanson, MD, a perinatologist, clinical geneticist, and associate professor at the University of California San Francisco, the present study may help steer medical societies and healthcare providers away from these potential sand traps and toward conversations grounded in scientific data.
“I think one of the reasons that the Society for Maternal-Fetal Medicine and the maternal-fetal medicine community in general hasn’t been interested in this subject is that they see it as a social/ethical/legal issue rather than a medical one,” Dr. Swanson said in an interview. “One of the real benefits of this article is that it shows that this is a medical issue that the obstetric community needs to pay attention to.”
These new data could help guide decisions about risk and candidacy with both potential gestational carriers and intended parents, she said.
Still, it’s hard — if not impossible — to disentangle the medical and legal aspects of surrogacy, as shown when analyzing the present study.
In Canada, where the study was conducted, intended parents are forbidden from paying surrogates beyond out-of-pocket costs directly related to pregnancy. Meanwhile, surrogacy laws vary widely across the United States; some states (eg, Louisiana) allow only altruistic surrogacy, as in Canada, while others (eg, California) permit commercial surrogacy with no legal limits on compensation.
Dr. Swanson and Dr. Velez offered starkly different views on this topic.
“I think there should be more regulations in terms of compensating [gestational carriers],” Dr. Velez said. “I don’t think being a gestational carrier should be like a job or a way of making a living.”
Dr. Swanson, who has published multiple studies on gestational carriage and experienced the process as an intended parent, said compensation beyond expenses is essential.
“I do think it’s incredibly reasonable to pay someone — a woman is taking on quite a lot of inconvenience and risk — in order to perform this service for another family,” she said. “I think it’s incredibly appropriate to compensate her for all of that.”
Reasons for compensation go beyond the ethical, Dr. Swanson added, and may explain some of the findings from the present study.
“A lot of these gestational carriers [in the present dataset] wouldn’t necessarily meet criteria through the American Society of Reproductive Medicine,” Dr. Swanson said, pointing out surrogates who had never had a pregnancy before or reported the use of tobacco or other drugs. “Really, it shows me that a lot of the people participating as gestational carriers were maybe not ideal candidates. I think one of the reasons that we might see that in this Canadian population is ... that you can’t compensate someone, so I think their pool of people willing to be gestational carriers is a lot smaller, and they may be a little bit less selective sometimes.”
Dr. Velez acknowledged that the present study was limited by a shortage of potentially relevant information concerning the surrogacy selection process, including underlying reasons for becoming a gestational carrier. More work is needed to understand the health and outcomes of these women, she said, including topics ranging from immunologic mechanisms to mental health.
She also called for more discussion surrounding maternal safety, with participation from all stakeholders, including governments, surrogates, intended parents, and physicians.
This study was funded by the Canadian Institutes of Health Research. The investigators disclosed no conflicts of interest. Dr. Swanson disclosed a relationship with Mitera.
A version of this article first appeared on Medscape.com.
Reducing Biologic Discontinuation Among Pediatric Crohn’s Patients
Proactive therapeutic drug monitoring (TDM) may help reduce biologic discontinuation among pediatric patients with Crohn’s disease (CD), according to investigators.
These findings, and others concerning a lack of high-dose therapy and poor follow-up, suggest that more work is needed to optimize biologic therapy in this patient population, reported lead author Sabina Ali, MD, of UCSF Benioff Children’s Hospital, Oakland, California, and colleagues.
“With few medications available for treating CD, limited therapeutic longevity places patients at risk of exhausting treatment options,” the investigators wrote in Clinical Gastroenterology and Hepatology. “This is especially problematic for children, for whom infliximab and adalimumab remain the only medications approved by the Food and Drug Administration (FDA), and who require effective long-term therapy to maintain remission and prevent morbidity and disability for decades to come.”
Despite these concerns, reasons behind biologic discontinuation in the pediatric CD population have been poorly characterized, prompting the present study.
Dr. Ali and colleagues analyzed prospectively collected data from 823 patients treated at seven pediatric inflammatory bowel disease centers. Median age was 13 years, with more male than female patients (60% vs 40%).
Within this group, 86% started biologics, most often infliximab (78%), followed by adalimumab (21%), and distantly, others (less than 1%). Most patients (86%) underwent TDM at some point during the treatment process, while one quarter (26%) took concomitant immunomodulators for at least 1 year.
Slightly less than one third of patients (29%) discontinued their first biologic after a median of approximately 2 years. The most common reason for discontinuation was inefficacy (34%), followed by nonadherence (12%), anti-drug antibodies (8%), and adverse events (8%).
Among those who discontinued due to inefficacy, 85% underwent prediscontinuation evaluation. When TDM of adalimumab or infliximab was performed prior to discontinuation, almost 2 out of 3 patients (62%) had drug levels lower than 10 µg/mL.
“We cannot determine the reasons dose escalation was not attempted,” the investigators wrote. “However, trough levels greater than 10 µg/mL may be associated with improved efficacy.”
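To make the dose-escalation point concrete, here is a deliberately simplified sketch of a threshold-based TDM rule. It is illustrative only, not clinical guidance: the 10 µg/mL trough figure comes from the discussion above, while the antibody handling and function names are assumptions added for the example.

```python
def tdm_next_step(trough_ug_per_ml: float, high_antidrug_antibodies: bool) -> str:
    """Toy decision rule for anti-TNF therapeutic drug monitoring (illustration only)."""
    if high_antidrug_antibodies:
        # High antibody titers often drive loss of response regardless of dose.
        return "consider switching agents (with or without an immunomodulator)"
    if trough_ug_per_ml < 10:
        # Subtherapeutic exposure without high antibodies: escalation may recapture response.
        return "consider dose escalation or a shortened interval before switching"
    return "adequate exposure; look for other explanations of persistent symptoms"

# Example reflecting the study's finding that 62% of tested patients had
# troughs below 10 ug/mL before their biologic was discontinued.
print(tdm_next_step(trough_ug_per_ml=6.0, high_antidrug_antibodies=False))
```

The point of the sketch is simply that a low trough without high antibodies is the scenario in which escalation, rather than discontinuation, is usually considered.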
Most patients (91%) who stopped their first biologic started a second, and more than one third (36%) also discontinued that second option, usually after about 1 year. After 4 years, only 10% of patients remained on their second biologic therapy. By study end, almost 1 out of 12 patients were on their third or fourth biologic, and 17% of patients were on a biologic currently not approved by the FDA.
Beyond characterizing these usage and discontinuation rates, the investigators also assessed factors associated with discontinuation or therapeutic persistence.
Proactive TDM was the strongest factor driving therapeutic persistence, reducing the risk of discontinuation by 63%. Concomitant immunomodulatory therapy also reduced discontinuation risk, by 30%. Conversely, use of 5-aminosalicylates within 90 days of diagnosis was associated with a 70% higher discontinuation rate.
“The reason for this [latter finding about aminosalicylates] is not clear but may be an indicator of insurance-related or other barriers to care,” the investigators wrote.
Dr. Ali and colleagues concluded by noting how concerning, and commonplace, biologic discontinuation is in this patient population.
“This poses a serious problem for pediatric patients who will require treatment for decades to come,” they wrote. “Thoughtful strategies are needed to preserve treatment longevity and minimize the loss of treatment options.”
This work was supported by the Gary and Rachel Glick Charitable Fund. The investigators disclosed relationships with Janssen, Eli Lilly, AbbVie, and others.
As pediatric gastroenterologists, we have significantly changed our practice over time, moving to more effective medications sooner and adopting therapeutic drug monitoring (TDM) as standard of care to optimize dosing. This study found that the use of TDM during the induction phase of biologic therapy increased from 2% to 70% over the study period, which is remarkable. Pediatric patients tend to have more extensive and severe disease, often necessitating higher dosing. With limited Food and Drug Administration–approved medications to treat children with IBD, it is imperative that we position these medications appropriately and be assertive with dose optimization to improve patient outcomes.
Alarmingly, one third of patients discontinued their biologic after a median of 2.2 years. Concerningly, half discontinued their biologics without a trial of high-dose therapy, and 14% did so without any evaluation. Trough levels >10 µg/mL may be associated with improved efficacy, and low antibody levels can often be overcome; however, many of these patients had levels below this threshold. This likely represents a missed opportunity to recapture response and increase durability with dose escalation. Biologic discontinuation was reduced by 60% with the use of proactive TDM and by 32% with concomitant immunomodulators (used for >12 months, compared with monotherapy). Pediatric data supporting the use of concomitant immunomodulators have been mixed.
As pediatric IBD physicians, we need to be more diligent about optimizing biologic therapy early. In many cases, early dose optimization could provide the protective effect observed with concomitant immunomodulators, allowing patients to avoid them and thereby decreasing the risk of potential side effects. This highlights the importance of shared decision-making discussions with our patients and families.
Further research is needed on strategies to increase drug durability, including TDM and dose optimization, adherence, health literacy, patient engagement, and the role of patient education in enhancing medication optimization and durability.
Jennifer L. Dotson, MD, MPH, is chief of pediatric gastroenterology, hepatology, and nutrition at Arkansas Children’s Hospital and professor of pediatrics at the University of Arkansas for Medical Sciences, both in Little Rock. She declares no conflicts of interest.
, according to investigators.
These findings, and others concerning a lack of high-dose therapy and poor follow-up, suggest that more work is needed to optimize biologic therapy in this patient population, reported lead author Sabina Ali, MD, of UCSF Benioff Children’s Hospital, Oakland, California, and colleagues.
“With few medications available for treating CD, limited therapeutic longevity places patients at risk of exhausting treatment options,” the investigators wrote in Clinical Gastroenterology and Hepatology. “This is especially problematic for children, for whom infliximab and adalimumab remain the only medications approved by the Food and Drug Administration (FDA), and who require effective long-term therapy to maintain remission and prevent morbidity and disability for decades to come.”
Despite these concerns, reasons behind biologic discontinuation in the pediatric CD population have been poorly characterized, prompting the present study.
Dr. Ali and colleagues analyzed prospectively collected data from 823 patients treated at seven pediatric inflammatory bowel disease centers. Median age was 13 years, with slightly more male than female patients (60% vs 40%).
Within this group, 86% started biologics, most often infliximab (78%), followed by adalimumab (21%), and distantly, others (less than 1%). Most patients (86%) underwent TDM at some point during the treatment process, while one quarter (26%) took concomitant immunomodulators for at least 1 year.
Slightly less than one third of patients (29%) discontinued their first biologic after a median of approximately 2 years. The most common reason for discontinuation was inefficacy (34%), followed by nonadherence (12%), anti-drug antibodies (8%), and adverse events (8%).
Among those who discontinued due to inefficacy, 85% underwent prediscontinuation evaluation. When TDM of adalimumab or infliximab was performed prior to discontinuation, almost 2 out of 3 patients (62%) had drug levels lower than 10 µg/mL.
“We cannot determine the reasons dose escalation was not attempted,” the investigators wrote. “However, trough levels greater than 10 mg/mL may be associated with improved efficacy.”
Most patients (91%) who stopped their first biologic started a second, and more than one third (36%) also discontinued that second option, usually after about 1 year. After 4 years, only 10% of patients remained on their second biologic therapy. By study end, almost 1 out of 12 patients were on their third or fourth biologic, and 17% of patients were on a biologic currently not approved by the FDA.
Beyond characterizing these usage and discontinuation rates, the investigators also assessed factors associated with discontinuation or therapeutic persistence.
Proactive TDM was the strongest factor driving therapeutic persistence, as it reduced risk of discontinuation by 63%. Concomitant immunomodulatory therapy also reduced discontinuation risk, by 30%. Conversely, usage of 5-aminoasalicylate in the first 90 days of diagnosis was associated with a 70% higher discontinuation rate.
“The reason for this [latter finding about aminosalicylates] is not clear but may be an indicator of insurance-related or other barriers to care,” the investigators wrote.
Dr. Ali and colleagues concluded by noting how concerning, and commonplace, biologic discontinuation is in this patient population.
“This poses a serious problem for pediatric patients who will require treatment for decades to come,” they wrote. “Thoughtful strategies are needed to preserve treatment longevity and minimize the loss of treatment options.”
This work was supported by the Gary and Rachel Glick Charitable Fund. The investigators disclosed relationships with Janssen, Eli Lilly, AbbVie, and others.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY

Study Questions Relationship Between Crohn’s Strictures and Cancer Risk
, according to investigators.
Although 8% of patients with strictures in a multicenter study were diagnosed with CRC, this diagnosis was made either simultaneously or within 1 year of stricture diagnosis, suggesting that cancer may have driven stricture development, and not the other way around, lead author Thomas Hunaut, MD, of Université de Champagne-Ardenne, Reims, France, and colleagues reported.
“The occurrence of colonic stricture in CD always raises concerns about the risk for dysplasia/cancer,” the investigators wrote in Gastro Hep Advances, noting that no consensus approach is currently available to guide stricture management. “Few studies with conflicting results have evaluated the frequency of CRC associated with colonic stricture in CD, and the natural history of colonic stricture in CD is poorly known.”
The present retrospective study included 88 consecutive CD patients with 96 colorectal strictures who were managed at three French referral centers between 1993 and 2022.
Strictures were symptomatic in 62.5% of cases, not passable by scope in 61.4% of cases, and ulcerated in 70.5% of cases. Colonic resection was needed in 47.7% of patients, while endoscopic balloon dilation was performed in 13.6% of patients.
After a median follow-up of 21.5 months, seven patients (8%) were diagnosed with malignant stricture, including five cases of colonic adenocarcinoma, one case of neuroendocrine carcinoma, and one case of B-cell lymphoproliferative neoplasia.
Malignant strictures were more common among older patients with longer disease duration and frequent obstructive symptoms; however, these associations did not hold in multivariate analyses, likely because of the small sample size, according to the investigators.
Instead, Dr. Hunaut and colleagues highlighted the timing of the diagnoses. In four out of seven patients with malignant stricture, both stricture and cancer were diagnosed at the same time. In the remaining three patients, cancer was diagnosed at 3 months, 8 months, and 12 months after stricture diagnosis. No cases of cancer were diagnosed later than 1 year after the stricture diagnosis.
“We believe that this result is important for the management of colonic strictures complicating CD in clinical practice,” Dr. Hunaut and colleagues wrote.
The simultaneity or proximity of the diagnoses suggests that the “strictures observed are already a neoplastic complication of the colonic inflammatory disease,” they explained.
In other words, common concerns about strictures causing cancer at the same site could be unfounded.
This conclusion echoes a recent administrative database study that reported no independent association between colorectal stricture and CRC, the investigators noted.
“Given the recent evidence on the risk of cancer associated with colonic strictures in CD, systematic colectomy is probably no longer justified,” they wrote. “Factors such as a long disease duration, primary sclerosing cholangitis, a history of dysplasia, and nonpassable and/or symptomatic stricture despite endoscopic dilation tend to argue in favor of surgery — especially if limited resection is possible.”
In contrast, patients with strictures who have low risk of CRC may be better served by a conservative approach, including endoscopy and systematic biopsies, followed by close endoscopic surveillance, according to the investigators. If the stricture is impassable, they recommended endoscopic balloon dilation, followed by intensification of medical therapy if ulceration is observed.
The investigators disclosed relationships with MSD, Ferring, Biogen, and others.
FROM GASTRO HEP ADVANCES
Subcutaneous Infliximab Beats Placebo for IBD Maintenance Therapy
These two randomized trials should increase confidence in subcutaneous (SC) infliximab as a convenient alternative to intravenous delivery, reported co–lead authors Stephen B. Hanauer, MD, AGAF, of Northwestern Feinberg School of Medicine, Chicago, Illinois, and Bruce E. Sands, MD, AGAF, of Icahn School of Medicine at Mount Sinai, New York City, and colleagues.
Specifically, the trials evaluated CT-P13, an infliximab biosimilar that was approved by the Food and Drug Administration (FDA) for intravenous (IV) use in 2016. The SC formulation was approved in the United States in 2023 as a new drug, a pathway that required confirmatory phase 3 efficacy trials.
“Physicians and patients may prefer SC to IV treatment for IBD, owing to the convenience and flexibility of at-home self-administration, a different exposure profile with high steady-state levels, reduced exposure to nosocomial infection, and health care system resource benefits,” the investigators wrote in Gastroenterology.
One trial included patients with Crohn’s disease (CD), while the other enrolled patients with ulcerative colitis (UC). Eligibility depended upon inadequate responses or intolerance to corticosteroids and immunomodulators.
All participants began by receiving open-label IV CT-P13, at a dosage of 5 mg/kg, at weeks 0, 2, and 6. At week 10, those who responded to the IV induction therapy were randomized in a 2:1 ratio to continue with either the SC formulation of CT-P13 (120 mg) or switch to placebo, administered every 2 weeks until week 54.
The CD study randomized 343 patients, while the UC study had a larger cohort, with 438 randomized. Median age of participants was in the mid-30s to late 30s, with a majority being White and male. Baseline disease severity, assessed by the Crohn’s Disease Activity Index (CDAI) for CD and the modified Mayo score for UC, was similar across treatment groups.
The primary efficacy endpoint was clinical remission at week 54, defined as a CDAI score of less than 150 for CD and a modified Mayo score of 0-1 for UC.
In the CD study, 62.3% of patients receiving CT-P13 SC achieved clinical remission, compared with 32.1% in the placebo group, with a treatment difference of 32.1% (95% CI, 20.9-42.1; P < .0001). In addition, 51.1% of CT-P13 SC-treated patients achieved endoscopic response, compared with 17.9% in the placebo group, yielding a treatment difference of 34.6% (95% CI, 24.1-43.5; P < .0001).
In the UC study, 43.2% of patients on CT-P13 SC achieved clinical remission at week 54, compared with 20.8% of those on placebo, with a treatment difference of 21.1% (95% CI, 11.8-29.3; P < .0001). Key secondary endpoints, including endoscopic-histologic mucosal improvement, also favored CT-P13 SC over placebo with statistically significant differences.
The safety profile of CT-P13 SC was comparable with that of IV infliximab, with no new safety concerns emerging during the trials.
“Our results demonstrate the superior efficacy of CT-P13 SC over placebo for maintenance therapy in patients with moderately to severely active CD or UC after induction with CT-P13 IV,” the investigators wrote. “Importantly, the findings confirm that CT-P13 SC is well tolerated in this population, with no clinically meaningful differences in safety profile, compared with placebo. Overall, the results support CT-P13 SC as a treatment option for maintenance therapy in patients with IBD.”
The LIBERTY studies were funded by Celltrion. The investigators disclosed relationships with Pfizer, Gilead, Takeda, and others.
Intravenous (IV) infliximab-dyyb, also called CT-P13 in clinical trials, is a biosimilar that was approved in the United States in 2016 under the brand name Inflectra. It received approval in Europe and elsewhere under the brand name Remsima.
The study from Hanauer and colleagues represents a milestone in biosimilar development, as the authors studied an injectable form of the approved IV biosimilar, infliximab-dyyb. How might efficacy compare between the two formulations? The LIBERTY studies did not include an active IV infliximab comparator to answer this question. Based on a phase 1, open-label trial, subcutaneous (SC) infliximab appears noninferior to IV infliximab.
It is remarkable that we have progressed from creating highly similar copies of older biologics whose patents have expired to reimagining and modifying biosimilars to potentially improve on efficacy, dosing, or tolerability, or, as in the case of SC infliximab-dyyb, to provide a new mode of delivery. For SC infliximab, it remains to be seen whether the innovator designation will lead to different patterns of use, based on cost or other factors, compared with regions where the injectable and intravenous formulations are both considered biosimilars.
Fernando S. Velayos, MD, MPH, AGAF, is director of the Inflammatory Bowel Disease Program, The Permanente Group Northern California; adjunct investigator at the Kaiser Permanente Division of Research; and chief of Gastroenterology and Hepatology, Kaiser Permanente San Francisco Medical Center. He reported no conflicts of interest.
FROM GASTROENTEROLOGY
Should All Patients With Early Breast Cancer Receive Adjuvant Radiotherapy?
based on a 30-year follow-up of the Scottish Breast Conservation Trial.
These findings suggest that patients with biology predicting late relapse may receive little benefit from adjuvant radiotherapy, lead author Linda J. Williams, PhD, of the University of Edinburgh in Scotland, and colleagues reported.
“During the past 30 years, several randomized controlled trials have investigated the role of postoperative radiotherapy after breast-conserving surgery for early breast cancer,” the investigators wrote in The Lancet Oncology. “These trials showed that radiotherapy reduces the risk of local recurrence but were underpowered individually to detect a difference in overall survival.”
How Did the Present Study Increase Our Understanding of the Benefits of Adjuvant Radiotherapy in Early Breast Cancer?
The present analysis included data from a trial that began in 1985, when 589 patients with early breast cancer (tumors ≤ 4 cm [T1 or T2 and N0 or N1]) were randomized to receive either high-dose or no radiotherapy, with final cohorts of 291 and 294 patients, respectively. Radiotherapy was delivered at 50 Gy in 20-25 fractions, either locally or locoregionally.
Estrogen receptor (ER)–positive patients (≥ 20 fmol/mg protein) received 5 years of daily oral tamoxifen. ER-poor patients (< 20 fmol/mg protein) received a chemotherapy combination of cyclophosphamide, methotrexate, and fluorouracil on a 21-day cycle for eight cycles.
Considering all data across a median follow-up of 17.5 years, adjuvant radiotherapy appeared to offer benefit, as it was associated with significantly lower ipsilateral breast tumor recurrence (16% vs 36%; hazard ratio [HR], 0.39; P < .0001).
But that tells only part of the story.
The positive impact of radiotherapy persisted for 1 decade (HR, 0.24; P < .0001), but risk beyond this point was no different between groups (HR, 0.98; P = .95).
“[The] benefit of radiotherapy was time dependent,” the investigators noted.
What’s more, median overall survival was no different between those who received radiotherapy and those who did not (18.7 vs 19.2 years; HR, 1.08; log-rank P = .43), and “reassuringly,” omitting radiotherapy did not increase the rate of distant metastasis.
How Might These Findings Influence Treatment Planning for Patients With Early Breast Cancer?
“The results can help clinicians to advise patients better about their choice to have radiotherapy or not if they better understand what benefits it does and does not bring,” the investigators wrote. “These results might provide clues perhaps to the biology of radiotherapy benefit, given that it does not prevent late recurrences, suggesting that patients whose biology predicts a late relapse only might not gain a benefit from radiotherapy.”
Gary M. Freedman, MD, chief of Women’s Health Service, Radiation Oncology, at Penn Medicine, Philadelphia, offered a different perspective.
“The study lumps together a local recurrence of breast cancer — that is relapse of the cancer years after treatment with lumpectomy and radiation — with the development of an entirely new breast cancer in the same breast,” Dr. Freedman said in a written comment. “When something comes back between years 0-5 and 0-8, we usually think of it as a true local recurrence arbitrarily, but beyond that they are new cancers.”
He went on to emphasize the clinical importance of reducing local recurrence within the first decade, noting that “this leads to much less morbidity and better quality of life for the patients.”
Dr. Freedman also shared his perspective on the survival data.
“Radiation did reduce breast cancer mortality very significantly — death from breast cancers went down from 46% to 37%,” he wrote (P = .054). “This is on the same level as chemo or hormone therapy. The study was not powered to detect significant differences in survival by radiation, but that has been shown with other meta-analyses.”
Are Findings From a Trial Started 30 Years Ago Still Relevant Today?
“Clearly the treatment of early breast cancer has advanced since the 1980s when the Scottish Conservation trial was launched,” study coauthor Ian Kunkler, MB, FRCR, of the University of Edinburgh, said in a written comment. “There is more breast screening, attention to clearing surgical margins of residual disease, more effective and longer periods of adjuvant hormonal therapy, reduced radiotherapy toxicity from more precise delivery. However, most anticancer treatments lose their effectiveness over time.”
He suggested that more trials are needed to confirm the present findings and reiterated that the lack of long-term recurrence benefit is most relevant for patients with disease features that predict late relapse, who “seem to gain little from adjuvant radiotherapy given as part of primary treatment.”
Dr. Kunkler noted that the observed benefit in the first decade supports the continued use of radiotherapy alongside anticancer drug treatment.
When asked the same question, Dr. Freedman emphasized the differences in treatment today vs the 1980s.
“The results of modern multidisciplinary cancer care are much, much better than these 30-year results,” Dr. Freedman said. “The risk for local recurrence in the breast after radiation is now about 2%-3% at 10 years in most studies.”
He also noted that modern radiotherapy techniques have “significantly lowered dose and risks to heart and lung,” compared with techniques used 30 years ago.
“A take-home point for the study is after breast conservation, whether or not you have radiation, you have to continue long-term screening mammograms for new breast cancers that may occur even decades later,” Dr. Freedman concluded.
How Might These Findings Impact Future Research Design and Funding?
“The findings should encourage trial funders to consider funding long-term follow-up beyond 10 years to assess benefits and risks of anticancer therapies,” Dr. Kunkler said. “The importance of long-term follow-up cannot be understated.”
This study was funded by Breast Cancer Institute (part of Edinburgh and Lothians Health Foundation), PFS Genomics (now part of Exact Sciences), the University of Edinburgh, and NHS Lothian. The investigators reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM THE LANCET ONCOLOGY
Do Clonal Hematopoiesis and Mosaic Chromosomal Alterations Increase Solid Tumor Risk?
Clonal hematopoiesis of indeterminate potential (CHIP) and mosaic chromosomal alterations (mCAs) are associated with an increased risk for breast cancer, and CHIP is associated with increased mortality in patients with colon cancer, according to the authors of new research.
These findings, drawn from almost 11,000 patients in the Women’s Health Initiative (WHI) study, add further evidence that CHIP and mCA drive solid tumor risk, alongside known associations with hematologic malignancies, reported lead author Pinkal Desai, MD, associate professor of medicine and clinical director of molecular aging at Englander Institute for Precision Medicine, Weill Cornell Medical College, New York City, and colleagues.
How This Study Differs From Others of Breast Cancer Risk Factors
“The independent effect of CHIP and mCA on risk and mortality from solid tumors has not been elucidated due to lack of detailed data on mortality outcomes and risk factors,” the investigators wrote in Cancer, although some previous studies have suggested a link.
In particular, the investigators highlighted a 2022 UK Biobank study, which reported an association between CHIP and lung cancer and a borderline association with breast cancer that did not quite reach statistical significance.
But the UK Biobank study was confined to a UK population, Dr. Desai noted in an interview, and the data were less detailed than those in the present investigation.
“In terms of risk, the part that was lacking in previous studies was a comprehensive assessment of risk factors that increase risk for all these cancers,” Dr. Desai said. “For example, for breast cancer, we had very detailed data on [participants’] Gail risk score, which is known to impact breast cancer risk. We also had mammogram data and colonoscopy data.”
In an accompanying editorial, Koichi Takahashi, MD, PhD, and Nehali Shah, BS, of The University of Texas MD Anderson Cancer Center, Houston, Texas, pointed out the same UK Biobank findings, then noted that CHIP has also been linked with worse overall survival in unselected cancer patients. Still, they wrote, “the impact of CH on cancer risk and mortality remains controversial due to conflicting data and context‐dependent effects,” necessitating studies like this one by Dr. Desai and colleagues.
How Was the Relationship Between CHIP, MCA, and Solid Tumor Risk Assessed?
To explore possible associations between CHIP, mCA, and solid tumors, the investigators analyzed whole genome sequencing data from 10,866 women in the WHI, a multi-study program that began in 1992 and involved 161,808 women in both observational and clinical trial cohorts.
In 2002, the first major data release from the WHI suggested that hormone replacement therapy (HRT) increased breast cancer risk, leading to a widespread reduction in HRT use.
More recent reports continue to shape our understanding of these risks, suggesting differences across cancer types. For breast cancer, the WHI data suggested that HRT-associated risk was largely driven by formulations involving progesterone and estrogen, whereas estrogen-only formulations, now more common, are generally considered to present an acceptable risk profile for suitable patients.
The new study accounted for this potential HRT-associated risk by adjusting for whether patients received HRT, the type of HRT received, and the duration of HRT use. According to Dr. Desai, this approach is commonly used when analyzing data from the WHI, nullifying concerns about the potentially deleterious effects of the hormones used in the study.
“Our question was not ‘does HRT cause cancer?’ ” Dr. Desai said in an interview. “But HRT can be linked to breast cancer risk and has a potential to be a confounder, and hence the above methodology.
“So I can say that the confounding/effect modification that HRT would have contributed to in the relationship between exposure (CH and mCA) and outcome (cancer) is well adjusted for as described above. This is standard in WHI analyses,” she continued.
“Every Women’s Health Initiative analysis that comes out — not just for our study — uses a standard method ... where you account for hormonal therapy,” Dr. Desai added, again noting that many other potential risk factors were considered, enabling a “detailed, robust” analysis.
Dr. Takahashi and Ms. Shah agreed. “A notable strength of this study is its adjustment for many confounding factors,” they wrote. “The cohort’s well‐annotated data on other known cancer risk factors allowed for a robust assessment of CH’s independent risk.”
How Do Findings Compare With Those of the UK Biobank Study?
CHIP was associated with a 30% increased risk for breast cancer (hazard ratio [HR], 1.30; 95% CI, 1.03-1.64; P = .02), strengthening the borderline association reported by the UK Biobank study.
In contrast with the UK Biobank study, CHIP was not associated with lung cancer risk, although this may have been due to the smaller number of lung cancer cases and the absence of male patients, Dr. Desai suggested.
“The discrepancy between the studies lies in the risk of lung cancer, although the point estimate in the current study suggested a positive association,” wrote Dr. Takahashi and Ms. Shah.
As in the UK Biobank study, CHIP was not associated with increased risk of developing colorectal cancer.
Mortality analysis, however, which was not conducted in the UK Biobank study, offered a new insight: Patients with existing colorectal cancer and CHIP had a significantly higher mortality risk than those without CHIP. Before stage adjustment, the risk for mortality among those with colorectal cancer and CHIP was fourfold higher than among those without CHIP (HR, 3.99; 95% CI, 2.41-6.62; P < .001). After stage adjustment, CHIP was still associated with a twofold higher mortality risk (HR, 2.50; 95% CI, 1.32-4.72; P = .004).
The investigators’ first mCA analyses, which employed a cell fraction cutoff greater than 3%, showed no significant associations. But raising the cell fraction threshold to 5% in an exploratory analysis showed that autosomal mCA was associated with a 39% increased risk for breast cancer (HR, 1.39; 95% CI, 1.06-1.83; P = .01). No such associations were found between mCA and colorectal or lung cancer, regardless of cell fraction threshold.
The original 3% cell fraction threshold was selected on the basis of previous studies reporting a link between mCA and hematologic malignancies at this cutoff, Dr. Desai said.
She and her colleagues said a higher 5% cutoff might be needed, as they suspected that the link between mCA and solid tumors may not be causal and may require a higher burden of mutated cells to become apparent.
Why Do Results Differ Between These Types of Studies?
Dr. Takahashi and Ms. Shah suggested that one possible limitation of the new study, and an obstacle to comparing results with the UK Biobank study and others like it, goes beyond population heterogeneity; incongruent findings could also be explained by differences in whole genome sequencing (WGS) technique.
“Although WGS allows sensitive detection of mCA through broad genomic coverage, it is less effective at detecting CHIP with low variant allele frequency (VAF) due to its relatively shallow depth (30x),” they wrote. “Consequently, the prevalence of mCA (18.8%) was much higher than that of CHIP (8.3%) in this cohort, contrasting with other studies using deeper sequencing.” As a result, the present study may have underestimated CHIP prevalence because of shallow sequencing depth.
“This inconsistency is a common challenge in CH population studies due to the lack of standardized methodologies and the frequent reliance on preexisting data not originally intended for CH detection,” Dr. Takahashi and Ms. Shah said.
Even so, despite the “heavily context-dependent” nature of these reported risks, the body of evidence to date now offers a convincing biological rationale linking CH with cancer development and outcomes, they added.
How Do the CHIP- and mCA-associated Risks Differ Between Solid Tumors and Blood Cancers?
“[These solid tumor risks are] not causal in the way CHIP mutations are causal for blood cancers,” Dr. Desai said. “Here we are talking about solid tumor risk, and it’s kind of scattered. It’s not just breast cancer ... there’s also increased colon cancer mortality. So I feel these mutations are doing something different ... they are sort of an added factor.”
Specific mechanisms remain unclear, Dr. Desai said, although she speculated about possible impacts on the inflammatory state or alterations to the tumor microenvironment.
“These are blood cells, right?” Dr. Desai asked. “They’re everywhere, and they’re changing something inherently in these tumors.”
Future Research and Therapeutic Development
Siddhartha Jaiswal, MD, PhD, assistant professor in the Department of Pathology at Stanford University in California, whose lab focuses on clonal hematopoiesis, said the causality question is central to future research.
“The key question is, are these mutations acting because they alter the function of blood cells in some way to promote cancer risk, or is it reflective of some sort of shared etiology that’s not causal?” Dr. Jaiswal said in an interview.
Available data support both possibilities.
On one side, “reasonable evidence” supports the noncausal view, Dr. Jaiswal noted, because telomere length is one of the most common genetic risk factors for clonal hematopoiesis and also for solid tumors, suggesting a shared genetic factor. On the other hand, CHIP and mCA could be directly protumorigenic by disturbing immune cell function.
When asked if both causal and noncausal factors could be at play, Dr. Jaiswal said, “yeah, absolutely.”
The presence of a causal association could be promising from a therapeutic standpoint.
“If it turns out that this association is driven by a direct causal effect of the mutations, perhaps related to immune cell function or dysfunction, then targeting that dysfunction could be a therapeutic path to improve outcomes in people, and there’s a lot of interest in this,” Dr. Jaiswal said. He went on to explain how a trial exploring this approach via interleukin-8 inhibition in lung cancer fell short.
Yet earlier intervention may still hold promise, according to experts.
“[This study] provokes the hypothesis that CH‐targeted interventions could potentially reduce cancer risk in the future,” Dr. Takahashi and Ms. Shah said in their editorial.
The WHI program is funded by the National Heart, Lung, and Blood Institute; National Institutes of Health; and the Department of Health & Human Services. The investigators disclosed relationships with Eli Lilly, AbbVie, Celgene, and others. Dr. Jaiswal reported stock equity in a company that has an interest in clonal hematopoiesis.
A version of this article first appeared on Medscape.com.
FROM CANCER