Hepatocellular Carcinoma: Leading Causes of Mortality Predicted
TOPLINE:
Alcohol-associated liver disease (ALD) will likely become the leading cause of hepatocellular carcinoma (HCC)–related mortality by 2026, and metabolic dysfunction–associated steatotic liver disease (MASLD) is projected to become the second leading cause by 2032, a new analysis found.
METHODOLOGY:
- HCC accounts for 75%-85% of primary liver cancers and most liver cancer deaths. Researchers have observed an upward trend in the incidence of and mortality from HCC in the past 2 decades.
- This cross-sectional study analyzed 188,280 HCC-related deaths among adults aged 25 years or older to determine trends in mortality rates and project age-standardized mortality rates through 2040. Data came from the National Vital Statistics System database from 2006 to 2022.
- Researchers stratified mortality data by etiology of liver disease (ALD, hepatitis B virus, hepatitis C virus, and MASLD), age group (25-64 years or 65 years and older), sex, and race/ethnicity.
- Demographic data showed that 77.4% of deaths occurred in men, 55.6% in individuals aged 65 years or older, and 62.3% in White individuals.
TAKEAWAY:
- Overall, the age-standardized mortality rate for HCC-related deaths increased from 3.65 per 100,000 persons in 2006 to 5.03 in 2022 and was projected to increase to 6.39 per 100,000 persons by 2040 (see the projection sketch after this list).
- Sex- and age-related disparities were substantial. Men had much higher rates of HCC-related mortality than women (8.15 vs 2.33 per 100,000 persons), with a projected rate among men of 9.78 per 100,000 persons by 2040. HCC-related mortality rates for people aged 65 years or older were 10 times higher than for those aged 25-64 years (18.37 vs 1.79 per 100,000 persons) in 2022, and the rate in the older group was projected to reach 32.81 per 100,000 persons by 2040.
- Although hepatitis C virus–related deaths were projected to decline from 0.69 to 0.03 per 100,000 persons by 2034, ALD- and MASLD-related deaths showed increasing trends, with both projected to become the two leading causes of HCC-related mortality in the next few years.
- Racial disparities were also evident. The American Indian/Alaska Native population showed the largest projected increase in HCC-related mortality rates, from 5.46 per 100,000 persons in 2006 to a projected 14.71 per 100,000 persons by 2040.
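For intuition on how such projections behave, here is a minimal sketch in Python (not the authors' model, which was more sophisticated): it fits a log-linear trend through the two reported overall rates and extrapolates to 2040. The output intentionally differs from the published 6.39 figure, which illustrates how sensitive long-range forecasts are to model choice.

```python
# Minimal sketch, assuming a log-linear trend through the two reported
# overall rates; the study's actual projection model was more sophisticated.
import math

years = [2006, 2022]
rates = [3.65, 5.03]  # reported age-standardized rates per 100,000 persons

# Fit log(rate) = a + b * year through the two observed points.
b = (math.log(rates[1]) - math.log(rates[0])) / (years[1] - years[0])
a = math.log(rates[0]) - b * years[0]

projected_2040 = math.exp(a + b * 2040)
print(f"Projected 2040 rate: {projected_2040:.2f} per 100,000")
# Prints ~7.21, vs the study's 6.39 -- model choice matters for forecasts.
```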
IN PRACTICE:
“HCC mortality was projected to continue increasing in the US, primarily due to rising rates of deaths attributable to ALD and MASLD,” the authors wrote.
This “study highlights the importance of addressing these conditions to decrease the burden of liver disease and liver disease mortality in the future,” Emad Qayed, MD, MPH, Emory University School of Medicine, Atlanta, wrote in an accompanying editorial.
SOURCE:
The study was led by Sikai Qiu, MM, The Second Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China, and was published online in JAMA Network Open.
LIMITATIONS:
The National Vital Statistics System database used in this study captured only mortality data without access to detailed clinical records or individual medical histories. Researchers could not analyze socioeconomic factors or individual-level risk factors owing to data anonymization requirements. Additionally, the inclusion of the COVID-19 pandemic period could have influenced observed trends and reliability of future projections.
DISCLOSURES:
This study was supported by grants from the National Natural Science Foundation of China. Several authors reported receiving consulting fees, speaking fees, or research support from various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Does Intensive Follow-Up Testing Improve Survival in CRC?
TOPLINE:
More frequent follow-up testing with CT scans and carcinoembryonic antigen (CEA) screening after curative resection of stage II or III colorectal cancer (CRC) did not reduce 10-year overall or CRC-specific mortality, according to findings from a secondary analysis.
METHODOLOGY:
- After curative surgery for CRC, intensive patient follow-up is common in clinical practice. However, there’s limited evidence to suggest that more frequent testing provides a long-term survival benefit.
- In the COLOFOL trial, patients with stage II or III CRC who had undergone curative resection were randomly assigned to either high-frequency follow-up (CT scans and CEA screening at 6, 12, 18, 24, and 36 months) or low-frequency follow-up (testing at 12 and 36 months) after surgery.
- This secondary analysis of the COLOFOL trial included 2456 patients (median age, 65 years), 1227 of whom received high-frequency follow-up and 1229 of whom received low-frequency follow-up.
- The main outcomes of the secondary analysis were the 10-year overall mortality and CRC–specific mortality rates.
- The analysis included both intention-to-treat and per-protocol approaches, with outcomes measured through December 2020.
TAKEAWAY:
- In the intention-to-treat analysis, the 10-year overall mortality rates were similar between the high- and low-frequency follow-up groups — 27.1% and 28.4%, respectively (risk difference, 1.3%; P = .46; see the worked check after this list).
- A per-protocol analysis confirmed these findings: The 10-year overall mortality risk was 26.4% in the high-frequency group and 27.8% in the low-frequency group.
- The 10-year CRC–specific mortality rate was also similar between the high-frequency and low-frequency groups — 15.6% and 16.0%, respectively (risk difference, 0.4%; P = .72). The same pattern was seen in the per-protocol analysis, which found a 10-year CRC–specific mortality risk of 15.6% in the high-frequency group and 15.9% in the low-frequency group.
- Subgroup analyses by cancer stage and location (rectal and colon) also revealed no significant differences in mortality outcomes between the two follow-up groups.
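As a rough plausibility check on the headline comparison (this list's first item), the sketch below runs a simple two-proportion z-test on the reported 10-year mortality rates and group sizes. This is a deliberate simplification; the trial's actual analysis was a time-to-event analysis that accounted for censoring.

```python
# Minimal sketch, assuming a crude two-proportion z-test on the reported
# 10-year mortality; the trial's actual analysis accounted for censoring.
import math

n_high, n_low = 1227, 1229    # randomized group sizes (reported)
p_high, p_low = 0.271, 0.284  # 10-year overall mortality (reported)

risk_difference = p_low - p_high  # 1.3 percentage points
pooled = (p_high * n_high + p_low * n_low) / (n_high + n_low)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_high + 1 / n_low))
z = risk_difference / se

# Two-sided P value via the normal CDF (math.erf is in the stdlib).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"RD = {risk_difference:.3f}, z = {z:.2f}, P ~ {p_value:.2f}")
# Prints P ~ 0.47, consistent with the reported P = .46.
```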
IN PRACTICE:
“This secondary analysis of the COLOFOL randomized clinical trial found that, among patients with stage II or III colorectal cancer, more frequent follow-up testing with CT scan and CEA screening, compared with less frequent follow-up, did not result in a significant rate reduction in 10-year overall mortality or colorectal cancer-specific mortality,” the authors concluded. “The results of this trial should be considered as the evidence base for updating clinical guidelines.”
SOURCE:
The study, led by Henrik Toft Sørensen, MD, PhD, DMSc, DSc, Aarhus University Hospital and Aarhus University, Aarhus, Denmark, was published online in JAMA Network Open.
LIMITATIONS:
Staff turnover at recruitment centers potentially affected protocol adherence. The inability to blind patients and physicians to the follow-up frequency was another limitation. The low-frequency follow-up protocol was less intensive than that recommended in current guidelines by the National Comprehensive Cancer Network and the American Society of Clinical Oncology, potentially limiting comparisons with current standard practice.
DISCLOSURES:
The initial trial received unrestricted grants from multiple organizations, including the Nordic Cancer Union, A.P. Møller Foundation, Beckett Foundation, Danish Cancer Society, and Swedish Cancer Foundation. The authors reported no relevant conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Novel Digital Intervention Shows Promise for Depression
TOPLINE:
InterRhythmic care (IRC), a novel digital intervention, was linked to greater improvements in depressive symptoms, anxiety, interpersonal relationships, and social functioning in patients with major depressive disorder (MDD) than internet general psychoeducation, new research showed.
METHODOLOGY:
- The randomized, single-blind trial included 120 outpatients with MDD (mean age, 28.2 years; 99% Han Chinese; 83% women) recruited from the Shanghai Mental Health Center between March and November 2021, who were randomly assigned to receive either IRC or internet general psychoeducation (control group).
- IRC included computer-based psychoeducation on stabilizing social rhythm regularity and resolution of interpersonal problems plus brief online interactions with clinicians. Patients received 10 minutes of IRC daily, Monday through Friday, for 8 weeks.
- The researchers assessed participants’ depressive symptoms, anxiety symptoms, interpersonal relationships, social function, and biological rhythms using the 17-item Hamilton Depression Rating Scale, Hamilton Anxiety Scale, Interpersonal Comprehensive Diagnostic Scale, Sheehan Disability Scale, and Morning and Evening Questionnaire at baseline and at 8 weeks.
TAKEAWAY:
- The participants who received IRC had significantly lower Hamilton Depression Rating Scale total scores than those who received internet general psychoeducation (P < .001).
- The IRC group demonstrated greater improvement in anxiety symptoms, as evidenced by lower Hamilton Anxiety Scale total scores than those observed in the control group (P < .001).
- The IRC group also showed improved outcomes in interpersonal relationships, as indicated by lower Interpersonal Comprehensive Diagnostic Scale total scores (P < .001).
- Social functioning improved significantly in the IRC group, as measured by the Sheehan Disability Scale subscores for work/school (P = .03), social life (P < .001), and family life (P = .001).
IN PRACTICE:
“This study demonstrated that IRC can improve clinical symptoms such as depressive symptoms, anxiety symptoms, interpersonal problems, and social function in patients with MDD. Our study suggested that the IRC can be used in clinical practice,” the investigators wrote.
SOURCE:
The study was led by Chuchen Xu, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine in China. It was published online on November 20, 2024, in The Journal of Psychiatric Research.
LIMITATIONS:
The 8-week follow-up period was too short to comprehensively evaluate the intervention’s long-term impact. Additionally, the researchers had to check and supervise assignment completion, which increased research costs and may therefore limit broader implementation.
DISCLOSURES:
The investigators reported no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
How Are Patients Managing Intermediate-Risk Prostate Cancer?
TOPLINE:
The use of active surveillance and watchful waiting for intermediate-risk prostate cancer more than doubled in the United States between 2010 and 2020, although uptake remained low in higher Gleason grade groups, a registry analysis found.
METHODOLOGY:
- Current guidelines support active surveillance or watchful waiting for select patients with intermediate-risk prostate cancer. These observation strategies may help reduce the adverse effects associated with immediate radical treatment.
- To understand the trends over time in the use of active surveillance and watchful waiting, researchers looked at data of 147,205 individuals with intermediate-risk prostate cancer from the Surveillance, Epidemiology, and End Results prostate cancer database between 2010 and 2020 in the United States.
- Criteria for intermediate-risk disease included Gleason grade group 2 or 3, prostate-specific antigen (PSA) levels of 10-20 ng/mL, or clinical stage cT2b. Researchers also included trends for patients with Gleason grade group 1 as a reference group.
- Researchers assessed the temporal trends and factors associated with the selection of active surveillance and watchful waiting in this population.
TAKEAWAY:
- Overall, the rate of active surveillance and watchful waiting more than doubled among intermediate-risk patients from 5% to 12.3% between 2010 and 2020.
- Between 2010 and 2020, the use of active surveillance and watchful waiting increased significantly among patients in Gleason grade group 1 (13.2% to 53.8%) and Gleason grade group 2 (4.0% to 11.6%) but remained stable for those in Gleason grade group 3 (2.5% to 2.8%; P = .85). For those with PSA levels < 10 ng/mL, adoption increased from 3.4% in 2010 to 9.2% in 2020 and more than doubled (9.3% to 20.7%) for those with PSA levels of 10-20 ng/mL.
- Higher Gleason grade groups had a significantly lower likelihood of adopting active surveillance or watchful waiting (Gleason grade group 2 vs 1: odds ratio [OR], 0.83; Gleason grade group 3 vs 1: OR, 0.79; see the sketch after this list).
- Hispanic or Latino individuals (OR, 0.98) and non-Hispanic Black individuals (OR, 0.99) were slightly less likely to adopt these strategies than non-Hispanic White individuals.
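To make the odds-ratio scale concrete, the sketch referenced above computes a crude OR directly from the reported 2020 adoption proportions. The study's ORs are adjusted, model-based estimates, so the crude value differs sharply; the point is only to show the calculation, not to reproduce the published figures.

```python
# Minimal sketch: crude odds ratio from the reported 2020 adoption
# proportions. The study's ORs (0.83, 0.79) are adjusted model-based
# estimates, so this unadjusted value is expected to differ.
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

p_gg1, p_gg2 = 0.538, 0.116  # 2020 adoption, Gleason grade groups 1 and 2

crude_or = odds(p_gg2) / odds(p_gg1)
print(f"Crude OR (GG2 vs GG1, 2020): {crude_or:.2f}")  # ~0.11
```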
IN PRACTICE:
“This study found a significant increase in initial active surveillance and watchful waiting for intermediate-risk prostate cancer between 2010 and 2020,” the authors wrote. “Research priorities should include reducing upfront overdiagnosis and better defining criteria for starting and stopping active surveillance and watchful waiting beyond conventional clinical measures such as GGs [Gleason grade groups] or PSA levels alone.”
SOURCE:
This study, led by Ismail Ajjawi, Yale School of Medicine, New Haven, Connecticut, was published online in JAMA.
LIMITATIONS:
This study relied on observational data and therefore could not capture various factors influencing clinical decision-making processes. Additionally, the absence of information on patient outcomes restricted the ability to assess the long-term implications of different management strategies.
DISCLOSURES:
This study received financial support from the Urological Research Foundation. Several authors reported ties with various sources.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Australia Registry Study: Melanoma-Related Deaths Increase at 0.8-mm Breslow Thickness
TOPLINE:
Melanoma-related mortality was substantially higher among patients with thin (T1) melanomas of Breslow thickness 0.8-1.0 mm than among those with thinner tumors in an Australian study that used registry data.
METHODOLOGY:
- The study analyzed 144,447 individuals (median age, 56 years; 54% men) diagnosed with thin (T1) primary invasive melanomas (Breslow thickness, ≤ 1.0 mm) between 1982 and 2014 from all eight Australian state and territory population-based cancer registries.
- The researchers evaluated the associations between Breslow thickness (< 0.8 mm vs 0.8-1.0 mm) and incidences of melanoma-related and nonmelanoma-related deaths.
- The primary endpoint was time to death attributable to a melanoma-related cause, with death by a nonmelanoma-related cause as a competing event.
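For readers unfamiliar with the competing-risks framing just described, the toy sketch below shows how a cause-specific cumulative incidence accumulates when a competing event is present (a discrete Aalen-Johansen-style recursion). The interval counts are invented; the study's estimator operated on individual follow-up data.

```python
# Toy sketch with invented counts: cumulative incidence of melanoma-related
# death with nonmelanoma-related death as a competing event.
# Each tuple: (at_risk, melanoma_deaths, other_deaths) for one interval.
intervals = [(1000, 5, 10), (985, 4, 12), (969, 6, 15)]

surv = 1.0          # probability of being event-free so far
cif_melanoma = 0.0  # cumulative incidence of melanoma-related death

for at_risk, d_mel, d_other in intervals:
    cif_melanoma += surv * d_mel / at_risk   # cause-specific contribution
    surv *= 1 - (d_mel + d_other) / at_risk  # update all-cause survival

print(f"Cumulative incidence of melanoma-related death: {cif_melanoma:.4f}")
```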
TAKEAWAY:
- The 20-year cumulative incidence of melanoma-related deaths was 6.3% for the whole cohort. The incidence was higher for tumors with a thickness of 0.8-1.0 mm (11%) than for those with a thickness < 0.8 mm (5.6%).
- The overall 20-year melanoma-specific survival rate was 95.9%, with rates of 94.2% for tumors < 0.8 mm and 87.8% for tumors measuring 0.8-1.0 mm in thickness. Each 0.1-mm increase in Breslow thickness was associated with worse prognosis.
- A multivariable analysis revealed that a tumor thickness of 0.8-1.0 mm was associated with both a greater absolute risk for melanoma-related deaths (subdistribution hazard ratio, 2.92) and a higher rate of melanoma-related deaths (hazard ratio, 2.98) than a tumor thickness < 0.8 mm.
- The 20-year incidence of death from nonmelanoma-related causes was 23.4%, but the risk for death from these causes showed no significant association with Breslow thickness categories.
IN PRACTICE:
“The findings of this large-scale population–based analysis suggest the separation of risk for patients with melanomas with a Breslow thickness above and below 0.8 mm,” the authors wrote, adding: “These results suggest that a change of the T1 threshold from 1.0 mm to 0.8 mm should be considered when the AJCC [American Joint Committee on Cancer] staging system is next reviewed.”
SOURCE:
The study was led by Serigne N. Lo, PhD, Melanoma Institute Australia, the University of Sydney. It was published online on December 11, 2024, in JAMA Dermatology.
LIMITATIONS:
The study was registry-based and did not capture details such as tumor characteristics and treatment modalities. Inaccuracies in reporting the cause of death may have led to an underestimation of melanoma-specific mortality risks across all thickness groups and an overestimation of nonmelanoma mortality risks.
DISCLOSURES:
The study received funding support from Melanoma Institute Australia and two grants from the Australian National Health and Medical Research Council (NHMRC). Several authors reported receiving grants or personal fees from or having ties with various sources, including NHMRC.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Cutaneous Lupus Associated with Greater Risk for Atherosclerotic Cardiovascular Disease
TOPLINE:
Cutaneous lupus erythematosus (CLE) was associated with a greater risk for atherosclerotic cardiovascular disease (ASCVD), aligning more closely with systemic lupus erythematosus (SLE) than with psoriasis.
METHODOLOGY:
- A retrospective matched longitudinal study compared the incidence and prevalence of ASCVD in 8138 individuals with CLE; 24,675 with SLE; 192,577 with psoriasis; and 81,380 control individuals.
- The disease-free control population was matched in a 10:1 ratio to the CLE population on the basis of age, sex, insurance type, and enrollment duration.
- Prevalent ASCVD was defined as coronary artery disease, prior myocardial infarction, or cerebrovascular accident, with ASCVD incidence assessed by number of hospitalizations over 3 years.
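The incidence figures in the takeaways below are rates per 1000 person-years. As a minimal sketch, the following computes one such rate under an assumed 3 years of follow-up per person, with a hypothetical event count chosen to reproduce the reported CLE figure.

```python
# Minimal sketch: incidence rate per 1000 person-years. The event count is
# hypothetical, chosen to reproduce the reported 15.2 for the CLE cohort.
def rate_per_1000_py(events, person_years):
    return 1000 * events / person_years

person_years = 8138 * 3  # 8138 people with CLE, ~3 years of follow-up each
print(f"{rate_per_1000_py(371, person_years):.1f} per 1000 person-years")
# Prints 15.2 per 1000 person-years.
```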
TAKEAWAY:
- Persons with CLE had higher ASCVD risk than control individuals (odds ratio [OR], 1.72; P < .001), similar to those with SLE (OR, 2.41; P < .001) but unlike those with psoriasis (OR, 1.03; P = .48).
- ASCVD incidence at 3 years was 24.8 per 1000 person-years for SLE, 15.2 per 1000 person-years for CLE, 14.0 per 1000 person-years for psoriasis, and 10.3 per 1000 person-years for controls.
- Multivariable Cox proportional regression modeling showed ASCVD risk was highest in those with SLE (hazard ratio [HR], 2.23; P < .001) vs CLE (HR, 1.32; P < .001) and psoriasis (HR, 1.06; P = .09).
- ASCVD prevalence was higher in individuals with CLE receiving systemic therapy (2.7%) than in those receiving no therapy (1.6%), suggesting a potential link between disease severity and CVD risk.
IN PRACTICE:
“Persons with CLE are at higher risk for ASCVD, and guidelines for the evaluation and management of ASCVD may improve their quality of care,” the authors wrote.
SOURCE:
The study was led by Henry W. Chen, MD, Department of Dermatology, University of Texas Southwestern Medical Center, Dallas. It was published online on December 4, 2024, in JAMA Dermatology.
LIMITATIONS:
The study was limited by its relatively young population (median age, 49 years) and the exclusion of adults aged > 65 years on Medicare insurance plans. The database lacked race and ethnicity data, and the analysis was restricted to a shorter 3-year period. The study could not fully evaluate detailed risk factors such as blood pressure levels, cholesterol measurements, or glycemic control, nor could it accurately assess smoking status.
DISCLOSURES:
The research was supported by the Department of Dermatology at the University of Texas Southwestern Medical Center and a grant from the National Institutes of Health. Several authors reported receiving grants or personal fees from various pharmaceutical companies. One author reported being a deputy editor for diversity, equity, and inclusion at JAMA Cardiology. Additional disclosures are noted in the original article.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
US Study Pinpoints Merkel Cell Risk Factors
TOPLINE:
Merkel cell polyomavirus (MCPyV) and ambient UV radiation (UVR) exposure, rather than immunosuppressive conditions, account for most cases of Merkel cell carcinoma (MCC) in the United States.
METHODOLOGY:
- Researchers evaluated 38,020 MCC cases (38% women; 93% non-Hispanic White, 4% Hispanic, 1% non-Hispanic Black) diagnosed in the United States from 2001 to 2019 to estimate the contribution of potentially modifiable risk factors to the burden of MCC.
- Population-based cancer registries and linkages with HIV and transplant registries were utilized to identify MCC cases in patients with HIV, solid organ transplant recipients, and patients with chronic lymphocytic leukemia (CLL).
- Data on cloud-adjusted daily ambient UVR irradiance were merged with cancer registry information on the county of residence at diagnosis to assess UVR exposure. Studies reporting the prevalence of MCPyV in MCC specimens collected in the United States were combined via a meta-analysis.
- The study assessed population attributable fractions of MCC cases that were attributable to major immunosuppressive conditions (HIV, solid organ transplant, and CLL), ambient UVR exposure, and MCPyV.
TAKEAWAY:
- The incidence of MCC was higher in people with HIV (standardized incidence ratio [SIR], 2.78), organ transplant recipients (SIR, 13.1), and patients with CLL (SIR, 5.75) than in the general US population. However, only 2.5% of MCC cases were attributable to these immunosuppressive conditions (a sketch of the attributable-fraction arithmetic follows this list).
- Non-Hispanic White individuals showed elevated MCC incidence at both lower and higher ambient UVR exposure levels, with incidence rate ratios of 4.05 and 4.91, respectively, for MCC on the head and neck.
- A meta-analysis of 19 case series revealed that 63.8% of MCC cases were attributable to MCPyV, with a similar prevalence observed between immunocompromised and immunocompetent patients.
- Overall, 65.1% of MCC cases were attributable to ambient UVR exposure, with higher attribution for cases diagnosed on the head and neck than those diagnosed on other sites (72.1% vs 60.2%).
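The attributable-fraction results above combine each condition's relative risk with how common it is among cases. Below is a hedged sketch of the case-based (Miettinen) formula often used with registry SIRs; the case proportions are hypothetical placeholders, not study data, and the study's own method may differ in detail.

```python
# Sketch of the case-based population attributable fraction (PAF):
# PAF = p_case * (SIR - 1) / SIR, where p_case is the share of MCC
# cases with the condition. The p_case values below are hypothetical.

def paf_case_based(p_case: float, sir: float) -> float:
    return p_case * (sir - 1) / sir

hypothetical = {                     # condition: (share of cases, published SIR)
    "HIV": (0.010, 2.78),
    "solid organ transplant": (0.015, 13.1),
    "CLL": (0.010, 5.75),
}
total = sum(paf_case_based(p, sir) for p, sir in hypothetical.values())
print(f"combined PAF ~ {total:.1%}")  # small, broadly consistent with the ~2.5% reported
```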
IN PRACTICE:
“The results of this study suggest that most MCC cases in the US are attributable to MCPyV and/or ambient UVR [UV radiation] exposure, with a smaller fraction attributable to three major immunosuppressive conditions,” the authors wrote. “Future studies should investigate UVR mutational signature, TMB [tumor mutational burden], and MCPyV prevalence according to race and ethnicity and patient immune status to help clarify the overlap between MCC risk factors.”
SOURCE:
The study was led by Jacob T. Tribble, BA, Division of Cancer Epidemiology and Genetics, National Cancer Institute (NCI), Rockville, Maryland. It was published online on November 27, 2024, in JAMA Dermatology.
LIMITATIONS:
MCC incidence may have been inflated because of increased medical surveillance in immunosuppressed populations. The analysis assumed that only cases among non-Hispanic White individuals were associated with UVR. Additionally, the meta-analysis of MCPyV prevalence primarily included studies from large academic institutions, which may not be representative of the entire US population.
DISCLOSURES:
This study was supported in part by the Intramural Research Program of the NCI and the National Institutes of Health Medical Research Scholars Program. Additional funding was provided through a public-private partnership with contributions from the American Association for Dental Research and the Colgate-Palmolive Company to the Foundation for the National Institutes of Health. The authors reported no relevant conflicts of interest.
This article was created using several editorial tools, including artificial intelligence, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Is 1-Week Radiotherapy Safe for Breast Cancer?
TOPLINE:
A 1-week ultrahypofractionated radiotherapy schedule of 26 Gy in five fractions for breast cancer was associated with acceptable late toxicity at 1 year in routine practice. Most patients also reported that the reduced treatment time was a major benefit of the 1-week radiotherapy schedule.
METHODOLOGY:
- In March 2020, during the COVID-19 pandemic, international and national guidelines recommended adopting a 1-week ultrahypofractionated radiotherapy schedule for patients with node-negative breast cancer. Subsequently, a phase 3 trial demonstrated that a 1-week regimen of 26 Gy in five fractions led to breast cancer outcomes similar to those with a standard moderately hypofractionated regimen (a back-of-the-envelope dose comparison follows this list).
- In this study, researchers assessed real-world toxicities following ultrahypofractionated radiotherapy, enrolling 135 consecutive patients who received 1-week ultrahypofractionated adjuvant radiation of 26 Gy in five fractions from March to August 2020 at three centers in Ireland; 33 patients (25%) received a sequential boost.
- Researchers recorded patient-reported outcomes on breast pain, swelling, firmness, and hypersensitivity at baseline and at 3, 6, and 12 months. Virtual consultations without video occurred at baseline and at 3 and 6 months, and video consultations were offered at 1 year for a physician-led breast evaluation.
- Researchers assessed patient perspectives on this new schedule and telehealth workflows using questionnaires.
- Overall, 90% of patients completed the 1-year assessment plus at least one other assessment. The primary endpoint was the worst toxicity reported at each time point.
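To see why 26 Gy in five fractions is considered comparable to a standard moderately hypofractionated course, both schedules can be converted to an equivalent dose in 2-Gy fractions (EQD2) under the linear-quadratic model. The sketch below assumes an α/β of about 4 Gy for breast tissue, a value drawn from the radiobiology literature rather than from this study.

```python
# EQD2 = D * (d + a/b) / (2 + a/b), where D is total dose, d is dose per
# fraction, and a/b (alpha/beta) characterizes tissue fractionation
# sensitivity. The alpha/beta of 4 Gy is an assumed literature value.

def eqd2(total_dose: float, dose_per_fraction: float, alpha_beta: float) -> float:
    return total_dose * (dose_per_fraction + alpha_beta) / (2 + alpha_beta)

print(round(eqd2(26, 5.2, 4.0), 1))      # 1-week schedule (26 Gy / 5)  -> 39.9
print(round(eqd2(40, 40 / 15, 4.0), 1))  # 15-fraction course (40 Gy)   -> 44.4
```

On this assumption the two schedules deliver broadly similar biologically effective doses, consistent with the similar outcomes reported in the phase 3 trial.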
TAKEAWAY:
- Overall, 76% of patients reported no or mild toxicities at 3 and 6 months, and 82% reported no or mild toxicities at 12 months.
- At 1 year, 20 patients (17%) reported moderate toxicity, most commonly breast pain, and only two patients (2%) reported marked toxicities, including breast firmness and skin changes.
- Researchers found no difference in toxicities between patients who received only 26 Gy in five fractions and those who received an additional sequential boost.
- Most patients reported reduced treatment time (78.6%) and infection control (59%) as major benefits of the 1-week radiotherapy regimen. Patients also reported high satisfaction with the use of telehealth, with 97.3% feeling well-informed about their diagnosis, 88% feeling well-informed about treatment side effects, and 94% feeling supported by the medical team. However, only 27% agreed to video consultations for breast inspections at 1 year.
IN PRACTICE:
“Ultrahypofractionated whole breast radiotherapy leads to acceptable late toxicity rates at 1 year even when followed by a hypofractionated tumour bed boost,” the authors wrote. “Patient satisfaction with ultrahypofractionated treatment and virtual consultations without video was high.”
SOURCE:
The study, led by Jill Nicholson, MBBS, MRCP, FFR RCSI, St. Luke’s Radiation Oncology Network, St. Luke’s Hospital, Dublin, Ireland, was published online in Advances in Radiation Oncology.
LIMITATIONS:
The short follow-up period might not capture all late toxicities. Variability in patient-reported outcomes could affect consistency. The range in boost received (four to eight fractions) could have influenced patients’ experiences.
DISCLOSURES:
Nicholson received funding from the St. Luke’s Institute of Cancer Research, Dublin, Ireland. No other relevant conflicts of interest were disclosed by the authors.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Spinal Cord Stimulation Promising for Chronic Back, Leg Pain
TOPLINE:
Spinal cord stimulation (SCS) therapies for chronic back and/or leg pain are superior to conventional medical management (CMM) in reducing pain intensity and functional disability, new research suggests.
METHODOLOGY:
- Researchers performed a systematic review and network meta-analysis of 13 randomized clinical trials that compared conventional and novel SCS therapies with CMM.
- More than 1500 adults with chronic back and/or leg pain and no history of receiving SCS therapies were included.
- Novel therapies included high-frequency, burst, differential target multiplexed, and closed-loop SCS; conventional therapies included tonic SCS waveforms.
- Study outcomes included pain intensity in the back and in the leg, proportion of patients achieving at least 50% pain reduction in the back and in the leg, quality of life as measured by the EuroQol-5 Dimensions (EQ-5D) index, and functional disability on the Oswestry Disability Index.
- The analysis included data from follow-up points at 3, 6, 12, and 24 months; the 6-month timepoint was the longest follow-up mutually reported across all outcomes.
TAKEAWAY:
- Both conventional and novel SCS therapies demonstrated superior efficacy vs CMM in pain reduction, but the novel SCS therapies were more likely to provide ≥ 50% reduction in back pain (odds ratio, 8.76; 95% credible interval [CrI], 3.84-22.31); a sketch translating this odds ratio into absolute terms follows this list.
- Both SCS therapies showed a significant reduction in pain intensity, with novel SCS providing the greatest mean difference (MD) for back pain (–2.34; 95% CrI, –2.96 to –1.73) and lower leg pain (MD, –4.01; 95% CrI, –5.31 to –2.75).
- Quality of life improved with both types of SCS therapies, with novel SCS therapies yielding the highest MD (0.17; 95% CrI, 0.13-0.21) in EQ-5D index score.
- Conventional SCS showed greater improvement in functionality vs CMM, yielding the largest reduction in Oswestry Disability Index score (MD, –7.10; 95% CrI, –10.91 to –3.36).
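Odds ratios can overstate intuitive effect sizes when the outcome is common, so it helps to translate the ≥ 50% back-pain response result into absolute probabilities. The sketch below assumes a hypothetical 30% responder rate under CMM purely for illustration; the trials' actual baseline rates varied.

```python
# Convert an odds ratio into a treated-group response probability,
# given an assumed control-group (CMM) responder rate.

def responder_rate_from_or(baseline_rate: float, odds_ratio: float) -> float:
    odds = baseline_rate / (1 - baseline_rate) * odds_ratio
    return odds / (1 + odds)

baseline = 0.30  # hypothetical CMM responder rate, for illustration only
print(round(responder_rate_from_or(baseline, 8.76), 2))  # -> 0.79
```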
IN PRACTICE:
“We found that SCS was associated with improved pain and QOL [quality of life] and reduced disability, compared with CMM, after 6 months of follow-up. These findings highlight the potential of SCS therapies as an effective and valuable option in chronic pain management,” the investigators wrote.
SOURCE:
The study was led by Frank J.P.M. Huygen, PhD, MD, Erasmus Medical Center, Rotterdam, the Netherlands. It was published online in JAMA Network Open.
LIMITATIONS:
The lack of randomized clinical trials with long-term follow-up data restricted the inclusion of extended outcome assessments. Most included studies showed a high risk for bias. Safety estimates could not be evaluated as adverse events were only reported as procedure-related outcomes, which are not applicable for CMM. Additionally, the network meta-analytical approach, which combined evidence from studies with varying patient eligibility criteria, may have introduced bias because of between-study heterogeneity.
DISCLOSURES:
This study was funded by Medtronic. Huygen reported receiving personal fees from Abbott, Saluda, and Grunenthal outside the submitted work. The four other authors reported receiving funding from Medtronic.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.
Which Breast Cancer Patients Can Skip Postop Radiotherapy?
TOPLINE:
Overall, patients with a high POLAR score derived a significant benefit from adjuvant radiotherapy, while those with a low score did not and might consider forgoing radiotherapy.
METHODOLOGY:
- Radiation therapy after breast-conserving surgery has been shown to reduce the risk for locoregional recurrence and is a standard approach to manage early breast cancer. However, certain patients with low locoregional recurrence risk may not benefit from adjuvant radiation, and no commercially available molecular test has existed to help identify these patients.
- In the current analysis, researchers assessed whether the POLAR biomarker test could reliably predict locoregional recurrence as well as identify patients who would not benefit from radiotherapy.
- The meta-analysis used data from three randomized trials — Scottish Conservation Trial, SweBCG91-RT, and Princess Margaret RT trial — to validate the POLAR biomarker test in patients with low-risk, HR-positive, HER2-negative, node-negative breast cancer.
- The analysis included 623 patients (ages 50-76), of whom 429 (69%) had high POLAR scores and 194 (31%) had low POLAR scores.
- The primary endpoint was the time to locoregional recurrence; secondary endpoints included evaluating POLAR as a prognostic factor for locoregional recurrence in patients who did not receive radiotherapy and the effect of radiotherapy in patients with low and high POLAR scores.
TAKEAWAY:
- Patients with high POLAR scores demonstrated a significant benefit from radiotherapy. The 10-year locoregional recurrence rate was 7% with radiotherapy vs 20% without radiotherapy (hazard ratio [HR], 0.37; P < .001); see the sketch after this list for the absolute benefit these rates imply.
- Patients with low POLAR scores, however, did not experience a significant benefit from radiotherapy. In this group, the 10-year locoregional recurrence rates were similar with and without radiotherapy (7% vs 5%, respectively; HR, 0.92; P = .832), indicating that radiotherapy could potentially be omitted for these patients.
- Among patients who did not receive radiotherapy (n = 309), higher POLAR scores predicted a greater risk for recurrence, suggesting the genomic signature has prognostic value. There was no evidence, however, that POLAR predicted a radiotherapy benefit for distant metastases or mortality.
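A quick way to read the high-score result in absolute terms is the number needed to treat (NNT), computed here directly from the published 10-year recurrence rates; this is back-of-the-envelope arithmetic, not an analysis from the study.

```python
# NNT = 1 / absolute risk reduction, using the 10-year locoregional
# recurrence rates in the high-POLAR-score group: 20% without
# radiotherapy vs 7% with radiotherapy.

def number_needed_to_treat(rate_control: float, rate_treated: float) -> float:
    return 1 / (rate_control - rate_treated)

print(round(number_needed_to_treat(0.20, 0.07)))  # ~8 treated per recurrence avoided
```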
IN PRACTICE:
“This meta-analysis from three randomized controlled trials clearly demonstrates the clinical potential for POLAR to be used in smaller estrogen receptor positive node negative breast cancer patients to identify those women who do not appear to benefit from the use of post-operative adjuvant radiotherapy,” the authors wrote. “This classifier is an important step towards molecularly-stratified targeting of the use of radiotherapy.”
SOURCE:
The study, led by Per Karlsson, MD, PhD, University of Gothenburg, Sweden, was published online in the Journal of the National Cancer Institute.
LIMITATIONS:
One cohort (SweBCG) had limited use of adjuvant systemic therapy, which could affect generalizability. Additionally, low numbers of patients with low POLAR scores in two trials could affect the observed benefit of radiotherapy.
DISCLOSURES:
This study was supported by the Breast Cancer Institute Fund (Edinburgh and Lothians Health Foundation), Canadian Institutes of Health Research, Exact Sciences Corporation, PFS Genomics, Swedish Cancer Society, and Swedish Research Council. One author reported being an employee of and owning stock, stock options, or patents with Exact Sciences. Several authors reported ties with various sources, including Exact Sciences.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.