Older Patients With COPD at Increased Risk for PE-Associated Death
BOSTON — Patients with COPD are at an increased risk for fatal pulmonary embolism (PE) and may require personalized, targeted thromboprophylaxis.
The data suggest that “maybe we should start thinking about if we are admitting a patient with COPD in that specific age group, higher thromboprophylaxis for PE,” said Marwa Oudah, MD, a pulmonary hypertension fellow at the University of Pennsylvania, Philadelphia. She presented her group’s findings in a rapid-fire oral abstract session at the CHEST Annual Meeting.
Known Risk Factor
COPD is a known risk factor for PE. To estimate how the obstructive lung disease may contribute to PE-related deaths among patients of varying ages, Oudah and colleagues drew data on deaths due to an underlying cause of PE from 1999 to 2020 from the Centers for Disease Control and Prevention’s WONDER database.
They stratified the patients into two groups — those with or without COPD — whose data were included in the Multiple Causes of Death dataset, according to age groups ranging from 35 years to over 100 years. The investigators calculated proportional mortality ratios in the non-COPD group and applied these to the COPD-positive group among different age ranges to estimate the observed vs expected number of deaths.
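The observed-vs-expected approach described above can be sketched in a few lines. This is a hypothetical illustration only; the function name and the input numbers are invented for the example and are not taken from the study.

```python
# Hypothetical sketch of the proportional-mortality-ratio (PMR) method
# described above. All numbers below are made up for illustration.

def observed_to_expected(pe_deaths_copd, total_deaths_copd, pmr_non_copd):
    """Observed-to-expected ratio: observed PE deaths in the COPD group
    divided by the count expected if the non-COPD group's proportional
    mortality ratio applied to the COPD group's total deaths."""
    expected = total_deaths_copd * pmr_non_copd
    return pe_deaths_copd / expected

# e.g., if 2% of non-COPD deaths in an age band were PE-related, and
# 1,443 of 50,000 COPD deaths in that band were PE-related:
ratio = observed_to_expected(1_443, 50_000, 0.02)
print(round(ratio, 3))  # → 1.443
```

A ratio above 1 means more PE-related deaths were observed in the COPD group than the non-COPD mortality pattern would predict for that age band.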
A total of 10,434 persons who died from PE and had COPD listed among causes of death were identified. The sample was evenly divided by sex. The peak range of deaths was among those aged 75-84 years.
The authors saw an increase in PE-related mortality among patients with COPD aged 65-85 years (P < .001).
The ratios of observed-to-expected deaths among patients in this age range were “substantially greater than 1,” said Oudah, with patients aged 75-79 years at highest risk for PE-related death, with an observed-to-expected ratio of 1.443.
In contrast, the rate of observed deaths among patients aged 85-89 years was similar to the expected rate, suggesting that the COPD-PE interaction may wane among older patients, she said.
Among patients aged 35-64 years, the risk for death from PE was not significantly higher for any of the 5-year age categories.
The investigators emphasized that “given the observed trend, individualized patient assessments are imperative to optimize preventable measures against PE in the aging COPD population.”
Confounding Comorbidities
In an interview, a pulmonary specialist who was not involved in the study commented that older persons with COPD tend to have multiple comorbidities that may contribute to the risk for PE.
“Older patients have so many comorbidities, and their risk for pulmonary embolism and thromboembolic disease is pretty high, so I’m not surprised that 75- to 79-year-olds are having a higher mortality from PE, but it’s a little difficult to say whether that’s due to COPD,” said Krishna Sundar, MBBS, MD, FCCP, a pulmonary, sleep medicine, and critical care medicine specialist at St. John’s Medical Center in Jackson, Wyoming, who moderated the session.
The authors did not report a study funding source. Oudah and Sundar reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM CHEST 2024
AF Burden Increases Around Time of COPD Hospitalizations
BOSTON — Patients with COPD who have exacerbations requiring hospitalization should be monitored for cardiac arrhythmias, investigators said.
This recommendation is based on results of a study of medical records showing that among more than 20,000 hospitalizations for patients with COPD without concurrent heart failure (HF), 40% of patients had at least 6 minutes of daily atrial fibrillation (AF) burden, and nearly half of these patients had at least an hour of daily AF burden. Patients with COPD and concurrent HF had similar daily AF burdens, reported Trent Fischer, MD, MS, senior principal scientist at Medtronic in Minneapolis.
“We can conclude that AF burden increases in the weeks after a hospitalization for COPD if they don’t have a concurrent diagnosis of heart failure. Also, having concurrent heart failure increases the risk of atrial fibrillation and increases the atrial fibrillation burden around the time of COPD hospitalization,” he said in a rapid-fire oral abstract session at the CHEST Annual Meeting.
The findings indicated a need for increased vigilance for AF around the time of a serious COPD exacerbation and may explain at least some of the increased risks for stroke observed in patients who are hospitalized for COPD exacerbations, he said.
Retrospective Study
Fischer and colleagues drew data from 2007 through 2021 on patients with implantable cardioverter defibrillators, cardiac resynchronization therapy devices, pacemakers, and implantable cardiac monitors, using the Optum de-identified electronic health record dataset linked with Medtronic’s CareLink database to conduct a retrospective analysis.
They looked at admissions for COPD linked to available device diagnostic parameters between 30 days prior to and 60 days after admission for COPD.
They identified a total of 20,056 COPD hospitalizations for patients with concurrent HF and 3877 for those without HF.
Among patients with HF, 43% had a daily AF burden of at least 6 minutes, and 22% had at least 1 hour of irregular rhythms. Among patients without HF, 40% had at least 6 minutes of irregular rhythms daily, and 18% had at least 1 hour.
Among patients with HF, the daily average AF burden increased from a baseline of 158 min/d 30 days before an admission to 170 min/d at admission, returning to baseline by 20 days after hospitalization.
For patients without HF, the AF burden increased from 107 min/d at baseline to 113 min/d during hospitalization and returned to baseline by 20 days after hospitalization.
Confounding Factor?
In the Q&A, session moderator Krishna Sundar, MBBS, MD, FCCP, a pulmonary, sleep medicine, and critical care medicine specialist at St. John’s Medical Center in Jackson, Wyoming, said that when patients with HF get admitted for COPD exacerbations, their HF typically worsens and asked Dr. Fischer how he could tell the difference.
“I know there’s a lot of interaction between heart failure and COPD. They’re well-known comorbidities, and the exacerbation of one can bring on worsening of the other. At least with this database, we can’t really tease out any sort of differences,” Dr. Fischer replied.
“I think that a diagnosis of COPD exacerbation is pretty well laid out, but it’s sometimes difficult to separate worsening of heart failure in these patients, and often these patients get treated for both problems. It’s clear that it’s the heart failure patients who are having more atrial fibrillation episodes, which is not surprising, but the question is how much is the COPD exacerbation contributing to the atrial fibrillation?” said Dr. Sundar.
The study was supported by Medtronic. Dr. Fischer is employed by the company. Dr. Sundar reported no relevant financial relationships.
A version of this article appeared on Medscape.com.
FROM CHEST 2024
Novel Intervention Slows Cognitive Decline in At-Risk Adults
Cognitive remediation combined with transcranial direct current stimulation (tDCS) slows cognitive decline in older adults with remitted major depressive disorder (rMDD), mild cognitive impairment (MCI), or both, new research suggests.
The cognitive remediation intervention included a series of progressively difficult computer-based and facilitator-monitored mental exercises designed to sharpen cognitive function.
Researchers found that using cognitive remediation with tDCS slowed decline in executive function and verbal memory more than other cognitive functions. The effect was stronger among people with rMDD versus those with MCI and in those at low genetic risk for Alzheimer’s disease.
“We have developed a novel intervention, combining two interventions that if used separately have a weak effect but together have substantial and clinically meaningful effect of slowing the progression of cognitive decline,” said study author Benoit H. Mulsant, MD, chair of the Department of Psychiatry, University of Toronto, Ontario, Canada, and senior scientist at the Center for Addiction and Mental Health, also in Toronto.
The findings were published online in JAMA Psychiatry.
High-Risk Group
Research shows that older adults with MDD or MCI are at high risk for cognitive decline and dementia. Evidence also suggests that depression in early or mid-life significantly increases the risk for dementia in late life, even if the depression has been in remission for decades.
A potential mechanism underlying this increased risk for dementia could be impaired cortical plasticity, or the ability of the brain to compensate for damage.
The PACt-MD trial included 375 older adults with rMDD, MCI, or both (mean age, 72 years; 62% women) at five academic hospitals in Toronto.
Participants received either cognitive remediation plus tDCS or sham intervention 5 days per week for 8 weeks (acute phase), followed by 5-day “boosters” every 6 months.
tDCS was administered by trained personnel and involved active stimulation for 30 minutes at the beginning of each cognitive remediation group session. The intervention targets the prefrontal cortex, a critical region for cognitive compensation in normal cognitive aging.
The sham group received a weakened version of cognitive remediation, with exercises that did not get progressively more difficult. For the sham stimulation, the current flowed at full intensity for only 54 seconds before and after 30-second ramp-up and ramp-down phases, to create a blinding effect, the authors noted.
A geriatric psychiatrist followed all participants throughout the study, conducting assessments at baseline, month 2, and yearly for 3-7 years (mean follow-up, 48.3 months).
Participants’ depressive symptoms were evaluated at baseline and at all follow-ups, and participants underwent neuropsychological testing to assess six cognitive domains: processing speed, working memory, executive functioning, verbal memory, visual memory, and language.
To get a norm for the cognitive tests, researchers recruited a comparator group of 75 subjects similar in age, gender, and years of education, with no neuropsychiatric disorder or cognitive impairment. They completed the same assessments but not the intervention.
Study participants and assessors were blinded to treatment assignment.
Slower Cognitive Decline
Participants in the intervention group had a significantly slower decline in cognitive function, compared with those in the sham group (adjusted z score difference [active – sham] at month 60, 0.21; P = .006). This is equivalent to slowing cognitive decline by about 4 years, researchers reported. The intervention also showed a positive effect on executive function and verbal memory.
“If I can push dementia from 85 to 89 years and you die at 86, in practice, I have prevented you from ever developing dementia,” Mulsant said.
The efficacy of cognitive remediation plus tDCS in rMDD could be tied to enhanced neuroplasticity, said Mulsant.
The treatment worked well in people with a history of depression, regardless of MCI status, but was not as effective for people with just MCI, researchers noted. The intervention also did not work as well among people at genetic risk for Alzheimer’s disease.
“We don’t believe we have discovered an intervention to prevent dementia in people who are at high risk for Alzheimer disease, but we have discovered an intervention that could prevent dementia in people who have a history of depression,” said Mulsant.
These results suggest the pathways to dementia among people with MCI and rMDD are different, he added.
Because previous research showed either treatment alone demonstrated little efficacy, researchers said the new results indicate that there may be a synergistic effect of combining the two.
The ideal amount of treatment and optimal age for initiation still need to be determined, said Mulsant. The study did not include a comparator group without rMDD or MCI, so the observed cognitive benefits might be specific to people with these high-risk conditions. Another study limitation is lack of diversity in terms of ethnicity, race, and education.
Promising, Important Findings
Commenting on the research, Badr Ratnakaran, MD, assistant professor and division director of geriatric psychiatry at Carilion Clinic–Virginia Tech Carilion School of Medicine, Roanoke, said the results are promising and important because there are so few treatment options for the increasing number of older patients with depression and dementia.
The side-effect profile of the combined treatment is better than that of many pharmacologic treatments, Ratnakaran noted. As more research like this comes out, Ratnakaran predicts that cognitive remediation and tDCS will become more readily available.
“This is telling us that the field of psychiatry, and also dementia, is progressing beyond your usual pharmacotherapy treatments,” said Ratnakaran, who also is chair of the American Psychiatric Association’s Council on Geriatric Psychiatry.
The study received support from the Canada Brain Research Fund of Brain Canada, Health Canada, the Chagnon Family, and the Centre for Addiction and Mental Health Discovery Fund. Mulsant reported holding and receiving support from the Labatt Family Chair in Biology of Depression in Late-Life Adults at the University of Toronto; being a member of the Center for Addiction and Mental Health Board of Trustees; research support from Brain Canada, Canadian Institutes of Health Research, Center for Addiction and Mental Health Foundation, Patient-Centered Outcomes Research Institute, and National Institutes of Health; and nonfinancial support from Capital Solution Design and HappyNeuron. Ratnakaran reported no relevant conflicts.
A version of this article appeared on Medscape.com.
The study received support from the Canada Brain Research Fund of Brain Canada, Health Canada, the Chagnon Family, and the Centre for Addiction and Mental Health Discovery Fund. Mulsant reported holding and receiving support from the Labatt Family Chair in Biology of Depression in Late-Life Adults at the University of Toronto; being a member of the Center for Addiction and Mental Health Board of Trustees; research support from Brain Canada, Canadian Institutes of Health Research, Center for Addiction and Mental Health Foundation, Patient-Centered Outcomes Research Institute, and National Institutes of Health; and nonfinancial support from Capital Solution Design and HappyNeuron. Ratnakaran reported no relevant conflicts.
A version of this article appeared on Medscape.com.
Cognitive remediation combined with transcranial direct current stimulation (tDCS) slowed cognitive decline in older adults with remitted major depressive disorder (rMDD), mild cognitive impairment (MCI), or both, new research suggests.
The cognitive remediation intervention included a series of progressively difficult computer-based and facilitator-monitored mental exercises designed to sharpen cognitive function.
Researchers found that cognitive remediation with tDCS slowed decline in executive function and verbal memory more than in other cognitive domains. The effect was stronger among people with rMDD than among those with MCI, and in those at low genetic risk for Alzheimer’s disease.
“We have developed a novel intervention, combining two interventions that if used separately have a weak effect but together have a substantial and clinically meaningful effect of slowing the progression of cognitive decline,” said study author Benoit H. Mulsant, MD, chair of the Department of Psychiatry, University of Toronto, Ontario, Canada, and senior scientist at the Centre for Addiction and Mental Health, also in Toronto.
The findings were published online in JAMA Psychiatry.
High-Risk Group
Research shows that older adults with MDD or MCI are at high risk for cognitive decline and dementia. Evidence also suggests that depression in early or mid-life significantly increases the risk for dementia in late life, even if the depression has been in remission for decades.
A potential mechanism underlying this increased risk for dementia could be impaired cortical plasticity, or the ability of the brain to compensate for damage.
The PACt-MD trial included 375 older adults with rMDD, MCI, or both (mean age, 72 years; 62% women) at five academic hospitals in Toronto.
Participants received either cognitive remediation plus tDCS or sham intervention 5 days per week for 8 weeks (acute phase), followed by 5-day “boosters” every 6 months.
tDCS was administered by trained personnel and involved active stimulation for 30 minutes at the beginning of each cognitive remediation group session. The intervention targeted the prefrontal cortex, a critical region for cognitive compensation in normal cognitive aging.
The sham group received a weakened version of cognitive remediation, with exercises that did not get progressively more difficult. For the sham stimulation, the current flowed at full intensity for only 54 seconds before and after 30-second ramp-up and ramp-down phases, to create a blinding effect, the authors noted.
A geriatric psychiatrist followed all participants throughout the study, conducting assessments at baseline, month 2, and yearly for 3-7 years (mean follow-up, 48.3 months).
Participants’ depressive symptoms were evaluated at baseline and at all follow-ups, and participants underwent neuropsychological testing to assess six cognitive domains: processing speed, working memory, executive functioning, verbal memory, visual memory, and language.
To establish norms for the cognitive tests, researchers recruited a comparator group of 75 participants similar in age, gender, and years of education, with no neuropsychiatric disorder or cognitive impairment. This group completed the same assessments but not the intervention.
Study participants and assessors were blinded to treatment assignment.
Slower Cognitive Decline
Participants in the intervention group had a significantly slower decline in cognitive function, compared with those in the sham group (adjusted z score difference [active – sham] at month 60, 0.21; P = .006). This is equivalent to slowing cognitive decline by about 4 years, researchers reported. The intervention also showed a positive effect on executive function and verbal memory.
“If I can push dementia from 85 to 89 years and you die at 86, in practice, I have prevented you from ever developing dementia,” Mulsant said.
The efficacy of cognitive remediation plus tDCS in rMDD could be tied to enhanced neuroplasticity, said Mulsant.
The treatment worked well in people with a history of depression, regardless of MCI status, but was not as effective for people with just MCI, researchers noted. The intervention also did not work as well among people at genetic risk for Alzheimer’s disease.
“We don’t believe we have discovered an intervention to prevent dementia in people who are at high risk for Alzheimer disease, but we have discovered an intervention that could prevent dementia in people who have a history of depression,” said Mulsant.
These results suggest the pathways to dementia among people with MCI and rMDD are different, he added.
Because previous research showed either treatment alone demonstrated little efficacy, researchers said the new results indicate that there may be a synergistic effect of combining the two.
The ideal amount of treatment and optimal age for initiation still need to be determined, said Mulsant. The study did not include a comparator group without rMDD or MCI, so the observed cognitive benefits might be specific to people with these high-risk conditions. Another study limitation is lack of diversity in terms of ethnicity, race, and education.
Promising, Important Findings
Commenting on the research, Badr Ratnakaran, MD, assistant professor and division director of geriatric psychiatry at Carilion Clinic–Virginia Tech Carilion School of Medicine, Roanoke, said the results are promising and important because there are so few treatment options for the increasing number of older patients with depression and dementia.
The side-effect profile of the combined treatment is better than that of many pharmacologic treatments, Ratnakaran noted. As more research like this comes out, Ratnakaran predicts that cognitive remediation and tDCS will become more readily available.
“This is telling us that the field of psychiatry, and also dementia, is progressing beyond your usual pharmacotherapy treatments,” said Ratnakaran, who also is chair of the American Psychiatric Association’s Council on Geriatric Psychiatry.
The study received support from the Canada Brain Research Fund of Brain Canada, Health Canada, the Chagnon Family, and the Centre for Addiction and Mental Health Discovery Fund. Mulsant reported holding and receiving support from the Labatt Family Chair in Biology of Depression in Late-Life Adults at the University of Toronto; being a member of the Centre for Addiction and Mental Health Board of Trustees; research support from Brain Canada, Canadian Institutes of Health Research, Centre for Addiction and Mental Health Foundation, Patient-Centered Outcomes Research Institute, and National Institutes of Health; and nonfinancial support from Capital Solution Design and HappyNeuron. Ratnakaran reported no relevant conflicts.
A version of this article appeared on Medscape.com.
FROM JAMA PSYCHIATRY
A Finger-Prick Test for Alzheimer’s Disease?
In a pilot study, researchers found a good correlation of p-tau217 levels from blood obtained via standard venous sampling and from a single finger prick.
“We see the potential that capillary p-tau217 from dried blood spots could overcome the limitations of standard venous collection of being invasive, dependent on centrifuges and ultra-low temperature freezers, and also requiring less volume than standard plasma analysis,” said lead investigator Hanna Huber, PhD, Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, University of Gothenburg, Sweden.
The findings were presented at the 17th Clinical Trials on Alzheimer’s Disease (CTAD) conference.
Strong Link Between Venous and Capillary Samples
p-tau217 has emerged as the most effective blood test to identify Alzheimer’s disease. However, traditional venous blood sampling requires certain infrastructure and immediate processing. Increased and simplified access to this blood biomarker could be crucial for early diagnosis, proper patient management, and prompt initiation of disease-modifying treatments.
The DROP-AD project is investigating the diagnostic performance of finger-prick collection to accurately measure p-tau217. In the current study, the research team obtained paired venous blood and capillary blood samples from 206 adults (mean age, 71.8 years; 59% women), with or without cognitive impairment, from five European centers. A subset of participants provided a second finger-prick sample collected without any supervision.
The capillary blood samples were obtained via a single finger prick, and then single blood drops were applied to a dried plasma spot (DPS) card, which was then shipped to a lab (without temperature control or cooling) for p-tau217 measurement. Cerebrospinal fluid biomarkers were available for a subset of individuals.
Throughout the entire study population, there was a “very convincing correlation” between p-tau217 levels from capillary DPS and venous plasma, Huber told conference attendees.
Additionally, capillary DPS p-tau217 levels were able to discriminate amyloid-positive from amyloid-negative individuals, with levels of this biomarker increasing in a stepwise fashion, “from cognitively unimpaired individuals to individuals with mild cognitive impairment and, finally, to dementia patients,” Huber said.
Of note, capillary p-tau217 levels from DPS samples that were collected by research staff did not differ from unsupervised self-collected samples.
What about the stability of the samples? Capillary DPS p-tau217 is “stable over 2 weeks at room temperature,” Huber said.
Ready for Prime Time?
Preliminary data from the DROP-AD project highlight the potential of using finger-prick blood collection to identify neurofilament light (NfL) and glial fibrillary acidic protein (GFAP), two other Alzheimer’s disease biomarkers.
“We think that capillary p-tau217, but also other biomarkers, could be a widely accessible and cheap alternative for clinical practice and clinical trials in individuals with cognitive decline if the results are confirmed in longitudinal and home-sampling cohorts,” Huber concluded.
“Measuring biomarkers by a simple finger prick could facilitate regular and autonomous sampling at home, which would be particularly useful in remote and rural settings,” she noted.
The findings in this study confirm and extend earlier findings that the study team reported last year at the Alzheimer’s Association International Conference (AAIC).
“The data shared at CTAD 2024, along with the related material previously presented at AAIC 2023, reporting on a ‘finger prick’ blood test approach is interesting and emerging work but not yet ready for clinical use,” said Rebecca M. Edelmayer, PhD, Alzheimer’s Association vice president of scientific engagement.
“That said, the idea of a highly accessible and scalable tool that can aid in easier and more equitable diagnosis would be welcomed by researchers, clinicians, and individuals and families affected by Alzheimer’s disease and all other dementias,” Edelmayer said.
“This finger-prick blood testing technology for Alzheimer’s biomarkers still has to be validated more broadly, but it is very promising. Advancements in technology and practice demonstrate the simplicity, transportability, and diagnostic value of blood-based biomarkers for Alzheimer’s,” she added.
The Alzheimer’s Association is currently conducting a systematic review of the evidence and preparing clinical practice guidelines on blood-based biomarker tests for specialized healthcare settings, with publications, clinical resources, and tools anticipated in 2025, Edelmayer noted.
The study had no commercial funding. Huber and Edelmayer report no relevant conflicts of interest.
A version of this article appeared on Medscape.com.
FROM CTAD 2024
Minor Progress in Gender Pay Equity, But a Big Gap Persists
Despite some recent progress in compensation equity, women in medicine continue to be paid significantly lower salaries than men.
According to the Female Compensation Report 2024 by Medscape, male doctors of any kind earned an average salary of about $400,000, whereas female doctors earned approximately $309,000 — a 29% gap.
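The 29% figure corresponds to expressing men’s average salary relative to women’s. A quick check with the rounded figures above (a sketch for illustration; the report’s exact, unrounded averages may differ slightly):

```python
# Rounded average salaries from the Medscape report
men = 400_000
women = 309_000

# Gap as how much more men earn, relative to women's average (the 29% figure)
gap_vs_women = (men - women) / women * 100

# For contrast: how much less women earn, relative to men's average
gap_vs_men = (men - women) / men * 100

print(round(gap_vs_women))  # 29
print(round(gap_vs_men))    # 23
```

The choice of baseline matters: the same salary difference reads as a 29% gap against women’s pay but only about 23% against men’s.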
The report analyzed survey data from 7000 practicing physicians recruited over a 4-month period starting in October 2023. Roughly 60% of the respondents were women, representing more than 29 specialties.
In the 2022 report, the pay gap between the genders was 32%. But some women in the field argued that substantial headway is still needed.
“You can try and pick apart the data, but I’d say we’re not really making progress,” said Susan T. Hingle, MD, an internist in Illinois and president of the American Medical Women’s Association. “A decline by a couple of percentage points is not significantly addressing this pay gap that over a lifetime is huge, can be millions of dollars.”
The gender gap was narrower among primary care physicians (PCPs) than among medical specialists. Female PCPs earned around $253,000 per year, whereas male PCPs earned about $295,000 per year. Hingle suggested that female PCPs may enjoy more pay equity because health systems have a harder time filling these positions.
On the other hand, the gap for specialists rose from 27% in 2022 to 31% in 2023. Differences in how aggressively women and men negotiate compensation packages may play a role, said Hingle.
“Taking negotiation out of the equation would be progress to me,” said Hingle.
Pay disparity did not appear to be the result of time spent on the job — female doctors reported an average of 49 work hours per week, whereas their male counterparts reported 50 work hours per week.
Meanwhile, the pay gap progressively worsened over time. Among doctors aged 28-34 years, men earned an average of $53,000 more than women. By ages 46-49, men earned an average of $157,000 more than women.
“I had to take my employer to court to get equal compensation, sad as it is to say,” said a hospitalist in North Carolina.
Nearly 60% of women surveyed felt they were not being paid fairly for their efforts, up from less than half reported in Medscape’s 2021 report. Hingle said that this figure may not only reflect sentiments about the compensation gap, but also less support on the job, including fewer physician assistants (PAs), nurses, and administrative staff.
“At my job, I do the work of multiple people,” said a survey respondent. “Junior resident, senior resident, social worker, nurse practitioner, PA — as well as try to be a teacher, researcher, [and] an excellent doctor and have the time to make patients feel as if they are not in a rush.”
Roughly 30% of women physicians said they would not choose to go into medicine again if given the chance, compared with 26% of male physicians.
“Gender inequities in our profession have a direct impact,” said Shikha Jain, MD, an oncologist in Chicago and founder of the Women in Medicine nonprofit. “I think women in general don’t feel valued in the care they’re providing.”
Jain cited bullying, harassment, and fewer opportunities for leadership and recognition as factors beyond pay that affect female physicians’ feelings of being valued.
A version of this article first appeared on Medscape.com.
Despite some recent progress in compensation equity, women in medicine continue to be paid significantly lower salaries than men.
According to the Female Compensation Report 2024 by Medscape, male doctors of any kind earned an average salary of about $400,000, whereas female doctors earned approximately $309,000 — a 29% gap.
The report analyzed survey data from 7000 practicing physicians who were recruited over a 4-month period starting in October 2023. The respondents comprised roughly 60% women representing over 29 specialties.
In the 2022 report, the pay gap between the genders was 32%. But some women in the field argued substantial headway is still needed.
“You can try and pick apart the data, but I’d say we’re not really making progress,” said Susan T. Hingle, MD, an internist in Illinois and president of the American Medical Women’s Association. “A decline by a couple of percentage points is not significantly addressing this pay gap that over a lifetime is huge, can be millions of dollars.”
The gender gap was narrower among female primary care physicians (PCPs) vs medical specialists. Female PCPs earned around $253,000 per year, whereas male PCPs earned about $295,000 per year. Hingle suggested that female PCPs may enjoy more pay equity because health systems have a harder time filling these positions.
On the other hand, the gap for specialists rose from 27% in 2022 to 31% in 2023. Differences in how aggressively women and men negotiate compensation packages may play a role, said Hingle.
“Taking negotiation out of the equation would be progress to me,” said Hingle.
Pay disparity did not appear to be the result of time spent on the job — female doctors reported an average of 49 work hours per week, whereas their male counterparts reported 50 work hours per week.
Meanwhile, the pay gap progressively worsened over time. Among doctors aged 28-34 years, men earned an average of $53,000 more than women. By ages 46-49, men earned an average of $157,000 more than women.
“I had to take my employer to court to get equal compensation, sad as it is to say,” said a hospitalist in North Carolina.
Nearly 60% of women surveyed felt they were not being paid fairly for their efforts, up from less than half reported in Medscape’s 2021 report. Hingle said that this figure may not only reflect sentiments about the compensation gap, but also less support on the job, including fewer physician assistants (PAs), nurses, and administrative staff.
“At my job, I do the work of multiple people,” said a survey respondent. “Junior resident, senior resident, social worker, nurse practitioner, PA — as well as try to be a teacher, researcher, [and] an excellent doctor and have the time to make patients feel as if they are not in a rush.”
Roughly 30% of women physicians said they would not choose to go into medicine again if given the chance compared with 26% of male physicians.
“Gender inequities in our profession have a direct impact,” said Shikha Jain, MD, an oncologist in Chicago and founder of the Women in Medicine nonprofit. “I think women in general don’t feel valued in the care they’re providing.”
Jain cited bullying, harassment, and fewer opportunities for leadership and recognition as factors beyond pay that affect female physicians’ feelings of being valued.
A version of this article first appeared on Medscape.com.
Despite some recent progress in compensation equity, women in medicine continue to be paid significantly lower salaries than men.
According to the Female Compensation Report 2024 by Medscape, male doctors of any kind earned an average salary of about $400,000, whereas female doctors earned approximately $309,000 — a 29% gap.
The report analyzed survey data from 7000 practicing physicians who were recruited over a 4-month period starting in October 2023. The respondents comprised roughly 60% women representing over 29 specialties.
In the 2022 report, the pay gap between the genders was 32%. But some women in the field argued substantial headway is still needed.
“You can try and pick apart the data, but I’d say we’re not really making progress,” said Susan T. Hingle, MD, an internist in Illinois and president of the American Medical Women’s Association. “A decline by a couple of percentage points is not significantly addressing this pay gap that over a lifetime is huge, can be millions of dollars.”
The gender gap was narrower among female primary care physicians (PCPs) vs medical specialists. Female PCPs earned around $253,000 per year, whereas male PCPs earned about $295,000 per year. Hingle suggested that female PCPs may enjoy more pay equity because health systems have a harder time filling these positions.
On the other hand, the gap for specialists rose from 27% in 2022 to 31% in 2023. Differences in how aggressively women and men negotiate compensation packages may play a role, said Hingle.
“Taking negotiation out of the equation would be progress to me,” said Hingle.
Pay disparity did not appear to be the result of time spent on the job — female doctors reported an average of 49 work hours per week, whereas their male counterparts reported 50 work hours per week.
Meanwhile, the pay gap progressively worsened over time. Among doctors aged 28-34 years, men earned an average of $53,000 more than women. By ages 46-49, men earned an average of $157,000 more than women.
“I had to take my employer to court to get equal compensation, sad as it is to say,” said a hospitalist in North Carolina.
Nearly 60% of women surveyed felt they were not being paid fairly for their efforts, up from less than half reported in Medscape’s 2021 report. Hingle said that this figure may not only reflect sentiments about the compensation gap, but also less support on the job, including fewer physician assistants (PAs), nurses, and administrative staff.
“At my job, I do the work of multiple people,” said a survey respondent. “Junior resident, senior resident, social worker, nurse practitioner, PA — as well as try to be a teacher, researcher, [and] an excellent doctor and have the time to make patients feel as if they are not in a rush.”
Roughly 30% of women physicians said they would not choose to go into medicine again if given the chance, compared with 26% of male physicians.
“Gender inequities in our profession have a direct impact,” said Shikha Jain, MD, an oncologist in Chicago and founder of the Women in Medicine nonprofit. “I think women in general don’t feel valued in the care they’re providing.”
Jain cited bullying, harassment, and fewer opportunities for leadership and recognition as factors beyond pay that affect female physicians’ feelings of being valued.
A version of this article first appeared on Medscape.com.
Weight Loss Surgery, Obesity Drugs Achieve Similar Results but Have Different Safety Profiles
PHILADELPHIA — Bariatric surgery, particularly Roux-en-Y gastric bypass (RYGB), produces the greatest weight loss in patients with obesity, according to a meta-analysis comparing the efficacy and safety of the different treatment options.
However, tirzepatide, a long-acting glucose-dependent insulinotropic polypeptide (GIP) receptor agonist and glucagon-like peptide 1 receptor agonist (GLP-1 RA), produces comparable weight loss and has a favorable safety profile, reported principal investigator Jena Velji-Ibrahim, MD, MSc, from Prisma Health–Upstate/University of South Carolina School of Medicine in Greenville.
In addition, there was “no significant difference in percentage total body weight loss between tirzepatide when comparing it to one-anastomosis gastric bypass (OAGB), as well as laparoscopic sleeve gastrectomy,” she said.
All 11 interventions studied exerted weight loss effects, and side-effect profiles were also deemed largely favorable, particularly for endoscopic interventions, she added.
“When we compare bariatric surgery to bariatric endoscopy, endoscopic sleeve gastroplasty and transpyloric shuttle offer a minimally invasive alternative with good weight loss outcomes and fewer adverse events,” she said.
Velji-Ibrahim presented the findings at the annual meeting of the American College of Gastroenterology (ACG).
Comparing Weight Loss Interventions
Many of the studies comparing weight loss interventions to date have been limited by relatively small sample sizes, observational designs, and inconsistent results. This prompted Velji-Ibrahim and her colleagues to conduct what they believe to be the first-of-its-kind meta-analysis on this topic.
They began by conducting a systematic search of the literature to identify randomized controlled trials (RCTs) that compared the efficacy of Food and Drug Administration–approved bariatric surgeries, bariatric endoscopies, and medications — against each other or with placebo — in adults with a body mass index of 25-45, with or without concurrent type 2 diabetes.
A network meta-analysis was then performed to assess the various interventions’ impact on percentage total weight loss and side-effect profiles. P-scores were calculated to rank the treatments and identify the preferred interventions. The duration of therapy was 52 weeks.
In total, 34 eligible RCTs with 15,660 patients were included. Overall, the RCTs analyzed 11 weight loss treatments, including bariatric surgeries (four studies), bariatric endoscopies (three studies), and medications (four studies).
Specifically, the bariatric surgeries included RYGB, laparoscopic sleeve gastrectomy, OAGB, and laparoscopic adjustable gastric banding; bariatric endoscopies included endoscopic sleeve gastroplasty, transpyloric shuttle, and intragastric balloon; and medications included tirzepatide, semaglutide, and liraglutide.
Although all interventions were associated with reductions in percentage total weight loss compared with placebo, RYGB led to the greatest reductions (19.29%) and was ranked as the first preferred treatment (97% probability). It was followed in the rankings by OAGB, tirzepatide 15 mg, laparoscopic sleeve gastrectomy, and semaglutide 2.4 mg.
Tirzepatide 15 mg had a slightly lower percentage total weight loss (15.18%) but a favorable safety profile. There was no significant difference in percentage total weight loss between tirzepatide 15 mg and OAGB (mean difference, 2.97%) or laparoscopic sleeve gastrectomy (mean difference, 0.43%).
There was also no significant difference in percentage total weight loss between semaglutide 2.4 mg, compared with endoscopic sleeve gastroplasty and transpyloric shuttle.
Endoscopic sleeve, transpyloric shuttle, and intragastric balloon all resulted in weight loss > 5%.
When compared with bariatric surgery, “endoscopic interventions had a better side-effect profile, with no increased odds of mortality and intensive care needs,” Velji-Ibrahim said.
When it came to the medications, “the most common side effects were gastrointestinal in nature, which included nausea, vomiting, diarrhea, and constipation,” she said.
Combining, Rather Than Comparing, Therapies
Following the presentation, session co-moderator Shivangi T. Kothari, MD, assistant professor of medicine and associate director of endoscopy at the University of Rochester Medical Center in New York, shared her thoughts on what the future of obesity management research might look like.
It’s not just going to be about percentage total weight loss, she said, but about how well the effect is sustained following the intervention.
And we might move “away from comparing one modality to another” and instead study combination therapies, “which would be ideal,” said Kothari.
This was the focus of another meta-analysis presented at ACG 2024, in which Nihal Ijaz I. Khan, MD, and colleagues compared the efficacy of endoscopic bariatric treatment alone vs its combined use with GLP-1 RAs.
The researchers identified three retrospective studies with 266 patients, of whom 143 underwent endoscopic bariatric treatment alone (either endoscopic sleeve gastroplasty or intragastric balloon) and 123 had it combined with GLP-1 RAs, specifically liraglutide.
They reported that superior absolute weight loss was achieved in the group of patients receiving GLP-1 RAs in combination with endoscopic bariatric treatment. The standardized mean difference in body weight loss at treatment follow-up was 0.61 (P <.01).
“Further studies are required to evaluate the safety and adverse events comparing these two treatment modalities and to discover differences between comparing the two endoscopic options to various GLP-1 receptor agonists,” Khan noted.
Neither study had specific funding. Velji-Ibrahim and Khan reported no relevant financial relationships. Kothari reported serving as a consultant for Boston Scientific and Olympus, as well as serving as an advisory committee/board member for Castle Biosciences.
A version of this article first appeared on Medscape.com.
FROM ACG 2024
Cannabis Often Used as a Substitute for Traditional Medications
Nearly two thirds of patients with rheumatic conditions switched to medical cannabis from medications such as nonsteroidal anti-inflammatory drugs (NSAIDs) and opioids, with the substitution being associated with greater self-reported improvement in symptoms than nonsubstitution.
METHODOLOGY:
- Researchers conducted a secondary analysis of a cross-sectional survey to investigate the prevalence of switching to medical cannabis from traditional medications in patients with rheumatic conditions from the United States and Canada.
- The survey included questions on current and past medical cannabis use, sociodemographic characteristics, medication taken and substituted, substance use, and patient-reported outcomes.
- Of the 1727 patients who completed the survey, 763 patients (mean age, 59 years; 84.1% women) reported current use of cannabis and were included in this analysis.
- Participants were asked if they had substituted any medications with medical cannabis and were subgrouped accordingly.
- They also reported any changes in symptoms after initiating cannabis, the current and anticipated duration of medical cannabis use, methods of ingestion, cannabinoid content, and frequency of use.
TAKEAWAY:
- Overall, 62.5% reported substituting medical cannabis for certain medications, including NSAIDs (54.7%), opioids (48.6%), sleep aids (29.6%), muscle relaxants (25.2%), benzodiazepines (15.5%), and gabapentinoids (10.5%).
- The most common reasons given for substituting medical cannabis were fewer side effects (39%), better symptom control (27%), and fewer adverse effects (12%).
- Participants who substituted medical cannabis reported significant improvements in symptoms such as pain, sleep, joint stiffness, muscle spasms, and inflammation, and in overall health, compared with those who did not substitute it for medications.
- The substitution group was more likely to use inhalation methods (smoking and vaporizing) than the nonsubstitution group; they also used medical cannabis more frequently and preferred products containing delta-9-tetrahydrocannabinol.
IN PRACTICE:
“The changing legal status of cannabis has allowed a greater openness with more people willing to try cannabis for symptom relief. These encouraging results of medication reduction and favorable effect of [medical cannabis] require confirmation with more rigorous methods. At this time, survey information may be seen as a signal for effect, rather than sound evidence that could be applicable to those with musculoskeletal complaints in general,” the authors wrote.
SOURCE:
The study was led by Kevin F. Boehnke, PhD, University of Michigan Medical School, Ann Arbor, and was published online in ACR Open Rheumatology.
LIMITATIONS:
The cross-sectional nature of the study limited the determination of causality between medical cannabis use and symptom improvement. Moreover, the anonymous and self-reported nature of the survey at a single timepoint may have introduced recall bias. The sample predominantly consisted of older, White females, which may have limited the generalizability of the findings to other demographic groups.
DISCLOSURES:
Some authors received grant support from the National Institute on Drug Abuse and the National Institute of Arthritis and Musculoskeletal and Skin Diseases. Some others received payments, honoraria, grant funding, consulting fees, and travel support, and reported other ties with pharmaceutical companies and other institutions.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Primary Care Physicians Underutilize Nonantibiotic Prophylaxis for Recurrent UTIs
While primary care physicians are generally comfortable prescribing vaginal estrogen therapy for recurrent urinary tract infections (UTIs), other nonantibiotic prophylactic options remain significantly underutilized, according to new research that highlights a crucial gap in antibiotic stewardship practices among primary care physicians.
UTIs are the most common bacterial infection in women of all ages, and an estimated 30%-40% of women will experience reinfection within 6 months. Recurrent UTI is typically defined as two or more infections within 6 months or a greater number of infections within a year, according to the American Academy of Family Physicians.
Antibiotics are the first line of defense in preventing and treating recurrent UTIs, but repeated and prolonged use could lead to antibiotic resistance.
Researchers at the University of North Carolina surveyed 40 primary care physicians at one academic medical center and found that 96% of primary care physicians prescribe vaginal estrogen therapy for recurrent UTI prevention, with 58% doing so “often.” Estrogen deficiency and urinary retention are strong contributors to infection.
However, 78% of physicians surveyed said they had never prescribed methenamine hippurate, and 85% said they had never prescribed D-mannose.
Physicians with specialized training in menopausal care felt more at ease prescribing vaginal estrogen therapy to patients with complex medical histories, such as those with a family history of breast cancer or endometrial cancer. This suggests that enhanced education could play a vital role in increasing comfort levels among general practitioners, said Lauren Tholemeier, MD, a urogynecology fellow at the University of North Carolina at Chapel Hill.
“Primary care physicians are the front line of managing patients with recurrent UTI,” said Tholemeier.
“There’s an opportunity for further education on, and even awareness of, methenamine hippurate and D-mannose as an option that has data behind it and can be included as a tool” for patient care, she said.
Indeed, physicians who saw six or more patients with recurrent UTI each month were more likely to prescribe methenamine hippurate, the study found, suggesting that familiarity with recurrent UTI cases can lead to greater confidence in employing alternative prophylactic strategies.
Tholemeier presented her research at the American Urogynecologic Society’s PFD Week in Washington, DC.
Expanding physician knowledge and utilization of all available nonantibiotic therapies can help physicians better care for patients who don’t necessarily have access to a subspecialist, Tholemeier said.
According to the American Urogynecologic Society’s best practice guidelines, there is limited evidence supporting routine use of D-mannose to prevent recurrent UTI. Methenamine hippurate, however, may be effective for short-term UTI prevention, according to the group.
By broadening the use of vaginal estrogen therapy, methenamine hippurate, and D-mannose, primary care physicians can help reduce reliance on antibiotics for recurrent UTI prevention — a practice that may contribute to growing antibiotic resistance, said Tholemeier.
“The end goal isn’t going to be to say that we should never prescribe antibiotics for UTI infection,” said Tholemeier, adding that, in some cases, physicians can consider using these other medications in conjunction with antibiotics.
“But it’s knowing they [clinicians] have some other options in their toolbox,” she said.
A version of this article first appeared on Medscape.com.
FROM PFD WEEK 2024
Maternal BMI and Eating Disorders Tied to Mental Health in Kids
TOPLINE:
Children of mothers who had obesity or eating disorders before or during pregnancy may face higher risks for neurodevelopmental and psychiatric disorders.
METHODOLOGY:
- Researchers conducted a population-based cohort study to investigate the association of maternal eating disorders and high prepregnancy body mass index (BMI) with psychiatric disorder and neurodevelopmental diagnoses in offspring.
- They used Finnish national registers to assess all live births from 2004 through 2014, with follow-up until 2021.
- Data from 392,098 mothers (mean age, 30.15 years) and 649,956 offspring (48.86% girls) were included.
- Maternal eating disorders and prepregnancy BMI were the main exposures, with 1.60% of mothers having a history of eating disorders; 5.89% were underweight and 53.13% had obesity.
- Diagnoses of children were identified and grouped by ICD-10 codes of mental, behavioral, and neurodevelopmental disorders, mood disorders, anxiety disorders, sleep disorders, attention-deficit/hyperactivity disorder, and conduct disorders, among several others.
TAKEAWAY:
- From birth until 7-17 years of age, 16.43% of offspring were diagnosed with a neurodevelopmental or psychiatric disorder.
- Maternal eating disorders were associated with psychiatric disorders in the offspring, with the largest effect sizes observed for sleep disorders (hazard ratio [HR], 2.36) and social functioning and tic disorders (HR, 2.18; P < .001 for both).
- The offspring of mothers with severe prepregnancy obesity had a more than twofold increased risk for intellectual disabilities (HR, 2.04; 95% CI, 1.83-2.28); being underweight before pregnancy was also linked to many psychiatric disorders in offspring.
- The occurrence of adverse birth outcomes along with maternal eating disorders or high BMI further increased the risk for neurodevelopmental and psychiatric disorders in the offspring.
IN PRACTICE:
“The findings underline the risk of offspring mental illness associated with maternal eating disorders and prepregnancy BMI and suggest the need to consider these exposures clinically to help prevent offspring mental illness,” the authors wrote.
SOURCE:
This study was led by Ida A.K. Nilsson, PhD, of the Department of Molecular Medicine and Surgery at the Karolinska Institutet in Stockholm, Sweden, and was published online in JAMA Network Open.
LIMITATIONS:
A limitation of the study was the relatively short follow-up time, which restricted the inclusion of late-onset psychiatric disorder diagnoses, such as schizophrenia spectrum disorders. Paternal data and genetic information, which may have influenced the interpretation of the data, were not available. Another potential bias was that mothers with eating disorders may have been more perceptive to their child’s eating behavior, leading to greater access to care and diagnosis for these children.
DISCLOSURES:
This work was supported by the Swedish Research Council, the regional agreement on medical training and clinical research between Region Stockholm and the Karolinska Institutet, the Swedish Brain Foundation, and other sources. The authors declared no conflicts of interest.
This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
Semiannual Time Changes Linked to Accidents, Heart Attacks
As people turn their clocks back an hour on November 3 to mark the end of daylight saving time and return to standard time, they should remain aware of their sleep health and of potential risks associated with shifts in sleep patterns, according to Antle, a University of Calgary psychology professor who researches circadian cycles.
In an interview, Antle explained the science behind the health risks associated with time changes, offered tips to prepare for the shift, and discussed scientists’ suggestion to move to year-round standard time. This interview has been condensed and edited for clarity.
Why is it important to pay attention to circadian rhythms?
Circadian rhythms are patterns of physiologic and behavioral changes that affect everything inside the body and everything we do, including when hormones are secreted, digestive juices are ready to digest, and growth hormones are released at night. The body is a carefully coordinated orchestra, and everything has to happen at the right time.
When we start messing with those rhythms, that’s when states of disease start coming on and we don’t feel well. You’ve probably experienced it — when you try to stay up late, eat at the wrong times, or have jet lag. Flying across one or two time zones is usually tolerable, but if you fly across the world, it can be profound and make you feel bad, even up to a week. Similar shifts happen with the time changes.
How do the time changes affect health risks?
The wintertime change is generally more tolerable, and you’ll hear people talk about “gaining an hour” of sleep. It’s better than that, because we’re realigning our social clocks — such as our work schedules and school schedules — with daylight. We tend to go to bed relative to the sun but wake up based on when our boss says to be at our desk, so an earlier sunset helps us to fall asleep earlier and is healthier for our body.
In the spring, the opposite happens, and the time change affects us much more than just one bad night of sleep. For some people, it can feel like losing an hour of sleep every day for weeks, and that abrupt change can lead to car accidents, workplace injuries, heart attacks, and strokes. Our body experiences extra strain when we’re not awake and ready for the day.
What does your research show?
Most of my work focuses on preclinical models to understand what’s going on in the brain and body. Because we can’t study this ethically in humans, we learn a lot from animal models, especially mice. In a recent study looking at mild circadian disruption — where we raised mice on days that were about 75 minutes shorter — we saw they started developing diabetes, heart disease, and insulin resistance within a few months, or about the time they reached young adulthood.
Oftentimes, people think about their sleep rhythm as an arbitrary choice, but in fact, it does affect your health. We know that if your human circadian clock runs slow, morning light can help fix that and reset it, whereas evening light moves us in the other direction and makes it harder to get up in the morning.
Some people want to switch to one year-round time. What do you advocate?
In most cases, standard time (or winter time) is the more natural time that fits better with our body cycle. If we follow a clock time that has us getting up before sunrise or facing a later sunset, it’s linked to more social jet lag, where people are less attentive at work, don’t learn as well at school, and have more accidents.
Instead of picking what sounds good or chasing the name — such as “daylight saving time” — we need to think about the right time for us and our circadian clock. Some places, such as Maine in the United States, would actually fit better with the Atlantic time zone of the Maritime provinces in Canada, whereas some parts of Alberta are geographically west of Los Angeles based on longitude and would fit better with the Pacific time zone. Sticking with year-round daylight saving time in some cities in Alberta would mean people wouldn’t see the sun until 10:30 AM in the winter, which is really late and could affect activities such as skiing and hockey.
The Canadian Society for Chronobiology advocates for year-round standard time to align our social clocks with our biological clocks. Sleep and circadian rhythm experts in the US and globally have issued similar position statements.
What tips do you suggest to help people adjust their circadian clocks in November?
If you know your body and expect the change to affect you more, give yourself extra time. If your schedule permits, plan ahead and change your clocks sooner, especially if you’re able to do so over the weekend. Don’t rush around while tired — rushing when you’re not ready leads to those increased accidents on the road or on the job. Know that the sun will still be mismatched for a bit and your body clock will take time to adjust, so you might feel out of sorts for a few days.
Antle reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
As people turn their clocks back an hour on November 3 to mark the end of daylight saving time and return to standard time, they should remain aware of their sleep health and of potential risks associated with shifts in sleep patterns, according to a University of Calgary psychology professor who researches circadian cycles.
In an interview, Antle explained the science behind the health risks associated with time changes, offered tips to prepare for the shift, and discussed scientists’ suggestion to move to year-round standard time. This interview has been condensed and edited for clarity.
Why is it important to pay attention to circadian rhythms?
Circadian rhythms are patterns of physiologic and behavioral changes that affect everything inside the body and everything we do, including when hormones are secreted, digestive juices are ready to digest, and growth hormones are released at night. The body is a carefully coordinated orchestra, and everything has to happen at the right time.
When we start messing with those rhythms, that’s when states of disease start coming on and we don’t feel well. You’ve probably experienced it — when you try to stay up late, eat at the wrong times, or have jet lag. Flying across one or two time zones is usually tolerable, but if you fly across the world, it can be profound and make you feel bad, even up to a week. Similar shifts happen with the time changes.
How do the time changes affect health risks?
The wintertime change is generally more tolerable, and you’ll hear people talk about “gaining an hour” of sleep. It’s better than that, because we’re realigning our social clocks — such as our work schedules and school schedules — with daylight. We tend to go to bed relative to the sun but wake up based on when our boss says to be at our desk, so an earlier sunset helps us to fall asleep earlier and is healthier for our body.
In the spring, the opposite happens, and the time change affects us much more than just one bad night of sleep. For some people, it can feel like losing an hour of sleep every day for weeks, and that abrupt change can lead to car accidents, workplace injuries, heart attacks, and strokes. Our body experiences extra strain when we’re not awake and ready for the day.
What does your research show?
Most of my work focuses on preclinical models to understand what’s going on in the brain and body. Because we can’t study this ethically in humans, we learn a lot from animal models, especially mice. In a recent study looking at mild circadian disruption — where we raised mice on days that were about 75 minutes shorter — we saw they started developing diabetes, heart disease, and insulin resistance within in a few months, or about the time they were a young adult.
Oftentimes, people think about their sleep rhythm as an arbitrary choice, but in fact, it does affect your health. We know that if your human circadian clock runs slow, morning light can help fix that and reset it, whereas evening light moves us in the other direction and makes it harder to get up in the morning.
Some people want to switch to one year-round time. What do you advocate?
In most cases, the standard time (or winter time) is the more natural time that fits better with our body cycle. If we follow a time where we get up before sunrise or have a later sunset, then it’s linked to more social jet lag, where people are less attentive at work, don’t learn as well at school, and have more accidents.
Instead of picking what sounds good or chasing the name — such as “daylight saving time” — we need to think about the right time for us and our circadian clock. Some places, such as Maine in the United States, would actually fit better with the Atlantic time zone or the Maritime provinces in Canada, whereas some parts of Alberta are geographically west of Los Angeles based on longitude and would fit better with the Pacific time zone. Sticking with a year-round daylight saving time in some cities in Alberta would mean people wouldn’t see the sun until 10:30 AM in the winter, which is really late and could affect activities such as skiing and hockey.
The Canadian Society for Chronobiology advocates for year-round standard time to align our social clocks with our biological clocks. Sleep and circadian rhythm experts in the US and globally have issued similar position statements.
What tips do you suggest to help people adjust their circadian clocks in November?
For people who know their bodies and that it will affect them more, give yourself extra time. If your schedule permits, plan ahead and change your clocks sooner, especially if you’re able to do so over the weekend. Don’t rush around while tired — rushing when you’re not ready leads to those increased accidents on the road or on the job. Know that the sun will still be mismatched for a bit and your body clock will take time to adjust, so you might feel out of sorts for a few days.
Antle reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
As people turn their clocks back an hour on November 3 to mark the end of daylight saving time and return to standard time, they should remain aware of their sleep health and of potential risks associated with shifts in sleep patterns, according to a University of Calgary psychology professor who researches circadian cycles.
In an interview, Antle explained the science behind the health risks associated with time changes, offered tips to prepare for the shift, and discussed scientists’ suggestion to move to year-round standard time. This interview has been condensed and edited for clarity.
Why is it important to pay attention to circadian rhythms?
Circadian rhythms are patterns of physiologic and behavioral changes that affect everything inside the body and everything we do, including when hormones are secreted, digestive juices are ready to digest, and growth hormones are released at night. The body is a carefully coordinated orchestra, and everything has to happen at the right time.
When we start messing with those rhythms, that’s when states of disease start coming on and we don’t feel well. You’ve probably experienced it — when you try to stay up late, eat at the wrong times, or have jet lag. Flying across one or two time zones is usually tolerable, but if you fly across the world, it can be profound and make you feel bad, even up to a week. Similar shifts happen with the time changes.
How do the time changes affect health risks?
The wintertime change is generally more tolerable, and you’ll hear people talk about “gaining an hour” of sleep. It’s better than that, because we’re realigning our social clocks — such as our work schedules and school schedules — with daylight. We tend to go to bed relative to the sun but wake up based on when our boss says to be at our desk, so an earlier sunset helps us to fall asleep earlier and is healthier for our body.
In the spring, the opposite happens, and the time change affects us much more than just one bad night of sleep. For some people, it can feel like losing an hour of sleep every day for weeks, and that abrupt change can lead to car accidents, workplace injuries, heart attacks, and strokes. Our body experiences extra strain when we’re not awake and ready for the day.
What does your research show?
Most of my work focuses on preclinical models to understand what’s going on in the brain and body. Because we can’t study this ethically in humans, we learn a lot from animal models, especially mice. In a recent study looking at mild circadian disruption — where we raised mice on days that were about 75 minutes shorter — we saw that they started developing diabetes, heart disease, and insulin resistance within a few months, around the time they reached young adulthood.
Oftentimes, people think about their sleep rhythm as an arbitrary choice, but in fact, it does affect your health. We know that if your circadian clock runs slow, morning light can help fix that and reset it, whereas evening light moves us in the other direction and makes it harder to get up in the morning.
Some people want to switch to one year-round time. What do you advocate?
In most cases, standard time (or winter time) is the more natural time that fits better with our body cycle. Following a time that has us getting up before sunrise or facing a later sunset is linked to more social jet lag, in which people are less attentive at work, don’t learn as well at school, and have more accidents.
Instead of picking what sounds good or chasing the name — such as “daylight saving time” — we need to think about the right time for us and our circadian clock. Some places, such as Maine in the United States, would actually fit better with the Atlantic time zone used by Canada’s Maritime provinces, whereas some parts of Alberta are geographically west of Los Angeles by longitude and would fit better with the Pacific time zone. Sticking with year-round daylight saving time in some cities in Alberta would mean people wouldn’t see the sun until 10:30 AM in the winter, which is really late and could affect activities such as skiing and hockey.
The Canadian Society for Chronobiology advocates for year-round standard time to align our social clocks with our biological clocks. Sleep and circadian rhythm experts in the US and globally have issued similar position statements.
What tips do you suggest to help people adjust their circadian clocks in November?
If you know your body and know the change will affect you more, give yourself extra time. If your schedule permits, plan ahead and change your clocks sooner, especially if you’re able to do so over the weekend. Don’t rush around while tired — rushing when you’re not ready leads to those increased accidents on the road or on the job. Know that the sun will still be mismatched for a bit and your body clock will take time to adjust, so you might feel out of sorts for a few days.
Antle reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.