
PLOS Medicine: New Articles



A Peer-Reviewed Open-Access Journal



Updated: 2018-04-25T19:22:27Z

 



Attacks on medical workers in Syria: Implications for conflict research

2018-04-24T21:00:00Z

by Michael Spagat

In a Perspective linked to the Research Article by Haar and colleagues, Michael Spagat discusses the challenges and importance of conducting research on mortality in regions affected by violent conflicts.



Determining the scope of attacks on health in four governorates of Syria in 2016: Results of a field surveillance program

2018-04-24T21:00:00Z

by Rohini J. Haar, Casey B. Risko, Sonal Singh, Diana Rayes, Ahmad Albaik, Mohammed Alnajar, Mazen Kewara, Emily Clouse, Elise Baker, Leonard S. Rubenstein

Background

Violent attacks on and interference with hospitals, ambulances, health workers, and patients during conflict destroy vital health services during a time when they are most needed and undermine the long-term capacity of the health system. In Syria, such attacks have been frequent and intense and represent grave violations of the Geneva Conventions, but the number reported has varied considerably. A systematic mechanism to document these attacks could assist in designing more effective protection strategies and play a critical role in influencing policy, promoting justice, and addressing the health needs of the population.

Methods and findings

We developed a mobile data collection questionnaire to collect data on incidents of attacks on healthcare directly from the field. Data collectors from the Syrian American Medical Society (SAMS), using the tool or a text messaging system, recorded information on incidents across four of Syria’s northern governorates (Aleppo, Idleb, Hama, and Homs) from January 1, 2016, to December 31, 2016. SAMS recorded a total of 200 attacks on healthcare in 2016, 102 of them using the mobile data collection tool. Direct attacks on health facilities comprised the majority of attacks recorded (88.0%; n = 176). One hundred and twelve healthcare staff and 185 patients were killed in these incidents. Thirty-five percent of the facilities were attacked more than once over the data collection period; hospitals were significantly more likely to be attacked more than once compared to clinics and other types of healthcare facilities. Aerial bombs were used in the overwhelming majority of cases (91.5%). We also compared the SAMS data to a separate database developed by Physicians for Human Rights (PHR) based on media reports and matched the incidents to compare the results from the two methods (this analysis was limited to incidents at health facilities). Among 90 relevant incidents verified by PHR and 177 by SAMS, there were 60 that could be matched to each other, highlighting the differences in results from the two methods. This study is limited by the complexities of data collection in a conflict setting, only partial use of the standardized reporting tool, and the limited accessibility of some health facilities and workers; it may also be biased towards the reporting of attacks on larger or more visible health facilities.

Conclusions

The use of field data collectors and of consistent definitions can play an important role in tracking incidents of attacks on health services. A mobile systematic data collection tool can complement other methods for tracking incidents of attacks on healthcare and ensure the collection of detailed information about each attack that may assist in better advocacy, programs, and accountability, but can be practically challenging. Comparing attacks between SAMS and PHR suggests that there may have been significantly more attacks than previously captured by any one methodology. This scale of attacks suggests that the targeting of healthcare in Syria is systematic, and highlights that condemnation of such attacks by the international community and by medical groups working in Syria has failed to stop them.




Maternal age and offspring developmental vulnerability at age five: A population-based cohort study of Australian children

2018-04-24T21:00:00Z

by Kathleen Falster, Mark Hanly, Emily Banks, John Lynch, Georgina Chambers, Marni Brownell, Sandra Eades, Louisa Jorm

Background

In recent decades, there has been a shift to later childbearing in high-income countries. There is limited large-scale evidence of the relationship between maternal age and child outcomes beyond the perinatal period. The objective of this study is to quantify a child’s risk of developmental vulnerability at age five, according to their mother’s age at childbirth.

Methods and findings

Linkage of population-level perinatal, hospital, and birth registration datasets to data from the Australian Early Development Census (AEDC) and school enrolments in Australia’s most populous state, New South Wales (NSW), enabled us to follow a cohort of 99,530 children from birth to their first year of school in 2009 or 2012. The study outcome was teacher-reported child development on five domains measured by the AEDC, including physical health and well-being, emotional maturity, social competence, language and cognitive skills, and communication skills and general knowledge. Developmental vulnerability was defined as domain scores below the 2009 AEDC 10th percentile cut point. The mean maternal age at childbirth was 29.6 years (standard deviation [SD], 5.7), with 4,382 children (4.4%) born to mothers aged <20 years and 20,026 children (20.1%) born to mothers aged ≥35 years. The proportion vulnerable on ≥1 domains was 21% overall and followed a reverse J-shaped distribution according to maternal age: it was highest in children born to mothers aged ≤15 years, at 40% (95% CI, 32–49), and was lowest in children born to mothers aged 30–35 years, at 17%–18%. For maternal ages 36 years to ≥45 years, the proportion vulnerable on ≥1 domains increased to 17%–24%. Adjustment for sociodemographic characteristics significantly attenuated vulnerability risk in children born to younger mothers, while adjustment for potentially modifiable factors, such as antenatal visits, had little additional impact across all ages. Although the multi-agency linkage yielded a broad range of sociodemographic, perinatal, health, and developmental variables at the child’s birth and school entry, the study was necessarily limited to variables available in the source data, which were mostly recorded for administrative purposes.
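The vulnerability definition above amounts to a per-domain threshold test: a child is flagged on a domain when the score falls below that domain's 10th-percentile cut point, and "vulnerable on ≥1 domains" counts children flagged anywhere. A hedged sketch with hypothetical scores and cut points (the study used the actual 2009 AEDC cut points, which are not reproduced here):

```python
# Sketch of the AEDC-style vulnerability classification described above.
# Domain scores and cut points below are hypothetical illustrations, not
# study data; the study used the 2009 AEDC 10th-percentile cut points.

def vulnerable_domains(scores, cut_points):
    """Return the set of domains on which a child scores below the cut point."""
    return {d for d, s in scores.items() if s < cut_points[d]}

def vulnerable_on_one_or_more(children, cut_points):
    """Proportion of children vulnerable on >=1 domain."""
    flagged = [c for c in children if vulnerable_domains(c, cut_points)]
    return len(flagged) / len(children)

cut_points = {"physical": 6.0, "social": 5.5, "emotional": 5.0,
              "language": 4.5, "communication": 5.0}

children = [
    {"physical": 7.1, "social": 6.0, "emotional": 5.5, "language": 5.0, "communication": 6.2},
    {"physical": 5.0, "social": 6.1, "emotional": 5.2, "language": 5.1, "communication": 5.5},
    {"physical": 6.5, "social": 5.0, "emotional": 4.2, "language": 4.0, "communication": 4.8},
]

print(vulnerable_on_one_or_more(children, cut_points))  # 2 of 3 children flagged
```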

Conclusions

Increasing maternal age was associated with a lower risk of developmental vulnerability for children born to mothers aged 15 years to about 30 years. In contrast, increasing maternal age beyond 35 years was generally associated with increasing vulnerability, broadly equivalent to the risk for children born to mothers in their early twenties, which is highly relevant in the international context of later childbearing. That socioeconomic disadvantage explained approximately half of the increased risk of developmental vulnerability associated with younger motherhood suggests there may be scope to improve population-level child development through policies and programs that support disadvantaged mothers and children.




From surviving to thriving: What evidence is needed to move early child-development interventions to scale?

2018-04-24T21:00:00Z

by Mark Tomlinson

In a Perspective, Mark Tomlinson discusses research on early interventions to support child development in developing countries.



Impacts 2 years after a scalable early childhood development intervention to increase psychosocial stimulation in the home: A follow-up of a cluster randomised controlled trial in Colombia

2018-04-24T21:00:00Z

by Alison Andrew, Orazio Attanasio, Emla Fitzsimons, Sally Grantham-McGregor, Costas Meghir, Marta Rubio-Codina

Background

Poor early childhood development (ECD) in low- and middle-income countries is a major concern. There are calls to universalise access to ECD interventions through integrating them into existing government services but little evidence on the medium- or long-term effects of such scalable models. We previously showed that a psychosocial stimulation (PS) intervention integrated into a cash transfer programme improved Colombian children’s cognition, receptive language, and home stimulation. In this follow-up study, we assessed the medium-term impacts of the intervention, 2 years after it ended, on children’s cognition, language, school readiness, executive function, and behaviour.

Methods and findings

Study participants were 1,419 children aged 12–24 months at baseline from beneficiary households of the cash transfer programme, living in 96 Colombian towns. The original cluster randomised controlled trial (2009–2011) randomly allocated the towns to control (N = 24, n = 349), PS (N = 24, n = 357), multiple micronutrient (MN) supplementation (N = 24, n = 354), and combined PS and MN (N = 24, n = 359). Interventions lasted 18 months. In this study (26 September 2013 to 11 January 2014), we assessed impacts on cognition, language, school readiness, executive function, and behaviour 2 years after intervention, at ages 4.5–5.5 years. Testers, but not participants, were blinded to treatment allocation. Analysis was on an intent-to-treat basis. We reassessed 88.5% of the children in the original study (n = 1,256). Factor analysis of test scores yielded 2 factors: cognitive (cognition, language, school readiness, executive function) and behavioural. We found no effect of the interventions after 2 years on the cognitive factor (PS: −0.031 SD, 95% CI −0.229 to 0.167; MN: −0.042 SD, 95% CI −0.249 to 0.164; PS and MN: −0.111 SD, 95% CI −0.311 to 0.089), the behavioural factor (PS: 0.013 SD, 95% CI −0.172 to 0.198; MN: 0.071 SD, 95% CI −0.115 to 0.258; PS and MN: 0.062 SD, 95% CI −0.115 to 0.239), or home stimulation. Study limitations include that behavioural development was measured through maternal report and that very small effects may have been missed, despite the large sample size.

Conclusions

We found no evidence that a scalable PS intervention benefited children’s development 2 years after it ended. It is possible that the initial effects on child development were too small to be sustained or that the lack of continued impact on home stimulation contributed to fade out. Both are likely related to compromises in implementation when going to scale and suggest one should not extrapolate from medium-term effects of small efficacy trials to scalable interventions. Understanding the salient differences between small efficacy trials and scaled-up versions will be key to making ECD interventions effective tools for policymakers.

Trial registration

ISRCTN18991160




Two-year impact of community-based health screening and parenting groups on child development in Zambia: Follow-up to a cluster-randomized controlled trial

2018-04-24T21:00:00Z

by Peter C. Rockers, Arianna Zanolini, Bowen Banda, Mwaba Moono Chipili, Robert C. Hughes, Davidson H. Hamer, Günther Fink

Background

Early childhood interventions have potential to offset the negative impact of early adversity. We evaluated the impact of a community-based parenting group intervention on child development in Zambia.

Methods and findings

We conducted a non-masked cluster-randomized controlled trial in Southern Province, Zambia. Thirty clusters of villages were matched based on population density and distance from the nearest health center, and randomly assigned to intervention (15 clusters, 268 caregiver–child dyads) or control (15 clusters, 258 caregiver–child dyads). Caregivers were eligible if they had a child 6 to 12 months old at baseline. In intervention clusters, caregivers were visited twice per month during the first year of the study by child development agents (CDAs) and were invited to attend fortnightly parenting group meetings. Parenting groups selected “head mothers” from their communities who were trained by CDAs to facilitate meetings and deliver a diverse parenting curriculum. The parenting group intervention, originally designed to run for 1 year, was extended, and households were visited for a follow-up assessment at the end of year 2. The control group did not receive any intervention. Intention-to-treat analysis was performed for primary outcomes measured at the year 2 follow-up: stunting and 5 domains of neurocognitive development measured using the Bayley Scales of Infant and Toddler Development–Third Edition (BSID-III). In order to show Cohen’s d estimates, BSID-III composite scores were converted to z-scores by standardizing within the study population. In all, 195/268 children (73%) in the intervention group and 182/258 children (71%) in the control group were assessed at endline after 2 years. The intervention significantly reduced stunting (56/195 versus 72/182; adjusted odds ratio 0.45, 95% CI 0.22 to 0.92; p = 0.028) and had a significant positive impact on language (β 0.14, 95% CI 0.01 to 0.27; p = 0.039). 
The intervention did not significantly impact cognition (β 0.11, 95% CI −0.06 to 0.29; p = 0.196), motor skills (β −0.01, 95% CI −0.25 to 0.24; p = 0.964), adaptive behavior (β 0.21, 95% CI −0.03 to 0.44; p = 0.088), or social-emotional development (β 0.20, 95% CI −0.04 to 0.44; p = 0.098). Observed impacts may have been due in part to home visits by CDAs during the first year of the intervention.
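The conversion of BSID-III composite scores to z-scores described above (standardizing within the study population so that effects read in SD units, i.e., as Cohen's d) is a standard transformation; a minimal sketch with made-up scores, not trial data:

```python
# Minimal sketch of within-sample z-score standardization (made-up composite
# scores, not trial data): each score is centred on the sample mean and
# scaled by the sample standard deviation, so effect sizes read in SD units.
from statistics import mean, pstdev

def z_scores(scores):
    mu = mean(scores)
    sigma = pstdev(scores)  # population SD, since we standardize within the sample
    return [(s - mu) / sigma for s in scores]

composites = [85, 90, 95, 100, 105, 110, 115]
z = z_scores(composites)
print(round(mean(z), 10))   # standardized scores have mean 0
print(round(pstdev(z), 10)) # and SD 1
```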

Conclusions

The results of this trial suggest that parenting groups hold promise for improving child development, particularly physical growth, in low-resource settings like Zambia.

Trial registration

ClinicalTrials.gov NCT02234726




Breastfeeding during infancy and neurocognitive function in adolescence: 16-year follow-up of the PROBIT cluster-randomized trial

2018-04-20T21:00:00Z

by Seungmi Yang, Richard M. Martin, Emily Oken, Mikhail Hameza, Glen Doniger, Shimon Amit, Rita Patel, Jennifer Thompson, Sheryl L. Rifas-Shiman, Konstantin Vilchuck, Natalia Bogdanovich, Michael S. Kramer

Background

Evidence on the long-term effect of breastfeeding on neurocognitive development is based almost exclusively on observational studies. In the 16-year follow-up study of a large, cluster-randomized trial of a breastfeeding promotion intervention, we evaluated the long-term persistence of the neurocognitive benefits of the breastfeeding promotion intervention previously observed at early school age.

Methods and findings

A total of 13,557 participants (79.5% of the 17,046 randomized) of the Promotion of Breastfeeding Intervention Trial (PROBIT) were followed up at age 16 from September 2012 to July 2015. At the follow-up, neurocognitive function was assessed in 7 verbal and nonverbal cognitive domains using a computerized, self-administered test battery among 13,427 participants. Using an intention-to-treat (ITT) analysis as our prespecified primary analysis, we estimated cluster- and baseline characteristic-adjusted mean differences between the intervention (prolonged and exclusive breastfeeding promotion modelled on the Baby-Friendly Hospital Initiative) and control (usual care) groups in 7 cognitive domains and a global cognitive score. In our prespecified secondary analysis, we estimated mean differences by instrumental variable (IV) analysis to account for noncompliance with the randomly assigned intervention and estimate causal effects of breastfeeding. The 16-year follow-up rates were similar in the intervention (79.7%) and control groups (79.3%), and baseline characteristics were comparable between the two. In the cluster-adjusted ITT analyses, children in the intervention group did not show statistically significant differences in scores from children in the control group. Prespecified additional adjustment for baseline characteristics improved statistical precision and resulted in slightly higher scores among children in the intervention group for verbal function (1.4 [95% CI 0.3–2.5]) and memory (1.2 [95% CI 0.01–2.4]). IV analysis showed that children who were exclusively breastfed for ≥3 (versus <3) months had a 3.5-point (95% CI 0.9–6.1) higher verbal function score, but no differences were observed in other domains. While our computerized, self-administered cognitive testing reduced the cluster-level variability in the scores, it may have increased individual-level measurement errors in adolescents.
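The instrumental variable analysis mentioned above can, in its simplest form, be illustrated with a Wald estimator: the intention-to-treat difference in the outcome is scaled by the difference in exposure uptake between randomized arms. All numbers below are hypothetical, and PROBIT's actual IV analysis was cluster-adjusted; this only sketches the idea:

```python
# Sketch of a Wald instrumental-variable (IV) estimator: the ITT difference
# in the outcome divided by the difference in exposure (here, exclusive
# breastfeeding) rates between randomized arms estimates the effect among
# those whose exposure the intervention changed. Numbers are hypothetical;
# PROBIT's IV analysis was more elaborate (cluster-adjusted).

def wald_iv(mean_treat, mean_ctrl, uptake_treat, uptake_ctrl):
    return (mean_treat - mean_ctrl) / (uptake_treat - uptake_ctrl)

# Hypothetical: a 1.4-point ITT difference with exposure uptake of 43% vs 6%.
print(round(wald_iv(101.4, 100.0, 0.43, 0.06), 2))  # 3.78
```

The IV estimate exceeds the ITT estimate because the latter is diluted by noncompliance, which is why the abstract's IV effect on verbal function (3.5 points) is larger than the ITT differences.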

Conclusions

We observed no benefit of a breastfeeding promotion intervention on overall neurocognitive function. The only beneficial effect was on verbal function at age 16. The higher verbal ability is consistent with results observed at early school age; however, the effect size was substantially smaller in adolescence.

PROBIT trial registration

ClinicalTrials.gov NCT01561612




Universal versus conditional day 3 follow-up for children with non-severe unclassified fever at the community level in Ethiopia: A cluster-randomised non-inferiority trial

2018-04-17T21:00:00Z

by Karin Källander, Tobias Alfvén, Tjede Funk, Ayalkibet Abebe, Abreham Hailemariam, Dawit Getachew, Max Petzold, Laura C. Steinhardt, Julie R. Gutman

Background

With declining malaria prevalence and improved use of malaria diagnostic tests, an increasing proportion of children seen by community health workers (CHWs) have unclassified fever. Current community management guidelines by WHO advise that children seen with non-severe unclassified fever (on day 1) should return to CHWs on day 3 for reassessment. We compared the safety of conditional follow-up (reassessment only in cases where symptoms do not resolve) with universal follow-up on day 3.

Methods and findings

We undertook a 2-arm cluster-randomised controlled non-inferiority trial among children aged 2–59 months presenting with fever and without malaria, pneumonia, diarrhoea, or danger signs to 284 CHWs affiliated with 25 health centres (clusters) in Southern Nations, Nationalities, and Peoples’ Region, Ethiopia. The primary outcome was treatment failure (persistent fever, development of danger signs, hospital admission, death, malaria, pneumonia, or diarrhoea) at 1 week (day 8) of follow-up. Non-inferiority was defined as a 4% or smaller difference in the proportion of treatment failures with conditional follow-up compared to universal follow-up. Secondary outcomes included the percentage of children brought for reassessment, antimicrobial prescription, and severe adverse events (hospitalisations and deaths) after 4 weeks (day 29). From December 1, 2015, to November 30, 2016, we enrolled 4,595 children, of whom 3,946 (1,953 universal follow-up arm; 1,993 conditional follow-up arm) adhered to the CHW’s follow-up advice and also completed a day 8 study visit within ±1 day. Overall, 2.7% had treatment failure on day 8: 0.8% (16/1,993) in the conditional follow-up arm and 4.6% (90/1,953) in the universal follow-up arm (risk difference of treatment failure −3.81%, 95% CI −∞, 0.65%), meeting the prespecified criterion for non-inferiority. There were no deaths recorded by day 29. In the universal follow-up arm, 94.6% of caregivers reported returning for reassessment on day 3, in contrast to 7.5% in the conditional follow-up arm (risk ratio 22.0, 95% CI 17.9, 27.2, p < 0.001). Few children sought care from another provider after their initial visit to the CHW: 3.0% (59/1,993) in the conditional follow-up arm and 1.1% (22/1,953) in the universal follow-up arm, on average 3.2 and 3.4 days later, respectively, with no significant difference between arms (risk difference 1.79%, 95% CI −1.23%, 4.82%, p = 0.244).
The mean travel time to another provider was 2.2 hours (95% CI 0.01, 5.3) in the conditional follow-up arm and 2.6 hours (95% CI 0.02, 4.5) in the universal follow-up arm (p = 0.82); the mean cost for seeking care after visiting the CHW was 26.5 birr (95% CI 7.8, 45.2) and 22.8 birr (95% CI 15.6, 30.0), respectively (p = 0.69). Though this study was an important step to evaluate the safety of conditional follow-up, the high adherence seen may have resulted from knowledge of the 1-week follow-up visit and may therefore not transfer to routine practice; hence, in an implementation setting it is crucial that CHWs are well trained in counselling skills to advise caregivers on when to come back for follow-up.
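The non-inferiority decision rule used in this trial compares the upper bound of a one-sided 95% confidence interval for the risk difference against the prespecified 4% margin. A naive normal-approximation sketch of that check, using the failure counts quoted above — note that the trial's analysis adjusted for the cluster design, so its reported bound (0.65%) differs from this unadjusted calculation:

```python
# Naive sketch of a non-inferiority check on a risk difference via a
# one-sided 95% normal-approximation CI. Counts are from the abstract
# (conditional arm: 16/1,993 failures; universal arm: 90/1,953), but the
# method here ignores clustering, so the bound differs from the trial's.
import math

def upper_bound_one_sided(fail_new, n_new, fail_ref, n_ref, z=1.645):
    p_new, p_ref = fail_new / n_new, fail_ref / n_ref
    diff = p_new - p_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    return diff + z * se

MARGIN = 0.04  # prespecified 4% non-inferiority margin
ub = upper_bound_one_sided(16, 1993, 90, 1953)
print(f"upper bound = {ub:.4f}, non-inferior: {ub < MARGIN}")
```

Non-inferiority is declared when the upper bound stays below the margin; here the conditional arm's failure rate is actually lower, so the bound clears the margin easily.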

Conclusions

Conditional follow-up of children with non-severe unclassified fever in a low malaria endemic setting in Ethiopia was non-inferior to universal follow-up through day 8. Allowing CHWs to advise caregivers to bring children back only in case of continued symptoms might be a more efficient use of resources in similar settings.

Trial registration

ClinicalTrials.gov NCT02926625




Universal versus conditional day 3 follow-up for children with non-severe unclassified fever at the community level in the Democratic Republic of the Congo: A cluster-randomized, community-based non-inferiority trial

2018-04-17T21:00:00Z

by Luke C. Mullany, Elburg W. van Boetzelaer, Julie R. Gutman, Laura C. Steinhardt, Pascal Ngoy, Yolanda Barbera Lainez, Alison Wittcoff, Steven A. Harvey, Lara S. Ho

Background

The World Health Organization’s integrated community case management (iCCM) guidelines recommend that all children presenting with uncomplicated fever and no danger signs return for follow-up on day 3 following the initial consultation on day 1. Such fevers often resolve rapidly, however, and previous studies suggest that expectant home care for uncomplicated fever can be safely recommended. We aimed to determine if a conditional follow-up visit was non-inferior to a universal follow-up visit for these children.

Methods and findings

We conducted a cluster-randomized, community-based non-inferiority trial among children 2–59 months old presenting to community health workers (CHWs) with non-severe unclassified fever in Tanganyika Province, Democratic Republic of the Congo. Clusters (n = 28) of CHWs were randomized to advise caregivers to either (1) return for a follow-up visit on day 3 following the initial consultation on day 1, regardless of illness resolution (as per current WHO guidelines; universal follow-up group) or (2) return for a follow-up visit on day 3 only if illness continued (conditional follow-up group). Children in both arms were assessed again at day 8, and classified as a clinical failure if fever (caregiver-reported), malaria, diarrhea, pneumonia, or decline of health status (development of danger signs, hospitalization, or death) was noted (failure definition 1). Alternative failure definitions were examined, whereby caregiver-reported fever was first restricted to caregiver-reported fever of at least 3 days (failure definition 2) and then replaced with fever measured via axillary temperature (failure definition 3). Study participants, providers, and investigators were not masked. Among 4,434 enrolled children, 4,141 (93.4%) met the per-protocol definition of receipt of the arm-specific advice from the CHW and a timely day 8 assessment (universal follow-up group: 2,210; conditional follow-up group: 1,931). Failure was similar (difference: −0.7%) in the conditional follow-up group (n = 188, 9.7%) compared to the universal follow-up group (n = 230, 10.4%); however, the upper bound of a 1-sided 95% confidence interval around this difference (−∞, 5.1%) exceeded the prespecified non-inferiority margin of 4.0% (non-inferiority p = 0.089). When caregiver-reported fever was restricted to fevers lasting ≥3 days, failure in the conditional follow-up group (n = 159, 8.2%) was similar to that in the universal follow-up group (n = 200, 9.1%) (difference: −0.8%; 95% CI: −∞, 4.1%; p = 0.053). If caregiver-reported fever was replaced by axillary temperature measurement in the definition of failure, failure in the conditional follow-up group (n = 113, 5.9%) was non-inferior to that in the universal follow-up group (n = 160, 7.2%) (difference: −1.4%; 95% CI: −∞, 2.5%; p = 0.012). In post hoc analysis, when the definition of failure was limited to malaria, diarrhea, pneumonia, development of danger signs, hospitalization, or death, failure in the conditional follow-up group (n = 108, 5.6%) was similar to that in the universal follow-up group (n = 147, 6.7%), and within the non-inferiority margin (95% CI: −∞, 2.9%; p = 0.017). Limitations include initial underestimation of the proportion of clinical failures as well as substantial variance in cluster-specific failure rates, reducing the precision of our estimates. In addition, heightened security concerns slowed recruitment in the final months of the study.

Conclusions

We found that advising caregivers to return only if children worsened or remained ill on day 3 resulted in similar rates of caregiver-reported fever and other clinical outcomes on day 8, co[...]



Preprints in medical research: Progress and principles

2018-04-16T21:00:00Z

by Larry Peiperl, on behalf of the PLOS Medicine Editors

In this month’s editorial, the PLOS Medicine Editors discuss the role of preprints in human health research and propose a 3-part framework for ensuring benefit.



Estimating the health and economic effects of the proposed US Food and Drug Administration voluntary sodium reformulation: Microsimulation cost-effectiveness analysis

2018-04-10T21:00:00Z

by Jonathan Pearson-Stuttard, Chris Kypridemos, Brendan Collins, Dariush Mozaffarian, Yue Huang, Piotr Bandosz, Simon Capewell, Laurie Whitsel, Parke Wilde, Martin O’Flaherty, Renata Micha

Background

Sodium consumption is a modifiable risk factor for higher blood pressure (BP) and cardiovascular disease (CVD). The US Food and Drug Administration (FDA) has proposed voluntary sodium reduction goals targeting processed and commercially prepared foods. We aimed to quantify the potential health and economic impact of this policy.

Methods and findings

We used a microsimulation approach of a close-to-reality synthetic population (US IMPACT Food Policy Model) to estimate CVD deaths and cases prevented or postponed, quality-adjusted life years (QALYs), and cost-effectiveness from 2017 to 2036 of 3 scenarios: (1) optimal, 100% compliance with 10-year reformulation targets; (2) modest, 50% compliance with 10-year reformulation targets; and (3) pessimistic, 100% compliance with 2-year reformulation targets, but with no further progress. We used the National Health and Nutrition Examination Survey and high-quality meta-analyses to inform model inputs. Costs included government costs to administer and monitor the policy, industry reformulation costs, and CVD-related healthcare, productivity, and informal care costs. Between 2017 and 2036, the optimal reformulation scenario achieving the FDA sodium reduction targets could prevent approximately 450,000 CVD cases (95% uncertainty interval: 240,000 to 740,000), gain approximately 2.1 million discounted QALYs (1.7 million to 2.4 million), and produce discounted cost savings (health savings minus policy costs) of approximately $41 billion ($14 billion to $81 billion). In the modest and pessimistic scenarios, health gains would be 1.1 million and 0.7 million QALYs, with savings of $19 billion and $12 billion, respectively. All the scenarios were estimated with more than 80% probability to be cost-effective (incremental cost/QALY < $100,000) by 2021 and to become cost-saving by 2031. Limitations include evaluating only diseases mediated through BP, while decreasing sodium consumption could have beneficial effects upon other health burdens such as gastric cancer. Further, the effect estimates in the model are based on interventional and prospective observational studies; they are therefore subject to biases and confounding that may also have influenced our model estimates.
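The "discounted" QALY and cost figures above rest on standard present-value discounting of a yearly stream of health gains or savings; a minimal sketch, with a hypothetical 3% annual rate and a made-up flat yearly stream rather than the model's actual inputs:

```python
# Sketch of present-value discounting as used in cost-effectiveness models:
# a stream of yearly QALY gains (or cost savings) is discounted back to the
# start year at a fixed annual rate. The 3% rate and the flat 100,000
# QALYs/year stream are hypothetical, not the US IMPACT model's inputs.

def present_value(yearly_values, rate=0.03):
    return sum(v / (1 + rate) ** t for t, v in enumerate(yearly_values))

qalys = [100_000] * 20  # hypothetical gains over a 20-year horizon (2017-2036)
pv = present_value(qalys)
print(round(pv))  # discounted total falls below the undiscounted 2,000,000
```

Discounting is why later-accruing benefits count for less, and why a policy can cross from cost-effective to cost-saving only some years after implementation.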

Conclusions

Implementing and achieving the FDA sodium reformulation targets could generate substantial health gains and net cost savings.




Preprints: An underutilized mechanism to accelerate outbreak science

2018-04-03T21:00:00Z

by Michael A. Johansson, Nicholas G. Reich, Lauren Ancel Meyers, Marc Lipsitch

In an Essay, Michael Johansson and colleagues advocate the posting of research studies addressing infectious disease outbreaks as preprints.



Genetic scores to stratify risk of developing multiple islet autoantibodies and type 1 diabetes: A prospective study in children

2018-04-03T21:00:00Z

by Ezio Bonifacio, Andreas Beyerlein, Markus Hippich, Christiane Winkler, Kendra Vehik, Michael N. Weedon, Michael Laimighofer, Andrew T. Hattersley, Jan Krumsiek, Brigitte I. Frohnert, Andrea K. Steck, William A. Hagopian, Jeffrey P. Krischer, Åke Lernmark, Marian J. Rewers, Jin-Xiong She, Jorma Toppari, Beena Akolkar, Richard A. Oram, Stephen S. Rich, Anette-G. Ziegler, for the TEDDY Study Group

Background

Around 0.3% of newborns will develop autoimmunity to pancreatic beta cells in childhood and subsequently develop type 1 diabetes before adulthood. Primary prevention of type 1 diabetes will require early intervention in genetically at-risk infants. The objective of this study was to determine to what extent genetic scores (two previous genetic scores and a merged genetic score) can improve the prediction of type 1 diabetes.

Methods and findings

The Environmental Determinants of Diabetes in the Young (TEDDY) study followed genetically at-risk children at 3- to 6-monthly intervals from birth for the development of islet autoantibodies and type 1 diabetes. Infants were enrolled between 1 September 2004 and 28 February 2010 and monitored until 31 May 2016. The risk (positive predictive value) for developing multiple islet autoantibodies (pre-symptomatic type 1 diabetes) and type 1 diabetes was determined in 4,543 children who had no first-degree relatives with type 1 diabetes and either a heterozygous HLA DR3 and DR4-DQ8 risk genotype or a homozygous DR4-DQ8 genotype, and in 3,498 of these children in whom genetic scores were calculated from 41 single nucleotide polymorphisms. In the children with the HLA risk genotypes, risk for developing multiple islet autoantibodies was 5.8% (95% CI 5.0%–6.6%) by age 6 years, and risk for diabetes by age 10 years was 3.7% (95% CI 3.0%–4.4%). Risk for developing multiple islet autoantibodies was 11.0% (95% CI 8.7%–13.3%) in children with a merged genetic score of >14.4 (upper quartile; n = 907) compared to 4.1% (95% CI 3.3%–4.9%, P < 0.001) in children with a genetic score of ≤14.4 (n = 2,591). Risk for developing diabetes by age 10 years was 7.6% (95% CI 5.3%–9.9%) in children with a merged score of >14.4 compared with 2.7% (95% CI 1.9%–3.6%) in children with a score of ≤14.4 (P < 0.001). Of 173 children with multiple islet autoantibodies by age 6 years and 107 children with diabetes by age 10 years, 82 (sensitivity, 47.4%; 95% CI 40.1%–54.8%) and 52 (sensitivity, 48.6%, 95% CI 39.3%–60.0%), respectively, had a score >14.4. Scores were higher in European versus US children (P = 0.003). In children with a merged score of >14.4, risk for multiple islet autoantibodies was similar and consistently >10% in Europe and in the US; risk was greater in males than in females (P = 0.01). 
Limitations of the study include that the genetic scores were originally developed from case–control studies of clinical diabetes in individuals of mainly European descent; it is therefore possible that they may not be suitable for all populations.
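The sensitivity figures quoted above follow directly from the reported counts: the number of cases with a merged genetic score above the upper-quartile threshold of 14.4, divided by all cases. A small sketch reproducing them:

```python
# Sketch of the sensitivity figures quoted in the abstract, computed from
# the reported counts: sensitivity = cases above the score threshold
# divided by all cases. The 14.4 threshold is the upper-quartile cut of
# the merged genetic score.

def sensitivity(cases_above_threshold, total_cases):
    return cases_above_threshold / total_cases

# Multiple islet autoantibodies by age 6: 82 of 173 cases scored > 14.4.
print(f"{sensitivity(82, 173):.1%}")  # 47.4%, as reported
# Diabetes by age 10: 52 of 107 cases scored > 14.4.
print(f"{sensitivity(52, 107):.1%}")  # 48.6%, as reported
```

The complementary quantity in the abstract, the >10% risk among score-positive children, is a positive predictive value; it depends on the number of score-positive children (n = 907), not only on the cases.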

Conclusions

A type 1 diabetes genetic score identified infants without family history of type 1 diabetes who had a greater than 10% risk for pre-symptomatic type 1 diabetes, and a nearly 2-fold higher risk than children identified by high-risk HLA genotypes alone. This finding extends the possibilities for enrolling children into type 1 diabetes primary prevention trials.




Integration of postpartum healthcare services for HIV-infected women and their infants in South Africa: A randomised controlled trial

2018-03-30T21:00:00Z

by Landon Myer, Tamsin K. Phillips, Allison Zerbe, Kirsty Brittain, Maia Lesosky, Nei-Yuan Hsiao, Robert H. Remien, Claude A. Mellins, James A. McIntyre, Elaine J. Abrams

Background

As the number of HIV-infected women initiating lifelong antiretroviral therapy (ART) during pregnancy increases globally, concerns have emerged regarding low levels of retention in HIV services and suboptimal adherence to ART during the postpartum period. We examined the impact of integrating postpartum ART for HIV+ mothers alongside infant follow-up within maternal and child health (MCH) services in Cape Town, South Africa.

Methods and findings

We conducted a randomised trial among HIV+ postpartum women aged ≥18 years who initiated ART during pregnancy in the local antenatal care clinic and were breastfeeding when screened before 6 weeks postpartum. We compared an integrated postnatal service among mothers and their infants (the MCH-ART intervention) to the local standard of care (control)—immediate postnatal referral of HIV+ women on ART to general adult ART services and their infants to separate routine infant follow-up. Evaluation data were collected through medical records and trial measurement visits scheduled and located separately from healthcare services involved in either arm. The primary trial outcome was a composite endpoint of women’s retention in ART care and viral suppression (VS) (viral load < 50 copies/ml) at 12 months postpartum; secondary outcomes included duration of any and exclusive breastfeeding, mother-to-child HIV transmission, and infant mortality. Between 5 June 2013 and 10 December 2014, a total of 471 mother–infant pairs were enrolled and randomised (mean age, 28.6 years; 18% nulliparous; 57% newly diagnosed with HIV in pregnancy; median duration of ART use at randomisation, 18 weeks). Among 411 women (87%) with primary endpoint data available, 77% of women (n = 155) randomised to the MCH-ART intervention achieved the primary composite outcome of retention in ART services with VS at 12 months postpartum, compared to 56% of women (n = 117) randomised to the control arm (absolute risk difference, 0.21; 95% CI: 0.12–0.30; p < 0.001). The findings for improved retention in care and VS among women in the MCH-ART intervention arm were consistent across subgroups of participants according to demographic and clinical characteristics. The median durations of any breastfeeding and exclusive breastfeeding were longer in women randomised to the intervention versus control arm (6.9 versus 3.0 months, p = 0.006, and 3.0 versus 1.4 months, p < 0.001, respectively). 
For the infants, overall HIV-free survival through 12 months of age was 97%: mother-to-child HIV transmission was 1.2% overall (n = 4 and n = 1 transmissions in the intervention and control arms, respectively), and infant mortality was 1.9% (n = 6 and n = 3 deaths in the intervention and control arms, respectively), and these outcomes were similar by trial arm. Interpretation of these findings should be qualified by the location of this study in a single urban area as well as the self-reported nature of breastfeeding outcomes.
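The primary-endpoint comparison above reduces to a simple risk-difference calculation. A minimal sketch, assuming per-arm denominators of 201 and 210 (back-calculated from the reported percentages; the abstract gives only the combined n = 411):

```python
# Sketch of the primary-endpoint comparison. Numerators are from the abstract;
# the per-arm denominators (201 intervention, 210 control) are assumptions
# back-calculated from the reported 77% and 56%.
from math import sqrt

def risk_difference(x1, n1, x2, n2, z=1.96):
    """Absolute risk difference with a Wald 95% confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

rd, lo, hi = risk_difference(155, 201, 117, 210)
# rd is approximately 0.21, with a Wald CI close to the reported 0.12-0.30
```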

Conclusions

In this study, we found that integrating ART services into the MCH platform during the postnatal period was a simple and effective intervention, and this should be considered for improving maternal and child outcomes in the context of HIV.

Trial registration

ClinicalTrials.gov NCT01933477.




Cardiovascular disease: The rise of the genetic risk score

2018-03-30T21:00:00Z

by Joshua W. Knowles, Euan A. Ashley

In a Perspective, Joshua Knowles and Euan Ashley discuss the potential for use of genetic risk scores in clinical practice.



Polycystic ovary syndrome, androgen excess, and the risk of nonalcoholic fatty liver disease in women: A longitudinal study based on a United Kingdom primary care database

2018-03-28T21:00:00Z

by Balachandran Kumarendran, Michael W. O’Reilly, Konstantinos N. Manolopoulos, Konstantinos A. Toulis, Krishna M. Gokhale, Alice J. Sitch, Chandrika N. Wijeyaratne, Arri Coomarasamy, Wiebke Arlt, Krishnarajah Nirantharakumar

Background

Androgen excess is a defining feature of polycystic ovary syndrome (PCOS), which affects 10% of women and represents a lifelong metabolic disorder, with increased risk of type 2 diabetes, hypertension, and cardiovascular events. Previous studies have suggested an increased risk of nonalcoholic fatty liver disease (NAFLD) in individuals with PCOS and implicated androgen excess as a potential driver.

Methods and findings

We carried out a retrospective longitudinal cohort study utilizing a large primary care database in the United Kingdom, evaluating NAFLD rates in 63,120 women with PCOS and 121,064 age-, body mass index (BMI)-, and location-matched control women registered from January 2000 to May 2016. In 2 independent cohorts, we also determined the rate of NAFLD in women with a measurement of serum testosterone (n = 71,061) and sex hormone-binding globulin (SHBG; n = 49,625). We used multivariate Cox models to estimate the hazard ratio (HR) for NAFLD and found that women with PCOS had an increased rate of NAFLD (HR = 2.23, 95% CI 1.86–2.66, p < 0.001), including after adjusting for BMI or dysglycemia. Serum testosterone >3.0 nmol/L was associated with an increase in NAFLD (HR = 2.30, 95% CI 1.16–4.53, p = 0.017 for 3–3.49 nmol/L and HR = 2.40, 95% CI 1.24–4.66, p = 0.009 for >3.5 nmol/L). Mirroring this finding, SHBG <30 nmol/L was associated with increased NAFLD hazard (HR = 4.75, 95% CI 2.44–9.25, p < 0.001 for 20–29.99 nmol/L and HR = 4.98, 95% CI 2.45–10.11, p < 0.001 for <20 nmol/L). Limitations of this study include its retrospective nature, absence of detailed information on criteria used to diagnose PCOS and NAFLD, and absence of data on laboratory assays used to measure serum androgens.

Conclusions

We found that women with PCOS have an increased rate of NAFLD. In addition to increased BMI and dysglycemia, androgen excess contributes to the development of NAFLD in women with PCOS. In women with PCOS-related androgen excess, systematic NAFLD screening should be considered.




Cardiovascular disease and multimorbidity: A call for interdisciplinary research and personalized cardiovascular care

2018-03-27T21:00:00Z

by Kazem Rahimi, Carolyn S. P. Lam, Steven Steinhubl

In a Guest Editorial, Kazem Rahimi, Carolyn Lam, and Steven Steinhubl call for interdisciplinary research and personalized cardiovascular care to better manage patients with cardiovascular disease and multimorbidity.



Comparative analysis of the association between 35 frailty scores and cardiovascular events, cancer, and total mortality in an elderly general population in England: An observational study

2018-03-27T21:00:00Z

by Gloria A. Aguayo, Michel T. Vaillant, Anne-Françoise Donneau, Anna Schritz, Saverio Stranges, Laurent Malisoux, Anna Chioti, Michèle Guillaume, Majon Muller, Daniel R. Witte

Background

Frail elderly people experience elevated mortality. However, no consensus exists on the definition of frailty, and many frailty scores have been developed. The main aim of this study was to compare the association between 35 frailty scores and incident cardiovascular disease (CVD), incident cancer, and all-cause mortality. Also, we aimed to assess whether frailty scores added predictive value to basic and adjusted models for these outcomes.

Methods and findings

Through a structured literature search, we identified 35 frailty scores that could be calculated at wave 2 of the English Longitudinal Study of Ageing (ELSA), an observational cohort study. We analysed data from 5,294 participants, 44.9% men, aged 60 years and over. We studied the association between each of the scores and the incidence of CVD, cancer, and all-cause mortality during a 7-year follow-up using Cox proportional hazard models at progressive levels of adjustment. We also examined the added predictive performance of each score on top of basic models using Harrell’s C statistic. Using age of the participant as a timescale, in sex-adjusted models, hazard ratios (HRs) (95% confidence intervals) for all-cause mortality ranged from 2.4 (95% CI: 1.7–3.3) to 26.2 (95% CI: 15.4–44.5). In further adjusted models including smoking status and alcohol consumption, HR ranged from 2.3 (95% CI: 1.6–3.1) to 20.2 (95% CI: 11.8–34.5). In fully adjusted models including lifestyle and comorbidity, HR ranged from 0.9 (95% CI: 0.5–1.7) to 8.4 (95% CI: 4.9–14.4). HRs for CVD and cancer incidence in sex-adjusted models ranged from 1.2 (95% CI: 0.5–3.2) to 16.5 (95% CI: 7.8–35.0) and from 0.7 (95% CI: 0.4–1.2) to 2.4 (95% CI: 1.0–5.7), respectively. In sex- and age-adjusted models, all frailty scores showed significant added predictive performance for all-cause mortality, increasing the C statistic by up to 3%. None of the scores significantly improved basic prediction models for CVD or cancer. A source of bias could be the differences in mortality follow-up time compared to CVD/cancer, because the existence of informative censoring cannot be excluded.
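Harrell's C statistic, used above to quantify each frailty score's added predictive performance, can be computed from scratch. A minimal sketch on hypothetical toy data (ignoring tied event times): a pair is comparable when one subject's observed event precedes the other's follow-up, and concordant when the earlier event carries the higher risk score.

```python
# Illustrative sketch (not the study's code) of Harrell's concordance index.

def harrells_c(times, events, scores):
    """C statistic: concordant comparable pairs / all comparable pairs."""
    concordant = tied = comparable = 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # comparable pairs are anchored on observed events
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if scores[i] > scores[j]:
                    concordant += 1
                elif scores[i] == scores[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

# Toy data: higher hypothetical frailty score, earlier death
times  = [2, 4, 5, 7, 9]
events = [1, 1, 0, 1, 0]   # 1 = died, 0 = censored
scores = [9, 7, 4, 5, 2]
print(harrells_c(times, events, scores))  # 1.0 (perfectly concordant)
```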

Conclusion

There is high variability in the strength of the association between frailty scores and 7-year all-cause mortality, incident CVD, and cancer. With regard to all-cause mortality, some scores modestly improve predictive ability. Our results show that certain scores clearly outperform others with regard to three important health outcomes in later life. Finally, despite their limitations, frailty scores remain useful for identifying elderly people at risk, and the choice of a frailty score should balance feasibility with performance.




Multimorbidity in patients with heart failure from 11 Asian regions: A prospective cohort study using the ASIAN-HF registry

2018-03-27T21:00:00Z

by Jasper Tromp, Wan Ting Tay, Wouter Ouwerkerk, Tiew-Hwa Katherine Teng, Jonathan Yap, Michael R. MacDonald, Kirsten Leineweber, John J. V. McMurray, Michael R. Zile, Inder S. Anand, Carolyn S. P. Lam, ASIAN-HF authors

Background

Comorbidities are common in patients with heart failure (HF) and complicate treatment and outcomes. We identified patterns of multimorbidity in Asian patients with HF and their association with patients’ quality of life (QoL) and health outcomes.

Methods and findings

We used data on 6,480 patients with chronic HF (1,204 with preserved ejection fraction) enrolled between 1 October 2012 and 6 October 2016 in the Asian Sudden Cardiac Death in Heart Failure (ASIAN-HF) registry. The ASIAN-HF registry is a prospective cohort study, with patients prospectively enrolled from in- and outpatient clinics from 11 Asian regions (Hong Kong, Taiwan, China, Japan, Korea, India, Malaysia, Thailand, Singapore, Indonesia, and the Philippines). Latent class analysis was used to identify patterns of multimorbidity. The primary outcome was defined as a composite of all-cause mortality or HF hospitalization within 1 year. To assess differences in QoL, we used the Kansas City Cardiomyopathy Questionnaire. We identified 5 distinct multimorbidity groups: elderly/atrial fibrillation (AF) (N = 1,048; oldest, more AF), metabolic (N = 1,129; obesity, diabetes, hypertension), young (N = 1,759; youngest, low comorbidity rates, non-ischemic etiology), ischemic (N = 1,261; ischemic etiology), and lean diabetic (N = 1,283; diabetic, hypertensive, low prevalence of obesity, high prevalence of chronic kidney disease). Patients in the lean diabetic group had the worst QoL, more severe signs and symptoms of HF, and the highest rate of the primary combined outcome within 1 year (29% versus 11% in the young group) (p for all <0.001). Adjusting for confounders (demographics, New York Heart Association class, and medication), the lean diabetic (hazard ratio [HR] 1.79, 95% CI 1.46–2.22), elderly/AF (HR 1.57, 95% CI 1.26–1.96), ischemic (HR 1.51, 95% CI 1.22–1.88), and metabolic (HR 1.28, 95% CI 1.02–1.60) groups had higher rates of the primary combined outcome compared to the young group. Potential limitations include site selection and participation bias.

Conclusions

Among Asian patients with HF, comorbidities naturally clustered in 5 distinct patterns, each differentially impacting patients’ QoL and health outcomes. These data underscore the importance of studying multimorbidity in HF and the need for more comprehensive approaches in phenotyping patients with HF and multimorbidity.

Trial registration

ClinicalTrials.gov NCT01633398




Comorbidity health pathways in heart failure patients: A sequences-of-regressions analysis using cross-sectional data from 10,575 patients in the Swedish Heart Failure Registry

2018-03-27T21:00:00Z

by Claire A. Lawson, Ivonne Solis-Trapala, Ulf Dahlstrom, Mamas Mamas, Tiny Jaarsma, Umesh T. Kadam, Anna Stromberg

Background

Optimally treated heart failure (HF) patients often have persisting symptoms and poor health-related quality of life. Comorbidities are common, but little is known about their impact on these factors, and guideline-driven HF care remains focused on cardiovascular status. The following hypotheses were tested: (i) comorbidities are associated with more severe symptoms and functional limitations and subsequently worse patient-rated health in HF, and (ii) these patterns of association differ among selected comorbidities.

Methods and findings

The Swedish Heart Failure Registry (SHFR) is a national population-based register of HF patients admitted to >85% of hospitals in Sweden or attending outpatient clinics. This study included 10,575 HF patients with patient-rated health recorded during first registration in the SHFR (1 February 2008 to 1 November 2013). An a priori health model and sequences-of-regressions analysis were used to test associations among comorbidities and patient-reported symptoms, functional limitations, and patient-rated health. Patient-rated health measures included the EuroQol–5 dimension (EQ-5D) questionnaire and the EuroQol visual analogue scale (EQ-VAS). EQ-VAS score ranges from 0 (worst health) to 100 (best health). Patient-rated health declined progressively from patients with no comorbidities (mean EQ-VAS score, 66) to patients with cardiovascular comorbidities (mean EQ-VAS score, 62) to patients with non-cardiovascular comorbidities (mean EQ-VAS score, 59). The relationships among cardiovascular comorbidities and patient-rated health were explained by their associations with anxiety or depression (atrial fibrillation, odds ratio [OR] 1.16, 95% CI 1.06 to 1.27; ischemic heart disease [IHD], OR 1.20, 95% CI 1.09 to 1.32) and with pain (IHD, OR 1.25, 95% CI 1.14 to 1.38). Associations of non-cardiovascular comorbidities with patient-rated health were explained by their associations with shortness of breath (diabetes, OR 1.17, 95% CI 1.03 to 1.32; chronic kidney disease [CKD], OR 1.23, 95% CI 1.10 to 1.38; chronic obstructive pulmonary disease [COPD], OR 1.84, 95% CI 1.62 to 2.10) and with fatigue (diabetes, OR 1.27, 95% CI 1.13 to 1.42; CKD, OR 1.24, 95% CI 1.12 to 1.38; COPD, OR 1.69, 95% CI 1.50 to 1.91). There were direct associations between all symptoms and patient-rated health, and indirect associations via functional limitations.
Anxiety or depression had the strongest association with functional limitations (OR 10.03, 95% CI 5.16 to 19.50) and patient-rated health (mean difference in EQ-VAS score, −18.68, 95% CI −23.22 to −14.14). HF optimizing therapies did not influence these associations. Key limitations of the study include the cross-sectional design and unclear generalisability to other populations. Further prospective HF studies are required to test the consistency of the relationships and their implications for health.

Conclusions

Identification of distinct comorbidity health pathways in HF could provide the evidence for individualised person-centred care that targets specific comorbidities and associated symptoms.




Transmission of HIV-1 drug resistance mutations within partner-pairs: A cross-sectional study of a primary HIV infection cohort

2018-03-27T21:00:00Z

by Joanne D. Stekler, Ross Milne, Rachel Payant, Ingrid Beck, Joshua Herbeck, Brandon Maust, Wenjie Deng, Kenneth Tapia, Sarah Holte, Janine Maenza, Claire E. Stevens, James I. Mullins, Ann C. Collier, Lisa M. Frenkel

Background

Transmission of human immunodeficiency virus type 1 (HIV-1) drug resistance mutations, particularly that of minority drug-resistant variants, remains poorly understood. Population-based studies suggest that drug-resistant HIV-1 is less transmissible than drug-susceptible viruses. We compared HIV-1 drug-resistant genotypes among partner-pairs in order to assess the likelihood of transmission of drug resistance mutations and investigate the role of minority variants in HIV transmission.

Methods and findings

From 1992 to 2010, 340 persons with primary HIV-1 infection and their partners were enrolled into observational research studies at the University of Washington Primary Infection Clinic (UWPIC). Of the 50 partner-pairs enrolled, 36 (72%) transmission relationships were confirmed by phylogenetic distance analysis of HIV-1 envelope (env) sequences, and 31 partner-pairs enrolled after 1995 met criteria for this study. Drug resistance mutations in the region of the HIV-1 polymerase gene (pol) that encodes protease and reverse transcriptase were assessed by 454-pyrosequencing. In 25 partner-pairs where the transmission direction could be determined, 12 (48%) transmitters had 1–4 drug resistance mutations (23 total) detected in their HIV-1 populations at a median frequency of 6.0% (IQR 1.5%–98.7%, range 1.0%–99.6%). Of 10 major mutations detected in five transmitters at a frequency >95%, 100% (95% CI 69.2%–100%) were detected in recipients. All of these transmitters were antiretroviral (ARV)-naïve at the time of specimen collection. Fourteen mutations (eight major mutations and six accessory mutations) were detected in nine transmitters at low frequencies (1.0%–11.8%); four of these transmitters had previously received ARV therapy. Two (14% [95% CI 1.8%–42.8%]) G73S accessory mutations were detected in both transmitter and recipient. This number is not significantly different from the number expected based on the observed frequencies of drug-resistant viruses in transmitting partners. Limitations of this study include the small sample size and uncertainties in determining the timing of virus transmission and mutation history.
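The 69.2% lower confidence bound for the 10-of-10 transmitted major mutations is consistent with an exact (Clopper–Pearson) binomial interval, which for x = n successes reduces to solving p^n = α/2. A sketch, assuming that method was used:

```python
# Sketch checking the reported CI: 10 of 10 high-frequency mutations
# transmitted gives 100% (95% CI 69.2%-100%). With all successes, the exact
# Clopper-Pearson lower bound is the p satisfying p**n = alpha/2.

def cp_lower_all_successes(n: int, alpha: float = 0.05) -> float:
    """Exact binomial lower bound when every one of n trials succeeds."""
    return (alpha / 2) ** (1 / n)

print(cp_lower_all_successes(10))  # approximately 0.6915, i.e. the reported ~69.2%
```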

Conclusions

Drug-resistant majority variants appeared to be commonly transmitted by ARV-naïve participants in our analysis and may contribute significantly to transmitted drug resistance on a population level. When present at low frequency, no major mutation was observed to be shared between partner-pairs; identification of accessory mutations shared within a pair could be due to transmission, laboratory artifact, or apolipoprotein B mRNA-editing enzyme, catalytic polypeptides (APOBECs), and warrants further study.




Antiviral efficacy of favipiravir against Ebola virus: A translational study in cynomolgus macaques

2018-03-27T21:00:00Z

by Jérémie Guedj, Géraldine Piorkowski, Frédéric Jacquot, Vincent Madelain, Thi Huyen Tram Nguyen, Anne Rodallec, Stephan Gunther, Caroline Carbonnelle, France Mentré, Hervé Raoul, Xavier de Lamballerie

Background

Despite repeated outbreaks, in particular the devastating 2014–2016 epidemic, there is no effective treatment validated for patients with Ebola virus disease (EVD). Among the drug candidates is the broad-spectrum polymerase inhibitor favipiravir, which showed a good tolerance profile in patients with EVD (JIKI trial) but did not demonstrate a strong antiviral efficacy. In order to gain new insights into the antiviral efficacy of favipiravir and improve preparedness and public health management of future outbreaks, we assess the efficacy achieved by ascending doses of favipiravir in Ebola-virus-infected nonhuman primates (NHPs).

Methods and findings

A total of 26 animals (Macaca fascicularis) were challenged intramuscularly at day 0 with 1,000 focus-forming units of Ebola virus Gabon 2001 strain and followed for 21 days (study termination). This included 13 animals left untreated and 13 treated with doses of 100, 150, and 180 mg/kg (N = 3, 5, and 5, respectively) favipiravir administered intravenously twice a day for 14 days, starting 2 days before infection. All animals left untreated or treated with 100 mg/kg died within 10 days post-infection, while animals receiving 150 and 180 mg/kg had extended survival (P < 0.001 and 0.001, respectively, compared to untreated animals), leading to a survival rate of 40% (2/5) and 60% (3/5), respectively, at day 21. Favipiravir inhibited viral replication (molecular and infectious viral loads) in a drug-concentration-dependent manner (P values < 0.001), and genomic deep sequencing analyses showed an increase in virus mutagenesis over time. These results allowed us to identify that plasma trough favipiravir concentrations greater than 70–80 μg/ml were associated with reduced viral loads, lower virus infectivity, and extended survival. These levels are higher than those found in the JIKI trial, where patients had median trough drug concentrations equal to 46 and 26 μg/ml at day 2 and day 4 post-treatment, respectively, and suggest that the dosing regimen in the JIKI trial was suboptimal. The environment of a biosafety level 4 laboratory introduces a number of limitations, in particular the difficulty of conducting blind studies and performing detailed pharmacological assessments. Further, the extrapolation of the results to patients with EVD is limited by the fact that the model is fully lethal and that treatment initiation in patients with EVD is most often initiated several days after infection, when symptoms and high levels of viral replication are already present.

Conclusions

Our results suggest that favipiravir may be an effective antiviral drug against Ebola virus that relies on RNA chain termination and possibly error catastrophe. These results, together with previous data collected on tolerance and pharmacokinetics in both NHPs and humans, support a potential role for high doses of favipiravir for future human interventions.




What is the value of multidisciplinary care for chronic kidney disease?

2018-03-27T21:00:00Z

by Richard J. Fluck, Maarten W. Taal

In a Perspective, Richard Fluck and Maarten Taal discuss the potential value of implementing multidisciplinary care programs for chronic kidney disease.



Cost-effectiveness of multidisciplinary care in mild to moderate chronic kidney disease in the United States: A modeling study

2018-03-27T21:00:00Z

by Eugene Lin, Glenn M. Chertow, Brandon Yan, Elizabeth Malcolm, Jeremy D. Goldhaber-Fiebert

Background

Multidisciplinary care (MDC) programs have been proposed as a way to alleviate the cost and morbidity associated with chronic kidney disease (CKD) in the US.

Methods and findings

We assessed the cost-effectiveness of a theoretical Medicare-based MDC program for CKD compared to usual CKD care in Medicare beneficiaries with stage 3 and 4 CKD between 45 and 84 years old in the US. The program used nephrologists, advanced practitioners, educators, dieticians, and social workers. From Medicare claims and published literature, we developed a novel deterministic Markov model for CKD progression and calibrated it to long-term risks of mortality and progression to end-stage renal disease. We then used the model to project accrued discounted costs and quality-adjusted life years (QALYs) over patients’ remaining lifetime. We estimated the incremental cost-effectiveness ratio (ICER) of MDC, or the cost of the intervention per QALY gained. MDC added 0.23 (95% CI: 0.08, 0.42) QALYs over usual care, costing $51,285 per QALY gained (net monetary benefit of $23,100 at a threshold of $150,000 per QALY gained; 95% CI: $6,252, $44,323). In all subpopulations analyzed, ICERs ranged from $42,663 to $72,432 per QALY gained. MDC was generally more cost-effective in patients with higher urine albumin excretion. Although ICERs were higher in younger patients, MDC could yield greater improvements in health in younger than older patients. MDC remained cost-effective when we decreased its effectiveness to 25% of the base case or increased the cost 5-fold. The program cost less than $70,000 per QALY in 95% of probabilistic sensitivity analyses and less than $87,500 per QALY in 99% of analyses. Limitations of our study include its theoretical nature and limited generalizability to populations at low risk for progression to ESRD. We did not study the potential impact of MDC on hospitalization (cardiovascular or other).
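The net-monetary-benefit figure above follows from standard cost-effectiveness arithmetic. A sketch using only the reported numbers (the incremental cost is back-calculated from the ICER and the QALY gain, so rounding differences versus the reported $23,100 remain):

```python
# Sketch of the cost-effectiveness arithmetic reported above.

def net_monetary_benefit(delta_qaly, delta_cost, wtp):
    """NMB = willingness-to-pay threshold * QALYs gained - incremental cost."""
    return wtp * delta_qaly - delta_cost

delta_qaly = 0.23               # QALYs gained by MDC over usual care
icer = 51_285                   # $ per QALY gained
delta_cost = icer * delta_qaly  # implied incremental cost, roughly $11,800
print(net_monetary_benefit(delta_qaly, delta_cost, wtp=150_000))
# roughly 22,700, close to the reported $23,100 (difference is QALY rounding)
```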

Conclusions

Our model estimates that a Medicare-funded MDC program could reduce the need for dialysis, prolong life expectancy, and meet conventional cost-effectiveness thresholds in middle-aged to elderly patients with mild to moderate CKD.




Time for high-burden countries to lead the tuberculosis research agenda

2018-03-23T21:00:00Z

by Madhukar Pai

In a Guest Editorial, Madhukar Pai discusses the need for high-burden, middle-income countries to take a leading role in tuberculosis research.



HIV treatment eligibility expansion and timely antiretroviral treatment initiation following enrollment in HIV care: A metaregression analysis of programmatic data from 22 countries

2018-03-23T21:00:00Z

by Olga Tymejczyk, Ellen Brazier, Constantin Yiannoutsos, Kara Wools-Kaloustian, Keri Althoff, Brenda Crabtree-Ramírez, Kinh Van Nguyen, Elizabeth Zaniewski, Francois Dabis, Jean d'Amour Sinayobye, Nanina Anderegg, Nathan Ford, Radhika Wikramanayake, Denis Nash, IeDEA Collaboration

Background

The effect of antiretroviral treatment (ART) eligibility expansions on patient outcomes, including rates of timely ART initiation among those enrolling in care, has not been assessed on a large scale. In addition, it is not known whether ART eligibility expansions may lead to “crowding out” of sicker patients.

Methods and findings

We examined changes in timely ART initiation (within 6 months) at the original site of HIV care enrollment after ART eligibility expansions among 284,740 adult ART-naïve patients at 171 International Epidemiology Databases to Evaluate AIDS (IeDEA) network sites in 22 countries where national policies expanding ART eligibility were introduced between 2007 and 2015. Half of the sites included in this analysis were from Southern Africa, one-third were from East Africa, and the remainder were from the Asia-Pacific, Central Africa, North America, and South and Central America regions. The median age of patients enrolling in care at contributing sites was 33.5 years, and the median percentage of female patients at these clinics was 62.5%. We assessed the 6-month cumulative incidence of timely ART initiation (CI-ART) before and after major expansions of ART eligibility (i.e., expansion to treat persons with CD4 ≤ 350 cells/μL [145 sites in 22 countries] and CD4 ≤ 500 cells/μL [152 sites in 15 countries]). Random effects metaregression models were used to estimate absolute changes in CI-ART at each site before and after guideline expansion. The crude pooled estimate of change in CI-ART was 4.3 percentage points (95% confidence interval [CI] 2.6 to 6.1) after ART eligibility expansion to CD4 ≤ 350, from a baseline median CI-ART of 53%; and 15.9 percentage points (pp) (95% CI 14.3 to 17.4) after ART eligibility expansion to CD4 ≤ 500, from a baseline median CI-ART of 57%. The largest increases in CI-ART were observed among those newly eligible for treatment (18.2 pp after expansion to CD4 ≤ 350 and 47.4 pp after expansion to CD4 ≤ 500), with no change or small increases among those eligible under prior guidelines (CD4 ≤ 350: −0.6 pp, 95% CI −2.0 to 0.7 pp; CD4 ≤ 500: 4.9 pp, 95% CI 3.3 to 6.5 pp). For ART eligibility expansion to CD4 ≤ 500, changes in CI-ART were largest among younger patients (16–24 years: 21.5 pp, 95% CI 18.9 to 24.2 pp). 
Key limitations include the lack of a counterfactual and difficulty accounting for secular outcome trends, due to universal exposure to guideline changes in each country.

Conclusions

These findings underscore the potential of ART eligibility expansion to improve the timeliness of ART initiation globally, particularly for young adults.




Primary prevention of cardiovascular disease: The past, present, and future of blood pressure- and cholesterol-lowering treatments

2018-03-20T21:00:00Z

by Maarten J. G. Leening, M. Arfan Ikram

In a Perspective, M. Arfan Ikram and Maarten Leening discuss the evolving approaches to determining cardiovascular risk.



Blood pressure-lowering treatment strategies based on cardiovascular risk versus blood pressure: A meta-analysis of individual participant data

2018-03-20T21:00:00Z

by Kunal N. Karmali, Donald M. Lloyd-Jones, Joep van der Leeuw, David C. Goff Jr., Salim Yusuf, Alberto Zanchetti, Paul Glasziou, Rodney Jackson, Mark Woodward, Anthony Rodgers, Bruce C. Neal, Eivind Berge, Koon Teo, Barry R. Davis, John Chalmers, Carl Pepine, Kazem Rahimi, Johan Sundström, on behalf of the Blood Pressure Lowering Treatment Trialists’ Collaboration

Background

Clinical practice guidelines have traditionally recommended blood pressure treatment based primarily on blood pressure thresholds. In contrast, using predicted cardiovascular risk has been advocated as a more effective strategy to guide treatment decisions for cardiovascular disease (CVD) prevention. We aimed to compare outcomes from a blood pressure-lowering treatment strategy based on predicted cardiovascular risk with one based on systolic blood pressure (SBP) level.

Methods and findings

We used individual participant data from the Blood Pressure Lowering Treatment Trialists’ Collaboration (BPLTTC) from 1995 to 2013. Trials randomly assigned participants to either blood pressure-lowering drugs versus placebo or more intensive versus less intensive blood pressure-lowering regimens. We estimated 5-y risk of CVD events using a multivariable Weibull model previously developed in this dataset. We compared the two strategies at specific SBP thresholds and across the spectrum of risk and blood pressure levels studied in BPLTTC trials. The primary outcome was number of CVD events avoided per persons treated. We included data from 11 trials (47,872 participants). During a median of 4.0 y of follow-up, 3,566 participants (7.5%) experienced a major cardiovascular event. Areas under the curve comparing the two treatment strategies throughout the range of possible thresholds for CVD risk and SBP demonstrated that, on average, a greater number of CVD events would be avoided for a given number of persons treated with the CVD risk strategy compared with the SBP strategy (area under the curve 0.71 [95% confidence interval (CI) 0.70–0.72] for the CVD risk strategy versus 0.54 [95% CI 0.53–0.55] for the SBP strategy). Compared with treating everyone with SBP ≥ 150 mmHg, a CVD risk strategy would require treatment of 29% (95% CI 26%–31%) fewer persons to prevent the same number of events or would prevent 16% (95% CI 14%–18%) more events for the same number of persons treated. Compared with treating everyone with SBP ≥ 140 mmHg, a CVD risk strategy would require treatment of 3.8% (95% CI 12.5% fewer to 7.2% more) fewer persons to prevent the same number of events or would prevent 3.1% (95% CI 1.5%–5.0%) more events for the same number of persons treated, although the former estimate was not statistically significant. In subgroup analyses, the CVD risk strategy did not appear to be more beneficial than the SBP strategy in patients with diabetes mellitus or established CVD.

Conclusions

A blood pressure-lowering treatment strategy based on predicted cardiovascular risk is more effective than one based on blood pressure levels alone across a range of thresholds. These results support using cardiovascular risk assessment to guide blood pressure treatment decision-making in moderate- to high-risk individuals, particularly for primary prevention.[...]



Causes of death and infant mortality rates among full-term births in the United States between 2010 and 2012: An observational study

2018-03-20T21:00:00Z

by Neha Bairoliya, Günther Fink

Background

While the high prevalence of preterm births and its impact on infant mortality in the US have been widely acknowledged, recent data suggest that even full-term births in the US face substantially higher mortality risks than full-term births in European countries with low infant mortality rates. In this paper, we use the most recent birth records in the US to more closely analyze the primary causes underlying mortality rates among full-term births.

Methods and findings

Linked birth and death records for the period 2010–2012 were used to identify the state- and cause-specific burden of infant mortality among full-term infants (born at 37–42 weeks of gestation). Multivariable logistic models were used to assess the extent to which state-level differences in full-term infant mortality (FTIM) were attributable to observed differences in maternal and birth characteristics. Random effects models were used to assess the relative contribution of state-level variation to FTIM. Hypothetical mortality outcomes were computed under the assumption that all states could achieve the survival rates of the best-performing states. A total of 10,175,481 infants born full-term in the US between January 1, 2010, and December 31, 2012, were analyzed. FTIM rate (FTIMR) was 2.2 per 1,000 live births overall, and ranged between 1.29 (Connecticut, 95% CI 1.08, 1.53) and 3.77 (Mississippi, 95% CI 3.39, 4.19) at the state level. No state reached the rates reported in the 6 low-mortality European countries analyzed (FTIMR < 1.25), and 13 states had FTIMR > 2.75. Sudden unexpected death in infancy (SUDI) accounted for 43% of FTIM; congenital malformations and perinatal conditions accounted for 31% and 11.3% of FTIM, respectively.
The largest mortality differentials between states with good and states with poor FTIMR were found for SUDI, with particularly large risk differentials for deaths due to sudden infant death syndrome (SIDS) (odds ratio [OR] 2.52, 95% CI 1.86, 3.42) and suffocation (OR 4.40, 95% CI 3.71, 5.21). Even though these mortality differences were partially explained by state-level differences in maternal education, race, and maternal health, substantial state-level variation in infant mortality remained in fully adjusted models (SIDS OR 1.45, suffocation OR 2.92). The extent to which these state differentials are due to differential antenatal care standards as well as differential access to health services could not be determined due to data limitations. Overall, our estimates suggest that infant mortality could be reduced by 4,003 deaths (95% CI 2,284, 5,587) annually if all states were to achieve the mortality levels of the best-performing state in each cause-of-death category. Key limitations of the analysis are that information on termination rates at the state level was not available, and that causes of deaths may have been coded differentially across states.

Conclusions

More than 7,000 full-term infants die in the US each year. The results presented in this paper suggest that a substantial share of these deaths may be preventable. Potential improvements seem particularly large for SUDI, where very low rates have been achieved in a few states while average mortality rates remain high in most other areas. Given the high mortality burden due to S[...]
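The "best-performing state" counterfactual in the abstract is straightforward arithmetic: avoidable deaths are the sum, over states, of live births times the excess of each state's rate over the benchmark rate, with rates expressed per 1,000 live births. A minimal sketch with made-up state figures (the three states and their births/rates below are illustrative, not the study's data):

```python
# Hypothetical counterfactual: deaths averted if every state achieved
# the lowest observed full-term infant mortality rate (FTIMR).
states = {            # state: (live births over the period, FTIMR per 1,000)
    "A": (120_000, 1.3),
    "B": (300_000, 2.2),
    "C": (80_000, 3.8),
}

best_rate = min(rate for _, rate in states.values())

avoidable = sum(
    births * max(0.0, rate - best_rate) / 1_000
    for births, rate in states.values()
)
print(round(avoidable))  # → 470
```

State A contributes nothing (it sets the benchmark); B contributes 300,000 × 0.9 / 1,000 = 270 and C contributes 80,000 × 2.5 / 1,000 = 200 averted deaths. The study applies the same logic per cause-of-death category rather than to the overall rate.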



Cerebral white matter disease and functional decline in older adults from the Northern Manhattan Study: A longitudinal cohort study

2018-03-20T21:00:00Z

by Mandip S. Dhamoon, Ying-Kuen Cheung, Yeseon Moon, Janet DeRosa, Ralph Sacco, Mitchell S. V. Elkind, Clinton B. Wright

Background

Cerebral white matter hyperintensities (WMHs) on MRI are common and associated with vascular and functional outcomes. However, the relationship between WMHs and longitudinal trajectories of functional status is not well characterized. We hypothesized that whole brain WMHs are associated with functional decline independently of intervening clinical vascular events and other vascular risk factors.

Methods and findings

In the Northern Manhattan Study (NOMAS), a population-based racially/ethnically diverse prospective cohort study, 1,290 stroke-free individuals underwent brain MRI and were followed afterwards for a mean 7.3 years with annual functional assessments using the Barthel index (BI) (range 0–100) and vascular event surveillance. Whole brain white matter hyperintensity volume (WMHV) (as percentage of total cranial volume [TCV]) was standardized and treated continuously. Generalized estimating equation (GEE) models tested associations between whole brain WMHV and baseline BI and change in BI, adjusting for sociodemographic, vascular, and cognitive risk factors, as well as stroke and myocardial infarction (MI) occurring during follow-up. Mean age was 70.6 (standard deviation [SD] 9.0) years, 40% of participants were male, 66% Hispanic; mean whole brain WMHV was 0.68% (SD 0.84). In fully adjusted models, annual functional change was −1.04 BI points (−1.20, −0.88), with −0.74 additional points annually per SD whole brain WMHV increase from the mean (−0.99, −0.49). Whole brain WMHV was not associated with baseline BI, and results were similar for mobility and non-mobility BI domains and among those with baseline BI 95–100. A limitation of the study is the possibility of a healthy survivor bias, which would likely have underestimated the associations we found.
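The fully adjusted GEE estimates imply a simple linear trajectory: Barthel index (BI) falls by 1.04 points per year at the cohort-mean WMHV, with an additional 0.74 points per year for each SD of whole brain WMHV above the mean. A small sketch of that reading (illustrative arithmetic using the point estimates quoted above, not the authors' fitted model; `expected_bi_change` is a hypothetical helper):

```python
# Expected Barthel index change implied by the reported GEE point estimates.
BASE_SLOPE = -1.04      # BI points per year at cohort-mean WMHV
WMHV_SLOPE = -0.74      # additional BI points per year per SD of WMHV

def expected_bi_change(years, wmhv_sd_above_mean):
    """Expected BI change after `years` of follow-up for a participant
    `wmhv_sd_above_mean` SDs above the cohort-mean WMHV."""
    return years * (BASE_SLOPE + WMHV_SLOPE * wmhv_sd_above_mean)

# Over the mean 7.3 y of follow-up, a participant 1 SD above the mean WMHV:
print(round(expected_bi_change(7.3, 1.0), 1))  # → -13.0
```

On the 0–100 BI scale, that is roughly 13 points of expected decline over mean follow-up for a participant 1 SD above the mean WMHV, versus about 7.6 points at the mean, which conveys the size of the association the confidence intervals above quantify.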

Conclusions

In this large population-based study, greater whole brain WMHV was associated with steeper annual decline in functional status over the long term, independently of risk factors, vascular events, and baseline functional status. Subclinical brain ischemic changes may be an independent marker of long-term functional decline.
