Dr Linda Calabresi

GP; Medical Editor, Healthed
Dr Linda Calabresi is an Australian-based health professional. She is a GP (general practitioner) with practices in North Ryde and Artarmon.

Clinical Articles

In what will be seen as a blow to cryptic crossword compilers the world over, it appears wealth is a better determinant than education of whether you keep your marbles. In a UK prospective study of over 6000 adults aged over 65 years, researchers found that, over a 12-year follow-up period, people in the lowest quintile of socioeconomic status were almost 70% more likely to develop dementia than those in the top fifth. Depressingly, this finding held true regardless of education level.

“This longitudinal cohort study found that wealth in late life, but not education, was associated with increased risk of dementia, suggesting people with fewer financial resources were at higher risk,” the study authors said.

On further analysis, researchers found the association between wealth, or the lack thereof, and dementia was even more pronounced among the younger participants in the cohort.

So what did the researchers think was behind the link between poverty and dementia? One explanation was that having money allows access to more mentally stimulating environments, including cultural resources (reading, theatre and so on) and larger social networks that might help preserve cognitive function. On the flip side, poverty (or ‘persistent socioeconomic disadvantage’, as the authors describe it) affects physiological functioning, increasing the risk of depression, vascular disease and stroke – all known risk factors for dementia. Other factors such as poor diet and lack of exercise also appear to be more common among poorer people in the community.

All this seems fairly logical, but what of the lack of a protective effect of education? The researchers think this might be a particularly British phenomenon in this age group. “This might be a specific cohort effect in the English population born and educated in the period surrounding World War II,” they suggested.
A number of other studies have shown different results, with some – including the well-respected Canadian Study of Health and Aging – showing the complete opposite: education protects against dementia. Consequently, the authors of this study, published in JAMA Psychiatry, hypothesise that this cohort of patients may have been unable to access higher education because of military service or financial restrictions, but were able to access intellectually challenging jobs after the war.

All in all, the study is an observational one, and it is possible a number of confounding factors, from smoking to availability of medical care, play a role in why poorer people are at greater risk of dementia. And while the researchers are not advocating older people give up their bridge game and just buy lottery tickets, it would seem money is useful, if not for happiness, then at least for preserving brain power.

Ref: JAMA Psychiatry doi:10.1001/jamapsychiatry.2018.1012

Clinical Articles

Almost three quarters of men with low grade prostate cancer may not be adequately monitored, a recent Victorian study suggests. According to data from the Prostate Cancer Outcomes Registry – Victoria, only 26.5% of the more than 1600 men who had low risk prostate cancer had follow-up investigations consistent with standard active surveillance protocols in the two years after their diagnosis. Specifically, researchers were investigating whether these men adhered to a schedule of at least three PSA measures and at least one biopsy in the two years post-diagnosis.

While the study authors concede the clinical consequences of this shortcoming are as yet unknown, the finding is still of concern. “If [these men] are not being followed appropriately according to [active surveillance] protocols, men may miss the opportunity to be treated with curative intent,” they wrote in the MJA.

Active surveillance is increasingly the management of choice for men with low risk prostate cancer. In Victoria, 60% of men diagnosed with this grade of cancer are now managed with active surveillance, the study authors said. A major issue with active surveillance as a management option is that the optimal timing of follow-up investigations has not been strictly defined, resulting in several different protocols and guidelines being developed worldwide. This, in part, was the impetus for the development of the Victorian prostate cancer registry, which was established in 2009 ‘to improve knowledge of patterns of care and outcomes for men diagnosed with prostate cancer’. Currently the Australian protocol for active surveillance is based on consensus opinion, and the schedule of three PSA tests and a repeat biopsy within two years has been widely accepted as standard care.
The finding that 73.5% did not receive monitoring in accordance with this protocol reflected adherence levels that were among the worst when compared with similar studies around the world, and the study authors suggested the reason was likely to be multifactorial. The non-compliance “may reflect patient-, clinician- and health service-related factors”, they wrote. Patients may avoid biopsy because of pain, clinicians may delay testing based on a patient’s comorbidities, or health services may have fewer resources for sending reminders or pursuing patients who miss appointments, the study authors suggested.

Nonetheless, efforts needed to be made to ensure men with low grade prostate cancer are not disadvantaged in terms of health outcomes if they opt for active surveillance as their management strategy. “To improve adherence, a multifaceted approach may be required, including an education campaign that highlights the need for men to undergo regular PSA assessment and prostate biopsy,” they concluded.

Ref: MJA doi:10.5694/mja17.00559

Clinical Articles

Surviving childhood cancer is a major win in anyone’s language. However, it is well known that, even as adults, these survivors are at increased risk of dying younger than their peers with no history of cancer. Now, it would appear, these people can do something about managing that sword of Damocles.

According to a recent study published in JAMA Oncology, regular vigorous exercise in early adulthood is associated with a lower risk of mortality in adult survivors of childhood cancer. And the study authors believe the finding could have a significant impact, given the number of children who now survive cancer. “These findings may be of importance for the large and rapidly growing population of adult survivors of childhood cancer at substantially higher risk of mortality due to multiple competing risks,” they said.

The study was a multicentre cohort analysis of data from over 15,400 adults who had had cancer diagnosed at one of a number of paediatric tertiary hospitals in North America before the age of 21. Interviews were conducted at baseline (median age almost 26 years) at which, among a range of other parameters, levels of exercise were assessed. These patients were then followed for up to 15 years (median follow-up 9.6 years).

Overall, after adjusting for chronic health conditions and treatment exposures, the researchers found an inverse association between exercise and all-cause mortality. More compelling was the analysis of a subset of almost 5700 survivors, which showed that increased exercise over an eight-year period was associated with a 40% reduction in all-cause mortality compared with low levels of exercise. Of course, the critical question is what constitutes vigorous or increased exercise. How much exercise does a person need to do to qualify for this benefit?
The question these patients were asked was ‘on how many of the past seven days did you exercise or do a sport that made you sweat or breathe hard?’ This was considered vigorous exercise, and the mortality benefit was seen in people who exercised to this level for at least 60 minutes a week. But the benefit was not entirely dose-dependent. It appeared that vigorously exercising for about an hour a day, five days a week (eg a brisk 60-minute walk) was the most advantageous in terms of mortality; beyond that the benefit was attenuated.

It is well known that, in the general population, regular exercise reduces all-cause and cause-specific mortality; however, there are far fewer studies looking at its benefit among cancer survivors, especially younger-age cancer survivors. “To this end, our findings… significantly extend the current evidence base and arguably provide the best available epidemiological evidence to support the endorsement of exercise for cancer survivors,” the study authors said.

Ref: JAMA Oncology doi:10.1001/jamaoncol.2018.2254

Clinical Articles

Findings from a newly published study are likely to silence those who suggest doctors need to return to wearing white coats to improve patient respect. According to US researchers, patient perception of a doctor’s capability, trustworthiness and reliability is not affected by the presence of visible tattoos and non-traditional piercings (on the doctor, not the patient). “Physician tattoos and facial piercings were not factors in patients’ evaluations of physician competence, professionalism or approachability,” the study authors said.

This interesting study, published in the Emergency Medicine Journal, involved surveying over 900 patients who had attended the emergency department of a large teaching hospital in the US. The patients were not told the purpose of the survey; they were simply asked to rate their experience, including the care they received from their attending doctor, across a range of domains including competence, professionalism, caring, approachability, trustworthiness and reliability.

The doctors acted as their own controls, meaning that on some shifts they worked without any ‘exposed body art’, on other shifts they were ‘pierced’ (hoop earrings for men or fake nasal studs for women) and on other shifts they were ‘tattooed’ (with a temporary standardised black tribal tattoo around the arm). Sometimes the doctors were both pierced and tattooed. Nurses asked the patients to complete the survey and did provide prompts to remind the patient of the doctor who had attended them, but the prompt was along the lines of ‘the young male doctor with red hair’ rather than drawing attention to the tattoo or piercing.

More than 75% of the time the patient gave the attending doctor the top rating across all the domains surveyed, regardless of appearance. The findings appear to be in contrast with previous research, which has found patients prefer their physicians to be traditionally dressed.
However, the authors of this study suggest that the evidence to date has been limited by a lack of patient blinding to the study purpose. What’s more, they suggest that existing policies regarding visible body art on doctors are likely to be driven by administrator preferences rather than data on patient satisfaction.

There were a number of limitations to the study, in particular the small number of doctors involved (seven), and because the patients weren’t specifically asked about the body art we don’t know whether they actually didn’t care about it or whether they were able to overcome their disapproval of it. Given the trial took place in an emergency department, the results cannot necessarily be extrapolated to a community setting such as general practice.

Nonetheless, the results will be of interest given the increasing popularity of body art, especially tattoos, with the study authors citing research showing that, in 2006, 24% of young and middle-aged adults had at least one tattoo. Concerns that a doctor’s visible body art has a negative impact on a patient’s perception of their professionalism, or on patient satisfaction, do not appear to be founded, the researchers concluded.

Ref: doi:10.1136/emermed-2017-206887

Clinical Articles

New study findings confirm what many parents already believe: introducing solids early helps babies sleep through the night. The UK randomised trial, published in JAMA Pediatrics, showed the early introduction of solids into an infant’s diet (from three months of age) was associated with longer sleep duration, less frequent waking at night and a reduction in parents reporting major sleep problems in their child.

Researchers analysed data collected as part of the Enquiring About Tolerance study, which included ongoing parent-reported assessments on over 1300 infants from England and Wales who were exclusively breastfed to three months of age. At baseline, there were no significant differences in sleeping patterns between those infants who were then introduced to solids early and those who remained exclusively breastfed to six months, as per the World Health Organisation recommendation. However, at six months the difference between the groups was significant. “At age six months… [those babies who had started solids] were sleeping 17 minutes longer at night, equating to two hours of extra sleep per week, and were waking two fewer times at night per week,” the study authors said. “Most significantly, at this point, [early introduction group] families were reporting half the rate of very serious sleep problems,” they added, saying the results confirm the link between poor infant sleep and parental quality of life.

And the findings contradict previous claims that a baby’s poor sleep habits and frequent waking have nothing to do with hunger. The study found that those babies with the highest weight gain between birth and three months (when they were enrolled in the study) were the most likely to be waking at night. “This is consistent with the idea that their rapid weight gain was leading to an enhanced caloric and nutritional requirement, resulting in hunger and disrupted sleep,” they said. Overall, it seems that the study has simply proved what many parents had already suspected.
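The quoted figure of “two hours of extra sleep per week” follows directly from the 17 minutes a night; a quick check of that arithmetic (our calculation, not the paper’s):

```python
# Sanity check of the reported figures: 17 extra minutes of sleep per night,
# accumulated over a week, comes to roughly two hours.
extra_min_per_night = 17
extra_hours_per_week = extra_min_per_night * 7 / 60

print(f"{extra_hours_per_week:.1f} hours/week")  # → 2.0 hours/week
```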
The study authors referred to previous research showing that, despite WHO and British guidelines recommending babies be exclusively breastfed to six months, three quarters of British mothers introduce solids before five months, and 26% report night waking as influencing this decision. Interestingly, recent evidence with regard to reducing the risk of allergy and atopy has seen some organisations, including our own Australasian Society of Clinical Immunology and Allergy, recommend infants be introduced to solids earlier than six months. The authors of this study suggest that parents following these newer guidelines might find they get the added benefit of more sleep. “With recent guidelines advocating introducing solids from age four to six months in some or all infants, our results suggest that improved sleep may be a concomitant benefit,” they concluded.

Ref: JAMA Pediatr. doi:10.1001/jamapediatrics.2018.0739

Clinical Articles

All newly diagnosed hypertensive patients should be screened for primary aldosteronism before they are started on treatment, Australian experts suggest in the latest issue of the MJA. “Primary aldosteronism is common, specifically treatable, and associated with significant cardiovascular morbidity and mortality,” say researchers Dr Jun Yang, Professor Peter Fuller and Professor Michael Stowasser.

They refer to a recent systematic review of over 30 studies, which found that among people with severe or resistant hypertension (systolic BP >180 mmHg and diastolic BP >110 mmHg), 16.4% had primary aldosteronism. Admittedly, these studies were carried out in tertiary centres. There have been far fewer studies on the issue conducted in primary care, with somewhat mixed results, but one small Australian study suggested 11.5% of people with significant hypertension in the general practice setting had primary aldosteronism.

But it is not only patients with severe hypertension who need to be considered for primary aldosteronism screening, the authors suggest. They point to an Italian study in which over 1600 randomly selected GP patients were screened for primary aldosteronism, finding a prevalence of 5.9%. Importantly, 45% of these had mild hypertension (BP 140-159/90-99 mmHg). According to the article authors, these patients would most likely have remained undiagnosed if not for the study, and the effect of the untreated aldosterone excess would most likely have led to poor blood pressure control and increased cardiovascular, renal and metabolic morbidity in the long term. In other words, identifying these patients early in the course of the disease could allow more appropriate treatment and ultimately avoid the end-organ damage that is more likely to occur if diagnosis is delayed until after the development of severe hypertension.
“Targeted treatment of [primary aldosteronism] using surgery or mineralocorticoid receptor antagonists, such as spironolactone and eplerenone, rather than non-specific antihypertensive medications, can reverse the underlying cardiovascular pathology,” they said.

The recommended biochemical screening tool for primary aldosteronism is the aldosterone to renin ratio, which is elevated in this condition because plasma aldosterone is normal or elevated while renin is suppressed. The experts suggest screening prior to commencing antihypertensive therapy, as many of these drugs, including beta blockers, calcium channel blockers, ACE inhibitors, ARBs and diuretics, interfere with the aldosterone to renin ratio. The test isn’t perfect, they admit, as it can be influenced by a number of confounders including salt intake and age, but as a screening tool it has proven, in trials both in Australia and internationally, to be very useful, resulting in significantly increased numbers of patients diagnosed.

Current Australian hypertension guidelines recommend clinicians consider primary aldosteronism in patients with hypertension, particularly those with moderate to severe or treatment-resistant hypertension. But, as the article authors point out, given the prevalence of primary aldosteronism and the health burden this cardiovascular risk factor imposes on both the Australian population and the economy, maybe it is time to consider screening all newly diagnosed hypertensive patients for this condition before the commencement of non-specific antihypertensive therapy. “This diagnostic strategy should lead to significant individual and population health and economic impacts as a result of many patients with hypertension being offered the chance of curative or simpler treatment at an early stage of their disease.”

Ref: MJA doi:10.5694/mja17.00783
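The logic of the aldosterone to renin ratio described above can be sketched in a few lines. This is illustrative only: the units and the cut-off vary between laboratories and assays, and the threshold used here is a placeholder, not a clinical recommendation.

```python
# Illustrative sketch of aldosterone-to-renin ratio (ARR) screening logic.
# In primary aldosteronism, aldosterone is normal-to-high while renin is
# suppressed, so the ratio rises. The cut-off below is a hypothetical
# placeholder; real thresholds are assay- and laboratory-dependent.
def arr_screen_positive(aldosterone_pmol_l, renin_mu_l, cutoff=70):
    """Return True if the aldosterone/renin ratio exceeds the cut-off."""
    return (aldosterone_pmol_l / renin_mu_l) > cutoff

# A suppressed renin with mid-range aldosterone drives the ratio up:
print(arr_screen_positive(aldosterone_pmol_l=400, renin_mu_l=2))   # → True
print(arr_screen_positive(aldosterone_pmol_l=400, renin_mu_l=20))  # → False
```

The second call shows why suppressed renin, rather than high aldosterone alone, is what makes the ratio a useful screen.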

Clinical Articles

Teenagers who are constantly checking their phones are more likely to develop ADHD symptoms than their less social-media-engaged peers, US researchers say. In what the study authors say is the first longitudinal study investigating the issue, researchers found that the frequency of digital media use among over 2500 non-ADHD 15- and 16-year-olds was significantly associated with the subsequent development of ADHD symptoms over a two-year period of follow-up. A high frequency of media activity – most commonly checking their smartphone – was associated with a 10% increased likelihood of developing inattentive and hyperactive-impulsive symptoms in this teenage cohort. Associations were significantly stronger in boys and in participants with more mental health symptoms, such as depressive symptoms and delinquent behaviours.

But while the association was statistically significant, further research is needed to determine whether the digital media use was the cause of the problem, the US authors said in JAMA. “The possibility that reverse causality or undetected baseline ADHD symptoms influenced the association cannot be ruled out,” they said.

To date, the potential risks of intense engagement with social media are largely an evidence-free zone, they said. Prior longitudinal studies on this topic have most commonly involved computers, televisions and video-game consoles. But the engagement associated with these devices is markedly different from that seen with modern media platforms, especially in terms of accessibility, operating speed, level of stimulation and potential for high-frequency exposure. And as an accompanying editorial points out, television and gaming are sporadic activities, whereas the current widespread use of smartphones means social media is now always close at hand. “In 2018, 95% of adolescents reported having access to a smartphone (a 22-percentage-point increase from 2014-2015), and 45% said they were online ‘almost constantly’,” the US editorial author explained.
This instant access to highly engaging content is designed to be habit-forming. The effects of current social media engagement also go beyond exposure to violence in games and the displacement of other activities, which were the major issues in the past. Social media today has been designed to engage the user for longer periods and to reward repeated use. New behaviours to consider include frequent attention shifts and constant media multitasking, which might interfere with a person’s ability to focus on a single task, especially a non-preferred task. It is also hypothesised that the ready availability of desired information may affect impulse control (no waiting is required). And the ‘always-on’ mentality may be depriving young brains of ‘down time’ that allows the mind to rest, tolerate boredom and even practise mindfulness.

The study researchers were keen to emphasise that their research findings are a long way from proving digital media increases the risk of ADHD symptoms, and even if they did, the public health and clinical implications are uncertain. The editorial, however, was more enthusiastic about the study’s implications. “With more timely digital media research, parents may feel more confident in the evidence underlying recommendations for how to manage the onslaught of media in their households,” it said. The editorial author suggested the findings support American Academy of Pediatrics guidelines that recommend adolescents focus on activities that have been proven to promote ‘executive functioning’, such as sleep, physical activity, distraction-free homework and positive interactions with family and friends – the implication being: switch the phone off.

Ref: JAMA 2018; 320(3): 255-263 doi:10.1001/jama.2018.8931; JAMA 2018; 320(3): 237-239

Clinical Articles

Effectively treating depression in patients who have just experienced a heart attack will not only improve their quality of life, it could well improve their survival, new research from Korea suggests. Among 300 patients who had recently experienced acute coronary syndrome and had depression as a comorbidity, those randomised to a 24-week course of escitalopram were 30% less likely to have a major adverse cardiac event over a median of eight years than those given placebo. In actual numbers, 40.9% (61) of the 149 patients given escitalopram had a major adverse event (including cardiac death, MI or PCI) over the period of follow-up, compared with 53.6% (81) of the 151 patients in the placebo group, according to the study findings published in JAMA.

It has long been known that depression is a common comorbidity of acute coronary syndrome. It is also known that patients with this comorbidity tend to have worse long-term cardiac outcomes than those who are depression-free. But what has yet to be proven is the benefit of treating this depression, at least in terms of mitigating the increased risk of a poor cardiac outcome. To date, studies on the topic have yet to prove a significant benefit, with research providing conflicting results.

According to the study authors, in this trial there was a significant correlation between improvement in the depression and better protection against major cardiac events. Even when they excluded those people who were still taking the antidepressant one year after the acute coronary syndrome, the protective effect was still present. Consequently, they hypothesised that the protection was more a reflection of the successful treatment of the depression than of the particular medication. This was consistent with a trend seen in previous research using different medications and treatments.
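The reported event counts also let us put the result in absolute terms. The following back-of-envelope calculation is ours, not the paper’s:

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT),
# derived from the event counts reported in the JAMA escitalopram trial.
events_placebo, n_placebo = 81, 151            # 53.6% had a major adverse cardiac event
events_escitalopram, n_escitalopram = 61, 149  # 40.9% had a major adverse cardiac event

risk_placebo = events_placebo / n_placebo
risk_treated = events_escitalopram / n_escitalopram

arr = risk_placebo - risk_treated  # absolute risk reduction
nnt = 1 / arr                      # number needed to treat

print(f"ARR = {arr:.1%}, NNT ≈ {nnt:.0f}")
# → ARR = 12.7%, NNT ≈ 8
```

In other words, over the follow-up period, roughly one major adverse cardiac event was avoided for every eight patients treated.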
However, the better result could also be because escitalopram is more effective in treating depression associated with acute coronary syndrome than the agents studied previously, the authors suggested. “Escitalopram may have modifying effects on disease prognosis in ACS-associated depressive disorder through reduction of depressive symptoms,” they wrote.

There were a number of caveats with regard to this study that the authors said needed to be considered. These included the fact that the cohort was entirely Korean, which may have introduced an ethnic bias; that the depressive symptoms were less severe than in previous studies (though this was more likely to lead to the effect being underestimated); and that the severity of the underlying heart disease (namely heart failure) was relatively low. Nonetheless, the researchers were able to conclude that among patients with depression who had had a recent acute coronary event, 24 weeks of treatment with escitalopram significantly reduced the risk of dying or having a further adverse cardiac event after a median of 8.1 years. How generalisable these findings are will need to be the subject of further research.

Ref: JAMA 2018; 320(4): 350-357 doi:10.1001/jama.2018.9422

Clinical Articles

Children who persistently or frequently experience high anxiety need help, says psychologist Jennie Hudson, Professor and Director of the Centre for Emotional Health at Sydney’s Macquarie University. “There has been a tendency to believe kids are going to grow out of [their anxiety],” she said. In the past, anxiety in children was believed to be a normal part of growing up. In fact, in the first Australian Child and Adolescent Mental Health survey in 1998, the question of anxiety disorders in children was not included at all. But the reality is, anxious children grow into anxious teenagers and then into anxious adults, and by then it is not only harder to treat, it is also too late to reverse much of the negative impact the condition has had on these people’s lives, she explained in an interview following her presentation on the subject at Healthed’s Mental Health in General Practice evening seminar held recently in Sydney.

“Children need strategies to manage their anxiety now,” she said. “We, as health professionals, need to be encouraging parents to seek help if they feel their child’s anxiety is interfering with their life.”

For GPs wondering about the most appropriate advice to give parents of anxious children, a key principle is to encourage children not to avoid tasks or situations they fear. Parents need to support their child in facing the situations that make them afraid, even if it is ‘bit by bit’, and celebrate each time they manage to accomplish even part of a feared task, be it at school, sport or socially. “There is a natural tendency for a parent to protect their child from feeling anxious – they will answer for the child who gets worried about replying, or say they don’t need to give the speech in class that is making them nervous, for example,” but this tends to fuel the anxiety. By enabling the child to practise avoidance, the parent is inadvertently endorsing the child’s belief that this is something to be feared.
Another important principle in managing anxiety in children is to try to get the child to identify their worried thoughts – what it is that they fear is going to happen. Commonly a child will catastrophise the consequences of a situation, for example “failing this maths test means my life will be ruined”. Once the fear is described, the parent and child can discuss, logically, why this feared consequence is unlikely to happen. “We call it ‘detective thinking’ – encouraging the child to develop strategies to undertake a realistic appraisal of the situation,” Professor Hudson explained.

In terms of resources available for parents, there are a number Professor Hudson recommends. “Helping Your Anxious Child: A Step-by-Step Guide for Parents”, written by Australian psychologists Ronald Rapee, Ann Wignall, Susan Spence, Vanessa Cobham and Heidi Lyneham, is practical, relevant and up to date. Another good option is “Helping Your Child with Fears and Worries 2nd Edition: A self-help guide for parents”, written by UK experts in anxiety Cathy Creswell and Lucy Willetts.

As well as written material, there are some online programs and resources available, Professor Hudson said. Macquarie University, Sydney has developed a couple of online programs: one called Cool Kids for 7-16-year-olds (https://www.mq.edu.au/about/campus-services-and-facilities/hospital-and-clinics/centre-for-emotional-health-clinic/programs-for-children-and-teenagers#Online) and another called Cool Little Kids (https://coollittlekids.org.au/) for children aged seven and under. Another good, evidence-based online program is Brave (http://www.brave-online.com/), designed for 7-16-year-olds and developed by researchers at the University of Queensland.
Useful fact sheets for parents are available from Macquarie University’s Centre for Emotional Health website (https://www.mq.edu.au/research/research-centres-groups-and-facilities/healthy-people/centres/centre-for-emotional-health-ceh/resources) as well as from Raising Children: The Australian parenting website (www.raisingchildren.net.au).

For children with anxiety, CBT is recommended as the first line of treatment. As the risk of adverse effects with CBT is negligible, it is recommended that treatment in children be commenced early on the basis of the concern of the parent, carer or health professional. There are a number of reliable screening measures for anxiety in children, including the Spence Children’s Anxiety Scale (www.scaswebsite.com). The SCAS has parent, child and teacher reports, along with Australian norms for 6-18-year-olds. The DASS21 is a reliable screening and monitoring tool for older adolescents.

Currently in Australia only two of the SSRIs, fluvoxamine and sertraline, are approved for use in children and adolescents with obsessive compulsive disorder, Professor Hudson said. “There have been trials in Australia and the US combining CBT and sertraline. In our study, combining CBT and sertraline did not improve outcomes over and above CBT and placebo for children and adolescents with anxiety,” she added.

Clinical Articles

Low density lipoprotein cholesterol (LDL-C) is the well-known culprit in terms of cardiovascular risk. Courtesy of a large meta-analysis of statin trials done in 2010 (the Cholesterol Treatment Trialists’ Collaboration, CTTC), we know that people starting with higher LDL-C levels (approximately 3.4 mmol/L) lower their risk of a major adverse vascular event by 22% for every 1 mmol/L reduction in LDL-C.

But what happens once your LDL-C level is lower? Can you continue to increase your protection by lowering it further? Does the beneficial effect plateau at a certain level? Or, worse still, can very low LDL-C levels actually cause harm? A new meta-analysis just published in JAMA Cardiology has gone some way to answering these questions.

The researchers analysed data from the 26 statin studies in the CTTC, as well as three large trials of non-statin cholesterol-lowering therapy, looking at those patients who had an LDL-C level of 1.8 mmol/L or less at baseline. They found the cardioprotective benefits continued as LDL-C levels declined even further. “We found consistent clinical benefit from further LDL-C lowering in patient populations starting as low as a median of 1.6 mmol/L and achieving levels as low as a median of 0.5 mmol/L.” What’s more, the incremental benefit was of almost identical magnitude to that seen when the LDL-C levels were higher: a 21% relative risk reduction per 1 mmol/L reduction in LDL-C through this range. “This relative risk reduction is virtually the same as the 22% reduction seen in the overall CTTC analysis in which the starting LDL-C was nearly twice as high,” they said.

And even though very low cholesterol levels have been rumoured to be associated with everything from cancer to dementia, across all these studies there were no offsetting safety concerns with LDL-C lowering, even when extremely low levels were recorded – levels lower than those seen in newborns.
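One way to see what a constant per-mmol/L relative risk reduction implies is that the benefit compounds multiplicatively across larger LDL-C reductions. The sketch below is our illustration, assuming the log-linear dose-response relationship these meta-analyses report:

```python
# Sketch: a fixed relative risk reduction per mmol/L of LDL-C lowering
# compounds multiplicatively over larger reductions (assumes the
# log-linear dose-response reported by the meta-analyses).
def relative_risk(ldl_drop_mmol, rrr_per_mmol=0.21):
    """Relative risk of a major vascular event after lowering LDL-C
    by ldl_drop_mmol mmol/L, at a constant per-mmol/L risk reduction."""
    return (1 - rrr_per_mmol) ** ldl_drop_mmol

# A 1 mmol/L drop leaves 79% of baseline risk; 2 mmol/L leaves ~62%.
print(f"{relative_risk(1):.2f}")  # → 0.79
print(f"{relative_risk(2):.2f}")  # → 0.62
```

On this assumption, each additional 1 mmol/L reduction trims risk by the same proportion of whatever risk remains, which is why the benefit seen at very low starting levels matched that seen at higher levels.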
Given the weight of benefit over risk, the study authors suggest the current targets for LDL-C could be lowered further – even to as low as 0.5 mmol/L – to reduce cardiovascular risk. This suggestion is supported by an accompanying editorial, in which the author, Dr Antonio Gotto, a New York cardiologist, predicts the findings will be incorporated into the revision of the American Heart Association national cholesterol guidelines that is currently underway. He said the study findings would provide much-needed evidence to help clinicians manage patients with these extremely low achieved cholesterol levels, which until recently were very rare. “Whether one calls it a target or a threshold, practicing physicians need some guidance as they venture into achieved levels of LDL-C that are as foreign as travel to outer space. I have confidence that the new guidelines will be closer to a global positioning system map rather than just a compass and the stars,” he concluded.

Ref: JAMA Cardiol. Published online August 1, 2018. doi:10.1001/jamacardio.2018.2258


Salt may have been unfairly targeted as a killer in the healthy heart stakes, according to newly published research. The observational study of over 90,000 people in 300 communities across 18 countries found that sodium consumption was not associated with an increase in health risks unless the average daily consumption was excessive – more than 5 g/day of sodium, or about 2.5 teaspoons of salt. And this high average daily sodium intake was mostly seen in China, with only about 15% of communities outside China exceeding the 5 g/day level.

As part of this ongoing Prospective Urban Rural Epidemiology (PURE) study, participants aged 35 to 70 years were assessed at baseline and then followed for an average of 8.1 years, over which time the occurrence of any major cardiovascular events or death was recorded. What the researchers found was that the risk of hypertension and stroke was only increased in communities where the average daily sodium intake was greater than 5 g. Perhaps unexpectedly, this higher sodium intake was also associated with lower rates of myocardial infarction and total mortality. Furthermore, the research found that very low levels of sodium intake were harmful, being associated with an increased risk of cardiovascular disease and mortality.

The findings fly in the face of the current WHO guidelines, which recommend as a global approach that populations reduce their sodium intake to below 2 g/day. However, no community in the study came close to achieving this target. In fact, no community in the study had an average sodium intake of less than 3 g/day, based on morning fasting urine samples from the participants. “Sodium intake was associated with cardiovascular disease and strokes only in communities where mean intake was greater than 5g/day. A strategy of sodium reduction in these communities and countries but not in others might be appropriate,” the Canadian study authors said.
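For readers converting between the study’s sodium figures and kitchen salt: table salt (sodium chloride) is roughly 40% sodium by mass, which is where the 2.5-teaspoon figure comes from. A quick sketch of the conversion (the 5 g-per-teaspoon weight is a rough kitchen approximation we are assuming, and it varies with how the salt is packed):

```python
# Convert the study's 5 g/day sodium threshold into grams and
# teaspoons of table salt. Sodium is ~39.3% of NaCl by mass
# (Na 22.99 g/mol out of NaCl's 58.44 g/mol).
SODIUM_FRACTION = 22.99 / 58.44           # ~0.393
GRAMS_SALT_PER_TEASPOON = 5.0             # rough kitchen approximation

sodium_g = 5.0                            # the PURE analysis threshold
salt_g = sodium_g / SODIUM_FRACTION       # grams of salt containing 5 g sodium
teaspoons = salt_g / GRAMS_SALT_PER_TEASPOON

print(f"{sodium_g} g sodium is about {salt_g:.1f} g salt, "
      f"or roughly {teaspoons:.1f} teaspoons")
```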
But before we all go and stock up on our Saxa, an accompanying editorial sounds a word of caution. While acknowledging the findings that ‘normal’ salt intake appeared to be at least health-neutral if not beneficial, the editorial authors remind us that the study is observational and has not taken into consideration a number of potential confounders such as diet. Without taking these confounders into account, one can’t assume that simply decreasing salt intake in people at high risk of stroke, or increasing it in people at risk of a heart attack, will work, they said. “Nevertheless the findings are exceedingly interesting and should be tested in a randomised controlled trial,” they concluded, adding that such a trial, to be conducted in a US federal prison population, had been proposed.

Ref: Lancet 2018; 392(10146): 496-506 and 456-458


Among low-risk, nulliparous women, inducing labour at 39 weeks is not only at least as safe as letting nature run its course, it also reduces the risk of having a Caesarean, according to US research. In the randomised trial involving over 6000 women, those assigned to ‘expectant management’ ended up with a median gestational age of exactly 40 weeks – not a huge difference from the median gestational age of 39.3 weeks in the induction group.

However, the main aim of the study was to determine whether induction at 39 weeks resulted in more adverse perinatal outcomes, including conditions such as perinatal death, need for respiratory support, Apgar scores of less than three at five minutes, intracranial haemorrhage and the like. This potential association has been the concern that has dictated what is currently common obstetric practice. “When gestation is between 39 weeks 0 days and 40 weeks 6 days, common practice has been to avoid elective labour induction because of a lack of evidence of perinatal benefit and concern about a higher frequency of Caesarean delivery and other possible adverse maternal outcomes, particularly among nulliparous women,” the study authors said in the New England Journal of Medicine.

What they found in their study, however, was that these adverse perinatal outcomes occurred in only 4.3% of the babies born in the induction group compared with 5.4% of those born to mothers who went into labour naturally – a relative risk reduction of about 20%. And even though the induction group tended to have longer labours, they had quicker recovery times and shorter hospital stays. In terms of maternal outcomes, induction at 39 weeks was associated with a significant reduction in the risk of both Caesarean section and hypertensive disorders of pregnancy. The researchers estimated one Caesarean would be avoided for every 28 low-risk, first-time mothers induced at 39 weeks.
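The two headline numbers can be sanity-checked from the figures quoted above (event rates as reported here; the number needed to treat of 28 applies to Caesarean delivery, a different outcome from the perinatal composite):

```python
# Check the induction trial's headline arithmetic from the quoted figures:
# adverse perinatal outcomes of 4.3% (induction) vs 5.4% (expectant
# management), and an NNT of 28 for avoiding one Caesarean delivery.

induction_rate, expectant_rate = 0.043, 0.054
relative_risk = induction_rate / expectant_rate
print(f"Relative risk: {relative_risk:.2f} "
      f"({100 * (1 - relative_risk):.0f}% relative reduction)")

# An NNT of 28 implies an absolute risk reduction of 1/28 in
# Caesarean delivery among women induced at 39 weeks.
nnt = 28
arr_percent = 100 / nnt
print(f"Implied absolute reduction in Caesarean risk: "
      f"{arr_percent:.1f} percentage points")
```

The quoted rates do reproduce the roughly 20% relative risk reduction, and the NNT of 28 corresponds to an absolute Caesarean risk reduction of about 3.6 percentage points.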
The study authors suggest these findings have the capacity to change practice, or at the very least provide evidence for revisiting current obstetric practice policies. “These results suggest that policies aimed at the avoidance of elective labour induction among low-risk nulliparous women at 39 weeks of gestation are unlikely to reduce the rate of Caesarean delivery on a population level,” they concluded.

Ref: NEJM 2018; 379:513-23 DOI: 10.1056/NEJMoa1800566

In what will be seen as a blow to cryptic crossword compilers the world over, it appears wealth is a better determinant of whether you keep your marbles than education. In a UK prospective study of over 6000 adults aged over 65 years, researchers found people in the lowest socioeconomic quintile were almost 70% more likely to develop dementia over a 12-year follow-up period than those categorised in the top fifth. Depressingly, this finding held true regardless of education level. “This longitudinal cohort study found that wealth in late life, but not education, was associated with increased risk of dementia, suggesting people with fewer financial resources were at higher risk,” the study authors said. On further analysis, researchers found the association between wealth, or the lack thereof, and dementia was even more pronounced among the younger participants in the cohort.

So what did the researchers think was the reason behind the link between poverty and dementia? One explanation was that having money allows access to more mentally stimulating environments, including cultural resources (reading, theatre etc) and larger social networks that might help preserve cognitive function. On the flip side, poverty (or ‘persistent socioeconomic disadvantage’ as the authors describe it) affects physiological functioning, increasing the risk of depression, vascular disease and stroke – all known risk factors for dementia. Other factors such as poor diet and lack of exercise also appear to be more common among poorer people in the community.

All this seems fairly logical, but what of the lack of a protective effect of education? Well, the researchers think this might be a particularly British phenomenon in this age group. “This might be a specific cohort effect in the English population born and educated in the period surrounding the World War II,” they suggested.
A number of other studies have shown different results, with some – including the well-respected Canadian Study of Health and Aging – showing the complete opposite: education protects against dementia. Consequently, the authors of this study, published in JAMA Psychiatry, hypothesise that this cohort may have been unable to access higher education because of military service or financial restrictions, but were able to access intellectually challenging jobs after the war. All in all, the study is an observational one, and it is possible a number of confounding factors, from smoking to availability of medical care, play a role in why poorer people are at greater risk of dementia. And while the researchers are not advocating older people give up their Bridge game and just buy lottery tickets, it would seem money is useful, if not for happiness, then at least for preserving brain power.

Ref: JAMA Psychiatry doi:10.1001/jamapsychiatry.2018.1012


Almost three quarters of men with low grade prostate cancer may not be adequately monitored, a recent Victorian study suggests. According to data from the Prostate Cancer Outcomes Registry – Victoria, only 26.5% of the more than 1600 men with low risk prostate cancer had follow-up investigations consistent with standard active surveillance protocols in the two years after their diagnosis. Specifically, researchers were investigating whether these men adhered to a schedule of at least three PSA measures and at least one biopsy in the two years post diagnosis.

While the study authors concede the clinical consequences of this shortcoming are as yet unknown, the finding is still of concern. “If [these men] are not being followed appropriately according to [Active Surveillance] protocols, men may miss the opportunity to be treated with curative intent,” they wrote in the MJA.

Active surveillance is increasingly the management of choice for men with low risk prostate cancer. In Victoria, 60% of men diagnosed with this grade of cancer are now managed with active surveillance, the study authors said. A major issue with active surveillance as a management option is that the optimal timing of follow-up investigations has not been strictly defined, resulting in several different protocols and guidelines being developed worldwide. This, in part, was the impetus for the development of the Victorian prostate cancer registry, which was established in 2009 ‘to improve knowledge of patterns of care and outcomes for men diagnosed with prostate cancer’. Currently the Australian protocol for active surveillance is based on consensus opinion, and three PSA tests plus a repeat biopsy within two years has been widely accepted as standard care.
The finding that 73.5% of men did not receive monitoring in accordance with this protocol reflected adherence levels among the worst reported in similar studies around the world, and the study authors suggested the reasons were likely to be multifactorial. The non-compliance “may reflect patient-, clinician- and health service-related factors,” they wrote. Patients may avoid biopsy because of pain, clinicians may delay testing based on a patient’s comorbidities, or health services may have fewer resources for sending reminders or pursuing patients who miss appointments, the study authors suggested.

Nonetheless, efforts need to be made to ensure men with low grade prostate cancer are not disadvantaged in terms of health outcomes if they opt for active surveillance as their management strategy. “To improve adherence, a multifaceted approach may be required, including an education campaign that highlights the need for men to undergo regular PSA assessment and prostate biopsy,” they concluded.

Ref: MJA doi:10.5694/mja17.00559


Surviving childhood cancer is a major win in anyone’s language. However, it is well known that even as adults these survivors are at an increased risk of dying earlier than their peers who have no cancer history. Now, it would appear, these people can do something about managing that sword of Damocles. According to a recent study published in JAMA Oncology, regular vigorous exercise in early adulthood is associated with a lower risk of mortality in adult survivors of childhood cancer. And the study authors believe the finding could have a significant impact, given the numbers of children who now survive cancer. “These findings may be of importance for the large and rapidly growing population of adult survivors of childhood cancer at substantially higher risk of mortality due to multiple competing risks,” they said.

The study was a multicentre cohort analysis of data from over 15,400 adults who had had cancer diagnosed before the age of 21 at one of a number of paediatric tertiary hospitals in North America. Interviews were conducted at baseline (median age almost 26 years) at which, among a range of other parameters, levels of exercise were assessed. These patients were then followed for up to 15 years (median follow-up 9.6 years). Overall, after adjusting for chronic health conditions and treatment exposures, the researchers found an inverse association between exercise and all-cause mortality. More compelling was the analysis of a subset of almost 5700 survivors, which showed that increased exercise over an eight-year period was associated with a 40% reduction in all-cause mortality compared with low levels of exercise. Of course, the critical question is what constitutes vigorous or increased exercise. How much exercise does a person need to do to qualify for this benefit?
The question these patients were asked was ‘on how many of the past seven days did you exercise or do a sport that made you sweat or breathe hard?’ This was considered vigorous exercise, and the mortality benefit was seen in people who exercised to this level for at least 60 minutes a week. But the benefit was not entirely dose-dependent. It appeared that vigorous exercise for about an hour a day, five days a week (eg a brisk 60-minute walk) was the most advantageous in terms of mortality; beyond that, the benefit was attenuated. It is well known that, in the general population, regular exercise reduces all-cause and cause-specific mortality; however, there are far fewer studies looking at its benefit among cancer survivors, especially younger-age cancer survivors. “To this end, our findings … significantly extend the current evidence base and arguably provide the best available epidemiological evidence to support the endorsement of exercise for cancer survivors,” the study authors said.

Ref: JAMA Oncology doi:10.1001/jamaoncol.2018.2254


Findings from a newly published study are likely to silence those who suggest doctors need to return to wearing white coats to improve patient respect. According to US researchers, patient perception of a doctor’s capability, trustworthiness and reliability is not affected by the presence of visible tattoos and non-traditional piercings (on the doctor, not the patient). “Physician tattoos and facial piercings were not factors in patients’ evaluations of physician competence, professionalism or approachability,” the study authors said.

This interesting study, published in the Emergency Medicine Journal, involved surveying over 900 patients who had attended the emergency department of a large teaching hospital in the US. The patients were not told the purpose of the survey; they were simply asked to rate their experience, including the care they received from their attending doctor across a range of domains including competence, professionalism, caring, approachability, trustworthiness and reliability. The doctors served as their own controls, meaning that on some shifts they worked without any ‘exposed body art’, on other shifts they were ‘pierced’ (hoop earrings for men or fake nasal studs for women), and on other shifts they were ‘tattooed’ (with a temporary standardised black tribal tattoo around the arm). Sometimes the doctors were both pierced and tattooed. Nurses asked the patients to complete the survey and provided prompts to remind the patient which doctor had attended them, but the prompt was along the lines of ‘the young male doctor with red hair’ rather than drawing attention to the tattoo or piercing.

More than 75% of the time the patient gave the attending doctor the top rating across all the domains surveyed, regardless of appearance. The findings appear to be in contrast with previous research, which has found patients prefer their physicians to be traditionally dressed.
However, the authors of this study suggest that the evidence to date has been limited by a lack of patient blinding to the study purpose. What’s more, they suggest that existing policies regarding visible body art on doctors are likely to be driven by administrator preferences rather than data on patient satisfaction. There were a number of limitations to the study, in particular the small number of doctors involved (seven), and because the patients weren’t specifically asked about the body art we don’t know whether they genuinely didn’t care about it or whether they were able to overcome their disapproval of it. Given the trial took place in an emergency department, the results cannot necessarily be extrapolated to a community setting such as general practice. Nonetheless, the results will be of interest given the increasing popularity of body art, especially tattoos, with the study authors citing research showing that, in 2006, 24% of young and middle-aged adults had at least one tattoo. Concerns that a doctor’s visible body art has a negative impact on a patient’s perception of their professionalism, or on patient satisfaction, do not appear to be founded, the researchers concluded.

Ref: doi:10.1136/emermed-2017-206887


New study findings confirm what many parents already believe: introducing solids early helps babies sleep through the night. The UK randomised trial, published in JAMA Pediatrics, showed the early introduction of solids into an infant’s diet (from three months of age) was associated with longer sleep duration, less frequent waking at night and a reduction in parents reporting major sleep problems in their child.

Researchers analysed data collected as part of the Enquiring About Tolerance study, which included ongoing parent-reported assessments of over 1300 infants from England and Wales who were exclusively breastfed to three months of age. At baseline, there were no significant differences in sleeping patterns between those infants who were then introduced to solids early and those who remained exclusively breastfed to six months, as per the World Health Organisation recommendation. However, at six months the difference between the two groups was significant. “At age six months … [those babies who had started solids] were sleeping 17 minutes longer at night, equating to two hours of extra sleep per week, and were waking two fewer times at night per week,” the study authors said. “Most significantly, at this point, [early introduction group] families were reporting half the rate of very serious sleep problems,” they added, saying the results confirm the link between poor infant sleep and parental quality of life.

And the findings contradict previous claims that a baby’s poor sleep habits and frequent waking have nothing to do with hunger. The study found that those babies with the highest weight gain between birth and three months (when they were enrolled in the study) were the most likely to be waking at night. “This is consistent with the idea that their rapid weight gain was leading to an enhanced caloric and nutritional requirement, resulting in hunger and disrupted sleep,” they said. Overall, it seems that the study has simply proved what many parents had already suspected.
The study authors referred to previous research showing that, despite WHO and British guidelines recommending babies be exclusively breastfed to six months, three quarters of British mothers introduce solids before five months, and 26% report night waking as influencing this decision. Interestingly, recent evidence with regard to reducing the risk of allergy and atopy has seen some organisations, including our own Australasian Society of Clinical Immunology and Allergy, recommend infants be introduced to solids earlier than six months. The authors of this study suggest that parents following these newer guidelines might find they get the added benefit of more sleep. “With recent guidelines advocating introducing solids from age four to six months in some or all infants, our results suggest that improved sleep may be a concomitant benefit,” they concluded.

Ref: JAMA Pediatr. doi:10.1001/jamapediatrics.2018.0739


All newly diagnosed hypertensive patients should be screened for primary aldosteronism before they are started on treatment, Australian experts suggest in the latest issue of the MJA. “Primary aldosteronism is common, specifically treatable, and associated with significant cardiovascular morbidity and mortality,” say researchers Dr Jun Yang, Professor Peter Fuller and Professor Michael Stowasser. They refer to a recent systematic review of over 30 studies that found, among people with severe or resistant hypertension (systolic BP >180 mmHg and diastolic BP >110 mmHg), 16.4% had primary aldosteronism. Admittedly these studies were carried out in tertiary centres. There have been far fewer studies on the issue conducted in primary care, with somewhat mixed results; one small Australian study suggested 11.5% of people with significant hypertension in the general practice setting had primary aldosteronism.

But it’s not only patients with severe hypertension who need to be considered for primary aldosteronism screening, the authors suggest. They point to an Italian study in which over 1600 randomly selected GP patients were screened for primary aldosteronism, finding a prevalence of 5.9%. Importantly, 45% of these had mild hypertension (BP 140-159/90-99 mmHg). According to the article authors, these patients would most likely have remained undiagnosed if not for the study. And the effect of the untreated aldosterone excess would most likely have led to poor blood pressure control and increased cardiovascular, renal and metabolic morbidity long-term. In other words, identifying these patients early in the course of the disease could allow more appropriate treatment and ultimately avoid the end-organ damage that is more likely to occur if diagnosis is delayed until after the development of severe hypertension.
“Targeted treatment of [primary aldosteronism] using surgery or mineralocorticoid receptor antagonists, such as spironolactone and eplerenone, rather than non-specific antihypertensive medications, can reverse the underlying cardiovascular pathology,” they said.

The recommended biochemical screening tool for primary aldosteronism is the aldosterone-to-renin ratio, which is elevated in this condition because plasma aldosterone is normal or elevated while renin is suppressed. The experts suggest screening prior to commencing antihypertensive therapy, as many of these drugs – including beta blockers, calcium channel blockers, ACE inhibitors, ARBs and diuretics – can interfere with the aldosterone-to-renin ratio. The test isn’t perfect, they admit, as it can be influenced by a number of confounders including salt intake and age, but as a screening tool it has proven very useful in trials both in Australia and internationally, resulting in significantly increased numbers of patients diagnosed.

Current Australian hypertension guidelines recommend clinicians consider primary aldosteronism in patients with hypertension, particularly those with moderate to severe or treatment-resistant hypertension. But, as the article authors point out, given the prevalence of primary aldosteronism and the health burden this cardiovascular risk factor imposes on both the Australian population and the economy, maybe it is time to consider screening all newly diagnosed hypertensive patients for the condition before the commencement of non-specific antihypertensive therapy. “This diagnostic strategy should lead to significant individual and population health and economic impacts as a result of many patients with hypertension being offered the chance of curative or simpler treatment at an early stage of their disease.”

Ref: MJA doi:10.5694/mja17.00783
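The screening calculation itself is simple division; what varies between laboratories is the assay units and the cut-off. A minimal sketch (the example numbers and the cut-off here are illustrative placeholders, not clinical values – the local laboratory’s reference range is what matters):

```python
# Minimal sketch of the aldosterone-to-renin ratio (ARR) screen.
# Units and cut-offs vary between assays and laboratories, so the
# threshold is supplied by the caller; nothing here is a clinical
# recommendation.

def aldosterone_renin_ratio(plasma_aldosterone: float, renin: float) -> float:
    """Return the ARR; a suppressed (low) renin drives the ratio up."""
    if renin <= 0:
        raise ValueError("renin must be positive")
    return plasma_aldosterone / renin

def screen_positive(arr: float, lab_cutoff: float) -> bool:
    """Flag for confirmatory testing when the ARR exceeds the lab's cut-off."""
    return arr > lab_cutoff

# Illustrative only: normal-range aldosterone with suppressed renin
# yields a high ratio, which is the pattern the screen looks for.
ratio = aldosterone_renin_ratio(plasma_aldosterone=400.0, renin=2.0)
print(ratio, screen_positive(ratio, lab_cutoff=70.0))
```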


Teenagers who are constantly checking their phones are more likely to develop ADHD symptoms than their less social-media-engaged peers, US researchers say. In what the study authors say is the first longitudinal study investigating the issue, researchers found that the frequency of digital media use among over 2500 15- and 16-year-olds without ADHD was significantly associated with the subsequent development of ADHD symptoms over a two-year follow-up period. A high frequency of media activity – most commonly checking a smartphone – was associated with a 10% increased likelihood of developing inattentive and hyperactive-impulsive symptoms in this teenage cohort. Associations were significantly stronger in boys and in participants with more mental health symptoms, such as depressive symptoms and delinquent behaviours.

But while the association was statistically significant, further research is needed to determine whether digital media use is the cause of the problem, the US authors said in JAMA. “The possibility that reverse causality or undetected baseline ADHD symptoms influenced the association cannot be ruled out,” they said. To date, the potential risks of intense engagement in social media are largely an evidence-free zone, they said. Prior longitudinal studies on this topic have most commonly involved computers, televisions and video-game consoles. But the engagement associated with these devices is markedly different to that seen with modern media platforms, especially in terms of accessibility, operating speed, level of stimulation and potential for high-frequency exposure. And as an accompanying editorial points out, television and gaming are sporadic activities, whereas the current widespread use of smartphones means social media is always close at hand. “In 2018, 95% of adolescents reported having access to a smartphone (a 22-percentage-point increase from 2014-2015), and 45% said they were online ‘almost constantly’,” the US editorial author explained.
This instant access to highly engaging content is designed to be habit-forming. The concerns around social media today also extend beyond exposure to violence in games and the displacement of other activities, which were the major issues in the past. Social media today is designed to engage the user for longer periods and to reward repeat use. New behaviours to consider include frequent attention shifts and constant media multitasking, which might interfere with a person’s ability to focus on a single task, especially a non-preferred task. It is also hypothesised that the ready availability of desired information may affect impulse control (no waiting is required). And the ‘always-on’ mentality may be depriving young brains of ‘down time’ – time that allows the mind to rest, tolerate boredom and even practise mindfulness.

The study researchers were keen to emphasise that their findings are a long way from proving digital media increases the risk of ADHD symptoms, and even if they did, the public health and clinical implications are uncertain. However, the editorial was more enthusiastic about the study’s implications. “With more timely digital media research, parents may feel more confident in the evidence underlying recommendations for how to manage the onslaught of media in their households,” it said. The editorial author suggested the findings support American Academy of Pediatrics guidelines that recommend adolescents focus on activities proven to promote ‘executive functioning’, such as sleep, physical activity, distraction-free homework and positive interactions with family and friends – the implication being: switch the phone off.

Ref: JAMA 2018; 320(3): 255-263 doi:10.1001/jama.2018.8931 and JAMA 2018; 320(3): 237-239


Effectively treating depression in patients who have just experienced a heart attack will not only improve their quality of life, it could well improve their mortality, new research from Korea suggests. Among 300 patients who had recently experienced acute coronary syndrome and had depression as a comorbidity, those randomised to a 24-week course of escitalopram were 30% less likely to have a major adverse cardiac event over a median of eight years than those given placebo. In actual numbers, 40.9% (61) of the 149 patients given escitalopram had a major adverse event (including cardiac death, MI or PCI) over the period of follow-up, compared with 53.6% (81) of the 151 patients in the placebo group, according to the study findings published in JAMA.

It has long been known that depression is a common morbidity associated with acute coronary syndrome. It is also known that patients with this comorbidity tend to have worse long-term cardiac outcomes than those who are depression-free. But what has yet to be proven is the benefit of treating this depression, at least in terms of mitigating the increased risk of a poor cardiac outcome. To date, studies on the topic have failed to prove a significant benefit, with research providing conflicting results.

According to the study authors, in this trial there was a significant correlation between improvement in the depression and better protection against major cardiac events. Even when they excluded those people who were still taking the antidepressant one year after the acute coronary syndrome, the protective effect was still present. Consequently, they hypothesised that the protection was more a reflection of the successful treatment of the depression than of the particular medication. This was consistent with a trend seen in previous research using different medications and treatments.
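The raw counts quoted above can be turned into crude risks as a sanity check (note the headline ‘30% less likely’ figure will reflect the study’s time-to-event analysis, which the simple unadjusted rates below don’t capture):

```python
# Crude event rates from the counts quoted above: 61/149 events with
# escitalopram vs 81/151 with placebo. The unadjusted risk ratio below
# is a rougher measure than the study's time-to-event estimate.

escitalopram_risk = 61 / 149        # ~0.409, i.e. the quoted 40.9%
placebo_risk = 81 / 151             # ~0.536, i.e. the quoted 53.6%

crude_rr = escitalopram_risk / placebo_risk
print(f"Crude risk ratio: {crude_rr:.2f} "
      f"({100 * (1 - crude_rr):.0f}% crude relative reduction)")
```

The crude rates reproduce the percentages quoted in the article, with an unadjusted relative reduction of roughly a quarter.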
However, the better result could also be because escitalopram is more effective in treating depression associated with acute coronary syndrome than the agents studied previously. “Escitalopram may have modifying effects on disease prognosis in ACS-associated depressive disorder through reduction of depressive symptoms,” the study authors suggested. There were a number of caveats with regard to this study that the authors said needed to be considered. These included the fact that the cohort was entirely Korean, which may have introduced an ethnic bias; that the depressive symptoms were less severe than in previous studies (though this was more likely to lead to the effect being underestimated); and that the severity of the underlying heart disease (namely heart failure) was relatively low. Nonetheless, the researchers were able to conclude that among patients with depression who had had a recent acute coronary event, 24 weeks of treatment with escitalopram significantly reduced the risk of dying or having a further adverse cardiac event after a median of 8.1 years. How generalisable these findings are will need to be the subject of further research.

Ref: JAMA 2018; 320(4): 350-357. doi:10.1001/jama.2018.9422


Children who persistently or frequently experience high anxiety need help, says psychologist Jennie Hudson, Professor and Director of the Centre for Emotional Health at Sydney’s Macquarie University. “There has been a tendency to believe kids are going to grow out of [their anxiety]”, she said. In the past, anxiety in children was believed to be a normal part of growing up. In fact, the first Australian Child and Adolescent Mental Health survey in 1998 did not include the question of anxiety disorders in children at all. But the reality is, anxious children grow into anxious teenagers and then into anxious adults, and by then the condition is not only harder to treat, it is also too late to reverse much of the negative impact it has had on these people’s lives, she explained in an interview following her presentation on the subject at HealthEd’s Mental Health in General Practice evening seminar held recently in Sydney. “Children need strategies to manage their anxiety now,” she said. “We, as health professionals, need to be encouraging parents to seek help if they feel their child’s anxiety is interfering with their life.” For GPs wondering about the most appropriate advice to give parents of anxious children, a key principle is to encourage children not to avoid tasks or situations they fear. Parents need to support their child in facing the situations that make them afraid, even if it is ‘bit by bit’, and celebrate each time they manage to accomplish even part of a feared task, be it at school, in sport or socially. “There is a natural tendency for a parent to protect their child from feeling anxious – they will answer for the child who gets worried about replying, or say they don’t need to give the speech in class that is making them nervous, for example,” but this tends to fuel the anxiety. By enabling the child to practise avoidance, the parent is inadvertently endorsing the child’s belief that this is something to be feared.
Another important principle in managing anxiety in children is to try to get the child to identify their worried thoughts – what it is they fear is going to happen. Commonly a child will catastrophise the consequences of a situation, for example “failing this maths test means my life will be ruined”. Once the fear is described, the parent and child can discuss logically why this feared consequence is unlikely to happen. “We call it ‘detective thinking’ – encouraging the child to develop strategies to undertake a realistic appraisal of the situation,” Professor Hudson explained. In terms of resources available for parents, there are a number Professor Hudson recommends. “Helping Your Anxious Child: A Step-by-Step Guide for Parents”, written by Australian psychologists Ronald Rapee, Ann Wignall, Susan Spence, Vanessa Cobham and Heidi Lyneham, is practical, relevant and up-to-date. Another good option is “Helping Your Child with Fears and Worries 2nd Edition: A self-help guide for parents”, written by UK experts in anxiety Cathy Creswell and Lucy Willetts. As well as written material, there are some online programs and resources available, Professor Hudson said. Macquarie University has developed a couple of online programs, one called Cool Kids for 7-16-year-olds (https://www.mq.edu.au/about/campus-services-and-facilities/hospital-and-clinics/centre-for-emotional-health-clinic/programs-for-children-and-teenagers#Online) and another called Cool Little Kids (https://coollittlekids.org.au/) for children aged seven and under. Another good, evidence-based online program is Brave (http://www.brave-online.com/), designed for 7-16-year-olds and developed by researchers at the University of Queensland.
Useful fact sheets for parents are available from Macquarie University’s Centre for Emotional Health website (https://www.mq.edu.au/research/research-centres-groups-and-facilities/healthy-people/centres/centre-for-emotional-health-ceh/resources) as well as from Raising Children: The Australian parenting website (www.raisingchildren.net.au). For children with anxiety, CBT is recommended as the first line of treatment. As the risk of adverse effects with CBT is negligible, it is recommended that treatment in children be commenced early, on the basis of concern of the parent, carer or health professional. There are a number of reliable screening measures for anxiety in children, including the Spence Children’s Anxiety Scale (www.scaswebsite.com). The SCAS has parent, child and teacher reports along with Australian norms for 6-18-year-olds. The DASS21 is a reliable screening and monitoring tool for older adolescents. Currently in Australia only two of the SSRIs, fluvoxamine and sertraline, are approved for use in children and adolescents with obsessive compulsive disorder, Professor Hudson said. “There have been trials in Australia and the US combining CBT and sertraline. In our study, combining CBT and sertraline did not improve outcomes over and above CBT and placebo for children and adolescents with anxiety,” she added.

Clinical Articles

Low density lipoprotein cholesterol is the well-known culprit in terms of cardiovascular risk. Courtesy of a large meta-analysis of statin trials done in 2010 (the Cholesterol Treatment Trialists’ Collaboration, or CTTC), we know that people starting with higher LDL-C levels (approximately 3.4 mmol/L) can lower their risk of having a major adverse vascular event by 22% for every 1 mmol/L reduction in their LDL-C level. But what happens once your LDL level is lower? Can you continue to increase your protection by lowering your LDL levels further? Or does the beneficial effect plateau at a certain level? Or, worse still, can very low LDL levels actually cause harm? A new meta-analysis just published in JAMA Cardiology has gone some way towards answering these questions. The researchers analysed data from the 26 statin studies in the CTTC as well as three large trials of non-statin cholesterol-lowering therapy, looking at those patients who had an LDL-C level of 1.8 mmol/L or less at baseline. They found the cardioprotective benefits continued as LDL-C levels declined to even lower levels. “We found consistent clinical benefit from further LDL-C lowering in patient populations starting as low as a median of 1.6 mmol/L and achieving levels as low as a median of 0.5 mmol/L”. What’s more, the incremental benefit was of an almost identical magnitude to that seen when the LDL-C levels were higher – a 21% relative risk reduction per 1 mmol/L reduction in LDL-C through this range. “This relative risk reduction is virtually the same as the 22% reduction seen in the overall CTTC analysis in which the starting LDL-C was nearly twice as high,” they said. And even though very low cholesterol levels have been rumoured to be associated with everything from cancer to dementia, across all these studies there were no offsetting safety concerns with LDL-C lowering, even when extremely low levels were recorded – levels lower than those seen in newborns.
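As a back-of-the-envelope illustration of what a constant ~21% relative risk reduction per 1 mmol/L implies across the range studied — assuming, as these meta-analyses report the effect, that the proportional benefit compounds with the size of the LDL-C drop — one can sketch:

```python
RRR_PER_MMOL = 0.21  # ~21% relative risk reduction per 1 mmol/L, per the meta-analysis

def residual_relative_risk(ldl_drop_mmol: float) -> float:
    """Residual relative risk after lowering LDL-C by ldl_drop_mmol,
    assuming the proportional effect compounds per mmol/L."""
    return (1 - RRR_PER_MMOL) ** ldl_drop_mmol

# Lowering from a median of 1.6 mmol/L to a median of 0.5 mmol/L,
# the range covered in the JAMA Cardiology analysis:
drop = 1.6 - 0.5
print(f"Residual relative risk after a {drop:.1f} mmol/L drop: "
      f"{residual_relative_risk(drop):.2f}")
```

On this assumption, a patient achieving the full 1.1 mmol/L drop would retain roughly 77% of their original event risk — a ~23% relative reduction, consistent with the per-mmol/L figure quoted above.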
Given the weight of benefit over risk, the study authors suggest the current targets for LDL-C could be lowered further, to as low as 0.5 mmol/L, to reduce cardiovascular risk. This suggestion is supported by an accompanying editorial, in which the author, Dr Antonio Gotto, a New York cardiologist, predicts the findings will be included in the revision of the American Heart Association National Cholesterol guidelines currently underway. He said the study findings would provide much-needed evidence to help clinicians manage patients with these extremely low achieved cholesterol levels, which until recently have been very rare. “Whether one calls it a target or a threshold, practicing physicians need some guidance as they venture into achieved levels of LDL-C levels that are as foreign as travel to outer space. I have confidence that the new guidelines will be closer to a global positioning system map rather than just a compass and the stars”, he concluded. Ref: JAMA Cardiol. Published online August 1, 2018. doi:10.1001/jamacardio.2018.2258

Clinical Articles

Salt may have been unfairly targeted as a killer in the healthy heart stakes, according to newly published research. The observational study of over 90,000 people in 300 communities across 18 countries found that sodium consumption was not associated with an increase in health risks unless the average daily consumption was excessive – more than 5g/day, or about 2.5 teaspoons of salt. This high average daily sodium intake was mostly seen in China, with only about 15% of communities outside China exceeding the 5g a day limit. As part of this ongoing Prospective Urban Rural Epidemiology (PURE) study, participants aged 35-70 were assessed at baseline and then followed for an average of 8.1 years, over which time the occurrence of any major cardiovascular events or death was recorded. What the researchers found was that the risk of hypertension and stroke was only increased in communities where the average daily sodium intake was greater than 5g. Perhaps unexpectedly, this higher sodium intake was actually found to be associated with lower rates of myocardial infarction and total mortality. Furthermore, the research found that very low levels of sodium intake were harmful, being associated with an increased risk of cardiovascular disease and mortality. The findings fly in the face of the current WHO guidelines, which recommend, as a global approach, that populations reduce their sodium intake to below 2g/day. However, no communities in the study came close to achieving this target. In fact, no communities in the study had an average sodium intake of less than 3g/day, based on morning fasting urine samples from the participants. “Sodium intake was associated with cardiovascular disease and strokes only in communities where mean intake was greater than 5g/day. A strategy of sodium reduction in these communities and countries but not in others might be appropriate,” the Canadian study authors said.
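The "5g/day or about 2.5 teaspoons of salt" equivalence rests on simple chemistry: salt (NaCl) weighs roughly 2.54 times its sodium content (molar masses ~58.44 g/mol for NaCl versus ~22.99 g/mol for Na). A minimal sketch of the conversion, assuming a level teaspoon of salt weighs about 5g — the teaspoon weight is an assumption, not a figure from the study:

```python
# Sodium-to-salt mass conversion via molar masses
MOLAR_MASS_NACL = 58.44   # g/mol
MOLAR_MASS_NA = 22.99     # g/mol
SODIUM_TO_SALT = MOLAR_MASS_NACL / MOLAR_MASS_NA   # ~2.54

GRAMS_PER_TEASPOON = 5.0  # assumed weight of a level teaspoon of salt

sodium_g = 5.0                         # the study's high-intake threshold, g/day
salt_g = sodium_g * SODIUM_TO_SALT     # ~12.7g of salt
teaspoons = salt_g / GRAMS_PER_TEASPOON

print(f"{sodium_g}g sodium ≈ {salt_g:.1f}g salt ≈ {teaspoons:.1f} teaspoons")
```

The same arithmetic shows why the WHO's 2g/day sodium target is so stringent: it corresponds to only about 5g of salt, roughly one teaspoon, across all dietary sources.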
But before we all go and stock up on our Saxa, an accompanying editorial sounds a word of caution. While acknowledging the findings that ‘normal’ salt intake appeared to be at least health-neutral if not beneficial, the editorial authors remind us that the study is observational and has not taken into account a number of potential confounders, such as diet. Without accounting for these confounders, one can’t assume that simply decreasing salt intake in people at high risk of stroke, or increasing it in people at risk of a heart attack, will work, they said. “Nevertheless the findings are exceedingly interesting and should be tested in a randomised controlled trial,” they concluded, adding that such a trial, to be conducted in a US federal prison population, had been proposed. Ref: Lancet 2018; 392(10146): 496-506 and 456-458

Clinical Articles

Among low-risk, nulliparous women, inducing labour at 39 weeks will not only be at least as safe as letting nature run its course, it will also reduce the risk of having a Caesarean, according to US research. In the randomised trial involving over 6000 women, those assigned to ‘expectant management’ ended up having a median gestational age of exactly 40 weeks, not a huge difference from the median gestational age of 39.3 weeks in the induction group. However, the main aim of the study was to determine whether induction at 39 weeks resulted in more adverse perinatal outcomes, including conditions such as perinatal death, the need for respiratory support, Apgar scores of less than three at five minutes, intracranial haemorrhage and the like. This potential association has been the concern dictating what is currently common obstetric practice. “When gestation is between 39 weeks 0 days and 40 weeks 6 days, common practice has been to avoid elective labour induction because of a lack of evidence of perinatal benefit and concern about a higher frequency of Caesarean delivery and other possible adverse maternal outcomes, particularly among nulliparous women”, the study authors said in the New England Journal of Medicine. What they found in their study, however, was that these adverse perinatal outcomes occurred in only 4.3% of the babies born in the induction group, compared with 5.4% of those born to mothers managed expectantly – a relative risk reduction of about 20%. And even though the induction group tended to have longer labours, they had quicker recovery times and shorter hospital stays. In terms of maternal outcomes, induction at 39 weeks was associated with a significant reduction in the risk of both Caesarean section and hypertensive disorders of pregnancy. The researchers estimated one Caesarean would be avoided for every 28 low-risk, first-time mothers induced at 39 weeks.
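The figures quoted above can be turned into the standard clinical-epidemiology quantities. A minimal sketch using only the percentages and the "one in 28" number stated in the article — the implied absolute Caesarean risk reduction is derived from the NNT, not taken from the trial report:

```python
# Adverse perinatal outcome rates as quoted in the article
perinatal_induced = 0.043    # induction group
perinatal_expectant = 0.054  # expectant-management group

relative_risk = perinatal_induced / perinatal_expectant
print(f"Perinatal relative risk reduction: {1 - relative_risk:.0%}")  # ~20%

# "One Caesarean avoided for every 28 inductions" is a number needed to treat.
# Since NNT = 1 / absolute risk reduction, the implied ARR for Caesarean is:
nnt_caesarean = 28
implied_arr = 1 / nnt_caesarean
print(f"Implied absolute reduction in Caesarean rate: {implied_arr:.1%}")  # ~3.6%
```

This is why a modest-sounding absolute difference of a few percentage points can still matter at a population level: spread across thousands of low-risk first births, an ARR of ~3.6% translates into a substantial number of Caesareans avoided.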
The study authors suggest these findings have the capacity to change practice or, at the very least, provide evidence to re-examine current obstetric practice policies. “These results suggest that policies aimed at the avoidance of elective labour induction among low-risk nulliparous women at 39 weeks of gestation are unlikely to reduce the rate of Caesarean delivery on a population level”, they concluded. Ref: NEJM 2018; 379: 513-23. doi:10.1056/NEJMoa1800566

Clinical Articles