Articles

Read the latest articles relevant to your clinical practice, including exclusive insights from Healthed surveys and polls.

By reading selected clinical articles, you earn CPD in the Educational Activities (EA) category whenever you click the “Claim CPD” button and follow the prompts. 

Dr Anup Desai

Insomnia is a common condition in which patients experience difficulty initiating sleep, maintaining sleep and/or waking earlier than desired. It can cause significant distress and impaired functioning. Population surveys suggest that approximately 33% of the population experience at least one insomnia symptom, with only one in ten seeking treatment. Female gender, older age, pain and psychological distress have all been associated with increased prevalence rates. There is a strong association between insomnia and psychiatric disorders, such as depression, anxiety and drug abuse. Rates of psychiatric comorbidity as high as 80% have been reported, with insomnia predating the onset of mood disorder in approximately half of cases. Insomnia has also been independently associated with increased healthcare utilisation, increased workplace injuries and absenteeism, and reductions in quality of life. A number of studies have demonstrated an association between insomnia and increased cardiovascular risk. The management of insomnia can broadly be categorised into pharmacological and non-pharmacological therapies. Although pharmacotherapy is often used by doctors as first-line or primary therapy, it is indicated only for short-term use and should not be used in isolation. Benzodiazepines, non-benzodiazepine hypnotics, melatonin, sedating antidepressants and antipsychotics have all been used. The majority of these agents have been shown to be more efficacious than placebo in short-term randomised controlled trials; however, their use is often tempered by extensive side-effect profiles, detrimental effects on sleep architecture and the risk of tolerance and dependence. Non-drug treatment for insomnia, namely cognitive behavioural therapy (CBT) for sleep, is very effective both acutely and over the longer term, and should be initiated in all patients.
CBT is effective as a sole treatment for insomnia, or it may reduce reliance on medications in the longer term. CBT addresses dysfunctional behaviours and beliefs about sleep and consists of sleep hygiene, stimulus control, sleep restriction and cognitive restructuring. In the past, access to CBT for sleep has been a challenge, with limited trained providers and poor availability. However, recent studies of computer-based (online) CBT for sleep have been encouraging, with efficacy comparable to conventional CBT for sleep. Online CBT can be accessed in Australia through the US-based SHUTi program if referred by GPs or specialists (http://www.sleepcentres.com.au/online-insomnia-cbt-program.html). Online CBT for sleep is convenient, effective and easy to access, and is arguably a good option for non-drug insomnia management for all patients.

Dr Vivienne Miller

Based on an interview with endocrinologist and obesity expert Professor Joseph Proietto at the Annual Women and Children’s Health Update, Melbourne, March 2018.

There are many reasons proposed for our society becoming more overweight than ever before. The commonest explanation is that people are overeating because refined, energy-dense foods are easily available and require little physical effort to access. The other consideration is that people are not moving and exercising as much, due to increasingly sedentary employment and entertainment. Once people become overweight, they feel less like exercising and so the situation worsens. Unfortunately, in our society, food (including alcohol), socialising and entertainment are all strongly associated. Food is easily obtained and is abundant in variety and quantity. Previous generations ate less because of cost and availability, and because food was generally plainer and perhaps less tasty. This was especially true for the poorer in society, who also tended to have more physically demanding jobs, with less time and money to spend on eating during the day. On a scientific level, genetics and epigenetics are now known to play an important role in the development of obesity. In particular, many genes are currently being researched in relation to appetite and obesity, including leptin (a hormone made mostly by adipose cells that inhibits hunger) and its receptor, and the melanocortin 4 receptor. "For obvious evolutionary reasons, there are no genes (yet) identified that reduce metabolic rate," said Professor Joseph Proietto. So far, all genes found to be associated with obesity have been linked to increased hunger.
It is interesting that force-feeding increases energy expenditure while weight loss reduces it and, in both cases, it is spontaneous activity that changes, with only minor alteration in basal metabolic rate. This has been demonstrated in overfeeding experiments. Some causes of obesity may be epigenetic. For example, some women who gain excess weight during pregnancy find it more difficult to lose after the pregnancy. This is likely to be due to epigenetic change in the expression of genes connected with obesity. Unfortunately, the offspring of mothers who become overweight before or during pregnancy are likely to inherit these changes, and hence themselves have trouble with weight gain. Certain medical conditions (hypothyroidism, Cushing's syndrome) may induce modest weight gain, but the extreme numbers of people in our society with serious weight problems mean that endocrinological causes are very much in the minority. Hence, we need to look for other causes of obesity in the modern age. One of the biggest problems with healthy lifestyle programmes and extensive community information about diet, weight and exercise is that genetics trump willpower in many cases, especially over the long term. Following weight loss there are hormonal changes that lead to increased hunger (leptin levels fall and ghrelin levels increase), and in 2011 these changes were shown to be long lasting, so the weight-reduced individual has to fight increased hunger. Given the abundance of available food, temptation adds to the problem. In effect, one is then fighting nature.

Dr Linda Calabresi

Ketamine’s controversial role in treating depression has been boosted by a new randomised controlled trial showing it significantly and rapidly reduces suicidal ideation. Among a group of 80 severely depressed individuals already on pharmacotherapy, the US researchers found that a single, subanaesthetic infusion of ketamine was associated with a greater reduction in clinically significant suicidal ideation within 24 hours than a control midazolam infusion. After one day, 55% of the patients who received the ketamine infusion had more than halved the severity of their suicidal ideation, compared with 30% in the midazolam group. What’s more, and in contrast to previous studies on ketamine infusions, the improvement appeared to persist for at least six weeks when combined with optimised pharmacotherapy, the study authors wrote in the American Journal of Psychiatry. Ketamine was first mooted as having antidepressant properties back in the 1990s, having first been approved by the US FDA for anaesthetic use in 1970. There had been reports that it could reduce suicidal ideation, but to date the evidence to support this has been lacking. In this study, researchers used the validated Scale for Suicidal Ideation to monitor the participants, who were all psychiatric outpatients. The scale categorises a score of over two as predictive of suicide in the next 20 years. The depressed adults enrolled in this study were rated as having a score of at least four, ‘a clinically significant cut-off for suicidal ideation’. Midazolam was chosen as the comparator in the trial because, like ketamine, it is a psychoactive anaesthetic agent with a similar half-life but no established antidepressant or antisuicidal effects.
The finding that only four patients needed to be treated with ketamine to see a benefit over midazolam was described as a ‘medium effect’, but nonetheless significant given the lack of evidence-based pharmacotherapy currently available for suicidal patients with major depressive disorders. “Suicidal depressed patients need rapid relief of suicidal ideation,” the study authors said. And yet, despite suicidal behaviour often being associated with depression, most antidepressant trials have excluded suicidal patients and have not assessed suicidal ideation and behaviour. “Standard antidepressants may reduce suicidal ideation and behaviour in depressed adults … but this effect takes weeks,” they said. Consequently, for many, the findings of this study represent a promising new option for an area of medicine that has been notoriously difficult to treat. An accompanying editorial also acknowledges the hope this study represents. “[T]he excitement about ketamine in our field is a reflection of the serious challenges we face in managing treatment-resistant depression,” said Dr Charles Nemeroff, a leading US psychiatrist, in his editorial. But he says significant concerns still exist with regard to the inclusion of ketamine in the psychiatrist’s toolkit. Who regulates the use of ketamine? How do we handle its potential as a drug of abuse? Exactly how does it work? These are just some of the questions that need to be answered before it can be seriously considered as part of mainstream psychiatric medicine, Dr Nemeroff suggests. In addition, he says we may be looking for a new, quick-fix solution for patients too early, before having really tried all other possible treatments. “When treated with monoamine oxidase inhibitors, tricyclic antidepressants, ECT, repetitive transcranial magnetic stimulation, or augmentation with lithium, T3, atypical antipsychotics, or pramipexole, many patients with treatment-resistant depression show remarkable improvement,” he said.
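The “four patients” figure follows directly from the reported response rates. As a rough arithmetic sketch (the helper function below is illustrative, not from the paper): the number needed to treat is the reciprocal of the absolute risk reduction.

```python
def number_needed_to_treat(p_treatment: float, p_control: float) -> float:
    """NNT = 1 / absolute risk reduction (difference in response rates)."""
    arr = p_treatment - p_control  # absolute risk reduction
    return 1.0 / arr

# Reported in the trial: 55% of ketamine patients vs 30% of midazolam
# patients more than halved their suicidal-ideation severity at 24 hours.
nnt = number_needed_to_treat(0.55, 0.30)  # 1 / 0.25, i.e. about 4
```

That is, on these figures roughly one additional patient responds for every four treated with ketamine rather than midazolam.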
He suggests a ‘wait and see’ attitude be adopted as the research and study results come in, and the evidence to support ketamine’s exact role in the world of mental illness becomes clearer.
Ref: Am J Psychiatry 2018; 175: 327-335; doi:10.1176/appi.ajp.2017.17060647
Am J Psychiatry 2018; 175: 297-299; doi:10.1176/appi.ajp.2018.18010014

Dr Jonathan Grasko

In 1889, Dr Charles-Édouard Brown-Séquard, a world-renowned physiologist and neurologist who first described the syndrome which bears his name, published in The Lancet a paper based on a series of lectures. He described a number of experiments done on animals and humans (including himself) which involved injecting an elixir derived from blood from the testicular artery, semen and fluid extracted from freshly crushed animal testicles. He concluded “…great dynamogenic power is possessed by some substance or substances which our blood owes to the testicles” and “I can assert that the … given liquid is endowed with very great power.”1 The inherent belief that human performance can be improved by the addition of an elixir can be traced to ancient Greece. Athletes and warriors ingested berries and herbal infusions to improve strength and skill.2 The intrinsic risk attached to these substances has always been appreciated. Scandinavian mythology mentions that Berserkers (ancient Norse warriors) would drink a mixture called "butotens" to greatly increase their physical power at the risk of insanity. They would literally go berserk (whence the modern meaning of the word) by biting into their shields and gnawing at their skin before launching into battle, indiscriminately injuring, maiming and killing anything in their path.3 This unrelenting desire to out-compete rivals at any cost seems to be branded into the human psyche. The willingness to partake of substances that may be detrimental, even to the point of death, has been repeatedly demonstrated. Thomas Hicks, an American-born athlete, won the 1904 Olympic marathon having received multiple injections of strychnine from his trainer. Hicks survived his ordeal but never raced again.4 An attempt at understanding the extent of this risk-taking behaviour was undertaken by physician Dr Bob Goldman.
In his research involving elite athletes, he presented a scenario in which success in sport would be guaranteed by the ingestion of an undetectable substance, but with the inevitable outcome of death after five years. He concluded that approximately half the athletes would take the drug. This scenario has been dubbed "Goldman's dilemma".5 A more recent repeat of this study yielded a lower correlation.6,7 During the Second World War, soldiers on both sides were given amphetamines to counteract fatigue, elevate mood and heighten endurance.8,9 Following the war these drugs, nicknamed "La bomba" and "Atoom" by Italian and Dutch cyclists, started to enter the sporting arena with the intention of minimising the uncomfortable sensations of fatigue during exercise.10 In the 1950s, there was a perception in the USA that the great success of the Russian weightlifting team was solely due to the use of performance-enhancing drugs. Dr John Ziegler, in collaboration with CIBA Pharmaceuticals and under FDA approval, developed the first oral anabolic steroid, methandrostenolone. US Athletics, believing its athletes needed chemical assistance to remain competitive, gave the entire Olympic weightlifting team methandrostenolone.11 Ziegler was later quoted, on finding out that athletes were taking 20 times the recommended dose: "I lost interest in fooling with IQ's of that calibre. Now it's about as widespread among these idiots as marijuana."12 This belief came to a head at the 1960 Olympic Games, where Danish cyclist Knud Enemark Jensen collapsed and died while competing in the 100 km race. An autopsy later revealed the presence of amphetamines and nicotinyl tartrate in his system.13 In the 1967 Tour de France, British cyclist Tommy Simpson, who had previously been named Sports Personality of the Year by the BBC, died during the 13th stage after consuming excessive amounts of amphetamines and brandy.
Simpson’s motto was allegedly "If it takes ten to kill you, take nine and win!"14 Simpson's death created pressure for sporting agencies to take action against doping. This ultimately led to the formation of the World Anti-Doping Agency (WADA) in 1999 as an international independent agency composed and funded equally by sport movements and governments of the world. Every year WADA publishes an updated list of banned drugs and methods. In general terms, the prohibited methods fall into three groups:15

M1. Manipulation of blood and blood components
M2. Chemical and physical manipulation
M3. Gene doping

Gene doping is the use of gene therapy to improve athletic performance in competitive sporting events. It would involve the use of gene transfer to increase or decrease gene expression and protein biosynthesis of a specific human protein. This could be done by injecting the gene carrier into the athlete using a viral vector, or by transfecting cells outside the body and then reintroducing them. WADA has committed significant resources to detecting this process. Currently there is no evidence that this is common practice.16,17 In addition to the traditional incentives such as fame, honour and power, the past 60 years have brought with them the most potent of drivers, money, which in Western culture embodies all three. The financial incentives to both sporting institutions and athletes have been profound, with some authorities claiming almost a 250% increase in revenue with the introduction of industrial-scale performance-enhancing drugs.18 The supplements industry already exceeds $60 billion per year.19 The combination of primal ambition and ever-improving designer performance-enhancing modalities makes the future of professional sport, I believe, the realm of the highest bidder.

References:
  1. Brown-Séquard C-É. The effects produced on man by subcutaneous injections of a liquid obtained from the testicles of animals. The Lancet, 20 April 1889.
  2. Kumar R. Competing against doping. Br J Sports Med, 1 September 2010.
  3. The Saga of King Hrolf Kraki.
  4. Pariente R, Lagorce G. La Fabuleuse Histoire des Jeux Olympiques. Livres Bronx Books, Lasalle, QC, Canada; ODIL, France, 1973.
  5. Goldman R, Klatz R. Death in the Locker Room: Drugs & Sports. 2nd ed. Elite Sports Medicine Publications, 1992. p. 24. ISBN 9780963145109.
  6. Connor J, Woolf J, Mazanov J. Would they dope? Revisiting the Goldman dilemma. Br J Sports Med 2013; 47(11): 697-700.
  7. Connor J, Mazanov J. Would you dope? A general population test of the Goldman dilemma. Br J Sports Med 2009; 43(11): 871-872.
  8. Doping of Athletes: A European Survey. Council of Europe, France, 1964.
  9. Grant DNW. Air Force, UK, 1944.
  10. Noakes TD. Tainted glory - doping and athletic performance. N Engl J Med 2004; 351: 847-849.
  11. Fair JD. Isometrics or steroids? Exploring new frontiers of strength in the early 1960s. Journal of Sport History 1993; 20(1).
  12. Wade N. Anabolic steroids: doctors denounce them, but athletes aren't listening. Science 1972; 176: 1399-1403.
  13. Maraniss D. Rome 1960: The Olympics That Changed the World. New York: Simon & Schuster, 2008. p. 111.
  14. Graham MR. Exercise, science and designer doping: traditional and emerging trends. Journal of Steroids & Hormonal Science, 28 October 2011.
  15. https://www.wada-ama.org/en/who-we-are
  16. WADA, The Gene and Cell Doping Expert Group.
  17. Pray L. Sports, gene doping, and WADA. Nature Education 2008; 1(1): 77.
  18. Grossman M, et al. Steroids and Major League Baseball. Berkeley College.
  19. Nutrition Business Journal.
  This article was first published in Medical Forum, October 2016.
General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge, as part of an educational initiative developed and coordinated by Sonic Pathology.

Dr Linda Calabresi

Eating nuts at least three times a week reduces the risk of developing atrial fibrillation, Swedish researchers report. For the first time it has been shown that nut consumption has a linear, dose-response association with atrial fibrillation. The findings of this long-term, prospective study of over 60,000 adults showed that people who ate nuts three or more times a week were 18% less likely to develop AF than their non-nut-consuming counterparts. The study, published in the BMJ journal Heart, also found that adults with a moderate consumption of nuts (defined as up to 1-2 times a week) had a reduced risk of heart failure, but this benefit disappeared if the intake was greater than this. The study authors said it was already known that nut consumption was beneficial to heart health. “Meta-analyses of prospective studies have shown that nut consumption is inversely associated with death from cardiovascular disease, total coronary heart disease and total stroke,” they wrote. However, what was not known was exactly which cardiac conditions nut consumption affected and which outcomes it influenced. So back in 1997, they got this large cohort of men and women to complete a Food Frequency Questionnaire and then followed them up for the next 17 years, utilising data from the much-admired Swedish National Patient and Death registers. In addition to nuts’ protective effect against atrial fibrillation and, to some degree, heart failure, the study findings also seemed to suggest that eating nuts reduced the risk of non-fatal myocardial infarction and abdominal aortic aneurysm, but this association did not hold true once confounders were taken into account. There was no link found between nut consumption and any other cardiovascular condition, namely aortic valve stenosis, ischaemic stroke or intracerebral haemorrhage.
Researchers suggested that nuts were effective through their anti-inflammatory and antioxidant effect, their ability to improve endothelial function and reduce LDL-cholesterol levels. They also said that the overall consumption of nuts among this study population was very low, maybe too low to have a meaningful effect on cholesterol levels. By far the majority of participants either didn’t report eating nuts at all or ate them only one to three times a month. But this may represent an opportunity for intervention. “Since only a small percentage of this population had moderate (about 5%) or high (<2%) nut consumption, even a small increase in nut consumption may have large potential to lead to a reduction in incidence of atrial fibrillation and heart failure in this population,” the study authors concluded. Ref: doi:10.1136/heartjnl-2017-312819

Dr Vivienne Miller

How easy is it to say HFpEF and HFrEF? The answer is… not very easy! However, heart failure has a new classification based on ejection fraction that doctors will need to know about. HFpEF stands for “heart failure with preserved ejection fraction”; preserved ejection fraction is defined as greater than or equal to 50%. HFrEF stands for “heart failure with reduced ejection fraction”. This is the “classic” form of heart failure that doctors are familiar with, and the ejection fraction in HFrEF is defined as less than 50%. Patients who have clinical signs of heart failure and a normal ejection fraction used to be diagnosed with diastolic heart failure. Under the new classification, they are now said to have HFpEF. It should be noted that a patient may have diastolic dysfunction reported on echo, but if they do not have any clinical signs of heart failure they do NOT have HFpEF. In this situation, diastolic dysfunction refers to the cardiac echo finding of impaired diastolic relaxation. This may be an age-related change or due to left ventricular hypertrophy, both of which may occur without necessarily causing symptoms and signs of heart failure. There is an additional group that some researchers refer to: HFmEF, which stands for “heart failure with mid-range ejection fraction” and is defined as an ejection fraction of between 40% and 50%. There is debate about the utility of this additional sub-classification. Most clinicians would consider HFmEF simply mild HFrEF, and most agree that it identifies a subgroup of HFrEF for which there are fewer clinical trials and less evidence for effective therapy, highlighting areas for future investigation and research. The utility of this new classification, particularly HFrEF versus HFpEF, is mainly to distinguish different pathophysiological processes, cardiac mechanics and treatment options.
Presently, it is only in HFrEF that medications have been shown to improve survival. Additionally, device therapies such as implantable cardioverter defibrillators and biventricular pacemakers (now more commonly referred to as “cardiac resynchronisation therapy”) have only demonstrated benefit in HFrEF. For HFpEF, no medications or devices have been shown to improve survival; typically, symptoms are managed with diuretic therapy. There is some evidence to support a benefit from spironolactone, however the most recent trial (TOPCAT)1 failed to demonstrate a mortality benefit and was plagued by disparities regarding the nature of recruitment in one of the large participating regions. Certainly, from a treatment viewpoint, the underlying causes contributing to HFpEF can often be managed. These typically include hypertension, diabetes, obesity and coronary artery disease. Not surprisingly, there are studies showing that patients with HFpEF benefit from exercise and from maintaining a healthy weight. But how best do we explain these definitions to the patient sitting in front of us? “It can be very helpful to clarify the term [heart failure] and to explain that their heart has neither ‘failed’, nor has it ‘stopped working’, but that ‘it is just not working as well as normal’,” said cardiologist Dr Hendrik Zimmet. HFrEF can be explained as “the heart muscle not pumping as well as usual”. HFpEF can be explained as “the heart muscle being stiffer than usual, and not relaxing as well”. But no matter how the problem is explained to the patient, it is important to stress, as positively as possible, what can be done to help.
  1. Pfeffer M, et al. Regional variation in patients and outcomes in the Treatment of Preserved Cardiac Function Heart Failure With an Aldosterone Antagonist (TOPCAT) trial. Circulation 2014; originally published 18 November 2014. https://doi.org/10.1161/CIRCULATIONAHA.114.013255

Based on an interview with cardiologist Dr Hendrik Zimmet at the Annual Women and Children’s Health Update, Melbourne, March 2018

Dr Linda Calabresi

Infants receiving acid suppressive medications are more than twice as likely to develop food allergies later in life, US researchers say. Findings from a large retrospective study, analysing data from almost 800,000 children, showed that being prescribed either an H2 receptor antagonist or a proton pump inhibitor in the first six months more than doubled the risk of developing a food allergy (hazard ratios of 2.18 and 2.59 respectively) when they got older. Similarly, the use of these medications was also found to be associated with an increased risk of other allergies, including medication allergy (HR 1.70 and 1.84), anaphylaxis (HR 1.50 and 1.45) and, to a lesser extent, allergic rhinitis and asthma. As part of the same study, the researchers also looked at antibiotics in the first six months and, perhaps unsurprisingly, found a link between this type of medication and developing an allergic condition. In the case of antibiotics, children were more likely to develop allergic respiratory conditions such as asthma and allergic rhinitis than food allergies. The findings have biological plausibility, the researchers said in JAMA. Acid suppressive medications inhibit the breakdown of ingested protein which, in turn, facilitates IgE antibody production, increasing sensitivity to ingested antigens. The medications also, by definition, interfere with histamine, which researchers now believe has a greater role in modulating immune system functioning than previously thought. The association between increased allergy and antibiotics, on the other hand, supports findings from previous studies, and is thought to be related to the effect of the antibiotics on the gut bacteria or microbiome. It is one of a number of reasons why there has been growing pressure on clinicians to try to avoid prescribing antibiotics to infants.
“While there has been increasing recognition of the potential risks of antibiotic use during infancy, H2 [receptor antagonists] and PPIs are considered to be generally safe and are commonly prescribed for children younger than a year,” the study authors say. Among the almost 800,000 children included in the study, 7.6% had been prescribed an H2 receptor antagonist in infancy and 1.7% had had a PPI. The researchers did concede that a limitation of this study could be ‘the potential bias from reverse causality’. Namely, an infant’s symptoms of a food allergy could have originally been misdiagnosed as gastro-oesophageal reflux necessitating acid suppression, or early symptoms of asthma could have mistakenly been thought to be an indicator of a bacterial respiratory infection. However, the authors say, this is unlikely to be the whole story. Such scenarios cannot explain the increased rates of anaphylaxis or urticaria or medication allergy. And many food allergies don’t develop until well after the first six months, so it would be unlikely that allergy would have caused the symptoms experienced by an infant. All in all, best practice, according to these researchers, is to minimise the use of acid suppressive medications and antibiotics in children, particularly those less than six months old. “This study provides further impetus that antibiotics and acid suppressive medications should be used during infancy only in situations of clear clinical benefit,” they concluded.

Dr Linda Calabresi

Patients are still not consistently being prescribed exercise despite the wealth of evidence that shows its health benefit, according to an editorial in the latest issue of the MJA. The authors, all sports medicine specialists, point to statistics showing that physical inactivity is the fourth leading cause of morbidity and mortality worldwide. And they reiterate the well-proven benefits of exercise in helping to manage a wide array of chronic diseases, from diabetes to depression. Even though physicians have a good track record of influencing lifestyle factors, as evidenced by smoking cessation rates, it appears that when it comes to exercise GPs are dropping the ball. “Most physicians do not regularly assess or prescribe physical activity or specific exercises,” said the editorial authors, who included GP Dr Anita Green, Chief Medical Officer of the Gold Coast Commonwealth Games. “Even when exercise is advised by physicians, the advice is often not specific or in depth, and simple evidence-based behaviour modification techniques are not routinely used.” But why is this advice, which is also recommended in the RACGP Handbook of Non-Drug Interventions, not being given to patients as a matter of routine? One of the greatest barriers, according to the editorial, is the clinician not practising what he or she should be preaching. “It has been consistently shown that physically active clinicians are more likely to provide physical activity counselling to their patients,” the authors said. And apparently the medical profession could do better in terms of regular exercise. Physical activity levels have been shown to decline during medical training and through residency, perhaps unsurprisingly. More emphasis needs to be placed on the importance of physical activity and exercise prescription as part of both undergraduate and postgraduate training, not only to help clinicians to help their patients but also to help clinicians help themselves.
According to the editorial, the current Gold Coast 2018 Commonwealth Games are likely to inspire the next generation of elite athletes to commit to specialised exercise regimens and dedicated training rituals. However, for the vast majority of the sports-viewing population, the spectacle is unlikely to prove sufficiently inspirational to prise them off the couch. If the medical profession really wants to achieve better health outcomes for its inactive patients, it appears it needs to lead by example. “Physicians should unequivocally incorporate physical activity into their own daily routine, for their own health benefit, and to become an exercise role model, more confident in prescribing exercise to their patients,” the authors concluded. Ref: MJA doi: 10.5694/mja18.00033

Dr Jane Nankervis

Eczema (eczematous inflammation) is the most common inflammatory disease of skin. These rashes are itchy and recognised by erythema, scale and vesicles, but can have secondary changes of infection, irritation or scratching. The term “dermatitis” is a broader, non-specific term and is not synonymous with eczema. There are three stages of evolution (acute, subacute and chronic) and numerous presentations depending on stage, age and aetiology. Histologically, the eczematous inflammatory processes have in common the spongiotic tissue reaction. Spongiosis refers to intra-epidermal oedema which resembles a sponge.

Stages of eczema

Acute eczema

Clinical: Red, swollen, pebbly plaques. History of contact with specific allergen or chemicals. For the id reaction, the vesicles will occur at distant sites. Histology: Spongiosis, spongiotic vesiculation, intercellular oedema, perivascular dermal inflammation and occasional eosinophils.

Subacute eczema

Clinical: Red, scaly lesions with indistinct borders, which may resemble psoriasis or fungal infections. Any allergic contact, asteatotic, atopic, nappy-related, chemical exposure, irritant contact, nummular, perioral-lick or stasis dermatitis may present this way. Histology: Less spongiosis and exocytosis (presence in the epidermis) of lymphocytes than the acute form, and thickening (acanthosis) of the epidermis, which may become psoriasiform. Parakeratosis, perivascular dermal inflammation and oedema are also present.

Chronic eczema

Clinical: Thick skin, skin lines accentuated (lichenified), fissures and excoriations. Caused by irritation of any subacute form, or appearing as lichen simplex chronicus. If the lichen simplex chronicus forms a local lump it is referred to as a prurigo nodularis. Histology: Hyperkeratosis, psoriasiform thickening of the epidermis, mild spongiosis, dermal mast cells and eosinophils. Lichen simplex chronicus will show marked hyperkeratosis, hypergranulosis, long epidermal ridges and vertical streaking of collagen in the dermis. Prurigo nodularis has thick, possibly excoriated, hyperkeratotic epidermis and marked dermal fibrosis and inflammation.

Specific types of eczema

Atopic dermatitis (atopic eczema)

Atopic dermatitis is an itchy, chronic relapsing skin disease which often starts in childhood. There is a personal or family history of dry skin, eczema, hay fever, asthma and elevated serum IgE levels. There are essential and important clinical criteria, and the diagnosis requires exclusion of other conditions (e.g. scabies, contact dermatitis, psoriasis, photosensitive dermatosis). The pathogenesis and aetiology are not entirely clear, and the disease is increasing in frequency.

Nummular dermatitis (discoid eczema)

Clinical: A chronic disorder of adults, of unknown aetiology, not related to atopy, but possibly to dry skin. Papules and papulovesicles coalesce to form nummular plaques 1-4 cm in diameter with oozing, crust, and scale. They are paler and less scaly than psoriasis. The most common sites of involvement are the upper extremities, including the dorsal hands, in women, and the lower extremities in men. Histology: Varies with duration. Spongiosis with mild acanthosis and exocytosis of inflammatory cells in earlier lesions. With time, the degree of acanthosis (thickening) increases. Additional features include scale-crust formation above the thickened epidermis and a dermal perivascular inflammatory infiltrate.

Contact dermatitis

The causative substance may be an irritant or an allergen. An irritant (e.g. concentrated solvents, soaps) will cause a non-immunological reaction in any exposed person. Allergic reactions occur in predisposed people on the basis of a delayed hypersensitivity reaction to a substance at low concentration, and evolve rapidly at the site once the person is sensitised. Occupational contact dermatitis is common and may be of irritant or allergic (or both) types.

Irritant contact dermatitis

Commonly provoked by an environmental substance (e.g. water, detergents and other chemicals) where the epidermal barrier is compromised; it therefore occurs most commonly on the hands, but can affect any site where external stimuli are suspected. Histology: Mild spongiosis, epidermal cell (keratinocyte) necrosis, and neutrophilic infiltration of the epidermis.

Allergic contact dermatitis

Clinical: Follows exposure to, and absorption of, an antigen through the skin. Most allergens are weak, and there may be repeated exposure before sensitisation. The shape and location of the rash are the best clues. Histology: Acute, subacute or chronic dermatitis may be seen. The dermal inflammatory infiltrate predominantly contains lymphocytes and other mononuclear cells. Occasional atypical T-cell infiltrates may simulate mycosis fungoides.

Stasis dermatitis

Clinical: Occurs on the legs where venous drainage is impaired; however, most patients with venous insufficiency do not develop dermatitis. Unfortunately, topical medicines used in this situation often contain potential sensitising agents. Histology: Mild spongiosis, foci of parakeratosis and scale crust. Dermal changes are prominent, with neovascularisation, haemosiderin deposition and varying degrees of fibrosis (depending on chronicity), and there is often ulceration.

Seborrhoeic dermatitis

Seborrhoeic dermatitis is a common and chronic disease which most commonly occurs on the scalp and face, secondary to toxic substances produced by yeasts (Malassezia), with genetic and environmental factors contributing. Histology: Spongiosis at the side of a hair follicle, often with overlying scale crust, which may be acute, subacute or chronic depending on the lesion biopsied. Neutrophils within the epidermis or stratum corneum should prompt a search for yeasts with a PAS stain. More chronic lesions show progressive psoriasiform hyperplasia of the epidermis with less spongiosis. There is mild oedema of the papillary dermis with a mild superficial perivascular infiltrate of lymphocytes, histiocytes and neutrophils.

Asteatotic eczema

Asteatotic eczema develops as the result of very dry skin. It is most common in the elderly and on the lower limbs. Histology: Usually a mild subacute spongiotic dermatitis, with a compact and irregular stratum corneum.

Id reaction

Clinical: ‘Autoeczematisation’: generalised eczema in response to a localised dermatosis or infection at a distant site. It can present as a pompholyx-like reaction affecting the hands or as a more generalised papular eruption, and will resolve when the acute initiating process is controlled. Histology: Mimics that of the initial localised dermatosis, or shows a spongiotic reaction of varied intensity. Mild dermal oedema and lymphocytic infiltration are seen.

References: Habif T. Clinical Dermatology. 6th edition. Elsevier; 2016. Chapters 3-5. Patterson JW (editor). Weedon’s Skin Pathology. 4th edition. Churchill Livingstone Elsevier. Ackerman AB. Histological Diagnosis of Inflammatory Skin Diseases. 2nd edition. Williams & Wilkins. Dermnet skin disease atlas at dermnet.com and DermNet NZ online at dermnetnz.org.
General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge as part of an educational initiative developed and coordinated by Sonic Pathology.
Dr Linda Calabresi

Post-menopausal women experiencing vulvovaginal symptoms will benefit just as much from using the cheapest over-the-counter lubricant or moisturiser as from using topical oestrogen, a new study suggests. The 12-week randomised clinical trial, published in JAMA Internal Medicine, compared the efficacy of a low-dose vaginal oestradiol tablet and a vaginal moisturiser, each versus placebo, among a group of over 300 post-menopausal women with moderate to severe vulvovaginal symptoms. To determine the effectiveness of treatment, women were asked to report on the severity of their ‘most bothersome symptom’, which included pain with vaginal penetration (60%), dryness (21%), itching (7%), irritation (6%) and pain (5%). Across the board, regardless of which treatment was used, most women had a decrease of at least 50% in symptom severity over the course of the study. This was significant in light of the fact that most women said they were ‘frequently’ or ‘always’ distressed about their sex life at enrolment, whereas after the 12-week study nearly half said they were ‘rarely’ or ‘never’ distressed. “No treatment group differences in symptom reduction were observed for vaginal oestradiol tablet plus placebo gel vs dual placebo, or vaginal moisturiser plus placebo tablet vs dual placebo”, the US researchers reported. And it didn’t matter whether the most bothersome symptom was dyspareunia or itching; it appeared the hormone treatment or the specific vaginal moisturiser (Replens) had no advantage over the placebo combination. According to the study authors, the placebo gel used in the study had a similar pH and viscosity to the vaginal moisturiser (Replens) but was less mucoadhesive. The fact that both formulations were equally effective in reducing symptoms suggests that the mucoadhesive properties are less important than previously thought. 
Similarly, markers of vaginal oestrogenisation, such as the vaginal maturation index, did, naturally, improve more with the topical oestrogen, but this did not translate into a greater benefit in terms of symptoms over placebo. As an accompanying editorial points out, “ultimately, it is improvement in symptoms rather than surrogates such as tissue markers that should define the goal of care.” And while the study authors conclude that treatment choice for women with troublesome postmenopausal vulvovaginal symptoms should be ‘based on individual patient preferences regarding cost and formulation’, the editorial authors go much further. “[P]ostmenopausal women experiencing vulvovaginal symptoms should choose the cheapest moisturiser or lubricant available over the counter – at least until new evidence arises to suggest there is any benefit to doing otherwise.” Ref: JAMA Intern Med. doi:10.1001/jamainternmed.2018.0116 JAMA Intern Med. doi:10.1001/jamainternmed.2018.0094

Dr Vivienne Miller

Let us imagine that a significant side-effect from a contraceptive choice occurs and a patient suffers harm. It is a known but very rare side-effect. How much legal and ethical responsibility lies with the doctor who prescribes the contraceptive, how much lies with the medical experts advocating this form of contraception as reasonable and safe, and how much lies with the pharmaceutical company that researched this product? Should this contraceptive be withdrawn from use, and if so, why would it still be available and advised for use in other countries around the world? A reasonable response to this question would include an assessment of the incidence of this particular complication among all users of this contraceptive, the incidence of any other significant complications, and the outcomes for the patients who experience these complications. However, let us imagine the media finds this story and runs with it, giving widespread coverage of this single case and highlighting the contraceptive as the cause. This is the situation at present with the progestogen IUD, Mirena®, in the United States. It is also the case with oral contraceptive pills that contain cyproterone acetate (such as Diane-35®) in Australia. Contraceptive pills like Diane-35® are more oestrogenic in their balance, and this could potentially increase their risk of venous thromboembolism (VTE), although this remains somewhat controversial. Diane-35® was temporarily banned in France because of this in 2013. However, the risks need to be put in perspective. Even if the worst-case scenario is accepted, the actual increased risk of VTE for these newer pills over older types is an extra four to six VTEs per 10,000 pill users per year.1 “The risk of death from a VTE induced by a combined oral contraceptive is approximately one in 100,000, significantly less than the risk during pregnancy,” said Dr Foran. 
It is known that oral contraceptive pills containing 35µg ethinyl oestradiol and cyproterone acetate are being prescribed in Australia for indications beyond contraception, namely androgenising signs in women. It is also known that, for some women, these pills provide the best control of their symptoms. In Europe, the regulatory authorities decided that the benefits outweighed the rare risks for properly selected patients, and this OCP was reintroduced to the market after only six months. However, in Australia there have recently been calls for the banning or restriction of this product following the diagnosis of a VTE in a young woman. How reasonable is it in our society to allow the traumatic stories of individuals to override medical opinion and determine regulation? The public needs to be made to realise not only that these products are very safe for the overwhelming majority of women, when prescribed appropriately, but also that they are much safer for women than an unplanned pregnancy would be. It might be valid to argue that there are other combined oral contraceptives that are ‘safer’ than those containing 35µg ethinyl oestradiol and cyproterone acetate, or that cyproterone acetate is available separately for use. However, what happens when one of these other oral contraceptive choices causes a major medical event in a different woman? In the UK, doctors have been advised to warn patients that there is an increased risk of VTE, with Femodene®, Marvelon® and Yasmin® named as some examples. The Daily Mail UK2 ran a massive headline to this effect: “Deadly risk of pill used by one million women: Every GP in Britain told to warn about threat from popular contraceptive”. If media and legal pressure is allowed to result in the withdrawal of these medications, at some stage there will be no oral contraceptive choices left. 
The seriousness of the situation is highlighted in the case of the Mirena® IUD, since there is no similar alternative to this product in Australia. In the United States, this contraceptive device has been under a cloud of bad publicity since 2009, due to US Food and Drug Administration warnings relating to migration and perforation. Since then, the Mirena® IUD has been scrutinised by patients with side-effects and, of course, lawyers. “The real question here is whether hysterectomy or endometrial ablation is a safer option than the Mirena® IUD for women with heavy menstrual bleeding,” said Dr Foran. The maintenance of a range of choices is important, and women should have the right to make these decisions for themselves in consultation with their doctors. The Mirena® IUD is also a safe form of contraception, especially for women who have thrombophilias and for older premenopausal women, most of whose other choices are less safe. Is it still enough for doctors to fully inform women of the side-effects and complications of their contraceptive options and to let them decide, or is modern contraception becoming a very personal, public and legal battlefield, the main casualties being expert medical advice and a woman’s choice? …and in the end, who is left holding the baby?  
  1. Bitzer J et al. Statement on combined hormonal contraceptives containing third- or fourth-generation progestogens or cyproterone acetate and the associated risk of thromboembolism. J Fam Plann Reprod Health Care. doi:10.1136/jfprhc-2013-100624 http://srh.bmj.com/content/familyplanning/early/2013/04/10/jfprhc-2013-100624.full.pdf
  2. Daily Mail, UK, 22nd Feb 2018 http://www.dailymail.co.uk/news/article-2550216/Deadly-risk-pill-used-1m-women-Every-GP-Britain-told-warn-threat-popular-contraceptive.html
  This article is based on an interview with Dr Terri Foran, Sexual Health Physician, Lecturer with UNSW’s School of Women’s and Children’s Health and Director of the Master of Women’s Health Medicine, on Saturday 17th February 2018 at the Annual Women’s and Children’s Health Update, Sydney.

Dr Linda Calabresi

Want to preserve your brain function into old age? Cut down on the booze. That’s the conclusion of a large, longitudinal study just published in JAMA Psychiatry. After comparing more than 200 alcohol-dependent adults with a similar number of healthy adults over a 14-year period, the US researchers concluded alcohol dependence accelerated the cortical ageing process even if the alcohol habit developed later in life. They found, through a series of MRIs, that alcohol dependence (as per the DSM-IV criteria) resulted in more rapid frontal lobe deterioration than that which occurred with age alone, regardless of gender. As part of the study, the researchers also looked at whether comorbidities such as drug use or hepatitis C infection made a difference to the decline in cognitive function. And while they found these compounded the shrinkage of the frontal lobe, the actual deficits in the frontal cortex seemed to be associated chiefly with the alcohol. “We observed a selectivity of frontal cortex to age-alcoholism interaction beyond normal aging effects and independent of deficits related to drug dependence,” they said. The researchers also found that the deterioration was more associated with current drinking habits than with the cumulative effect of many years of alcohol abuse. People who had become alcohol dependent later in life were just as vulnerable as people whose alcohol use disorder started when they were younger. “The accelerated volume deficits in the older alcohol-dependent participants could not readily be attributed to more years of heavy drinking, given that many had a late onset of their disorder and lower lifetime alcohol consumption estimates than their early-onset counterparts,” the study authors said. So it appears the frontal cortex, the part of the brain that helps people plan, reason, modify behaviours and problem solve, is the most vulnerable to damage in people with alcohol use disorder. 
Add this to the fact that the frontal cortex deterioration associated with ageing is fundamentally responsible for the deterioration in executive function that limits an elderly person’s ability to function and live independently, and you have a recipe for disaster for those older people who drink to excess. What does this all mean? An accompanying editorial makes the take-home message quite clear: “Given the rapidly growing aging population… it is critical that we improve and implement strategies to address alcohol misuse among older drinkers. As Yoda might say, “Protect their brains, we must.” Ref: JAMA Psychiatry. Published online March 14, 2018. doi:10.1001/jamapsychiatry.2018.002 JAMA Psychiatry. Published online March 14, 2018. doi:10.1001/jamapsychiatry.2018.0009