Articles

Read the latest articles relevant to your clinical practice, including exclusive insights from Healthed surveys and polls.


Dr Perminder Sachdev

When people think of lithium, it’s usually to do with batteries, but lithium also has a long history in medicine. Lithium carbonate, or lithium salt, is mainly used to treat and prevent bipolar disorder. This is a condition in which a person experiences significant mood swings from highs that can tip into mania to lows that can plunge into depression. More recently, though, lithium has been explored as a potential preventive therapy for dementia. A recent paper even led some to question whether we should start putting lithium in drinking water to lower population dementia rates.
But despite early studies linking lithium to better cognitive function, there is currently not enough evidence to start using it as a preventive dementia strategy.

Lithium’s medical history

Lithium is a soft, light, silvery metal present in many water systems, which means humans have always been exposed to it. Its concentration in water ranges from undetectable to very high, especially in geothermal waters and oil-gas field brines. The high concentration of lithium in some natural springs led to its association with healing, and in the 19th century lithium water was used to treat gout and rheumatism, albeit with little objective evidence of any benefit. Early attempts to treat diseases such as kidney stones with higher doses of lithium often led to lithium toxicity – potentially irreversible damage to the kidneys and brain.
The landmark event in the medical history of lithium was a 1949 paper by Australian psychiatrist John Cade in the Medical Journal of Australia, which demonstrated its benefit in bipolar disorder, then known as manic-depressive illness. The psychiatric community took some time to absorb this finding – the US regulator, the Food and Drug Administration, only approved lithium for use in 1970. After that, lithium transformed psychiatric practice, especially in the treatment and prevention of bipolar disorder, and prompted extensive research into its mechanisms of action in the brain.
Read more: What is bipolar disorder?

How lithium affects the brain

We don’t know exactly how lithium works, but we know it supports the way brain cell connections remodel themselves, usually referred to as synaptic plasticity. It also protects neurons by regulating cellular pathways, such as those involved in oxidative stress (where the brain struggles to control toxins) and inflammation. Animal studies have shown that long-term treatment with lithium improves memory and learning.
These observations led to studies of lithium’s protective effects on neurons in bipolar patients who had been taking it for a long time. One of these was a review of more than 20 studies, seven of which examined dementia rates in patients with mood disorders (such as bipolar) being treated with standard therapeutic doses of lithium. Five of these studies showed lithium treatment was associated with lower dementia rates.
The review also looked at four randomised controlled trials (comparing one group of patients on lithium with a group taking a placebo) that examined lithium’s effects on cognitive impairment (such as memory loss) or dementia over six to 15 months. One trial did not show a statistically significant benefit on cognition but did show a biologically positive effect on levels of a protein that promotes nerve cell growth. The other three showed statistically significant, albeit modest, beneficial effects of lithium on cognitive decline.
Read more: How we can protect our brains from memory loss and dementia

Lithium in water

A number of epidemiological studies – which track patterns and causes of diseases in populations – have linked lithium concentrations in drinking water with rates of psychiatric disease. In the above-mentioned review, nine out of 11 studies found an association between trace-dose lithium (doses low enough in drinking water that they are not detectable in the blood of the people consuming it) and lower rates of suicide and, less commonly, homicide, mortality and crime.
More recently, researchers in Denmark conducted a nationwide study linking dementia rates, based on hospital records for people aged 50-90, with their likely exposure to lithium, estimated from the lithium levels in the waterworks predominantly supplying the region where they lived. Those who developed dementia came from regions with a lower mean level of lithium in the water than those who did not: 11.5 micrograms (µg) per litre compared with 12.2µg per litre.
The Danish population is geographically stable and the health record linkage is excellent for such studies. The reliability and validity of dementia diagnoses in Danish health registers is also high. But the study had a number of limitations. The lithium intake estimates were based on sampling of waterworks that supply water to only 42% of the population; the sampling was done over only four years (2009-2013) and extrapolated to a lifetime; and many potential additional variables were not considered. For instance, a major source of lithium is diet, and some bottled water contains lithium; the study did not take this into account.
An intriguing aspect of the results, for which no explanation was given, was that the relationship wasn’t linear. Lower doses (5.1-10µg per litre) were associated with an increase in dementia risk of about 20%, whereas exposure to levels over 15µg per litre was associated with a reduction in risk of about the same amount.

We’re not there yet

Observational studies (which draw inferences by observing samples of the population rather than by experiment) have considerable merit in the epidemiology of dementia, but they have sometimes led to blind alleys. Aluminium is a useful example: its role in dementia remains unclear after several decades of observations. A concern is that lithium may take the same path.
Read more – In defence of observational science: randomised experiments aren’t the only way to the truth
Lithium was once widely used as an elixir and even as a salt substitute, but was discredited because of its lack of effectiveness, marked toxicity and associated early deaths. We must wait for more observational studies, conducted with the rigour such studies warrant, before we start clinical tests of its effects in drinking water. We must also study the potential harmful effects of lithium on the thyroid and the kidney, as these organs bear the brunt of lithium’s long-term harms. For now, there is insufficient evidence to add lithium to the drinking water.
Perminder Sachdev, Scientia Professor of Neuropsychiatry, Centre for Healthy Brain Ageing (CHeBA), School of Psychiatry, UNSW. This article was originally published on The Conversation. Read the original article.
Dr Linda Calabresi

New US guidelines are the most aggressive yet in terms of targets for blood pressure control. Put out by the American College of Cardiology and the American Heart Association, and published in JAMA, the guidelines recommend we now consider anyone with a BP of 120/80 mmHg or above as having abnormal blood pressure. People who have a systolic of 120-129 mmHg with a diastolic still below 80 mmHg are considered to have elevated BP, while those with a systolic of 130-139 mmHg or a diastolic of 80-89 mmHg are now classified as having stage 1 hypertension. An accompanying editorial estimates this reclassification will result in a 14% increase in the proportion of the US population recognised as having hypertension.
But before clinicians start reaching for the script pad, the guidelines recommend stage 1 hypertension be initially treated with non-pharmacological therapies – basically addressing the factors that most likely pushed the blood pressure up to start with: lose weight, exercise more, reduce salt intake, cut down on alcohol. The exception is the group of patients whose absolute 10-year CVD risk is 10% or more. In these cases, it’s gloves off.
The less-than-130/80 target for high-risk patients is very similar to Australian guidelines. What’s different is that this is now the recommended target for everyone. The new US guidelines recommend everyone with a BP over 140/90 mmHg be treated with medication (preferably two agents) regardless of their absolute CV risk, whereas our Heart Foundation says to persist with lifestyle changes in people with very low CV risk and no other comorbidities until BP reaches the 160/100 mmHg mark.
The other new development in the US guidelines is the recommendation to use BP measurements from ambulatory or home BP monitoring both to confirm a diagnosis of hypertension and to titrate therapy. This is in keeping with Australian recommended practice.
The US guidelines were developed by an expert committee after examining the current evidence and conducting a series of systematic reviews of key clinical questions. “From a public health perspective, considering the high population-attributable risk of CVD associated with hypertension, the potential benefits of tighter control of hypertension are substantial,” the guideline authors wrote. However, they acknowledge that such an aggressive approach carries risks, especially in the elderly. “Although studies do suggest that lower BP is better for most patients, including those older than 75 years, the balance of the potential benefits of hypertension management and medication costs, adverse effects, and polypharmacy must be considered for each individual patient,” they said.
Ref: JAMA. Published online November 20, 2017. doi:10.1001/jama.2017.18706
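For readers who like the new categories spelled out explicitly, the reclassification can be written as a simple decision rule. The sketch below is illustrative only – a plain restatement of the 2017 ACC/AHA cut-offs described above, with function names of my own choosing, not clinical software:

```python
def classify_bp(systolic: float, diastolic: float) -> str:
    """Classify a BP reading (mmHg) per the 2017 ACC/AHA categories
    described above. Illustrative sketch only, not clinical software."""
    if systolic >= 140 or diastolic >= 90:
        return "stage 2 hypertension"  # medication recommended
    if systolic >= 130 or diastolic >= 80:
        return "stage 1 hypertension"  # lifestyle first, unless 10-year CVD risk >= 10%
    if systolic >= 120:
        return "elevated"              # non-pharmacological measures
    return "normal"

print(classify_bp(124, 76))  # elevated
print(classify_bp(132, 78))  # stage 1 hypertension
```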

Dr Lee Price

High-sensitivity (HS) troponin measurement in the emergency room/hospital setting is now widely established in Australia and is being recommended for widespread implementation in the USA. Lower cut-offs within the normal range may find value as a single determinant for exclusion purposes in the acute emergency setting; however, because HS troponin may be elevated in a number of non-coronary cardiac conditions, a rise and/or fall in the level is usually required for diagnosis of a coronary infarct.1 In unstable angina pectoris, the troponin level may be normal, as may the ECG if the patient is pain free at the time.
Two articles in the Medical Journal of Australia published in the past three years have addressed the issues surrounding ordering of the test in general practice.1,2 In both articles the authors agree there are times when a single measurement of HS troponin can be useful clinically, and times when it can be counterproductive. Firstly, it is agreed that a patient with classical features of acute coronary syndrome (ACS), with or without ECG findings, who has had pain in the 24 hours prior to assessment should be referred urgently to an emergency centre without troponin measurement. The turnaround time for an urgent troponin in most acute hospitals is of the order of 60 minutes or less. In the community private pathology setting, the turnaround time for a troponin result, even when treated as urgent, can be anywhere from four to 12 hours. That usually means the result is only available after hours, when the ordering clinician is frequently unavailable to receive or act on it.
A troponin can be useful in the general practice setting if the patient has had atypical chest pain with a low but not negligible likelihood of ACS, or if the patient has been pain and symptom free for 24 hours and has a normal ECG. After an infarct, troponin can remain elevated for over a week.
For the laboratory, an abnormal troponin on an urgent request requires phoning the result to the clinician. This may be after hours – even after midnight. Usually the context of the result is known only to the requesting clinician. If the requesting clinician is unavailable to receive the result after hours, the patient will usually be contacted by a pathologist or emergency services. After-hours doctor services are often uninterested in receiving or acting on critical results such as troponin.
In summary, there is a place for troponin measurement in general practice. Elevated levels are not uncommon due to causes other than ACS. Turnaround time for a result may be much longer when the sample is collected in a collection centre than in the hospital setting. When ordering an urgent troponin, please ensure the laboratory has a valid after-hours contact number.
References
1. Aroney CA, Cullen L. Appropriate use of serum troponin testing in general practice: a narrative review. MJA 2016; 205(2): 91-94.
2. Marshall GA, Wijeratne NG, Thomas D. Should general practitioners order troponin tests? MJA 2014; 201: 155-157.
General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge, as part of an educational initiative developed and coordinated by Sonic Pathology.

Dr Linda Calabresi

At a time when there is increasing pressure on GPs not to prescribe antibiotics, a new primary care study endorsing their role in the early treatment of uncomplicated UTI makes a welcome change. The trial, recently published in The BMJ, showed that early antibiotic treatment for a lower UTI not only significantly shortened the duration of symptoms, it also reduced the risk of the patient developing pyelonephritis.
However, the researchers stopped short of recommending that all women with lower UTI symptoms commence antibiotics at first presentation. In deference to rising rates of antibiotic resistance among UTI-causing bacteria, and the fact that little harm came to the women who were originally in the NSAID group but were eventually put on antibiotics, they effectively suggest a ‘just in case’ script. “[A] strategy of selectively deferring rather than completely withholding antibiotic treatment may be preferable for uncomplicated lower UTI,” they said. The only caveat they suggested to this strategy was for women with lower UTI symptoms and a CRP greater than 10mg/L, who appeared, in post hoc analysis, to have a greater likelihood of developing pyelonephritis and might therefore benefit from immediate antibiotics. But this would need further research, they suggested.
The Swiss study, a randomised, double-blind trial, involved more than 250 women who presented to their GP with symptoms of an uncomplicated lower UTI and were found to have leucocytes, nitrites or both on a urine dipstick test. The women were randomised to receive either norfloxacin or the NSAID diclofenac. The choice of norfloxacin as the antibiotic, which does seem a little like using a hammer to crack a nut, was based on pre-determined high susceptibility rates in this Swiss population; diclofenac was the NSAID chosen because it had the same dosing regimen as the norfloxacin.
Overall, symptoms were gone after a median of two days in the antibiotic group but lasted twice as long in the NSAID group, with the majority of the NSAID women eventually needing antibiotics. Also of note, 5% of women in the NSAID group developed pyelonephritis compared with none in the antibiotic group.
So even though research suggests we can safely withhold antibiotics in a number of self-limiting bacterial diseases such as acute otitis media, sinusitis and traveller’s diarrhoea, we should perhaps reconsider that strategy when treating UTIs, the study authors suggest.
BMJ 2017; 359: j4784. http://dx.doi.org/10.1136/bmj.j4784

Diana Lucia

Abstaining from alcohol during preconception and pregnancy is usually considered to be the woman’s responsibility. The main concern surrounding alcohol exposure during pregnancy relates to the well-established evidence that exposed newborns can develop a range of behavioural, physical and cognitive disabilities later in life. But recent research also points to a link between alcohol and poor sperm development, meaning the onus is on expectant fathers too.
A growing number of studies show biological fathers who drink alcohol may have a significant role in causing health problems in their children. Paternal alcohol consumption has been shown to have negative effects at all levels of the male reproductive system, as well as being linked to altered neurological, behavioural and biochemical outcomes in subsequent generations.
Read more: Hey dad, your health affects your baby’s well-being too

Men and risky drinking

In Australia, men regularly consume alcohol at high or risky levels. National health guidelines recommend no more than two standard drinks on any day. According to the National Alcohol and Drug Knowledgebase, Australian men usually drink more alcohol than women. Data show males are more than twice as likely as females to exceed two standard drinks per day on average over a 12-month period (24% compared with 9.8%). And about a third of males said they exceeded, on at least a monthly basis, the guideline not to drink more than five standard drinks on a single occasion.

Booze and swimmers

These figures are alarming given the compelling evidence about the impact of excessive, chronic or binge alcohol consumption on sperm, semen quality, fertility and child health.
Read more: Dads get postnatal depression too
Animal studies have shown a single dose of ethanol delivered into the stomach (equivalent to a human binge drinking) damages the testis, including the cells essential for sperm formation. In another experimental study, sperm health and fertility were assessed in male rats after alcohol was administered into the stomach for ten weeks. The results confirmed alcohol significantly reduced sperm concentration and the sperm’s ability to move properly. And none of the rats exposed to alcohol fertilised the females, despite confirmation of successful mating. Numerous other non-human studies have shown similar results, suggesting ethanol can damage sperm and fertility.
Studies in humans support these findings. A recent study of 1,221 young Danish men (18-28 years of age) tracked alcohol consumption in the week preceding the study to determine its effects on semen quality (volume, concentration, total count and shape). The results showed sperm concentration, total sperm count and the percentage of sperm with normal shape all worsened the more the men drank. This association was observed in men reporting at least five units of alcohol in a typical week, but was most pronounced in men with a typical intake of more than 25 units a week. This suggests even modest habitual alcohol consumption of more than five units a week can negatively affect semen quality.
Read more: Mother knows best? Fathers missing in research about kids
A recent review of studies and meta-analysis of population data replicated many of these findings. The main results showed daily alcohol intake at moderate to high levels had a detrimental effect on semen volume and normal shape.

The effects on children

Limited studies have tracked the drinking patterns of fathers around the time of conception and the subsequent health outcomes of their children. But rodent models have shown changes in offspring weight and development, learning and activity, and anxiety-related behaviours, as well as molecular and physiological effects. One study reported that women whose partners consumed ten or more drinks per week prior to conception had a two- to five-fold increased risk of miscarriage compared with those whose partners did not drink during preconception. Other studies provide preliminary evidence that paternal preconception alcohol use is associated with acute leukaemia at high-level use, heart malformations with daily use, microcephaly with low to moderate use, and effects on foetal growth and mild cognitive impairments.

How can alcohol affect kids before they’re born?

Exactly how alcohol alters developing sperm, and thereby the later health outcomes of the foetus, is still not fully understood. It’s been suggested alcohol can change the micro-environment within the testes, altering the development and maturation of sperm. It’s also been suggested alcohol can influence sperm through genetic alterations and epigenetic marks – changes to gene expression that occur without changes to the underlying DNA sequence. These epigenetic marks can be transferred at fertilisation, subsequently altering the molecular makeup of the early embryo, leading to alterations in foetal development and the potential to impair offspring health.
The biggest hurdle for researchers now is continuing to translate findings from the basic sciences into more sophisticated research in humans. The next stage is to identify patterns of alcohol use by men during the preconception period and their effects on foetal and childhood outcomes in the Australian context. But most importantly, we need to recognise that decisions about alcohol use during the preconception period are not the sole responsibility of women. We need to be talking to men about these issues to ensure healthy outcomes for the baby.
Diana Lucia, PhD candidate, Neuroscience, School of Biomedical Sciences, The University of Queensland and Karen Moritz, Professor, The University of Queensland. This article was originally published on The Conversation. Read the original article.
Dr Linda Calabresi

There has been a lot of noise around opioid use lately, particularly in the US, where it has been declared a public health emergency. While concerted efforts are being made to ensure patients experiencing chronic pain are not also left dealing with opioid addiction, in cases of severe, acute pain most doctors would consider pain relief the priority and opioids the gold standard. Well, it seems that too may need a rethink.
According to a new randomised controlled trial just published in JAMA, an oral ibuprofen/paracetamol combination works just as well at reducing pain, such as that from a suspected fractured arm, as a range of oral opioid combinations including oxycodone and paracetamol. The US researchers randomised over 400 patients, who presented to emergency with moderate to severe arm or leg pain severe enough to warrant investigation by imaging, to receive either an oral paracetamol/ibuprofen combination or one of three opioid combination analgesics: oxycodone/paracetamol, hydrocodone/paracetamol or codeine/paracetamol. Two hours after ingestion there was no statistically significant or clinically important difference in pain reduction between the four groups.
A limitation of the study was that it didn’t compare adverse effects. Nonetheless, the study authors said their findings support the use of the paracetamol/ibuprofen combination as an alternative to oral opioid analgesics, at least in cases of severe arm or leg pain. Their findings also contradict the long-held idea that non-opioid painkillers are less effective than opioids, an idea underpinned by the WHO pain ladder that has guided clinicians managing both cancer and non-cancer pain since 1986.
Even though most scripts for opioids are written in the community, previous research has shown that long-term opioid use is higher among patients who were initially treated in hospital. “Typically, treatment regimens that provide adequate pain reduction in the ED setting are used for pain management at home,” an accompanying editorial stated. “[This trial] provides important evidence that nonopioid analgesia can provide similar pain reduction as opioid analgesia for selected patients in the ED setting.” What’s more, the effectiveness of this paracetamol and ibuprofen combination for moderate to severe pain may also translate to more widespread use for acute pain in other clinical conditions traditionally treated with opioid medication; however, this would need further investigation, the editorial author concluded.
Ref: JAMA 2017; 318(17): 1661-1667. doi:10.1001/jama.2017.16190
JAMA 2017; 318(17): 1655-1656

Dr Daman Langguth

Research in rheumatoid arthritis (RA) over the past 10 years has gained significant ground in both pathophysiological and clinical understanding. It is now known that early aggressive therapy within the first three months of the development of joint symptoms decreases the chance of developing severe disease, both clinically and radiologically. To enable this early diagnosis, considerable effort has been made to discover serological markers of disease.
Around 80% of RA patients become rheumatoid factor positive (IgM RF), though this can take many years to occur. In other words, IgM RF (hereafter called RF) has low sensitivity in the early stages of RA. Furthermore, patients with other inflammatory diseases (including Sjögren’s syndrome and chronic viral and bacterial infections) may also be positive for RF, so RF has a relatively low specificity for RA. RF is, therefore, not an ideal test for the early detection and confirmation of RA.
There has been an ongoing search for an auto-antigen in RA over the past 30 years. It has long been known that senescent cells display antigens not present on other cells, and that RA patients may make antibodies against them. This was first reported with the anti-perinuclear factor (APF) antibodies directed against senescent buccal mucosal cells in 1964, but the test was challenging to perform and interpret. These cells were later found to contain filament aggregating protein (filaggrin). Subsequently, in 1979, antibodies directed against keratin (anti-keratin antibodies, AKA) in senescent oesophageal cells were discovered. In 1994, another antibody, named anti-Sa, was discovered that reacted against modified vimentin in mesenchymal cells. In the late 1990s, antibodies directed against citrullinated peptides were ‘discovered’. In fact, we now know that all of the aforementioned antibodies detect similar antigens.
As cells age, some of their structural proteins undergo citrullination under the direction of cellular enzymes: arginine residues are deiminated to form the non-standard amino acid citrulline. Citrullinated peptides fit better into the HLA-DR4 molecules that are strongly associated with RA development, severity and prognosis. Many types of citrullinated peptides are present in the body, both inside and outside joints. Sera from individual RA patients contain antibodies that react against different citrullinated peptides, but no individual’s antibodies react against all possible citrullinated peptides. Thus, to improve the sensitivity of citrullinated peptide assays, cyclic citrullinated peptides (CCP) have been artificially generated to mimic a range of conformational epitopes present in vivo. It is these artificial peptides that are used in the second-generation anti-CCP assays. Sullivan Nicolaides Pathology uses the Abbott Architect assay, which is standardised against the Axis-Shield (Dundee, UK) second-generation CCP assay.
False positive CCP antibodies have recently been reported in acute viral (e.g. EBV, HIV) and some atypical bacterial (Q fever) seroconversions. The antibodies may be present for a few months after seroconversion, but do not predict inflammatory arthritis in these individuals.

Anti-CCP assays

CCP antibodies alone give a sensitivity of around 66% in early RA, similar to RF, but have a much higher specificity of >95% (compared with around 80% for RF). The combination of anti-CCP and RF is now considered the ‘gold standard’ in the early detection of RA: combining the two tests enables approximately 80% of RA patients to be detected in the early phase (less than six months’ duration) of the disease.
The presence of anti-CCP antibodies has also been shown to predict which RA patients will go on to develop more severe joint disease, both radiologically and clinically, and they appear to be a better marker of disease severity than RF. Anti-CCP antibodies have also been shown to be present prior to the development of clinical disease, and thus may predict the development of RA in patients with uncharacterised recent-onset inflammatory arthritis. At present, it is not known whether monitoring the level of these antibodies will be useful as a marker of disease control, though some data in patients treated with biologic agents (e.g. etanercept, infliximab) suggest they may be. It has not been determined whether the absolute level of CCP antibodies allows further disease risk stratification. Our pathology laboratories report CCP antibodies quantitatively – normal is less than 5 U/mL, with a reportable range of up to 2000 U/mL.
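To illustrate why the higher specificity matters at the bedside, here is a rough Bayes calculation. The pre-test probability of 10% is my own illustrative assumption, not a figure from this column; only the sensitivity and specificity values come from the text above:

```python
def ppv(sensitivity: float, specificity: float, pretest: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * pretest
    false_pos = (1 - specificity) * (1 - pretest)
    return true_pos / (true_pos + false_pos)

# Assumed 10% pre-test probability of early RA (illustrative only).
print(f"anti-CCP (spec 95%): {ppv(0.66, 0.95, 0.10):.0%}")  # ~59%
print(f"RF alone (spec 80%): {ppv(0.66, 0.80, 0.10):.0%}")  # ~27%
```

In other words, at the same sensitivity, the sharper specificity of anti-CCP roughly doubles the chance that a positive result truly indicates RA in this (assumed) setting.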
References
  1. ACR Position statement on anti-CCP antibodies. http://www.rheumatology.org/publications/hotline/1003anticcp.asp
  2. Forslind K, Ahlmen M, Eberhardt K et al. Prediction of radiologic outcome in early rheumatoid arthritis in clinical practice: role of antibodies to citrullinated peptides (anti-CCP). Ann Rheum Dis 2004; 63: 1090-5.
  3. Huizinga TWJ, Amos CI, van der Helm-van Mil AHM et al. Refining the complex rheumatoid arthritis phenotype based on specificity of the HLA-DRB1 shared epitope for antibodies to citrullinated proteins. Arthritis Rheum 2005; 52: 3433-8.
  4. Lee DM, Schur PH. Clinical utility of the anti-CCP assay in patients with rheumatic disease. Ann Rheum Dis 2003; 62: 870-4.
  5. Van Gaalen FA, Linn-Rasker SP, van Venrooij WJ et al. Autoantibodies to cyclic citrullinated peptides predict progression to rheumatoid arthritis in patients with undifferentiated arthritis. Arthritis Rheum 2004; 50: 709-15.
  6. Zendman AJW, van Venrooij WJ, Pruijn GJM. Use and significance of anti-CCP autoantibodies in rheumatoid arthritis. Rheumatology 2006; 46: 20-5.

General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge, as part of an educational initiative developed and coordinated by Sonic Pathology.
Dr Linda Calabresi

Looks like there is yet another reason to rethink the long-term use of proton pump inhibitors. And this one is a doozy. According to a new study recently published in the BMJ journal Gut, long-term use of PPIs is linked to a more than doubling of the risk of developing stomach cancer. And before you jump to the reasonable conclusion that these patients might have had untreated Helicobacter pylori, this 2.4-fold increase in gastric cancer risk occurred in patients who had had H. pylori but had been successfully treated more than 12 months previously. What’s more, the risk increased proportionally with the duration and dose of PPI use, which the Hong Kong authors said suggested a cause-effect relationship. No such increased risk was found among patients who took H2 receptor antagonists.
While the study was observational, the large sample size (more than 63,000 patients with a history of effective H. pylori treatment) and the relatively long duration of follow-up (median 7.6 years) lent validity to the findings. The link between H. pylori and gastric cancer has been known for decades, and it has been shown that eradicating H. pylori reduces the risk of developing gastric cancer by 33-47%. However, the study authors said, a considerable proportion of these individuals go on to develop gastric cancer even after they have successfully eradicated the bacteria. “To our knowledge, this is the first study to demonstrate that long-term PPI use, even after H. pylori eradication therapy, is still associated with an increased risk of gastric cancer,” they said.
By way of explanation, the researchers note that gastric atrophy is considered a precursor to gastric cancer. While gastric atrophy is a known sequela of chronic H. pylori infection, it could also be worsened and maintained by the profound acid suppression associated with PPI use, which could be why the risk persisted even after the infection had been treated. Bottom line? According to the study authors, doctors need to ‘exercise caution when prescribing long-term PPIs to these patients even after successful eradication of H. pylori’.
Ref: Gut 2017; 0: 1-8. doi:10.1136/gutjnl-2017-314605

Dr Linda Calabresi

Self-harm among teenagers is on the increase, a new study confirms, and frighteningly it’s our younger girls who appear most at risk. According to a population-based UK study, the annual incidence of self-harm increased by an incredible 68% between 2011 and 2014 among girls aged 13-16, from 46 per 10,000 to 77 per 10,000. The research, based on analysis of electronic health records from over 670 general practices, also found that among the almost 17,000 young people (aged 10-19 years) studied, girls were three times more likely to self-harm than boys.
The importance of identifying these patients and implementing effective interventions was highlighted by the study’s other major finding. “Children and adolescents who harmed themselves were approximately nine times more likely to die unnaturally during follow-up, with especially noticeable increases in risks of suicide…, and fatal acute alcohol and drug poisoning,” the BMJ study authors said. And lest you think this might be a problem unique to the UK, the researchers referred to an Australian population-based cohort study published five years ago which found that 8% of adolescents aged under 20 reported having harmed themselves at some time.
The UK study also showed that the likelihood of referral was lowest in the most deprived areas, even though these were the areas where the incidence was highest – an example of the ‘inverse care law’, where the people in most need get the least care. While the link between social deprivation and self-harm might be understandable, the researchers were at a loss to explain the recent sharp increase in incidence among 13-16 year old girls in particular. What they could say is that by analysing general practice data rather than inpatient hospital data, an additional 50% of self-harm episodes in children and adolescents were identified. In short, a self-harming teenager is much more likely to engage with their GP than to present to a hospital service.
And even though, as the study authors concede, there is little evidence to guide the most effective way to manage these children and adolescents, the need for GPs to identify these patients and intervene early is imperative. “The increased risks of all cause and cause-specific mortality observed emphasise the urgent need for integrated care involving families, schools, and healthcare provision to enhance safety among these distressed young people in the short term, and to help secure their future mental health and wellbeing,” they concluded.
BMJ 2017; 359: j4351. doi: 10.1136/bmj.j4351

Shomik Sengupta

Bladder cancer affects almost 3,000 Australians each year and causes more than a thousand deaths. Yet it often has a lower profile than other types of cancer such as breast, lung and prostate. The rate at which Australians are diagnosed with bladder cancer has decreased over time, and the death rate has fallen too, although more slowly. This has led to an increase in the so-called mortality-to-incidence ratio, a key statistic that measures the proportion of people with a cancer who die from it. For bladder cancer this went up from 0.3 (about 30%) in the 1980s to 0.4 (40%) in 2010 (compared with 0.2 for breast and colon cancer and 0.8 for lung cancer). And while the relative survival (survival compared with a healthy individual of similar age) for most other cancers in Australia has improved, for bladder cancer it has decreased over time.
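For the numerically minded, the mortality-to-incidence ratio is simply a quotient of annual counts. A minimal sketch follows; the counts are illustrative values chosen to be consistent with the figures quoted above, not data from the article:

```python
def mortality_to_incidence(deaths_per_year: int, new_cases_per_year: int) -> float:
    """Crude mortality-to-incidence ratio: an approximation of the
    proportion of people diagnosed with a cancer who die from it."""
    return deaths_per_year / new_cases_per_year

# Illustrative: ~1,200 deaths against ~3,000 new diagnoses gives an MIR of 0.4.
print(mortality_to_incidence(1200, 3000))  # 0.4
```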

Who gets bladder cancer?

Australia’s anti-smoking measures and effective quitting campaigns have led to a progressive reduction in smoking rates over the last 25 years. This is undoubtedly one key reason behind the observed decline in bladder cancer diagnoses. Environmental risk factors are thought to be more important than genetic or inherited susceptibility when it comes to bladder cancer, and the most significant known risk factor is cigarette smoking.
Bladder cancer risk also increases with exposure to chemicals such as dyes and solvents used in industries like hairdressing, printing and textiles. Appropriate workplace safety measures are crucial to minimising exposure, but occupational bladder cancer remains an ongoing problem. Certain medications, such as the chemotherapy drug cyclophosphamide, and pelvic radiation therapy have also been linked to bladder cancer. Patients who have had such treatment need to be specifically checked for the main symptoms and signs of bladder cancer, such as blood in the urine.
Men develop bladder cancer about three times as often as women, which may partly reflect greater exposure to the risk factors. Conversely, women have relatively poorer survival from bladder cancer than men. The reasons for this are unclear, but may partly relate to difficulties in diagnosis.
Read more – Interactive body map: what really gives you cancer?

How is bladder cancer diagnosed?

At present, unlike cancers such as breast cancer that can be picked up on screening mammograms, bladder cancer can’t be diagnosed before symptoms appear. The usual symptoms leading to diagnosis are blood in the urine (haematuria) or irritation during urination, such as frequency and burning. But these symptoms are quite common and, in most instances, are caused by relatively benign problems such as infections, urinary stones or enlargement of the prostate. So the key to bladder cancer diagnosis is for suspicious symptoms to be quickly and appropriately assessed by a doctor.
Haematuria, in particular, always needs to be considered a serious symptom and investigated further: up to 20% of patients with blood in the urine will turn out to have bladder cancer. Even if the bleeding is transient, it could still be the first symptom that leads to the earliest possible diagnosis. It shouldn’t be ignored, since delayed diagnosis of bladder cancer is known to worsen treatment outcomes. Unfortunately, delays in the investigation of blood in the urine are well known to occur, and particular subgroups, such as women and smokers, tend to experience the greatest delays. Recent studies from Victoria and Western Australia have shown that some Australian patients face significant and concerning delays in the investigation of urinary bleeding. Multiple factors contribute to such delays, including public perception and anxiety, lack of referral from general practitioners, and administrative and resourcing limitations at hospitals.
Patients reporting blood in their urine should be referred for scans, such as an ultrasound or computerised tomography (CT), to assess the kidneys. They should also have their bladder examined internally (cystoscopy) using a fibre-optic instrument known as a cystoscope. Cystoscopy, a procedure usually performed by urologists (medical specialists in urinary tract surgery), remains the gold standard for diagnosing bladder cancer. Although diagnostic scans can help detect some bladder cancers, they have significant limitations in detecting certain types of tumours.

What happens if cancer is detected?

If a bladder cancer is noted on cystoscopy, it is removed and/or destroyed using instruments that can be passed into the bladder alongside the cystoscope. These procedures can be carried out at the same setting or subsequently, depending on available instruments and anaesthesia. The cancerous tissue removed is examined by a pathologist to confirm the diagnosis. This also provides additional information such as the stage of the cancer (how deep it has spread) and grade (based on appearance of the cancer cells), which help determine further management.

Are there any new developments?

Given that cystoscopy is an invasive procedure, there has been considerable effort to develop a non-invasive test, usually focusing on markers in the urine that can indicate the presence of cancer. To date, none of these have been reliable enough to obviate the need for cystoscopy.
Read more: Can we use a simple blood test to detect cancer?
Additionally, to enhance the detection of small bladder cancers, cystoscopy using blue light of a certain wavelength (360-450nm) can be combined with the administration of a fluorescent marker (hexaminolevulinate) that highlights cancerous tissue. While this approach does lead to the detection of more cancers, the resulting clinical benefit remains uncertain.
At present, immediate and appropriate investigation of suspicious symptoms, especially haematuria, using a combination of radiological scans and cystoscopy, remains the best means of diagnosing bladder cancer accurately and in a timely manner.
Shomik Sengupta, Professor of Surgery, Eastern Health Clinical School, Monash University. This article was originally published on The Conversation. Read the original article.
A/Prof Ken Sikaris

Blood tests for iron status are among the most commonly requested in clinical medicine. This is largely justified by the prevalence of iron deficiency, combined with the relatively common genetic condition of haemochromatosis.
In Australia, iron deficiency, defined by the Royal College of Pathologists of Australasia (RCPA) as a ferritin level below 30 ug/L, affects only 3.4% of men but 22.3% of women, according to the Australian Bureau of Statistics survey in 2011-2012. The issue is particularly pronounced in premenopausal women (16-44 years), 34.1% of whom are iron deficient. This is not surprising when nutrition surveys show that 40% of premenopausal women have inadequate dietary iron intake. Despite this high prevalence, screening with iron studies is not currently recommended in any demographic. While many hospitals include a ferritin in the shared-care antenatal panel, most antenatal guidelines assume that an FBE will detect iron deficiency (which is probably wrong).
Anaemia is a late stage of iron deficiency and ideally not a stage we should be waiting for. While it is true that microcytosis of red cells is often found in iron deficiency, this is unreliable as:
  • thalassaemia also causes microcytosis; and
  • vegetarians usually also have B12 deficiency, which causes a macrocytosis that ‘cancels out’ the low mean cell volume.
A high red cell distribution width (RDW), reflecting unevenness in red cell size, is a more sensitive marker of early iron deficiency. The association between B12 deficiency and iron deficiency, especially in vegetarians, is so important that clinicians should always think of one when the other is detected.
It is estimated that one in eight Australians carries a predisposition to haemochromatosis. It is most common in British/Celtic peoples (C282Y and H63D are the common HFE gene mutations). When two HFE heterozygotes have children, one in four of the offspring will be homozygous; therefore roughly (1/8 × 1/8 × 1/4 =) 1/256 Australians are homozygous – but only about half develop disease. This may be because many have been protected from iron overload through diet or blood loss, such as blood donation. Even at a ‘disease’ prevalence of 1:400 to 1:500, haemochromatosis is a relatively common condition with significant potential morbidity that must be considered, especially in relatives (first-degree relatives can be gene tested without iron studies).
‘Iron overload’ is a little more awkward to define than iron deficiency. Serum ferritin levels above the population norms are not necessarily harmful, but if we waited for ferritin to reach dangerous levels (e.g. >1000 ug/L), we would not be preventing the sequelae of iron overload, which include liver disease as well as a higher risk of cardiovascular disease and premature arthropathy. Most labs therefore use an upper clinical decision limit for ferritin of between 200 and 500 ug/L as a sensitive early warning of possible haemochromatosis. Should a high ferritin level be confirmed, gene testing can be rebated according to Medicare Benefits Schedule (MBS) requirements.
So far I have been discussing serum ferritin as the marker of iron stores; however, clinicians in Australia commonly request ‘iron studies’. Ferritin is the storage protein for iron that ‘leaks’ out of cells and most accurately reflects cellular iron stores. What, then, is the value of the other two measurements?
Serum iron on its own is a poor measure of iron status (we probably shouldn’t report it at all). The serum iron level depends on meals, on the time of day (lower in the afternoon) and, most importantly, on the concentration of the protein that chaperones iron in the circulation: serum transferrin. Patients with higher transferrin levels will generally have higher serum iron levels. What is important is how much iron the transferrin is carrying, and this is calculated as the ‘transferrin saturation’ (a ratio of serum iron to transferrin). Typically transferrin is at least 10% saturated and uncommonly more than 45% saturated; levels outside this range support iron deficiency and iron overload respectively. While the transferrin saturation calculation corrects some of the unreliability of serum iron, saturation is still subject to diet, supplements and diurnal variation.
Clinicians in Australia are used to requesting the full iron studies panel. This is useful in iron overload because, in haemochromatosis, the earliest change is a high transferrin saturation, which may be found years before the ferritin rises above the upper decision limit. A confirmed elevation of transferrin saturation also allows haemochromatosis gene testing to be MBS rebated.
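For the record, the transferrin saturation reported by laboratories is a simple derived quantity. The sketch below assumes the commonly used conversion that 1 g/L of transferrin binds roughly 25.1 µmol/L of iron; that factor and the example values are my illustration, not figures from this column, and individual laboratories may calculate it slightly differently:

```python
def transferrin_saturation(serum_iron_umol_l: float, transferrin_g_l: float) -> float:
    """Transferrin saturation (%) from serum iron and transferrin.
    Assumes 1 g/L transferrin binds ~25.1 umol/L iron (a commonly
    used conversion; check your own laboratory's method)."""
    tibc_umol_l = transferrin_g_l * 25.1  # approximate total iron-binding capacity
    return 100 * serum_iron_umol_l / tibc_umol_l

# Illustrative values: serum iron 18 umol/L, transferrin 2.8 g/L.
sat = transferrin_saturation(18, 2.8)
print(f"{sat:.0f}%")  # ~26%, between the ~10% and ~45% decision limits above
```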
Iron deficiency can be identified by a low serum ferritin (less than 30 ug/L), and the rest of the iron studies may also be altered, with a low transferrin saturation and higher levels of transferrin. However, low transferrin saturation is non-specific (e.g. diet, afternoon samples) and high transferrin is also non-specific (e.g. the OCP, pregnancy). Unfortunately, some patients are misidentified as iron deficient because of these non-specific tests even when the ferritin is clearly normal, and there is discussion of removing the ability to request serum iron and transferrin when looking for iron deficiency, because of the potential harms of misinterpretation.
For clinicians there are even more important confounders than the physiological effects on serum iron and transferrin saturation. When inflammation (the ‘acute phase reaction’) is present, the body hides away its iron stores: iron release decreases (low serum iron), transferrin production decreases (it is a negative acute phase reactant) and, because iron is no longer being mobilised, it accumulates in cells (ferritin rises as if it were an acute phase protein). All of the iron studies are therefore unreliable in the presence of inflammation, and if inflammation is suspected, a serum CRP is the most sensitive and specific test to detect it. Otherwise, all we can say is the following (see the sketch after this list):
  • if the ferritin is below 30 ug/L in the presence of inflammation, there must be iron deficiency; and
  • if the ferritin rises above 100 ug/L in the presence of inflammation, then there was probably enough iron around anyway.
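Expressed as the decision rule promised above (the cut-offs are those quoted in the two bullet points; the function name and wording of the outputs are mine, and this is a teaching sketch, not clinical advice):

```python
def interpret_ferritin_with_inflammation(ferritin_ug_l: float) -> str:
    """Interpret ferritin when inflammation (e.g. raised CRP) is present,
    per the two rules above. Illustrative only, not clinical advice."""
    if ferritin_ug_l < 30:
        return "iron deficiency, despite the inflammation"
    if ferritin_ug_l > 100:
        return "iron stores were probably adequate"
    return "indeterminate - consider soluble transferrin receptor testing"

print(interpret_ferritin_with_inflammation(22))   # iron deficiency
print(interpret_ferritin_with_inflammation(250))  # probably adequate
```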
There is a test, ‘soluble serum transferrin receptors’, that helps separate true iron deficiency from the anaemia of inflammatory disorders, but it is not covered in the Medicare Benefits Schedule, although the RCPA has made a submission to government.
General Practice Pathology is a new regular column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs. The authors provide this editorial free of charge, as part of an educational initiative developed and coordinated by Sonic Pathology.

Dr Linda Calabresi

A case history recently published in the BMJ highlights one of those uncommon but very diagnosable conditions, if you just spot the clues. According to the French authors, the 62-year-old man presented with a history of recurrent oral ulcers, sometimes accompanied by laryngitis and conjunctivitis. During one of these episodes he had developed an acute fever, pain on swallowing and laryngitis; he sought medical attention and was prescribed ibuprofen and clarithromycin. Two days later, the man developed conjunctivitis, erosions of the mucosal membrane of the mouth, and skin lesions. Unsurprisingly, the man’s attending doctors thought he had Stevens-Johnson syndrome and sent him to hospital.
Full examination showed painful diffuse erosions of the mucous membranes not only of the oral cavity but also of the nose, the epiglottis and the glans. The skin lesions were noted to be target lesions comprising three raised concentric red rings, found on the trunk, lower limbs and scrotum. He was febrile and fatigued, and eating was painful. Diagnostic tests showed a raised CRP but little else. The skin biopsy showed a dense lichenoid lymphocytic infiltrate.
So did he have Stevens-Johnson syndrome? Apparently not. The target lesions with their three concentric rings, and the widespread oral, ocular and genital mucous membrane erosions, are in fact suggestive of erythema multiforme – and specifically, because more than one mucous membrane was involved, the more severe form: erythema multiforme major. The authors did concede that erythema multiforme is frequently confused with Stevens-Johnson syndrome, and even toxic epidermal necrolysis (TEN), which are life-threatening conditions. The features that helped distinguish this as a case of erythema multiforme rather than the other, more serious alternatives were:
    • the previous episodes of oral ulcers, sometimes with laryngitis and conjunctivitis. Even though erythema multiforme is rare, some 40% of the people who do get it experience multiple recurrences, often triggered by the herpes simplex virus.
    • erythema multiforme is generally a post-infectious disease, most commonly following herpes simplex (tricky in this case, as viral cultures from the patient’s mouth were negative), whereas 85% of Stevens-Johnson syndrome and toxic epidermal necrolysis cases are drug-induced.
    • erythema multiforme usually begins with systemic symptoms such as fever, followed by mucosal involvement; the skin lesions typically appear later. In Stevens-Johnson syndrome and toxic epidermal necrolysis, the severe cutaneous reaction is usually the first sign of the condition, occurring four to 28 days after taking the offending drug.
    • finally, the skin lesions are different. As in this case, the typical skin lesions of erythema multiforme are three raised concentric rings, and they usually respond to topical steroids and oral antihistamines. In Stevens-Johnson syndrome and toxic epidermal necrolysis the lesions are ‘atypical targets with two concentric rings and purpuric macules that evolve into blisters and skin that detaches with finger friction (Nikolsky sign)’.
And what happened to this patient? According to the case report, he stayed eight days in hospital, treated with enteral nutrition, topical steroids and steroid mouthwashes. All the skin and mucosal membrane lesions healed and he fully recovered. Interestingly, he did have minor relapses annually for a number of years, but these weren’t severe enough to warrant any further treatment.
Ref: BMJ 2017; 359. doi: https://doi.org/10.1136/bmj.j3817