PPIs for infant reflux a risk

It seemed such a godsend, didn’t it? Omeprazole for severe infant reflux. A massive improvement on the previous advice to elevate the head of the cot and nurse upright.

But since it first appeared in guidelines, there have been studies, reports and opinions cautioning against the overuse of PPIs, citing everything from their being ineffectual to their potential to predispose the child to allergy.

Now it looks like there is yet another reason why we need to think again before prescribing a PPI for the distressed infant with reflux and their exhausted parents.

According to an article recently published in a JAMA Network publication, new study findings cast further doubt on the safety of this treatment option, suggesting that giving PPIs to infants under six months of age is associated with a higher risk of bone fractures later in childhood.

The US researchers analysed data, including outpatient pharmacy records, from over 850,000 children born within the Military Health Care System over a 12-year period. According to findings presented at a Pediatric Academic Societies meeting earlier this year, children given a PPI in the first six months of life had a 22% increased risk of fracture in the following five to six years. If, for some reason, they were also given an H2 blocker, the risk jumped to 31%. Interestingly, children who received only an H2 blocker showed no significant increase in fracture risk.

The study also showed that the longer the duration of PPI use, the greater the risk of fracture.

It is thought that the mechanism behind the increased fracture risk relates to the PPI-induced decrease in gastric acid causing a reduction in calcium absorption.

While the study is still going through peer review and is yet to be published, the study’s lead author, US Air Force Capt Laura Malchodi (MD), said the findings suggest increased caution should be exercised with regard to these drugs.

“Our study adds to the growing body of evidence suggesting [acid-reducing] medications are not safe for children, especially very young children,” she told delegates.

“[PPIs] should only be prescribed to treat confirmed serious cases of more severe, symptomatic, gastroesophageal reflux disease (GERD), and for the shortest length of time needed.”

Ref: JAMA. Published online September 29, 2017. doi:10.1001/jama.2017.12160

Obesity Surgery – Worth The Money?

For most patients in Australia, obesity surgery is an expensive exercise. The surgery alone is likely to leave you out of pocket to the tune of several thousand dollars at least. And then there’s the time off work, the specialist appointments, the follow-up and so on.

So you can understand patients being hesitant about the prospect. And then there’s the worry about effectiveness. Will it work? And if so for how long?

Well, new research, published in The New England Journal of Medicine goes a long way to alleviating those fears.

The prospective US study showed not only that more than 400 severely obese patients who underwent gastric bypass surgery lost a significant amount of weight, but that the weight loss, and the health benefits that came with it, were sustained 12 years later.

Two years after undergoing the Roux-en-Y surgery, these patients had lost an average of 45kg. Over the following decade there was some weight gain, but at the end of the 12 years the average weight loss from baseline was still a massive 35kg.

The impressiveness of this statistic is put into perspective by the researchers, who compared this cohort with a similar number of severely obese people who had sought but did not undergo gastric bypass. Over the duration of the study this group lost an average of only 2.9kg. And a further group of obese patients who had not sought surgery lost, on average, no weight at all over the same period.

What is even more significant is the difference in morbidity associated with the surgery. The researchers found that of the patients who had type 2 diabetes at baseline, 75% no longer had the disease at two years. And despite the progressive nature of type 2 diabetes, 51% were still diabetes-free at 12 years. In addition, the surgery group had higher remission rates and lower incidence rates of hypertension and lipid disorders.

“This study showed long-term durability of weight loss and effective remission and prevention of type 2 diabetes, hypertension and dyslipidaemia after Roux-en-Y gastric bypass,” the study authors concluded.

Even though this surgery is done less commonly in Australia than laparoscopic procedures, the reality is that bariatric surgery, for the most part, represents enormous value for severely obese patients. The dramatic results and the significant health benefits will no doubt increase pressure on the government and private health insurers to improve access to what could well be described as life-changing surgery.

Ref: NEJM 2017; 377: 1143-1155. DOI: 10.1056/NEJMoa1700459

New Guidance For Assessment Of Lipids

Non-fasting specimens are now acceptable

Fasting specimens have traditionally been used for the formal assessment of lipid status (total, LDL and HDL cholesterol and triglycerides).1,2

In 2016, the European Atherosclerosis Society and the European Federation of Clinical Chemistry and Laboratory Medicine released a joint consensus statement that recommends the routine use of non-fasting specimens for the assessment of lipid status.2

Large population-based studies were reviewed, which showed that, for most subjects, the changes in plasma lipid and lipoprotein values following food intake were not clinically significant.

Maximal mean changes at 1–6 hours after habitual meals were found to be: +0.3 mmol/L for triglycerides; -0.2 mmol/L for total cholesterol; -0.2 mmol/L for LDL cholesterol; -0.2 mmol/L for calculated non-HDL cholesterol and no change for HDL cholesterol.

Additionally, studies have found similar or sometimes superior cardiovascular disease risk associations for non-fasting compared with fasting lipid test results.

There have also been large clinical trials of statin therapy, monitoring the efficacy of treatment using non-fasting lipid measurements. Overall, the evidence suggests that non-fasting specimens are highly effective in assessing cardiovascular disease risk and treatment responses.

Non-HDL cholesterol as a risk predictor

In the 2016 European joint consensus statement2 and in previously published guidelines and recommendations, the clinical utility of non-HDL cholesterol (calculated from total cholesterol minus HDL cholesterol) has been noted as a predictor of cardiovascular disease risk.

Moreover, this marker has been found to be more predictive of cardiovascular risk when determined in a non-fasting specimen.
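For laboratories or practices that script their reporting logic, the calculation is straightforward. The sketch below is purely illustrative (the function and field names are my own, not from any guideline): it derives non-HDL cholesterol as total cholesterol minus HDL cholesterol, and applies the consensus statement’s 5.0 mmol/L non-fasting triglyceride threshold for recommending a repeat fasting collection.

```python
def lipid_summary(total_chol, hdl, triglycerides, fasting=False):
    """Summarise a lipid panel (all values in mmol/L).

    Non-HDL cholesterol = total cholesterol - HDL cholesterol,
    as described in the 2016 European joint consensus statement.
    """
    non_hdl = total_chol - hdl
    # Per the consensus statement, a non-fasting triglyceride level
    # above 5.0 mmol/L warrants a repeat fasting specimen collection.
    needs_fasting_repeat = (not fasting) and triglycerides > 5.0
    return {
        "non_hdl_cholesterol": round(non_hdl, 1),
        "needs_fasting_repeat": needs_fasting_repeat,
    }

# Example: non-fasting specimen with total cholesterol 5.5,
# HDL 1.2 and triglycerides 1.8 -> non-HDL cholesterol 4.3,
# no fasting repeat needed.
result = lipid_summary(5.5, 1.2, 1.8)
```

The same calculation applies to fasting and non-fasting specimens; only the triglyceride flag depends on fasting status.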

What this means for your patients

The assessment of lipid status with a non-fasting specimen has the following benefits:

  • No patient preparation is required, thereby reducing non-compliance
  • Greater convenience with attendance for specimen collection at any time
  • Reports are available for earlier review instead of potential delays associated with obtaining fasting results

Indications for repeat testing or a fasting specimen collection

For some patients, lipid testing on more than one occasion may be necessary in order to establish their baseline lipid status. It is also important to note that an assessment of lipid status carried out in the presence of any intercurrent illness may not be valid.

Conditions for which a fasting specimen collection is recommended2 include:

  • Non-fasting triglyceride >5.0 mmol/L
  • Known hypertriglyceridaemia followed in a lipid clinic
  • Recovering from hypertriglyceridaemic pancreatitis
  • Starting medications that may cause severe hypertriglyceridaemia (e.g., steroid, oestrogen or retinoic acid therapy)
  • Additional laboratory tests are requested that require fasting or morning specimens (e.g., fasting glucose, therapeutic drug monitoring)

Lipid reference limits and target levels for treatment are under review

The chemical pathology community in Australia is currently reviewing all relevant publications in order to implement a consensus approach to reporting and interpreting lipid results. This includes the guidelines for management of absolute cardiovascular disease risk developed by the National Vascular Disease Prevention Alliance (NVDPA).3

Further information

  • The absolute cardiovascular disease risk calculator is available at www.cvdcheck.org.au
  • If familial hypercholesterolaemia is suspected, e.g. LDL cholesterol persistently above 5.0 mmol/L in adults, then advice about diagnosis and management is available at www.athero.org.au/fh

References

  1. Rifai N, et al. Non-fasting Sample for the Determination of Routine Lipid Profile: Is It an Idea Whose Time Has Come? Clin Chem 2016;62:428-35.
  2. Nordestgaard BG, et al. Fasting Is Not Routinely Required for Determination of a Lipid Profile: Clinical and Laboratory Implications Including Flagging at Desirable Concentration Cutpoints. A Joint Consensus Statement from the European Atherosclerosis Society and European Federation of Clinical Chemistry and Laboratory Medicine. Clin Chem 2016;62:930-46.
  3. National Vascular Disease Prevention Alliance. Absolute cardiovascular disease management: Quick reference guide for health professionals.

General Practice Pathology is a new fortnightly column, each edition authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs.
The authors provide this editorial free of charge as part of an educational initiative developed and coordinated by Sonic Pathology.

How much sleep do children really need?

How much sleep, and what type of sleep, do our children need to thrive?

In parenting, there aren’t often straightforward answers, and sleep tends to be contentious. There are questions about whether we are overstating children’s sleep problems. Yet we all know from experience how much better we feel, and how much more ready we are to take on the day, when we have had an adequate amount of good quality sleep.

I was one of a panel of experts convened by the American Academy of Sleep Medicine to review over 800 academic papers examining the relationships between children’s sleep duration and outcomes. Our findings suggested optimal sleep durations to promote children’s health. These are the optimal hours (including naps) that children should sleep in every 24-hour cycle.

And yet these types of sleep recommendations are still controversial. Many of us have friends or acquaintances who say that they can function perfectly on four hours of sleep, when it is recommended that adults get seven to nine hours per night.

Optimal sleep hours: The science

We look to science to support our recommendations. Yet we cannot deprive young children of sleep for prolonged periods to see whether they develop more problems than those sleeping the recommended amounts.

Some experiments have been conducted with teenagers when they have agreed to short periods of sleep deprivation followed by regular sleep durations. In one example, teenagers who got inadequate sleep time had worse moods and more difficulty controlling negative emotions.

Those findings are important because children and adolescents need to learn how to regulate their attention and manage their negative emotions and behaviour. Being able to self-regulate can enhance school adjustment and achievement.

With younger children, our studies have had to rely on examining relationships between their sleep duration and quality of their sleep and negative health outcomes. For example, when researchers have followed the same children over time, behavioural sleep problems in infancy have been associated with greater difficulty regulating emotions at two to three years of age.

In the same children, followed over time, persistent sleep problems also predicted increased difficulty controlling their negative emotions from two to three years of age through to six or seven years, and difficulty focusing attention in eight- to nine-year-old children.

Optimal sleep quality: The science

Not only has the duration of children’s sleep been shown to be important, but so has its quality. Poor sleep quality involves problems with starting and maintaining sleep, as well as low satisfaction with sleep and with feeling rested. It has been linked to poorer school performance.

Kindergarten children with poor sleep quality (those who take a long time to fall asleep and who wake in the night) demonstrated more aggressive behaviour and were rated more negatively by their parents.

Infants’ night waking was associated with more difficulties regulating attention and difficulty with behavioural control at three and four years of age.

From diabetes to self-harm

The Consensus Statement of the American Academy of Sleep Medicine suggested that children need enough sleep on a regular basis to promote optimal health.

The expert panel linked inadequate sleep duration to children’s attention and learning problems and to increased risk for accidents, injuries, hypertension, obesity, diabetes and depression.

Insufficient sleep in teenagers has also been related to increased risk of self-harm, suicidal thoughts and suicide attempts.

Parent behaviours

Children’s self-regulation skills can be developed through self-soothing to sleep at settling time, and through settling themselves back to sleep after any night waking. Evidence has consistently pointed to the importance of parents’ behaviours in helping children achieve not only adequate sleep duration but also good sleep quality.

Parents can introduce techniques such as sleep routines and consistent sleep schedules that promote healthy sleep. They can also monitor children to ensure that bedtime is actually lights out without electronic devices in their room.

In summary, there are recommended hours of sleep that are associated with better outcomes for children at all ages and stages of development. High sleep quality is also linked to children’s abilities to control their negative behaviour and focus their attention, both important skills for success at school and in social interactions.

Wendy Hall, Professor, Associate Director Graduate Programs, UBC School of Nursing, University of British Columbia

This article was originally published on The Conversation. Read the original article.

How long do anxiety patients need medication?

It is well known that when a patient with depression is commenced on an antidepressant and it proves effective, they should continue it for at least a year to lower their risk of relapse. The guidelines are pretty consistent on that point.

But what about anxiety disorders?

Along with cognitive behavioural therapy, antidepressants are considered a first-line option for treating anxiety conditions such as generalised anxiety disorder, obsessive-compulsive disorder and post-traumatic stress disorder. Antidepressants have been shown to be generally effective and well-tolerated in treating these illnesses.

But how long should they be used in order to improve long-term prognosis?

Internationally, guidelines vary in their recommendations. If the treatment is effective, the advice has been to continue it for a variable duration (six to 24 months) and then taper the antidepressant, but this has been based on scant evidence.

To clarify this recommendation, Dutch researchers conducted a meta-analysis of 28 relapse prevention trials in patients with remitted anxiety disorders.

Their findings, recently published in the BMJ, support the continuation of pharmacotherapy.

“We have shown a clear benefit of continuing treatment compared with discontinuation for both relapse… and time to relapse”, the authors stated.

In addition, the researchers found the relapse risk was not significantly influenced by the type of anxiety disorder, by whether the antidepressant was tapered or stopped abruptly, or by whether the patient was receiving concurrent psychotherapy.

However, because of the duration of the studies included in the meta-analysis, only the advice to continue antidepressants for at least a year could be supported by evidence. After this, the researchers said there was no evidence-based advice that could be given.

“[However] the lack of evidence after this period should not be interpreted as explicit advice to discontinue antidepressants after one year,” they said.

The researchers suggested that guidelines advising that antidepressants be tapered once the patient has achieved a sustained remission should be revised.

In fact, they said, there were both advantages and disadvantages to continuing treatment beyond a year, and more research was needed to help clinicians assess an individual’s risk of relapse. This is especially important as anxiety disorders are generally chronic and there have been indications that in some patients, the antidepressant therapy is less effective when reinstated after a relapse.

“When deciding to continue or discontinue antidepressants in individual patients, the relapse risk should be considered in relation to side effects and the patient’s preferences,” they concluded.

Ref: BMJ 2017; 358: j3927. doi: 10.1136/bmj.j3927

New Autism Guidelines Miss The Mark

The first national guidelines for diagnosing autism were released for public consultation last week. The report by research group Autism CRC was commissioned and funded by the National Disability Insurance Scheme (NDIS) in October 2016.

The NDIS has taken over the running of federal government early intervention programs that provide specialist services for families and children with disabilities. In doing so, it has inherited the problem of diagnostic variability. Biological diagnoses are definable. The genetic condition fragile X syndrome, for instance, which causes intellectual disability and developmental problems, can be diagnosed using a blood test.

Autism diagnosis, by contrast, is imprecise. It’s based on a child’s behaviour and function at a point in time, benchmarked against age expectations and comprising multiple simultaneous components. Complexity and imprecision arise at each stage, implicit to the condition as well as the process. So, it makes sense the NDIS requested an objective approach to autism diagnosis.


Read more: The difficulties doctors face in diagnosing autism


The presumption of the Autism CRC report is that standardising the method of diagnosis will address this problem of diagnostic uncertainty. But rather than striving to secure diagnostic precision in the complexity and imprecision of the real world, a more salient question is how best to help children when diagnostic uncertainty is unavoidable.

What’s in the report?

The report recommends a two-tiered diagnostic strategy. The first tier is used when a child’s development and behaviour clearly meet the diagnostic criteria.

The process proposed does not differ markedly from current recommended practice, with one important exception. Currently, the only professionals who can “sign off” on a diagnosis of autism are certain medical specialists such as paediatricians, child and adolescent psychiatrists, and neurologists. The range of accepted diagnosticians has now been expanded to include allied health professionals such as psychologists, speech pathologists and occupational therapists.

This exposes the program to several risks. Rates of diagnosed children may further increase with greater numbers of diagnosticians. Conflict of interest may occur if diagnosticians potentially receive later benefit as providers of funded treatment interventions. And while psychologists and other therapists may have expertise in autism, they may not necessarily recognise the important conditions that can present similarly to it, as well as other problems the child may have alongside autism.

The second recommended tier of diagnosis is for complex situations, when it is not clear a child meets one or more diagnostic criteria. In this case, the report recommends assessment and agreement by a set of professionals – known as a multidisciplinary assessment. This poses important challenges:

  • Early intervention starts early. Multidisciplinary often means late, with delays on waiting lists for limited services. This is likely to worsen if more children require this type of assessment.
  • Multidisciplinary assessments are expensive. If health systems pay, capacity to subsequently help children in the health sector will be correspondingly reduced.
  • Groups of private providers may set up diagnostic one-stop shops. This may inadvertently discriminate against those who can’t pay and potentially bias towards diagnosis for those who can.
  • Multidisciplinary assessments discriminate against those in regional and rural areas, where professionals are not readily available. Telehealth (consultation over the phone or computer) is a poor substitute for direct observation and interaction. Those in rural and regional areas are already disadvantaged by limited access to intervention services, so diagnostic delays present an additional obstacle.

The diagnostic approach reflects a deeper, more fundamental problem. Methodological rigour is necessary for academic research validity, with the assumption that autism has distinct and definable boundaries.

But consider two children almost identical in need. One just gets over the diagnostic threshold, the other not. This may be acceptable for academic studies, but it’s not acceptable in community practice. An arbitrary diagnostic boundary does not address complexities of need.

We’re asking the wrong question

The federal government’s first initiative to fund early intervention services for children diagnosed with autism was introduced in 2008. The Helping Children With Autism program provided A$12,000 for each diagnosed child, along with limited services through Medicare.

The Better Start program was introduced later in 2011. Under Better Start, intervention programs also became available for children diagnosed with cerebral palsy, Down syndrome, fragile X syndrome and hearing and vision impairments.

While this broadened the range of disabilities to be funded, it did not address the core problem of discrimination by diagnosis. This is where children who have equal needs but who for various reasons aren’t officially diagnosed are excluded from support services. Something is better than nothing, however, and these programs have helped about 60,000 children at a cost of over A$400 million.

Yet the NDIS now also faces a philosophical challenge. The NDIS considers funding based on a person’s ability to function and participate in life and society, regardless of diagnosis. By contrast, entry to both these early intervention programs is determined by diagnosis, irrespective of functional limitation.


Read more: Understanding the NDIS: will parents of newly diagnosed children with disability be left in the dark?


While funding incentives cannot change prevalence of fragile X syndrome in our community (because of its biological certainty), rates of autism diagnoses have more than doubled since the Helping Children with Autism program began in 2008. Autism has become a default consideration for any child who struggles socially, behaviourally, or with sensory stimuli.

Clinicians have developed alternative ways of thinking about this “grey zone” problem. One strategy is to provide support in proportion to functional need, in line with the NDIS philosophy.

Another strategy is to undertake response-to-intervention. This is well developed in education, where support is provided early and uncertainty is accepted. By observing a child’s pattern and rate of response over time, more information emerges about the nature of the child’s ongoing needs.

The proposed assessment strategy in the Autism CRC report addresses the question, “does this child meet criteria for autism?”. This is not the same as “what is going on for this child, and how do we best help them?”. And those are arguably the more important questions for our children.


This article was co-authored by Dr Jane Lesslie, a specialist developmental paediatrician. Until recently she was vice president of the Neurodevelopmental and Behavioural Paediatric Society of Australasia.

Michael McDowell, Associate Professor, The University of Queensland

This article was originally published on The Conversation. Read the original article.

Assassination by pacemaker: Australia needs to do more to regulate internet-connected medical devices

In the future, people are going to be just a little bit cyborg. We’ve accepted hearing aids, nicotine patches and spectacles, but implanted medical devices that are internet-connected present new safety challenges. Are Australian regulators keeping up?

A global recall of pacemakers has sparked new fears and splashy headlines about hacked medical devices. But the next 20 years of medicine will normalise the use of intelligent implants to control pain, provide data for diagnostic purposes and supplement ailing organs, which means we need proper security as well as access in case of emergency.


Read More: Three reasons why pacemakers are vulnerable to hacking


Pharmaceuticals and medical devices in Australia are regulated by the Therapeutic Goods Administration (TGA), an arm of the national Health Department.

Can we rely on Australia’s medical devices regime? Recurrent criticisms by parliamentary committees and government inquiries suggest the regulator may be struggling.

The job of the TGA

The TGA regulates medical devices such as stents, pacemakers, joint implants, breast implants, and the controversial vaginal mesh that has featured recently in the media (and a Senate inquiry) over claims it seriously injured patients.

The role of the TGA is vital, because defective devices can result in injury or death. They have a major cost for the public health system and affect patient quality of life. They often result in litigation, sometimes with billion-dollar settlements.

In undertaking its mission, the TGA looks to information from manufacturers and distributors, from overseas regulators and its own staff.

Like counterparts such as the US Food and Drug Administration, TGA staff are under pressure to get products into the marketplace and reduce “red tape”.

The TGA and cybersecurity

Wireless medical devices need greater security than, say, an internet-connected fridge. It is axiomatic that they must work.

We need to ensure that information provided by the devices is safeguarded and that control of the devices – implantable or otherwise – is not compromised.

To do that, we can use existing tools such as robust passwords, encryption and careful systems design. Product vendors and practitioners must also avoid negligence, and regulators must proactively foster and enforce standards.

Put simply, bodies like the TGA need to deal with software rather than simply bits of metal and plastic. It is unclear whether the TGA has the expertise or means to do so.

Solutions, not panic

The past decade has seen a succession of inquiries into the TGA, including the 2015 Sansom Review and 2012 Senate PIP Inquiry. Each has demonstrated that the TGA is not always keeping up with its task.

Problems are ongoing: think defective joint implants, breast implants and vaginal mesh. But there are some potential paths towards improvement.

Accountability

One solution is to ensure the TGA is more accountable.

Currently, if someone wishes to bring a claim alleging a device was improperly approved, the TGA has immunity from civil litigation over regulatory failure.

Removal of that immunity would force it to focus on outcomes. This could be reinforced by giving it independence from the Department of Health, making it report directly to Parliament and ensuring the openness emphasised by the Pearce Inquiry.

Regulatory capture

Medical products regulation in Australia has been a matter of penny wise, pound foolish. The TGA is funded by fees from the manufacturers and distributors that it regulates, in addition to some government funding.

It needs a discrete budget that recoups costs but is not dependent on companies that complain regulation is expensive. It needs enough resources to do its job well in the emerging age of the internet of things, including access to independent expertise regarding cybersecurity and devices.

A device register

How many devices have been implanted and how many removed? The lack of data about medical devices is a problem.

The government has so far not embraced recommendations for a comprehensive device register, one allowing timely identification of what was implanted and by whom.


Read More: Vaginal mesh controversy shows collective failure of the TGA and Australia’s specialists


Such a register would provide a means for determining problems with devices or medical practice. We need timely, consistent reporting of problems on a mandatory basis, as well as recall and transparent investigation of what went wrong.

Disclosure of interests

The inquiry into vaginal mesh revealed that the WA branch of the Australian Medical Association had a financial interest in a device that may have seriously affected numerous women.

There must be full disclosure of such interests, with meaningful sanctions where disclosure has not been made. This requires action by the TGA, professional bodies and the government.

So, what about assassination by wireless pacemaker?

The cybersecurity of medical devices is a matter for everyone.

We need the TGA to work with manufacturers, distributors and health professionals to mandate best practice. Should, for example, manufacturers and practitioners ensure that implants do not rely on default passwords that are easily crackable? What about access by emergency services?

There is a fundamental need to develop and enforce a national safety standard regarding all wireless implants. For that we need thoughtful policy, not just headlines.

Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article was originally published on The Conversation. Read the original article.

Could It Be Endometriosis?

Endometriosis, or more particularly the diagnosis of endometriosis, is often a challenge in general practice.

When should you start investigating a young girl with painful periods? Is it worth investigating or should we just put them on the Pill? At what point should these young women be referred?

Consequently, the most recent NICE guidelines on the diagnosis and management of endometriosis, published in the BMJ, will be of interest to any GP who manages young women.

According to the UK guidelines, there is commonly a delay of up to 10 years between the development of symptoms and the diagnosis of endometriosis, despite the condition affecting an estimated 10% of women in the reproductive age group.

Endometriosis should be suspected in women who have one or more of the following symptoms:

  • chronic pelvic pain
  • period pain that is severe enough to affect their activities
  • deep pain associated with or just after sex
  • period-related bowel symptoms such as painful bowel movements
  • period-related urinary symptoms such as dysuria or even haematuria

Sometimes it can be worthwhile to get the patient to keep a symptom diary, especially if they are unsure whether their symptoms are indeed cyclical. Women who present with infertility and a history of one or more of these symptoms should also be suspected of having endometriosis.

Investigations

With regard to investigations, the guidelines importantly state that endometriosis cannot be ruled out by a normal examination and pelvic ultrasound. Nonetheless, after abdominal and pelvic examination, transvaginal ultrasound should be the first investigation to identify endometriomas and deep endometriosis affecting other organs such as the bowel or bladder. A transabdominal ultrasound is a worthwhile alternative for women in whom a transvaginal ultrasound is not appropriate.

MRI might be appropriate as a second line investigation but only to determine the extent of the disease. It should not be used for initial diagnosis. Similarly, the serum CA-125 is an inappropriate and unreliable diagnostic test.

Diagnostic laparoscopy is reserved for women with suspected endometriosis who have a normal ultrasound.

Treatment

If the symptoms of endometriosis can’t be adequately controlled with analgesia, the guidelines recommend hormonal treatment with either the combined oral contraceptive pill or progestogen. Women need to be aware that this will reduce pain and will have no permanent negative effect on fertility.

Surgical options to treat endometriosis need to be considered in women whose symptoms remain intolerable despite hormonal treatment, if the endometriosis is extensive involving other organs or if fertility is a priority and it is suspected that the endometriosis might be affecting the woman’s ability to fall pregnant.

All in all, these guidelines from the Royal College of Obstetricians and Gynaecologists don’t offer much in the way of new treatments but they do provide a framework to help GPs manage suspected cases of endometriosis and hopefully reduce that time delay between symptom-onset and diagnosis.

BMJ 2017; 358: j3935 doi: 10.1136/bmj.j3935

Studying Chromosomes In 2017

Examining the structure of chromosomes

The first studies in human genetics were done in the early 1900s, well before we had any idea of the structure of DNA or chromosomes. It was not until the 1950s that the double helix was deciphered, that we realised that chromosomes were large bundles of DNA, and that we were able to visualise the number and shape of chromosomes under the microscope.

In just a few years, numerous clinical disorders were identified as being due to abnormalities in the number or shape of chromosomes, and the field of “cytogenetics” was born.

Over the next five decades, techniques improved.

With the right sample and a good microscope, the laboratory could detect an abnormal gain or loss that was as small as 5-10 million base pairs of DNA on a specific chromosome. The light microscope reigned supreme as the ultimate tool for genetic analysis!

Examining the mass of chromosomes

In the last 10-15 years, a different technology called “microarrays” has challenged the supremacy of the microscope in genetic analysis.

There are many different implementations of microarrays, but in essence they are all based on breaking the chromosomes from a tissue sample into millions of tiny DNA fragments, thereby destroying the structural cues used in microscopy.

Each fragment then binds to a particular location on a prepared surface, and the amount of bound fragment is measured. The prepared surface, a “microarray”, is only a centimetre across and can have defined locations for millions of specific DNA fragments.

The relative amounts of specific fragments can indicate tiny chromosomal regions in which there is a relative deficiency or excess of material. For example, in a person with Down syndrome (trisomy 21), the locations on the microarray that bind fragments derived from chromosome 21 will have 1.5 times the number of fragments as locations which correspond to other chromosomes (three copies from chromosome 21 versus two copies from other chromosomes). The microarray could be regarded as examining the relative mass, rather than the shape, of specific chromosomal regions.
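The expected signal ratios follow from simple copy-number arithmetic. As a hypothetical illustration only (not laboratory analysis software), the calculation can be sketched as:

```python
def expected_signal_ratio(sample_copies, normal_copies=2):
    """Relative microarray probe intensity for a chromosomal region,
    compared with a normal diploid reference (two copies)."""
    return sample_copies / normal_copies

# Trisomy 21: three copies of chromosome 21 versus the usual two,
# so chromosome 21 probes read at 1.5 times normal intensity.
print(expected_signal_ratio(3))  # 1.5

# A deletion (one copy instead of two) reads at half intensity.
print(expected_signal_ratio(1))  # 0.5
```

A balanced rearrangement, by contrast, leaves every ratio at 1.0, which is why it is invisible to a microarray, as discussed later in this article.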

Current microarrays can identify loss or gain of chromosomal material that is 10-100 times smaller than would be visible with the microscope. This has markedly improved the diagnostic yield in many situations but, as described below, conventional cytogenetics by light microscopy still has a role to play.

Microarrays in paediatrics

Conventional cytogenetics will identify a chromosome abnormality in 3-5% of children with intellectual disability or multiple malformations. A microarray will identify the same abnormality in those children, plus abnormalities in a further 10-15%, i.e. the total yield from microarray studies is approximately 15-20% (1).

For this reason, microarray studies are the recommended type of cytogenetic analysis in the investigation of children or adults with intellectual disability or multiple malformations.

There is a specific Medicare item for "diagnostic studies of a person with developmental delay, intellectual disability, autism, or at least two congenital abnormalities" by microarray. Requesters should specify microarray analysis (item 73292) rather than the less specific request for chromosome studies (item 73289).

There are three cautions about microarray studies in this setting.

First, a microarray will not detect every familial disorder. Intellectual disability due to a single-gene disorder (e.g. fragile X syndrome) will not be detected by a microarray.

Second, experience with microarrays has demonstrated that some gains and losses of genetic material are benign and familial. It may be necessary to test the parents as well as the child to clarify the clinical significance of an uncommon change identified by microarray; the laboratory would provide guidance in such instances.

And third, a microarray may identify an unexpected abnormality that has clinical consequences other than those which triggered the investigation.

Microarrays in antenatal care

The use of microarrays to investigate children with multiple malformations has now been extended to the investigation of fetuses with malformations.

By using microarrays rather than conventional microscopy, the diagnostic yield from antenatal cytogenetics has increased by 6% (2). The cautions noted above still apply, i.e. a microarray cannot detect every genetic cause of malformations, and determining the clinical significance of an uncommon finding may require additional studies.

Microarrays can also be useful in the investigation of miscarriage and stillbirth.

Most miscarriages are due to chromosome abnormalities which occur during the formation of the sperm or egg, or during early embryogenesis (3). These abnormalities are not inherited from either parent and hence do not constitute a hazard in subsequent pregnancies. Many clinicians and couples wish to confirm that a miscarriage was due to a sporadic chromosome abnormality that carries little risk for a subsequent pregnancy.

This analysis can be done by either microarray or microscopic analysis of the products of conception. Microscopic analysis requires viable tissue, and up to 30% of studies may fail. Microarray analysis is preferred because it has better resolution and does not require living cells; as a result, the yield from microarray analysis is much higher (2). Requesters should specifically request microarray analysis, utilising the non-specific MBS item (73287).

Situations in which microarrays should not be used

There are two important antenatal situations in which microarrays should not be used: preconception screening, and investigation after a high risk non-invasive prenatal testing (NIPT) result.

As noted above, a microarray measures the relative amount of genetic material from a specific location on a chromosome; it does not evaluate the shape of that chromosome.

Approximately 1 in 1,000 healthy people has a balanced translocation, i.e. part of one chromosome is attached to a different chromosome. The overall amount of genetic material is normal and there is usually no clinical consequence of this rearrangement. A balanced translocation would not be detected by microarray because there is no net gain or loss of chromosomal material.

Microscopic analysis is likely to detect the translocation because of the change in shape of the two chromosomes involved.

A person with a translocation can produce eggs or sperm that are unbalanced, having an abnormal gain or loss of chromosome material. This can cause infertility, recurrent miscarriages, or the birth of a child with intellectual disability or malformations. The unbalanced abnormality in the child would be detected by microarray, but the balanced precursor in the parent would not.

For this reason, cytogenetic investigation of infertility and recurrent miscarriages requires microscopic cytogenetic studies of both partners (MBS item 73289).

Approximately 4% of couples with recurrent miscarriages are found to have a balanced translocation in one or both partners.

For similar reasons, microarray testing is not recommended for follow-up studies of CVS or amniotic fluid after a high risk result from NIPT. A microarray would identify the trisomy, but may not detect the rare instance of trisomy due to a familial translocation. Prenatal testing for autosomal trisomy requires microscopic cytogenetic studies (MBS item 73287).

The future of microarrays

Rapid developments in DNA sequencing have raised the possibility that microarrays will themselves be displaced as the preferred method of cytogenetic analysis (4). It is already possible to replicate many of the functions of a microarray by advanced sequencing methods. However, the microarray currently has the advantages of precision, reproducibility, and affordability that will ensure its continuing use for at least the next few years. And, as already demonstrated above, there may still be clinical questions that require the older methods. Cytogenetics is changing, but it is not dead.

Sonic Genetics offers cytogenetic studies by both microscopic and microarray methods.

General Practice Pathology is a new fortnightly column each authored by an Australian expert pathologist on a topic of particular relevance and interest to practising GPs.
The authors provide this editorial, free of charge as part of an educational initiative developed and coordinated by Sonic Pathology.

References

  1. Miller DT, Adam MP, Aradhya S, Biesecker LG, Brothman AR, Carter NP, et al. Consensus statement: chromosomal microarray is a first-tier clinical diagnostic test for individuals with developmental disabilities or congenital anomalies. Am J Hum Genet. 2010 May 14;86(5):749–64.
  2. Dugoff L, Norton ME, Kuller JA. The use of chromosomal microarray for prenatal diagnosis. Am J Obstet Gynecol. 2016;215(4):B2–9.
  3. van den Berg MMJ, van Maarle MC, van Wely M, Goddijn M. Genetics of early miscarriage. Biochim Biophys Acta – Mol Basis Dis. 2012;1822(12):1951–9.
  4. Downie L, Donoghue S, Stutterd C. Advances in genomic testing. Aust Fam Physician. 2017;46(4):200–4.

Zika Now In Fiji

In what could represent a major blow to tourism in the region, the US Centers for Disease Control and Prevention (CDC) has this week issued a level 2 warning after mosquitoes in Fiji were found to be infected with Zika virus and to have transmitted the infection to humans.

Because of the strong link between Zika virus infection and severe birth defects, the CDC is strongly advising women who are pregnant, or even planning to become pregnant, against travelling to the area.

And as the virus can also be transmitted through sex, the advice for pregnant women whose partner has travelled to Fiji is to use condoms or refrain from sex for the duration of the pregnancy.

The warning also signals an alert for Australian doctors to consider Zika virus in patients who present with symptoms such as fever, rash and headache following travel to Fiji. However, one of the major problems in curtailing the spread of this virus has been the fact that infected adults may display very few, if any, symptoms and may be unaware that they have contracted the disease.

What’s more, an infected male can harbour Zika virus in his semen for much longer than in other bodily fluids, so the CDC recommends that men travelling to a Zika-prone country, which now includes Fiji, avoid conceiving a child for six months after leaving the area, or from the time symptoms develop if they do indeed develop symptoms. Women clear the virus more quickly, so the CDC recommends they avoid falling pregnant for two months after potential exposure or from when symptoms appear, assuming their partner did not travel.

For those people, including pregnant women, who can’t avoid travel to Fiji or another Zika-prone area, the CDC advises they take precautions to avoid mosquito bites and continue these precautions for three weeks after returning home. These include the use of specific insect repellents and the wearing of long-sleeved clothing.

Ref: https://wwwnc.cdc.gov/travel/notices/alert/zika-virus-fiji

Why Animal Trial Results Aren’t The Final Word

Throughout the era of modern medicine, animals have been used extensively to develop and test therapies before they are tested in humans. Virtually every medical therapy in use today – including drugs, vaccines, surgical techniques, devices such as pacemakers and joint prostheses, radiation therapy – owes its existence, at some level, to animal experiments.

Animals have played a pivotal role in countless life-saving discoveries in the modern era. For example, in crude experiments in the 1800s, dogs were injected with extracts made from the pancreases of other animals, which led to insulin therapy for human diabetes. Much more recently, genetically modified mice were used to develop revolutionary cancer immunotherapy drugs, such as that credited with curing advanced melanoma in AFL footballer Jarryd Roughead.


Read more: How we’re arming the immune system to help fight cancer


In developing and testing drugs for human use, animal trials give us extremely valuable information that is impossible to get from test tube or petri dish experiments alone. They tell us how a drug is absorbed and spread around the body in a living animal and how it affects the targeted, and other, tissues. They also tell us how the body processes and eliminates a drug – for most drugs, this is primarily done by the liver and kidneys.

These studies help decide whether to progress the drug to human trials and, if so, what a reasonable starting dose for a human might be. However, because of species differences, something that is effective and safe in an animal might not be so in a human.

What’s the strike rate?

The late Judah Folkman, a cancer researcher at Children’s Hospital in Boston, discovered a compound in the 1990s that eliminated a range of tumours in laboratory mice. Unlike traditional chemotherapies, there were no apparent side effects and the tumours developed no resistance to the treatment. Mass media outlets heralded a miracle cancer cure, but Folkman knew that what happens in the laboratory often fails to translate to the bedside. He famously quipped:

If you have cancer and you are a mouse, we can take good care of you.

The compound, endostatin, went on to human trials and was well tolerated in patients. But its effect on tumour growth was minimal and inconsistent, and results were described as “lukewarm”. Endostatin has since been reformulated and shows some promise in managing certain cancers, especially when combined with other therapies, but it’s not the wonder drug it at first appeared to be.

Scientific journal publications on animal studies usually include a disclaimer along the lines of “this effect has only been demonstrated in animals and may not be replicated in humans”. And with very good reason. A 2006 review looked at studies where medical interventions were tested on animals and whether the results were replicated in human trials.

It showed that of the most-cited animal studies in prestigious scientific journals, such as Nature and Cell, only 37% were replicated in subsequent human randomised trials and 18% were contradicted in human trials. It is safe to assume that less-cited animal studies in lesser journals would have an even lower strike rate.

Another review found the treatment effect (benefit or harm) from six medical interventions carried out in humans and animals was similar for only half the interventions. That is, the results of animal and human trials disagreed half the time.

Costs of failure

The mismatch between animal trials and human trials can cause big problems. Developing a drug to the animal trial phase is already incredibly expensive, but taking it to human clinical trials adds enormous cost, often tens or hundreds of millions of dollars. If a promising drug fails to impress in human trials, it can mean a lot of money, time and effort wasted.

But far more problematic is a drug that seems safe in animal trials, but turns out to be unsafe in humans. The consequences can be tragic. For instance, thalidomide (a drug to treat morning sickness) does not cause birth defects when given to pregnant rats and mice, but in humans it caused an international epidemic of birth defects, including severe limb malformations, in the 1950s and 1960s.


Read more: Remind me again, what is thalidomide and how did it cause so much harm?


More recently, a drug designed to treat leukaemia, TGN1412, was tested in monkeys – in many senses the closest laboratory model to humans – and was well tolerated. But when just 1/500th of the safe monkey dose was given to six healthy young men in the first phase of clinical (human) trials in 2006, they immediately developed fever, vomiting and diarrhoea. Within hours, they were in an intensive care unit with multiple organ failure. They only narrowly escaped death.

Another drug, fialuridine, developed to treat people with hepatitis B, tested well in mice, rats, dogs, woodchucks and primates. But a subsequent human trial in 1993 caused seven people to develop liver failure. Five died and the other two were saved through liver transplants.

Mice and men differences

So, why do human and animal drug trials sometimes disagree so spectacularly? It boils down to the way the body absorbs and processes the drug and the way the drug affects the body. Often these processes are the same or very similar across species, but occasionally they are different enough that a substance that is benign in one species is deadly in another.

This will not surprise pet owners, who know a block of chocolate can kill a dog. Dog livers are poor at breaking down the chemicals caffeine and theobromine, found in chocolate, so it doesn’t take much for toxic levels to build up in a dog’s bloodstream. Similarly, a cat that ingests even a small amount of paracetamol is a veterinary emergency, as cats lack the liver enzymes required to safely break down paracetamol. Instead, they convert it to a chemical that is toxic to their red blood cells.

Hindsight has taught us where the human and animal differences lie for thalidomide, TGN1412 and fialuridine, too. Rats and mice not only break down thalidomide much faster than humans, but their embryos also have more antioxidant defences than human embryos.

In the case of TGN1412, at least part of the problem was that the drug’s target – a protein on certain immune cells – differs slightly between the monkey and human versions. The drug binds more strongly to the human immune cells and triggers a rapid release of massive amounts of chemicals involved in inflammation.

And the reason fialuridine is toxic to humans is because we have a unique transporter molecule deep in our cells that allows the drug to penetrate and disrupt our mitochondria, which act as cells’ internal energy generators. So fialuridine effectively switches off the power supply to human cells, causing cell death. This transporter is not present in any of the five test animal species, so the drug did not affect their mitochondria.

Despite the shortcomings of animal models, and the profound ethical questions around subjecting animals to suffering for human benefit – an issue that concerns all researchers despite their commitment to improving human well-being – animal experimentation remains an invaluable tool in developing drugs.

The challenges, and indeed the obligations, for medical researchers are to use animals as sparingly as possible, to minimise suffering where experimentation is required and to maximise their predictive value for subsequent human trials. If we can increase the predictive value of animal trials – by being smarter about which animals we use, and when and how we use them – we will use fewer animals, waste less time and money testing drugs that don’t work, and make clinical trials safer for humans.

Ri Scarborough, Manager, Cancer Research Program, Monash University

This article was originally published on The Conversation. Read the original article.

Flu Season Pretty Bad

This year, the number of laboratory-confirmed influenza (flu) virus infections began rising earlier than usual and hit historic highs in some Australian states. If you have been part of any gathering this winter, this is probably not news.

States in the south-east (central and southern Queensland, New South Wales, Victoria, Tasmania and South Australia) have been harder hit by flu than those in the north and west. For example, Queensland has seen more hospital admissions than in the previous five years, mostly among an older population, while younger people more often test positive without needing hospitalisation.

Meanwhile, flu numbers in New Zealand and elsewhere in the Pacific have not matched the same elevated levels. But is Australia really experiencing the biggest flu season on record in 2017, or are we just testing more and using better tools?

This is hard to answer for certain because the information we need is not usually reported until later, and public databases only show the past five years. What we can say is that 2017 is on track to be a historically big flu year.


Read more: Have you noticed Australia’s flu seasons seem to be getting worse? Here’s why


Really, a big flu season

Flu can be a nasty illness. Sometimes it’s deadly. Other times it can be mild. But even for cases that fall in the middle you may not be able to work for days, or you’ll have to look after ill children home from school, or visit the very sick who have been hospitalised.

Years ago, detection of influenza viruses mostly relied on slow, finicky methods such as testing for virus in artificial cell cultures. But, in Australia today, most laboratories use either sensitive tools to detect viral gene sequences in samples from the patient’s airway, or less sensitive but rapid dipstick methods, where a special strip is placed in a sample to detect viral proteins.

These tools have been in use since 2007 in the larger Australian laboratories, so it’s unlikely we are just seeing more positives in 2017. While newer versions of these tests are being rolled out this year, they are unlikely to detect more cases. Equally, it’s unlikely more people with suspected flu decided to change their behaviour in 2017 and get tested, compared to 2016, or the year before.

As in all years, there are many people in the community with flu who don’t get tested. The proportion of people with flu who are tested likely remains roughly the same year to year.

State-wide flu reports provide reliable, laboratory-confirmed results. By looking at them, we can also be confident that “man flu” and severe common colds aren’t contributing to this specific and large increase in flu. We’re very likely seeing a truly huge flu season.

Why so bad this year?

Flu, caused by infection with an influenza virus, is mostly a disease with an epidemic peak during July and August in non-tropical countries. Flu viruses are broadly grouped into two types: Influenza-A and Influenza-B. Influenza-B viruses have two main sub-types while the Influenza-A viruses are more variable.

The Influenza-As you get each year are usually A/H3N2 (the main player so far this season) or A/H1N1, which lingers on from its 2009 “swine flu” pandemic. Multiple flu viruses circulate each year and serial infections with different strains in the same person in a single season are possible.

H3N2 has played a big role in the past five flu seasons. When it clearly dominates we tend to have bigger flu seasons and see cases affecting the elderly more than the young.

H3N2 is a more changeable beast than the other flu viruses. New variants can even emerge within a season, possibly replacing older variants as the season progresses. This may be happening this winter, driving the bigger-than-normal season, but we won’t know for certain until many more viruses are analysed.

Outside winter, flu viruses still spread among us. This year, in particular, we’re being encouraged to get vaccinated even during the peak of flu season. Vaccines are a safe way to decrease the risk that we or loved ones will get a full-blown case of the flu.

Yet Australian flu vaccination rates are low. Data are scant, but while vaccination rates have increased in adults and some at-risk groups, they remain lower than for childhood vaccines.


Read more: Disease risk increasing with unvaccinated Australian adults


The flu vaccine

Each season new flu vaccines are designed based on detailed characterisation of the flu viruses circulating in the previous season. But the viruses that end up dominating the next season may change in the meantime.

It is not clear whether that was a factor in this year’s high numbers in Australia, or precisely what the vaccine uptake has been in 2017. Much of this detail will not be reported until after the epidemic ends. Some testing suggests this year’s vaccine is well matched to the circulating viruses.

The flu vaccine is not the most effective of vaccines, but it is safe and the only preventive option we have for now. Of those vaccinated, 10-60% become immune to flu virus.


Read more: Flu vaccine won’t definitely stop you from getting the flu, but it’s more important than you think


Future flu vaccines promise to account for the ever-changing nature of flu virus, reducing the current need for yearly vaccination. Until they are available, though, it remains really important to book an appointment with your vaccine provider and get a quick, safe vaccination, because we are unarguably in the midst of the biggest flu season Australia has seen in years.

We have both vaccines and drugs to help us prevent and minimise disease and the extra load on hospitals caused by flu. The young, the elderly, those with underlying disease and Indigenous Australian people are most at risk of the worst outcomes, and this is reflected by government-funded vaccination for these groups.

Ian M. Mackay, Adjunct assistant professor, The University of Queensland and Katherine Arden, Virologist, The University of Queensland

This article was originally published on The Conversation. Read the original article.