Should GPs be using ChatGPT in clinic? 16% already are
Tools such as ChatGPT, Gemini, Copilot, Perplexity and Claude are making their way into many businesses – and a Healthed survey of 1710 GPs this month found that while just 16% of GPs are either regularly (5%) or occasionally (11%) using one of these ‘large language models’ in their clinical practice, another 36% are considering it.
But how well do we understand the risks and benefits?
The RACGP has developed a resource to help doctors make informed decisions about using conversational AI tools such as ChatGPT, Claude, Gemini and Copilot in general practice.
While the possible uses are enormous – from language translation to triage to clinical decision support, to name a few – potential clinical, ethical and privacy risks need to be factored in.
“Conversational AI performs best on straightforward clinical tasks,” the resource states, noting that AI tools might give authoritative advice without caveats in grey areas of medicine that are still up for debate.
Dr Sean Stevens is the chair of the RACGP’s Digital Health and Innovation Specific Interest Group as well as a member of the expert committee that helped create the College’s new resource.
He uses AI in his own practice on a daily basis – from a scribe that writes notes and care plans to a tool that answers phones, as well as the conversational AI tool Claude for social media and mail-outs.
How does he use conversational AI at work?
“A group letter for a particular situation to your patients, it’s really good for that. You go, you know, ‘Assume the persona of an Australian general practitioner who wants to write a letter to their patients regarding filling out an online questionnaire on sleep apnoea’,” he explains.
The best uses are “where you know what you want to say, but it’s just making it polished in a sense,” Dr Stevens says.
But while he’s an early adopter, he says the trick is to understand the limitations. In terms of diagnosis or treatment plans, “it’s not there yet.”
The RACGP lists several potential risks. Problems can include responses that seem authoritative, but are actually vague, misleading, and sometimes just wrong. AI tools can also be biased, and thus make recommendations that rely on inaccurate stereotypes or further stigmatise marginalised groups. Additionally, because the processes AI uses are not transparent, it can be tricky to independently assess the evidence it uses to make recommendations and draw conclusions.
“I’d be very cautious about diagnosis and treatment plans. It can just hallucinate stuff,” Dr Stevens warns.
“Diagnosis, if it suggests a range of differential diagnosis, that’s probably not bad because you can look at something and just go, that’s ridiculous, and ignore it. And who knows, it might actually suggest a rare condition that you haven’t thought of. But you definitely can’t rely on it as your sole means of diagnosis.”
“In terms of treatment, I would be very cautious about letting it lay out a treatment plan. Again, it may suggest a range of options, but the doctor would need to look at those options and decide what to throw out, which would be a lot, and what to accept.”
The College also notes that the need to comprehensively check the AI’s outputs can offset the time-saving benefits, and that many conversational AI tools are not (yet) compatible with or integrated into clinical information systems, meaning they cannot easily be assimilated into existing workflows.
When it comes to summarising clinical notes, “data security and data governance is key,” Dr Stevens adds.
“So you can’t go feeding identifiable patient information into a standard large language model. You can only do that with one that’s been specifically designed to handle confidential patient information.”
“Data should be housed in Australia, deidentified, and encrypted between the source and the place where it’s processed and deleted shortly after it’s used.”
Tips to improve safety/accuracy (and avoid an AHPRA breach)
So should GPs be using conversational AI in clinical practice? In a nutshell, the College says not quite yet. While there’s no doubt it could “revolutionise” parts of healthcare, the RACGP warns that “GPs should be extremely careful, however, in using conversational AI in their practice at this time.”
“Many questions remain about patient safety, patient privacy, data security, and impacts for clinical outcomes,” the College’s resource concludes.
More information:
Meeting your professional obligations when using artificial intelligence in healthcare | Ahpra and National Boards
Conversational artificial intelligence (AI) | RACGP