Are patients and the life science industry ready for AI GPs?

Healthcare and pharma, like every other industry, are already being dramatically impacted by the evolution of AI. The first wave has been machine learning driving innovation; the second wave will come from the patient's perspective, in the form of interactive, personalised first-line healthcare. GP chatbots are coming. The public, after all, has long been used to typing its symptoms into a computer to see what might be wrong: WebMD has, believe it or not, been around for a quarter of a century. And since ChatGPT emerged in 2022, it has become a wildly popular alternative to search engines.

A study published last year by the Medical Economics journal found that 25% of the patients it surveyed would already rather speak to an AI than a human GP. As we grow more comfortable with and trusting of these tools, that figure is only going to increase. Research conducted by the UK government and published in January found that 65% of users trialling OpenAI-powered chatbots on the Gov.UK website came away satisfied.

And it’s not just patients and end users. The same Medical Economics study found that 10% of US-based GPs were already using ChatGPT to help diagnose patients, and that up to 50% planned to do so in the future as the technology improves.

That some people would rather cut out the middleman and simply ask an AI is no real surprise. AI doctors come with a long list of benefits: they plug gaps in resources, cut waiting times and give human GPs space to focus on patient care. This is potentially powerful technology that, if done right, feels simple and intuitive to a patient, and when combined with wearable tech that can monitor heart rate, blood pressure, blood oxygen and more, is an efficient way to get an accurate diagnosis in many cases.

In a climate where 17% of patients in England have to wait at least two weeks for a GP appointment, and the average number of patients per GP is nearly 2,300 (per NHS Digital), instant access to tailored advice is incredibly attractive.

There’s even data indicating that AI has the potential to deliver a better patient experience than human doctors, at least in terms of quality of communication and empathy. A recent study, published in JAMA Internal Medicine, compared responses to medical questions posted on Reddit's AskDocs forum from verified doctors to those generated by ChatGPT. A panel of healthcare professionals, unaware of the source, rated the AI responses as higher quality and more empathetic 79% of the time.

While not suggesting ChatGPT can replace doctors, the findings underscore AI's potential to assist them by drafting personalised advice for their review.

In fact, the potential benefits of AI for GPs themselves are significant. A pilot study published in Medical Economics, conducted between 2021 and 2022 by the American Academy of Family Physicians, found that the use of a voice-enabled AI assistant led to a 72% decrease in documentation time per month, resulting in average savings of 3.3 hours per week. That time can be redirected to one-on-one patient care where it is needed most, including work doctors might not previously have had time for, such as preventive care. It can also mean less paperwork after hours and a better work-life balance.

However, there are also concerns. Deep ones. Dr. Annabelle Painter, an honorary fellow in digital health at Imperial College and a GP registrar in London, told Pulse this month, 'there is a question mark about the readiness of the IT and data infrastructure in primary care to allow interoperability. Health equity, digital exclusion, bias, fairness and data security are also important ethical topics to be considered. Building trust in AI tools for patients and clinicians is a significant challenge.'

This is a field that needs to be approached with optimism and an open mind, yes, but also with caution. As Dr. Helen Salisbury, a GP in Oxford, warned in an interview with the i newspaper last year, 'I think you can look forward to a dystopian future when if you're poor, you get a computer [to diagnose you], and if you're rich, you get a proper human doctor.' It is crucial to ensure that AI does not exacerbate existing health inequalities and that all patients have access to high-quality care, regardless of their socioeconomic status. And that’s before we get into risks like data security or biases in training data.

It’s largely inevitable that AI’s integration into patient-facing areas of healthcare, such as general practice, is going to increase at a rapid rate. The benefits, particularly in a traditionally under-resourced part of the industry, are just too great. Whether this technology is a positive force will ultimately come down to how ready the industry is for it.

No-one wants patients to have poor experiences, and crucially, no-one wants misdiagnoses or potentially harmful advice that could have been avoided had a flesh-and-blood doctor or more traditional methods been involved. Life sciences, after all, is still smarting from the Elizabeth Holmes and Theranos debacle of over a decade ago. Healthcare professionals need to actively engage in shaping the development and implementation of this technology.

Dr. Patricia Schartau, a lecturer in primary care at UCL and a GP in London, summed it up when she told Pulse recently, 'Educational initiatives for professionals and patients will be key to promote ethical AI adoption and informed decision-making. Clinicians may not be replaced by AI, but they risk being replaced by clinicians better able to use it.'

Interviewed for an article published by the Royal Australian College of General Practitioners (RACGP) Dr. Winnie Chen, a GP completing her PhD in health informatics and health economics, suggested that as a starting point, 'GPs – and doctors in general – should be familiar with the technology. The best way is to try it for ourselves, to test what it can and can't do well … How can it help with my work? What needs to be developed next? What needs to be done to make it more accurate? How do we use it ethically and appropriately?'

There are significant risks, yes. But a theoretical “DocGPT” could also come with huge benefits for the industry and patients alike, and the industry ignores it at its peril. As Dr. Chen told the RACGP’s News GP: 'Without familiarity with LLMs [large language models], we are missing out on opportunities to contribute to discussions with our colleagues (medical students, trainees, other doctors, health managers etcetera) about its use.'

By actively engaging in the development and deployment of AI in healthcare, medical professionals can help shape a future where technology and human expertise work hand in hand to provide the best possible care for patients.

Published on 2023-03-28 11:53:02