Editor’s Question: Should we consider self-diagnosis through commoditised technology a power or a failure?

Everyone has done it. A quick search on a headache spirals into Dr Google informing us we are not long for this world, or an influencer telling us we must have a serious mental health condition because of our sleeping habits. With so much pressure on public health sectors, surely taking things into our own hands could lead to medical independence for minor injuries and illnesses. This month, we ask: should we consider self-diagnosis through commoditised technology a power or a failure?

Introducing this month’s Editor’s Question is Vicky Godfrey, Co-founder of DNApal, a platform that harnesses an individual’s genetics to deliver bespoke nutritional and lifestyle guidance and a roadmap to support their unique health journey.

Vicky Godfrey, Co-founder of DNApal

Human touch 

As a nutritional therapist and nutrigenomics expert, I emphasise the importance of research for personal empowerment. However, I’ve witnessed the downsides of relying solely on platforms like Google for self-diagnosis; it often leads to unnecessary anxiety. The human touch is irreplaceable when it comes to interpreting complex medical situations: online diagnostic tests can’t account for the nuances and may not suggest the necessary follow-up tests or procedures.

While technology continually advances, it struggles to replicate the holistic understanding and empathy that humans provide. Medical symptoms can often lead to multiple possible diagnoses, which is where a skilled healthcare professional’s expertise shines. For instance, two people with similar symptoms might receive different diagnoses based on a physician’s deeper insights. 

Considering the significance of the NHS, it’s essential to strike a balance between technology and human involvement in healthcare. While tech can provide valuable information, it must be complemented by human support, especially for vulnerable populations or those who may be emotionally distressed by online diagnoses. In this evolving landscape, a thoughtful and regulated approach is necessary to ensure the well-being of individuals seeking medical information. 

Misinformation  

Ensuring the validity of your information sources is key. We have two nutrigenomics professors who rigorously review every study before it’s included in our app because, let’s face it, the internet is a wild place where anyone can write anything. When you stumble upon a blog, ask yourself: is the author qualified, or are they just sharing opinions?

I’ve had countless patients claim they have serious conditions based on a Google search, but it’s essential to approach these situations with caution. A systematic medical evaluation is necessary to rule out the various possibilities, and interpreting complex tests requires professional expertise that tech platforms can’t provide.

Input vs output 

ChatGPT, while generally accurate, can sometimes show a bias towards pharmacological approaches over holistic ones, and AI often lacks coverage of holistic health unless questions are posed precisely. For instance, searching the NHS website for the causes of a headache might return a suggestion of paracetamol, a temporary fix. As a naturopath, I’d explore water intake, magnesium and alcohol consumption, uncovering layers of potential causes. The key is how questions are framed, and that takes experimentation.

Data protection 

Data protection is certainly crucial. At DNApal, we prioritise strict data security; many cheaper tests may compromise data by selling it on. In the DNA space, trust is paramount, and we’ve created DNA Now as a safe haven for users’ data, ensuring it won’t be sold without consent. In this tech-driven world, understanding where your data goes is essential.

Dr Robert Sackin, Partner at intellectual property law firm Reddie & Grose LLP 

“I believe that the answer is ‘power’ and here’s why.”

“Technology becomes commoditised when it becomes part of everyday life. In its more primitive forms, commoditised technology for self-diagnosis can be traced all the way back to paper books, papyrus scrolls or even cave paintings. Standalone computers became commoditised decades ago, and access to the Internet could perhaps be considered to have become commoditised by the start of this millennium.

“These technologies enabled people to readily self-diagnose, but they are crude. Typically, one would look up a condition of interest, or flick through a few conditions and select those that seem most relevant, then effectively run through a series of “yes” and “no” questions about relevant symptoms. There might be more sophisticated scoring where diagnosis is very nuanced, such as for neurological conditions.

“I believe that these commoditised technologies were powerful and far from a failure. 

“ChatGPT launched on 30 November 2022 and by January 2023 it had become the fastest-growing consumer software application in history, with over 100 million users. Google, Microsoft, Baidu and Meta all launched competing products: Bard, Bing Chat, Ernie and LLaMA respectively. Generative AI chatbots built on large language models had become almost immediately commoditised. That is to say, the latest technology had become part of everyday life.

“And, of course, users are now able to use these systems for self-diagnosis for health-related purposes. They can ask free-form questions and be much more nuanced and probing and, dare I say, more human in their interactions. However, concerns are being raised because the models used are not specifically trained for healthcare; they are, by definition, general models trained on general data. That data is wide-ranging, and some of it may be of questionable scientific merit, whether intentionally or unintentionally.

“A May 2023 study published in JMIR Human Factors found that ChatGPT had satisfactory explanatory power and a positive risk-reward balance, and that users generally reported a positive experience. The study concluded that, rather than discouraging the use of these tools, they should be improved by adapting them for healthcare applications.

“It seems to me that the next commoditised technology for self-diagnosis is the generative AI chatbot built on a large language model and trained on sound healthcare data: power, and not failure.”

Imogen Wade, Therapist at Velaris Counselling 

“There is an overwhelming plethora of wellbeing content in online spaces, especially on platforms like Instagram and TikTok. The majority focuses on mental health and neurodivergence, with bitesize snippets of information on complex topics such as ADHD, depression and childhood trauma. Often, the creators of such content are qualified in counselling or psychology. The resultant videos and infographics present bullet points of information to ostensibly help viewers, with content designed for maximum follows, likes and shares. TikTok videos often open with sweeping statements such as ‘five ways to overcome your avoidant attachment from a qualified psychologist’, and Instagram posts will announce, in pastel colours and bubble font, ways to identify the signs of depression.
 
“Online wellbeing content is driving a huge rise in self-diagnosis, a rise presented as empowering by those who create and consume this content. In some ways, self-diagnosis is a rebellion against long NHS waiting lists and expensive private healthcare; professional diagnoses are simply inaccessible for many. In many ways, self-diagnoses are the inevitable result of a healthcare system under strain: according to ADHD UK, the current waiting time for both autism and ADHD assessments is approximately six to nine months. It is understandable why self-diagnosis is seen as an empowering act in a system that fails those most in need of support.
 
“Perhaps wellbeing content creators owe their success to the number of people seeking an official diagnosis, or perhaps they are partly the cause of that number. Either way, thousands of people are producing wellbeing content on commoditised apps without asking themselves whether it is ethical. I’m sure there is a lot of accurate, ethical wellbeing content that genuinely has a positive impact on those who engage with it. However, there is a dark side to the trend. As a practising therapist, I have had clients feel utterly broken by the rush of content that tells them everything that is wrong with them. I have seen young women who feel unfixable because a pastel infographic hasn’t ‘cured’ their panic attacks or OCD.
 
“Commoditised technology profits from those who are seeking support. It has led to the oversimplification of mental health issues and neurodivergence to generate engagement, regardless of the accuracy of the resultant self-diagnoses. The true cost to wellbeing is enormous, and I expect to see many more clients who have been negatively impacted by these trends.”

Leigh Greenwood, Founder and Managing Director, Evergreen PR

To understand whether self-diagnosis technology is a good or a bad thing, we must first understand the desired outcome.

The UK population is the biggest it has ever been, having risen from 50m in 1950 to more than 67m today. By 2035, it is projected to reach 70m.

The NHS is already struggling. An ageing population, growing levels of long-term and chronic disease and well-publicised financial and workforce pressures have – coupled with the pandemic – led to unprecedented waiting lists. Increasing numbers of people now lack confidence in the health and care system.

Technology can really help.

As the National Institute for Health Research (NIHR) points out, digital health tech can: be used anytime, anywhere; reduce travel, benefiting the environment; reach more people than face-to-face care; be cheaper; and empower people to manage their own conditions.

The NHS Long-Term Plan promised to create digitally enabled primary and outpatient care, and it also committed to giving people more control over how their care is planned and delivered.

Yet there are undoubtedly risks when patients rely too heavily on technology and forgo speaking to a healthcare professional. Self-diagnosis is one such area.

Approximately three in four people search for health information online, and more than half of UK adults self-diagnose when feeling unwell or experiencing medical symptoms.

The risk of error here is high. One study found that healthcare professionals are more than twice as likely as technology alone to provide an accurate diagnosis.

Incorrect diagnoses, missed comorbidities, information overload and false reassurance are just some of the common problems of self-diagnosis. There is also the very real problem of ‘cyberchondria’, a clinical phenomenon in which repeated Internet searches for medical information cause excessive concern about physical health.

Health anxiety often starts in young adulthood, and around a third of under-25s turn to TikTok for medical advice. Yet an analysis of the top 100 ADHD videos posted to the platform found that 52% were classified as misleading.

The first requirement for healthcare advice should be that it is accurate. YouTube’s recent announcement of a verification system for healthcare professionals represents a positive step forward in this area.

The second should be that it is correctly interpreted and applied to the individual.

Health technologies designed to augment healthcare professionals typically have this built in, as they still require clinical sign-off.

It is less straightforward when the task of interpretation falls squarely on the patient. That, surely, is where education must come in, so that people understand the limitations and risks of self-diagnosis – even when basing it on trusted information – and so know when to access healthcare professional support.


 
 
