03 April 2024
Following the AI discussions at the World Evidence Pricing and Access Congress 2024, we asked in which areas you thought AI could make a key difference. Here are the results:
👉 6% believed AI could be used to reduce the risk of missed forecasts or outcomes such as price erosion from indication expansion.
👉 19% believed AI could be used to create opportunities such as TPP optimization.
👉 31% believed AI could be used to increase productivity when analysing and integrating data sets (e.g., assessing IRP impact; a small sketch of this kind of calculation follows this list).
👉 44% believed AI could be used to improve efficiency in generating insights and reducing research time.
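To make the third point above a little more concrete, the sketch below shows the kind of data-integration task that AI-assisted tooling could speed up: estimating how a price change in one country flows through an international reference pricing (IRP) rule. The basket, the prices and the simple average-of-basket rule are illustrative assumptions only; real IRP rules vary by country and are usually far more complex.

```python
# Illustrative sketch only: a toy international reference pricing (IRP) check.
# The basket composition, prices and average-of-basket rule are assumptions for
# illustration; real IRP rules differ by country and change over time.
from statistics import mean


def reference_price(prices: dict[str, float], basket: list[str], rule: str = "average") -> float:
    """Price implied by referencing a basket of countries under a simple rule."""
    basket_prices = [prices[c] for c in basket if c in prices]
    if not basket_prices:
        raise ValueError("no basket prices available")
    return mean(basket_prices) if rule == "average" else min(basket_prices)


# Hypothetical list prices (EUR per pack) before and after a price cut in Germany.
before = {"DE": 100.0, "FR": 95.0, "IT": 90.0}
after = {**before, "DE": 80.0}
basket = ["DE", "FR", "IT"]

print(f"Referenced price before the cut: {reference_price(before, basket):.2f}")  # 95.00
print(f"Referenced price after the cut:  {reference_price(after, basket):.2f}")   # 88.33
```

In practice an analyst would want to run this kind of check across every referencing market and rule, which is exactly the repetitive, data-heavy work where AI-assisted tools could add productivity.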
The uptake of artificial intelligence is lower in healthcare and the pharmaceutical industry than in many other industries, such as automotive, financial services and telecoms. However, artificial intelligence is being explored across healthcare and pharma to understand its potential.
There were six presentations on the use of AI at the recent Evidence Pricing and Access Congress 2024, and several companies are actively exploring what is being developed.
Examples of the use of AI in pharmaceutical market access and HEOR have generally centered on data extraction, e.g., automating the production of SLRs (systematic literature reviews), summarizing data or analysing large data sources. These are tasks that can be conducted by humans but are undertaken far more efficiently by machines.
There is often relatively little need for judgement to be incorporated, as the output of these examples tends to be summarized data rather than advice or judgements. The next stage in the use of AI will be to interpret and understand data, to feed into data analytics or even to make recommendations.
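To illustrate the kind of extraction task described above, the toy sketch below mimics a single SLR screening step: flagging abstracts that mention the population and outcome of interest so that a human reviewer only sees a shortlist. The records, keyword rules and inclusion logic are purely illustrative assumptions; real tools rely on trained language models rather than simple keyword matching.

```python
# Toy illustration only: a crude title/abstract screening step of the kind an
# AI tool might automate within an SLR. Records and keyword rules are made up;
# production systems use trained language models, not keyword matching.
POPULATION_TERMS = ["type 2 diabetes"]
OUTCOME_TERMS = ["hba1c"]

records = [
    {"id": 1, "abstract": "Randomized controlled trial of drug A in adults with type 2 diabetes; HbA1c reduction reported."},
    {"id": 2, "abstract": "Case report of a rare dermatological reaction in a single paediatric patient."},
    {"id": 3, "abstract": "Observational study of drug A: real-world HbA1c outcomes in adults with type 2 diabetes."},
]


def include(abstract: str) -> bool:
    """Keep the record if it mentions both the population and the outcome of interest."""
    text = abstract.lower()
    return any(t in text for t in POPULATION_TERMS) and any(t in text for t in OUTCOME_TERMS)


shortlist = [r["id"] for r in records if include(r["abstract"])]
print("Records passed to human review:", shortlist)  # [1, 3]
```

The value here is not the matching rule itself but the division of labour: the machine does the repetitive sifting, and the human applies judgement to the shortlist.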
The problem that we have in healthcare is the diversity of human beings and of the conditions affecting them. Many rare conditions are actually collections of heterogeneous conditions with common features, but where each individual is different. The best available evidence is often based on randomized controlled trials that include clearly defined patients and exclude patients with multiple pathologies. Real-world evidence often shows different results from well-controlled clinical trials. It is the variability in the quality and extent of the data that makes assessing it difficult.
AI should be able to help with this, as it can review multiple data sources to detect common factors.
But there is an old saying about computers: garbage in, garbage out. We will have to be careful how we teach artificial intelligence and how we help it learn to deal with poor-quality or diverse data. There is the potential that we will develop AI tools that give us results with no real explanation of how they were derived, or results that have no face validity. For example, Google Gemini's image generation tool depicted a variety of historical figures, including popes, founding fathers of the US and German Second World War soldiers, as people of colour.
This was embarrassing for Google but did not cause physical harm to anyone. A similar mistake in healthcare could cause serious harm to multiple patients, and in pharma it could lead to a potentially effective drug being ditched or a worthless one being developed further.
AI has huge potential within healthcare and pharma. It may, in time, be able to turn data into reliable advice and recommendations. To be successful, we humans will need to provide the right data and, importantly, to identify the right questions to address.
AI will take time to learn how to interpret complex data and will make mistakes at first. Humans will be needed to review the results and help the AI learn from its mistakes to refine the outputs. Developing reliable results from AI-analyzed data, and building enough acceptance of those results for important healthcare and commercial decisions to be based on AI alone, will take time.
AI is already helping with diagnostic tests, but these are often relatively simple processes, such as detecting a potentially cancerous growth on a scan, rather than complex, multifaceted diagnosis and treatment decisions. For the results to be widely accepted, we will also need AI to provide clear descriptions of the methods used, the assumptions behind them and the caveats associated with the recommendations. Decisions suggested by AI will be scrutinized carefully before being accepted, and openness about how those recommendations are made will be vital.
It will take time for AI to be accepted for complex healthcare and commercial decisions. Human intelligence will continue to work alongside artificial intelligence as AI develops and is exposed to more and more healthcare data and experience. The complexity and diversity of healthcare suggest that there will always be a need for human intelligence alongside artificial intelligence. We should not see AI as replacing human intelligence but as another tool we can use to enhance our understanding.