Many of us already see the benefits of using facial recognition to access our smartphones – it’s quick and saves us time. This use of Artificial Intelligence (AI) can be almost imperceptible in our everyday lives. But should that subtlety extend into health economics and patient-centric research (HEPCR) and health technology assessment (HTA)? At ISPOR US 2024, held in Atlanta, GA, earlier this month, AI was center stage and under the microscope – leaving all of us thinking and talking about its use in HEPCR and HTA.

At the conference, ISPOR’s Chief Scientific Officer, Laura Pizzi, introduced a plenary session titled “AI Enabling Whole Health: Opportunities and Challenges for HEOR and HTA”. The session included audience polls showing Generative AI is being used for a wide range of applications. These polls also confirmed what we have long suspected: in health economics and outcomes research (HEOR) or what we prefer to call HEPCR, we seem to value AI most as a support for evidence generation and health economic modeling.

Panelist Rachael Fleurence, of the National Institutes of Health, USA, noted that systematic reviews and real-world evidence (RWE) are target areas for AI use, but not without expert oversight – what she termed “humans in the loop” overseeing data interpretation. The key message here was that human oversight will always be essential to AI.

As consultants tracking Generative AI – and indeed developing GAIL, our own AI application – we know that manufacturers are using AI more than ever in their early modeling. It’s easy to see AI’s benefits for early-stage literature reviews and for populating early models. But we also know AI is sometimes being used to program and adapt models. The use of AI seems acceptable here because the audience is the manufacturers themselves, not HTA bodies. Indeed, HTA bodies have yet to fully articulate how far they accept the use of AI. For us, this is one of the problems that needs to be solved. Aspects of AI are already being used in health economics, but is this compatible with HTA submission documentation? And how should these applications be validated so they are reproducible and transparent? As consultants, we are challenged to consider whether AI can deliver opportunities for different types of qualitative or quantitative health data.

Although most of the attention on the use of AI has focused on its implications for systematic literature reviews and economic models, interest is increasing in AI’s application to patient-centric research. The research in this area is at a nascent stage, but Lumanity’s senior consultants have been considering how to use AI to improve the readability of patient-facing materials (e.g. questionnaires and informed consent forms). Much as in health economics, where humans need to be in the loop, we have found that AI applied to patient-centric research has to be chaperoned for the results to be useful. A poster we presented at the Atlanta meeting highlights some of the pros and cons. Further research by our consultants will look at developing initial disease-specific conceptual frameworks that can be used for the development of interview guides and the selection of patient-reported outcome measures for clinical trials.
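As an illustrative sketch only – not a description of Lumanity’s actual method – one way to keep a human in the loop when AI simplifies patient-facing text is to check the output against a standard readability metric, such as the Flesch–Kincaid grade level. The example sentences below are hypothetical:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, with a silent-'e' adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """US school-grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

# Hypothetical consent-form sentence and an AI-simplified rewrite of it
original = ("Participants must acknowledge comprehension of the aforementioned "
            "procedures prior to enrolment.")
simplified = "Please confirm you understand these steps before you join."

# A lower grade level indicates more readable patient-facing text
print(flesch_kincaid_grade(original), flesch_kincaid_grade(simplified))
```

A reviewer could flag any AI-simplified passage whose grade level remains above a target threshold for manual rework, rather than accepting the model’s output wholesale.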

Do we need industry-specialized AI?

At a conference panel session titled “Emerging Landscape of Health Economic Evaluation in the Era of Generative AI”, the consensus was that Generative AI is on track to transform the HEOR/HEPCR space, but general-purpose Generative AI models may not be specific enough for HEOR/HEPCR – requiring extremely specific prompts to generate fit-for-purpose analyses. Industry-specialized Large Language Models (LLMs) would enable the AI platform to partner more effectively with the analyst. Independent of the debate about general-purpose versus industry-specific AI models, all agreed that ongoing dialogue and partnership between industry leaders, HTA agencies, health economists, software developers and policymakers will be crucial.

Many manufacturers are looking to use AI for early models but they are also stretching its use and testing how their AI platforms can be adapted for HTA submission work – in the UK this applies especially for use with the National Institute for Health and Care Excellence (NICE).

NICE has started to evaluate how AI should be used and is striving to understand the best use cases for AI, where mature methods are available, and where methods need further work. It is also looking to develop best-practice guidelines, building on related frameworks. Our key takeaways from this ISPOR session were that HTA agencies should:

  1. Collaborate on HTA-relevant use cases – with academic experts, early adopters and other HTA agencies; and
  2. Focus on ongoing monitoring and evaluation

ISPOR US 2024 showcased an impressive collection of posters, presentations and pitches on the actual and potential use of Generative AI in various aspects of health economics, patient-centered outcomes research and RWE research. Machine learning applications included prediction of drug comparative efficacy, treatment resistance, time to response and survival extrapolation.

One example (no analysis or coding shared) used Generative AI to scrape – that is, identify and collect data from – different sources of electronic medical records to build longitudinal data at the individual patient level. This real-world data, designed for follow-up analysis, used deep learning to identify each individual patient as well as to predict longer-term (beyond observation) outcomes.
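Since no analysis or code was shared at the session, the following is only a minimal sketch of the basic assembly step – linking records from multiple sources into a date-ordered history per patient. All record structures and field names here are hypothetical:

```python
from collections import defaultdict
from datetime import date

# Hypothetical records from two EMR sources; field names are illustrative
source_a = [
    {"patient_id": "P001", "date": date(2021, 3, 1), "event": "diagnosis"},
    {"patient_id": "P002", "date": date(2021, 5, 9), "event": "diagnosis"},
]
source_b = [
    {"patient_id": "P001", "date": date(2021, 6, 15), "event": "treatment_start"},
    {"patient_id": "P001", "date": date(2022, 1, 20), "event": "follow_up"},
]

def build_longitudinal(*sources):
    """Group records by patient ID and sort each patient's history by date."""
    histories = defaultdict(list)
    for source in sources:
        for record in source:
            histories[record["patient_id"]].append(record)
    for records in histories.values():
        records.sort(key=lambda r: r["date"])
    return dict(histories)

histories = build_longitudinal(source_a, source_b)
# P001's history arrives in date order: diagnosis, treatment_start, follow_up
print([r["event"] for r in histories["P001"]])
```

In practice the hard part is exactly what the presenters glossed over: resolving the same patient across sources that lack a shared identifier, which is where the deep-learning linkage they described would come in.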

While these presentations promised time-savings and superhuman insight, some limitations were conceded, such as the quality and informativeness of the data itself. However, little to no information was provided regarding the methods, as these examples and case studies were designed for general client audiences. It was impossible to judge the scientific rigor of these methods in terms of a priori hypothesis generation, model validation, bias testing and replicability. Moreover, companies were promoting their own customized (proprietary) platforms. It will be interesting to see how much transparency is demanded by customers, HTA bodies and the larger scientific community. Broader, non-competitive collaboration may be necessary to progress from promising case studies to more standardized and acceptable methodology.

While we continue to meet HTA requirements for rigorous systematic literature reviews and health economic models, it does seem that some of the more established AI-powered platforms, designed specifically around our industry’s standards, have the potential to help us collect, communicate and grow the data we seek.

Contact us

If you missed us at ISPOR, or if you would like to find out more about any of the topics covered in this article, please contact us.