There are ongoing discussions about the use of Artificial Intelligence (AI) in health technology assessment (HTA) – with multiple stakeholders trying to understand just how far AI’s integration should go. In the drug development sector, the consensus among manufacturers and the organizations working with them has been that AI has the potential to improve the efficiency of data analysis, predictive modeling, and decision-making processes. AI has been widely accepted as a valuable, time-saving tool, but one whose output must be overseen by humans who verify the accuracy of its data interpretation.

In the health economics and outcomes research (HEOR) space, we have looked to HTA bodies for some explanation of what the boundaries should be in using AI.1 In the US, the Food and Drug Administration (FDA) published its perspective on the use of AI and machine learning in drug development in March 2024.2 In the UK, the National Institute for Health and Care Excellence (NICE) has just published its position statement on the use of AI in evidence generation.3

The NICE position statement clarifies where AI may be used and where its boundaries lie. We used GAIL, our in-house AI platform, to generate a summary of NICE’s position statement and, in keeping with NICE’s advice, our human consultants refined GAIL’s output.

The NICE position paper discusses the potential benefits and challenges of AI in evidence generation for HTA. AI methods, including machine learning and generative AI, can process large datasets efficiently, revealing patterns and generating novel outputs that might aid in evidence synthesis. NICE foresees that AI-informed evidence will increasingly be considered in its evaluations.

However, concerns exist about AI’s appropriateness, transparency, and trustworthiness. NICE emphasizes the need for careful use of AI methods, ensuring they add demonstrable value while balancing potential risks like algorithmic bias and reduced human oversight. The position statement outlines expectations for AI use, including clear justification and transparency in reporting, adherence to existing regulations and standards, and maintaining human involvement in decision making processes.

NICE also discusses specific applications of AI in systematic reviews, clinical evidence and trial design, real-world data analysis, and cost-effectiveness evidence. It highlights the importance of engaging with NICE when planning to use AI methods and of ensuring robust security measures to mitigate cybersecurity risks. NICE notes that “Submitting organisations considering using AI methods should engage with NICE to discuss their plans.” When appropriate, “early engagement could be sought through NICE Advice. At later stages of evidence development, organisations should discuss their plans with appropriate NICE technical teams.” Moreover, any plans to use NICE content for AI purposes are subject to an approval process, a licensing arrangement, and a fee for international use.4

NICE notes that in the UK all use of AI must comply with the UK Government framework for regulating AI, and that its key principles should be referenced when considering the value of AI use cases. In the UK, the public sector has adopted various AI ethical frameworks to guide the development and deployment of AI systems. Examples of these frameworks include:

  • The Department for Science, Innovation and Technology’s “Guide to using artificial intelligence in the public sector”5
  • The Department for Health & Social Care’s “Guide to good practice for digital and data-driven health technologies”6
  • The Cabinet Office’s “Data Ethics Framework”7

When making an HTA submission, the submitting organization must determine which legislation applies, including data protection laws and ethical standards, and document these in the submission. We could not agree more. At Lumanity we use AI-enabled tools to assist with translation, transcription, and analysis to support the research process. Lumanity has conducted robust due diligence on all platforms used to ensure regulatory and contractual compliance; this includes the Information Security Risk Assessment, Data Protection Impact Assessment, vendor due diligence, and other checks.

In its position statement, NICE discusses the following key areas where the use of AI may be considered:

NICE acknowledges that AI has the potential to automate conventional literature search and review processes. AI can automate systematic review steps such as developing search strategies, classifying studies (e.g. by study design), primary (title/abstract) and full-text screening, and visualizing search results.
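
To make the screening step concrete, the sketch below shows one common pattern: training a simple classifier on human-labelled abstracts and using it to prioritize the remaining records for review. It is a minimal illustration under our own assumptions (example abstracts, labels, and ranking scheme are invented), not NICE-endorsed methodology, and a human reviewer still makes every inclusion decision.

```python
# Minimal sketch: prioritizing unscreened abstracts by predicted relevance.
# The labelled examples and texts are illustrative assumptions; a human
# reviewer, not the model, makes the final inclusion decision.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labelled_abstracts = [
    ("Randomised controlled trial of drug X versus placebo in adults", 1),
    ("Narrative review of an unrelated condition", 0),
    # ... many more human-labelled records would be needed in practice
]
unscreened = ["Phase III study of drug X: efficacy and safety results"]

texts, labels = zip(*labelled_abstracts)
vectoriser = TfidfVectorizer(stop_words="english")
model = LogisticRegression().fit(vectoriser.fit_transform(texts), labels)

# Rank unscreened records so humans review the likeliest includes first.
scores = model.predict_proba(vectoriser.transform(unscreened))[:, 1]
for abstract, score in sorted(zip(unscreened, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {abstract}")
```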

NICE agrees that large language models (LLMs) could be used to automate data extraction and could also be provided with prompts to generate the code needed to synthesize extracted data in the form of a (network) meta-analysis.
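
As a hedged illustration of what such prompting might look like, the sketch below assembles a structured extraction prompt; the field list and JSON schema are our own assumptions rather than anything NICE prescribes, and extracted values would need human verification before entering any (network) meta-analysis.

```python
# Minimal sketch: building a structured data-extraction prompt for an LLM.
# The field list and schema are illustrative assumptions; extracted values
# must be verified by humans before they feed into a meta-analysis.
import json

EXTRACTION_FIELDS = {
    "study_id": "first author and year",
    "n_randomised": "total number of randomised participants (integer)",
    "intervention": "name and dose of the intervention arm",
    "comparator": "name of the comparator arm",
    "effect_estimate": "reported effect size with 95% CI, or null",
}

def build_extraction_prompt(report_text: str) -> str:
    """Return a prompt asking the model for one JSON object per study."""
    schema = json.dumps(EXTRACTION_FIELDS, indent=2)
    return (
        "Extract the fields below from the study report. Respond with a "
        "single JSON object matching this schema; use null for anything "
        f"not reported:\n{schema}\n\n--- STUDY REPORT ---\n{report_text}"
    )

# The prompt would be sent to whichever validated LLM the team uses; a
# follow-up prompt could ask it to draft (network) meta-analysis code,
# which a statistician would review before running.
print(build_extraction_prompt("Smith 2023: 412 patients were randomised ..."))
```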

While NICE has published its position on AI use, it notes that other organizations are also developing guidelines for the use of AI. Cochrane is developing guidance on using AI responsibly in evidence synthesis,8 and the Guidelines International Network has a working group tasked with producing guidance and resources.

The NICE position statement notes multiple applications for AI in clinical effectiveness evidence, which is typically informed by clinical trials of the intervention or comparator(s) and by real-world data. AI can assist in defining inclusion/exclusion criteria and in optimizing dosage levels, sample sizes, and trial duration. Natural language processing (NLP) can mine electronic health records to support the identification of eligible trial participants or the reporting of side effects. AI can also be used to identify and adjust for covariates influencing treatment response. Finally, AI can produce synthetic data and generate external control arms; this can be useful when it is unethical to include a placebo arm in a trial.
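
To illustrate one building block of an external control arm, the sketch below applies propensity-score weighting so that a real-world cohort resembles a single-arm trial population. It is a simplified illustration under our own assumptions (simulated data, two covariates, a logistic propensity model), not a method NICE endorses in its statement.

```python
# Minimal sketch: propensity-score weighting to align an external
# real-world cohort with a single-arm trial population. Data here are
# simulated; covariates, sample sizes, and the model are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated covariates (age, biomarker) for trial and external patients.
trial = rng.normal(loc=[62.0, 1.4], scale=[8.0, 0.3], size=(150, 2))
external = rng.normal(loc=[58.0, 1.1], scale=[10.0, 0.4], size=(600, 2))

X = np.vstack([trial, external])
in_trial = np.r_[np.ones(len(trial)), np.zeros(len(external))]

# Estimated propensity of being in the trial, given covariates.
ps = LogisticRegression().fit(X, in_trial).predict_proba(external)[:, 1]

# Odds weights make the external cohort resemble the trial population;
# weighted outcomes could then serve as a comparator arm, subject to the
# transparency and validation expectations NICE sets out.
weights = ps / (1.0 - ps)
print("Effective external sample size:",
      round(weights.sum() ** 2 / (weights ** 2).sum(), 1))
```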

NICE requires that when AI is used in clinical evidence generation, reporting should be transparent and should use relevant checklists, such as PALISADE to justify the use of AI and TRIPOD+AI to explain AI model development. Any AI approach used should be considered part of the clinical trial, and full details should be provided within a submission.

NICE acknowledges that AI has several potential roles in supporting real-world evidence across numerous stages of evidence generation. These include:

  • Data Processing: Approaches such as multimodal data integration can combine different data sources into a cohesive dataset. Data matching and linkage, deduplication, standardization, data cleaning, and quality improvement are already being automated and are scalable to large volumes of data (a minimal deduplication sketch follows this list)
  • Population Selection: AI can efficiently select relevant populations and observations from large datasets
  • Causal Inference: AI methods can enhance the estimation of comparative treatment effects by drawing on the feature selection and predictive capabilities of machine learning algorithms
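
As a minimal illustration of the deduplication step mentioned above, the sketch below flags near-duplicate patient records across two hypothetical sources using fuzzy name matching. The fields, weights, and threshold are our own assumptions, and flagged pairs would go to a human for adjudication rather than being merged automatically.

```python
# Minimal sketch: near-duplicate detection across two patient record
# sources using only the standard library. Threshold, weights, and fields
# are illustrative; real linkage pipelines need governance and validation.
from difflib import SequenceMatcher

source_a = [{"id": "A1", "name": "Jane Smyth", "dob": "1961-04-02"}]
source_b = [{"id": "B7", "name": "Jane Smith", "dob": "1961-04-02"}]

def similarity(rec_a: dict, rec_b: dict) -> float:
    """Blend fuzzy name similarity with an exact date-of-birth match."""
    name_score = SequenceMatcher(None, rec_a["name"].lower(),
                                 rec_b["name"].lower()).ratio()
    dob_score = 1.0 if rec_a["dob"] == rec_b["dob"] else 0.0
    return 0.6 * name_score + 0.4 * dob_score

# Flag candidate matches for human adjudication rather than auto-merging.
for a in source_a:
    for b in source_b:
        score = similarity(a, b)
        if score > 0.85:  # illustrative threshold
            print(f"Possible duplicate: {a['id']} ~ {b['id']} ({score:.2f})")
```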

Cost-effectiveness evidence is typically informed by economic models. The task of developing an economic model is resource-intensive and involves a multi-step process of model conceptualization, parameter estimation, construction, validation, analysis, and reporting. NICE acknowledges that AI methods may have a role in several of these steps:

  • Model Development: AI can inform conceptualization, parameter estimation, and construction of economic models
  • Complex Data Analysis: AI can uncover new insights into cost drivers and health outcomes, aiding model parameterization
  • Automation: LLMs can automate the construction, calibration, and updating of economic models
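
To make the automation point concrete, the sketch below implements the kind of simple three-state Markov cohort model an LLM might be prompted to draft. Every parameter (transition probabilities, costs, utilities, time horizon) is invented for illustration; only the 3.5% discount rate reflects the NICE reference case, and any generated model would require full human validation.

```python
# Minimal sketch: a three-state Markov cohort model (Well, Sick, Dead)
# of the kind an LLM might be prompted to draft. All parameters are
# invented for illustration; a real model needs human validation.
import numpy as np

# Annual transition probabilities (rows: from-state, columns: to-state).
P = np.array([
    [0.85, 0.10, 0.05],   # Well -> Well / Sick / Dead
    [0.00, 0.80, 0.20],   # Sick -> Well / Sick / Dead
    [0.00, 0.00, 1.00],   # Dead is absorbing
])
costs = np.array([500.0, 4000.0, 0.0])    # annual cost per state
qalys = np.array([0.90, 0.60, 0.0])       # utility per state
discount = 0.035                          # NICE reference-case rate

cohort = np.array([1.0, 0.0, 0.0])        # everyone starts Well
total_cost = total_qalys = 0.0
for year in range(1, 21):                 # 20-year horizon
    cohort = cohort @ P
    df = 1.0 / (1.0 + discount) ** year   # discount factor for this year
    total_cost += df * cohort @ costs
    total_qalys += df * cohort @ qalys

print(f"Discounted cost: {total_cost:,.0f}  QALYs: {total_qalys:.2f}")
```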

NICE acknowledges that LLMs can be provided with prompts to reflect new information in an economic model, such as clinical data or comparators, facilitating updates and adaptations.
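
A hedged sketch of such a prompt is shown below; the parameter name, values, and citation are hypothetical, and any resulting model change would be reviewed by a health economist before use in a submission.

```python
# Minimal sketch: a prompt template asking an LLM to update a named model
# parameter. The parameter name, values, and source are hypothetical, and
# the resulting change would be reviewed by a health economist.
UPDATE_TEMPLATE = (
    "In the attached economic model script, the parameter '{name}' is "
    "currently {old}. New evidence reports a value of {new} "
    "(source: {source}). Rewrite only the parameter block to reflect "
    "this, and list every downstream result that changes."
)

prompt = UPDATE_TEMPLATE.format(
    name="p_well_to_sick", old=0.10, new=0.12,
    source="a hypothetical 2024 study, table 2",
)
print(prompt)
```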

Looking ahead, AI techniques have the potential to assist with systematic reviews, clinical evidence, trial design, real-world data analysis, and cost-effectiveness studies for NICE submissions. Organizations are encouraged to consult NICE regarding their plans to incorporate AI methods into their evidence generation strategies. Whenever AI is employed, the submitting organization must explicitly disclose its use, justify the added value, and address potential risks. NICE stresses that AI should enhance rather than substitute human involvement, and that external validation of AI-generated outputs should be conducted.

Contact us

If you would like advice or help on the use of AI in evidence generation, please contact us.

  1. Kutchins et al. ISPOR Diaries 2024: How Generative AI stole the show at ISPOR in Atlanta. 2024. Available at: https://lumanity.com/perspectives/ispor-diaries-2024/. Accessed 2 September 2024.
  2. Food and Drug Administration. Artificial Intelligence and Machine Learning (AI/ML) for Drug Development. 2024. Available at: https://www.fda.gov/science-research/science-and-research-special-topics/artificial-intelligence-and-machine-learning-aiml-drug-development. Accessed 2 September 2024.
  3. National Institute for Health and Care Excellence. Use of AI in evidence generation: NICE position statement. 2024. Available at: https://www.nice.org.uk/about/what-we-do/our-research-work/use-of-ai-in-evidence-generation--nice-position-statement. Accessed 2 September 2024.
  4. National Institute for Health and Care Excellence. Reusing our content. 2024. Available at: https://www.nice.org.uk/re-using-our-content. Accessed 2 September 2024.
  5. Department for Science, Innovation and Technology (DSIT), Office for Artificial Intelligence and Centre for Data Ethics and Innovation. A guide to using artificial intelligence in the public sector. 2019. Available at: https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector. Accessed 2 September 2024.
  6. Department for Health & Social Care (DHSC). A guide to good practice for digital and data-driven health technologies. 2021. Available at: https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology. Accessed 2 September 2024.
  7. Cabinet Office. Data Ethics Framework. 2018. Available at: https://www.gov.uk/government/publications/data-ethics-framework. Accessed 2 September 2024.
  8. Cochrane. Artificial intelligence (AI) technologies in Cochrane. 2024. Available at: https://training.cochrane.org/resource/artificial-intelligence-technologies-in-cochrane/. Accessed 2 September 2024.