Comparing user-written codes for performing economic evaluations in Stata
Bradley Barker-Jones
Thursday, 19 March 2026, 2pm to 3pm
Hosted by herc@ndph.ox.ac.uk
Speaker: Bradley Barker-Jones, Research Associate in Health Economics at the University of Bristol (HEHP@Bristol)
Location: Hybrid: via Zoom or Seminar Room LG1, The Big Data Institute, Oxford
To Join: This is a free hybrid event, taking place online via Zoom and in person. Register
Bio: Brad is a Research Associate in Health Economics at the University of Bristol (HEHP@Bristol). His work focuses on economic evaluation in healthcare, with particular interests in applied health economics and methodological development. Brad holds an MSc in Health Economics and Health Policy Analysis from the University of Bristol, and a BA (Hons) in Economics from the University of the West of England. His current research includes the economic evaluation of primary care interventions and analysis of diagnostic technologies for dementia. Prior to his current role, Brad worked as a Dementia Intern with NIHR ARC West. In this role, he led a systematic review on proxy completion of preference-based measures in neurological disorders and contributed to qualitative research examining technology-based interventions for loneliness. He is an active member of the NIHR ARC West Health Economics Group. His work has been presented at national and international conferences, including HESG and Alzheimer’s Europe 2025.
Abstract: Stata supports user-written code that extends the functionality of its standard commands. Several user-written Stata codes exist for trial-based economic evaluations, but they lack formal validation and guidance. This study aimed to identify and compare publicly available user-written Stata codes for trial-based economic evaluations.

A focused literature search of Ovid Medline, the SSC Archive, The Stata Journal and Google Scholar was conducted to identify relevant codes from inception until June 2025. The codes were applied to data from two clinical trials, both featuring missing data and covariate adjustment, and were compared on their ability to estimate key economic parameters, produce graphical outputs, and handle four common statistical challenges: correlated costs and effects; covariate adjustment; skewed costs and effects; and missing data.

Four studies providing eight codes were identified: heabs and heabps, bsceaprogs and bsceagraphs, iprogsz and ceagraphsz, Mutubuki et al.'s Stata Code (MSC), and Faria et al.'s Stata Code (FSC). Across the two trials, heabs and heabps reported lower incremental quality-adjusted life years (QALYs) and incremental net monetary benefit values than bsceaprogs and iprogsz. MSC and FSC produced comparable incremental costs and QALYs, though FSC yielded wider confidence intervals.

These differences in estimates across statistical approaches show that the choice of code can influence economic evaluation results. Some codes were better suited to generating basic economic outputs, while others provided more comprehensive analyses or addressed specific statistical challenges, such as missing data. However, no single code provided all key outputs while addressing all of the main statistical challenges.
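For readers unfamiliar with the quantities compared in the abstract, the following is a minimal illustrative sketch (in Python, not any of the Stata codes under discussion) of how incremental costs, incremental QALYs, and incremental net monetary benefit (INMB) are typically estimated with a non-parametric bootstrap. All data, the willingness-to-pay threshold, and the function names here are hypothetical; the actual Stata commands (heabs, bsceaprogs, etc.) have their own syntax and options not shown here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trial data: per-patient costs and QALYs in two arms.
# Gamma-distributed costs mimic the right-skew common in cost data.
n = 200
cost_control = rng.gamma(shape=2.0, scale=1500.0, size=n)
cost_treat   = rng.gamma(shape=2.0, scale=1700.0, size=n)
qaly_control = rng.normal(0.70, 0.10, size=n)
qaly_treat   = rng.normal(0.74, 0.10, size=n)

LAMBDA = 20_000  # assumed willingness-to-pay per QALY

def inmb(d_cost, d_qaly, lam=LAMBDA):
    """Incremental net monetary benefit: lam * dQALY - dCost."""
    return lam * d_qaly - d_cost

# Non-parametric bootstrap: resample patients within each arm so the
# within-patient correlation between costs and effects is preserved.
B = 2000
d_costs = np.empty(B)
d_qalys = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, size=n)  # treatment-arm resample indices
    j = rng.integers(0, n, size=n)  # control-arm resample indices
    d_costs[b] = cost_treat[i].mean() - cost_control[j].mean()
    d_qalys[b] = qaly_treat[i].mean() - qaly_control[j].mean()

inmb_draws = inmb(d_costs, d_qalys)
lo, hi = np.percentile(inmb_draws, [2.5, 97.5])
print(f"mean INMB: {inmb_draws.mean():.0f}, 95% CI ({lo:.0f}, {hi:.0f})")

# Share of bootstrap draws with positive INMB: one point on a
# cost-effectiveness acceptability curve (CEAC) at this threshold.
print(f"P(cost-effective at lambda={LAMBDA}): {(inmb_draws > 0).mean():.2f}")
```

The codes compared in the talk differ precisely in how they extend this basic recipe, for example by adding covariate adjustment via regression, or multiple imputation for missing costs and QALYs, which this sketch omits.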

