“Getting your paper published: Expectations, Reality, and Luck”, with Dr. Mark Bounthavong

By the ISPOR-UW Committee: Sara Khor, Yilin Chen, Woojung Lee, Tori Dayer, Ben Nwogu, Ronald Dickerson

On January 14, 2022, the ISPOR-UW student chapter was honored to have CHOICE alumnus Dr. Mark Bounthavong share his experience and insights on how to get papers published in scientific journals. In this seminar, Dr. Bounthavong discussed how papers are triaged and assigned to editors, reviewed, and published, and shared practical tips on how to choose and respond to peer reviewers.

Mark is an Associate Professor of Clinical Pharmacy at UCSD Skaggs School of Pharmacy & Pharmaceutical Sciences and the National Clinical Program Manager at VA Pharmacy Benefits Management Academic Detailing Services. He is also an Associate Editor for Substance Abuse and an editorial board member for the Journal of the American Pharmacists Association. Mark received his PharmD from Western University of Health Sciences College of Pharmacy, his MPH from the Rollins School of Public Health at Emory University, and his PhD from the CHOICE Institute at UW.

Lessons from a meta-analysis in the presence of statistical heterogeneity: a case study of the SARS-CoV-2 detection window

By Enrique M Saldarriaga and Beth Devine

The objective of this entry is to present the lessons we learned from a meta-analysis we conducted on the detection window of SARS-CoV-2. In the process, we found high statistical heterogeneity across studies that persisted even after stratification by demographic and study-design characteristics. Although these results did not allow us to increase our knowledge of the shedding patterns of SARS-CoV-2, they prompted us to review the concepts, assumptions, and methods available for measuring heterogeneity and how it affects the estimation of quantitative summaries.

In this post, we present our analytic process and reflections on the methods. We discuss the use of mean versus median and the crosswalk between them, key differences between fixed and random effects models, measures of heterogeneity, and analytic tools implemented with R. Along the way, we provide a tutorial of the methods used to conduct a meta-analysis. 

SARS-CoV-2 window of detection

The window of SARS-CoV-2 detection provides key information for understanding the patterns of virus shedding and infectiousness, and for better implementing testing and isolation strategies.1,2 A diagnostic test conducted too soon or too late can lead to a false-negative result, increasing the likelihood of virus transmission.

Dr. Kieran Walsh et al.3 conducted a systematic review of studies that described the duration of virus detection. The authors included “any study that reports on the viral load or duration of viral detection or infectivity of COVID-19”, excluding studies without laboratory confirmation of COVID-19 by molecular testing (i.e., polymerase chain reaction, or PCR). They therefore included cohort studies, cross-sectional studies, non-randomized clinical trials, and case series from various countries and age groups (adults and children). The viral samples came from the upper respiratory tract, the lower respiratory tract, and stool. From a narrative summary, the authors concluded that while the trajectory of SARS-CoV-2 viral load is relatively consistent over the course of the disease, the duration of infectivity is unclear.

We decided to meta-analyze the results of this well-conducted systematic review. To improve internal consistency across studies, we focused our meta-analysis solely on studies that reported upper respiratory tract samples.

Mean v. Median

To combine results, it is necessary to have consistency in the reported metric. Most of the studies reported a mean and standard deviation, but others reported a median with an interquartile range or a min-max range. We followed the methods of Wan et al4 to estimate the sample mean and standard deviation from the summary statistics reported by each study.

Depending on the statistics reported, we employed one of two methods:

Method 1. For studies reporting the median (m), the minimum (a), and the maximum (b):

mean ≈ (a + 2m + b) / 4;  SD ≈ (b − a) / (2 × Φ⁻¹(P))

Where Φ⁻¹(·) is the quantile function (the inverse of the cumulative distribution function) of the normal distribution centered at 0 with standard deviation 1 (the function `qnorm()` in R); P is defined by P = (n − 0.375)/(n + 0.25), where n is the sample size.

Method 2. For studies reporting the median (m) and the interquartile range (IQ1, IQ3):

mean ≈ (IQ1 + m + IQ3) / 3;  SD ≈ (IQ3 − IQ1) / (2 × Φ⁻¹(P))

Where IQ1 is the lower bound of the interquartile range (the 25th percentile) and IQ3 the upper bound (the 75th percentile); P is defined by P = (0.75n − 0.125)/(n + 0.25).

The underlying assumption of both methods is that the observations summarized by the median arise from a normal distribution, which can be a limitation. However, these methods improve upon commonly accepted conversion formulas (see Bland 20155 and Hozo et al 20056 for more details) by relaxing non-negativity assumptions and by using more stable, sample-size-adaptive quantities to estimate the SD.
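To make the conversion concrete, here is a minimal sketch of both methods in R; the function names and the example numbers are ours, purely for illustration:

```r
# Wan et al. (2014) conversions from reported summaries to mean and SD
# Method 1: median (m), minimum (a), maximum (b), and sample size (n)
wan_minmax <- function(a, m, b, n) {
  p <- (n - 0.375) / (n + 0.25)
  c(mean = (a + 2 * m + b) / 4,
    sd   = (b - a) / (2 * qnorm(p)))  # qnorm(): standard-normal quantile function
}

# Method 2: median (m), quartiles (q1, q3), and sample size (n)
wan_iqr <- function(q1, m, q3, n) {
  p <- (0.75 * n - 0.125) / (n + 0.25)
  c(mean = (q1 + m + q3) / 3,
    sd   = (q3 - q1) / (2 * qnorm(p)))
}

# Hypothetical study reporting a median of 14 days (IQR 10-18) in 50 patients
wan_iqr(q1 = 10, m = 14, q3 = 18, n = 50)
```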

Fixed vs random meta-analysis

The pooled mean is a weighted average, and the decision to use a fixed- or random-effects model directly affects how the study weights are generated. If we assume that the studies share a common effect, then it makes sense for the pooled mean to place more importance (i.e., weight) on the studies with the lowest uncertainty. In other words, the assumption that the true value (in direction and magnitude) is the same across all studies implies that observed differences are due to chance. On the contrary, if there is no prior knowledge suggesting a common effect, and each study instead provides an estimate of its own, then the weighting process should reflect that. The first alternative calls for a fixed-effects model and the second for a random-effects model. The random-effects assumption is less restrictive, as it acknowledges variation in the true effects estimated by each study.7 Thus, the precision of the studies (i.e., the estimated uncertainty expressed in the standard deviation) plays an important role, but so does the assumption (and/or knowledge) about the relationship across studies. See Tufanaru et al 20158 and Borenstein et al 20109 for a complete discussion of these two statistical models.
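The practical consequence of this choice is easiest to see in the weights themselves. Below is an illustrative sketch with made-up variances (not our study data): fixed effects weights each study by 1/vi, while random effects weights each study by 1/(vi + 𝜏2), so a large 𝜏2 pulls the weights toward equality.

```r
# Inverse-variance weights under fixed- and random-effects models
v    <- c(0.25, 1, 4)  # hypothetical within-study variances (SE^2)
tau2 <- 4              # hypothetical between-study variance

w_fixed  <- (1 / v) / sum(1 / v)                    # precision-driven weights
w_random <- (1 / (v + tau2)) / sum(1 / (v + tau2))  # tau2 flattens the weights

round(rbind(w_fixed, w_random), 2)
# The larger tau2 is relative to the within-study variances,
# the closer the random-effects weights get to equal.
```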

Fixed- v. random-effects model: comparative table of key characteristics and rationale.

Criterion: Goal of statistical inference (statistical generalizability of results)
– Fixed-effects model: Results apply only to the studies in the meta-analysis.
– Random-effects model: Results apply beyond the studies included in the analysis.

Criterion: Statistical assumption regarding the parameter
– Fixed-effects model: There is one common, fixed parameter, and all studies estimate the same common parameter.
– Random-effects model: There is no common parameter; studies estimate different parameters.

Criterion: Nonstatistical assumption regarding the comparability of studies from a clinical point of view (participants, interventions, comparators, and outcomes)
– Fixed-effects model: It is reasonable to consider that the studies are similar enough and that there is a common effect.
– Random-effects model: The studies are different, and it is not reasonable to consider that there is a common effect.

Criterion: The nature of meta-analysis results
– Fixed-effects model: The summary effect is an estimate of the effect that is common to all studies included in the analysis.
– Random-effects model: The summary effect is an estimate of the mean of a distribution of true effects; it is not a shared common estimate, because there is none.
Adapted from Tufanaru et al., JBI Evidence Implementation 2015; Table 4

“The fixed-effects meta-analysis model’s total effect is an estimator of the combined effect of all studies. In contrast, the random-effect meta-analysis’s full effect is an estimator of the mean value of the true effect distribution” (Hackenberger 202010).

In our analysis, we determined that there was no common effect across studies, owing to the differing study designs and populations studied.

Heterogeneity

Statistical heterogeneity is a consequence of clinical and/or methodological differences across studies and determines the extent to which it is possible to assume that the true value estimated by each study is the same. Clinical differences include participant characteristics and intervention design and implementation; methodological differences include the definition and measurement of outcomes, procedures for data collection, and any other characteristic associated with the design of the study.

There are two main metrics that we can use to summarize heterogeneity: the percentage of variance attributable to study heterogeneity (I2) and the true-effect variance (𝜏2). I2 is possibly the most widely used metric. It builds on the chi-squared test – usually referred to as Cochran’s Q in the literature – of expected vs observed information, under the null hypothesis that the differences observed across studies are due to chance alone. A limitation of this test is that it provides a binary assessment and ignores the degree of heterogeneity, which is more relevant, as some variability in methods, procedures, and results across studies is expected.

Julian Higgins11 proposed I2, a newer metric that describes the percentage of total variation that is due to heterogeneity rather than chance (i.e., it uses the same rationale as the Q test). Formally, I2 = 100% × (Q − df)/Q, where df is the degrees of freedom (the number of studies minus one). Negative values of Q − df are set to zero, so I2 is bounded between 0% and 100%. This metric should be interpreted in the context of the analysis and other factors, such as the characteristics of the studies and the magnitude and direction of the individual estimates. As a rule of thumb, an I2 above 50% may represent substantial heterogeneity and warrants caution in the interpretation of pooled values.
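As a quick illustration, Q and I2 can be computed by hand in R; the study means and standard errors below are made up:

```r
# Cochran's Q and Higgins' I^2 from hypothetical study-level estimates
yi <- c(12, 15, 9, 20, 14)        # study means (days)
se <- c(1.2, 2.0, 1.5, 3.0, 1.0)  # their standard errors

w     <- 1 / se^2                 # inverse-variance weights
y_fix <- sum(w * yi) / sum(w)     # fixed-effects pooled mean
Q     <- sum(w * (yi - y_fix)^2)  # Cochran's Q statistic
df    <- length(yi) - 1           # degrees of freedom (k - 1)
I2    <- max(0, 100 * (Q - df) / Q)  # bounded between 0% and 100%
c(Q = Q, I2 = I2)
```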

Another measure of heterogeneity is the variance of the true effects, 𝜏2. This metric is consistent with the random-effects assumption that there could be more than one true effect, with each study providing an estimate of one of them. There are several ways to estimate 𝜏2; the most popular is the DerSimonian and Laird method, a moment-based estimator. Its main limitation is that, unless the sampling variances are homogeneous (regardless of the number of studies included), it tends to underestimate 𝜏. Viechtbauer 200512 provides a thorough assessment of the alternatives.
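Continuing the sketch above (reusing w, Q, and df), the DerSimonian-Laird estimate of 𝜏2 takes only a couple more lines:

```r
# DerSimonian-Laird (moment-based) estimate of the between-study variance
c_dl    <- sum(w) - sum(w^2) / sum(w)  # scaling constant
tau2_dl <- max(0, (Q - df) / c_dl)     # truncated at zero
c(tau2 = tau2_dl, tau = sqrt(tau2_dl))
```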

The interpretation of 𝜏2 is straightforward: it estimates the between-study variance of the true effects. It therefore helps to inform whether a quantitative summary makes sense; a large variance can render the mean meaningless. Further, the estimate of 𝜏2 carries its own uncertainty, and it is possible to estimate its confidence interval for a deeper assessment of the variance across studies.

All these metrics measure statistical heterogeneity. However, it is up to the researcher to determine whether, even in the absence of statistical heterogeneity, the results of two or more studies should be combined into a single value. That assessment depends upon the clinical characteristics of the studies under analysis: specifically the population, place, and time; the interventions evaluated; and the outcomes measured. This is the main reason we excluded studies whose samples did not arise from the upper respiratory tract; pooling the results of structurally different studies would have been a mistake.

See Chapter 9 of the Cochrane Handbook for an introduction to the topic and Ioannidis J, JECP 200813 for an informative discussion on how to assess heterogeneity and bias in meta-analysis. 

Confidence intervals and prediction intervals

Both the pooled mean and its standard error in a meta-analysis are functions of the inverse-variance weights, which are estimated from the individual standard deviations and the assumption made about the heterogeneity of the true effect. Random-effects models tend to distribute weights more equally across studies than fixed-effects models, so the estimated standard error is higher, leading to wider confidence intervals (CI). In the extreme, high between-study variance can yield a pooled mean that closely resembles the simple arithmetic mean (regardless of study sample sizes or the precision of the estimates) because the weights become nearly equal across studies.14

If the pooled mean is denoted by μ and its standard error by SE(μ), then by the central limit theorem the 95% CI is estimated as μ ± 1.96 × SE(μ), where 1.96 is the 97.5th percentile of the standard normal distribution. The 95% CI provides information about the precision of the pooled mean, i.e., the uncertainty of the estimate. Borenstein et al 20109 present a helpful discussion of the rationale and steps involved in estimating the pooled mean and standard error under both fixed- and random-effects models.

Under the random-effects assumption, we can estimate the prediction interval for a future study. Its derivation uses both the pooled standard error, SE(μ), and the between-study variance estimate, 𝜏2. The approximate 95% prediction interval for the estimate of a new study is given by:15

μ ± t(𝜶, k−2) × √(𝜏2 + SE(μ)2)

Where 𝜶 is the level of significance, usually 5%; t(𝜶, k−2) denotes the 100 × (1 − 𝜶/2)% percentile (97.5% when 𝜶 = 0.05) of the t-distribution with k − 2 degrees of freedom, where k is the number of studies included in the meta-analysis. The use of a t-distribution (instead of the normal distribution used for the confidence interval) reflects the uncertainty surrounding 𝜏2, hence the heavier tails.
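The two intervals are easy to compare numerically. Below is a sketch that uses our pooled mean and 𝜏2; the standard error and the number of studies are assumed values chosen for illustration:

```r
mu    <- 15.6  # pooled mean (days), from our metamean analysis
se_mu <- 1.68  # standard error of the pooled mean (assumed here)
tau2  <- 81.5  # between-study variance, from our analysis
k     <- 28    # number of studies (assumed here)

# 95% confidence interval for the pooled mean
conf_int <- mu + c(-1, 1) * qnorm(0.975) * se_mu
# 95% prediction interval for the estimate of a new study
pred_int <- mu + c(-1, 1) * qt(0.975, df = k - 2) * sqrt(tau2 + se_mu^2)

round(rbind(conf_int, pred_int), 1)
# The prediction interval is far wider because it adds tau2 to the
# variance of the pooled mean.
```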

Analysis

Our analysis of the findings from Walsh et al3 was implemented in R using the `meta` package.16 This library offers several functions for conducting a meta-analysis; two were particularly relevant for our study: `metagen` and `metamean`. Both base the weight calculation on the inverse-variance method; the former treats each individual value as a treatment effect (i.e., the difference in performance between two competing alternatives), while the latter assumes each value is a single mean. As shown below, the means and confidence intervals under the two are similar: 13.9 days of detectability (95% CI 11.7, 16.7) using `metagen`, and 15.6 days (95% CI 12.3, 18.9) using `metamean`. However, the heterogeneity estimates are very different. We present the results of both analyses below.

Our results of the pooled mean using `metagen`

TE: Treatment Effect; se: Standard Error; MD: Mean Difference; 95%CI: 95% Confidence Intervals; MRAW: Raw or untransformed Mean

Our results of the pooled mean using `metamean`

TE: Treatment Effect; se: Standard Error; MD: Mean Difference; 95%CI: 95% Confidence Intervals; MRAW: Raw or untransformed Mean

The main difference between the two functions is the assumption made about the uncertainty metric included in the data. While `metagen` assumes that the metric is a standard error (SE; i.e., one that arose from a previous statistical analysis), `metamean` assumes that it is a standard deviation (SD) and hence converts it using each study's sample size (n), with SE = SD/√n. The estimated confidence intervals for each study are therefore wider under `metagen`, which gives the impression that the studies are more alike and reduces the estimated heterogeneity. The opposite happens under `metamean`: the confidence intervals are comparatively narrower, and the estimated heterogeneity is therefore higher.
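For reference, here is a minimal sketch of the two calls as described above, using a hypothetical three-study data frame (the column names are ours):

```r
library(meta)

# Hypothetical converted summaries: one row per study
dat <- data.frame(study     = c("A", "B", "C"),
                  n         = c(40, 120, 75),
                  mean_days = c(12, 17, 14),
                  sd_days   = c(5, 9, 6))

# metagen() interprets the uncertainty column as a standard error, so
# passing the SD directly yields wide per-study confidence intervals
m_gen <- metagen(TE = mean_days, seTE = sd_days, studlab = study,
                 data = dat, sm = "MD")

# metamean() interprets it as a standard deviation and computes
# SE = SD/sqrt(n) internally, yielding narrower per-study intervals
m_mean <- metamean(n = n, mean = mean_days, sd = sd_days,
                   studlab = study, data = dat, sm = "MRAW")

summary(m_mean)  # pooled mean, 95% CI, I^2, and tau^2
```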

This difference in uncertainty estimation is also reflected in the weights. Under `metagen`, the fixed-effects model shows a more homogeneous distribution of weights than the high concentration displayed under `metamean`. This is because the narrower intervals under `metamean` suggest that studies like Lavezzo and Chen,3 for example, are very precise and therefore more important in the estimation. In contrast, under the random-effects models, the weights in `metamean` are almost the same for all studies, reflecting the assumption that each arises from a different true effect, while under `metagen` the wider intervals suggest that some studies arise from similar true distributions and hence deserve higher weights. A homogeneous distribution of weights in the random-effects model under `metagen` is consistent with the notion that, under enough uncertainty, the pooled mean closely approximates the simple arithmetic mean.14

Conclusion

The function `metamean` is the appropriate option for our analysis because it is consistent with the information reported by the studies: a single mean and SD. We found that 99% of the variability is attributable to statistical heterogeneity (the I2 estimate) and that the standard deviation of the true effects is around 9 days (𝜏2 = 81.5, so 𝜏 = 81.5^0.5 ≈ 9). The mean duration of the detectable period is 15.6 days (95% CI 12.3, 18.9).

We believe the level of heterogeneity found is likely a consequence of the marked differences in the types of studies included in the systematic review, which ranged from case series to non-randomized clinical trials. Further, Walsh et al3 collected data from March to May 2020, and the variability across studies partly reflects how scarce information about COVID-19 was at the time. From the prediction interval, we found that a future study would estimate a mean duration of the detectable period between -3 and 34 days, with 95% confidence. The impossibility of this result (i.e., a negative duration) is a consequence of the high level of study heterogeneity found in our analysis.

Given the level of heterogeneity, a quantitative summary is not a feasible way to combine the results across studies. Even though these results did not allow us to expand our clinical knowledge of the shedding patterns of COVID-19, the exercise helped us reflect on the underlying assumptions and methods of meta-analysis. The code for this analysis is available on GitHub at https://github.com/emsaldarriaga/COVID19_DurationDetection. It includes all the steps presented in this entry, plus the data gathering process using web scraping and the analyses stratified by type of publication, population, and country.

References

  1. Bedford J, Enria D, Giesecke J, et al. COVID-19: towards controlling of a pandemic. The Lancet. 2020;395(10229):1015-1018. doi:10.1016/S0140-6736(20)30673-5
  2. Cohen K, Leshem A. Suppressing the impact of the COVID-19 pandemic using controlled testing and isolation. Sci Rep. 2021;11(1):6279. doi:10.1038/s41598-021-85458-1
  3. Walsh KA, Jordan K, Clyne B, et al. SARS-CoV-2 detection, viral load and infectivity over the course of an infection. J Infect. 2020;81(3):357-371. doi:10.1016/j.jinf.2020.06.067
  4. Wan X, Wang W, Liu J, Tong T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med Res Methodol. 2014;14(1):135. doi:10.1186/1471-2288-14-135
  5. Bland M. Estimating Mean and Standard Deviation from the Sample Size, Three Quartiles, Minimum, and Maximum. Int J Stat Med Res. 2015;4(1):57-64. doi:10.6000/1929-6029.2015.04.01.6
  6. Hozo SP, Djulbegovic B, Hozo I. Estimating the mean and variance from the median, range, and the size of a sample. BMC Med Res Methodol. 2005;5(1):13. doi:10.1186/1471-2288-5-13
  7. Serghiou S, Goodman SN. Random-Effects Meta-analysis: Summarizing Evidence With Caveats. JAMA. 2019;321(3):301-302. doi:10.1001/jama.2018.19684
  8. Tufanaru C, Munn Z, Stephenson M, Aromataris E. Fixed or random effects meta-analysis? Common methodological issues in systematic reviews of effectiveness. JBI Evid Implement. 2015;13(3):196-207. doi:10.1097/XEB.0000000000000065
  9. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. A basic introduction to fixed-effect and random-effects models for meta-analysis. Res Synth Methods. 2010;1(2):97-111. doi:10.1002/jrsm.12
  10. Hackenberger BK. Bayesian meta-analysis now – let’s do it. Croat Med J. 2020;61(6):564-568. doi:10.3325/cmj.2020.61.564
  11. Higgins JPT, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557-560. doi:10.1136/bmj.327.7414.557
  12. Viechtbauer W. Bias and Efficiency of Meta-Analytic Variance Estimators in the Random-Effects Model. J Educ Behav Stat. 2005;30(3):261-293. doi:10.3102/10769986030003261
  13. Ioannidis JPA. Interpretation of tests of heterogeneity and bias in meta-analysis. J Eval Clin Pract. 2008;14(5):951-957. doi:10.1111/j.1365-2753.2008.00986.x
  14. Imrey PB. Limitations of Meta-analyses of Studies With High Heterogeneity. JAMA Netw Open. 2020;3(1):e1919325. doi:10.1001/jamanetworkopen.2019.19325
  15. Higgins JPT, Thompson SG, Spiegelhalter DJ. A re-evaluation of random-effects meta-analysis. J R Stat Soc Ser A Stat Soc. 2009;172(1):137-159. doi:10.1111/j.1467-985X.2008.00552.x
  16. Balduzzi S, Rücker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019;22(4):153-160. doi:10.1136/ebmental-2019-300117

Reduce Medicaid Insurance Churn to Increase Access to Care and Stability of Coverage

by Yilin Chen

Understanding Insurance Churn

Health insurance churn refers to losing insurance coverage or moving between coverage sources. This is less of an issue in single-payer nations, where universal health programs cover individuals’ health expenditures for their entire lives. In the US, however, churn affects approximately a quarter of the population every year. A Medicaid beneficiary is, on average, covered for less than 10 months out of the year because insurance eligibility is determined by month-to-month income [1].

Increasing coverage has been the emphasis of the Affordable Care Act (ACA). However, insurance churn has many detrimental effects on individuals and the US health system. It disrupts care continuity, which has been found to be associated with increased emergency department use and fewer office-based healthcare visits [2]. Individuals with Medicaid who experienced discontinuity in insurance coverage were more likely to be hospitalized for chronic conditions and less likely to be screened for breast cancer [3-4]. Churning is not only associated with worse health outcomes but also adversely impacts finances for individuals and families through less predictable expenditures and higher out-of-pocket costs. The administrative burden caused by churning is not negligible: studies estimate a cost of $400 to $600 for each disenrollment or re-enrollment of a beneficiary, a financial burden that ultimately falls on taxpayers [5]. In addition, churning can contribute to a net increase in healthcare costs when people who experience lapses in coverage re-enroll in Medicaid.

The causes of churning may be voluntary, such as accepting a new job, or involuntary, such as changes in Medicaid coverage eligibility. Volatility in employment and income makes health insurance churn more prevalent. For instance, the coronavirus crisis has led to increased disenrollment from existing coverage and enrollment in Medicaid. After declines in Medicaid and CHIP enrollment from 2017 through 2019, total Medicaid and CHIP enrollment grew to 78.9 million in November 2020, an increase of 7.7 million (10.8%) from enrollment in February 2020 [6]. This trend shows that insurance churn has increased during the pandemic, heightening concerns about its negative effects.


Existing Policies

ACA Medicaid Expansion

The ACA made significant improvements in making coverage more accessible, including giving states the option to expand Medicaid to low-income adults with incomes below 138% of the federal poverty level (FPL) (Figure 1). Further, the ACA offered subsidies to individuals and families with incomes up to 400% of the FPL who seek individual market coverage. These policies have led to declines in both coverage loss and coverage disruptions. For example, a study found that men living in expansion states saw their rate of coverage loss decline from 16% to 10% after the ACA. Among people of color, coverage disruption rates declined from 18% to 13%, and coverage loss rates decreased from 15% to 11%. In addition, for people without chronic illnesses, coverage disruption rates decreased from 21% to 15% after the expansion went into effect, and coverage loss rates went down from 18% to 12% [7].

Figure 1: Effect of Medicaid Expansion on Medicaid Coverage Gap (Source: SHADAC)

American Rescue Plan (ARP)

In response to the COVID-19 crisis, the American Rescue Plan (ARP) made a 100% subsidy for Consolidated Omnibus Budget Reconciliation Act (COBRA) coverage available from April 1, 2021 to September 30, 2021 for qualified beneficiaries who involuntarily lost their jobs or experienced a reduction in hours. COBRA generally requires that group health plans sponsored by employers with 20 or more employees in the prior year continue to offer employees and their families the opportunity for a temporary extension of coverage in certain instances where coverage would otherwise end. The law also enhances financial assistance for marketplace coverage, including making people who receive unemployment benefits at any point during 2021 eligible for $0 silver-tier coverage and helping people with deductibles and cost-sharing. However, the assistance is temporary [8].

More recently, the ARP gives states a new option to extend Medicaid postpartum coverage from 60 days to 12 months, which takes effect April 1, 2022 and will be available to states for five years. Studies have shown that new mothers whose childbirths are funded by Medicaid experience significant churning in coverage: among mothers with Medicaid, 55% experience a coverage gap in the six months following childbirth, compared with 35% of mothers with private insurance. This new option can play an important role in reducing churning among new mothers [9].

Policy adoption by states

States have the option to adopt and implement these policies. To date, 38 states and DC have adopted the ACA Medicaid expansion and 12 states have not. The Centers for Medicare & Medicaid Services (CMS) has released corresponding guidance, and states should engage with providers, community-based organizations, enrollment assisters, and enrollees to facilitate the process, especially during the COVID-19 pandemic.


Changes Medicaid Should Consider

Policymakers should build on the momentum from the ARP’s coverage improvements to make health insurance more accessible, continuous, and stable.

Expand Medicaid 12-month continuous enrollment for all low-income adults

Medicaid should allow enrollees to remain eligible for a continuous 12-month period regardless of fluctuations in their income. Currently, states have the option to provide continuous eligibility for children covered by Medicaid and/or CHIP, and Montana and New York have extended this provision to adults using Section 1115 waivers [10]. The nonpartisan Medicaid and CHIP Payment and Access Commission (MACPAC) has long recommended extending 12-month continuous eligibility to adults, as it has the potential to further reduce churn in Medicaid.

Measure and track churning in Medicaid at the state level

Collecting and analyzing more detailed state-level information about churning patterns, prevalence and trends, causal factors, and the high-risk churning groups will be useful for policymakers to gain a clearer picture of churning nationwide. The evaluation results could inform future design of policies and procedures to streamline health coverage renewals to minimize churning caused by administrative disenrollment.

Ways to identify churning sub-types are essential. A one-way change in eligibility category (e.g., from Medicaid to subsidized coverage or to uninsured, or vice versa) and a loop change (e.g., starting in Medicaid, leaving for a period, then returning) represent different dynamics and potential impacts (Table 1). Some researchers have used enrollment data from surveys or claims to study churn-related outcomes (e.g., discontinuity of coverage, duration of enrollment) [11-12].

Initial state → End state
Medicaid → Uninsured
Uninsured → Medicaid
Medicaid → Subsidized exchange coverage
Subsidized exchange coverage → Medicaid
Uninsured → Subsidized exchange coverage
Subsidized exchange coverage → Uninsured
Table 1: Examples of Insurance Churn

States that have expanded Medicaid report challenges in tracking churn. For example, some states indicated that the integration of their eligibility systems needed to be completed before they could better analyze changes and patterns in health coverage. Other challenges included tracking individuals’ movement between coverage programs across states and the lack of well-defined metrics across the various data sources. Addressing these challenges will be crucial to understanding the extent of insurance churning in this country.

Ultimately, current policy options can help reduce insurance churn by expanding continuous enrollment and measuring churning, and can alleviate harms from “drop-out” by providing subsidies. While none of these policies can completely eliminate churning, these steps would reduce both the prevalence and the harms of insurance churn.



References

  1. Lehman-White N, Aminzadeh S. Reducing Churn to Increase Value in Health Care: Solutions for Payers, Providers, and Policymakers. https://thehealthcareblog.com/blog/2019/05/15/reducing-churn-to-increase-value-in-health-care-solutions-for-payers-providers-and-policymakers/
  2. Roberts ET, Pollack CE. Does Churning in Medicaid Affect Health Care Use?. Medical Care. 54(5) (2017): 483–489. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5548183/.
  3. Swartz K, et al. Reducing Medicaid Churning: Extending Eligibility For Twelve Months or To End Of Calendar Year Is Most Effective. Health Affairs. 34(7) (2015). https://www.healthaffairs.org/doi/full/10.1377/hlthaff.2014.1204.
  4. Ku L, et al. Improving Medicaid’s Continuity of Coverage and Quality of Care. Washington: Association for Community Affiliated Plans, 2009. http://www.communityplans.net/Portals/0/ACAP%20Docs/Improving%20Medicaid%20Final%20070209.pdf
  5. Swartz K, Short PF, Graefe DR, Uberoi N. Reducing Medicaid Churning: Extending Eligibility For Twelve Months Or To End Of Calendar Year Is Most Effective. Health Aff (Millwood). 2015;34(7):1180-1187. doi:10.1377/hlthaff.2014.1204
  6. Kaiser Family Foundation. Analysis of Recent National Trends in Medicaid and CHIP Enrollment. https://www.kff.org/coronavirus-covid-19/issue-brief/analysis-of-recent-national-trends-in-medicaid-and-chip-enrollment/
  7. Wen HF, Johnston KJ, Allen L, Waters TM. Medicaid Expansion Associated With Reductions In Preventable Hospitalizations. Health Aff (Millwood). 2019. https://www.healthaffairs.org/doi/10.1377/hlthaff.2019.00483
  8. Odom SG, et al. American Rescue Plan Act of 2021: COBRA Subsidy, Pension Funding, and Other Employee Benefit Changes. The National Law Review. 11(75) (2021). https://www.natlawreview.com/article/american-rescue-plan-act-2021-cobra-subsidy-pension-funding-and-other-employee.
  9. Kaiser Family Foundation. Postpartum Coverage Extension in the American Rescue Plan Act of 2021. https://www.kff.org/policy-watch/postpartum-coverage-extension-in-the-american-rescue-plan-act-of-2021/
  10. Kaiser Family Foundation. State Adoption of 12-Month Continuous Eligibility for Children’s Medicaid and CHIP. https://www.kff.org/health-reform/state-indicator/state-adoption-of-12-month-continuous-eligibility-for-childrens-medicaid-and-chip/?currentTimeframe=0&sortModel=%7B%22colId%22:%22Location%22,%22sort%22:%22asc%22%7D.
  11. Gordon SH, Sommers BD, Wilson I, Galarraga O, Trivedi AN. The Impact of Medicaid Expansion on Continuous Enrollment: a Two-State Analysis. J Gen Intern Med. 2019 Sep;34(9):1919-1924. doi: 10.1007/s11606-019-05101-8. Epub 2019 Jun 21. PMID: 31228048; PMCID: PMC6712155.
  12. Goldman AL, Sommers BD. Among Low-Income Adults Enrolled In Medicaid, Churning Decreased After The Affordable Care Act. Health Aff (Millwood). 2020 Jan;39(1):85-93. doi: 10.1377/hlthaff.2019.00378. PMID: 31905055.

Does Real-World Evidence Play a Role in US HTA?

By Woojung Lee and Boshen Jiao

Why do we want to know how real-world evidence (RWE) is being used in health technology assessment (HTA) and what is RWE anyway?

Based on the FDA’s definition, RWE is clinical evidence regarding the usage and potential benefits or risks of a medical product derived from the analysis of real-world data. Real-world data is defined as data relating to patient health status and/or the delivery of health care routinely collected from a variety of sources.

There has been growing interest in incorporating RWE into the HTA process to assess the clinical outcomes and economic value of drugs, mainly because of the limited availability of evidence from randomized clinical trials (RCTs) and because RWE reflects outcomes in real-world settings. Although RCTs are the gold standard, long-term drug effects and economic outcomes are often challenging to collect in them. The FDA’s 2018 guidance on how drug companies may communicate health care economic information to payers and formulary committees, which enables the use of off-label evidence related to approved indications, also likely increased the use of RWE in economic analyses. Furthermore, US payers increasingly recognize the value of RWE in informing healthcare decision-making, believing that RWE, while not a replacement for RCTs, can provide the best available evidence for an HTA when well-controlled clinical trials are difficult to implement.

However, it remains unclear how RWE can be incorporated into the HTA of pharmaceuticals in the US (there have been a few studies in Europe, though!). We thought that a quantitative examination of RWE use in the US might give US payers an objective view of the possible roles that RWE can play in the HTA of drugs. This is especially important considering that the use of RWE varies from payer to payer and is generally limited by a lack of understanding of how to incorporate RWE into the value assessment process. A better understanding of the current roles of RWE can help facilitate its use in HTA.

What did we do?

We conducted two studies. In the first, we assessed the use of RWE in the economic assessment of drugs by the Institute for Clinical and Economic Review (ICER), a nonprofit independent research organization that evaluates the clinical and economic value of medical interventions; its reports are increasingly used by US payers in their decision-making processes. Specifically, we reviewed the long-term cost-effectiveness analysis (CEA) and potential budget impact analysis (BIA) sections of final evidence reports published by ICER between January 2014 and June 2019 and extracted each use of RWE.

We identified 407 uses of RWE in the CEA and BIA sections of the ICER reports, accounting for 33% of all model inputs (i.e., 67% of model inputs were not informed by RWE). However, this proportion varied widely, ranging from 4% to 77% across final evidence reports (Figure).

We also found that the use of RWE to inform model inputs increased over time (2014-2019). The most common uses were to inform mortality or disease progression rates (29%) and health care costs (21%) (Figure).

Drug-specific clinical inputs such as drug effectiveness, drug-specific discontinuation rate, and adverse drug events were rarely informed by RWE (<3%). The most frequently used study design was a retrospective cohort (50%) followed by a prospective cohort study (17%). In terms of the data sources, registry data were the most frequently used data (40%), followed by administrative claims data (18%) and patient survey/diary data (18%).

Nondrug-specific clinical inputs (e.g., disease progression and mortality rates, patient characteristics, and incidence) and economic inputs (e.g., health care costs and treatment patterns) were mostly informed by retrospective studies, whereas drug-specific clinical inputs (e.g., drug effectiveness and adverse drug events) were more likely to be informed by prospective studies. In terms of data sources, registry data and administrative claims data were the main sources for nondrug-specific clinical inputs and economic inputs, while drug-specific clinical inputs were informed by diverse sources, including electronic health records, claims, and registry data. Not surprisingly, only 1% of RWE uses were pre-registered (one meta-analysis and four prospective studies). We also found that about 30% of RWE was sponsored by industry.

In the second study, we assessed the use of RWE in ICER’s scoping and comparative clinical effectiveness (CCE) assessments. We examined the frequency of use, trends, and reasons of RWE use overall and further stratified by the therapeutic areas. We also reviewed the relevant clinical guidelines that were cited in the CCE assessments.

We found that the mean frequency of RWE was 3.8 per ICER scoping document, 0.7 per drug per ICER CCE assessment, and 1.6 per drug per clinical guideline. In the ICER scoping documents, RWE was most frequently used to inform the outcome (55%), followed by the population (20%) (Figure).

RWE was used to inform the effectiveness, safety, and treatment patterns of drugs in 53%, 44%, and 3% of its uses in ICER CCE assessments, and in 41%, 30%, and 39% of its uses in the relevant clinical guidelines, respectively. When stratified by disease area, our findings indicated that RWE was used least in oncology compared with the other areas. Additionally, we found a positive association between the time a drug has been on the market and RWE use.

Why is RWE infrequently used to inform drug-specific clinical outcomes?

RWE has a minor role in informing drug-specific clinical outcomes such as effectiveness, safety, and drug discontinuation in ICER’s value assessment process. The main reasons for the slow uptake include payers’ concerns about the quality of RWE and their limited ability to evaluate it, implying an overall lack of awareness and underutilization of quality assessment tools for RWE (there is one tool developed by the CHOICers!). Another reason is the time required to generate RWE: ICER reports are usually published near the time drugs are approved, while generating RWE requires a drug to have been on the market for at least some time. Our second study found that a drug that has been on the market longer may have more RWE available to inform its clinical benefits and harms. ICER often updates its assessments and incorporates recent RWE that was not available originally, which can sometimes result in significantly different conclusions. Our findings suggest that there may be a need to reassess a drug and adjust health care decisions based on updated evidence as more RWE becomes available after launch. Finally, our study indicated that RWE was rarely used in the oncology setting, suggesting that the assessment of oncology drugs still relies heavily on trials. Perhaps RWE does not fit into the current framework of the regulatory process for cancer drug development; there may also be a lack of incentive to conduct post-marketing research to generate RWE after FDA approval.

There are few pre-registered RWE studies. Is this a problem? What is being done to encourage registration?

The infrequent preregistration of RWE studies is an issue. Preregistration can improve the reliability and transparency of RWE, which become increasingly important as interest in the use of RWE grows. A lack of transparency regarding how evidence is generated from secondary data has been one of the major barriers to using RWE for high-stakes decisions. However, existing study registries such as ENCePP/EU-PAS and ClinicalTrials.gov are mostly oriented toward (randomized) clinical trials, where data are collected prospectively, and lack many of the features relevant to studies performed on existing data (e.g., insurance claims and electronic health records). There have been several efforts to facilitate the registration of observational studies, such as a certification process for payers and the RWE Transparency Initiative, which aims to establish a culture of RWE transparency. The initiative’s steering committee, in its recent white paper, outlined an approach designed to facilitate the registration of observational studies, particularly those evaluating effectiveness or safety (i.e., hypothesis evaluating treatment effectiveness (HETE) studies). Key steps suggested for the initiative are identifying a location for registration, determining what a good registration process entails, and providing incentives for routine pre-registration of HETE studies.

Why is the use of RWE by ICER increasing in economic valuation over time? Was there any specific action?

The increasing proportion of model inputs informed by RWE is partly explained by the evolution of ICER’s value framework. While the essential valuation criteria did not change, there were updates to the economic model framework that may have affected the use of RWE. The most notable change was the adoption of a societal perspective for CEAs in 2017. Because this perspective requires models to include non-healthcare-sector benefits and costs (e.g., productivity costs), which are difficult to inform with RCTs, it may have increased the use of RWE.

Main takeaway

We found limited use of RWE to inform drug-specific effectiveness and safety in the US, despite calls for greater inclusion of RWE on real-world drug effectiveness in value assessments. While some of the barriers to using RWE are inherent to the lack of data available at the time of drug approval, there are several actions that can be taken to improve its use, as mentioned above. The gaps we identified between what has been called for and what has been done will help direct the better use of RWE in HTAs.

Choosing between a managed care residency and a fellowship: the struggles of a pharmacy student

By Tae J. Park

I’ll set the scene for you.

Maybe it will sound similar to what you (a current pharmacy student) are grappling with. You made it into pharmacy school. You’re at an elite academic institution with a renowned medical center and your professors and preceptors are the best of the best. They equipped you with more clinical pearls than you asked for and now you are off to a great start in your pharmacy career. There seems to be a big push at your school to pursue a clinical residency, and most students seem to be riding that wave. But only after memorizing the top 200 brand/generic drugs and learning which antibiotics cover Pseudomonas do you realize maybe clinical pharmacy isn’t really for you… so what now?

It’s a struggle that I, and a handful of my classmates, encountered about halfway through pharmacy school.  Luckily, there’s a whole different side to post-graduate opportunities: managed care residencies and fellowships. To the pondering student looking for other avenues, let me break it down for you.

What is a managed care residency?

Managed care can have a relatively loose definition depending on whom you ask. In this context, managed care refers to insurance companies and pharmacy benefit managers (PBMs). As such, a managed care residency is a post-graduate program taking place at these types of companies. These programs are typically 1 year.

What does a managed care resident do?

Insurance companies and PBMs are often very large companies, with many different departments dedicated to managing pharmacy insurance benefits. Accordingly, a managed care resident can expect to dip his/her feet in many of these departments throughout the residency in blocks, similar to rotation blocks during school. Some examples of departments/blocks include formulary management, drug utilization review and clinical programs. Post-residency, many people end up working in one of these specific departments. Some managed care residents may also end up working in administrative roles within hospital systems or health maintenance organizations (i.e. formulary management at Kaiser), or even switch over to an industry role that is focused on negotiating with managed care companies.

What is a fellowship?

A fellowship is a broad umbrella term for a post-graduate program that usually takes place in the pharmaceutical industry (AKA drug manufacturers). But this is not always the case! There are also fellowships in academia and fellowships that are a hybrid of academia and the pharmaceutical industry, like the UW CHOICE fellowships. Fellowships are typically 2 years, although some are 1 year.

What does a fellow do?

Unlike managed care residencies, fellowships are usually offered by a specific department of a pharmaceutical company. A fellowship applicant can apply to a specific department, such as clinical development, regulatory affairs, medical communications, market access, or health economics. Depending on the company, departments can have slightly different names and some departments may have the term “managed care” in their titles (i.e. managed care medical affairs). This can be a bit confusing, but these are still departments within pharmaceutical companies and not payers. Rather, these departments are focused on communicating with managed care organizations for reimbursement and coverage purposes. Knowing which fellowship to apply to requires the applicant to have some prior knowledge of what department he/she is interested in.

In the case of academic fellowships, they are often specialized in a certain field of study and tied to an academic institution with a medical center. These fellowships are focused on conducting original research for publication and guiding treatment decisions. Some fellowships in academia also involve didactic coursework that confers a degree upon completion. As mentioned previously, some fellowships are purely academic and others are a hybrid of academia and industry. The UW CHOICE fellowships are a great example of this, where fellows complete coursework and an academic thesis to earn an MS degree during the 1st year and then transition to industry during the 2nd year. It is important to note that not all fellowships require the applicant to have a pharmacy degree. Some fellowships are open to people with PhDs, MDs and other science degrees! It all depends on the department and what type of skill set they are looking for. From my experience, it seems like the fellowships that are focused on research & development are often open to applicants from different educational backgrounds.

How do you decide between a managed care residency or a fellowship?

If you’ve read this far, you may have noticed that managed care residencies and fellowships are really quite different. They take place in different settings, are focused on different things, and have different structures. One thing I have noticed is that many pharmacy schools and student organizations tend to lump them together and tell the students, “hey, all these non-clinical opportunities are over there” without going into much detail about what’s what.

So here’s the news: it takes some effort on your part to decide what you want to pursue. Sounds cliché? Totally. But there isn’t much of a way around it.

If you see a summer internship at an insurance company or a pharmaceutical company, apply! Get your feet wet and see what it’s like on the inside. Didn’t get an internship or missed the deadline to apply? That’s okay, too! See if your school offers rotation blocks in these settings. Didn’t get the rotation you wanted? You’re still in luck. Spend some time to reach out to people who have worked in the field or at specific departments and ask them about their experience. Set up a one-on-one call or chat over coffee. Does it sound appealing to you? Then maybe it’s for you. Didn’t reach out to anyone? Well then now it’s really on you. There’s no free lunch.

Let me dispel a common misconception: having a managed care internship or a pharmaceutical industry internship on your CV IS NOT a prerequisite for landing a residency or fellowship. Your ability to convey that you have spent the time and effort to get to know the field and how your skills and interests match that field is more important.

My own experience…

A lot of my own experiences served as inspiration for what you just read.  I knew during my second year in pharmacy school that clinical pharmacy probably wasn’t for me. So, I started exploring elsewhere. I joined AMCP and went to their events to meet and talk to alumni who had pursued non-clinical routes in managed care and industry. I also participated in the Pharmacy & Therapeutics competition. When it came time for my teammates to split up tasks, nobody wanted to do the economic portion. It was difficult and economics seemed like a different language. Cost-effectiveness huh? What in the world is an ICER? It all seemed like stuff that was never taught in school. Nevertheless, I decided to take on the economic portion and unexpectedly enjoyed it. I learned so much while creating and interpreting a budget impact model. I never knew Excel could do so many things and it was actually pretty cool. I learned that entire departments in pharmaceutical companies were dedicated to health economics and outcomes research (HEOR), and I became curious.

I tried applying to HEOR internships two years in a row. I never managed to get one. Nonetheless, my curiosity was piqued so I didn’t stop there. I was able to find an internship at a local health plan. I took on projects that were more economic-focused and performed many cost-effectiveness analyses for the health plan. When it came time to pick rotation sites, I saw that my school offered a research block with our health economics professor. So I took that block. I was the only student in my year to choose that block. And it was awesome! I didn’t have to wear scrubs and got to work in an office setting with an endless supply of coffee. I learned new skills that weren’t taught in class, such as creating different types of economic models and using new computer programs.

When it was time to start applying for fellowships, I naturally looked into HEOR fellowships. I was intimidated, knowing that I had never done an internship in HEOR while it seemed like many other applicants had. Though I did have some relevant experience in managed care, I knew that it was still distinctly different from HEOR. Both settings perform economic analyses on medications, but from very different perspectives and scales. Managed care evaluates all the different drugs that come to market on a very broad scale, while tailoring all evaluations to the health plan’s own patient population. HEOR is much more specialized in economic evaluations and analyzes the company’s specific drugs on a deeper level for internal use while taking a national or global perspective. In some ways the skills used in managed care and HEOR overlap, but the perspective from which those skills are applied and the target audiences are different. I made sure to speak to this distinction while also highlighting how my experiences in managed care would be transferable to HEOR. I ended up choosing the CHOICE fellowship because I loved its hybrid structure—1 year in academia and 1 year in industry. Shameless plug: it’s pretty great and you should apply!

So, to the pharmacy student considering a managed care residency or fellowship, I recommend that you start exploring. And try not to fixate on just finding the exact internship you want. There’s plenty of other ways to get exposure—you just need to keep an open mind.

TLDR; check out the awesome pamphlet that Kevin Li, a current pharmacy student, made!