Abstract
- Issue: With thousands in Arkansas losing their Medicaid benefits under the state’s work-requirement demonstration, the importance of evaluating such experiments could not be clearer. In Stewart v. Azar, the court concluded that the purpose of Section 1115 demonstrations such as Arkansas’s is to promote Medicaid’s objective of insuring the poor; evaluations of these demonstrations, as required by law, inform policymakers whether this objective is being achieved.
- Goal: To examine the quality of evaluation designs for demonstrations that test Medicaid eligibility and coverage restrictions.
- Methods: Comparison of state evaluation designs against issues identified in Medicaid impact research.
- Key Findings and Conclusions: Evaluation designs for 1115 demonstrations that restrict Medicaid eligibility and coverage either are lacking or contain flaws that limit their policy utility. No federally approved evaluation designs for Medicaid work and community-engagement demonstrations are yet available, and the Centers for Medicare and Medicaid Services has not issued evaluation guidance to states. Evaluations thus lag well behind demonstration implementation, meaning important impact information is being lost. Eligibility restrictions attached to some approved Medicaid expansion demonstrations remain unevaluated. Moreover, evaluations are not sustained long enough to measure critical effects; systematic evaluation of communitywide impact is lacking; and comparisons to states with no Medicaid restrictions are missing. Without robust evaluation, the core purpose of Section 1115 is lost.
Introduction
As thousands of Arkansas residents continue to lose their Medicaid eligibility for failure to satisfy work, community-engagement, and reporting requirements,1 it is notable that the Centers for Medicare and Medicaid Services (CMS) has permitted the nation’s first-ever Medicaid work demonstration to proceed despite the fact that no approved evaluation to test the impact of the requirements is under way. In Stewart v. Azar,2 a federal court, invalidating the U.S. Secretary of Health and Human Services’ (HHS) approval of Kentucky’s Medicaid work demonstration, concluded that Section 1115 of the Social Security Act authorizes experiments only if they are “likely to assist”3 in promoting Medicaid’s objective of insuring eligible people.
Section 1115 is not simply a grant of power to run alternative Medicaid programs; it is an experimental statute that permits demonstrations designed to promote program objectives and ensures that their results are properly evaluated.4 For this reason, evaluation always has been a core requirement under the law. Because the authority that 1115 confers is experimental, the HHS secretary is obligated to carry out “periodic evaluation[s]” of approved experiments, so policymakers can determine whether they are indeed promoting Medicaid’s purpose.5
What Do Medicaid Waiver Evaluations Evaluate?
Over the years, the many evaluations funded by foundations and federal agencies like the National Institutes of Health have shown how Medicaid policy changes affect health care coverage, access, utilization, quality, and outcomes. Evaluations carried out as part of HHS’s formal 1115 demonstration process, however, are uniquely important. They fulfill the secretary’s statutory duties and create an official record of the impact that federally sanctioned demonstrations have on people.
All Medicaid 1115 demonstrations raise critical evaluation questions. But given Medicaid’s purpose, no evaluation is more important than one conducted for an experiment that will test restrictions on eligibility and benefits. The question is whether, despite these restrictions, the demonstration in fact promotes Medicaid’s objective of providing needed medical assistance to eligible people. Examples of restrictive policies include:
- work requirements
- premiums
- expanded or new reporting rules
- extended disqualification periods
- new restrictions on when coverage begins or how long it will last
- narrower benefits.
The Obama administration approved state experiments testing certain eligibility and coverage restrictions (though not work requirements), but it did so in the context of broader Medicaid expansion. Under such circumstances, the crucial questions for evaluation become: 1) whether, on balance, even significant coverage restrictions are outweighed by the broader population coverage gains made; 2) who is affected and how; and 3) what mitigating safeguards might be introduced.
By contrast, the Trump administration has either approved, or indicated its willingness to consider, 1115 experiments that solely reduce coverage, in Medicaid expansion and nonexpansion states alike.6 Assuming that experiments aimed solely at reducing access to Medicaid fit within the scope of 1115 authority (the Stewart court did not squarely answer this question), key evaluation questions become: 1) who is affected; 2) what the nature and extent of the impact are; and 3) what gains, if any, outside of access to Medicaid are realized by people exposed to a heightened risk of denial or loss of public insurance.
Federal regulations amplify the 1115 statute’s evaluation requirement.7 In addition to requiring periodic reports,8 the regulations require states to evaluate their demonstrations.9 State evaluation designs also are subject to federal approval and oversight.10 States must publish their approved evaluation plans within 30 days of approval; however, unlike for states’ 1115 demonstration proposals themselves, the rules do not provide for public comment on evaluation designs.11
The federal rules are relatively detailed on what evaluation designs entail, specifying that the design must describe the demonstration’s hypotheses, the data that will be used, and data collection methods. Designs also must include a description of “how the effects of the demonstration will be isolated from . . . other changes occurring in the State at the same time through the use of comparison or control groups.”12
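To illustrate what “isolating” a demonstration’s effects typically involves (this is our gloss, not language drawn from the regulations), the standard comparison-group approach is a difference-in-differences contrast:

\[
\widehat{\Delta} = \left(\bar{Y}^{\,\mathrm{demo}}_{\mathrm{post}} - \bar{Y}^{\,\mathrm{demo}}_{\mathrm{pre}}\right) - \left(\bar{Y}^{\,\mathrm{comp}}_{\mathrm{post}} - \bar{Y}^{\,\mathrm{comp}}_{\mathrm{pre}}\right)
\]

where \(\bar{Y}\) is the average of an outcome of interest (say, the coverage rate) in the demonstration (“demo”) or comparison (“comp”) population, measured before (“pre”) or after (“post”) implementation. Subtracting the comparison group’s change nets out statewide trends that affect both groups, but the contrast is credible only if a preimplementation baseline is actually measured, a point that bears directly on the timing issues discussed below.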
Rather than requiring states to incorporate their evaluation designs into their experimental proposals, CMS specifies a date by which a state must submit its design following approval. The agency historically has allowed states to submit evaluation plans after the start of implementation, in recent years as long as 60 days afterward. This time allowance appears to be lengthening: recent CMS approvals have permitted states up to 180 days to submit draft evaluation plans. Indeed, in the case of so-called “community engagement” demonstrations, CMS, as of November 2018, had not yet sent guidance regarding its expectations for evaluation design to states with approved or pending demonstrations.13 Since evaluations cannot begin without CMS approval, their start times likely will fall well after the early stages of experiment implementation, when crucial decisions about operationalizing the design are made and a demonstration’s impact begins to be felt.
Exhibit 1 summarizes the key changes being tested under 1115 Medicaid demonstrations approved between 2012 and 2018.
Research into the effects of changes in Medicaid eligibility and coverage spans evaluations conducted under 1115 authority as well as those supported by public agencies or private foundations. Evaluations have examined not only gains and losses of coverage but also their consequences at the individual, provider, and community levels. Indeed, changes in Medicaid policy that alter eligibility for large numbers of people can produce effects that go beyond near-term changes in coverage. Moreover, their impact may take years to measure, as the effects of large-scale shifts in coverage in one direction or another ripple through health care systems and entire communities.
These broader effects — gleaned from research into the effects of Medicaid reforms over many decades — may be especially important to examine in the case of health insurance programs targeted at poor people. That is because the poor tend to be concentrated in poor communities, which, in turn, may be particularly sensitive to broader spillover effects.14 Changes in policy related to eligibility and benefits also impose new demands that can add to program administration complexity and cost.
Over the years, numerous formal evaluations have yielded important information about impact. But the Government Accountability Office (GAO), in examining Section 1115 evaluations, found that while CMS has improved the evaluation process, the evaluations themselves have been neither complete nor timely. They frequently lack intellectual rigor, and they sometimes fail to test important hypotheses raised in approved designs.15 GAO also noted that evaluation results often are not made public, thereby negating their potential value in creating knowledge.16
How We Conducted This Study
This analysis examines evaluation designs linked to 1115 demonstrations that test restrictions on eligibility and coverage. We first identified key topics that might be included in such evaluations by reviewing studies that have examined the impact of Medicaid eligibility reforms, both prior to and following enactment of the Affordable Care Act (ACA). Many of these studies were identified through the Kaiser Family Foundation’s ongoing, periodically updated compilation of the research literature.17 We supplemented this compilation with a search of peer-reviewed articles examining the impact of Medicaid eligibility reforms (see the Appendix for a summary of some of the key studies).
Our review yielded the following topical areas:
- How reforms affect individuals’ coverage, health care access and utilization, and health outcomes.
- How reforms affect health care providers.
- How reforms affect community health and resources, as well as local and state economies.
- How reforms affect program administration, including administrative complexity, implementation feasibility, and overall program cost.
We then compared these topical areas against approved state evaluation designs for 1115 Medicaid eligibility demonstrations approved by the Obama and Trump administrations. These demonstrations should be understood as a single body of work that could provide key insights into how the same reform might play out under different local conditions. Viewing the current generation of evaluations in this light assumes special importance, given the Trump administration’s stated policy of fast-tracking and replicating the same demonstration elements across multiple states.18 Under these circumstances, coordinated cross-state evaluations would seem essential to the use of multistate testing strategies.
Findings
Posted State Demonstration Evaluation Designs
Of the eight states currently approved to operate the Medicaid expansion on an 1115 demonstration basis, approved evaluation plans were publicly available for six: Arkansas, Iowa, Indiana, Michigan, Montana, and New Hampshire (Kentucky and Arizona did not have approved plans available). Of the four states that, as of November 2018, had received approval to conduct work experiments — Arkansas, Indiana, Kentucky, and New Hampshire — Arkansas is the only one that has submitted evaluation plans for approval. (Kentucky’s original demonstration approval was invalidated in June 2018 and reapproved on November 20, 2018.)
The Urban Institute has identified many questions raised by work demonstrations, particularly in relation to who is affected, who qualifies for exemptions, who loses coverage and for what reasons, how long coverage lapses, and whether alternative forms of coverage can be secured.19 Without an approved and public evaluation design in place, it is not possible to know whether these topics will be captured. This is critical: although the HHS secretary’s approval for Kentucky has been set aside, Arkansas’ demonstration already is under way, and early evidence of large-scale coverage loss is apparent.
How Do 1115 State Medicaid Eligibility Demonstrations Approach Key Evaluation Topics?
Exhibit 2 shows the extent to which currently approved state 1115 Medicaid evaluation designs reflect the four major topics listed above. Certain topics, unsurprisingly, are common to all approved evaluation designs, such as changes in coverage and access to care, health care utilization, and changes in health behaviors. However, CMS does not appear to have sought comparable evaluation approaches that would enable policymakers to more clearly gauge the cross-state effects of common reforms whose details nonetheless may vary in important ways. Although CMS also has commissioned two cross-state evaluations covering certain approved 1115 ACA Medicaid demonstrations, it has made only selected results available and does not indicate whether it considers these limited emerging findings relevant to its ongoing 1115 review and approval process.
We also find that certain topics remain largely unaddressed. One example is the possible spillover effects on health care providers when insurance coverage for an entire population is altered — an especially important consideration in communities where poverty is concentrated and the impact can be broad enough to affect the economic sustainability of the local health care system.20 Similarly, evaluation designs are inconsistent in the extent to which they are expected to address the feasibility and cost to government of changing program rules, such as new enrollment restrictions that add costs to the eligibility determination process. Likewise, benefit restrictions that vary by individual beneficiary characteristics could trigger new costs in determining the scope of benefits or level of cost-sharing owed.
Key waivers of federal law appear to remain unevaluated as well. For example, the lack of evaluation designs for demonstrations that eliminate retroactive eligibility is notable, especially given the long-standing role this process has played in protecting people and health care providers alike from heavy medical debt and uncompensated care.
While a couple of state designs appear to pose “pre and post” questions (e.g., Medicaid enrollment under demonstration conditions compared with what it might have been previously), there do not appear to be evaluation features aimed at ensuring pre-and-post impact analysis in all demonstration states, or comparisons between demonstration states and states that expanded Medicaid under the ACA according to the program’s normal operating standards.
Finally, the duration of evaluations appears to be uncertain. For example, with Office of Management and Budget approval, the Trump administration has terminated a previously scheduled participant impact survey for the Healthy Indiana Demonstration that would have examined the longer-term effects of premiums and enrollment lock-out periods.21 This raises the potential that major downstream consequences will go unobserved.
Premature cancellation of evaluations not only creates the possibility that important consequences will be missed entirely; it also can mean that early, preliminary results are treated as final when they remain subject to change. For example, in the early stages of an experiment that imposes premiums on the poor, a state agency may waive the premiums because the affected enrollees are still learning to navigate the new rules. As the agency moves toward more aggressive enforcement, premiums may produce effects that differ substantially from those observed during the grace period. In such a case, early results could misleadingly suggest that premiums do not have a major impact on enrollment or retention.
Discussion
The GAO reports that by the end of 2016, nearly three-quarters of all states operated at least part of their Medicaid program under Section 1115 authority and that during fiscal year 2015, expenditures under 1115 accounted for one-third of total program spending.22 At the same time, as the agency notes, evaluation has played only a limited role in program administration, despite the fact that it is a core feature of the 1115 statute.
The Medicaid expansion demonstrations have the potential to affect coverage for millions of people while yielding important information for policymakers regarding the effects of eligibility conditions more restrictive than what is normally permitted. Furthermore, with at least one major Medicaid work experiment under way, evaluation has taken on a special urgency in helping policymakers understand why the demonstration is costing thousands of people their coverage each month. Early anecdotal evidence suggests that this figure likely includes many beneficiaries who are working or actively looking for work but cannot navigate the online reporting system or find enough work to satisfy the minimum weekly requirement of 20 hours.23
This analysis suggests that current evaluations will yield far less information than they could. Because evaluation designs and their approval can be delayed until well after a demonstration has begun, there is a lost opportunity to establish a predemonstration baseline against which change and its impact can be measured. Furthermore, at least one key eligibility change — eliminating retroactive eligibility — appears to be going forward without an evaluation plan in states permitted to test the restriction as part of their demonstrations. Major topics, such as the imposition of premiums, may receive inadequate attention. This is particularly true for states that initially take a gentle approach to enforcing a premium policy, becoming stricter about payment and the lock-out consequences for nonpayment only as time passes. Additionally, comparative assessments of common changes across demonstrations appear to be lacking. Although CMS has commissioned a federal evaluation of certain issues across several demonstration states, published findings are limited, and the agency has left unaddressed how these findings are informing ongoing eligibility-restriction demonstrations.
The Trump administration also appears to be moving to lengthen the time between waiver implementation and the official start of an evaluation, thereby limiting the potential to capture the early effects of change and depriving the evaluation of a critical preimplementation baseline against which to measure impact. Indeed, CMS has yet to inform states with approved or pending work experiments what the agency expects their evaluations to capture. The administration also appears to be limiting the duration of evaluations, even in the case of changes whose full effects will be known only over a longer period or whose early impact may change appreciably over time.
These gaps and limitations are especially important now, when potentially large numbers of Medicaid demonstrations aimed at restricting eligibility may soon be in full swing. These demonstrations carry significant implications for access, coverage, health care utilization, and uncompensated care, and because of the concentration of poverty, their effects may be felt communitywide. And while the cost and operational feasibility of implementing complex restrictions on eligibility and benefits are potentially considerable, inclusion of these considerations in evaluation designs is highly uneven. All of this argues for an evaluation process of enhanced rigor.
Acknowledgments
The authors thank Vikki Wachino for her review of this analysis.