Developing Stronger ESMs
Using a Structured Planning Process
The Results-Based Accountability (RBA) framework can be used to begin a process of quality improvement in strengthening ESMs, both across the nation and in each Title V agency.1
Summary: What Is RBA?
RBA can be used by Title V agencies as a planning process in two ways:
1. To ensure that ESMs measure activities that advance National Performance Measures (NPMs). A set of population and performance accountability questions requires programs to consider:
- Desired impact on a targeted group.
- Relevant barriers and facilitators, relevant resources and potential partners.
- Identification of what works to produce measurable outcomes.
- Mechanisms to deliver programs effectively.
2. To strengthen measurement of ESMs. A four-quadrant measurement matrix assists programs to move from tracking effort (basic) to assessing effect (advanced). The quadrants are:
- Quantity of the effort (How much did we do?).
- Quality of the effort (How well did we do it?).
- Quantity of the effect (Is anyone better off?).
- Quality of the effect (How are they better off?).
RBA – also called “moving from talk to action” – is a tool that:
- Connects your programs to desired results and supports the development of robust and feasible action plans.
- Ensures your programs are connected to your work and advance your goals.
RBA can be used at a population level to identify and choose National Outcome and Performance Measures (NOMs and NPMs) and at a performance level to set State Performance and Evidence-based/informed Strategy Measures (SPMs and ESMs) and track measurement.2
RBA on a Population Level: Useful to Double-Check Your Needs Assessment
RBA starts with the ends and works backward, step-by-step, to means. This process can serve as a way to double-check and reinforce your needs assessment process. Gather your staff, partners, and members of the community and ask these seven questions to make sure you have captured the necessary information from your stakeholders and are aligning your goals with NOMs, NPMs, and SPMs:
- RESULTS: What are the quality of life conditions we want for the families who live in our community? These are the population results we want (e.g., children living to their first birthday, children are safe on the road).
- EXPERIENCE: What would these conditions look like if we could see them? Ask how you would recognize these results in your everyday lives, without worrying about identifying programs or data (e.g., children not dying in their cribs while sleeping, children wearing bike helmets).
- INDICATORS: How can we measure these conditions? How would you see these experiences in measurable terms? What data do you already have? What new data could you collect? (e.g., percent of child care facilities that are trained in safe sleep, number of bike helmets distributed). For each indicator, ask yourself how you are doing – are the numbers improving, staying the same, or getting worse?
- BASELINE and STORY BEHIND THE CURVE: How are we doing on the most important of these measures? Map out your data over time and develop a baseline that includes 5 years ago to now and a projection 5 years into the future (ask yourself what would the data look like if you did nothing different). Write down the root causes of why the data looks the way it does – include health disparities, behavior change, and social determinants of health. Then map out how you would like the data to look into the future.
- PARTNERS: Who are the partners that have a role to play in doing better? For every “cause” in step 4, think of a partner who you can work with to address the need. Include typical and new partners. Then list partners who can work to address disparities. Are they at the table? How can you engage them?
- WHAT WORKS: What works to do better, including no-cost and low-cost ideas? Brainstorm possible actions Title V can take to directly address identified root causes, engage partners, and leverage other programs already in place. Ask yourself “what would it take to make the numbers better?” Use MCH Evidence Center tools to see if your ideas align with the established or emerging evidence for “what works.”
- ACTION PLAN: What do we propose to do? The next step is to create an action plan. Start by setting priorities and a timeline: “Now,” “Next 12 Months,” and “2 to 5 years.” No-cost/low-cost actions are natural places to start. Don't wait for the perfect plan to be developed and approved. Get started right away.
Public Square Test. Will your stakeholders understand the priorities and actions that you have decided on based on your needs assessment? Could you stand in a community public square and explain what you want to do? Do your activities have the “power” to be understood? Are they representative? Are they data-driven?
- Communication Power. Does your proposed activity communicate to a broad and diverse audience?
- Proxy Power. Does your activity address a root cause and carry potential to bring about the desired result? Can the activity stand as a proxy or representative for a number of strategies needed to affect change?
- Data Power. Do you have quality, timely data? Is the data reliable and consistent?
To prioritize, choose the activities with the best data power, then rank those activities that have the best chance to “make a difference” and be adopted by the community and your partners.3
RBA on a Performance Level: Developing and Strengthening ESMs
RBA can also serve as a way to choose and strengthen your ESMs and SPMs. Similar to the population-level process, you could consider seven performance accountability questions once you have set your priorities, identified your NPMs, and are focused on ESMs. Note some key differences in approach:
- CUSTOMERS: Who are our customers? Develop a complete list of who these groups are. Remember, your “customers” are the direct recipients of your strategy – they might be providers, an organization, or the MCH population group you are targeting (the customer might not be the mother/infant/child/youth/family).
- EFFECT: How can we measure if our customers are better off? Come up with the most meaningful measures, even if you don’t have data or don’t control every aspect of the activity. These might eventually be Quadrant 4 measures (the most advanced quadrant).
- EFFORT: How can we measure if we’re delivering services well? Your answers will usually measure what staff do and how well your programs perform. These will be Quadrant 1 and 2 measures.
- BASELINE and STORY BEHIND THE CURVE: How are we doing on the most important of these measures? Two parts: (1) From the strategies in steps 2 and 3, what are the 3 to 5 “headline measures”? Try to get a mix of Quadrant 2 and Quadrant 4 measures. (2) Graph your efforts and create a baseline (history and forecast). Tell the story behind the data. Why are things getting better or worse? What causes are at work?
- PARTNERS: Who are the partners? Consider partners inside and outside your organization. Consider active, non-active, and outside-the-box partners. What can they do to help turn the curve?
- WHAT WORKS: What works to do better? There are two natural pointers to answer this question: (1) each part of the story behind the curve (the “cause”; e.g., poor lighting is a cause of increased fear of crime) and (2) actions that come from the partners list. Each partner has something important to contribute to turning the curve. Evidence is important here. Look at the research for what has worked in other places, both from the MCH Evidence Center’s online toolkits and from best and promising practices. List these strategies.
- ACTION PLAN: What do we propose to do? Choose the most powerful actions from the possibilities identified in the WHAT WORKS step. You can use the SiLVeR criteria (specificity, leverage, values, and reach). Organize these actions into a plan that specifies the person responsible for each task, timelines, and necessary resources.
- Specificity. Are the strategies focused enough to be implemented? Do they align with the evidence? With a theory of change? With the goal of the NPM?
- Leverage. How much difference will the strategies make – will they address a root cause and turn the curve?
- Values. Will the strategies be adopted by the community they are targeting? Do they work to address health disparities and social determinants of health?
- Reach. Are the strategies feasible and affordable? Can they actually be done and when? Do you have the resources to ensure that the level of activity will be enough to make a change?
Begin with activities that rate highest in the first three areas. Build up to strategies where “reach” is a question.4
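To make the SiLVeR prioritization concrete, here is a hypothetical sketch of ranking candidate actions. The actions and their 1-5 scores are invented for illustration; in practice your team would score each strategy together, treating "reach" as a flag to revisit rather than a gatekeeper:

```python
# Hypothetical sketch: rank candidate actions by their combined score on
# the first three SiLVeR criteria (specificity, leverage, values), using
# "reach" only as a tiebreaker. All actions and scores are invented.
candidates = {
    "Safe-sleep training for child care staff":
        {"specificity": 5, "leverage": 4, "values": 5, "reach": 4},
    "Statewide media campaign":
        {"specificity": 3, "leverage": 3, "values": 4, "reach": 2},
    "Home-visiting referral protocol":
        {"specificity": 4, "leverage": 5, "values": 4, "reach": 3},
}

def silver_rank(scores):
    # Begin with activities that rate highest on the first three criteria;
    # build up to strategies where "reach" is still a question.
    core = scores["specificity"] + scores["leverage"] + scores["values"]
    return (core, scores["reach"])

ranked = sorted(candidates, key=lambda a: silver_rank(candidates[a]),
                reverse=True)
for action in ranked:
    print(action, candidates[action])
```

A simple ranking like this is only a conversation starter; the scores themselves come from stakeholder judgment, not from the spreadsheet.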
Turn the Curve: A Five-Step Process
You can answer each of these questions individually or work through the Turn the Curve (TTC) process.5
TTC is a quick method for thinking strategically about your needs assessment data and developing strong measures to assess the progress you make in changing the trajectory of your work. There are five basic steps to the TTC activity, which can be adapted to meet your team’s needs:
- Graph or describe the trend of data associated with your outcome.
- Analyze and describe the story behind the curve to give your outcome some background and context.
- Identify existing and new partners who have a role to play in improving the data.
- Brainstorm what works to address the contributing factors and turn the curve.
- Develop and implement a comprehensive action plan that includes strong measures.
Tools to Move ESMs into Practice
Tips: How to Strengthen ESMs
Effective ESMs measure strategies that draw from the evidence, advance NPM topic areas, and include:6,7,8
Strategies that are meaningful. Consider if the ESM:
- Is based on an evidence-based/informed strategy. Evidence can be based on peer-reviewed research or informed by emerging practices and expert opinion that there would be a positive, measurable, and expected result from the strategy.
- Has a direct relationship to the NPM.
- Is feasible relative to state priorities and funding.
- Reflects the needs of your populations.
- Has involved stakeholder input and/or buy-in.
- Has potential for improvement over time.
- Addresses disparities, gaps, or issues to improve health equity.
Activities that are measurable. Consider if the ESM:
- Is quantifiable (count, percentage, rate) and specific (defined indicator, numerator, denominator). Note: Quantitative measures show improvement over time better than binary “yes/no” measures.
- Is well-defined, specific, and captures relevant data needed to demonstrate change.
- Has data sources that are available to measure and track the ESM over time.
Improvements that are moveable. Consider if the ESM:
- Can show improvement over multiple assessments.
- Is sensitive to change over time.
- Is effective with multiple population groups, including vulnerable families and CYSHCN.
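The measurable and moveable criteria above can be sketched in a few lines: define the ESM as an explicit numerator/denominator percentage, then check whether repeated assessments show change over time. The numbers below are illustrative only:

```python
# Minimal sketch of a "measurable" and "moveable" ESM: a well-defined
# percentage with explicit numerator and denominator, tracked across
# multiple assessments. All figures are invented for illustration.

def esm_percent(numerator: int, denominator: int) -> float:
    """A quantifiable, specific ESM value: e.g., facilities trained in
    safe sleep / all licensed facilities, expressed as a percent."""
    return round(100 * numerator / denominator, 1)

# Annual assessments: (numerator, denominator) pairs over four years.
assessments = [(310, 620), (355, 640), (402, 650), (447, 660)]
series = [esm_percent(n, d) for n, d in assessments]

# "Moveable": the measure should be sensitive enough to show improvement
# across multiple assessments, not flatline.
improving = all(later > earlier
                for earlier, later in zip(series, series[1:]))
print(series, improving)
```

Keeping both numerator and denominator on record (not just the percentage) makes the measure well-defined and lets reviewers verify the data source year over year.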
Health Equity: Double-Check Strategies To Ensure Adoption
Not all strategies are effective for all population groups, and the evidence is often lacking in terms of using specific strategies to advance health equity. To help address this issue, we use the Science-Based Intervention approach to ensure that a program is effective for MCH populations by asking:9
What about it works? If we understand the key ingredients of a strategy, we can replicate and/or adapt them. Looking at a strategy through a health behavior theory identifies key ingredients. Here are several to consider:
- Intrapersonal. Theory of Planned Behavior, Health Belief Model, Attribution Theory.
- Interpersonal. Social Cognitive Theory.
- Community. Diffusion of Innovation, Ecological Models.10
How does it work? Being specific about the underlying mechanisms can help us increase the impact. Developing a logic model with program actions, targets, outcomes, and moderators allows you to track the process from action to consequence.11
For whom does it work, and for whom does it not work? When we know who is and isn’t responding, we can make targeted adaptations to improve outcomes. Harvard’s approach is to think strategically about the program life cycle:
- Precision. Understand what a program entails so you can go beyond “does it work” to “what about it works” – and eventually “for whom does it work.”
- Fast-cycle iteration. Incorporate new ideas as you go – what is working and what is not working.
- Shared learning. Create a platform to share learning about successes and failures.
- Co-creation. Bring together multiple parties to create a mutually valued outcome.12
In what contexts does it work? By evaluating the context in which a program is implemented, we can adapt it for other settings. The best way to ensure that a strategy is effective is to conduct a robust evaluation. The Kirkpatrick Model asks four short questions to measure reaction, learning, behavior, and results.13
See the ESM Development Guide for specific tools that advance health equity and social determinants of health.
References: Includes Links to Resources
1-5 Friedman M. Trying Hard Is Not Good Enough: How to Produce Measurable Improvements for Customers and Consumers. FPSI Publishing, 2005, 2009, 2015 and Implementation Guide: Results-Based Accountability. Clear Impact. All use of Results-Based Accountability™ materials comply with usage guidance.
6 Kogan MD, Dykton C, Hirai AH, Strickland BB, Bethell CD, Naqvi I, Cano CE, Downing-Futrell SL, Lu MC. A New Performance Measurement System for Maternal and Child Health in the United States. Matern Child Health J (2015) 19:945–957.
7 U.S. Department of Health and Human Services, Health Resources and Services Administration, Maternal and Child Health Bureau. Clarifying Instructions and Frequently Asked Questions (FAQs): Development of Evidence-Based or -Informed Strategy Measures (ESMs) and State Performance Measures (SPMs). Technical Note (November 30, 2015).
8 Coster WJ. Making the Best Match: Selecting Outcome Measures for Clinical Trials and Outcome Studies. Am J Occup Ther (2013) 67(2):162-170.
9, 11, 12 Adapted from IDEAS Impact Framework, Center on the Developing Child, Harvard University.
10 Hayden J. Introduction to Health Behavior Theory, Second Edition. Burlington, MA: Jones & Bartlett Learning. 2014.
13 Bates R. A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Evaluation and Program Planning (2004) 27(3):341-347.