Advancing Performance Management Webinar Series
Performance Management Techniques for Successful Decision-Making
In the first installment of a two-part webinar series, Amanda McCarty, Performance Improvement Expert with the Public Health Foundation, dives into the concepts and uses of a performance management system, the crucial role of monitoring program performance and success in your public health initiatives, and how to develop meaningful measures using both qualitative and quantitative data.
Speaker
- Amanda McCarty, MS, MBA, MHA: Performance Improvement Expert, Public Health Foundation
Resource
- Performance Management Techniques for Successful Decision-Making: Presentation Slides (PDF) by ASTHO and the Public Health Foundation
Transcript
Some answers have been edited for clarity.
ANNA BRADLEY:
Hi everybody, welcome, and thank you for joining us today. My name is Anna Bradley. I'm a Senior Analyst with ASTHO's Performance Improvement Program, and I am really excited to welcome you to this new webinar. It’s part of our Advancing Performance Management webinar series that we've been hosting over the past year.
A couple of housekeeping notes: Closed captioning is available today — you can enable it on the control bar in Zoom. Please drop your questions into the Q&A feature; we’ll use that to help moderate discussion at the end. This webinar is also being recorded. You’ll be able to find it, along with other recorded webinars on the same topic, on the Advancing Performance Management Webinar Series page on the ASTHO website. I’ll drop that link into the chat for you momentarily.
But first, I want to make sure to welcome our speaker today, Amanda McCarty.
Amanda has delivered several webinars for us on this topic, and we’re so excited to have her back. She’s a consultant for the Public Health Foundation and has provided training and technical assistance for state and local health departments in areas such as performance management systems development, workforce development, quality improvement, and the development of evaluation plans and logic models.
Amanda also has experience in governmental public health, having previously served as the Director of Performance Management and Systems Development at the West Virginia Bureau for Public Health.
So, Amanda, thank you so much for joining us today. I’ll go ahead and let you get started.
AMANDA MCCARTY:
All right, welcome everybody, and thank you for joining us. Today’s session will be more of an overview of performance management in public health. To kick things off, I’m going to share my screen to display a Mentimeter poll. I’d like to hear from everyone about your level of familiarity with performance management. You can join using the QR code or by going to menti.com in your browser and entering the access code. We’ll quickly get a sense of where everyone stands in terms of familiarity with performance management, just to see what kind of mix we have in the group.
Okay, for those of you who have responded — thank you very much! It looks like we have a pretty good mix across the group. So, we’ll go back to the slides for now. Leslie, if you could pull those back up for me, please. Great.
When we talk about performance management, we’re referring to the regular collection and analysis of data related to the work we’re doing. This includes reporting that data to track our outputs and determine whether results are being achieved. We want to use this information to guide our work and help us make better decisions. When we do this intentionally, continuously, and in a repeated cycle — monitoring our work, identifying areas for improvement, and implementing changes — we begin to build a performance management system.
This system allows us to analyze the success of our programs or the health department as a whole by comparing actual outcomes to our intended goals. We ask ourselves: Is progress being made? Are we moving toward our desired goals? Do we have the right activities and services in place to achieve those goals? Using this information helps us determine whether improvements are needed.
So when we look at performance management as a model, this is the Turning Point Model for performance management. There are several pieces to this, and you don't have to start in one particular area. But when we're talking about performance management, we do need to eventually establish standards or goals that we're trying to achieve, and set targets related to those goals that are measurable.
Then we determine what are meaningful ways that we can measure whether or not we're achieving those goals through performance measurement. So, looking at performance measures — refining them, making sure they're related to our work, that they're usable, and that they help guide our direction.
We want to be able to collect data on those measures, whether it's monthly or quarterly. That regular reporting of progress lets us see what the data is telling us and which direction we're moving in. Especially if we're moving in the wrong direction, we have the opportunity to fix that. If we wait a year to look at our measures and then make a change, we’d have to wait another year to see if it made a difference. So the more timely we can be, the more regular our reporting cycle, the more meaningful it will be.
Then there's the quality improvement piece. With performance management, we're using data to help guide us. But when we're not getting the results we had hoped for, that's where quality improvement can really help us. QI tools let us take a deeper dive into the results we're getting and the data we're seeing, so we can manage some of those changes, understand the root causes behind those results, and make improvements.
When we look at performance management and quality improvement together, they work hand in hand. At the core of what we're doing with all of these initiatives, we're really using data to drive our decision-making and monitor our progress. When we use them together in collaboration, it's to help us improve the value and impact of our programs.
I like to look at performance management more as your telescope — seeing the landscape of how things are going, whether it's within the program or the health department as a whole. You're able to see, dashboard-wise, if there's anything that's really drawing your attention. That’s where you would dig in a little bit deeper with your microscope — or your QI tools — to really help understand some of those issues and overcome the barriers we may be experiencing in achieving our goals.
When we look at the components of performance management, we typically determine — again, with those performance standards in that quadrant of the Turning Point Model — what are our goals? Those high-level pieces of what we're trying to achieve. Objectives-wise, what are some specific, measurable steps or milestones that we would want to achieve in our work of making progress toward those goals?
Then, how can we measure that work? Specific activities or strategies — program activities that we're doing — that we can measure and collect data on to help demonstrate: are we achieving our objectives? Are we moving toward achieving our goals?
In some cases, we may also have key performance indicators, or KPIs. These are the top, most meaningful measures that you're keeping track of related to your program. If you had to pick just one or two that would help you quickly see how your program is doing, which of those measures are the most meaningful and can help you quickly gauge your overall performance? Those would be your KPIs, which can also be a part of this.
Oftentimes, folks say, “Well, this seems very overwhelming. It's intimidating.” And when we call it performance management — if you've never called it that before — it can be intimidating. So I want to give this helpful analogy of performance management that you may be doing on a day-to-day basis and just not realize it.
If you have an Apple Watch — or if you know anyone who has an Apple Watch — you know there are these three rings. It's encouraging you each day to close those rings. The green ring encourages you to get at least 30 minutes of exercise. If you complete that, you close the ring. You can see here, this individual achieved 45 minutes of exercise. They've gone beyond the goal and are halfway to closing the ring a second time.
The blue ring encourages you to stand for a few minutes every hour to prevent being sedentary. The goal is to do that during at least 12 hours of the day. This goal has also been met, with 14 hours.
The red ring is about how many calories you're burning based on your movement throughout the day. You can see here, the goal was 510 calories, but we only burned 443, so this ring is not closed.
If I want to better understand how I could potentially close this ring next time — or tomorrow — I dig a little deeper into the data that's available. For example, I did a workout on the elliptical, but I only went 3.57 miles. Maybe if I had gone four miles, I would have burned enough calories to meet that goal. So tomorrow, perhaps I’ll do four miles and be able to close that ring.
This is the idea of using performance goals and then monitoring your performance to see how you're doing against those goals.
So if you had to pick the three measures most related to the performance of your program — the three measures you would want to monitor on a regular basis to really demonstrate the progress being made — what would those three measures be? Or, if we had to prioritize, what would we choose?
Just thinking through the prioritization of what you would collect: imagine we're looking at this on a monthly or even quarterly basis — say, in a staff meeting — and it's related to clinic wait times, vaccination administration, or other activities. For example, let's say the red ring here represents our clinic wait times. In our staff meeting, we're able to see through the data that we have an issue on Fridays with keeping our clinic wait times under 30 minutes.
That draws our attention. We can then have a conversation to better understand what’s contributing to those longer wait times. Is it because we’re starting Friday off with a staff meeting? Is it because we’re short a provider? We can talk through those things and, if needed, use some of the QI tools to drill down further.

It's just like when you're driving a car: you see what’s most important on your dashboard — your speed, how much gas you have, temperature, battery life. You see what you need as the driver. You're not looking at everything that could potentially be available. So again, we want to prioritize. If we had to look at everything every day while driving, it would be hard to notice if one thing was a little brighter than the others or if something was trying to get our attention.
We want to prioritize what is most meaningful for our program — what we need to monitor regularly — and use that data to help us better understand the work being done. Not everything that can be measured needs to be measured. But that doesn’t mean the work is any less important.
I’m not saying that answering phone calls or printing birth certificates isn’t important — it is. It has to be done. But it’s not necessarily something we need to prioritize for regular measurement and monitoring. So instead of measuring everything we do, let’s try to prioritize what would be most meaningful to measure and monitor. Let’s use the data we’re collecting as a feedback mechanism to help us make decisions on improvements moving forward.
Not everything that can be counted should be counted. We don’t want to count everything just because we can, and we don’t need to measure something just because we always have. If we’re not using the measure — if it’s not providing us with any type of feedback to make improvements — we should probably have a discussion about whether we need to keep measuring it.
We need to make decisions about what we want to measure. When we’re choosing measures, we want to choose high-power measures: measures with communication power, importance and proxy power, and data power. These three qualities help ensure that the measures are not only meaningful and actionable but also credible and influential.
Measures with communication power are easily understood by stakeholders — staff, leadership, partners, and the public we serve. They clearly convey performance and progress. That helps build trust and drive engagement. If we can’t explain a measure simply, it loses value as a tool for motivating improvement. So we want to make sure we have that communication power.
With importance and proxy power, these measures focus on what matters most — critical goals or mission-driven objectives. Proxy power means the measure reflects broader success. Even if it’s a single data point, it can represent a larger trend or outcome. That’s what we want to be able to see with importance and proxy power.
Selecting high-impact or proxy indicators also helps avoid wasting time tracking less relevant data.
Then there’s data power. A measure needs to be reliable, valid, and timely. We need timely data available to collect and represent that measure. Otherwise, it can be useless. Without consistent tracking and the ability to show comparisons within our data, it’s hard to use it as evidence-based information.
We want strong data power that supports monitoring, evaluation, and improvement over time. When we look at measures that have all three of these powers, they help us drive performance improvement, align around shared goals, and support informed decision-making as a team.
So just think through what are meaningful measures, while also keeping in mind these high-power measures.
We also talk about using a family of measures. We don’t want to use only outcome or only process measures — we want a balance. We want to show outcomes, productivity, process impact, and maybe even customer satisfaction or feedback. We don’t want to collect just one type of measure, but a balance of them.
We want to be able to tell, within our program: How much are we doing? How effectively are we doing it? How well did we do? Are we working toward achieving our goal? And what was the impact — or the “so what” — of doing this?
So when we're looking at our measures, I've heard some programs say, “Well, we don't really have any quantitative measures — we really just have qualitative information.” I think it's important that we collectively show both within our programs. And if you've never moved from qualitative to quantitative, I think there's a way — with good discussion — that we can do that.
We really want to be able to evaluate holistically the work that we're doing. When we use both qualitative and quantitative performance measures, we're combining what the numbers say with why the numbers look that way. We're allowing for a fuller understanding of performance, context, and impact. When we define what we're evaluating, we're clarifying our goals and what success looks like. When we're using quantitative measures, again, we're looking at trends over time, comparisons. Maybe we've collected qualitative information, and we're going to summarize it into something more quantitative for us to look at over time.
Let me give you an example of that. If we're looking at quantitative measures within our program, maybe we're just looking at the number of eligible participants participating in WIC. Maybe we're looking at the percentage of participants completing at least three coaching sessions in a tobacco quitline. Maybe we looked at the number of processes we improved as a result of collecting customer satisfaction data, or the percentage of participants in a virtual coaching program.
Then some will say, “Well, we're only collecting qualitative data. We're only getting group feedback, or we're doing town halls or focus groups. Maybe we're getting narratives or information, and it's not really something that we feel is quantitative.”
I think there's a lot of great information we can capture with qualitative data. It provides additional insights that numbers alone cannot provide within our programs. But we can look at this information and try to summarize it in a way that could be measurable for us moving forward.
Let’s say you're collecting employee satisfaction information, and there's a general sense of low morale. A lot of employees are rating below “satisfied” on having a supportive work environment. We could calculate the percentage of employees who rated “satisfied” or higher and then look at that number from one year to the next.
Maybe as we work on our workforce development efforts or implement employee engagement activities, we can hopefully see the impact of that intentional work. We can improve the score and track it year to year. If our target is 80%, and right now 60% or fewer of our employees are rating themselves satisfied or higher, our goal over the next three to four years, as part of our workforce development efforts, might be to get to at least 80%. So we can create that type of measure from either quantitative or qualitative information.
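To make that qualitative-to-quantitative step concrete, here is a minimal sketch, not part of the original webinar, of how survey responses could be rolled up into a single percentage and tracked against an 80% target from year to year. The rating scale, responses, and target value are hypothetical examples.

```python
# Illustrative only: turning qualitative survey feedback into a trackable measure.
# The rating scale, responses, and target below are hypothetical examples.

SATISFIED_OR_HIGHER = {"satisfied", "very satisfied"}

def percent_satisfied(responses):
    """Share of responses rated 'satisfied' or higher, as a percentage."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if r.lower() in SATISFIED_OR_HIGHER)
    return 100.0 * hits / len(responses)

# Hypothetical yearly results for a "supportive work environment" survey question.
survey_by_year = {
    2023: ["satisfied", "neutral", "dissatisfied", "satisfied", "very satisfied",
           "neutral", "dissatisfied", "satisfied", "neutral", "dissatisfied"],
    2024: ["satisfied", "very satisfied", "neutral", "satisfied", "satisfied",
           "dissatisfied", "satisfied", "very satisfied", "neutral", "satisfied"],
}

TARGET = 80.0  # target share of staff rating "satisfied" or higher

for year in sorted(survey_by_year):
    score = percent_satisfied(survey_by_year[year])
    status = "meets target" if score >= TARGET else "below target"
    print(f"{year}: {score:.0f}% satisfied or higher ({status}, target {TARGET:.0f}%)")
```

The same approach works for most categorical feedback: define the categories once, then report the share falling into the favorable bucket on a regular cycle.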
When we hear feedback — whether it's from customers, folks from the public, or those using our process — if people feel like they're waiting too long, then let's find out what the current baseline is for our average wait time. If we haven’t been collecting that, there is a way for us to start. We just need to identify the best way to do it.
If forms are not being completed in their entirety, and that’s the feedback we’re hearing, then we need to get a baseline. What percentage of our forms are not completed?
Believe it or not, in the past year, there have been several QI projects I’ve engaged with or worked on in our QI trainings with health departments where folks are saying, “We do inspections, and some of the inspectors’ forms are not 100% complete.” Then we have to send them back to be finished. Or we’re getting some type of data or form submitted to us, and the provider hasn’t filled in all the information.
There was even a project related to death certificates — where the information needed to process a death certificate wasn’t being completed by providers. If we feel like this is an issue we need to work on and want to monitor its progress over time, then we need to establish the current baseline and find a starting point that we can measure against moving forward.
If we’re hearing feedback that people prefer to complete their information online, that’s great. Let’s see how many requests are being made online.

There was another example from a hospital related to the proper disposal of controlled substances. They were investigating reports that things were not being disposed of properly. But what had actually happened was that items had only been misplaced temporarily — just for a couple of days — and then they were found, documented, and the forms were updated.
They were receiving so many of these forms reporting improper disposal when, in fact, the items had just been temporarily misplaced. What they really needed was a more standardized process. So, they made a change. By adjusting the length of time for reporting, they went from 80% of reports turning out to require no investigation down to zero.
This is an example of understanding the issue rather than just saying, “Well, we have a problem with proper disposal.” Let’s get a better understanding of why it’s happening. Let’s put it into some type of quantitative form so we can monitor it. Is there something wrong with the process overall? What do we need to do to change it or make an improvement?
Again, this goes back to how we frame things initially with performance management components. We talk through: What are our goals? What are the priority goals we’re trying to achieve? What are some supporting objectives — those measurable steps we can take to meet the goal? And what are some meaningful measures we can collect to help demonstrate whether we’re making progress toward achieving the objectives and the goal?
Here’s an example. I’m going to show you three different examples. The first is from environmental health. A health department had a broad goal to minimize environmental health risks and disparities. They had three supporting objectives: enforce environmental health codes, develop policies that incentivize compliance, and engage the community to reduce the need for enforcement. You can see the measures they associated with those objectives. In this example, they had both performance measures and outcome measures. The outcome measures were things they would assess once or twice a year, while the performance measures were tracked more frequently and had an impact on the outcomes.
The second example is from maternal and child family health. This was just one component of their program, related to their overarching goal of improving maternal and child health outcomes. They had two objectives: assess smoking and pregnancy status at state and county levels, and increase the availability of school-based dental sealant programs. You can see the measures they initially chose to help demonstrate whether they were achieving those objectives. Again, this is not representative of their entire maternal and child health program — just a snippet.
The third example is from a clinical program — a diabetes prevention program. Their goal was to increase access to the program for those at high risk for developing type 2 diabetes. They had two objectives: increase identification and referral of individuals at high risk, and increase access and availability of the program for those with prediabetes or at high risk.
Some of the measures they identified as most important included the number of partners funded to screen and refer patients, and the number of individuals referred to the program. Under the objective to increase access and availability, they tracked how many programs were available and aimed to expand throughout their service area. They also looked at the number of regions identified as having no programming — trying to fill those gaps, especially in rural areas. And they tracked the number of participants in their virtual coaching program.
They could identify how many people were at risk, refer them, and offer programming. But the real impact came when individuals enrolled in the program and participated in the virtual coaching. That was the key performance indicator (KPI). They were measuring the work, the process, and the outcomes — but the impact they really wanted to see was participation in the virtual coaching program. That’s where they could make a difference through education and prevention.
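One way to see how the pieces of that example fit together is to lay out the goal, objectives, measures, and KPI as plain data. This is an illustrative sketch only: the field names are assumptions, and the measure wording is paraphrased from the description above.

```python
# Illustrative only: organizing the diabetes prevention example
# (goal -> objectives -> measures, plus a KPI) as plain data.
# Field names are assumptions; measures are paraphrased from the webinar.

diabetes_prevention = {
    "goal": ("Increase access to the diabetes prevention program for those "
             "at high risk for developing type 2 diabetes"),
    "objectives": [
        {
            "objective": "Increase identification and referral of individuals at high risk",
            "measures": [
                "Number of partners funded to screen and refer patients",
                "Number of individuals referred to the program",
            ],
        },
        {
            "objective": "Increase access and availability of the program",
            "measures": [
                "Number of programs available across the service area",
                "Number of regions identified as having no programming",
                "Number of participants in the virtual coaching program",
            ],
        },
    ],
    # The single measure watched most closely to gauge impact:
    "kpi": "Number of participants in the virtual coaching program",
}

for obj in diabetes_prevention["objectives"]:
    print(obj["objective"])
    for measure in obj["measures"]:
        print("  -", measure)
print("KPI:", diabetes_prevention["kpi"])
```

Structuring it this way keeps the hierarchy explicit: every measure traces back to an objective, every objective traces back to the goal, and the KPI is simply the measure the program watches most closely.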
So, with performance management, when we’re looking at the measures and the information they provide, we don’t just want to know how we’re doing. We want to know that — and how we can do better. If we can get good, meaningful, and reliable data that gives us accurate performance measurement, then the information we obtain will tell us where we are now, how we’re doing, and the direction or trend we’re moving toward.
When we compare that to our targets and goals, we’re able to see what we’re achieving — and how we’re doing.
Performance management, again, is just like quality improvement on a larger scale.
When we look at establishing our plans — whether it's within our program or within our health department — we might be looking at a strategic plan department-wide, or operating plans at the program level. Maybe we're reviewing our improvement plans to see if our priorities are changing or if our focus areas should be shifting. We're aligning our expectations, defining them, and implementing the public health work at the program level. That’s where we’re doing the “do” piece of the Plan-Do-Check-Act cycle.
Within our programs, we may also be doing operational-level QI. For example, we might be working on improving the information we receive from providers when completing death certificates. Maybe we’re implementing a new process to get better feedback and reduce the wait time for people waiting on a death certificate. That’s operational-level QI.
Not only are we setting expectations and implementing the work within the health department, but we’re also collecting data so we can check and monitor our progress to see how well we’re doing. Some pieces of this happen less frequently — like our state health assessment or community health assessment. Is there additional information being provided to us or available to help us check how we’re doing against our goals?
We translate this into usable information to help us determine: Are there ways for us to improve? Can we change or implement something new? We’re determining our next actions and also our direction moving forward.
All of this information — if you go through one cycle of it — will influence the direction of our next priorities or what may change in the future. Some of these pieces may happen quarterly. Some may happen annually, or every three to five years, like when we update the strategic plan or the community health improvement plan.
Not all of these pieces happen at the same time or with the same frequency. But performance management is a large-scale quality improvement Plan-Do-Check-Act cycle. We’re using information within the organization — information we’re able to collect within our programs — to give us insight into the work we’re doing. That helps us better understand the results we’re seeing and determine whether we need to make adjustments moving forward.
When we look at the importance of performance management, it builds in accountability. We can see a better return on dollars invested in health. We can achieve greater accountability for funding. And when we demonstrate the work we’re doing and share the data, we also build transparency and communication with the communities we serve.
It helps reduce duplication of efforts across the health department. It gives us a better understanding of our accomplishments and priorities. We’re looking at the quality, the impact, and the meaningfulness of our work — not just how much work we’re doing. We’re also focused on problem-solving.
Again, we’re looking at a group of measures — a family set of measures — where we’re examining capacities, processes, and outcomes throughout the health department to help guide our decision-making.
When we have discussions at the health department about performance management, we ask: What are our priorities? What do we feel is most important to focus on? We don’t want every light on the dashboard blinking. If we had to choose the top three priorities, what would we want on our dashboard? What would be most important for us to collect?
Let’s just have that conversation. And again, keep in mind: there is no universal set of right and wrong performance measures. It’s not about being perfect at performance management. Don’t let perfection — or striving for perfection — get in the way of your program doing good work and demonstrating that good work.
In our program discussions, let’s talk about the work that needs to be done, the goals we’re trying to complete and achieve, and the time frame we have to complete them. Is this something we’re trying to achieve this year? In the next two years?
What matters is not how much we’re tracking or reporting on, but whether we have meaningful measures that truly demonstrate the work we’re doing and the outcomes we’re seeing. We need to analyze those outcomes to understand what the data is telling us, so we can track our progress toward our goals and demonstrate that we’re making progress.
We also need to discuss the time frames for achieving these goals. Who’s going to be responsible for this work? And is that responsibility actually ours? I frequently see in our technical assistance discussions that people want to include certain measures. But the more we talk about it, the more we realize those measures are related to work they don’t control. Maybe it’s work being done by community partners. Maybe the health department is coordinating the effort or providing funding, but the actual work is being done by local health departments or other partners.
It’s important to think through who is responsible for carrying out the work and achieving the goal. We don’t want to measure things that are outside of our control.
Then there’s the data collection piece. For the things we’re trying to measure, is the data available? Is it readily available? Can we depend on it? Is it something we can feasibly access on a regular basis? And is it available without additional cost or financial burden?
These are the kinds of conversations we need to have at the program level — talking through the specific pieces of performance management and trying to narrow it down to the priority areas that matter most, or that are most representative of the meaningful work we’re doing, rather than trying to measure everything.

Some health departments — and I think we’ve talked about this in one of our previous webinars — say, “I have no idea where to start with performance management.” Sometimes it seems like an overwhelming or daunting task to have this conversation with every program, especially if you're a really large health department.
One of the things we recommend is to start with the foundational public health services. If you're new to performance management or looking for a framework to help get started, the foundational public health services are a great place to begin. These are the fundamental responsibilities that all health departments are expected to deliver.
These foundational areas include communicable disease control, chronic disease and injury prevention, environmental health (ensuring safe air, water quality, and food), maternal, child, and family health services, and increasing access to clinical care and linkages with clinical care.
Over the past year, we’ve sat down with a few health departments and talked through questions like: Which of your programs are most involved in communicable disease control? Let’s have a meeting with just those folks to talk through: What are our goals? What are we specifically working on right now? What’s most important for us to measure?
For chronic disease and injury prevention, maybe it’s community health, school health, or opioid injury and overdose prevention. From one health department to the next, how you approach this may look different, but the foundational area remains the same.
The same goes for clinical care — access and linkages. That might include WIC, women’s preventive services, school health clinics, or dental services. This will vary from one health department to another, but generally, most programs contribute to one of these foundational areas in some way.
If we bring those folks together and have a conversation about the work we’re doing collectively and collaboratively related to communicable disease control, we can talk through: What are our goals? What are we striving to achieve? What are our current focus areas? Then we can talk about our objectives and measures. These may differ across the two or three programs contributing to that goal, and that’s okay. We can still capture all of that work together under the objectives for that goal of preventing communicable disease.
I wanted to share this because it can be a helpful framework to get started — especially if the “Where do I start?” question feels like a barrier. Try starting with those five foundational areas. Identify what each of those areas looks like in your health department, what the work looks like, and then identify the priorities and ways to measure them.
Once you get a system in place — something you feel good about — then, if you need to expand to other areas, you’ll already have a system in place.
Performance management is systematic and continuous. We’re setting goals and objectives, developing meaningful performance measures that help show we’re achieving those goals and objectives, collecting that information regularly, and seeing what it tells us. We’re sharing those results with others and using quality improvement tools to address any gaps or opportunities for improvement that we’ve identified through the data.
But we never stop monitoring. We continuously monitor. Then, perhaps the next year, we set revised or more focused goals and objectives and continue the cycle again.
I also want to add that performance management is not project management. We should not be using our performance management system to track projects. Projects are temporary. Even if it’s the most important thing your agency is working on right now — maybe you’re implementing an electronic health record or moving to a new online document-sharing system — those have a start and end point. That’s a project.
When we’re measuring performance, it should be related to the impact our public health programs are making in the community. Think about it this way: if the governor called you today and said, “Hey, it looks like we’re giving you $10 million a year for this specific program, and I’m looking to cut $10 million from the general budget — can you tell me the difference you’re making with that program?” — you wouldn’t want to say, “We’re implementing a new filing system.” You’d want to share the impact, the results, the trend lines, the outcomes you’ve seen over the past few years, and the difference you’re making.
We don’t want to include projects or tasks in performance management. We want to monitor the goals, the work we’re doing related to those goals, the data, the impact, and the difference we’re making. We want to be able to demonstrate the value we’re adding as a result.
We wouldn’t want to include things like “We need to develop a training” or “We need to hold committee meetings.” I’m not saying those things aren’t important — they are. They’re things we have to do, and maybe they help us achieve our goals. But something like printing birth certificates — that’s a very important task. We’ve always had to do it, and we always will. But we’re not necessarily going to change how many birth certificates are needed each day or how many we’re printing based on need. That’s a service we provide to the public.
If we’re saying, “We need to develop a committee to help implement a new initiative,” that’s fine. That’s part of the work. That’s a business-as-usual task. But the progress of that committee being developed isn’t something we’d want to report as representative of our work in community health or tobacco prevention. Instead, we’d want to say, “Here’s the work we accomplished as a result of that committee.”
So, when we’re developing performance measures, we need to ask: Is this a business-as-usual task? Am I going to have to do this regardless of the result? If so, it doesn’t belong in the performance management system.
But if we’re looking at something like wait times — something we could improve or change based on the data we collect — then, if it’s related to our priorities and goals, that’s something we would want to measure.
So, we’re going to jump back over to the Menti again and do a little bit of practice with some performance measures, just kind of developing those.
I’m going to ask folks to go back to Menti. You can use your browser and go to menti.com and use the code, or you can use the QR code with your smartphone. I want to hear from everyone: What are some example activities or outcomes that your program measures — or that you’re currently doing — even if you’re not measuring them?
Okay, so we’re measuring retention, customer satisfaction, presentations, cultural engagement, folks that are enrolled, training participation, turnaround time — that’s a good one — permit process timeline, performance evaluations, reduction in alcohol use, recruitment, participants served, turnover, overall outcomes, immunizations, website visits, burnout (I haven’t seen that one yet), grants completed on time, vaccination rates, revenue cycle, STD testing and rates.
Great. This is all great feedback — really good examples: community trust, radon kits, procurement measures, self-efficacy, turnaround time, engagement. Great. This is all great.
All right, so keep that in mind, some of the examples you’re thinking about, and we’re going to build from that in our next one.
So, related to some of this work, do you have any goals specific to your work for this year? What are some of your goals related to your program or your health department? If you have a goal that’s not necessarily related to what you mentioned earlier, that’s okay — you can start with a new area.
All right, so we’re looking at increasing program participation, outreach engagements, reducing time to hire — that’s a good one; I’ve had experience with that — maternal and child health outcomes, reducing turnover, increasing use of radon kits in rural areas, increasing QI, reducing youth screen time (okay, haven’t seen that one yet either), increasing outreach efforts, improving staff engagement, completing our state health assessment, looking at syphilis rates.
I see several things about staff retention, hire time, onboarding experience, employee engagement and satisfaction, rolling out a new program, performance management. I saw something for leadership succession planning, and now I’m seeing internal promotion pathways, improving immunization rates. Okay, very good. This is all good feedback.
Now, thinking about the goal you just listed, what are some specific activities related to that work — related to the goal you’re trying to achieve — that you think, after hearing this information today, would be meaningful to measure?
If your goal is to improve the onboarding process or reduce the hire time, what are some things that would be meaningful to measure to help achieve that goal? Same thing for succession planning or reducing syphilis rates.
Yes, so a specific activity related to that work: identifying outreach events. Great. Providing education and radon kits to rural city clerks. Increasing school clinics. Engaging our partners. Number of programs in the performance management system that are using data to make decisions — that’s great. Looking at attendance for job fairs and college presentations. Sharing awareness of community efforts. Indigenous words, common practices, or common phrases used. Identifying KPIs. Okay, great. Some of this is just activities that would be meaningful to measure. I think the next one might be a little repetitive because I was going to ask how you could move that to a meaningful performance measure — and I think some of you already did that.
If you didn’t already and you’d like to, go ahead and do that here — that would be fine. Some of you were even mentioning the percentage of vaccination rates or the number of events that you’re holding.
So, Anna and Melissa, I’ll turn it back over to you. I think this slide is probably a little repetitive of what we just covered. I’ll leave it up in case anyone has any additional ideas they want to share, but we can move to questions or whatever you all had planned next.
BRADLEY:
Sounds great — thanks, Amanda.
We do have a couple of questions in the Q&A box. Two really focus on the difference between performance management and evaluation. Some of the performance measures look like things you might include in an evaluation plan as well. Could you speak to the difference between evaluation and performance management a little bit?
MCCARTY:
Yep. So I think with performance management, you’re looking at: What are our goals? You’ve established some goals, and you’re looking to see — based on the data you’re collecting — what progress you’re making toward achieving those goals. You’re looking at that information to see: Can we make improvements? Can we get better results?
Performance management is definitely focused more on continuous improvement. You’re using real-time data — or monthly or quarterly data — to help you monitor and make adjustments.
With evaluation, you’re really looking at assessing the impact that’s been made. So, after a program — after a year of the program, or after we’ve closed something, or it’s run its course — we’re determining what worked, what didn’t work, and why. We’re evaluating the work we’ve done.
Another piece of that is the timing. With performance management, we may be looking at measures monthly, quarterly, or biannually. Evaluation is more periodic — it’s a look back.
I hope that helps.
BRADLEY:
Another question we’ve gotten is that one of the biggest performance management challenges this person comes across is staff saying, “This measure would be great to track, but we don’t know how to go about collecting the data.” Do you have any tips or resources you refer people to for the basics of data collection for performance?
MCCARTY:
Yeah. I usually just have a conversation about what’s available. What data is available that we could collect? Or is there a new way for us to collect this information?
Just because we’ve never done it before doesn’t mean it’s not possible. It’s really just a conversation. For example, clinic wait times — when that wasn’t something we were tracking and we didn’t know the baseline average, we implemented a new process. When people check in, we log the time they check in and the time they check out in a spreadsheet, or in some other way we can monitor, so we can keep an average wait time.
So it’s really just a further discussion of: Okay, we want this information — how can we find it? Is something available to help us? Or maybe we need to involve others within the health department to help us with that.
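As a concrete illustration of the wait-time example above, here is a minimal illustrative sketch of how logged check-in and check-out times could be turned into an average to establish a baseline. The timestamps and the 30-minute goal are hypothetical.

```python
# Illustrative only: computing an average wait time from logged check-in and
# check-out times, along the lines described above. Timestamps and the
# 30-minute goal are hypothetical.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M"

# (check-in, check-out) pairs, e.g., exported from a simple sign-in spreadsheet
visits = [
    ("2025-04-18 09:02", "2025-04-18 09:41"),
    ("2025-04-18 09:15", "2025-04-18 09:38"),
    ("2025-04-18 10:07", "2025-04-18 10:52"),
    ("2025-04-18 10:30", "2025-04-18 10:55"),
]

minutes = [
    (datetime.strptime(out, FMT) - datetime.strptime(arrived, FMT)).total_seconds() / 60
    for arrived, out in visits
]

average_wait = sum(minutes) / len(minutes)
GOAL = 30  # minutes, echoing the under-30-minutes clinic example earlier in the series
print(f"Average time from check-in to check-out: {average_wait:.1f} minutes "
      f"({'within' if average_wait <= GOAL else 'over'} the {GOAL}-minute goal)")
```

Once a few weeks of entries accumulate, the same calculation gives a baseline average, and recalculating it each month provides the regular feedback the measure is meant to supply.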
BRADLEY:
Awesome. Thanks, Amanda.
All right, we are coming up right at the end of our hour, so let’s see if we can pull the slides back up. I want to point people to a couple of upcoming resources. There are two more webinars that you might be interested in. One is being hosted by the Public Health Foundation about harnessing AI to elevate public health performance improvement initiatives. I'm going to drop the registration link for that in our chat.
Amanda will also be joining us again on May 21st to talk about operationalizing performance management in a health department. We encourage everybody to come back for that event. The registration information for that session is now in the chat.
The session on harnessing AI is definitely a hot topic right now.
Lastly, we would really appreciate it if you could take a moment to complete the evaluation. Let us know how today went — it would be incredibly helpful as we plan future webinars. If you have a moment before you completely sign off for the day, please take a minute to fill that out.
We’ll see you back on May 21st. I’ll drop the registration information for that one into the chat as well. We hope you’ll be able to join us again.
I know we’ve shared the link a couple of times to where you can find all of these recordings. That’s going to be a really great resource website for you all as you continue to look for this information.
I did see that we just got a couple of other questions in the chat. It is 12:59, so I’m going to let us go. But Amanda, I might send these questions to you for you to take a look at, and maybe we can get something typed up for the people who posted them.
MCCARTY:
Yeah, I feel like I can really quickly answer that one. I think it can be either, Taylor. Some health departments do it by program. Some are using the foundational public health services framework — the five foundational areas. Some are doing it department-wide, maybe by division or office, depending on the level of the hierarchy.
BRADLEY:
So, it really depends on the preferences of the health department. It can be both specific to a program or division, and department-wide.
MCCARTY:
Yep.
BRADLEY:
Okay, awesome. Thank you all so much, and have a great rest of your day. Thank you so much for being with us.
Operationalizing Performance Management in a Health Department
ASTHO and the Public Health Foundation present the second installment in a webinar series designed to help attendees advance accountability and performance management in their public health departments. This session will equip participants with practical insights and strategies that can be immediately applied to effectively operationalize performance management.
Speaker
- Amanda McCarty, MS, MBA, MHA: Performance Improvement Expert, Public Health Foundation
Resource
- Operationalizing Performance Management in a Health Department: Presentation Slides (PDF) by ASTHO and the Public Health Foundation
Transcript
Some answers have been edited for clarity.
ANNA BRADLEY: Hello everybody, and welcome. It’s so wonderful to have you here with us today. We really, really appreciate you giving us some of your precious time.
My name is Anna Bradley. I'm a Senior Analyst with the Performance Improvement Team here at ASTHO, and I am so excited to be hosting another one of the webinars in our performance management series: Operationalizing Performance Management in Health Departments.
A couple of items: closed captioning is available through Zoom. You can access that through the Zoom toolbar at the bottom of your screen. We do have a Q&A box open for questions — please drop any questions there throughout the webinar, and we’ll be monitoring those. This webinar is also being recorded. We’ll post the link to where you can find this webinar and others in the series in the chat for you to access.
I want to jump right into introducing our speaker for today because I know she has a lot of amazing content to cover for you all.
Amanda McCarty is a performance improvement expert with the Public Health Foundation. As a consultant for the Public Health Foundation, she has provided training and technical assistance for state and local health departments in the areas of performance management systems development, workforce development, quality improvement, and the development of evaluation plans and logic models.
Amanda also has experience in governmental public health, having previously been the Director of Performance Management and Systems Development at West Virginia’s Bureau for Public Health. We are delighted to have Amanda with us. Thank you so much for being here, and Amanda, I will just turn it right over to you so you can get rolling.
AMANDA MCCARTY:
Thank you very much. I appreciate it, Anna.
Welcome, everybody, and thanks for joining us. If you’ve joined some of our previous webinars, this will build on those: today we’re going to talk a little bit more about how to use the data and information you get from a performance management system — really putting that information into action and using it as a feedback mechanism to make improvements within the health department.
Just to recap — and I’ll spend the first few minutes here recapping performance management overall — when we talk about performance management, and we look at this Turning Point Model, we want to set goals or standards, or set a level of expectation that we are aiming to achieve within the health department across our programs.
Whether you’re building that for individual programs, building it based on a framework like the Foundational Public Health Services, or building it based on strategic priorities — however you’re doing that — you’re setting standards and goals based on those priorities, whichever model you choose.
In establishing those standards and goals, we want to be able to identify meaningful ways to measure our work. Are we making progress toward achieving these goals? Are we moving in the right direction?
When I say meaningful performance measures, we want them to give us useful information that allows us to make decisions, have discussions, talk about the results we’re seeing, and make decisions moving forward regarding our work. Should we keep doing what we’re doing? Is it working well? Are we seeing good results? Are we working toward achieving our goals? Or should we try something different to hopefully see different results?
Those are meaningful performance measures. We want to collect them frequently. The more often we collect them, the more often we can have discussions about the feedback and make decisions moving forward.
If we’re only collecting data once a year, then we may only be making changes once a year — and we’d have to wait another year to see if our changes made an impact. If we can collect data more frequently — monthly, or at most quarterly — we can make those decisions on a shorter-term basis instead of waiting a long time.
Reporting on progress is about looking at our performance measures, looking at the data we’re collecting, analyzing and interpreting that data to see what it’s telling us about our work — about the impact and the value we’re adding.
This Turning Point Model is the model of performance management: setting goals, collecting measures, using that information as feedback on our programs during that reporting of progress. Then, when we’re not seeing the results we hoped for or we’re not moving in the right direction, we dive in a little deeper with the quality improvement component — trying to understand why we’re seeing the results we are, and whether there are things we could change to get more effective, more efficient results.
Ultimately, we’re using this model to improve operations for the health department — to make things more effective, more efficient, and to make a better impact within our community or the population we serve.
When we look at performance management and quality improvement working together, we are using data at the foundation of both of these — both concepts, both models. We’re using data to drive our decision-making, as a feedback tool, and to make improvements where possible.
We’ve talked about how we can collect performance measures — we can monitor measures and collect the data — but if we’re not using that information to make better decisions within our operations, we’re not really using it as performance management systematically throughout the health department or our program.
We want to use data to make better decisions, and we want to do this on a continuous, repeated cycle — monthly or quarterly — using this information as a regular feedback mechanism so we can analyze it, interpret it, and make improvements throughout the health department.
That’s where we get a performance management system — when we make this a part of our operations. It becomes business as usual. It’s intentional, it’s repeated, it’s continuous, and we’re using it just like we would any other piece of information in making decisions for our work.
When we look at measurement itself, we don’t just want to know whether our program is doing okay or where we could help it. We don’t just want to know if we’re performing better — we want to be able to continuously attempt to perform better.
Things will never be perfect. In Lean Six Sigma principles, the goal is to get as close to perfection as possible. But we’ll never be perfect. So we want to continuously monitor — even if we think things are as good as they can get — because we don’t want to lose that. We don’t want to backtrack in any way. We don’t want any part of our process to deteriorate or start to impact the results we’re seeing.
Continuous monitoring really helps with that.
We also want to have accurate performance information related to our process and our work. If we have meaningful data that’s relevant, reliable, useful, credible, and related to our work, then we can have meaningful feedback that tells us where we’ve been, where we are now, and — based on trend information — where we’re going.
So when we — again, this is a recap — but when we look at the components of performance management, we have our goals. In the Turning Point Model, these are referred to as performance standards. These are the goals we’re trying to achieve, whether they’re agency goals, programmatic goals, strategic priorities, or goals established using something like the Foundational Public Health Services.
Then we establish objectives — specific, actionable steps that we can take. If we achieve these objectives, we will ultimately achieve our goal. With those objectives and the achievement steps or milestones we need to reach, we then have specific measures or key performance indicators. These are pieces of information that provide feedback as we collect them. They demonstrate whether we’re working toward achieving our objectives and moving in the direction of achieving our goals. We can use this information to tell us: Are we making progress in the right direction? Are we on track to achieve our goals?
We’ve briefly talked about this before, but I’ll touch on it again. I like to talk about performance management using the “three rings” concept. If you don’t have an Apple Watch — or aren’t familiar with it — the watch encourages you to personally achieve three rings each day. They start over blank every morning. The green ring is a goal for exercise. It’s a default goal, but you can adjust it based on your personal goals. For example, 30 minutes of exercise per day. You can see here that this ring has been closed and is halfway around to achieving it a second time because we’ve logged 45 out of 30 minutes for that goal.
The blue ring encourages you to be up and moving around a few minutes every hour to avoid being sedentary. If you do that for at least 12 hours throughout the day, you’ll achieve this goal. In this case, we did 14 out of 12 hours and closed that ring.
The red ring is about how many calories you’re burning throughout the day — not resting metabolism, but intentional movement and exercise. You can see that this goal was not met. Based on not meeting this goal, there’s additional data available. I can look at the calories burned through exercise and see that maybe if I had achieved four miles on the elliptical, I would have closed the ring. So next time, I might try for four miles. I can use that additional data to make decisions.
Now, think about this from your program’s standpoint — or your health department’s. If there were three key performance indicators that you’d want to see every day or know about related to your program — not how many times the phone rings, not how we’re doing on implementing a new filing system, not how many birth certificates we’re printing or brochures we’re handing out — but three key points related to the work we’re doing, what would those be? If we wanted to narrow it down to the top three most meaningful indicators, what would your three rings look like for your program or your health department? What would that look like, however you’re trying to set this up across different categories?
Let’s say these are related to clinic wait times, completing inspections on time, getting audit reports back on time, or returning expired vaccines. If these were our measures for our program — regardless of what your program is — we’d be discussing this in our staff meeting.
Remember, performance data is a method of feedback. We’re going to talk about this feedback so we can have discussions and identify areas for improvement. When we look at this, we might easily see that something is happening on Fridays. Something is consistently happening on Fridays that’s keeping us from closing those rings. If this were clinic wait times, we’d want to talk about that. What’s different on Fridays? Do we have fewer providers? Are we starting the day with a staff meeting that’s putting us behind?
What’s happening, and how can we fix it? If we need to dig deeper, we can use QI tools — do a process map, root cause analysis, those types of things — to figure out what’s happening and what’s different on that particular day of the week that’s keeping us from meeting our goals.
That’s what performance management is: collecting the information intentionally, monitoring it, having conversations, looking at the feedback to see what it’s telling us, and then making decisions as we move forward so we can make improvements.
Even if things look perfect, let’s keep monitoring. Keep checking it so that if things start to deteriorate, we can recognize that quickly and make those changes quickly.
When we talk about the planning piece of what goes into this, we’ve talked about having goals, objectives, and measures. But we also want to talk about how we’re going to do this within our own program. What work needs to be done? What are our top priorities? What’s our time frame to complete that work? Who’s going to be responsible for doing it and making sure it happens? And what type of data do we need to collect to tell us whether or not we’re achieving these goals? These are some of the conversations we need to have as we build our performance management system, identify our priorities, or even update our priorities from a previous year.
It’s more important that we focus on the quality of our measures and whether they’re providing meaningful information, rather than how many measures we have. You could tell me you have a hundred measures, but that’s going to be a red flag for me. Most programs don’t have a hundred measures. That’s a lot of work. It’s likely a full-time job just to update those measures regularly — and all hundred of them cannot be meaningful. So we’d be having a conversation about each measure: What are you using it for? How is it providing meaningful information? What are you doing with it? What’s the action item after you review that measure? What is it providing you?
We want to focus on the quality of your measures — not how many you have. You can have good intentions, but here are some examples of what may not be considered quality measures.
If we’re just measuring activities instead of outcomes or impact — for example, “we handed out this many brochures,” or “we distributed this many materials,” or “we answered this many calls” — those don’t actually tell us if there was any impact, engagement, or enrollment. We don’t even know if the materials were read or if an entire case of 500 brochures was left behind at a health fair. Measuring that type of work would not necessarily be meaningful to your program.
When we look at counting numbers or counting “widgets” without context — like how many people got tested — we don’t know if those tests were complete. We don’t know if the number represents unique individuals or someone getting tested every week. We don’t know the results, the impact, or the total population being tested. So just counting the number of something isn’t always useful. And just because we’re doing more doesn’t mean we’re doing better.
We also need to think about measures that are too broad or vague. If we say, “we’re improving community well-being,” or “we’re improving priorities within the state health improvement plan,” what does that really mean? For what specific population? With what programs? How is this work making a difference? We want to be able to track progress, show positive impact, and demonstrate the difference we’re making. We want to make sure our measures are not hard to understand. They should be specific, time-bound, discrete, and tied to a goal — not vague or broad.
Another issue with quality is measuring what’s easy. I see this a lot. Folks want to measure how many meetings they’ve had, how many phone calls they’ve received, how many certificates they’ve printed, or how many brochures they’ve handed out. These are easy to measure because the data is readily available — but that doesn’t mean it matters.
Just because we hold meetings or convene partners doesn’t mean we’re making a difference, or that action, follow-up, or improvement is occurring as a result. So let’s think a little further: Why are we holding these meetings? What is the impact or action we hope is taking place as a result? Is there a way for us to measure that as we move forward?
We also don’t want redundant or duplicative metrics. This is another quality issue. For example, if we’re measuring the number of patients we see and the number of appointments we have, we need to think through whether those are actually measuring different things. Sometimes I’ll ask, “are we measuring a unique number of patients?” If you say, “we saw 35 patients this week,” and I had to come back for a follow-up, am I counted two or three times? Are we measuring unique patients or the number of appointments?
We need to be clear and ensure there’s meaning behind what we’re measuring. Is there a difference between the number of patients seen and the number of appointments? If so, what is that difference? Let’s try to make that more detailed so it’s not just a repetitive measure.
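As a minimal illustration of that distinction (the patient IDs and visit dates are hypothetical), counting rows in an appointment log gives the number of appointments, while counting distinct patient identifiers gives the number of unique patients:

```python
# Hypothetical appointment log: one row per visit, keyed by a patient identifier
appointments = [
    {"patient_id": "P001", "date": "2024-05-06"},
    {"patient_id": "P002", "date": "2024-05-06"},
    {"patient_id": "P001", "date": "2024-05-08"},  # follow-up visit for P001
    {"patient_id": "P003", "date": "2024-05-09"},
]

total_appointments = len(appointments)                               # every visit counts
unique_patients = len({row["patient_id"] for row in appointments})   # each person counts once

print(f"Appointments this week: {total_appointments}")
print(f"Unique patients seen:   {unique_patients}")
```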
Another issue is relying solely on process metrics without linking them to results, outcomes, or impact. For example, I’ve seen measures like, “how much time does it take to process this paperwork?” That might be a good measure — but we need to talk about what it’s providing. If you have to turn around a report within three business days, maybe we should look at the percentage of reports completed within that time frame. Measuring just the time it takes doesn’t necessarily link to an outcome, a goal, or something customer service-related.
Let’s make that more specific and focus on the goal we’re trying to achieve. Maybe we look at the percentage of items completed on time instead.
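A minimal sketch of that kind of measure, assuming a three-business-day turnaround goal and hypothetical report dates, could compute the percentage completed on time like this (np.busday_count is used so weekends don't count against the turnaround):

```python
import numpy as np

# Hypothetical report log: (date received, date completed) as ISO date strings
reports = [
    ("2024-05-06", "2024-05-08"),  # Monday to Wednesday: 2 business days
    ("2024-05-10", "2024-05-16"),  # Friday to Thursday: 4 business days
    ("2024-05-13", "2024-05-15"),  # Monday to Wednesday: 2 business days
]

# np.busday_count skips weekends, so the turnaround reflects working days only
on_time = sum(np.busday_count(received, completed) <= 3 for received, completed in reports)
pct_on_time = 100 * on_time / len(reports)
print(f"{pct_on_time:.0f}% of reports completed within 3 business days")
```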
These are just some examples of thinking through quality versus quantity. We don’t need all of these measures unless they’re providing useful information — information that, when we see it, we can use to take action.
If you’re measuring something and, when you look at it every month, you don’t anticipate taking any action based on what you see, then it’s probably worth having a conversation about whether that measure should be included in your dashboard.
Next, we’re going to talk about using data in action — or taking it to the next step with action.
There are eight steps that we really see in this process — from reviewing your measures to ultimately evaluating your outcomes. We’re going to talk about each of these individually, but the key is making sure we’re doing this frequently, involving the right people, and using the review and the feedback it provides to identify opportunities for improvement within our operations.
Once we’ve identified those opportunities, we develop improvement theories or projects, implement those actions, monitor progress, and then evaluate the outcomes.
When we look at the review piece, we don’t just want to collect measures for the sake of collecting them and say, “we did it.” We want to use the feedback. We need to have a review process in place — scheduled on a regular basis.
So, what type of review or level is needed for our program and our measures? We want to ensure that the data is meaningful, reliable, and consistently obtainable. It should be easily understood, clearly related to the work we’re doing, and actionable. That means we can look at the results and say, “we have a problem here.” Like in the earlier example — if we have a problem on Fridays, we need to have a discussion about what we can do differently on Fridays.
We also want to consider whether we need to transform any of these results into more meaningful information. The data we’re reviewing each month or quarter might be in a spreadsheet — just raw numbers. But if we’re going to share that externally, with leadership, a board, or stakeholders, we may need to summarize it in a way that’s easily understood and provides actionable insights.
So, think through how you’re going to review and present your information. Make sure you’re setting aside time to do this regularly and that it’s an effective process. We want to have the review, have the conversation, and ask: What are we going to do with this feedback?
Doing this on a regular basis — monthly or quarterly — helps us catch any shifts in our work or outcomes. That way, we can draw attention to them and make improvements. If we’re only reviewing once a year, we might have to wait a whole year to realize something needed to change.
Regular review also helps ensure our data is of good quality. If we start to see errors or things that don’t seem right — like an issue only on Fridays with clinic wait times — we can ask: Is someone reporting or collecting that data differently? Monitoring regularly helps us identify inconsistencies or errors.
We stay informed about our program’s performance by using these performance measures and the feedback they provide. That way, we’re ready to make timely improvements when needed.
Having a system of performance management contributes to a culture of quality improvement. We build a structured routine. We make this the way we do business as an agency. We focus on efficiency and effectiveness. We foster a culture of accountability and improvement. Sharing and discussing data, and identifying problems when they exist, is part of transparency.
It’s also important to identify the right people to involve in the data review. We want the group that’s doing the work — our programmatic staff who know the ins and outs of the information we’re reporting. Let’s have a conversation with leadership, too — share with them the goals we’re achieving and the improvements we’re making.
Do we have a performance management or quality improvement council or committee that would want to regularly review this? Do we have subject matter experts we want to consult? Maybe we need their input or assistance in evaluating the feedback and making improvements.
When we’re looking for opportunities to improve, we want that feedback loop. We want to recognize what isn’t working. That’s what provides valuable feedback — understanding what’s not working so we can identify why and make a plan to move forward.
In the Turning Point Model, quality improvement is one of the four quadrants. If the feedback loop tells us something isn’t working, we can dig deeper with QI tools. One example is root cause analysis. This helps us identify opportunities for improvement by focusing on the underlying issues contributing to the problem — not just the symptoms. That leads to more sustainable and meaningful solutions.
We also want to focus on continuous improvement. That’s a core part of performance management. Continuous monitoring of data often reveals gaps, inefficiencies, or areas that can be optimized. If we identify these opportunities, we can refine our strategies or activities to improve our overall outcomes — and that’s what we want to see.
Adaptation to change is also important. When we review data, look at feedback, and identify opportunities for improvement, we recognize that our programs can change. Circumstances evolve. Environments change. Funding streams change. Identifying areas to improve allows us to adjust strategies in response to new data. It keeps our efforts relevant and allows us to make changes as needed.
Ultimately, recognizing opportunities for improvement is how data drives innovation. It ensures that our actions are based on data — not just on a hunch or a feeling that something might not be working.
Once we identify those opportunities for improvement, we also need to make a plan. A needs assessment helps us identify the problem areas that require improvement. Then, as a group, we start to prioritize. The data should help us prioritize issues based on their impact, feasibility, and urgency. How urgent is it that we fix this?
That way, we can ensure our resources are focused on the most critical areas. Having those intentional conversations on a regular basis helps us identify these opportunities for improvement quickly — and prioritize which ones we need to work on first.
So, it's important for us to have a baseline measurement. If you've done a QI (Quality Improvement) project before — or really any improvement effort — you know that baseline metrics are essential. They help us track progress by showing where we started. We also need to define our goal — what it is we're trying to achieve — and use data to establish that baseline, clearly showing where we are now. This also helps us set a target as we move forward.
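As a small, hypothetical sketch of establishing a baseline and setting a target (the quarterly results and the 10% relative improvement goal below are illustrative assumptions, not figures from the webinar):

```python
# Hypothetical quarterly results for a measure (e.g., % of inspections completed on time)
historical_results = [78.0, 81.0, 76.0, 80.0]  # the last four quarters

baseline = sum(historical_results) / len(historical_results)  # where we are starting from
target = min(100.0, baseline * 1.10)                          # e.g., aim for a 10% relative improvement

print(f"Baseline: {baseline:.1f}%   Target: {target:.1f}%")
```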
Using data-driven insights helps guide us in creating targeted interventions that address the specific issues we've identified. This ensures our solutions are evidence-based and grounded in data. Just like we saw on the slide with the Turning Point Model, data is the foundation of both performance management and quality improvement. Performance management involves regular, ongoing feedback from the data we collect, while quality improvement relies on that same data to identify problems and set targets for change.
When we implement actions — whether it's changing staffing, adjusting the check-in process, or making a specific change for Fridays — we need a clear implementation plan. That includes defined tasks and responsibilities. Then, we need to pilot or test those changes to see if they lead to improvement. This is where the PDCA (Plan-Do-Check-Act) cycle comes in. We plan the change, do it, check the results using data, and act based on what we learn. If the results are positive, we may standardize the change. If not, we may need to make further adjustments.
As we implement improvements, it's critical to continuously monitor progress. This is a key part of turning data into action. We want to see how the data responds to our changes and whether we're moving toward our goals. Frequent monitoring allows for timely adjustments. It also helps us learn from experience — analyzing what worked and what didn’t gives us insights to improve future initiatives.
When making evidence-based decisions, we should use evaluation data to adjust strategies, replicate successful efforts, and address any shortcomings. Evaluating outcomes is just as important as implementing changes.
To monitor performance measures continuously, we need to regularly bring people together to analyze data and make informed decisions. This includes recommending improvement actions, integrating QI methods like PDCA or root cause analysis, and driving meaningful change.
Here are some examples of performance dashboards. This one, built in Excel, outlines goals and objectives with supporting measures. It includes a target column, the current reporting period, and directional arrows that are color-coded to show whether metrics are improving, declining, or holding steady. These visual cues help direct our attention. For example, if an arrow is red and pointing down, it signals a need for discussion. The Excel formulas are set up to update automatically based on entered results, highlighting areas that need attention.
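The following is a rough approximation of that kind of dashboard logic, sketched in Python rather than Excel; the measures, targets, and directions here are hypothetical, and a real dashboard's rules will differ:

```python
# Hypothetical dashboard rows: (measure, target, prior period, current period, higher_is_better)
rows = [
    ("Clinic visits completed within 30 minutes (%)", 90, 84, 88, True),
    ("Average clinic wait time (minutes)",            30, 28, 34, False),
]

def dashboard_status(target, prior, current, higher_is_better):
    """Return a direction indicator and a met/not-met flag, mimicking color-coded arrows."""
    if current == prior:
        direction = "steady"
    else:
        improved = current > prior if higher_is_better else current < prior
        direction = "improving" if improved else "declining"
    met = current >= target if higher_is_better else current <= target
    return direction, "goal met" if met else "goal not met"

for measure, target, prior, current, higher_is_better in rows:
    direction, met = dashboard_status(target, prior, current, higher_is_better)
    print(f"{measure}: {current} (target {target}) -> {direction}, {met}")
```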
Some health departments use Tableau. While it may not be as visually striking in terms of drawing attention to specific measures, it does allow you to see trends over time, filtered by program or measure.
Regardless of the tool, it's important to set realistic and reasonable goals. If a measure has shown "goal not met" every quarter for two years, we need to discuss whether the goal is achievable. For example, while we all want 100% compliance with vaccinations, we're unlikely to achieve that due to medical or religious exemptions or general resistance. If we continuously report "goal not met" at 100%, we should consider whether a 90% target is more realistic for our community. This doesn’t mean lowering standards just to meet goals, but rather ensuring our goals are evidence-based and appropriate for our population.
When identifying improvement efforts and putting data into action, we should monitor results frequently — monthly or quarterly. If results aren't where we want them to be, we can identify opportunities for QI projects. Sometimes, the solution is minor. For example, if Friday clinic wait times are consistently high and it's due to a delayed morning huddle, maybe the fix is simply starting the huddle 15 minutes earlier. That small change could prevent delays for the rest of the day.
We can also use customer feedback or qualitative data to help us assess and identify opportunities for improvement. Again, it's important that we focus on collecting information and utilizing feedback to identify those opportunities. This supports a culture of transparency and accountability and helps foster ownership in problem-solving and making improvements. Ultimately, it contributes to achieving our program goals and becoming as effective and efficient as possible.
I have a couple of Menti polls I’d like to pull up before we move to questions. If you’ve never used Menti before, you can go to menti.com and enter the code, or you can scan the QR code shown here with your smartphone.
The first question I want to ask is: Do you have any examples you’d like to share about how you, your program, or your health department are currently using data in action? Let me bring the QR code back up for you.
Monitoring implementation of a chatbot and using feedback to coach, train the chatbot, and improve the knowledge base — that’s awesome!
I understand if you're still trying to wrap your head around the data we have. Figuring out how to identify priorities and make the data useful is very important.
I love seeing that you're having monthly and quarterly reviews, looking at the data to find areas for improvement.
Using spending trends to forecast funding needs — great.
Feeling a bit stuck in just counting how much we’re doing — that’s okay.
Using your CHIP and CHAW to drive decision-making — excellent.
Collecting data quarterly and monitoring the information — great.
It’s okay to be brand new and just starting. We’ve all started at the beginning at some point, and that will continue to happen.
Using a platform to look for opportunities for improvement — good.
Taking baby steps using QuickBase, tracking participation in advanced care — great.
Addressing disparities in utilization — I love that.
Using Clear Impact to tell the story and progress outreach to the community — excellent.
Tracking time spent processing permits and having an agency dashboard that includes your strategic plan and KPIs — I love that too.
Violence prevention programs — great work!
Now, I have another question for you: What are some supportive aspects your health department or program has in place for monitoring performance?
Hired an evaluator and a results-based accountability specialist.
We have a dashboard.
We use the ClearPoint platform for data and publish it on our website — that’s great for transparency and communicating results.
It’s okay to be here just to learn what others are doing. I think many of us are here for that. I learn something new every time I do one of these from what other health departments are doing.
Staff trainings.
A performance management team.
Developing a dashboard.
Continuous quality improvement.
Quarterly meetings to communicate program results.
Presentations to staff.
Our PM and QI Council.
One-on-one support for interested staff.
QI coordinator.
Tableau dashboards.
Quarterly reporting — this is all great feedback, and we’re all doing different things.
And lastly, any thoughts on specific action items or things we can do after today to begin implementing performance management within your program or health department?
Developing a training.
Determining better KPIs.
Monitoring data more frequently.
Creating better dashboards.
Choosing what’s most important to achieve results — yes!
Emphasizing quality over quantity — absolutely.
Procuring a better tracking system.
Reviewing KPIs and targets.
Setting more realistic goals — I love that.
Gaining leadership and staff buy-in.
Making things more visible.
This is wonderful feedback.
Now, back to the slides for just a moment. I want to briefly talk about why QI and performance management are so important in public health. We want to guide our work with proper monitoring and assessment related to the work we’re doing and the performance we’re seeing. We need to use that data and feedback to improve our program activities and focus on building an organizational culture that supports continuous improvement — from leadership to operations and back again.
Thank you. Any questions?
BRADLEY:
Hi Amanda, thank you so much — what a really lovely presentation. There are lots of questions, so I think we’re going to have some good discussion here as we wrap up.
The first question I see — oh my goodness, I hope you’re seeing all the reactions coming in! You’re getting applause, party symbols, and hearts, so clearly you’re speaking to people who are really excited about what you’re saying. That’s great!
We have a strategic plan with specific strategies and KPIs. Do we need different goals and performance measures for a performance management system?
MCCARTY:
That depends. Do you feel like your strategic plan and priorities allow you to obtain feedback about the work happening within your programs, especially in relation to community needs? And does the feedback you’re getting based on the strategic plan drive improvement efforts across the health department in a way that gives you all the information you need?
For example, are you able to tell — based on your strategic goals and priorities — how you're doing in responding to communicable diseases or improving access to clinical care? It really comes down to preference and how well your strategic priorities support decision-making and improvement across the department.
So maybe it’s a good starting point — and then yes, absolutely, you can build from there and make things more specific.
BRADLEY:
We have eight divisions within our agency. Do we need separate division-level performance measures, or can we have an agency-level goal with each division contributing through specific objectives and measures?
MCCARTY:
Absolutely. There’s a previous presentation in this series that focused on using the Foundational Public Health Services (FPHS) framework, which is a great resource. It talks about using FPHS as a model to determine whether the health department is collectively working toward foundational goals, and how programs can collaborate to achieve them.
So no, you don’t have to divide everything by program or division. You can have overall agency-level goals and work together to achieve them. It’s very individualized and should be tailored to your agency.
BRADLEY:
How do you identify data sources, and what about datasets you don’t own? Any guidance on using proxy data when meaningful measures can’t be created due to lack of data?
MCCARTY:
In some cases, health departments have to find creative ways to collect the data they need. That might include gathering qualitative information through feedback surveys, for example.
Start by being clear about what you’re trying to measure or understand. Are you measuring an outcome or a process? Then identify what would tell you that information. Do a scan of existing data — what internal sources or reports do you already have? Are there external sources, like CDC databases, that could help?
Also consider whether qualitative data could help you establish trends. This topic was covered in our last presentation, and the recording includes more detail on this.
BRADLEY:
Yes, this has been quite the series of performance management webinars! I’ll drop the link to the recordings in the chat again as we wrap up. There was also a question about transcripts — yes, that’s a goal of ours, to make transcripts available with the recordings.
If we’re using the RBA model and our population outcome is to increase the number of pregnant women tested for syphilis by X% to prevent congenital syphilis, would it be useful to monitor how many women we test, even if it doesn’t track the actual impact?
MCCARTY:
Yes, I think if you're still trying to measure or see the impact you're making, it’s okay to track the number of women tested. But also think about why you're testing. If the goal is to prevent or reduce congenital syphilis, can you also track outcomes — like how many test positive, how many receive treatment, or how congenital syphilis rates change over time?
Testing is a necessary first step in preventing mother-to-child transmission. If you can tie those testing numbers to specific programs or outreach efforts, it helps make your strategies more targeted and impactful.
BRADLEY:
Very often, staff have a fear mindset if they’re not meeting performance targets. Can you share more about how to shift that culture?
MCCARTY:
Yes — this really comes down to communication. It’s important to explain how we’re using the information. Problems are gold — we want to identify them so we can improve. There’s no punishment or negative consequence here. We need this information to make things better. That’s the culture we’re trying to build — one focused on improvement.
BRADLEY:
What advice do you have for someone who sees the value in this work but doesn’t have decision-making authority or access to the programs doing the work?
MCCARTY:
So, I think there are a lot of resources out there — ASTHO, NACCHO, PHAB, and PHF. There’s a wealth of information available for performance management, including toolkits that can really support your efforts. These resources can help you educate senior leadership or other staff on the importance of performance management and why we need to do it. I would definitely recommend leaning on those resources to support implementation.
BRADLEY:
Awesome. I think we have time for one more question before transitioning back to the slides and sharing the evaluation information for today.
Do you have any advice on developing outcome-based performance measures for departmental risk prevention — like making improvements in department-level policies and procedures?
MCCARTY:
Not right now, but I can make that a follow-up and we can get that information back out to folks. I’ll see what I can find.
BRADLEY:
Awesome. And I think we’ll need to do that with some of the remaining questions in the chat as well. I know we’ve done that in the past, and Amanda, we really appreciate that. There are also some comments coming in from people who are genuinely grateful for you sharing your knowledge through these presentations. So thank you for taking the time and effort to be here with us.
MCCARTY:
Sure, thank you.
BRADLEY:
Let’s transition back to the slides if we can, Leslie. There’s some evaluation information here for you all. If you don’t mind taking a few moments to respond, we’d really appreciate your feedback — not just on today’s presentation, but also on topics you’d like to see covered in future webinars. We’ll drop that information in the chat as well.
Thank you all so much for your time today. I see one more question in the chat: Is this the last in the series?
We don’t have any additional sessions planned at this time, but we encourage you to keep an eye on the ASTHO events page. And as always, don’t hesitate to reach out. Our email address for performance improvement is on the slide, and we’d be happy to hear from you about your performance management needs. I know the Public Health Foundation would as well.
Thank you all so much for your time. Have an amazing rest of your afternoon. I’m sure there will be a next time — this is such an important topic, and there’s always more to explore. We’ll see you next time.
Elevating Tools and Resources for Transforming Performance Management
During this webinar, the Public Health Foundation launched a new webpage focused on the performance management needs of PHIG recipients. The event introduced updated resources in the Performance Management Toolkit, designed to assist organizations at any implementation stage. The toolkit included self-assessments, infrastructure checklists, and skills development resources to enhance public health performance management efforts.
Speakers
- Amanda McCarty, MS, MBA, MHA: Performance Improvement Expert, Public Health Foundation
- Jack Moran, MBA, PhD, CMC, CQM: Senior Quality Advisor, Public Health Foundation
Transcript
Some answers have been edited for clarity.
MELISSA TOUMA:
Good afternoon, everyone. It looks like almost everyone is in from the waiting room, so I think we can get started. Thank you so much for joining us today for our webinar called Elevating Tools and Resources for Transforming Performance Management.
From our experience, we know how important it is to have actionable tools and guides to support our planning efforts. We're really excited to have the Public Health Foundation presenting today to share the updates they've made to their Performance Management Toolkit. This toolkit was designed to support an organization’s Performance Management System needs, regardless of implementation stage. So, no matter where you are in your Performance Management journey, I think you’ll find some really useful resources to guide your work.
We’ll go ahead and drop a link to the online toolkit in the chat — it is live. As we get started and throughout the webinar, please share in the chat your health department name and, perhaps as an icebreaker, what tools or resources you’ve found to be most useful to you in your Performance Management journey so far.
A couple of housekeeping items: closed captioning is enabled for today’s webinar, and we are recording so that anyone who could not make it today can still view the presentation. It will be posted to ASTHO’s website and also to TRAIN. Additionally, if you think of any comments or questions, please feel free to drop them in the chat. We may have time at the end for a Q&A, or we can try answering questions in the chat as we go.
Thank you to everyone who is already adding things to the chat. Now, onto our content for today. I’m really happy to introduce our speakers.
Amanda McCarty is the Performance Improvement Expert with PHF and has provided training and TA for public health departments in Performance Management Systems development, workforce development, quality improvement, and the development of evaluation plans and logic models since 2013.
Jack Moran is a Senior Quality Advisor to PHF and brings to the organization more than 30 years of quality improvement expertise in developing QI tools and training programs, implementing and evaluating QI programs, and writing books and articles on QI methods.
With that, I will pass it over to the PHF team.
AMANDA MCCARTY:
Thank you. All right, good afternoon, everybody. We are here to talk about the next step following the previous two webinars. If you were able to join us, we started with an overview of Performance Management in the first webinar, followed by operationalizing Performance Management. If you weren’t able to attend those, the recordings are available through ASTHO.
Today, we’re going to talk about some of the updated tools and refresh the Turning Point Model framework. We’ve also created a Performance Management Toolkit that is available to everyone online. We’ll introduce you to some of those tools and highlight additional resources that are available to you.
To start, we have the Turning Point Model, on which we’ve based much of our Performance Management work, training, and public health efforts. With this model, we’re looking at performance standards — our ability to set expectations and goals for what we’re trying to achieve. We want to have meaningful performance measures to help us assess our progress toward those goals and standards. We also want to collect meaningful data — not just data we can collect because it’s available, but data that truly informs our work.
We aim to collect that data on a regular reporting cycle. The more often and consistently we collect and review that data, the better we can determine whether we need to change what we’re doing to get better results — or if we’re doing a stellar job and should continue on the same path.
Depending on the results we’re seeing, the quality improvement side of this model comes into play. When we’re not achieving our goals or moving in the right direction, we can dive deeper with quality improvement tools to understand the processes or factors behind the data and results. This helps us improve our processes or the work we’re doing within our programs.
Maybe we’re no longer offering the right services. Perhaps things we were doing 30 years ago aren’t having the same impact today, and we need to change our approach. That’s the essence of the model. It’s circular because it’s a continuous cycle. We never really stop. We keep learning from the information we collect, using it as a management tool, adjusting our actions, and seeing what the next cycle reveals.
Alongside all of this, we need visible leadership — support from leadership that communicates this is how we do business. It’s not just another task or job; it should be the way we operate. This mindset drives our culture toward continuous improvement and accountability. We set goals, collect measures, work toward achieving those goals, review the information regularly, and apply quality improvement as needed.
So, the next slide here just goes into a little more detail for everyone on the Turning Point model. It further defines what we mean when we talk about setting performance standards or the goals we’re trying to achieve. These may change from year to year. They may shift based on a community health assessment or a state health assessment, and strategic priorities within the health department may also change. That can affect some of our priorities or performance standards, even at the program level, as we work to achieve them across the health department as a whole.
With performance measures, what really matters is that they are meaningful. We want to collect information related to our processes and outcomes, looking at a balance of different types of measures. We don’t just want to count widgets — we want it to be meaningful work that we can use as a management tool.
That ongoing, regular reporting of results — how we collect and share this information throughout the health department — is also key. How are we using it to guide our discussions or decision-making? Do we have this as a standing agenda item in our leadership and management team meetings? Are we making sure employees understand what measures their work contributes to, and how their work supports the health department in achieving its goals?
It’s not just about reporting progress. It’s also about having discussions around what the data is telling us and using that information — not just reporting it and letting it sit on a shelf.
Then there’s the quality improvement piece. Within this culture of Performance Management and quality improvement, you’ve heard me say before: you can’t really have one without the other. Performance Management shows us the landscape of what’s happening and the work we’re doing in the health department and within our programs. The quality improvement side helps us dive into the details behind the operations to really understand how we’re getting the data we are, and how we might identify root causes or look for ways to improve the results we’re seeing in our measures — and the impact we’re making against the standards we’ve established.
We’ve created the Performance Management Toolkit, which is available on the PHF website. Melissa mentioned earlier that the link to the toolkit was dropped in the chat. We’re going to go through some pieces of that tool for you today. As we do, we’ll talk about how it’s organized, what tools and resources you might benefit from in each section, and even some of the assessments and planning or implementation plans you can use.
We’ve really tried to think of everything, based on feedback we’ve received from health departments — what they’d like to see or what they need. Even when we do technical assistance and help health departments with coaching and implementing Performance Management plans, we always gather feedback. When technical assistance and support ends, we ask: what are folks most overwhelmed or intimidated by when it comes to continuing implementation and rolling it out? We use that feedback when developing these resources and tools to figure out what would be most helpful for you all in doing this work in public health.
So, we’ve created this toolkit, and we’re going to walk through some of the different sections for you today. I’m just going to share my screen briefly to show you some of the functionality of the toolkit — just give me a second here.
When you log on to the website and look at the toolkit, it’s organized around the Turning Point model, just as we’ve been discussing. You’ll see some quick links to that information and to the Performance Management framework itself. It helps guide you through understanding the model — what visible leadership looks like, for example, and how transparency and communication play a role.
You’ll also find approaches and information related to assessments, whether you’re a state health department or a local one. We’re going to talk about each of these. There’s also a homepage that walks through what Performance Management is and outlines the different sections available to you.
It’s very user-friendly. It’s not overwhelming. Whether you’re just getting started or trying to refine and make your Performance Management system more meaningful, I think there’s something here for everyone. Even when it comes to building a Performance Management Council, the toolkit outlines the steps involved, as well as considerations for ongoing sustainability and communication.
So, I’ll turn it back over. All right, and Jack, I think you were going to talk through the assessments piece?
JACK MORAN:
Yeah. One of the things we built into the toolkit was assessments — we included five of them, as you can see. Before you do an assessment, you’ve got to realize that Performance Management can be an uncomfortable topic in any organization. Be clear about the purpose and intent before you begin. That helps reduce rumors and ensures you get accurate answers.
These assessments help you get a snapshot of your environment — what it looks like. Once you know that, you can start to develop actionable strategies and implementation plans. You might be installing a new Performance Management system, revising one you already have, or resurrecting one that’s fallen off the radar.
You want to use these assessments before making any major changes. One of the key things they help with is establishing a baseline. That’s important because, as you move forward, you’ll know where you started. Too many organizations jump in and then realize they don’t know how to measure progress because they never established a starting point.
You want to track the success of the system — its implementation, use, and spread. Another thing I always suggest is to do one assessment for the organization as a whole, and then have individual divisions or departments complete their own. That helps you identify areas of excellence — places that are already doing well and can serve as models. You may also find places that aren’t doing anything yet. This helps with long-term planning.
So, the first one we want to take a look at — Amanda jumped a little bit ahead on this — but why don’t we talk about where you are? We want to get a sense of consensus from the group. Are you at the beginning, with minimal awareness? Is there much organizational agreement around data-driven decision-making?
One of the big things we see at every level is that people worry about how Performance Management is going to be used. Everyone tends to think it’s going to be punitive. But again, it’s really about understanding where we are and where we’re going from here.
If you move up to Stage Two, you start to see the need for Performance Management and its usefulness. This is where leadership really needs to be involved. As Amanda mentioned earlier, we need visible leadership — not just saying the right words, but actually being involved, doing measurement, and using the data when they get it.
At this stage, you might find you don’t yet have a centralized data collection system. As you move up to Stage Three, you start to see limited deployment. Maybe you’ve done pilots with a few parts of the organization. You begin to make sure the measures align with strategic planning, and you start to see deployment primarily at the program level — not yet agency-wide. That’s usually because we start with pilots, and they stay at the program level.
Then, when you get to Stage Four, you see it spread across the agency. You have a well-functioning system. You’ve got a PM/QI Council or team in place, and they help direct the use of the system. Measures are collected regularly and analyzed, and people understand what role they play in the system.
Finally, if you reach the last stage, you have a culture of Performance Management — and also a culture of Quality Improvement. This is a tough stage to maintain. We’ve seen many places reach it, but then people leave, and you have to rebuild. You always have to be pushing to keep that culture in place.
So, what we want to do now is a quick poll to see where you think you are — Stage One, Two, Three, Four, or Five. Let’s go ahead and show the poll.
MCCARTY:
There it is. You can participate in the Mentimeter poll by going to menti.com and entering the code, or you can use the QR code with your smartphone camera, and it’ll take you to the link.
Our first question today is just looking at the stages of agency Performance Management: where do you feel like you currently are?
MORAN:
Wonderful — we have a nice mix of folks. Great! Thank you all for participating in that. One of the things we’ve seen, too, is that a lot of people are between Stage Two and Three as we go around working with various health departments.
One of the next tools we want to look at is the PDCA assessment we’ve developed for PM systems. There’s one for state and one for local. The difference between them is that the state version talks about the SHIP, and the local version talks about the CHA. But they’re basically the same questions. Have people go through and complete it, using the scoring that ranges from zero (nothing in place) to four (very effective). Don’t get into decimal scoring like 1.23; just use whole numbers. Get a consensus from the group, whether it’s for a department or the organization as a whole.
As you begin to look at the results, you’ll get scores. The nice part about this tool is that when you click on the next page, it produces a radar chart for you. It shows what you look like now. Then, as you move forward, you can identify areas for improvement.
In the example shown, you can see that people didn’t do much planning at the beginning. They did a lot of acting, and the scores reflect that — they’re not very high. What you really want to do is spend time on the assessment and planning, then start doing, checking, and acting.
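The toolkit’s spreadsheet generates that radar chart for you, but as a minimal sketch of the same idea, a chart like it can be drawn from whole-number consensus scores with matplotlib (the scores below are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical consensus scores (0 = nothing in place, 4 = very effective), whole numbers only
categories = ["Plan", "Do", "Check", "Act"]
scores = [1, 3, 2, 3]

# Repeat the first point so the plotted polygon closes back on itself
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles, values, marker="o")
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(categories)
ax.set_ylim(0, 4)
ax.set_title("PM system self-assessment baseline")
plt.show()
```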
Another thing we want to look at is the stages of process performance. This is another tool you can use as a group. It emphasizes that to have a good PM system, you need to document your processes. How are you going to measure them if they’re not documented? How can you measure them if you’re constantly reacting and fixing things?
We need to standardize processes and measure them in a timely and relevant manner. We also need to identify the points in the organization where it’s easiest to measure.
As we get to Stage Five, we reach robust quality improvement. We talk all the time about building that culture — always improving. But what happens a lot is that organizations get out of sync with these phases.
Go ahead, Amanda.
What happens is you see that the stages of Performance Management and the stages of process performance don’t align. So, if you think you have a formal, agency-wide PM system but you’re only at Stage Two in process performance, you’re out of sync. You’re not getting good measurement.
As you’ll see on the next slide, when you put the NACCHO roadmap in, the stages have to line up. You need documented processes, measurement across the board, and formal QI. Then you can build forward.
What we see a lot of times is people say, “well, we don’t really have things documented,” or “we’re not getting good measurement.” And often, what I see when working with clients is that the first thing the quality improvement team has to do is collect data — because they don’t have any.
So, we have to go through that process with them: how do you collect the data? The problem is, the processes are undocumented, and there’s no good system for collecting data. Once they do that, then they can start measuring. But that delays the process moving forward.
So, as you go forward, think about how you align stage to stage. Then you’ll start to see how good Performance Management supports quality improvement and helps push you forward.
Thank you.
MCCARTY:
We do have another Mentimeter poll to share, based on the stages of process performance and Performance Management. I’m just going to pull that up as well, just to see where you all feel like you are within your agency.
Okay, great — we see another good mix of folks. The majority are within that Stage Two to Stage Three range. We do have one at agency transformation, which is great, and also some that are just starting from the beginning. So, a good mix of agency stages here and where you are on your journey.
MORAN:
Yeah, and again, this is sort of what we see in the field today as we’re working with our clients. A lot of people are in Stage Two and Three — aware of the need to change and hoping to make change and move toward renewal.
MCCARTY:
Thank you all for participating and sharing there as well. We’ll go ahead and pull the slides back up.
All right, so we’re going to talk a little more about the “Skills” section that’s available in the toolkit itself. There are a variety of tools available to you — whether you want to sit down with some of the programs within your health department and help facilitate a discussion around designing goals, identifying milestones, defining objectives, and supporting measures.
Oftentimes, what we see is a disconnect between what programs say they’re working to accomplish and what they’re actually measuring. Sometimes, they may be measuring things simply because they always have, or because it’s easier to measure, or because it’s something they’ve been tracking for a grant. This tool is a great way to sit down and ensure there’s alignment from goals down to measures, and to have that conversation — talking through those pieces.
We also have some graphic tools to help us better understand data visualization and to support the creation of a data management strategy — from collecting raw data to transforming it into useful information that can guide our next steps. We’ll also talk about aligning our efforts with other agency initiatives, and again, about collaboration with quality improvement initiatives and related efforts.
When you’re on the toolkit, you’ll see the “Performance Management Skills” section and all the different tools available to you. You can open the actual attachments within each section.
For example, the “Designing Goals and Objectives” tool walks you through how to align goals with meaningful objectives and then develop meaningful measures. We also try to break it down to the fundamentals — what’s the difference between a goal and an objective? For folks who haven’t worked with these terms before, they’re often used interchangeably. So, we want to level-set with definitions and use this tool to build from our goals, to defining our objectives, to identifying the work we’re putting in place to help us get there — and then, how we can measure that work in a meaningful way to show we’re moving toward achieving our goal.
We’ve provided an example within the toolkit of what an overall goal might look like — for instance, a competent public health workforce — and some objectives that might support that. These objectives represent the actual work to be completed or the milestones to be reached. We often make them SMART, or set them with a time-bound target in the measure itself. It’s really up to your preference how you want to do that.
Talking through each goal and using this worksheet to facilitate those discussions and keep all the information aligned is a very useful tool. It also includes how often you plan to report on that data.
There’s also an article included to help with your data management strategy. Oftentimes, we collect a lot of data, and while we may want to share all of it, people really just want to see the meaningful information — what’s most important. So, this tool helps you take that raw data, consolidate it, translate it into useful information, interpret what it means, and communicate the results clearly and effectively. It helps you share the priority information quickly and in a way that supports decision-making, rather than overwhelming people with everything you know.
There’s also the Pyramid Alignment Tool. This walks you through steps within the toolkit to ensure that you’re aligning performance goals and standards with your strategic objectives or other agency-level plans — and understanding how your work contributes to those broader goals.
Within our programs — ranging from clinical services to vital statistics to tobacco prevention — they’re all contributing to the health department’s goals. The health department has a strategic plan and identified strategic priorities to help us achieve our overall goals and accomplishments as a public health agency. We want to make sure that we’re aligning our work and efforts accordingly. The Pyramid Alignment Tool is designed to help with those conversations, especially if you notice a large disconnect between your work, the strategic plan, and what the health department as a whole is doing.
We also talk through the change management journey, making sure we’re taking the time and effort to do this correctly and communicating why it’s important. This includes the work necessary to build a strong foundation for Performance Management, as well as the communication and leadership support needed to build accountability. We also want to make sure we’re planning for any unforeseen events or barriers that might arise during implementation.
Just like we discussed earlier with the stages of Performance Management, we also include the stages of change management. In Stage One, we recognize the need to change for survival. In Stage Two, we begin to see problems emerging in the data and realize that change is necessary. We recognize several quality issues and find ourselves working more reactively than proactively.
In Stage Three, we see change for renewal. We begin to shift the culture and build Performance Management more intentionally. Leadership is supportive, but they may not yet be fully communicating that support or building in accountability.
Then, in Stage Four, comes change for excellence. We’re working toward institutionalizing these practices, though transformation hasn’t yet occurred across the entire agency. At this stage, we have coaches and mentors helping with Performance Management, supporting programs in defining goals, objectives, and meaningful measures. We’re starting to see more standardized policies and procedures that support information exchange and ensure consistent information is available to employees. We’re also seeing data begin to drive decision-making.
Finally, we move into Stage Five: agency transformation. At this point, we have a culture of Performance Management. This is how we do business — it’s not something extra or new that’s been added to people’s roles. It’s expected. Management wants these discussions to happen. They want us to talk about our data. We’re constantly working to improve; we’re never stagnant. Things aren’t perfect, and we know that. We know we need to continue monitoring to sustain the goals and improvements we’ve made. All employees — from top to bottom — are involved in Performance Management. Communication, accountability, and a focus on continuous improvement are embedded in how the organization operates.
And I just realized — I launched the Mentimeter poll at the wrong point earlier when we were talking about the stages of change management. I apologize for that.
We also have an “Infrastructure” section in the toolkit. Within that section, there are several checklists available to help you as you develop your PM and QI plan. If you already have a QI plan and want to combine it with your Performance Management plan, there are checklists to guide you through that process. These aren’t meant to be rigid or exhaustive — they’re more like guides to help ensure nothing gets missed or overlooked, especially if this is your first time doing this work. They also help you consider whether certain tasks are applicable to your health department.
There’s also a checklist and guidance document for building your Performance Management and Quality Improvement Council. This is the group that will help lead and support these initiatives, providing coaching or technical assistance to programs as they develop performance measures or implement QI projects. This council can help drive these efforts and provide support throughout the health department.
We’ve also developed an implementation plan for your consideration as you roll out a Performance Management system. This helps ensure you’re thinking through all the components — not just drafting goals, objectives, and measures, but also planning for ongoing monitoring, reporting, discussion, and updating of measures.
In addition to the checklists, there’s also information on designing and maintaining a strong communications plan for your efforts. With the checklist for your PM and QI plan — which, again, is an accreditation requirement — we’ve included suggested sections that could be part of your plan. You’ll want to make sure you’re covering why this work is important to the health department, and that you’re level-setting with definitions and terminology.
Some health departments use the term “goals,” others say “standards.” Some say “measures,” others say “indicators.” It really depends on your department’s preferences and what leadership has established. The key is to ensure consistency in terminology and definitions across the board.
You’ll also want to talk through the meaning of the plan, the definitions you’re using, and the models for improvement that will guide your work.
We also consider the foundational elements of a QI culture — things like employee engagement, impact, empowerment, teamwork and collaboration, leadership support, customer focus, QI infrastructure, and continuous improvement. It’s important to assess your work against each of these elements and identify ways to improve in each area.
This is a high-level, bulleted list, but we do provide an overall summary and checklist to support you in developing and implementing a plan. If you already have separate plans, they can be combined into a single PM and QI plan. The checklist includes items to help facilitate discussions around developing your plan — or, if you already have one and are preparing for reaccreditation or looking to update it, you can use the checklist as a guide. Even if your plan already exists, the checklist can help you identify anything that might need to be refreshed or added.
The checklist walks through each of the elements we mentioned earlier. For example, have you used a QI cultural assessment to identify where you are? You can also use a radar chart or similar assessment — like the one Jack showed earlier — to help set goals, identify gaps, and prioritize findings from your assessment. We also have a checklist specifically for developing a PM/QI Council. This walks through steps to consider when structuring the council: who should be involved, how many people, and how to ensure regular, ongoing conversations about Performance Management data. It also covers how to manage ongoing reporting and how to use that information to guide improvements from year to year.
This group of individuals helps support, drive, and communicate these efforts across the agency. The checklist also emphasizes the importance of having a PM and QI plan, which you can refer back to using the earlier checklist. The council should also have a process for reviewing performance measures — ensuring programs are having conversations, checking progress, and confirming that the measures are meaningful.
It’s important to ask: are the measures in the Performance Management system demonstrating agency value? Are they showing the impact we’re making? The council should meet with programs to discuss what they’re learning from their data, what their measures are telling them, and how they can identify improvement opportunities or implement QI projects as a result.
We also provide a guide for implementing the Performance Management system in general. This includes ensuring that a council is in place to support the work, mapping out high-level activities, and working with programs to draft goals, objectives, and measures. It also includes guidance on developing a communication plan related to Performance Management.
Keep in mind, this doesn’t have to be a large, intensive document. It’s more about making sure the council is having conversations about how to communicate the rollout — or the continued use — of the Performance Management system. We want to make sure we’re identifying communication efforts, sharing information, and keeping staff informed about what the system is, why it’s important, and how it’s best used.
The last thing we want is for staff to say, “I had no idea we had a Performance Management system.” So, as a council, we want to ask: what do we want to achieve with communication? How do we want to do that? That’s the deployment. And how do we maintain ongoing communication?
There are communication templates available, and we’ve incorporated a QI tool called the RACI model. It stands for: who is Responsible, who is Accountable, who needs to be Consulted, and who needs to be Informed. You can use this tool to support implementation activities around your communication plan.
For example, maybe you’re putting information in the employee newsletter, or you want to host quarterly professional development forums with leadership — and this is one of the topics you want to cover. Maybe you want to make this a standing agenda item in leadership or management team meetings. There are a variety of examples to consider, including how often you want to communicate and in what format.
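To make the RACI idea a bit more concrete, here is a minimal sketch in Python of what those assignments could look like for two hypothetical communication activities; the activities, role assignments, and group names below are illustrative assumptions, not recommendations from the toolkit.

```python
# Illustrative RACI assignments for two hypothetical communication activities.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
raci = {
    "Quarterly PM article in the employee newsletter": {
        "Responsible": "Communications staff",
        "Accountable": "PM/QI Council chair",
        "Consulted": "Program managers",
        "Informed": "All staff",
    },
    "Standing PM agenda item at leadership meetings": {
        "Responsible": "Performance improvement manager",
        "Accountable": "Agency director",
        "Consulted": "PM/QI Council",
        "Informed": "Division directors",
    },
}

# Print a simple matrix so the council can review who does what.
for activity, roles in raci.items():
    print(activity)
    for role, who in roles.items():
        print(f"  {role:12s}: {who}")
```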
Again, this isn’t a required document — it’s a toolkit with recommendations for your health department to consider as you roll out these efforts.
We also have additional resources available to support you as you plan or implement Performance Management. One of those is the Collaborative Performance Management for Public Health book. I think I saw in the chat that Emily from Illinois mentioned this was a helpful guide for her — so thank you for that, Emily.
In addition to that, there are several other recommended resources. And of course, there’s the toolkit we’ve just created. All of the tools we’ve mentioned — from assessments to skill-building — are there to help you build your capacity as an internal coach or TA provider for Performance Management within your health department.
These tools are designed to help you coach other programs through the process. We know it can feel overwhelming at first — there’s a lot to understand, and it can be hard to know what comes first, what comes next, and how to even begin.
But keep in mind: it’s okay to start your Performance Management system on a pilot basis with just a few programs. Even if it’s just five or six programs with four or five measures each, those measures should be meaningful to their work. As you implement and begin rolling it out, you can work out the bumps in the reporting process and how you’re sharing information. Once you feel confident in how it’s working, you can expand it to the rest of your programs or department-wide.
As your programs begin to get a feel for using data — using it to make meaningful decisions or to guide their decision-making — they may start to realize, “We’re not quite reaching our goals, so we need to do something differently.” That’s when you’ll start to see those light bulbs go on. Programs will begin thinking, “I want to collect this information and monitor it in the Performance Management system too.”
You’ll see that interest grow. But it’s also perfectly okay to start with just a few programs. Work out your process, build a strong and comprehensive foundation, and then roll it out to the rest of the health department. That way, you can share success stories and feel confident that what you’re rolling out is meaningful.

Now, regarding the framework components and resources themselves — again, this is what we showed you at the beginning of the toolkit. There are additional resources available within the toolkit, just like with the Turning Point model. For example, if you’re looking at what visible leadership means and how to improve that area, there are tools to support you. If your cultural assessment shows that leadership commitment and communication are areas for improvement, there are resources to help you address that.
There are tools and information for each quadrant of the Turning Point model — from setting goals and performance standards, to performance measurement, reporting progress, and quality improvement. So, especially if you’re new to this or just getting started, this is a great way to learn about each quadrant in more detail and see examples that bring the concepts to life.
Jack and I, along with the rest of the team at PHF, want to thank Jenna Constable and NoGo for helping bring this toolkit to life. We had the content, the checklists — the meat of the toolkit — but we couldn’t have made it the user-friendly tool it is without their help.
The toolkit is live now, and the link was shared in the chat. All of the tools, checklists, and guidance documents are available to help you roll out your Performance Management system within your health department. If you need coaching tools, those are available too.
MORAN:
One thing we should mention is that the Pyramid Alignment Tool was developed by Dan Ward from the Idaho State Department of Health. We appreciate his contribution.
As you move forward with Performance Management, make sure you have commitment across the organization. Develop the capabilities of your staff, and be consistent in your terminology. There’s nothing worse than one part of the organization calling something “goals and objectives” and another calling it “objectives and goals.” Be consistent.
Also, think about how you’re going to collect data so that you can report it system-wide. Ideally, automate it. I know of one health department where, during the pandemic, the person responsible for uploading data left. Everyone kept sending their data in, but for two years, no one was uploading it. That also tells you no one was using the system.
You need to manage the system. Have the PM/QI Council review any requests for new measures. Sometimes people get excited about measurement and want to measure everything. But once you start measuring everything, the system becomes less useful.
The PM/QI Council should regularly review requests for new measures and ask: Why do we want to measure this? How long will we measure it? What’s the purpose? How will it be used? That’s how you manage the system effectively.
So, I think we can open it up to some Q&A.
TOUMA:
Thank you so much, Amanda and Jack. We did get a couple of questions. For those who haven’t yet, feel free to drop your questions in the chat — we’ll try to get to them. You can also raise your hand to ask a question.
The first question came in around the time Jack was talking about the self-assessments. The question is: How does the self-assessment compare to NACCHO’s self-assessment?
MORAN:
They’re all similar. NACCHO’s is a good self-assessment. We have these mostly focused on Performance Management. I would suggest not just doing one self-assessment — do a couple. For example, a QI cultural assessment and a PM assessment. The key is to understand where you’re starting from. That’s the foundation of the whole process.
MCCARTY:
And I think it’s important to note — we’re not trying to replace other assessments. We’re just offering something in a Performance Management–driven fashion, like including radar charts to help you set a baseline and use that for goal setting as you move forward.
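For anyone who wants to see how a baseline radar chart could be produced, here is a minimal sketch in Python using matplotlib; the QI culture elements listed and the scores plotted are hypothetical placeholders, not values from the toolkit or any assessment.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical QI culture assessment scores (1-5 scale) for each element.
elements = ["Employee engagement", "Empowerment", "Teamwork",
            "Leadership support", "Customer focus", "QI infrastructure",
            "Continuous improvement"]
scores = [3.2, 2.8, 3.5, 2.5, 4.0, 2.2, 3.0]

# Close the polygon so the last point connects back to the first.
angles = np.linspace(0, 2 * np.pi, len(elements), endpoint=False).tolist()
scores_closed = scores + scores[:1]
angles_closed = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles_closed, scores_closed, marker="o")
ax.fill(angles_closed, scores_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(elements, fontsize=8)
ax.set_ylim(0, 5)
ax.set_title("QI culture baseline (illustrative scores)")
plt.tight_layout()
plt.show()
```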
TOUMA:
Great, thank you.
Another question came into the Q&A box. Jack and Amanda, could you speak a little more about performance measures and the challenge or gap between what you can measure—what you already have in your department — and what you’d like to measure, ideally? This question is especially relevant in environments with legacy data systems or resource constraints.
MCCARTY:
Sure. I think it comes down to asking: What are we measuring now, and why? Are we using that information? If not, I would personally recommend stopping. If it’s not useful or beneficial — even if you’ve been collecting it for 20 years — there’s no point in continuing.
Instead, take a step back and ask: What would be meaningful for us to measure? Is there a way we can collect that data? Sometimes, it’s about finding a way to measure what matters. Even if it’s not easy, it may be worth implementing a new process to collect the data that will actually help you make decisions.
Just because something is easy to measure doesn’t mean it’s meaningful or that you’re going to use it. Let’s focus on collecting the meaningful data that we’re actually going to use — even if it takes a little more effort to get there.
MORAN:
Einstein used to say, “not everything that counts can be counted,” and that’s part of the challenge. A lot of times, you have to find a way to measure what matters. If your processes are documented, you can often identify measurement points where you can begin collecting data.
When you start collecting data, keep it simple. Use checklists or check sheets. Make sure you’re collecting data across the entire cycle of the process — not just when it’s convenient. And one important tip: never include a category labeled “Other.” I can guarantee it will end up being the biggest category.
TOUMA:
Great, thank you.
Another question came in through the chat: What are some available data collection tools that others find effective besides Excel?
MORAN:
There are a number of software packages out there you can explore. We don’t recommend any one in particular, just because we don’t want to appear to be endorsing a specific product. But I know ASTHO just hosted — or is about to host — a vendor showcase. Melissa, I think you mentioned that?
TOUMA:
Yes, we have it tomorrow, actually. We’ll be dropping the registration link shortly.
MORAN:
Definitely attend — it’ll give you some great information.
TOUMA:
Since we’re talking about it, we’ll have a slide for it. VMSG and AchieveIt are both presenting during the vendor showcase tomorrow. If you’re interested in learning more about those two systems and hearing from peers who are currently using them, this is a great opportunity. We’ll drop the registration link and info in the chat.
That’s definitely the place to look to see what’s available.
All right, the next question I see is from someone in a fairly small department — about 30 staff members. Some program areas only have two staff. The question is: What’s the recommended structure for a PM/QI Council? Should each program area have unique staff representation, or should it be more of a “bucket” representation, where staff represent areas they may not directly work in?
MCCARTY:
I think it really comes down to what works best for your health department. I’ve worked with a department that had only two staff members, and both of them sat down to talk through how they could measure performance related to their work. So, maybe it’s not a comprehensive committee that includes every program area — especially if that would mean half the department is on the council.
Instead, maybe it’s a group of three or four people who have a good understanding of the programs and processes across the department. They can speak to those areas and help build the system.
TOUMA:
Great, thank you.
We still have a few more questions — this is great! Keep them coming, everyone.
Okay, for teams not currently using a Performance Management system or without many established QI practices across the department, what advice do you have for sequencing the introduction of this work? Does PM come before QI? Or does QI build the excitement for using a PM system?
MCCARTY:
I’ll share a personal example. When I was at the State Health Department in West Virginia, back in 2012, we were just starting these efforts. We had an infrastructure grant — I think it was the NIPHI grant, though there have been so many acronyms since then.
When I looked at the Turning Point model and its quadrants, I felt like we needed to start with QI education. If I didn’t train folks on how to make changes and improvements in a structured way, it would be hard to jump straight into Performance Management and expect them to know how to improve.
So, we started with QI education. We trained a group of QI champions using a train-the-trainer approach and launched projects across a variety of programs. Once we had that foundation, we started talking about Performance Management.
I developed a template — similar to what’s in the toolkit — for creating goals, objectives, and meaningful measures. I had conversations with each division about what they were currently measuring, whether those measures were meaningful, and how they were using the information. From there, we slowly built a Performance Management system.
And when we started collecting data and saw we weren’t moving in the right direction, those QI champions were ready to step in and provide technical assistance and support to improve processes and procedures.
TOUMA:
Thank you.
We have another question: How do we get folks to think about meaningful measures that aren’t just for grant reporting? That comes up a lot.
MCCARTY:
Yes, it does. When I sit down with a health department, I usually start with a blank Word document. I ask the program, “What are you really here to achieve? Why does your program exist? What impact are you trying to make? What are your priorities for this year?”
As they talk through that, I start to identify themes. For example, I recently worked with a small health department in Florida. They wanted to expand community awareness of their clinic beyond just women’s health services — they offer much more. They also wanted to improve health literacy, especially for Spanish-speaking patients. Providers were spending more time with those patients, which delayed the rest of the day’s appointments. If they had two or three Spanish-speaking patients in a row, they’d fall behind.
So, they were trying to increase community awareness, improve health literacy, reduce clinic wait times, and increase patient volume. We talked through all of that, and I said, “It sounds like you have two main goals: increasing awareness of your services beyond women’s health, and making your processes more efficient.” Then we talked about how to measure those goals. I don’t start with, “What are you measuring for your grants?” I start with, “Tell me about your program. What work are you doing? What’s meaningful to you? What are you trying to achieve? What are your priority goals for this year?”
Once we’ve identified those goals, we can figure out how to measure them.
MORAN:
Another thing to keep in mind: don’t mix grant measures with your strategic measures. If you’re using a spreadsheet, create separate sections — one for strategic measures and one for grant measures. That way, when people look at it, they can clearly see what’s required for grants and what’s part of your broader strategy. Some grant measures you have to do, and it’s important to make that distinction.
TOUMA:
Great, thank you. I have one more question in the chat — I think this might be our last one.
The question is: How can you develop one Performance Management system that crosscuts a variety of public health functions?
MCCARTY:
I think what’s being asked here is how to include all programs in the health department and take a comprehensive look at everything. That really starts with having individual conversations with each program — asking what’s important to them, what impact they’re trying to make, and how that can be measured. From there, it’s about finding one reporting system where all of that information can be collected and shared.
At the Public Health Foundation, in our technical assistance and training, we often use an Excel format. It’s more of a building-block tool to guide the conversation. We start with goals, then move on to objectives, and finally to measures. It’s not meant to be a full Performance Management system, but rather a tool to help build out the content that will go into one.
You can certainly use Excel as a reporting tool. I’ve seen departments upload it to SharePoint, and others use public-facing Tableau dashboards, which I understand are free of charge. As others have mentioned, tools like VMSG and AchieveIt are also options.
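As one possible way to set up that kind of building-block spreadsheet, here is a minimal sketch in Python using pandas (assuming pandas and openpyxl are installed); the goals, objectives, measures, targets, and data sources shown are made-up examples, and the separate sheets simply echo the earlier suggestion to keep strategic and grant measures apart.

```python
import pandas as pd

# Hypothetical building-block rows: one row per measure, tied to its objective and goal.
strategic = pd.DataFrame([
    {"Goal": "Prevent the spread of communicable disease",
     "Objective": "Increase childhood vaccination coverage",
     "Measure": "% of school-age children meeting vaccination recommendations",
     "Target": "90%", "Frequency": "Quarterly", "Data source": "Immunization registry"},
    {"Goal": "Ensure safe food, air, and water quality",
     "Objective": "Complete routine food facility inspections on schedule",
     "Measure": "% of inspections completed on schedule",
     "Target": "95%", "Frequency": "Monthly", "Data source": "Inspection database"},
])

grant = pd.DataFrame([
    {"Goal": "Grant deliverable (example)",
     "Objective": "Report tobacco cessation program enrollment to the funder",
     "Measure": "Number of participants enrolled",
     "Target": "Per grant requirement", "Frequency": "Quarterly", "Data source": "Program records"},
])

# Keep strategic and grant-required measures on separate sheets, as suggested earlier.
with pd.ExcelWriter("performance_management_building_blocks.xlsx") as writer:
    strategic.to_excel(writer, sheet_name="Strategic measures", index=False)
    grant.to_excel(writer, sheet_name="Grant measures", index=False)
```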
Ultimately, it’s about meeting with folks, ensuring they have meaningful goals, objectives, and measures, and then determining as a health department what is the best way to comprehensively report and manage that information.
TOUMA:
Great, thank you.
I think we’re at time, so I just have a couple of quick slides to share. Leslie, if you’re able to bring those up.
Amanda and Jack, thank you so much for sharing your expertise today.
As we mentioned earlier, please join ASTHO again tomorrow for our Vendor Showcase of Performance Management Systems. This event will offer valuable insights from industry leaders into selected performance management software that your agency could use to monitor and track indicators. The registration link has been dropped in the chat — thank you, Anna.
There’s also a post-event webinar with VMSG in November, where you can learn more about the VMSG dashboard and how it’s being used to improve performance management across jurisdictions. This event is open to everyone, even if you’re not able to attend the vendor showcase tomorrow.
And lastly, we truly value your feedback. Please take a few minutes to complete the evaluation and let us know what you thought about today’s webinar — what you liked, what we can improve, and any other feedback you’d like to share. We’re always grateful to hear from you.
I know we’re a little over time, but I’ll keep the webinar open so folks can access the links we dropped in the chat. If anyone has additional questions, please don’t hesitate to reach out.
Thank you all very much for joining us today, and I hope everyone has a wonderful rest of their day.
Building a Performance Management System Using the Foundational Public Health Services Framework
This webinar will guide participants through the process of using the Foundational Public Health Services framework to develop a performance management system in public health agencies. Explore the key principles of performance management and quality improvement, along with actionable steps for designing and implementing a performance management system tailored to your health department’s needs.
Speaker
- Amanda McCarty, MS, MBA, MHA: Performance Improvement Expert, Public Health Foundation
Resources
- Performance Management Resources (PDF) by ASTHO
- Building a Performance Management System Using the FPHS Framework: Presentation Slides (PDF) by ASTHO and the Public Health Foundation
- Using the Foundational Public Health Services Framework to Build a Performance Management System (PDF) by the Public Health Foundation
Transcript
Some answers have been edited for clarity.
MELISSA TOUMA:
Welcome, everyone, and thank you for joining us today for our latest performance management webinar. It's called Building a Performance Management System Using the Foundational Public Health Services Framework.
My name is Melissa Touma. I'm with the Performance Improvement team here at ASTHO, and I’ll be your facilitator for today.
Today’s webinar will explore how to build a robust performance management system for public health agencies, grounded in the FPHS framework. Whether you're looking to enhance your agency's performance management practices, align your work with the FPHS framework, or ensure that your public health outcomes are driving improvement, this session will provide you with strategies and insights to help guide those efforts.
It just occurred to me — Leslie, are we recording?
LESLIE SIMMONS:
Yes, we are.
TOUMA:
Great!
Throughout today’s session, we will examine key concepts around performance management and quality improvement, including how to design and monitor performance measures. We’ll also share the eight-step process for creating a system using the FPHS framework, along with strategies for aligning your agency’s performance outcomes with the elements of the framework.
To get us started, we’d like to know who is in the room with us today. As a form of introduction and a mini-icebreaker, please share your agency name in the chat, along with two to three words describing your familiarity with the FPHS framework. I’ll give everyone a moment to do that, and then I’ll jump into some housekeeping.
A few quick housekeeping items before I introduce our speaker: closed captioning is enabled for this presentation. Throughout the session, feel free to drop your comments into the chat box, as I see many of you doing already. Please enter your questions into the Q&A box. If you happen to drop a question into the chat, no problem — one of our ASTHO team members will move it over to the Q&A box so it doesn’t get lost.
The webinar is being recorded so we can share it afterward, along with the slides.
As folks continue entering their names and familiarity with the FPHS framework into the chat, I’ll go ahead and introduce our speaker today.
I’m happy to introduce Amanda McCarty, a performance improvement expert with the Public Health Foundation. As a consultant for PHF, she has provided training and technical assistance to both state and local health departments in the areas of performance management, workforce development, quality improvement, and the development of plans and logic models.
Amanda also has experience in governmental public health, having previously served as the Director of Performance Management and Systems Development at West Virginia’s Bureau for Public Health.
Thank you, Amanda, for being with us today. I’ll go ahead and pass the mic over to you.
AMANDA MCCARTY:
Thanks, Melissa, and thank you all for joining us today.
We’re going to build on our previous webinar, which introduced performance management concepts, and take it a step further by exploring how we can utilize the Foundational Public Health Services as a framework for building a performance management system in a health department.
I’m looking forward to hearing your feedback and any questions you may have toward the end. We’ll also be doing some live Mentimeter polling and feedback to generate ideas around what this may look like at your health department, or what meaningful activities you all may already be doing related to the Foundational Public Health Services.
Just a quick refresher on some performance management basics. We’ve talked about performance management in the past, and I want to remind you that if you're not familiar with it — or if it seems intimidating — you’re likely already doing performance management in some way. You just might not be calling it that. With a performance management system, the goal is to collect meaningful data on a regular basis and monitor that information to understand the feedback it's giving us. It should be used as a management tool. The data we collect — whether it's directly related to a public health program or a population health improvement project — should be meaningful and help us make better decisions.
When we do this systematically across the organization or agency, that’s when we’re truly using a performance management system. We’re using it to identify opportunities for improvement or gaps in our processes, programs, or the care being offered, so we can make improvements and be as effective and efficient as possible in delivering our programs.
Just to refresh your memory on the Turning Point model: we use this model to help us start by setting performance standards or goals. These are related to our work — what are the goals we hope to achieve? What are the targets we’re trying to reach? Then, what is a reasonable and meaningful way to measure our performance against those goals?
We report on that information regularly. We collect data, analyze it, and interpret what it’s telling us about the work we’re doing and the progress we’re making. If we’re not moving in the right direction or haven’t achieved our goals when we should have, that’s when we can bring in quality improvement tools. These help us dig deeper to understand the root causes — what’s contributing to us not achieving our goals — and how we can improve our processes. Maybe we need to change something we’re doing or try something different to get better results.
That’s the Turning Point framework, which is available to support performance management development.
Again, we want to be able to obtain feedback in performance management. When we simplify it even further from the Turning Point model, we’re really talking about setting expectations — whether within a program, a specific project, or the organization as a whole. We want to monitor progress toward those expectations, and then use the data to provide feedback.
The data we collect shouldn’t just be collected for the sake of it. It should be meaningful and serve as a feedback mechanism to help us better manage our program or project — whatever it is we’re working on. We should be able to use the data to better understand how we’re doing.
When we look at the performance management steps — setting expectations, monitoring the process, and providing feedback — let’s consider an example. Say a particular health department is running a breastfeeding support program, but the breastfeeding rates are rather low among new mothers compared to what we would like to see.
As a team, we could set expectations. What are our goals, and are they clearly defined? If it's increasing the percentage of new mothers who initiate or continue breastfeeding for at least six months, let's define that. What exactly are we trying to achieve? Let's collect data against that — data that's meaningful and related to the goal or expectation we've set. Then, use the data to have conversations among our team. What is the feedback telling us? How are we working toward achieving this goal? Are we making progress?
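As a simple worked illustration of that kind of measure, here is a short Python sketch; the enrollment counts and the 60% target are made-up numbers used only to show the calculation and the comparison to the goal.

```python
# Hypothetical quarterly data for the breastfeeding support example.
mothers_enrolled = 240          # new mothers served this quarter
still_breastfeeding_6mo = 132   # of those, still breastfeeding at six months
target_rate = 0.60              # the goal the team set

rate = still_breastfeeding_6mo / mothers_enrolled
print(f"Six-month breastfeeding rate: {rate:.1%} (target: {target_rate:.0%})")

if rate < target_rate:
    print("Below target: discuss as a team and consider a QI project.")
else:
    print("At or above target: keep doing what is working.")
```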
Using the data as a feedback mechanism really helps us improve our efforts. When we look at performance management as a feedback mechanism, we want to have a clear understanding within our organization of what performance management is. That means making sure everyone — from leadership to program management staff to operational-level staff — understands what our goals are and the work being done to achieve them.
We need to define the specific components: our expectations, how we're going to monitor the data, and how we're going to discuss what the data is telling or showing us. We need to have that feedback loop and make sure it's closed. We should use it for constructive feedback or to make improvements where needed. Or maybe it's positive feedback, and we need to continue doing what we're doing because we're really liking the results.
If it's constructive, we're focused on growth and how we can make things better. If it's not what we expected, that's not a bad thing. That's actually a great thing, because we can use that information to help us improve. We need to establish how often we're going to look at the data and how often we're going to have these discussions. Then again, we need to decide what the actionable suggestions are or what ways we can move forward after reviewing what the data is telling us.
When we're talking about performance management and using it in a systematic approach, it's all centered around our program, our project, or our organization — whatever level we're looking at. What is the work to be done? Do we have a specific time frame to meet or achieve this goal? Who is responsible? Is it our particular team, an individual, a group, or the entire organization?
Then, what data is going to help us understand our progress? Who's going to be collecting that data? Where do we get the data from? Is it a third party? Another part of the organization? A contractor that we use? Is it reliable? Are we going to be able to get this data when we need it and on a continuous basis, so we can see trends over time related to the results?
Looking at it again from that feedback mechanism perspective, we ask: what goes into that?
When we talk about measurement, it's key to really demonstrating our impact and our value. By measuring progress and results, we can clearly show — or should be able to show — the success of our initiatives or our project. The evidence is critical to proving its worth to those contributing efforts or those we're reporting to and sharing results with.
When we're providing visibility, measurement helps ensure transparency. It allows everyone involved to see how the project is performing. It keeps us, as a team, informed and helps demonstrate that we're aligned toward our shared goals and moving toward achieving them.
Then there's the business case. We're looking at building a stronger case for the future. The data that's collected can be used to justify continuing — or even expanding — a program or project. It can serve as concrete evidence to support future funding or scaling if needed.
Measurement helps transform subjective opinions into objective proof. It makes it easier to communicate the program's effectiveness overall and helps us, as a team, advocate for its sustainability and continued work.
When we talk about being able to pick meaningful measures related to our work, we’re talking about high-power measures. These are metrics related to our strategies that have a significant influence on the success of our project or program.
When we talk about communication power, we want to be able to effectively convey information related to our work and drive collaboration. High-power measures rely on strong communication so that everyone involved understands their roles and responsibilities as they relate to the project’s goals. We want clear and consistent communication.
The importance — or proxy power — refers to the influence that comes from representing the interests of those who are engaged and have bought into the process. We want high-power measures that align with our priorities and help demonstrate that we’re being responsive to our processes or changes in those processes.
Then there’s the data power piece. We want to be able to leverage data to make informed decisions, predict outcomes, and optimize our work’s efficiency and effectiveness. We want to optimize those processes.
High-powered measures are going to be data-driven and allow us to tie them back to the work that we're doing and show the progress that we're making. When we come up with a measure — and you've heard me say this before — we don't just want to measure something because we can. We want it to be a quality measure, something meaningful. So we can ask ourselves: if we're collecting information, is it useful?
I’ve shared this example before. At a health department I worked with, one program had 110 measures. That was a red flag for me. When I sat down and started going through some of these measures, I asked, “What do you do with this measure? How is it used? What's the follow-up?” And if the answer was, “We don't know,” or “There really isn’t anything,” then that’s not a meaningful measure. We don’t want to collect data just because we can.
We want to be able to assess our measures to make sure they’re meaningful, that we can compare them over time, that they’re responsive to our work, and that they’re accurate and reliable. We need to be able to collect reliable data to make the measure even more meaningful.
Apologies for that — it kicked me away from my Zoom screen.
When we're looking at performance management and utilizing the Foundational Public Health Services to do this, it helps us be less program-organized and more conceptually organized in our approach. Some of you mentioned in the icebreaker that you're somewhat familiar with FPHS, and some of you are very familiar. This is a framework that was developed about 10 to 12 years ago to help define a minimum package of public health capabilities and programs. As a governmental health department, there’s a fundamental responsibility to provide public health protections and services in several areas. Whether that’s preventing the spread of disease or ensuring safe air, food, and water quality — those are all part of the Foundational Public Health Services.
The FPHS framework is broken down into two parts: the foundational areas and the foundational capabilities. The foundational areas include the prevention of communicable disease, chronic disease and injury prevention, environmental health, maternal, child, and family health, and access to and linkage with clinical care. The foundational capabilities are the specific elements, competencies, and skills that a health department needs to fulfill those foundational areas.
When we look at the foundational areas themselves — starting with communicable disease control — we need to be able to provide timely, relevant, and accurate information to the health care system and the community. We want to ensure appropriate treatment for individuals with communicable diseases, support the recognition of outbreaks, and more.
For chronic disease and injury prevention, it’s the same idea. Whether it’s opioid prevention, substance use disorder programs, or tobacco control, we want to work together to identify the health risk behaviors in our community or state. We also want to understand the capacities of our local chronic disease and injury prevention partners. How can we collaborate to create a prioritized prevention plan? That plan is going to look different from one community or state to another.
The same goes for environmental public health. We’re talking about ensuring safe, high-quality air, drinking water, and food. Again, we want to integrate and work together to make sure our community is providing that service.
We also have maternal and child health care, and access to and linkage with clinical care — or just linkage to care in general.
Then we have the foundational capabilities. These are the skill sets or focus areas that every health department needs to ensure they can fulfill the Foundational Public Health Services. You can see those capabilities listed here.
When we look at the FPHS framework, we can use it for performance management. Whether you're in California, West Virginia, or New York — regardless of the differences in our communities or states — it provides a common language. It helps shift us toward conceptually based performance management, rather than organizing everything strictly by program.
Using this framework can be especially helpful if you're just starting out in performance management. Instead of trying to identify every program and its goals and how to measure them, this framework helps you get organized. It helps you see what’s meaningful. How are we meaningfully contributing to these foundational public health services? What information can we collect to demonstrate our work?
Using this framework also helps you meet the requirements for public health accreditation.
When we look at it, we have several different frameworks in public health. We have the Essential Public Health Services and the Foundational Public Health Services. Some of the key differences between them are that the Essential Public Health Services provide a broader description of public health activities, while the Foundational Public Health Services focus on the minimum essential services that governmental public health departments must provide. Regardless of our location or how we may deliver those services differently, these are still the minimum essential services that need to be provided.
The Essential Public Health Services serve as a roadmap for public health practice in all communities. The Foundational Public Health Services, on the other hand, are designed around the basic public health protections that should be available in every community. So, when we look at the differences between these frameworks, we can see how each plays a role.
When we utilize the Foundational Public Health Services in performance management — especially when we're looking at the assessment piece, monitoring health status, and investigating health problems — we're all doing that. We may be doing it in slightly different ways depending on our location or the needs of our community, but we’re all developing policies and plans that support those needs and health efforts.
We can also use this framework to help ensure that those essential health services are available to our communities, that they’re accessible, and that they’re meaningful.
Looking at the benefits of using the Foundational Public Health Services, we want to ultimately improve public health outcomes — whether that’s for one program or the entire agency. We want to enhance the quality and impact of the public health services our health department is offering.
Increasing accountability happens with good data, meaningful measures, and regular monitoring. Looking at that feedback on a regular basis helps ensure transparency and accountability in our operations. It also enhances data-driven decision-making. If you're collecting data, reviewing how you're progressing toward your goals, and using that data to guide your decisions, then you're using it as a management tool.
Another health department pointed this out to me a few weeks ago — it’s a feedback mechanism. That’s a great way to sum it up. It’s a feedback mechanism. That was actually from Massachusetts — Paul, I’m going to call you out on that. I’ll always give you credit for it. It’s the feedback mechanism we need to help our programs advance and move forward.
When we look at building a performance management system using FPHS, as a health department or a program, we need to prioritize our unique responsibilities. We need to set and establish our performance standards — what are our goals? What are the targets we’re trying to achieve?
We want to develop meaningful performance measures that help demonstrate whether we’re working toward those goals and standards. We need to collect and analyze the data, just like we would with any other performance management effort. We report and communicate those results, and we prioritize our improvement areas based on the feedback. What is the data telling us we need to improve?
We also want to engage the community where possible. Who are our community partners, other health care organizations, or community-based providers? Who else is contributing to the health outcomes of our community that we can work with? How can we make improvements along the way, even if it means bringing in additional partners and collaboration? The focus is on continuous improvement.
Looking again at the components of a performance management system — and that framework we talked about earlier in the Turning Point model — performance standards don’t have to be organized by program. You could look at the Foundational Public Health Services and those five service areas we discussed. I’ll show you an example of that in just a moment.
You could establish standards based on communicable disease control, environmental public health, and so on. Then, for performance measures, ask: what are the ways we can measure the work we’re doing in these foundational areas? How can we determine whether we’re effectively delivering in these areas?
What does the data look like related to that work, those strategies, or specific activities? We want to measure it in a meaningful way. We don’t just want to count widgets. We don’t want to count how many times the phone rings or how many brochures we hand out. We want to monitor meaningful interactions or activities related to our work so we can demonstrate that we’re making progress toward achieving a goal.
And the same applies to quality improvement. We still implement QI tools when needed — when we’re not achieving our goals or making progress toward those expectations or targets. We want to address performance gaps, improve, and work toward achieving our goals. That’s where QI tools can help.
We also continue ongoing evaluation and monitoring. We want to keep having data collection discussions within our programs. What is the data telling us? Are we moving toward our goals? How can we fill in the gaps? What are the opportunities for improvement? We want to make adjustments as necessary to improve our outcomes and move toward achieving those goals.
When we look at performance management — whether you’re a program, an entire health department, a hospital, or a chronic disease prevention program within a community partner — we are setting goals and objectives. What are the targets we want to achieve, and how are we going to do it?
What are the performance measures we can create and monitor on a regular basis to let us know if we’re working toward those goals and objectives? What’s the frequency for collecting this information?
We want to collect the data, report on those performance measures, and share the feedback or analysis of what we’ve discussed as a group — what the measures are telling us. And again, we report on this regularly.
We implement quality improvement to identify and address gaps. It’s a continuous process. We make adjustments, change how we’re implementing the work, and hopefully continue to update our goals and objectives as needed.
This is an example of some recent conversations I’ve had with health departments that are using the Foundational Public Health Services as their goals to help facilitate discussions around performance management.
For example, with Goal One, they’re talking through the idea that, as a health department, they are here to prevent the spread of communicable disease. So, how are they doing that? While it may look different from one health department to another, these are some of the priorities this particular department was discussing in terms of objectives and the meaningful metrics they wanted to collect related to that work.
The same goes for preventing chronic disease and injury. They were looking at things like breast and cervical preventive screenings and other programs specific to their county. But they were also focused on decreasing overdoses, overdose fatalities, and similar issues. Like I said, this may look different from one health department to another in terms of how we carry out this work, but ultimately, most of us are working to prevent the spread of communicable disease, prevent chronic disease and injury, and ensure safe food, air, and water quality.
You can have the discussion: what do these goals mean to us? What does that look like in our health department? We’re going to do that with some Mentimeter feedback as well, just to hear from you all — how does this look in your county, your state, or your community? What would this mean to you? What does this work look like to you?
Here’s another example from a different health department. They’re also working on preventing chronic disease and injury and ensuring healthy communities through some of their youth programs. They have a diabetes care program they’re working through. In terms of maternal and child health, they’re supporting programs like Florida Healthy Babies. For communicable disease, they’re looking at reducing STI rates and the spread of TB.
Again, you can see this looks a little different from one health department to another, but we’re still ultimately working toward some of those shared goals that come from the Foundational Public Health Services.
In another example, this health department isn’t finished with the discussion yet, but just last week we were talking about the Foundational Public Health Services and how they wanted to align them with their strategic plan objectives. They boiled it down to prevention — implementing primary prevention initiatives — and then started tying that to the objectives in their strategic plan.
We were doing a kind of crosswalk between the Foundational Public Health Services, the primary objectives of their strategic plan, and the action steps they’re putting in place. We also talked about how they can measure this meaningful work. This is still a work in progress. I actually just added this slide today, but I wanted you to see how another health department is taking the FPHS model, drilling it down to what they need for their work, and choosing to focus on the prevention piece. They’re tying that to their work, aligning it with their strategic plan, and asking: how can we measure this work in a meaningful way?
Now I’m going to step out to Mentimeter and pull this up. Just give me one second and I’ll share my screen so we can do some live polling.
Can everybody see that? Yes? You can join using the Mentimeter web address at the top, or you can use the QR code. Just open the camera app on your phone, hover over the code, and click the link that pops up.
Now, thinking through the use of the Foundational Public Health Services in performance management — this is going to look different for everybody, and that’s exactly what we want to see.
In your health department or your particular program, how do you work to prevent the spread of communicable disease? What does that look like in your health department? This can be very general — I’m not asking you to come up with a measure right now. But what does some of that work look like? What are you doing? What do you have in place that helps prevent the spread of communicable disease?
We’re seeing a lot of responses: contact investigations, case tracking, prevention education, vaccines, surveillance. We have a lot of participants, so I’m just going to scroll as we talk through this.
Monitoring disease rates, vaccination campaigns, education, surveillance, public information, lab reports, data exchange — I love that one. Communications, consults, facility inspections. Great. Health equity teams providing updated education and resources. Multi-agency collaboration. That’s great.
Now, thinking through this and what it looks like in your health department, if we were to group some of this work together, what are some key objectives or milestones that might emerge?
Let’s say we were all working together and having this discussion as a group. After looking through some of those responses — or even just your own ideas — what are some key objectives or milestones we might identify to help achieve the goal of preventing the spread of communicable disease?
Yes — percentage of school children meeting vaccination recommendations, vaccination rates, data exchange, outreach metrics, increased uptake in vaccinations, conducting vaccine promotion efforts, community coalitions, public awareness, timely surveillance, sampling metrics for wastewater surveillance, community partnerships to expand access to reproductive health services, reporting cases referred to care services, positive case referrals to treatment. Great.
Now, if we look at these as objectives or milestones, what specific activities related to this work would be meaningful to measure? Some of this may be repetitive from earlier responses, but this is what the discussion would look like if you were working toward building performance management components. What does this work look like for us? What are some of the meaningful objectives or milestones we need to achieve to reach our goals? And what will that look like for us in terms of measurement?
Yes — vaccinations given, and the number of unduplicated clients. I love that — unduplicated clients — because that’s usually a question I have: how many unique participants or clients do you have? Sometimes we see the same one or two individuals coming multiple times, or many individuals returning, but what does “unique” measurement look like in that context?
Other examples include the number of vaccine events completed, case investigations, the percentage of a certain population vaccinated, and the number of complaints around disease factors. Clients retained in care is another one. Staff knowledge is also important, and that can relate to some of the core competencies we need in delivering these programs — the skills, knowledge, and abilities that staff need. That’s important too.
The percentage of cases where outreach is attempted, interviews completed, and referrals to care — where the loop is closed — I love that. This is great. Community partners and community outreach opportunities. Identifying and quantifying barriers to receiving vaccines — I love that too. Percent positivity rate. So, we looked at that one for communicable disease. Now, what about preventing chronic disease and injury? What does that work look like?
Preventive services, tobacco cessation, healthy lifestyle programs, and education. I’m seeing a lot of things related to education, harm reduction, screenings, healthy lifestyle programs, and prevention. Yes — promoting healthy activities, targeted communication efforts, diabetes prevention programs, health promotion, education, and screenings. I love it. Chronic disease management programs. Data analysis from local hospitals. Wonderful.
Looking at those ideas, if we were to group them together and identify key objectives or milestones to ensure we’re achieving the goals of preventing chronic disease and injury, what would that look like?
I really like this exercise because I want you all to see that this is how you would facilitate these conversations in your health department, program, or agency. If you were using this type of framework — focused on prevention or increasing access to clinical care — this is how you’d start. You begin by having conversations: what does this work look like for us? What would our goals be? What are the milestones or key steps we need to achieve to help us do this? And then, what are the things we know we can meaningfully measure related to our work?
It’s a tiered conversation, going a little deeper with each piece.
Yes — disease rates in participants of a specific prevention program, rates of unintended deaths based on risk, goals completed within a time frame, homes mitigated after testing and finding high radon levels. Again, promoting education — we’re seeing that again. Staff training, snack policies, identifying the most common chronic diseases and injuries — and again, that’s going to vary from one population group to another. The number of emergency department visits and the purpose behind them. Reduction of death in key diseases. Health care involvement.
So again, from one state to another, or depending on the priorities within our programs, this is going to look different. That’s why it’s important that we don’t just pick up what another program is doing and run with it. It really is about having those conversations — those meaningful conversations — within your program. What are our goals? How are we doing the work? How can we measure to demonstrate the work we’re doing and the impact we’re making?
This is great.
All right, so the last piece related to this area: what are specific activities related to this work that would be meaningful for us to measure on the chronic disease and injury side of things? Are there any particular performance measures that you feel would be meaningful to monitor on a regular basis — measures that will tell us whether we’re making progress toward our goals? Do we know if we’re making an improvement?
Yes — this is great. And again, you can see the variation across programs. Emergency department visits, mobile clinic visits, obesity rates in children, information related to tobacco retailers, overdose deaths per 100,000 residents.
Yes — looking at annual data analysis on key outcomes. That’s another good point. We want measures to be monitored as frequently as possible, because the frequency at which we collect a measure determines how often we get feedback from it. But some of these population health outcome measures — we may have to wait one to three years to collect updated information to see if there was any kind of change.
The more frequently we can collect on a measure, the more timely the feedback is. That allows us to make a change or respond in our operations, or address a gap, or change the way we’re doing something. If we have to wait a year to collect the result, then we’re going to have to wait another year to see if any change we made had an impact.
These are all great examples: rates of uninsured individuals, enrollment in chronic disease prevention programs, percentage of homes mitigated that tested high for radon, tracking homes tested for radon, looking at local data in rural communities, decreased positive cases of STIs, injuries, respiratory disease. Again, emergency department visits — I love it.
We can quickly do the one for ensuring safe food, air, and water quality, or we can just wrap things up and start taking questions if you’d prefer. I wasn’t sure how long it would take us to get through some of these.
TOUMA:
We have about six questions, Amanda, and around 13 minutes left — so it’s up to you. It looks like folks are starting to respond, so maybe we can do a couple?
MCCARTY:
Yes, let’s go ahead and go through this one for three or four minutes, and then we’ll start taking questions.
I also just want to remind everyone that we briefly talked about the Performance Management Toolkit that was launched during our last introductory session. I believe that link is going to be shared in the chat. We also just updated an article last week related to utilizing the Foundational Public Health Services in developing performance management systems.
So yes, when we look at what it means for us to ensure safe food, air, and water quality, we’re talking about things like food inspections, having a robust environmental health department, advocating for policies to reduce pollution, wastewater system inspections, monitoring and testing, and investigations. Shellfish harvesting restrictions — that’s the first time I’ve seen that one. Air quality and how it relates to health outcomes. Waterborne infection surveillance.
When we’re looking at ensuring safe air, food, and water quality, and we start to see some common themes, what do we feel are some key objectives we would consider here?
Yes — testing of water systems, how often inspections are conducted, and making sure we have enough workforce.
If we’re looking at some of the work being done, what would be something we could specifically measure related to this work?
Yes — percentage of inspections completed on schedule, decrease in lead poisoning cases among children. If we’re seeing a significant number of repeat restaurant violations, that would let us know there’s room for education or another opportunity to intervene. We could increase education to help prevent those violations. Water quality tests conducted. Incidences of foodborne illness.
These are all great ideas. Continuous quality improvement monitoring. Asthma rates. Absolutely.
Okay, I think that’s all I have. Melissa, I’ll turn it back over to you.
TOUMA:
That’s great — thanks so much, Amanda. That was some great participation, and lots of positive feedback in the chat.
As we jump into the questions, there are six of them, and we’ll do our best to get through them all. We’ve dropped some of the resources Amanda just mentioned, along with other performance management resources from PHF’s website, ASTHO’s website, and other partners. Lots of great resources there. We also have another webinar coming up in a couple of weeks, and we’ve dropped that information in the chat for your reference.
So Amanda, I’ll start from the top. The first question we received: “We have some programs that have a hard time choosing measurements. We’ve even resorted to counting the number of meetings they’ve had. How can you strategically get around this?”
MCCARTY:
Yes, I would say that in some cases, there are going to be programs where it’s more difficult to find meaningful performance measures. If you’re concerned about how many times they’re meeting, ask: what’s the impact or the result you’re hoping to see from those meetings?
To me, how many times the phone rings or how many times we meet isn’t necessarily as valuable as understanding the outcome of that. Are we making progress as a result of having these meetings? If that makes sense.
TOUMA:
Thanks, Amanda. The next question: “Do you include community-specific services when setting up your performance management system? For example, would you categorize the number of vaccines provided under community-specific services or under communicable disease control?”
MCCARTY:
I think you could do it under either. It’s really just a health department preference.
When I have these discussions — especially when we’re talking about prevention or using the FPHS framework as the foundation for performance management — I’ll often bring together leadership or management and talk through it. Who feels like they’re contributing to preventing communicable disease? Who feels like they’re contributing to preventing chronic disease and injury?
Then we have a separate discussion just on the Foundational Public Health Service related to preventing communicable disease. We talk through the work being done, how we’re working collaboratively to achieve that goal, and what our measures are.
So it really comes down to the preference of the health department. But yes, you can absolutely include specific community-based programs.
TOUMA:
Yeah, makes sense. I'm curious whether you have — or have seen — example performance measures that crosswalk back to the FPHS capabilities and areas?
MCCARTY:
I’m assuming they’re asking for performance measures that tie back to both the foundational capabilities and the foundational areas. A lot of the work being done in health departments can often be tied back to the Foundational Public Health Services. But I’ve never seen a performance management system developed first and then tied back to FPHS, if that’s what you mean.
What I have seen is the FPHS framework being used to build out the measures. That can really help, especially because some health departments have a hard time getting started with performance management discussions. It’s tough to identify those priority goals, and it’s also hard to get folks away from wanting to measure everything. Sometimes there’s this feeling that if something isn’t being measured or isn’t in the performance management system, then it must not be important. But that’s not the case. It’s really about prioritizing what we’re measuring, not measuring everything.
TOUMA:
Can you talk more about how to measure effectiveness versus impact?
MCCARTY:
Well, I think effectiveness and impact are probably pretty similar. We want our measures — and the work we’re doing — to ultimately demonstrate the impact we’re making. Are we making a difference? Are we improving the health of the population we’re serving? Are we seeing movement in a positive direction toward meeting our goals?
As a program, we want to be as effective and efficient as possible. So when we’re collecting data, we want to be able to demonstrate whether we’re moving toward achieving our goals and making a positive impact. We should be able to see that reflected in our measures.
If we’re not able to demonstrate that — or if we’re not seeing it in our discussions about the measures — then those may not be effective measures. That ties back to the importance of collecting meaningful information and using it to improve our work, so we can be as effective and efficient as possible.
TOUMA:
You mentioned collecting feedback. Were you referring to the Balanced Scorecard approach? And if yes, can you speak more about it?
MCCARTY:
If you could add a little more context in the chat, that would be helpful.
TOUMA:
I’ll move on to the next question for now, and we’ll come back to yours once we have a bit more information. Amanda, how does this intersect with RBA? With Results-Based Accountability, we see an acknowledgment that one organization can’t be responsible for the health of the whole population — just their clients.
MCCARTY:
Yes, I think RBA aligns very closely with performance management. We want to be able to answer: how much work are we doing? How well did we do it? And is anyone better off as a result of our work?
It’s about having those conversations and talking through, as a health department, the work we’re doing for our community — for those we serve, or our customers, who are ultimately the end users of the products and services we offer. Are we able to see the impact or the change? Are we adding value? Is anyone better off because we’re doing this work?
TOUMA:
Next question — and maybe folks can answer this in the chat as well: what software are teams using for their performance management tracking systems?
Amanda, what are some of the systems you’ve seen so far?
MCCARTY:
I’ve seen some departments use Clear Impact. There’s also AchieveIt and BMSG. Some folks are just using Tableau dashboards and Excel — creating and customizing Excel templates so they don’t have to pay for software.
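For teams weighing those options, here is a minimal sketch, assuming a hypothetical CSV export, of how one of the example measures discussed earlier (percentage of inspections completed on schedule) could be calculated without paid software. The file name and column names are illustrative placeholders, not from any of the systems mentioned.

```python
# Minimal sketch (illustrative only): calculate one example KPI from a CSV export
# without paid software. The file name and column names are hypothetical placeholders.
import csv
from datetime import date

def pct_inspections_on_schedule(path: str) -> float:
    """Return the percentage of inspections completed on or before their due date."""
    total = on_time = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            due = date.fromisoformat(row["due_date"])             # e.g., "2025-03-31"
            completed = date.fromisoformat(row["completed_date"])
            total += 1
            if completed <= due:
                on_time += 1
    return 100 * on_time / total if total else 0.0

# Example usage (hypothetical file):
# print(f"{pct_inspections_on_schedule('inspections_q1.csv'):.1f}% completed on schedule")
```

The same calculation could just as easily live in an Excel formula or a Tableau calculated field; the point is that the measure definition, not the software, is what matters.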
TOUMA:
Yes, and there are a few others mentioned in the chat. We’ve also seen some homegrown systems. We had a vendor showcase back in the fall, and we’ll try to drop that link. It might already be in the resources, but if not, we’ll follow up afterward.
We’ve had a lot of questions around performance management systems.
We are at the hour, unfortunately. I really hate to cut these questions off. We’ll try to look at the remaining questions and see if we can follow up with some information to help answer them.
Otherwise, I really want to thank you all for joining us today. I hope this was helpful. There are lots of resources in the chat, and we’ll follow up with all of them. We also have an evaluation form — we’d love to get your feedback. Your input helps us decide what our next webinar might focus on. Performance management is a high-interest topic for a lot of folks, so we want to continue this conversation.
Amanda, thank you so much for your presentation. And thank you, everyone, for joining us today — for your questions, your participation. I really hope you have a wonderful rest of your day.
Thank you, everyone.
Best Practices and Insights for Designing Effective Performance Management Systems in Public Health
In this webinar, the Public Health Foundation and ASTHO showcase best practices for establishing and enhancing performance management systems within public health departments. Two jurisdictions, one local and one state, share insights from their real-world experiences, including the successes and challenges of establishing a robust performance management framework within their organizations.
Speakers
- Amanda McCarty, MS, MBA, MHA: Performance Improvement Expert, Public Health Foundation
- Carmen Johnson, MPH: Community Health Planning and Engagement Manager, Tarrant County
- Katherine Feldman, DVM, MPH: Chief Performance Officer, Maryland Department of Health
- Pam Tenemaza, MPA: Health Policy Analyst Advanced, Maryland Department of Health
Transcript
Some answers have been edited for clarity.
MELISSA TOUMA:
Hey, good afternoon everyone. Thank you for joining. I'm going to give it just another minute for folks to hop on. I know there is a lot going on this week — today and yesterday — so I want to give some folks a little more time to get here. Thank you all for joining us.
All right, we are about a third of the way to the number of folks we were expecting, so I think it's probably a good time to get started and be cognizant of everyone's time.
Welcome, everyone, and thank you so much for joining us today for our latest performance management webinar. We know that this has been a needed and popular series of webinars over the last year, and we’re excited to present today’s session. We also have a few more coming up this spring that we hope you’ll attend or find useful.
My name is Melissa Touma, and I am with the Performance Improvement team here at ASTHO. I’ll be your facilitator for this afternoon.
Today, we’ll be exploring how two health departments — one state and one county level — are building their organizational performance management systems from the ground up. They’re engaging both program staff and leadership to advance their agency’s culture of improvement and performance management.
Over the last few months, both agencies have received technical assistance from the Public Health Foundation to guide them through the steps of establishing a performance management system and tailoring those steps or best practices to the unique needs of their health departments.
Before we dive in, just a few quick housekeeping items — and a quick icebreaker. We’d love to know who is in the room with us today. In the chat, as a form of introduction and a mini-icebreaker, could you please share your health agency’s name and let us know: over the last few years, have you conducted a QI culture assessment? This is a key step in establishing a performance management system. So, over the last three years — or maybe a little longer — has your agency conducted a QI culture assessment?
Please continue to drop those into the chat. I’ll give you a few quick housekeeping items here.
Closed captioning is enabled for this presentation. Throughout the session, feel free to drop your comments into the chat box and your questions into the Q&A box. If you happen to drop a question in the chat, that’s not a problem — one of our ASTHO team members will move your question from the chat to the Q&A box so we don’t lose it.
This webinar will also be recorded so it can be shared afterward, along with the slides and several other resources we have to share.
All right, I love the chats coming through — please keep sending them in. Feel free to keep posting those.
I’m going to go ahead and introduce our amazing speakers today.
To start, Amanda McCarty is a performance improvement expert with the Public Health Foundation. In addition to all the previous technical assistance and consultation she’s done with state and local health departments around performance management, QI, and workforce development, over the last six months she has also been providing TA to four PHIG recipient health departments. She’s been helping them build the foundations of their performance management systems and align those systems with their organizational goals.
Next, we’ll hear from Carmen Johnson, who is the Community Health Planning and Engagement Manager with the Division of Family Health Services at Tarrant County Public Health. She’ll share what her agency’s performance management journey has looked like over the last year — the successes they’ve experienced in establishing their system and some of the challenges they continue to navigate.
Lastly, we’ll hear about the Maryland Department of Health’s journey — their successes and challenges — from Katherine Feldman, who is the Chief Performance Officer, and Pam Tenemaza, a Health Policy Analyst Advanced. Both sit in the Public Health Workforce and Infrastructure Office within the Office of the Deputy Secretary for Public Health Services in Maryland.
With that exciting lineup, let’s go ahead and dive into our presentation. Amanda, I just switched the slide, and I’ll hand it over to you.
AMANDA MCCARTY:
So, if we begin with what we're ultimately trying to achieve — we want a performance management system. We want to have data collection on a regular basis and have conversations around that data. What does it look like for us to get there? And hopefully, we can understand any of our barriers up front and talk through them. For example, maybe we don’t have any data coming in from a particular program, or we’re not collecting customer feedback.
We need to look at what a performance management system consists of. What could we base this on? If we’ve never done this before, could we use the Foundational Public Health Services to help us get started? Or could we start this on a pilot basis — identify some areas where there’s a best practice, begin to implement, and identify barriers to rolling this out to other programs? We can include that in our project plan as well.
Then, having this conversation — especially if you have folks within your health department who work on performance management and QI — maybe they can help with identifying clear, meaningful goals and objectives for the programs and for the health department. That way, we have consistency. For example, one program might have a goal to increase compliance rates or reduce the prevalence of disease, while another program might say their goal is to implement a new filing system. But that’s not really what we’re looking for. We want to make sure we have a clear understanding of what we’re trying to roll out and that everyone shares that broad understanding.
One thing I’ve noticed is that some health departments want to use grant deliverables — or what they’re currently reporting on for a grant — as their performance management system. While grant deliverables are great and we need to report on that data to show outcomes, they don’t necessarily represent all of the work of the health department. So we need to talk through: what are our goals as a health department, within our programs? What data can we collect to demonstrate that we’re meeting those goals as a whole and the impact we’re making?
As we collect this data, we also need to ensure regular and ongoing communication and check-ins. That means looking at the data in staff and management meetings, leadership team meetings, or maybe at a PM/QI council meeting. We need to talk through the data that folks are collecting and what it means. Are there opportunities we’ve identified for improvement? Do we have QI projects in progress or ones we’re planning to implement as a result of collecting this data? Again, we need to make sure we have a plan to use this information. We want to collect it in a way that provides feedback on the work we’ve done, the work we’re doing, and helps us make plans for what we should be doing moving forward. That feedback mechanism is critical, and we can’t really do that without ongoing communication and check-ins.
With the health departments we typically work with — and what we’ve been doing with ASTHO technical assistance — we want to understand what the health department’s needs are. Each health department is going to have different needs and challenges. It’s important in the beginning to go through a kind of needs assessment — just a conversation: What are the needs of the health department? What do they already have in place? What have they tried before that maybe didn’t work? What are they ultimately hoping to get out of implementing this?
We start where they are. This is different for every health department. Everyone has different staff, infrastructure, and capacity. Maybe they’ve tried different approaches in the past. We want to know where they are in their performance management development or implementation. What are their current capabilities, resources, and limitations? That helps us tailor the approach from there.
With TA, it’s never a one-size-fits-all approach. We create customized solutions designed to address the unique needs of the health department and their goals. As you work to implement performance management, we really try — through that needs assessment or those initial conversations — to identify what you need the most help with and start where you are, not where we think you should be.
Technical assistance should be an iterative process. Solutions are continuously refined based on feedback and the results we see. We want this work to remain effective and relevant over time, so we make sure it’s a process that works for you.
The collaboration and engagement piece is also key. With TA, it’s all about collaboration and engagement with the health department — with those working and doing this work throughout the department. We want solutions to be relevant, but we also want buy-in from the programs, their direct participation, and their support as you begin to roll this out.
In some of the work we did with Tarrant County — and you’ll hear from Carmen in just a moment — performance management wasn’t necessarily a new topic for most folks in the health department. As with every health department, it’s about getting people on the same page about what the definition is and what that might look like. It could look different from one program to the next. Many of their programs already had some level of performance measures.
In most of our conversations, we spent a lot of time refining what those measures looked like or prioritizing what would be the key performance indicators. Carmen and I had many conversations with their programs: “How are you using this data? What information does it provide you? How is this measure meaningful to you?” Then we narrowed it down to: “What is the impact this program is trying to make, and how can we measure that?”
So it was mainly refining and tweaking what they already had in place.
Very few of their programs needed help actually building their goals, objectives, and measures. Like I said, they had a lot of that in place. It was more about refining and prioritizing what was most important, or rethinking some of those measures.
With Maryland, it was very similar. They also — Katherine and Pam will show you this — have a great performance management toolkit that they’ve developed. Their programs walk through the different phases of their PM toolkit to think through: what are their goals, what are they trying to achieve, who is their customer, what data is available, and what measures are meaningful related to the work they’re doing. They even look at it from a logic model perspective — short-term, intermediate, and long-term outcomes. Because they had this toolkit, most of their programs had already thought through these steps and started to identify what would be meaningful measures. So again, most of their technical assistance was spent refining what they had developed or finishing out what they had started. They may have begun conversations about initial goals or the impact they were trying to make, and then we spent our time building that out further with measures and prioritizing what would be most important to monitor and get feedback on in a performance management system.
They also have staff internally — Katherine and Pam will share more about this — who offer their own technical assistance and coaching support for programs working on performance management and QI. That’s a great example of how differences in capacity, infrastructure, and support can shape the process. This particular health department has that additional support, with a team of folks available to offer assistance throughout the entire department.
So, I’m going to turn it over to Carmen and let you hear directly from the health departments — their stories and their performance management journeys.
CARMEN JOHNSON:
Hello, my name is Carmen Johnson. I'm the Community Health Engagement and Planning Manager at Tarrant County Public Health. I'm housed in the Family Health Services Division, and we began receiving technical assistance (TA) around November of last year to support our performance management process. As Amanda mentioned, our performance management work had actually started a bit earlier than that. But for the purposes of this presentation, I’m going to focus on the past six or so months. I’ll cover our performance management timeline with TA, our QI culture assessment results, some successes and best practices we’ve had, as well as some challenges and lessons learned.
The goal for us in receiving TA was to strengthen our ability to provide excellent service through performance monitoring and engaging in quality improvement. We are currently in the planning, training, and application stage, and we’re moving toward monitoring and regular program evaluation.
We started our performance management training and TA with Amanda McCarty in early November. This was really important, as Amanda mentioned, to get all of our staff on the same page. We had been working on our performance management system for almost two years, and during that time, we experienced a lot of leadership changes. As most health departments know, when leadership changes, so do priorities. So, our performance management plan and its implementation have shifted over the past two years. We requested TA to help us reframe our performance management efforts, put a framework around it, and get all of our leadership and managers aligned on what performance management means for us.
We then conducted our QI culture assessment with Amanda in December. From that, we were able to assess where we were in terms of QI, and it helped inform our next steps. We followed up with individual meetings between Amanda and each of our divisions to refine our KPIs, as she mentioned earlier. Amanda provided us with a consultant summary, which I’ll talk more about in a moment. We’ll be using that to help us further refine our action plan over the next year or two.
Currently, we’re in the process of modifying some of our performance measures and KPIs in QuickBase, which is our performance management system. We’re also finalizing KPIs that will be used not only in our county budget book but also to track department-wide performance.
Our QI culture assessment evaluated each of the six foundational elements of a QI culture: employee empowerment, teamwork and collaboration, leadership commitment, customer focus, QI infrastructure, and continuous process improvement. Based on this assessment, we were provided with a draft QI action plan for the next two years.
Some highlights from our QI culture assessment:

Employee empowerment — we were at a 1. We’re working on improving the standardization of our policies and procedures. Our Year 2 goal is to reach a 3, which would mean not only improving standardization but also using those policies and procedures and moving into a more proactive planning stage.

Teamwork and collaboration — this is happening both formally and informally across our department and teams. We’ve been working hard to become less siloed and more collaborative. Our two-year goal is to reach a 3, meaning continuous collaboration and shared systems.

Leadership commitment — we were at a 1. While leadership was supportive of ideas for change, at times direction was lacking, which felt like a lack of commitment. As I mentioned earlier, we’ve had several leadership changes over the past year, which can create instability and shifting priorities. But we’re optimistic that with TA and our new leadership, we’ll reach a goal of 3.5 in the next two years.

Customer focus — we scored low in this area. There’s been a general decline in services offered, largely due to budget cuts — something I’m sure many can relate to. Fewer services and fewer staff often lead to lower customer satisfaction. Our goal is to improve this to a 2 within two years. This is a leadership priority — ensuring excellent customer service for both external customers (like patients in our clinics) and internal customers (our employees).

QI infrastructure — we scored a 1.5. We had just started laying the groundwork for QI and hadn’t completed many QI projects. Based on the assessment, we’re using customer focus as our first QI project area. Our two-year goal is to reach a 4.

Continuous process improvement — at the time of the assessment, we were just getting started. We were still figuring out how to use data continuously, make data-driven decisions, and evaluate our work. Our goal is to reach a 2.5 within two years and use data regularly.
The next steps for our QI action plan, which was developed in conjunction with our TA, are to review and update our current QI plan in accordance with the results from our QI assessment. Our initial focus for QI projects is going to be around customer service. As I stated, that’s a priority for our leadership. We’ve begun discussions with our leadership team around that, and we’ve also started discussions around agency-wide training on QI and customer focus. Once we receive our initial results, we’ll revisit to see if further work is needed around QI and customer service, or if we need to turn our attention to another one of the lower-performing sections of our QI culture assessment.
The next steps in our timeline regarding QI project planning include identifying the QI project we’ll be conducting, which will focus on training and customer service. We’re also going to be pulling our first quarterly performance measures report. As Amanda stated, we already had a performance measurement system in place, but now that those measures are refined, we’re going to pull a report to start looking at the data to make decisions and evaluate program performance. We’ll also be starting our customer service QI project, which I’ll be leading in conjunction with our workforce director, to begin assessing where the needs are within our agency around customer service.
Some successes and best practices we’ve had since receiving the training include having clear goals and expectations. That was truly a gap and a need here at the agency. Now we have a framework and a timeline. I think people in our agency are really understanding why we’re doing performance management and are starting to see the benefit of what we’re doing — and they’re making their own goals as well, which is really exciting.
We’re also doing continuous monitoring. As I mentioned, we’re going to be pulling our first quarterly report of all the agency’s performance measures to really take a look at how our programs are doing and identify areas where additional TA or QI might be needed. We’re really excited about that.
Leadership support has also been a major success. Having the buy-in of our leadership team and our health director has been a game-changer in terms of moving the needle forward on implementing our performance management system. It’s been really exciting to have the leadership team champion this work and collaborate with their teams in a less siloed way.
We also have some workforce development opportunities — areas where our agency staff can receive more training around performance management, QI, or other capacity-building practices that will help us carry out this project. That’s been really great.
Some challenges we’ve seen — something that’s been consistent throughout the implementation of our performance management system — include staff buy-in. As I mentioned, with changes in leadership, scope, and priorities, it’s been difficult to get everyone on the same page at times. The TA has helped a lot by providing a framework and a timeline for implementation, which has increased staff buy-in, but it continues to be a challenge.
Another challenge is the lack of managerial training. Even though some staff may have worked with performance management systems or had some training, the actual understanding of the difference between a KPI and a performance measure, or other related concepts, is still a struggle and continues to be an area for improvement.
A third challenge is data reliability. As mentioned, we had been entering data based on the performance measures we chose and entered into our system. However, with staff turnover and changes in data sources, data reliability can be a challenge. That’s something we’ll be focusing on in the next year.
Data-informed decision-making is another area of concern. Of course, if your data isn’t always reliable, making decisions based on that data can be difficult.
Finally, competing priorities. As we all know, the landscape of public health is changing daily. Implementing a performance management system in this environment can be challenging when there are other priorities and possible staff changes. These are things we’ll continue to monitor and work on over the next year.
But I can’t express enough how helpful the TA and the framework around our performance management system have been over the past six months. That’s it from me. If you have any questions, my contact information is here. I’ll turn it back over to Melissa.
TOUMA:
Yeah, thank you so much, Carmen. I’m seeing some questions in the chat, and we’re going to try to answer some of them as we go. We’ll also hold questions for the panelists until the end, so feel free to drop them in the chat and we’ll definitely get to them.
At this time, I’d like to pass it over to our Maryland team — Katherine and Pam.
KATHERINE FELDMAN:
Well, hello. Can you hear me now? I had to press a number of buttons. Sounds like you can hear me — that’s terrific.
I’m Katherine Feldman, the Chief Performance Officer here at the Maryland Department of Health, and I’m joined by my colleague Pam Tenemaza. I think I have a larger speaking role today, but we are joint performers in this effort, and I couldn’t do this without Pam. I’m glad she’s here and hopefully she’ll chime in as needed as we go through this.
I think both Amanda and Carmen’s presentations really set us up nicely to tell our story, so let’s move into that without further ado.
As Amanda referenced, we do have this toolkit — this implementation guide. I’m going to introduce you to that and walk you through how we rolled it out and how we’re currently rolling it out. This is very much a work in progress. Then we’ll highlight some successes, challenges, and lessons learned. I think this will be a really nice complement to the two earlier presentations.
So, where are we starting from — or where did we start from — on this journey?
As a state health department, we received initial public health accreditation in 2017. Then there were a number of changes in leadership and personnel. Some of you may remember the pandemic, which really upended things here at the health department — as it did across the U.S. in both local and state health departments. Specifically regarding this topic, it resulted in the erosion of our very nice, robust quality improvement infrastructure — and our performance management infrastructure.
We were fortunate that new leadership came on board and really embraced performance management and recommitted to public health accreditation. A very small team — that’s going to be a theme in my presentation — was tasked with reconstituting the quality improvement and performance management infrastructure. That small team was Pam and me. I’m exaggerating a little — we did have some additional staff support for this, and we feel fortunate that we did. We’re delighted to announce that we achieved re-accreditation just a few months ago, and that’s due in part to some of the incredible technical assistance we’ve received over the past year and a half from some of the folks on this call.
That’s going to be another theme — really encouraging folks to take advantage of technical assistance when it’s available to you.
We did have a kind of playbook for the QI infrastructure. We had a QI council, a QI steering committee, and a charter. So we were able to follow that playbook, adapt it to current needs, and really stand it back up. However, our performance management system at large really required starting from scratch. We didn’t have a playbook or a starting place, and we had a lot of learning to do. This was all new to me — a lot of these concepts and approaches.
So, we developed a performance management implementation guide with the idea, as Amanda described, that programs would be able to work their way through it. I’ll talk you through some of the steps in a minute, just at a high level, to help programs identify their performance measures after being able to articulate their program’s purpose and describe the activities, outputs, and outcomes associated with their programs.
You’ll note the Turning Point framework — the Public Health Performance Management System framework — and we based our performance management system on that.
To create this guide, we really relied on existing resources. That’s another theme and lesson learned: leverage all of the great resources that are out there. In this case, we used NACCHO’s “Measuring What Matters in Public Health,” which our ASTHO colleagues introduced us to, and the Results-Based Accountability implementation guide. We married those two resources to meet the Maryland Department of Health’s needs. It was a customization — a tailoring of two incredible resources to meet our context.
The idea is that public health programs at our state health department would work their way through the guide and associated worksheets to establish performance management for their programs. We initially laid out seven steps, starting with defining a program purpose, identifying outcomes, goals, and objectives, and linking program activities to those outcomes and objectives. Ultimately, this results in identifying performance measures. Through those middle steps, we create a logic model. Once you’ve identified your performance measures, you get into the details — the nitty-gritty. Just as Carmen was talking about: do we have the data to support this? How frequently is it updated? Things like that. That was our starting place.
This is an example of one of the associated worksheets that a program could work through to define their program’s purpose. This is just a little excerpt of the logic model section, giving them an example and providing some instructions for working their way through the associated worksheets — developing program goals and objectives so they can then identify performance measures that reflect progress toward those goals and objectives.
As I mentioned, we used Results-Based Accountability in addition to NACCHO’s “Measuring What Matters in Public Health.” This is really adapted straight out of the Results-Based Accountability framework — identifying performance measures by asking three questions: How much did we do? How well did we do it? Is anyone better off? The metrics that fall into the lower-right quadrant — the “Is anyone better off?” measures, expressed as percentages — are really going to demonstrate that ultimate outcome. But measures in the other quadrants are also very important to help provide insight into program success.
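As an illustration of how those three questions sort candidate measures, here is a small hypothetical example. The measures below are invented for illustration and are not MADAP’s or Maryland’s actual metrics.

```python
# Hypothetical illustration of sorting candidate measures under the three
# Results-Based Accountability questions; these are invented examples.
rba_measures = {
    "How much did we do?": [
        "Number of clients enrolled",
        "Number of assistance applications processed",
    ],
    "How well did we do it?": [
        "Percent of applications processed within 10 business days",
        "Percent of clients satisfied with enrollment support",
    ],
    "Is anyone better off?": [
        "Percent of enrolled clients who remain in continuous care",
        "Percent of clients reporting improved access to medication",
    ],
}

for question, measures in rba_measures.items():
    print(question)
    for measure in measures:
        print(f"  - {measure}")
```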
We created this and rolled it out. We did some broad introductions and advertised it in various forums across the agency. We identified some select public health programs that were willing to be early adopters and help pilot the guide. We recognized pretty quickly that some ancillary training materials would be helpful, and we have a fantastic intern working with us on that. You’ll hear more about that in a moment.
We also received technical assistance — from ASTHO, from the Public Health Foundation. I’m specifically referencing Amanda’s wonderful technical assistance in working with programs to identify meaningful performance measures. We’ve had other performance measure TA along the way, and it’s all been really helpful in supporting our team’s progress in standing up performance management.
Not surprisingly, we learned a lot of lessons, and we’ve since refined the guide. We’re currently in version two, and we still maintain that it’s a work in progress — but it’s a working work in progress. It’s really exciting to see programs use it and tackle it.
Now we’re trying to be very intentional about who should be using the guide. We can’t expect all programs to adopt it immediately. So, who are our top-priority programs for working through this guide so we can stand up a baseline set of measures for the agency?
This is just a sample of some of the information we would tell programs about using the guide — providing parameters about who should participate, giving them an understanding of the time involved. I’ll tell you, the estimate of six to eight hours worked for some programs, but it took a lot longer for others. We provided suggestions for how they might approach it in terms of scheduling. All of us are juggling many responsibilities, so we needed to be mindful of not adding burden. We pointed them to the guides, which are in collaborative documents. They can get a copy of the guide, make a copy of the workbook, and work through it interactively.
I mentioned the updates. From those original seven steps, we realized the logic model was embraced by some programs and a real stumbling point for others. So we’ve made it a little more of an optional exercise. Also, working with Amanda, it was wonderful to see her walk programs through getting to performance measures in a very conversational, commonsense way. That helped inform our revisions to the guide so that it’s not so prescriptive and allows for more back-and-forth and flexibility.
We already have two ancillary slide decks available — one on logic model development and one on how to develop goals and objectives. We’re working on a customer focus deck as well, which is exciting to see.
These are just some screen grabs from the goals and objectives deck that we have. We continue to want to be as concise as possible, but we also want to make sure that programs are appropriately equipped to be successful in these efforts. We want to ensure they have enough information — but not so much that they’re overwhelmed or paralyzed into inaction.
Okay, so I think I’ll just take a moment to highlight one of our very early adopters. In fact, they got the pre-1.0 version — our Maryland AIDS Drug Assistance Program (MADAP). We can’t thank this team enough for being our initial — well, I don’t want to say guinea pigs, though guinea pigs are awfully cute — but they were our initial early adopters. We learned a lot by working with them through this process.
Here, you can read their purpose statement and who their customers are. Some of this might seem like common sense, but what we’ve found is that working through these guides is really helpful for level-setting and clearly articulating why a program exists and who it serves.
This is a logic model that we can take from the more cumbersome spreadsheet format and move into a more typical diagram format. You can see here, working from the left, we have the inputs — all of the resources that the MADAP program has to implement and provide their services. The next column includes the activities, followed by the outputs. Then, as you move right, you see the short-term, intermediate, and long-term outcomes, and finally, the ultimate impact.

We are using Clear Impact as our performance management data platform. This is just a quick screen grab of their scorecard. We’re really delighted that we’ve taken them all the way from describing their program’s purpose to having a scorecard they can monitor, evaluate performance with, celebrate successes, and identify opportunities for improvement.
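For readers curious what sits behind a scorecard entry, the sketch below shows the kind of information a single measure record might carry: data source, update frequency, baseline, and target. The fields are generic assumptions for illustration, not Clear Impact’s actual data model or Maryland’s worksheet fields.

```python
# Generic sketch of a single measure record (hypothetical fields; not Clear Impact's
# actual data model or Maryland's worksheet fields).
from dataclasses import dataclass
from typing import Optional

@dataclass
class MeasureRecord:
    name: str                  # what is being measured
    rba_question: str          # "How much", "How well", or "Is anyone better off"
    data_source: str           # where the number comes from
    update_frequency: str      # how often the data are refreshed
    baseline: Optional[float]  # current value, if known
    target: Optional[float]    # value the program is working toward
    owner: str                 # who reports and reviews the measure

example = MeasureRecord(
    name="Percent of clients re-certified before their coverage lapses",
    rba_question="How well did we do it?",
    data_source="Case-management system export",
    update_frequency="Monthly",
    baseline=72.0,
    target=90.0,
    owner="Program manager",
)
print(example.name, "-> target:", example.target)
```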
Just quickly, so we have time for discussion, I want to share some of the successes, challenges, and lessons learned. You’ll hear some recurring themes. This is a big undertaking, and it’s been an incremental uptake by our public health programs. It’s gradual. Sometimes, leadership wants all programs to have their scorecards ready to monitor, but it takes time. It’s happening, though, and we’re delighted with the successes and the scorecards we’ve stood up, and how this work is moving forward and being accepted by our programs.
We’ve also received really positive feedback from the programs as they work through the guide. They say that wherever they are — whether they’re a well-established program or at an inflection point, like receiving new funding and needing to think about how to apply it — it really helps the team contribute to a shared understanding of what the program is for and how they’re going to achieve those ultimate outcomes.
We’ve always loved the technical assistance offerings we’ve tapped into, primarily through the Public Health Infrastructure Grant. Amanda’s help really expanded the reach of our small team and helped us learn additional approaches to performance management. That’s been incredibly helpful.
Some of the challenges: we’re a small team, and we’re a small team that’s learning. Often, we feel like we’re just one step ahead of the programs working through this. But we’re learning a ton, and it’s been a challenge we’ve been happy to take on.
As I mentioned, the uptake has been gradual and incremental. Sometimes we’d like to see more progress, but we’re okay with where we are. We’re delighted with the uptake, and we just need to set expectations with leadership that they’re not going to suddenly see all programs represented with scorecards.
Another challenge is the varying abilities of our programs to work on this independently. Some folks already have a great understanding of logic models and performance metrics, often because of their funding sources. Others require a lot more hand-holding, and we don’t have the bandwidth to support everyone as much as we’d like to.
And then, of course, there’s change management. I think Carmen mentioned this as well. It’s such an essential component of this effort, and we don’t have someone dedicated to change management. That’s definitely on my wish list.
Some of our lessons learned: set reasonable expectations. This is going to take time. We can’t expect it to be done overnight.
We developed our materials to be standalone, and that’s been great. But as I mentioned, some programs are more able to do this than others. So we really need to support programs wherever they are in working through the guide.
We benefited from PHIG technical assistance. If you have the opportunity — do it. Do it. Do it. Do it. We really expanded our reach and learned a lot in the process.
There are so many great resources out there. It takes a little investment to get the lay of the land, and that’s always tough given how busy we all are. But really — leverage available resources. I’ve listed some here, and they’ll really jumpstart your work and set you up for success.
I think those are all my lessons learned. I really appreciate the opportunity and am happy to take any questions.
TOUMA:
That was fantastic. Yeah, thank you so much, Katherine.
The Q&A box has been lighting up. We’ve been trying to answer some of those questions as they come in. What I’m going to do now is start with the questions directed to our panelists about the work they’ve been doing.
There are some questions for Katherine, Pam, and Carmen — if you’re willing to share some of the materials you talked about today. But let’s start with the questions that relate to the strategies and the work you’ve been doing to stand up your performance management systems.
The first question: can you share more on the QI culture assessment and planning tool?
I added two links — one to the QI culture assessment that NACCHO hosts, and another to the PHF Performance Management Self-Assessment Tool. Those are two great tools to use. So my question to Carmen, Katherine, and Pam: what did that QI culture assessment look like for your teams? Who was in the room for that assessment, and how did it look?
Carmen, if you could start — thank you.
JOHNSON:
Yeah, so for our team, we actually invited all of our leadership team. That included our executive leadership team, our division managers, and supervisors — basically anyone who could make decisions was in the room to be part of the assessment.
TOUMA:
Great.
PAM TENEMAZA:
And for Maryland, we actually had our QI Council and our QI Steering Committee — which is kind of like our leadership at MDH in Public Health Services — participate. And we actually just did it very recently; we got our findings from Amanda the other day.
TOUMA:
Yeah, awesome. Thank you.
Someone stated, “I find in this work there are many barriers around the language we use because it's complex for many, and it's also often new language.” I think that’s true for many. So, what have each of you done to combat or work around these language barriers?
JOHNSON:
For us, I think having that initial performance management training for everyone was a good way to reset and make sure we all had a common language. Even though some staff had done a training years ago, and others had participated more recently, that reset — to ensure we were all using the same language — was very helpful in addressing different interpretations or terminology.
TOUMA:
Yeah, thanks, Carmen.
FELDMAN:
This is something we’re still wrapping our arms around. I think we need to develop our basic performance management training — that will be very helpful. One of the things that’s not yet done in our guide is our glossary. That’s something we were very concerned about — making sure everybody knows what we mean when we use a particular word. Because, boy, it is confusing and can get complicated very quickly. So that’s still a work in progress.
And I think one of the realizations I had — this came from working with Amanda — is that it’s important to be clear about what we mean, but at the same time, don’t get too hung up on it. Like, “I mean this when I say ‘goal,’ and I mean this when I say ‘objective.’” Other people are going to say kind of similar but slightly different things. That’s not a perfect answer, but I think there’s a balance. Yes, it’s important, but don’t let it become a barrier. It’s still a work in progress here in Maryland.
TOUMA:
Absolutely. I’ll also add — you both talked about change management and what that requires. That’s all about communication. So, if folks are thinking about how to roll out their PM or QI program or system, think about what an internal communication plan might look like. How do you start building that foundation? How do you start using that language, hearing leadership use that language, and reinforcing it at various touchpoints with staff across the agency? The more people hear it, the more familiar it becomes. That’s definitely an element of change management as well.
Okay, we answered that one live. Let’s see — this next one is directed to you, Carmen. Could you share more on how you intend to get staff buy-in?
JOHNSON:
That’s been an ongoing challenge, especially with changes in leadership and priorities. We’ve kind of had to be the little train that could to get people to buy into it. What’s helped, I think, is having a framework around it. Having all the managers and supervisors — and having leadership buy-in — has been helpful in driving that bus, instead of it just being a small team, like me, trying to say, “Hey, can you guys turn in your measures, please?”
In our case, having a framework, constant communication, and a timeline has been really helpful. Also, asking for their opinion — what works for you, what doesn’t — has made a difference. A lot of the time, when implementing a system, it’s a top-down approach. But really asking, “What’s your opinion? How do you see this? How does your program work? How can we measure the impact of what you do on a daily basis?” — that’s helped get buy-in.
I saw someone in the chat mention not viewing performance management as punitive. That’s something we’ve really had to work on. But I think helping everyone understand that this is agency-wide, and asking, “How do you want to measure the impact of your work?” — that’s been really helpful.
TOUMA:
That’s great — thanks, Carmen.
I know we have five minutes left, so I’m just scanning. There are lots of questions here about the resources, the guides, and your QI plans.
Katherine and Pam, it looks like here’s a question for you: what was the lift from your two-ish person team? For example, how long have you been working on this, and what percentage of your time did it take?
FELDMAN:
Two-ish — I’ll say three-ish. When we really got going on this guide, it was Pam and another former colleague who contributed immensely. Maybe he’s even on this webinar — that would be great. Pam, when did we start? It’s been a long while.
TENEMAZA:
Yeah, I’d say at least nine months.
FELDMAN:
And maybe a teeny percentage of my time, a fair chunk of Pam’s time, and a fair chunk of this other individual’s time. But we really did start with some great materials. I’d say most of the work has been refining. You can get a long way by taking NACCHO’s “Measuring What Matters in Public Health” and working your way through it. Then it’s about refinement, saying, “Okay, this is good, but this part trips people up a little. Let’s tweak it.”
Melissa, I also see a number of folks asking for the guide. We’d love to send it out, but we need to do a little quality check on it first. So we’ll be in touch about getting the materials out to the audience. We’ve put a lot of effort into it, and it would be great to share.
TOUMA:
Yeah, absolutely. Folks would love to see it. That’s great — thank you.
I think we have about two minutes left. Carmen, there was a question for you: how do you plan to measure customer service?
JOHNSON:
We’re currently starting with trainings. We’re developing pre- and post-surveys for the staff trainings around customer service. Once we get the results from those, we’ll go in and modify based on what the needs of the employees are. That’s kind of where we’re starting. But I’m sure patient satisfaction surveys and other tools will come afterward, once we get an assessment of where staff are and what their needs are initially.
TOUMA:
Great, thanks, Carmen. I think in the resources we’ll try to send out after this webinar, there will be some that could help with that question — looking at customer service and using that to drive your performance management system.
Okay, I’m going to wrap up those questions for the panelists. There was one question I think everyone might be interested in: can the initial QI assessment be sent to the entire workforce, or is it best practice to limit it to management staff?
Personally — and Katherine, Carmen, feel free to weigh in — I think you could break it down into a two- or three-question survey, maybe four questions, if you want to include it in an employee satisfaction survey to get everyone’s take on QI culture. I think it would be challenging to send the full assessment to absolutely everyone in the agency, but I also believe it’s good to have all levels of staff involved in the assessment. It doesn’t necessarily need to be limited to management staff.
Katherine and Pam, you talked about doing the assessment with your QI Council, which might include management but also frontline staff. So there are different ways to approach it. Carmen, Katherine, Pam — any thoughts?
JOHNSON:
Yeah, I agree. Starting with our managers was really smart for us. Once you get a larger group, it’s harder to manage everyone’s understanding, and it’s also harder to get everyone’s feedback. So if you wanted to do it with a smaller group of 25 or fewer first, and then, as Melissa said, add a few satisfaction survey questions for the rest of the staff, I think that would work well.
TOUMA:
Thanks, Carmen.
TENEMAZA:
Yeah, and I would just add — if you take a look at the assessment, we worked with Amanda, who graciously led the assessment. But it’s very long. It’s definitely a big assessment.
TOUMA:
Definitely.
All right, thank you all so much. I know we are at time. Leslie, if you could just pop up the slides real quick.
As I mentioned before, we’re going to try to share some PM resources that could be helpful to you all, including a list of trainings, resources, and some of the guides that Katherine also mentioned. We’ll send this out in the resource mailer.
We also have several webinars coming up. We have two more performance management webinars in May, and a couple around academic health departments. You’ll be able to find those registration links — if not now, then soon — on ASTHO’s events website.
And finally, please take our evaluation. We really appreciate your thoughts on our products and this presentation. Thank you so much to Carmen, Katherine, and Pam, who I know have a lot going on in their weeks — especially nowadays. I really appreciate you all taking the time to talk with us today and share your performance management journeys.
Thank you very much, and thank you to everyone for joining. I hope you all have a good rest of your day.