Complex care programs deliver whole-person, person-centered care to individuals with complex health and social needs. The individualized nature of the work makes it difficult to glean lessons for an entire panel or population of patients, as it can be hard to see the forest for the trees. By analyzing, sharing, and discussing process data, team members and supervisors can assess where the program is not being delivered as intended and identify opportunities for improvement.

How not to do quality improvement work

Early in my tenure as a Program Manager at the Camden Coalition, I learned a hard but crucial lesson about how not to do quality improvement work, and about the difference between accuracy and effectiveness.

I had been tasked with creating weekly scorecards for our high-touch, community-based complex care team. Scorecards are an important tool — they help care team members and supervisors ensure that their programs are consistently delivering quality care, and can be used to assess where care team members may need additional support or a change in approach.

Regularly reviewing program metrics, such as the data collected via scorecards, is how complex care teams can “activate their data”: working collaboratively to understand what is driving the numbers, and where and how to improve.

My education in economics and statistics had prepared me for this task. I reviewed the care team’s workflow diagrams, oriented myself to the database, identified priority measures, produced relevant charts and graphs, and packaged them into a scorecard that could be filtered at the individual, team, and program level.

And I did all of this largely in isolation. I engaged only one member of the care team throughout the process: the 22-year-old program assistant, Andrew. Like me, Andrew was a young white male who grew up in middle-class suburbs. In contrast, the care team members were largely women of color who were from the city they served. Andrew had the least experience providing care and the least understanding of the problems we were trying to address. I went to him because I didn’t find him intimidating.

With Andrew’s help, I completed the first draft of our scorecards late on a Friday afternoon. The care teams had gone home for the weekend, so I printed out the four-page scorecard on large 11×17-inch paper. Each person had a personalized scorecard showing how they were doing compared to their peers. The charts were colorful and easy to read, with green marking where the provider had met the target (‘success’) and red where they had ‘failed’.

I was proud of myself.

On Monday morning, I was met with an icy reception at the office. Some care team members were confused by the scorecards and what they were communicating. Others were embarrassed to be portrayed as doing worse than their peers. All were angry that their extremely difficult and complex work had been graded with stoplight colors, and that the patients they care deeply about had been reduced to dots on a page.

The analysis I had produced was (mostly) correct. But despite being accurate, the scorecards I created were ineffective. It was a quick and hard lesson: being effective requires that care team members be brought into the process, that they understand what the data is communicating, and that they are motivated to incorporate the data into their decision-making with patients. I had done none of that.

After my initial snafu, one of the care team leaders, Victor, offered to help repair the damage I had caused. Victor is from Camden. Before overseeing our care teams, he worked with victims of violence in the city, mostly young Black men like himself. He has a deep understanding of our care management work and has earned the respect and trust of the care team.

Victor and I worked together with nurses Jeneen and Renee to identify a single measure that they felt the care team would respond to: in-person engagement rate. The care team already had an informal goal of meeting in person with every patient at least every seven days, and they understood that cadence as a measure of the relationship. They were already bought in.

Victor had a whiteboard installed in the care team’s space where everyone could see it. On it, he wrote the names of patients who hadn’t been seen in more than seven days, as well as those who needed an initial home visit or primary care appointment, along with the name of their care team lead. The care team saw him do it, and they discussed it. These were people we weren’t connecting with enough. We needed to invest in those relationships.

Victor was trusted and respected. The care team knew that he had their interests, and those of the patients, at heart. And they responded to him. He clearly stated what he was doing and why, while staying light and playful. Care team members started teasing one another about how often their names appeared on the board. Teasing turned into collaboration and action. Team members began having deeper conversations about what was happening in patients’ lives that made connection a challenge. The result? Within a few weeks, the list of 20+ names dropped to a small handful.

Activating data is about using data to improve patient care, not about producing numbers on a page, and that improvement happens through conversation, prioritization, and problem-solving. Victor’s whiteboard was infinitely more effective at improving patient care than my fancy scorecards had been.
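Part of why the whiteboard worked is that the rule behind it was simple enough for everyone to understand and trust. For analysts who want to reproduce a list like Victor’s from program data, here is a minimal sketch in Python; the field names, sample records, and data structures are assumptions for illustration, not our actual database schema.

```python
from datetime import date

# Hypothetical patient records; field names and dates are illustrative only.
patients = [
    {"name": "Patient A", "lead": "Jeneen", "last_in_person": date(2024, 5, 1)},
    {"name": "Patient B", "lead": "Renee", "last_in_person": date(2024, 5, 9)},
    {"name": "Patient C", "lead": "Jeneen", "last_in_person": None},  # still needs an initial home visit
]

def whiteboard_list(patients, today, threshold_days=7):
    """Group patients overdue for an in-person visit by care team lead."""
    overdue = {}
    for p in patients:
        last = p["last_in_person"]
        days = None if last is None else (today - last).days
        if days is None or days > threshold_days:
            status = "never seen" if days is None else f"{days} days since last visit"
            overdue.setdefault(p["lead"], []).append((p["name"], status))
    return overdue

for lead, entries in whiteboard_list(patients, today=date(2024, 5, 10)).items():
    print(lead)
    for name, status in entries:
        print(f"  {name} ({status})")
```

The point is not the code. It is that the measure is transparent enough that the whole team can see exactly why a name lands on the board.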

Quality improvement in complex care

What I learned through that experience, and over the following years as I worked with our team and others across the country to activate data for quality improvement, is that this process is crucial for improving the quality of care that complex care programs provide. But the very characteristics that make complex care so unique and powerful can also make data activation uniquely challenging.

Complex care is:

  • Whole-person – the needs addressed are sprawling, ranging from advanced illness to addiction to housing instability. Many of these needs require action from external providers that care team members have no authority over.
  • Person-centered – each combination of needs is unique to that individual, including their environment and readiness to engage. Risk scores are often unavailable or incomplete and comparing patient progress across providers is difficult.
  • Relationship-based – care relies on providers building authentic healing relationships anchored in trust and acceptance. Aggressively managing to metrics risks undercutting the relationship and turning care into a check-box intervention.

But that doesn’t mean we should give up on data-driven quality improvement altogether. When done sensitively and collaboratively, activating data allows us to identify weaknesses in the care team’s workflow, assess where team members need additional training or support, and promote conversations about specific patients and how to best meet their needs. That process can look like:

  1. Starting with a discussion on what success looks like. Given the population we serve and the workflow we use to deliver care, when do we know that we have succeeded with a patient? When we succeed, what were the contributing factors?
  2. Understanding how your data is captured and stored. For most complex care programs, the workflow is complicated and full of ‘if-then’ branches, because care is person-centered and every person needs different support at different times. Understanding these nuances and how they show up in your data is essential to actually using the data you collect.
  3. Aligning on key moments and timelines. Hold conversations with the care team about how they approach each step of the process and by when they hope to complete each step. Connect these conversations to your earlier discussion of success and of which steps feel most critical or most reflective of how the care is going overall.
  4. Producing a simple workflow map. A detailed workflow document will be complicated and hard to read for staff who are unfamiliar with these tools. Simplify it based on the common paths that patients follow. Over time, maps can become more complex as staff grow more comfortable creating and reading them.
  5. Selecting a single measure. You are likely to have a number of key moments in care delivery that are worthy of regular review. Pick one! The initial goal should be to focus attention on a narrow part of the workflow where there is potential to improve. When you do improve, it is a team-wide success that every provider contributed to. Over time, you can add measures and focus conversations with individual team members on their opportunities for improvement. (A sketch of what computing one such measure might look like follows this list.)
  6. Reviewing the data as a team. Practice reviewing the data, and make sure everyone on the care team understands what it is communicating. Model for the care team that data will not be used punitively, but rather to drive conversations and opportunities for improvement. The data can be wrong! The care team’s work is challenging and non-linear, and you might need to adjust what you’re measuring and your target goals based on their experience.
  7. Using data to drive quality improvement initiatives. Identify changes to the workflow that you think will address the weak points, then use the data to assess whether they worked.
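To make step 5 concrete, here is a minimal sketch of computing a single measure, the in-person engagement rate from the story above, as a weekly number the team can review together. The visit log, patient roster, and weekly definition of “engaged” are assumptions for demonstration, not a prescribed implementation.

```python
from datetime import date, timedelta

# Illustrative visit log and roster; names and dates are hypothetical.
visits = [
    ("Patient A", date(2024, 4, 29)),
    ("Patient B", date(2024, 5, 2)),
    ("Patient A", date(2024, 5, 8)),
]
active_patients = {"Patient A", "Patient B", "Patient C"}

def weekly_engagement_rate(visits, roster, week_start):
    """Share of active patients with at least one in-person visit that week."""
    week_end = week_start + timedelta(days=7)
    seen = {name for name, day in visits
            if name in roster and week_start <= day < week_end}
    return len(seen) / len(roster)

# Review the measure week over week (steps 6 and 7); the effect of a
# workflow change should show up as movement in this number.
for week_start in (date(2024, 4, 29), date(2024, 5, 6)):
    rate = weekly_engagement_rate(visits, active_patients, week_start)
    print(f"Week of {week_start}: {rate:.0%} of patients seen in person")
```

The number itself is just a conversation starter: when it moves, the team discusses why, patient by patient.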

What I’ve learned over and over at the Camden Coalition is that effective, data-driven quality improvement in complex care starts and ends in conversation with the people doing the work. The data review tools (scorecards, dashboards) must be informed by care team members with a deep understanding of the problem we are trying to solve and the barriers to solving it. Data analysis may be able to tell us what happened, but the real value is in the conversation among care providers about why it happened. If the care team isn’t fully engaged in those conversations, then the program isn’t improving.

 

Looking for hands-on help activating your care team’s data? Learn more about technical assistance offerings from the Camden Coalition, and contact us at [email protected].