“Indicators for Indicators”
Now that we are approaching the fall, many of us are looking ahead to next year’s budget and strategic plan, and to updating or enhancing the quality dashboard. This time of year also brings new Board members and medical staff officers, some of whom may need to be educated on these processes, in particular the quality improvement initiatives. At a minimum, they need to understand the quality dashboard, the indicators that comprise it, and the process by which the system will improve performance.
It is easy to get overwhelmed with quality indicators. Many health care systems track hundreds of them on a regular basis. Ultimately, these indicators should roll up into a single dashboard. When educating new Board members, it is tempting to have them “drink from the firehose” by showing them every indicator the system is tracking, and it is not hard to quickly lose the audience in a sea of data and acronyms. Oftentimes, out of sheer frustration, a prudent Board member will ask: “How did you determine which of these indicators to incorporate into the dashboard?”
So, both the time of year and the need to educate new leadership make now a good time to revisit the concept of “Indicators for Indicators”. Specifically, of the myriad indicators that are tracked, how do we pick out the cherished few that will be reported to the Board and management on a regular basis?
Having worked with many systems over the years, I have found that the following points greatly assist in transforming this mass (or morass) of data into a usable, effective dashboard that will drive the results necessary to move the organization forward.
1. The dashboard should be just one page (and not a page of micro-print!). It should go without saying that the document is concise, easy to read, and enables readers to quickly focus on the most important indicators at any given time. Often, the indicators are color-coded red, yellow, and green so that the outlying indicators are quickly apparent.
2. The dashboard should consist of 3-6 Value categories, as outlined by the organization’s mission, vision, and values. Examples include patient safety, quality outcomes, cost-effectiveness, citizenship, and mission goals.
3. The indicators that populate the system dashboard should be distilled down to approximately 8-12 individual entries.
4. All the indicators for the individual members or departments of the system should roll up into a common dashboard (a minimal sketch of one way to represent such a roll-up appears after this list). Having said that, I do acknowledge that there may be specific issues within an individual entity that must be managed, and that these may deserve a place on the local dashboard. Ultimately, though, everything that is monitored and managed should contribute to the success of the organization and correlate with the entries on the system dashboard.
5. The dashboard is a dynamic document that needs to be continually managed and massaged as the organization and the health care climate change. Individual indicators will change with time, either because the desired result has been achieved and the indicator is no longer needed, or because a more important indicator has arisen to take its place. Incidentally, the Value categories themselves should not change much over time.
6. The ideal indicator should have the following attributes:
a. The indicator must be significant to the organization. Put simply, if the organization is going to the effort of obtaining and managing an indicator, accomplishing it must have a significant positive effect on the system; otherwise, it is a waste of time. The fundamental premise is that successfully accomplishing the components of the dashboard will result in the success of the system. If this argument cannot be made, the data is not worth including on the dashboard.
b. The indicator must be measurable. The organization must have the capacity to appropriately abstract and obtain the indicator; in practice, many indicators are obtained through the existing EMRs.
c. The indicator must be reasonably objective. There should be a common, agreed-upon definition for the indicator, with any ambiguity minimized, if not eliminated, and no ability to “game” the number to benefit the organization.
d. The indicator target must be attainable with reasonable effort. It does no good to track an indicator whose target cannot be reached. Targets should be aggressive but doable by the organization.
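For readers who like to see the structure made concrete, below is a minimal sketch, written in Python, of one way such a dashboard might be represented: a few Value categories, each holding indicators with agreed-upon targets, a simple red/yellow/green status rule, and entity-level indicators rolling up into the common system view. The class names, thresholds, and sample numbers are purely illustrative assumptions on my part, not a prescription or any particular organization’s model.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Indicator:
    """One dashboard entry: a common definition, a target, and a current value."""
    name: str
    target: float
    current: float
    higher_is_better: bool = True  # e.g., compliance rates vs. costs or infection rates

    def status(self) -> str:
        """Simple red/yellow/green coding against the agreed-upon target (illustrative thresholds)."""
        ratio = self.current / self.target if self.higher_is_better else self.target / self.current
        if ratio >= 1.0:
            return "green"
        if ratio >= 0.9:
            return "yellow"
        return "red"

@dataclass
class ValueCategory:
    """A Value category drawn from the mission, vision, and values, e.g., patient safety."""
    name: str
    indicators: List[Indicator] = field(default_factory=list)

def roll_up(system: List[ValueCategory], entity_indicators: List[Tuple[str, Indicator]]) -> None:
    """Fold entity- or department-level indicators into the matching system category."""
    by_name = {category.name: category for category in system}
    for category_name, indicator in entity_indicators:
        by_name[category_name].indicators.append(indicator)

def one_page_view(system: List[ValueCategory]) -> str:
    """Render the whole dashboard as a single, concise page."""
    lines = []
    for category in system:
        lines.append(category.name.upper())
        for ind in category.indicators:
            lines.append(f"  [{ind.status():6}] {ind.name}: {ind.current} (target {ind.target})")
    return "\n".join(lines)

if __name__ == "__main__":
    dashboard = [ValueCategory("Patient safety"), ValueCategory("Cost-effectiveness")]
    roll_up(dashboard, [
        ("Patient safety", Indicator("Hand-hygiene compliance (%)", target=95, current=88)),
        ("Cost-effectiveness", Indicator("Cost per adjusted discharge ($)", target=9500,
                                         current=11000, higher_is_better=False)),
    ])
    print(one_page_view(dashboard))

In a real system the numbers would be abstracted from the EMR and the display would be your one-page, color-coded document; the point of the sketch is simply that 8-12 well-chosen indicators, grouped under a handful of Value categories, are easy to manage and easy to read.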
I have found the above thoughts very useful, both in the day-to-day operations of the system and as part of the ongoing leadership educational program.
As always, I appreciate your thoughts and feedback.