Using the Science of Measurement to Enhance the Art of Clinical Work (Part 1)

As promised in our October newsletter (click here to subscribe), Insight Partners has some exciting things planned for our third year, including this blog series by guest blogger Julia Pickup, MSW, LCSW. Julia is a highly skilled therapist, clinical supervisor, instructor, and program director. She thrives on developing clinicians to reach their full potential, building team culture and employee engagement, and leading teams to create mission impact through measurable outcomes. On top of all that, she’s a data geek like me!

Some of you might remember a post I wrote last year called “3 Reasons Why Your Staff Might Not Care About Your Data.” In it, I pointed out some of the most common mistakes program leaders make in measurement and how they contribute to staff’s (lack of) engagement with data. I hear this complaint most often from those who are leading clinicians, therapists, case managers, and other front-line service providers. These groups of professionals are sometimes the ones who resist and/or resent measurement the most. If this is your challenge, stay tuned. Over the next several posts, Julia is going to share with us how she’s been able to maintain a commitment to data-informed leadership, program performance management, and her values and worldview as a clinician while inviting her team of therapists into those conversations. Without further ado, here’s Julia . . .


I’ve always loved data; I was the weird clinical social worker who adored stats class. I love to find the trends and patterns in data so I can dig into their meaning and apply it to my clinical work. When I transitioned into my current role as a program director, I wanted to ensure that my team felt equipped to understand the outcomes and outputs they were producing. I also wanted them to feel empowered to own and use their data. This is the product of their work, after all. I encourage you to do the same.

Below are a couple of strategies for creating clinical value from your program’s performance data and engaging the clinical perspective to enhance measurement.

Engage Them as Problem Solvers

The quarter is over, and you’ve run your numbers. Something is off. Now what? Your direct service staff will be the first and best people to know if the problem is actually something about what you’re measuring and how you’re measuring it, rather than the work itself. Engage them as problem solvers before you start tweaking the program or looking at employee performance. Let me share a couple examples.

When Data Don’t Reflect the Reality

For years I provided in-home therapy to at-risk families, including many parents who had developmental or intellectual disabilities.  My gut was telling me that one of our measurement tools was not accurately capturing these families’ experience. Their scores didn’t reflect the changes I saw. I was invested in making it right, so I proposed to my program director that we try another measure more suitable for parents with special needs.  We piloted a new tool, which confirmed my hunch. We then rolled it out permanently across the team. The old measure was written at a reading level that wasn’t appropriate for our population and was longer than they could complete with care. With the new measure, the data told the same story we clinicians were observing in the homes! This was a win for our clients, our program, and for our culture of evaluation. If my Director had jumped to a conclusion or ignored my hunch, we could have jeopardized our funding, made unnecessary adjustments to our program model, and overlooked the important progress these families were making.

In another case, my program was struggling to meet one of our outcome goals related to child well-being, which we measured with a pre- and post-test standardized measure.  Each quarter, I engaged our therapists in exploring the barriers to success on this outcome and discussing suggestions to improve the scores.  The clinicians told me that they were seeing change the measurement wasn’t reflecting, because we defined success differently than the measure did. In other words, it wasn’t measuring the outcome we cared about. So, we are searching for another tool that will reflect the reality our therapists see every day and the change to which our program contributes.

In both cases, the observations and instincts of front-line providers were key to our problem solving. As those closest to the work, your service providers should be involved in setting your outcome goals, selecting your measurement tools, and interpreting your data.  This will increase their commitment to collecting and using data well.  Clinicians have their fingers on the pulse of the clients, and therefore the program.  A wise leader will involve them and treat their input as a hot commodity.

Be Transparent and Vulnerable

If your data suggest that something isn’t working, talk about it.  It’s okay if you didn’t perfectly project your outputs or outcomes for the year.  But don’t pretend it didn’t happen, and don’t pretend it doesn’t mean anything.  Own it when things don’t go as planned.  That will demonstrate vulnerability, model resilience, and encourage honesty within your team.  It will also foster a culture of learning and improvement, and make your team feel safe to make mistakes.  It could also encourage your clinicians to be more daring in articulating their hunches, participating in the design and improvement of measurement strategies, and telling their stories during group discussions.

The Bottom Line

The very skills that make clinicians artists – their compassion for and connection to clients, their creativity, and their sensitivity – make them invaluable thought partners in your evaluation efforts. Next time, we’ll look at four specific ways that outcome data can provide clinical value. We know how Sarah loves to kill two birds with one stone!
5 thoughts on “Using the Science of Measurement to Enhance the Art of Clinical Work (Part 1)”

  • I really resonated with this guest blog. We’re just starting to use new measurement tools we helped create for our program goals with students. The results were great, but I’m not sure they match what staff observed. I guess we need to put a topic about this on the agenda for our end-of-month meeting. Thanks for the idea and reminder.

    • I’m curious, Kathleen, what difference there is between what the measures show and what the tutors see? And have you got any theories about what accounts for the discrepancy?

  • The tutors see that students want to participate by giving answers and asking questions, but the measures show that fewer students are improving in those two things. The measures depend on classroom teacher response. Also, students feel they improve, but their grades don’t demonstrate that at the same level.

    • Hi, Kathleen. It’s worth exploring what the tutors actually “see” when they get the impression that students want to participate more in class, because that might be different from what you’re measuring with the teacher observations. Tutors might be observing students feeling more confident, students expressing a greater interest, etc., and those things are different from actual classroom participation. Also, teachers have a classroom of 20+ kids and might not be the most accurate observers of actual participation per child. Your team might consider just asking the students themselves whether they want to participate in class or are more comfortable participating in class, if that’s what the tutors see. We can talk more offline if you want, but I thought other readers might find this exchange helpful, too.

  • Thanks. I am going to think about this and ask some questions. I like having measurement data at the ready to use with supporters to demonstrate success.
