Change Agent: The Imperfect Science of Evaluating Communications

This article originally appeared in the Fall/Winter 2015 issue of Change Agent

My name is Taj Carson. I have a confession to make. I’m a scientist, and worse than that, I’m a social scientist.

I come from academia, where we are convinced that if you measure human behavior with the right instrument, one that is reliable and valid, you can predict that behavior and make clear, undeniable statements about the truth.

I thought there was a truth that you could measure. And that you could measure it using human beings. I learned the hard way that people are messy, complicated, chaotic creatures who unfortunately have feelings and motivations that are changing all the time.

So how do you measure these disordered, hectic creatures in a meaningful way? How do you gather data, a scientific and methodical process, from subjects who are the exact opposite? That is what I’ve spent the past 15 years working with my clients to figure out.

“Errors using inadequate data are greater than those using no data at all.”

I’ve finally arrived at a place where I can say with confidence that we can measure anything—human behavior, relationships, communication. We will do the best we can, with the resources we have, to collect the information we need to tell the story of what the client is doing and why it matters. This may sound unscientific, but that’s because it’s not a perfect science. People aren’t lab rats. We are messy, complicated, chaotic … you get the picture.

People think of communication as an art. It involves the most amazing of human activities—storytelling, compassion, advocacy—used to make our world better, to help each other, to create change.

So what is the role of evaluation in the world of communication? Does it have a role to play at all? It can be really hard to see its place, so sometimes we don’t collect any data because we don’t think it’s appropriate.

Or, if we can’t see the big picture, we collect data without thinking it through. We collect random information, or we pull up Google Analytics data because it’s easy. But as Charles Babbage put it, “Errors using inadequate data are greater than those using no data at all.”

When evaluating communication, measurement must be a priority. When it isn’t, we end up with no data, the wrong data, or bad data. When this happens, we are unable to explain what happened. We can’t tell the story about who we motivated to learn, to understand, and to act.

You can evaluate communications work effectively. Think about public health work. To evaluate its effectiveness, you must measure communication. Public health folks are always telling people how not to get sick, how to avoid getting HIV, how to prevent child abuse, how to have a healthy baby. And these campaigns depend on being able to measure who got the message, how they got it, what they understood, and how and whether they acted on it. Public health officials depend on being able to measure communication. So how should the social sector go about it?

There are three steps to keep in mind:

1. Start by doing something profoundly ordinary and unscientific.

Make a plan. What do you expect to happen as a result of your work? Who will be impacted? How will they be impacted? What are the outcomes you expect to see? And very importantly, where does your influence end? That’s as far as you want to measure, because you don’t want to measure something you can’t influence.

One of my favorite tools for doing this is the logic model. It helps you think through what you are doing and what impact you expect it to have. The Kellogg Foundation has a logic model guide that has been around forever, and it’s a great introduction to what logic models are and how to develop one.

The logic model can be used to test your thinking internally, making sure that all your activities connect to short- and long-term outcomes, and to help you identify breaks in the theory of change if you aren’t seeing the results you expect. Did some activities not get implemented as planned? Did you not see the short-term outcomes you expected, and therefore miss your mark on long-term change?

It can also be used as a simple visual tool to give to stakeholders and potential funders. A good logic model can give someone a clear sense of what you are doing (and why) on a single page. Seriously, I’ve mapped out some very complex programs in just one page. It can be done.
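If it helps to see the idea in another form, here is a minimal, purely illustrative sketch in Python of the check a logic model lets you make: that every activity connects through to short- and long-term outcomes. The activities and outcomes below are invented, and a real logic model is a planning document, not code.

```python
# A rough, hypothetical sketch of a logic model's pieces as plain data.
# The activities, outputs, and outcomes below are invented for illustration;
# a real logic model would come from your own planning work.
logic_model = {
    "publish monthly newsletter": {
        "outputs": ["12 issues sent", "subscriber list grows"],
        "short_term_outcomes": ["readers know the key facts"],
        "long_term_outcomes": ["readers advocate for the change we want"],
    },
    "host community workshops": {
        "outputs": ["8 workshops held", "200 residents attend"],
        "short_term_outcomes": ["attendees can name three prevention steps"],
        "long_term_outcomes": [],  # a deliberate gap, to show the check below
    },
}

# The internal test described above: does every activity connect to both
# short- and long-term outcomes, or is there a break in the theory of change?
for activity, links in logic_model.items():
    if links["short_term_outcomes"] and links["long_term_outcomes"]:
        print(f"OK: '{activity}' connects through to long-term change.")
    else:
        print(f"Break in the theory of change: '{activity}' does not connect to an outcome.")
```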

As the Cheshire Cat told Alice, “If you don’t know where you are going, any road will take you there.” Make sure you have a road map.

2. Measure with specific goals in mind.

Once you know what you are doing and what impact you expect, you can measure it—not just with analytics, but by collecting information from the people who receive the message.

Analytics are easy to track because the data is collected automatically. But they can’t tell you everything. Even a simple online survey can be used to get feedback about your campaign and determine whether or not you are reaching your intended audience. But we are talking about changing people’s hearts and minds, so to really get to your outcomes, you will need to get some important information from people.

Look at your outcomes. Are you teaching people something new? Get some people in a room, ask them some questions, then have them interact with your website and ask them the same questions again. Do they know the answer now? It can be a survey, or you can do this as a more informal conversation.
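If you go the survey route, the arithmetic behind that before-and-after comparison is simple. Here is a minimal sketch in Python using made-up responses to a single knowledge question; the question, the answers, and what counts as “correct” are all hypothetical.

```python
# Hypothetical pre/post answers to one knowledge question, collected before
# and after people interacted with the website. All data here is invented.
pre_answers = ["no", "no", "yes", "no", "unsure", "no"]
post_answers = ["yes", "yes", "yes", "no", "yes", "unsure"]

def share_correct(answers, correct="yes"):
    """Fraction of respondents who gave the correct answer."""
    return sum(answer == correct for answer in answers) / len(answers)

before = share_correct(pre_answers)
after = share_correct(post_answers)
print(f"Knew the answer before: {before:.0%}")
print(f"Knew the answer after:  {after:.0%}")
print(f"Change:                 {after - before:+.0%}")
```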

Behavioral change is harder, but still doable. How easy it is to get information about changing behavior depends on many things. One is how narrowly defined your target population is. If you are trying to reach out to a particular community, it’s easier to go out into that community and ask people about the messages they received. If your reach is nationwide, things like online surveys are more likely to be your tool of choice.

You can be innovative, too. There are some interesting methods, such as case studies, network analysis, and the success case method. Start poking around and you may find something other than a survey or a focus group that will tell you what you need to know. Some of these methods (such as network analysis) are better implemented with an evaluation professional. There are lots of ways to collect information.

Think about where your people intersect with your message, and then reach out to them.

3. But it doesn’t end there. You must then share your story.

Sometimes we collect data, but we find that no one is listening, or the information doesn’t get to the right people. Or people sit on the data because the data seldom tell a simple success story. As with all good information, we often learn both what went well and where we fell short. And you have to be brave to share that information. But share it you must, because you have to tell the story.

You are storytellers. And this is an amazing time, because data geeks and storytellers are coming together in the field of data visualization. People are using data to tell amazing and beautiful stories about everything under the sun.

We use the principles of visual storytelling all the time. Once clients are clear on what they are doing and what impact they expect to have, the story follows. Then we just need to make sure they have the data to back up each step. Do they collect information about their activities? We can tell the story of how much they did and how many people they reached. Do they know what changes they expect? We can tell the story of what people learned and how they behaved differently. We weave in the strengths and challenges that people encountered and that the program overcame.

And then we visualize it. What that looks like depends on the audience. Sometimes it’s an infographic. Or it might be a graphic memo that is visually rich and easy to digest.
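As one small example of what a single visual can do, here is a short Python sketch (using matplotlib) that turns made-up campaign numbers into a simple reach-to-change chart. The steps and counts are invented, and the real design would depend on your audience and your data.

```python
# Illustrative only: made-up campaign numbers turned into one simple visual
# that walks a reader from reach to learning to behavior change.
import matplotlib.pyplot as plt

steps = ["People reached", "Recalled the message", "Learned something new", "Changed a behavior"]
counts = [5000, 2100, 1400, 600]  # hypothetical data

fig, ax = plt.subplots(figsize=(7, 3))
ax.barh(steps[::-1], counts[::-1])  # reverse so the funnel reads top to bottom
ax.set_xlabel("Number of people")
ax.set_title("From reach to change (illustrative data)")
plt.tight_layout()
plt.savefig("campaign_story.png")  # drop into an infographic or graphic memo
```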

Once you collect data, analyze it, and use it to tell the story of your work and the changes it brought, you can better understand the impact of your work, get valuable information about how to improve it, and demonstrate the value of what you do.

As Stephen Few put it, “Numbers have an important story to tell. They rely on you to give them a clear and convincing voice.” So measure away, and give those numbers the voice they deserve.

Taj Carson, Ph.D., is the CEO of Carson Research Consulting in Baltimore, MD. Dr. Carson’s work focuses on objective assessments of foundation and nonprofit programmatic and organizational effectiveness, giving organizations the information they need to engage in more focused strategic planning. She has experience working with local, state, and federal government, nonprofit organizations, and foundations, focusing on the unique issues surrounding measurement and evaluation.
