

OCI #29: Change Initiatives as Experiments
...and what to focus on instead of success vs. failure.

I’ve been trying to find ways to explain why statements such as “x% of change initiatives fail” bother me so much. There are a number of reasons, but here’s one that is top of mind today. Although it’s certainly important to evaluate whether we achieve the results we intend, I believe framing our work as a quest for success actually undermines the likelihood that we will do an honest and objective evaluation. Here’s why.
Over 50 years ago, psychologist Donald Campbell wrote an article called Reforms as Experiments [Campbell, D.T. (1969). Reforms as experiments. American Psychologist, 24(4), 409-429] that has become a classic in the social sciences. He asserts that we do a terrible job of evaluating the effectiveness of programs and initiatives despite our best intentions. While his focus is on the public sector and on programs designed to address social problems, much of what he has to say is relevant for change in any setting, and essential for us to understand. This article includes some of the key insights and lessons I have taken away from his work.
Certainty Breeds Vulnerability
If the political and administrative system has committed itself in advance to the correctness and efficacy of its reforms, it cannot tolerate learning of failure.
— Donald T. Campbell
We often put a lot of effort into building the business case for change initiatives, articulating a solid argument for the benefits of the approach we are proposing, and projecting a significant return on investment. This helps us gain support and reach alignment on the path forward, but it also sets in motion two forces that put the sponsors of an initiative into a vulnerable position and make it difficult for them to objectively evaluate results.
We promise results. The operative word here is promise. When we imply that desired outcomes are certain to follow from the implementation of a specific solution, and downplay any areas of uncertainty in the quest to get strong endorsement for a plan, we tie our personal credibility to achievement of those results.
We close off alternatives. When there are several possible approaches to achieving the desired outcomes, we are likely to highlight the benefits of the preferred solution while stressing the disadvantages of alternative approaches. This may increase agreement on the direction to take, but also can create overconfidence about the selected approach and negative impressions of other solutions.
It is one of the most characteristic aspects of the present situation that specific reforms are advocated as though they were certain to be successful. For this reason, knowing outcomes has immediate political implications…There is safety under the cloak of ignorance.
—Donald T. Campbell
The combination of these factors places a high value on success, and leads to a situation where a leader may be at significant reputational risk, and in potential organizational jeopardy, if the project doesn’t deliver everything that was promised. And the likelihood is that it won’t. Most solutions are imperfect, and they are implemented in situations that include lots of uncertainty and complexity. Results tend to be mixed and ambiguous, lending themselves to a variety of interpretations.
To what extent does your organization begin initiatives with the expectation of success? Are leaders making implicit or explicit promises about the results of the changes they propose? If a leader champions an initiative that is unsuccessful, what are the implications for their career or their advancement in the organization?
Unintended Consequences
When we judge the outcome of a change initiative as a “success” or a “failure” based on whether or not we get the results we predicted, three things happen:
We drive leaders into a corner. Rather than seeing neutral or negative results as useful data to guide us in our next attempt, we place a pejorative label on the outcome, and often on the people who were the greatest advocates for the solution we tested. This tends to push leaders into doing what they can to ensure that their initiatives—and their role in them—are seen as successes. They distance themselves from results, delegate too much responsibility to change agents, suppress or ignore bad news, cherry-pick positive results to report, and push the organization to move on to the next initiative without taking the time to fully learn from experience.
We weaken our analyses. When there is so much at stake, leaders tend not to push for rigorous analyses that might prove them wrong. This doesn’t mean that they intentionally seek to bias or distort the data, or place pressure on others to suppress negative outcomes—although these things do happen. Instead, they might settle for measuring outcomes that are readily accessible, rather than more accurate measures of business impact that require greater effort to assess. One common example is in the evaluation of training interventions. There are multiple outcome levels that can be measured, from participant reactions to actual business results; most training evaluations focus on the lower end of the continuum.
We increase organizational cynicism. When people in the organization see leaders reporting positive results for initiatives that are widely known to be ineffective, overlooking alternative solutions that might be worth trying, or refusing to listen to anything but “good news,” they begin to lose faith in the system. Change becomes more about politics and appearances than about what’s really working.
We should be ready for an approach…in which we learn whether or not these programs are effective, and in which we retain, imitate, modify, or discard them on the basis of apparent effectiveness on the multiple imperfect criteria available…So long have we had good intentions in this regard that many may feel we are already at this stage, that we already are continuing or discontinuing programs on the basis of assessed effectiveness. It is a theme of this article that this is not at all so, that most ameliorative programs end up with no interpretable evaluation.
— Donald T. Campbell
If Campbell is correct—and I believe he is—we are spending a lot of time, energy, and money on change initiatives without an equal level of focus on making a realistic and honest assessment of their outcomes.
A Different Way of Thinking
The single most important factor in objective and rigorous change evaluations has nothing to do with measurement and everything to do with mindset. It requires a shift to a view of changes as experiments.
Rather than looking at the success or failure of a planned initiative, we need to focus our attention on the problems we are trying to solve and/or outcomes we want to achieve, and recognize that there are many potential solutions we might try. Then we can try one, get curious and honest with ourselves about what’s working and what’s not, and have other approaches at hand rather than placing so much weight on one particular solution.
By making explicit that a given solution was only one of several…and by having ready a plausible alternative, the [leader] could afford honest evaluation of outcomes. Negative results, a failure of the first program, would not jeopardize their job, for their job would be to keep after the problem until something was found that worked.
—Donald T. Campbell
Here’s how this works in practice:
Focus first on the issue—the problem to be solved/result to be achieved.
Identify multiple potential solutions.
Choose the most promising solution to test.
Try it out and evaluate it in as unbiased a fashion as possible.
Attach no judgment to neutral or negative results—treat them as data.
Based on the results, make adjustments and/or try a different solution.
Continue until you have successfully addressed the issue.
Where have you seen this “experimenting mindset” in your organization? In what situations have people stayed focused on solving a problem by working through various options and solutions to see what might work best? What challenges do you see in helping people adopt this frame of mind?
Conducting Experiments
Once we have adopted this mindset, the second critical ingredient is unbiased evaluation—testing our interventions in a way that allows us to discover where we are wrong. The “gold standard” of evaluation is a true experiment, in which one or more specific elements of a system are shifted while everything else stays constant, and “control groups” that do not experience the shift are used as a comparison point. These allow us to rule out other explanations for any effects we might observe.
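To make that logic concrete, here’s a minimal sketch in Python of the comparison at the heart of a true experiment. The groups, the outcome metric, and every number in it are invented for illustration; it isn’t drawn from Campbell’s article, it simply shows the idea of comparing a randomly assigned treatment group against a control group that did not experience the change.

```python
# A toy "true experiment": compare an outcome metric between a group that
# received the change and a control group that did not. All data below is
# invented purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

# Hypothetical outcome scores (say, a process-quality metric) for two
# randomly assigned groups of 30 teams each.
control = rng.normal(loc=70, scale=8, size=30)    # did not receive the change
treatment = rng.normal(loc=75, scale=8, size=30)  # received the change

# A two-sample t-test asks whether the observed difference in means is
# larger than we would expect from random variation alone.
t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"control mean:   {control.mean():.1f}")
print(f"treatment mean: {treatment.mean():.1f}")
print(f"p-value:        {p_value:.3f}")
```

Because the groups are assigned at random and only one of them experiences the change, a difference that is unlikely to be due to chance can more plausibly be attributed to the intervention itself.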
Experimental evaluations are easiest to do in a laboratory setting. When we operate in real-life environments, we start to run into complicating factors: there are often other things going on at the same time that might influence results, and it's not always easy or possible to set up control groups. These factors can limit our ability to draw strong and accurate conclusions about the impact of our changes.
To address these challenges, we can apply strategies called field experiments and quasi-experimental designs that incorporate elements of the experimental approach. Campbell’s article describes many of these. Here are two examples:
Combining pilot testing with data tracking. This involves tracking outcome data for multiple groups, but only introducing the change to one group at a time. If the scores change for the “test” group but not for the “control” groups, we have more confidence that the results are due to our intervention.
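Here’s a rough sketch of that idea in Python. The group names and quarterly scores are hypothetical; the point is simply to compare the pilot group’s before-and-after shift with the shift in groups that did not receive the change.

```python
# Track the same outcome for several groups over time, introduce the change
# to only one group, and compare its before/after shift with the groups left
# unchanged. All names and figures below are hypothetical.
outcomes = {
    "pilot":     [62, 63, 71, 73],  # change introduced at the start of Q3
    "control_a": [61, 62, 63, 62],
    "control_b": [64, 65, 64, 66],
}
CHANGE_AT = 2  # index of the first quarter after the rollout

def before_after_shift(series, change_at):
    """Average outcome after the change minus average outcome before it."""
    before, after = series[:change_at], series[change_at:]
    return sum(after) / len(after) - sum(before) / len(before)

pilot_shift = before_after_shift(outcomes["pilot"], CHANGE_AT)
control_shifts = [before_after_shift(v, CHANGE_AT)
                  for k, v in outcomes.items() if k != "pilot"]
avg_control_shift = sum(control_shifts) / len(control_shifts)

# If the pilot group moved noticeably more than the untouched groups, we can
# be somewhat more confident that the intervention played a role.
print(f"pilot shift:           {pilot_shift:+.1f}")
print(f"average control shift: {avg_control_shift:+.1f}")
print(f"difference (rough estimate of the change's effect): "
      f"{pilot_shift - avg_control_shift:+.1f}")
```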
Looking at data over time. In many change evaluations, we only view outcome data for two points in time—before and after the change. If we see a shift in the desired direction, we conclude that the change “worked.” A time series analysis looks at data from a longer period of time to help us identify other things that might be going on.
Here’s an example from Campbell’s article, involving the relationship between a crackdown on speeding and the traffic fatality rate. Looking only at the fatality rate immediately before and after the crackdown suggests that the crackdown was effective. But the larger picture, a graph of several years in a row with the timing of the crackdown marked, shows that it’s not so simple: the numbers have gone up and down over time, making it less clear that the crackdown is responsible for the observed drop. This view could lead researchers to explore some additional possibilities and arrive at a more accurate picture of what’s going on.
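The same trap is easy to show in a few lines of code. The yearly figures below are invented (they are not Campbell’s actual data); the sketch just illustrates how a single before-and-after comparison can look persuasive until the same metric is viewed across a longer series.

```python
# A toy illustration of the two-point trap. The years and fatality figures
# are invented; a hypothetical crackdown happens at the end of 1955.
fatalities_by_year = {
    1951: 295, 1952: 280, 1953: 330, 1954: 290,
    1955: 320,
    1956: 285, 1957: 290, 1958: 275, 1959: 265,
}

# Naive two-point view: the year before vs. the year after the crackdown.
print("before vs. after:", fatalities_by_year[1955], "->", fatalities_by_year[1956])

# Time-series view: how much does the rate move from year to year anyway?
values = list(fatalities_by_year.values())
year_to_year_changes = [later - earlier for earlier, later in zip(values, values[1:])]
print("year-to-year changes:", year_to_year_changes)

# If swings of a similar size occur in years with no intervention at all,
# the post-crackdown drop is weaker evidence than it first appears.
```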
Conclusion
When we are able to let go of our emphasis on certainty and see our changes as experiments, we can come to a place of genuine openness and curiosity. This is often accompanied by a culture of “trying things out”: conducting frequent pilot efforts to test various approaches against one another, paying close attention to early results, and rapidly spreading successful ideas throughout the organization. These are some of the core elements of the “learning organization” approach described by Peter Senge and other management writers.
Combining this mindset with well-designed intervention and analysis techniques can help us achieve our aspirations of truly understanding the impact of our initiatives, allowing us to invest our resources in the most powerful and effective ways.
Where have you seen people take an approach to change that truly reflects curiosity and an openness to experimenting? What can leaders do to help one another apply this mindset? How can change agents increase their knowledge and skill in designing real-life experiments, testing ideas objectively, surfacing and learning from neutral or negative results, and creating a climate for quickly sharing new and effective practices?
I hope you’ve enjoyed this edition of Organizational Change Intersections! I’ll be back in about 3 weeks with the next installment.