Nate Braymen

People Aren’t Airplanes

Checklists don’t work!

I could almost hear the eyeballs rolling as I typed that. Hang with me for a second though. I wouldn’t want the devil to get lost in the details.

Checklists can be, and have been, applied to processes with great success. The obvious example is the aviation industry. Strict adherence to the pre-flight checklist has helped mold culture and instill the value (and priority) of safety. Everyone knows that an aircraft will not be operated if it is not in an acceptable condition. The practice identifies deficiencies and requires that they be remedied for the sake of personal and equipment safety.

It’s because of this fact that people often believe the practice can be translated seamlessly from process to people. I learned early in my career that this is not a universal truth.

One of my first civilian safety positions was largely centered on data entry. Technically, I administered a behavior observation program, but what it boiled down to was hours upon hours of typing numbers. None of the “observations” reflected actual conditions on the site.

As our inspectors turned in stacks of multi-colored cards daily, I began to notice some trends. I would later study the trends in detail using six sigma, but that’s a story for another day. Anecdotally, however, my analysis helped me form two important hypotheses.

First, the “behavior” observation checklists didn’t seem to fit the activities being performed. Our people were mandated to complete the sheets even when a specific activity (excavation, for example) was not being done on our site, often resulting in a solid line being drawn down the “N/A” column. That problem was easily attributed to poor implementation and bad rules, but even checklists designed for common tasks showed this problem. Housekeeping, for instance, is something that was expected every day throughout the site. But the checklist designed to observe it didn’t fit every activity.

Second, I began to notice that the numbers were falsely skewed positive. The observers were incentivized to achieve scores representing high percentages of safe behavior. In order to achieve that, the inspectors would avoid observations that would hurt their numbers. It was so bad that during a site tour, the Project Manager asked me how it was possible that our (housekeeping) scores were so high while the site was in such disarray. My answer was simply to remind him that he was getting what he’d asked for.

My example in this post is obviously a case study of one. I know that checklists can be useful tools. But they can also breed complacency. If you’re going to use them as a mechanism to promote safety, do so wisely. Consider the following points as “requirements” for building a useful checklist:

  1. Make certain the list fits the activity: checklists are valuable when observing static processes, not variable tasks.

  2. Don’t create parameters that skew your results: If you tell your end users that they must get a score of 95% or better on their checklist, they will most likely find a way to make that happen. From their perspective, sweeping things under the rug becomes the most sensible method.

  3. Accept the answers for what they are: If your people observe something that doesn’t meet your standards, consider it a snapshot of where your organization is today and figure out how to improve.

  4. Do something with the data: If people see you take action, they are much more likely to rally around the cause. No one wants anything to do with a program when their hard work just piles up on someone’s desk.

This discussion could very easily open a couple of huge cans of worms, so I’m going to save some for later. This week I’ve gotten some great post requests from readers, so keep an eye out for those. If you want to join in on the discussion, please let me know.
