On #techsafety

You’ve probably seen a lot written lately about tech safety, the notion that we should make our software environments as safe to work in as our physical workplaces. If you haven’t, it’s definitely worth a read. Josh Kerievsky has conjured up a useful analogy for the protective measures we should be building into our development environments. From his initial article on the topic:

Tech safety is a driving value, ever-present, like breathing.

It is not a process or technique and it is not a priority, since it isn’t something that can be displaced by other priorities.

Valuing tech safety means continuously improving the safety of processes, codebases, workplaces, relationships, products and services.

It is a pathway to excellence, not an end unto itself.

It influences what we notice, what we work on and how our organization runs.

It’s a noble concept and I wholeheartedly believe that it has merit. However, there’s an issue with the idea of Tech Safety lurking below the surface, and it’s called “risk homeostasis”. Risk homeostasis, a theory developed by Gerald Wilde in 1994, is the idea that we don’t save risk; we consume it. In other words, when we implement something to make our lives safer, we use it to justify riskier behavior in other areas of our lives, and so on the whole we’re no safer than we were before. There is no better overview of the concept of risk homeostasis than Malcolm Gladwell’s 1996 New Yorker article, “Blowup”. (You can also find it in What the Dog Saw, a collection of his best articles.) In the article, he examines the cultural and sociological factors that contributed to disasters like the Challenger explosion and Three Mile Island. At the top of the list: risk homeostasis.

A few examples:

  • Studies have shown that Diet Coke does not help people lose weight. On the contrary: the supposed calorie “savings” are subconsciously used as an excuse to eat other high-calorie foods.
  • When we lower our monthly expenses by, say, paying off a car loan, do we take that amount and put it in a savings account or find another way to spend it?
  • Gladwell cites a study of taxi drivers in his article. Drivers whose cars were equipped with ABS were shown to drive more recklessly than those whose cars weren’t, supposedly because they “consumed” the risk savings provided by their anti-lock braking systems.

Don’t get me wrong, I believe that Tech Safety is a good idea. We need to protect our teams from the perils of fragile code. We owe it to our customers to protect them from bugs. But the safety systems we create, according to the risk homeostasis theory, are just going to give us permission to increase our risky behavior in other areas. We’re going to use the protections we’ve enabled, not to make us safer, but to make us faster. That’s what the taxi drivers did, isn’t it?

So what’s the solution?

I think the first step is recognition. It’s human nature to equalize our risk tolerance. When we create our safety systems, let’s not look at them as the solution, but as the first iteration of a better system.

The next step is to find the balance for each of our safety checks. Our safety systems have to have the foresight to capture and isolate risk not only within the area they’re intended to protect, but also in the areas where our appetite for risk might leak. Not just checks, but balances too. And checks-and-balances is where Agile shines. We protect against mishandled requirements with acceptance criteria and a clear definition of done (check), and we prevent information silos with collective ownership (balance).
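To make the “check” half a bit more concrete, here’s a minimal sketch of an acceptance criterion captured as an executable test (Python, runnable with pytest). The discount rule, threshold, and names are hypothetical, invented purely for illustration, not something from Kerievsky’s article:

    # A minimal, hypothetical sketch of a "check": one acceptance criterion
    # ("orders over $100 get 10% off") written as an executable test.

    def apply_discount(order_total):
        """Hypothetical domain rule: 10% off orders over $100."""
        return round(order_total * 0.9, 2) if order_total > 100 else order_total

    def test_orders_over_100_get_ten_percent_off():
        assert apply_discount(200.00) == 180.00

    def test_orders_at_or_under_100_pay_full_price():
        assert apply_discount(100.00) == 100.00

The “balance” half, collective ownership, can’t be encoded in a test; it lives in how the team works, which is exactly where the risk we “save” with checks tends to leak.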

I’d like to hear your thoughts. How else can we protect our safety systems from risk homeostasis? Feel free to hit me up on Twitter at @johnkrewson.


4 Responses to On #techsafety

  1. #techsafety is about making it safer to build software. It is also about making it safer to take the necessary risks which make products better.

    What I find missing is whether the injury rate for taxi drivers with ABS brakes actually climbed. Did adding ABS brakes lead to more accidents, or just riskier driving?

    Seatbelts, ABS braking systems, and air bags all lead to a safer car. So, do people drive more unsafely? Perhaps, but according to http://www.alertdriving.com/home/fleet-alert-magazine/north-america/us-traffic-fatalities-fall-lowest-level-60-years:

    “The number of people killed on American highways dropped to a 60-year low in 2009, thanks in large part to safer cars, safer roads, better-trained young drivers and a limping economy.”

    According to risk homeostasis, we should all be driving like bats out of hell, yet fatalities continue to fall, mostly because cars get safer.

    #techsafety wants to bring that to software development: make hazards more visible and focus on injuries, which should lead to better products.

  2. Tim Ottinger says:

    “We’re going to use the protections we’ve enabled, not to make us safer, but to make us faster”

    Careful, there! If it turns out that we can build systems faster by recognizing and/or removing risk from code, we’ll see a huge industry-wide shift to TechSafety. Everybody wants “faster.”

    I don’t know how many lives and how much property has been saved by ABS, air bags, seat belts, building codes, ergonomic standards, health-n-safety in food production, crumple panels in cars, tamper-proof medicine packaging, elimination of lead and asbestos, guard rails, etc. My instinct is that we’ve had more wins than losses with safety overall.

    Doing “lean startup” style experimental market development requires us to take measured, intentional risks. It is hard to take those risks if the developers are in a “bug-fix-only, oh-good-lord-I-hope-I-don’t-ever-have-to-change-this” mindset.

    So maybe it’s possible that we will consume one kind of (unwanted, unnecessary, obstructive) risk in order to do things that would _otherwise_ be unwise or unthinkable.

    I’ll be willing to accept that risk-takers will always take risks and sometimes might even benefit from them. In the meantime, I’d like to personally benefit from code that won’t bite me (or at least not very hard).

  3. John Krewson says:

    I think the important thing to remember is that there is no such thing as “risk free”. The Challenger represented the most advanced technology of its day, including many safety features. So did Discovery. The NASA culture at the time was constructed in such a way as to be okay with a certain level of risk. They had a static level of risk tolerance. Perhaps one of the first elements of techsafety should be to assess the organization’s risk tolerance level. Get it out there in front of everyone. And then take steps to reduce the tolerance level. I say reduce, because I don’t think it can be eliminated.

  4. Jason Yip says:

    In my mind, focusing on safety worked at Alcoa for two main reasons:
    1. Getting your workmates home safely was a noble, aspirational purpose
    2. Focusing on safety encouraged more mindful behaviour

    None of the examples (Diet Coke, monthly expenses, having ABS) line up with these reasons.
