The Leader’s Guide to Managing Risk: A Proven Method to Build Resilience and Reliability

K. Scott Griffith is the founder and managing partner of SG Collaborative Solutions, LLC. A prominent leader in aviation, he has served as an international airline captain and as chief safety officer at American Airlines. He is widely recognized as the father of the airline industry’s landmark Aviation Safety Action Programs (ASAP) and is the recipient of the Flight Safety Foundation’s Admiral Luis de Florez Award for his contribution to aviation safety.

Below, K. Scott Griffith shares five key insights from his new book, The Leader’s Guide to Managing Risk: A Proven Method to Build Resilience and Reliability. Listen to the audio version—read by the author himself—in the Next Big Idea App.


1. Risk intelligence can determine the success of a business.

My story began with a plane crash I witnessed on August 2, 1985. It was a horrific accident caused by a dangerous, invisible wind vortex known as a microburst. The pilots couldn’t see it and weren’t trained to deal with it. After the crash, I took a leave from my job as an airline pilot and worked as a physicist and test pilot to develop a wind shear prediction system using Lidar and to formulate microburst recovery strategies. A few years later, I invented a program—the Aviation Safety Action Program, or ASAP—that led to a 95 percent reduction in the U.S. airline fatal accident rate.

I learned from the plane crash that there’s a pattern to how bad things happen and an evidence-based science to prevent them. Any business leader can learn how it works, but it requires a method you weren’t taught in business school. Leadership skills alone won’t get you there.

We see and understand the world through the lens of our experiences. How we interpret our experiences depends on several factors, including our training, culture, environment, and appetite for opportunities. But your risk intelligence—the ability to perceive and calculate situational danger—will determine the future of your business.

Understandably, we celebrate our successes: profitable financial results, winning a contract, getting to work on time, meeting our obligations, and enjoying our personal lives. We work with colleagues who share our mission, vision, and values. We strive to produce results that last by focusing on what we do well.

But sometimes we learn the wrong lessons from our successful outcomes. Our risky systems and behaviors pay dividends until they don’t, and we pay for our miscalculations with our fortunes, our lives, and sometimes the lives of others. Highly reliable organizations don’t focus exclusively on what they do well; they devote equal attention to what they lack: expertise in preventing the potential negative consequences of running the business.

2. We can manage only the risks we see and understand.

Like the pilots in the crash I witnessed, if we’re not able to see and understand the dangers ahead, we could be flying blind into disaster.

Seeing and understanding are different. Sometimes we see risk but don’t understand it. In January 2020, the World Health Organization and CDC recognized a novel coronavirus, designated SARS-CoV-2, but didn’t understand how it propagated. They assumed it was similar to the original SARS virus: that people were only contagious if they had a fever, and that infection would be passed along through close contact and on surfaces. They did not understand that people could be infectious before showing signs of illness, and that the virus could spread through aerosols suspended in the air, not just direct contact. The result was that the epidemic in Wuhan, China, rapidly spread into a pandemic.

“The National Highway Traffic Safety Administration estimates that for every drunk driver arrested for driving under the influence of alcohol, that driver has driven drunk 88 times on average without getting caught.”

At other times, we may understand a risk but fail to see it when it’s present, such as the way we manage the risk of drunk driving in America. The National Highway Traffic Safety Administration estimates that for every drunk driver arrested for driving under the influence of alcohol, that driver has driven drunk 88 times on average without getting caught. When it comes to drunk driving, we understand the risk as a society, but we simply don’t see it most of the time it occurs—until the car crash.

Another example of understanding, but not seeing, risk is Malcolm Gladwell’s fascinating re-examination of the biblical story of David and Goliath. Contrary to the conventional interpretation, David’s surprising victory over the giant wasn’t due just to his skill with the slingshot but to Goliath’s inability to see the risk in front of him, likely owing to some type of macular degeneration or eye disease. He simply could not see his foe very well, enabling David to gain his advantage. One of the biggest challenges you face as a business leader is preparing for hidden risks, whether because you’re consumed with competing priorities that block your view or because you’re not looking in the right direction.

You counteract these tendencies by identifying what hasn’t happened yet and triaging best- and worst-case scenarios. Internally, the employees closest to where the work is being done are your best eyes and ears on how the business is running. Managers provide input, too, but it’s the frontline workers diving below the waterline who can tell you an iceberg is ahead. Employee burnout, fractious or discriminatory cultures, and sexual harassment are examples where employees see far more than meets the eyes of executives. Externally, examples include pandemics, supply chain disruptions, changing customer expectations, and cyberattacks. Making these risks visible requires looking beyond your own expertise and experience to see the precursors to bad outcomes. Contrary to what Nassim Taleb suggests, black swan events (rare, unexpected occurrences with major consequences) don’t just appear out of nowhere; they’re often hiding in plain sight.

3. Improve system reliability by making systems effective and resilient.

Systems fail, whether it’s the ice cream machine at McDonald’s continually frustrating customers; the Texas power grid failure that left 4.5 million homes and businesses without power and killed at least 246 people; or the Minnesota I-35W bridge collapse that killed 13 people and injured 145. In many endeavors, this is a matter of life and death. In business, this can mean the difference between profitability and bankruptcy.

For all leaders, it’s important to understand how systems fail, as well as how they work, and to manage them accordingly. Control the system and environment first, before setting human performance expectations. Exceptional people working in average or below-average systems won’t produce optimal results. On October 1, 2013, the public’s perception of the Affordable Care Act suffered tremendously when the healthcare.gov website couldn’t handle the onslaught of applications. In December 2022, Southwest Airlines came under intense criticism for breakdowns in customer service during a winter storm. Employee associations and industry insiders blamed the schedule disruptions on antiquated crew- and resource-allocation software. Southwest has a reputation as an industry leader whose culture delivers superior customer service, but during this event, the unreliability of its systems led to operational and reputational harm.

Reliable systems have two attributes. First, they are effective in producing results when things go right. Second, they are resilient, able to bounce back or recover when things go wrong, including system breakdowns and human interactions. Engineers are good at designing systems to work when things go right, but perhaps less skilled at thinking like real humans, i.e., understanding how people apply workarounds and cut corners to save time or make work easier.

“Control the system and environment first, before setting human performance expectations.”

Certain factors influence how reliable systems are. We call these performance-shaping factors. They include influences such as system capacity and operational load, environmental factors, resource matching, system degradation, and system design. Managing system reliability means applying barriers (obstacles put in place to prevent failures), redundancies (parallel working components or backups), and recoveries (ways to correct course when things go wrong). Finally, the reliability of any socio-technical system depends, at least in part, on how reliable the humans operating it are. A car, train, or airplane is only as safe as the person driving it, for example.
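The arithmetic behind redundancy is worth seeing. As a rough sketch (my illustration, not the book’s method), assuming components fail independently: a serial system works only if every part works, so reliability drops below that of any single component, while a redundant (parallel) system fails only if every backup fails.

```python
# Back-of-envelope reliability arithmetic, assuming independent failures.

def series_reliability(component_reliabilities):
    """A serial system works only if every component works."""
    r = 1.0
    for c in component_reliabilities:
        r *= c
    return r

def parallel_reliability(component_reliabilities):
    """A redundant (parallel) system fails only if every component fails."""
    f = 1.0
    for c in component_reliabilities:
        f *= (1.0 - c)
    return 1.0 - f

# Three 99%-reliable components in series: overall reliability ~97%.
print(series_reliability([0.99, 0.99, 0.99]))   # ~0.9703
# Two 99%-reliable components in parallel: failure needs both to fail, ~99.99%.
print(parallel_reliability([0.99, 0.99]))       # ~0.9999
```

The same numbers explain why engineers add backups rather than chase perfection in a single part: doubling a 99%-reliable component cuts the failure probability a hundredfold.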

Once we see and understand the factors that influence our systems and the ways in which they can fail, we can then manage them accordingly. Why is this important? Because to get the results we expect, we must first see and understand risk, build a reliable system to match the risk, and then turn our attention to that quirky component known as the human being, perhaps the most challenging—and rewarding—of all components.

4. Improve human reliability by understanding how people underperform.

We’re all human. What makes us unique as individuals is the product of our biology, environment, and experiences. But we have one thing in common: we all make mistakes. There are no exceptions. Although human failures are inevitable, we can manage them, just as we manage system failures. But first, we must see and understand how people perform, what motivates them, and what influences their behaviors. As with systems, we must understand how people can underperform, and then manage them accordingly.

It’s important to understand the distinctions between human performance and human behaviors. Human performance management includes building the knowledge, skills, abilities, and proficiencies it takes to do a job or perform a task, along with managing the system, personal, environmental, and cultural influences—as well as the competing priorities we all face every day. All of these affect our behaviors.

There are two broad categories of behaviors; I’ll describe them briefly and break them down into multiple types. We must understand, though, that human performance management must come before behavior management. Simply responding to a behavior without addressing its underlying causes and influences will not get us the results we expect, and our response to the individual won’t be fair or consistent.

The two categories of human behaviors can be thought of as errors and choices. How we manage these distinctions will, in large part, determine our workplace culture. Each behavior requires a different response to the individual. Employers often get behavior management wrong by assuming every bad outcome and every rule, law, or procedural violation should be met with coaching, discipline, or punishment. “Blame and shame” cultures can result, which chills employee reporting. Employees start to hide their mistakes and choices to avoid punishment or, often, embarrassment.

“The two categories of human behaviors can be thought of as errors and choices.”

In high-consequence industries such as healthcare, this can lead to tragedy. In 2017, a nurse at Vanderbilt University Medical Center retrieved and administered the wrong medication, killing the patient. She was fired, lost her license, was convicted of criminally negligent homicide, and was given a probated sentence. Patient safety advocates across the U.S. warned of the negative effect of these actions without accountability applied to the hospital and a redesign of the medication dispenser. Experts remain concerned that this event discourages healthcare workers from reporting their mistakes.

Influencing and managing human behavior using just cultural principles can be challenging, but immensely rewarding, and builds the bedrock for improving culture. The goal is to raise collective risk intelligence and provide consistently fair and just responses to human behavior. We all make mistakes, but seeing, understanding, and managing our choices leads to fewer human errors and adverse outcomes.

5. Improve organizational reliability by assessing potential risks.

Sustaining reliability across an organization requires a commitment beyond individual leadership and the tenure of top executives. Longevity in business requires identifying the risks ahead and devoting as much attention to what the business doesn’t do well as to what it does. You must hardwire success over the long term and build the framework for reliability, giving your employees and the organization the best chance to succeed.

The U.S. airline industry’s remarkable safety record was no accident. It took industry and labor associations pulling together for mutual benefit and a regulator willing to move beyond compliance-based enforcement to risk-based methods of oversight. The Aviation Safety Action Program succeeds because pilots, mechanics, dispatchers, air traffic controllers, flight attendants, and other workers can report problems without fear of losing their jobs, or their licenses, and a team of airline executives, regulators, and employee unions work together to find unanimous solutions.

Continuing the work of W. Edwards Deming and others who established a framework for quality management, I developed a set of landmark standards called Collaborative High Reliability and Collaborative Just Culture. These standards are the first of their kind to define the terms “high reliability” and “just culture,” and they require the following: executive leadership and governing-body responsibilities, labor association involvement, policies, training, tools, and a sustainment plan. Most importantly, the standards require independent validation by an accrediting body.

You’ve no doubt heard the story of the professor who demonstrated the big rocks approach to managing life priorities by bringing in a large glass jar and filling it with rocks, stones, pebbles, sand, and then water. The lesson says that if you want to get the big rocks in the jar you must put them in first.

So, with Collaborative High Reliability, the first big rocks in the jar are a Collaborative Just Culture program and a Reliability Management Team, trained, developed, and audited every two years. This is the first universal high-reliability program in the world independently audited to a documented standard.

This collaborative approach provides incentives for workers to report problems and become part of the improvement process. Business leaders can learn from this success, too, and apply its lessons beyond the attributes of safety. Seeing, understanding, and managing the risks ahead is the only surefire way to avoid catastrophe—whether you’re flying planes or running a business. Making your organization more resilient and reliable is more than a matter of leadership or good fortune—it’s a proven strategy for success.

In closing, I’ll end with a quote from one of my heroes, Richard Feynman, who said: “We are at the very beginning of time for the human race. It is not unreasonable that we grapple with problems, but there are tens of thousands of years in the future. Our responsibility is to do what we can, learn what we can, improve the solutions and pass them on.” The science is no longer hidden. The sequence matters and positive results are sustainable.
