This week’s Pipeliners Podcast episode features long-time pipeline accident investigator Gary Kenney discussing the fundamentals of conducting a pipeline accident investigation.
In this episode, you will learn about the challenges that accident investigators face, how investigators decide what plays into the accident investigation at hand, and how AI technology might enter the industry more in the future.
Pipeline Accident Investigation: Show Notes, Links, and Insider Terms
- Gary Kenney is the Managing Principal at Sine Rivali. Connect with Gary on LinkedIn.
- Gary has served as Director of Technical Investigations into a number of explosions and fires that have occurred in the pipeline industry, including:
- Several incidents within Aramco over the period of 1977-87.
- Piper Alpha Offshore Platform that resulted in 167 fatalities and the total loss of the offshore platform in 1988.
- Longford Gas Plant Explosion and Fires that resulted in the total loss of natural gas supply to the state of Victoria and the city of Melbourne in 1998.
- Varanus Island gas pipeline rupture, explosion, and fires that resulted in the loss of natural gas supply to the state of Western Australia in 2008.
- BP Macondo / Deepwater Horizon drilling platform explosion and fires in 2010.
- TapRooT® Root Cause Analysis is used to improve performance by analyzing and fixing problems to prevent major accidents, quality issues, equipment failures, environmental damage, and production issues.
- HAZOP (Hazard and Operability Study) is a systematic way to identify hazards in a work process. HAZOP is broken down into steps, and every variation in work parameters is considered for each step, to see what could go wrong. HAZOP’s approach is commonly used with chemical production and piping systems. Process facilities in the United States are required by OSHA to run a Process Hazard Analysis (PHA) every five years. Most commonly, a HAZOP is used to conduct the process hazard analysis.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and hardware elements that allows industrial organizations to control industrial processes locally or at remote locations, monitor, gather, and process real-time data, directly interact with devices such as sensors, valves, pumps, motors, and more through human-machine interface (HMI) software, and record events into a log file.
- NTSB (National Transportation Safety Board) is an independent Federal agency charged by Congress with investigating every civil aviation accident in the United States and significant accidents in other modes of transportation – railroad, highway, marine, and pipeline.
- PHMSA (Pipeline and Hazardous Materials Safety Administration) protects people and the environment by advancing the safe transportation of energy and other hazardous materials that are essential to our daily lives. To do this, the agency establishes national policy, sets and enforces standards, educates, and conducts research to prevent incidents. It also prepares the public and first responders to reduce consequences if an incident does occur.
- UK HSE (Health and Safety Executive) is Britain’s national regulator for workplace health and safety. It prevents work-related death, injury, and ill health. HSE is an executive non-departmental public body, sponsored by the Department for Work and Pensions.
Pipeline Accident Investigation: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 233, sponsored by the American Petroleum Institute, driving safety, environmental protection, and sustainability across the natural gas and oil industry through world-class standards and safety programs. Since its formation as a standard-setting organization in 1919, API has developed more than 700 standards to enhance industry operations worldwide. Find out more about API at api.org.
[background music]
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time. To show that appreciation, we give away a customized YETI tumbler to one listener every episode. This week, our winner is Eddie Valdez with ExxonMobil. Congratulations, Eddie. Your YETI is on its way. To learn how you can win this signature prize, stick around till the end of the episode.
This week, Gary Kenney with Sine Rivali is joining us to talk about accident investigation. Been trying to get Gary on for a long time. I think you’ll enjoy this conversation.
I’m sitting here with Gary Kenney, having a very nice bottle of wine, I might add.
Gary Kenney: [laughs]
Russel: We’re going to talk about accident investigation. Gary, welcome to the Pipeliners Podcast.
Gary: Thank you very much, Russel. Very pleased to be here.
Russel: If you would, why don’t you tell us a little bit about your background and how you got into pipelining and accident investigation and all that.
Gary: All right. We’ll start with education. My undergraduate or my bachelor’s degree is in physics and mathematics. It was a dual major. After I graduated with my bachelor’s degree, I went to work for Firestone Tire and Rubber in Akron, Ohio. I’m born and raised in that area.
Then, after about a year and a half, I ended up going to graduate school. I got a master’s degree in environmental engineering and business management, business administration at that time, and then a PhD in environmental health and environmental engineering.
After graduating, I first went to work for Bell Laboratories for a couple years. Then I was approached by the Saudi Arabian American Oil Company. Actually, at that time, it was called Aramco, the Arabian American Oil Company. I took a job there and ended up over in Saudi Arabia about two months after I was approached by the company.
Russel: Interesting. I’m sure that was quite different from what you were used to working at Bell Labs.
Gary: [laughs] Very much so. I arrived in Dhahran, Saudi Arabia, which is where the headquarters of Aramco were and continue to be. About three weeks after I got off the airplane in Saudi Arabia, there was an explosion and a fire in one of the gas oil separating plants. It just flattened the plant.
The plant was producing about 330,000 barrels of oil a day. It was chasing about, if I remember, something like two to three million standard cubic feet of gas around as well.
There was a problem. One of the spheroids was over pressurized. Relief valves didn’t relieve. The top lifted off the spheroid. Gas pancaked all over the facility, found an ignition source, and just destroyed it. Luckily, no fatalities in that, but the entire facility was destroyed.
The company then immediately said, “OK, well we need to get together an engineering investigation team, find out what happened.” They looked around and said, “OK, we need a scribe to this engineering investigation committee.”
Russel: [laughs]
Gary: Quite frankly, they looked around and said, “Oh, look. This guy’s been pretty much just straight off the plane. He’s not really settled into his job at this point in time. He’s done a master’s thesis. He’s done a doctorate dissertation. He must know how to write reports.” So I became the scribe to the engineering investigation team.
[laughter]
Russel: I’m not going to do it, but I could tell you a whole story about getting called to scribe.
Gary: [laughs]
Russel: People that know me well, I’ve shared that story with. It’s not really appropriate. Anyways, that’s really interesting. They picked a brand new PhD who knows how to write. He’ll be the scribe. You don’t really understand what’s going on here, do you guys?
Gary: [laughs] It took us about three months to complete our investigation, writing the report and putting together various PowerPoint presentations for the executive management. We were in about the last couple of weeks of the investigation, doing the final work on the report, when another gas oil separator blew up.
The company looked around and said, “Well, we’ve got this engineering investigation team. Guess what? We’ll just roll them over into this one.” My first six months of my life in Saudi Arabia working for Aramco was being a scribe to accident investigation. That changed my whole career, where the job I was originally going into, I didn’t go into.
The next thing I knew, I was one of the engineering advisors in the loss prevention department, working in that particular group. Unfortunately, Aramco had a series of other explosions over this period of about 1977 to 1980.
At one point, we were at the top of the list kept at that time by Marsh & McLennan. I forget who keeps it now. We topped the list in regards to the largest oil and gas incidents. We had 3 or 4 out of the top 10 in the world over that period of time.
Russel: Ooh. That’s a whole lot of unwelcome visibility.
Gary: [laughs] Yes. As a result of first being a scribe, later on I ended up being part of the engineering investigation teams, doing the investigations.
Russel: You’ve done accident investigation for a big part of your career?
Gary: Almost the entire career.
Russel: What are some of the accidents you’ve investigated that people that listen to this podcast might know about?
Gary: Well, within the oil and gas…Let me do one outside of it. If there are those who live in the UK, they might be aware of this one. That was the King’s Cross underground fire. After I left Aramco in 1987, I went to England and was working at that point in time for a venerable old English firm called Cremer and Warner.
Sir Frederick Warner was still alive and running the company at that point in time. We had actually used Cremer and Warner in Saudi Arabia, so that was the connection. I knew them. They knew me.
I went up to England originally planning to stay for about a year, and it turned out to be more like 10 years. In November of '87, a fire occurred in one of the escalators in the King's Cross underground station.
As a result, and I’m pulling this from memory, I think there were 32 members of the riding public that perished in those fires.
Russel: Yeah, I remember that. I left the UK in ’85, and I remember that.
Gary: Cremer and Warner were asked by the government to lead the technical investigations into Justice Fennell’s public inquiry into it. They formed what they call a public inquiry, where they brought in witnesses, we did investigations and that. I was asked by Cremer and Warner to be the deputy project manager of the technical side of those investigations.
Which we did, and that went on for seven, eight months. Now one within the oil and gas industry is, in July of ’88, the Piper Alpha offshore facility.
Russel: Yeah, that was a big, big deal.
Gary: About 125 miles off of Aberdeen. It suffered an explosion and fires. At the end of the disaster, the entire facility had sunk in 464 feet of water. 167 lives were lost. There were 226 people on board at the time, so roughly two thirds of those who were on board lost their lives in that particular accident.
Since I had been part of the lead on the technical investigations on the King’s Cross underground fire, I was contacted by Lord Cullen’s offices and asked to fly to Edinburgh. Got off the plane in Edinburgh, was ushered in. Lord Cullen imperiously walked in. The gentleman was about six foot six tall. Just this distinguished white-haired English Law Lord.
Russel: What you would expect from an English Law Lord, right?
Gary: Well, I should say Scottish. He was a Scottish Law Lord.
Russel: Scotsman.
Gary: I apologize to my English compatriots. Scottish Law Lord, walked in. We probably had five minutes together and then he said, “Right. You’ll start in a week.”
[laughter]
Gary: The public inquiry itself lasted for about 14 months from that period of time. We actually started gathering evidence in September of that year. We held the first of the public hearings in December, and then it was right at the end of November the following year that we concluded the last of the public hearings.
It took about another year for us to complete all the investigations and write the report, and turn the report over to the Secretary of Energy.
Russel: What else in the oil and gas world that people might know about?
Gary: The gas plant at Longford, Australia. I was asked to lead Sir Daryl Dawson's technical investigations into that. That was in 1998. Then in 2008, a 16-inch gas pipeline in Western Australia that had severe corrosion on it ruptured while under pressure, and there was a major fire. Gas was lost to the State of Western Australia for a period of time.
A few minor injuries because where the pipeline ruptured was distant from people, so luckily there were just minor injuries in that particular one. That was a 16-inch gas pipeline.
Then most recently, the Department of Interior, the Coast Guard, and the Department of Justice asked me to lead the investigations into the BP Macondo event here in the Gulf of Mexico.
Russel: Yeah, we’re all familiar with that one.
Gary: [laughs] Right.
Russel: If you’re working in oil and gas in North America, well, pretty much anywhere in the world you heard about that. It was all over the news for weeks. That’s a very impressive resume.
The first question that comes up to me is what is accident investigation? I’ve read the Macondo report. I’ve read the Piper Alpha report. I’ve read everything that was written about the bonfire at Texas A&M. You read these reports and I’m sure there is a scientific method to gather all the data, but that seems to me to be pretty distinct from the process of really getting to what happened.
Gary: That’s very correct. Yeah, how do I describe it? I was asked once to give a lecture to the New York Police Department. I don’t know why that ever occurred, but I was. Well, now, I think the reason was we got into a discussion one time when I actually was in New York on some stuff.
I didn't realize it, but at the table where this discussion was occurring – this was a charity event – were some detectives from the New York Police Department. People were asking me, "OK, you've done accident investigation. Talk about accident investigation," and I described that, at that point in my life, I had investigated about 170 fatalities.
I made the comment, I said, "That's probably akin to what a detective in New York probably investigates in their lives, in regards to murders and that particular thing." The next thing I know, I got an invite from the New York Police Department to come talk to them about that – well, how do your investigations differ from our investigations?
It became quite fascinating, in regards to doing those comparisons between how they go about setting up and doing their murder investigations and even accident investigations, and what we were doing. At the end of it all, based on discussions of that, what we came down to is, there is as much art in the process as there is science.
It’s not just a pure fact-finding issue. I would describe it as you have to have two sides of your brains operating. You have to have the deductive side, and you have to have the inductive side. When you bring both of those together, that’s how you get to, this is what happened. You can’t do it just on the deductive factual side of the work. You’re going to ask me about TapRooT.
Russel: Well, yeah.
[crosstalk]
Gary: You’re going to ask me about TapRooT. [laughs]
Russel: Yeah, I’m going to ask you about TapRooT, but before we go there, I have done some root cause analysis around systems failures. What I always find is that there are two parts to that. The first thing you find out is, what happened? Which, that’s largely deductive.
I mean, that’s gathering data and just mining through and organizing it until you understand what was the specific sequence of events. That’s not really the important bit in safety, because now, it’s about why did it happen? That’s more where the art comes in, like what was operating?
Because I think you always operate on the assumption that every human that was involved in that event was actually trying to do the best they could, given what they had available to them to do it. What you’ve got to do is discern what about the human systems, or the automation systems, or the physical systems, combined in order to have the breakdown?
Gary: Correct.
Russel: OK, so now tell me about TapRooT.
[laughter]
Russel: Wait a second. We should pause and refill our wine glasses.
Gary: Absolutely.
Russel: The listeners will pardon us while we refresh. This should improve the conversation.
[pouring wine]
Gary: For the listeners, we're doing a Saint-Émilion. Oh, I just murdered that. It's a 2015 Château La Dominique. A nice little Bordeaux.
Russel: I'll just tell you this, it's good.
[laughter]
Gary: OK, TapRooT. I described the major disasters, accidents that I’ve led investigations into. In addition to those, I’ve done a lot of less dramatic, less drastic types of investigations. Very frequently, I’ve been called in, and one of the first things that I’m handed from a company is, “I want to start this off with we’ve done the TapRooT. Here is our TapRooT investigation.”
I have probably looked at I will guess 50, 60, 70 TapRooT investigations in my life. One of the ways that I would describe it, and I don’t mean to be overly flippant about the issue, but there are different kinds of taproots. Dandelions have taproots and oak trees have taproots. [laughs]
Russel: Well, before we get too deep in this, we should probably define for the listeners what TapRooT is because everybody might not be familiar with that.
Gary: It is a very good systematic approach to going in and gathering the data and the facts, as we were talking about in the past. It’s very good at that. It helps you systematize that. Helps you classify things.
Russel: It’s a system. There are books and everything, and worksheets, and all the tools necessary. You can go buy this stuff, and it’s a system that allows you to collect the information.
Gary: Right, very structured. It’s a very structured process.
Russel: Structure is important in this kind of thing.
Gary: It is. What happens, as we were talking about earlier and what I've found, is that what it doesn't really do, because we can't really teach that, is the inductive side of the issue. How do you now make the leap from all of this data? Part of the process is, how do you winnow out the data?
Winnowing, if you remember that from the days in the wheat fields, with your pitchforks, throwing it up in the air. You winnow out all that stuff that is irrelevant so you're really able to focus on the data, the facts, and the information that is really relevant – the information that will allow you to determine why. That's where the art comes in. That's where the inductive side of the brain comes in.
In the discussions that I had with the detectives, they said the exact same thing. You can do all this, but, boy.
[crosstalk]
Russel: That’s why not everybody can be a detective.
Gary: Correct.
Russel: Anybody can gather the information, can organize the information, can put it in a book, can drop it in a system. There’s a lot of people that can do that. Looking at that and going, “Out of these 5,000 data points, here’s the 25 that really matter,” that’s a whole different kind of thing. How do you learn that? Is that something you learn, or is it just the lights come on and aha?
Gary: The aspect of learning, in that particular process, is similar as you find in the police departments for the detectives. You take a really good police recruit or an individual that’s been on the streets and now wants to become a detective. They move over.
It is that mentoring from the standpoint of working with an experienced detective for a period of years. That starts the creative processes flowing, where you start saying, “OK, right. Now I know this,” and start learning things.
If I may, let me step back. We talked a lot about the facts. One of the ways that I describe a lot of the major accidents that I've been involved with, and really pretty much any accident, is – and I'll use Piper Alpha as an example. When we got Piper Alpha, we had a literal pile of steel on the seafloor, which was 464 feet below the surface, and that pile of steel rose 150 feet.
It was just junk. I mean, we put ROVs [remotely operated underwater vehicles] down. It was nothing more than just a pile of junk sitting there. We had no physical evidence to work from. There wasn’t an instrument we could go to. This is before the time that it was being recorded onshore. There wasn’t a vessel we could go at.
There wasn’t a piece of piping we could look at and say, ooh, OK, that split, or something like that. We had none of that. All we had was the eyewitness events. In that particular case, I had a team. At the height, there were about 40 of us in the engineering investigation team that I was leading.
What we would typically describe it as is, it’s like someone walked in and dumped a 1,000-piece puzzle on your desk, of which there were only about 150 pieces.
[laughter]
Gary: Then they said, make a picture out of this. Not only that, there wasn’t a picture on the cover. [laughs] We had to take this sparse amount of data and rebuild the events of that particular evening. The way we did that was we used a tool called a hazard and operability study, and we ran 52 different hazard and operability scenarios.
It was out of those 52 – we would sometimes say, pick a card. [laughs] Pick any card. It was out of those 52 different scenarios that we were able to bring it down to the one that we identified as being the cause.
Russel: This is really fascinating to me, Gary, because for me, as I’m listening to this, I’m thinking about if you’re a software engineer and you’ve got a non-deterministic fault. There is no such thing as a non-deterministic fault in software. It’s all deterministic. It’s just, I haven’t yet determined what the fault is. I haven’t gotten to the error case.
That can be extraordinarily complex. You keep like, well, is it in this box or this box? Oh, it’s in this box. Then you take that and you break it down. Is it in here or in here, and you break that down.
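What Russel is describing is essentially a binary search over the system: check a midpoint, keep the half where the fault still shows up, and repeat. A minimal sketch of that pattern (the function and parameter names are hypothetical, not anything discussed on the episode) might look like this:

```python
def first_failing_step(num_steps, works_through):
    """Binary-search for the first step at which a system misbehaves.

    num_steps: how many steps (or "boxes") there are, in order.
    works_through(i) -> bool: True if the system still behaves correctly
    through step i. Assumes at least one step fails and that everything
    downstream of the first failure is suspect.
    """
    lo, hi = 0, num_steps - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if works_through(mid):
            lo = mid + 1   # fault lies after this point
        else:
            hi = mid       # fault lies at or before this point
    return lo
```

The same halving idea is what a tool like git bisect automates when hunting a regression through a code history.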
One of the things I've always found, in the work I've done, to be incredibly useful is, the first thing I try to do is I try to build a timeline. Then every piece of data I collect, I put into the timeline. The timeline can often start way ahead of and finish way behind the actual event.
As the event gets closer that timeline gets much more concentrated, but that exercise and just the questions you have to ask to clarify the timeline often reveal things completely unrelated to the timeline.
Gary: Exactly.
Russel: Like where were people at in the minds. What was going on? Then you could say OK, they remember this going on, and that relates to what I was hearing over here.
I’ve often found that the timeline, and the development of the timeline, and the constant clarifying and making sure I’m anchoring it, and I’m certain about the timeline, that that effort, while it gets you a timeline, also creates a whole bunch of other understanding about what occurred. It’s in that understanding that the art manifests.
It’s weird how the brain works on it. It’s not like you just arrived at the conclusion. It’s like you’ll be running, or washing dishes, or in the shower, and then all of a sudden it’s like bam. Then all these neuron paths connect.
Gary: [laughs]
Russel: You’re going, “Oh!” Then you go, and you look, and you say is that right, and you validate.
Then you end up, for me at least when I’m doing that kind of work, I often find myself in a position where I had a conclusion, I had support for the conclusion, and then as I’m beginning to try and formalize and finalize, and drive nails into the conclusion, I start lacking confidence in my conclusion. It’s a very circular kind of thing.
Gary: Yes. That's the other side when you talk about an accident investigator: the willingness, at some point when you're going down a particular path, to say that's not leading me anywhere, and to be able to back up.
Or you’ve gotten down the path that you’ve, as you say, you’re starting to develop conclusions, and then additional facts come in as you’re doing the investigations which challenge that. You then have to step back and reevaluate. A good investigator, a good accident investigator, has the resilience to be able to do that.
Too many times, people get down a path and it’s like no, I’m on the conclusion. I’ll fit the facts to the conclusion versus the willingness to stand back and challenge, and say if I can’t explain this, I’ve got to reformulate.
Russel: It’s real easy as an engineer to get invested in the outcome you think you’re going to get to. It’s very easy. You have to maintain this posture of openness to some new data point, some new idea coming in, and completely flipping the apple cart upside down and causing you to start over.
Gary: Very much.
Russel: By the same token, you still need to know what are the facts because the facts don’t change.
Gary: To come back to your point about the timeline, because I have found, just as you have, that timelines are critical. If there's anything that I would recommend if you're going to do an investigation, it is to start that process of the timeline.
As you describe, you may have to go back days. You may have to go back weeks in their timeline to really start to get into what the root causes were, whether we’re talking a technical root cause, whether we’re talking a soft cause in regards to organizational failures or something of that nature.
You may have to go back some period of time to look into those and take your timeline back in to start seeing what are some of those contributory factors versus what actually caused the accident itself.
Russel: Exactly. We do, in our world, we do what we call a critical system failure report. Anytime we completely lose the SCADA system or the control system, we’ll do a critical system failure report. The whole purpose of that is to understand the root cause, and the first thing is the timeline.
My guidance on the timeline is always "Dragnet." For those young people, look it up on Google. It's an old police TV show. The catchphrase was, "Just the facts. Just the facts." You only want facts in the timeline – no suppositions, no conclusions, no analysis, just facts.
Gary: Correct.
Russel: Getting clear about whether something is a fact that goes on the timeline, or a supposition or an assertion from somebody that doesn't really go on the timeline because it's not yet a fact – those kinds of things. Just in navigating that, there is a lot to be learned.
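To make that discipline concrete, here is a minimal sketch of the kind of timeline Russel describes: entries are kept in time order, and anything that is not yet an established fact stays off the timeline. The structure and the example entries are hypothetical, not drawn from any actual investigation.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Entry:
    timestamp: datetime
    description: str
    source: str            # where the data point came from (log, interview, SCADA)
    is_fact: bool = True   # suppositions, conclusions, and analysis stay off the timeline

@dataclass
class Timeline:
    entries: list = field(default_factory=list)

    def add(self, entry: Entry) -> None:
        # "Just the facts": reject anything that has not been established as fact.
        if not entry.is_fact:
            raise ValueError("suppositions and analysis do not belong on the timeline")
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.timestamp)

# Hypothetical entries; the timeline can start well before and end well after the event itself.
tl = Timeline()
tl.add(Entry(datetime(2022, 3, 1, 6, 15), "Low suction pressure alarm logged", "SCADA log"))
tl.add(Entry(datetime(2022, 3, 1, 6, 42), "Operator acknowledges alarm", "Control room interview"))
```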
We’ve been talking here, and we kind of answered one of the questions I was going to ask you. What’s the challenge of being an accident investigator? If I were to speculate, I would say it’s the maintaining of the posture of openness, right?
Gary: That’s very correct.
Russel: I think the other thing is being just that hypercritical. Is this a fact, or is this something other than a fact? And parsing that stuff.
Gary: That's true. You could put it down to aspects like diligence, but there is also the aspect of the fortitude of the individuals. One of the things that I've found, and I've worked with an awful lot of younger individuals doing accident investigations, is how many of them react after they've gone through one.
I remember in King's Cross very vividly working with a younger engineer, at that point in time in their early 20s, and when we finished after about three or four months, they said, "I can never do this again. It requires too much."
Russel: It's an interesting comment, isn't it?
Gary: Yes.
Russel: Why do you think they said that?
Gary: There is a mental aptitude. Why do some people become a firefighter?
Russel: It’s hard work. It’s really hard work.
Gary: Why do some people become a firefighter, which is just unbelievable? In general, engineers, their love is to build, and to build things that work. To go in and tear apart something to see how it fails is probably really the antithesis to a lot of engineers and their engineering mind.
Russel: Well, it’s certainly not how we’re trained in school.
Gary: No.
Russel: But it is something you can develop. That kind of work, I find it very demanding and yet very rewarding, because when you get to an answer and you're confident the answer is right, there is a real sense of reward, especially when you can share that with people, because other people want to know what you learned. [laughs] They may not necessarily want to go through the process of learning it, but they want to know what you learned.
Gary: One of the things that I have to say I'm most pleased about in my career, when I look back, is some of the mentoring that I've done of engineers and where those people have gone on. I always love that.
The other one that I take some personal pride in is that, in all of the major accident investigations that I have undertaken and developed the "this is why it happened," in all five of them, those findings have been challenged in court in various litigation for upwards of 10 years.
At the end of them, not one of them has ever been overturned. In not one of them did anybody come back and say, no, you didn't get to the right conclusion. Despite the fact of 10 years of further litigation and investigation by some really, really good individuals that are top-notch in their business.
Russel: Yeah, if there had been the least little bit of a crack in the armor, they would have found it.
Gary: Exactly.
Russel: Well, that’s awesome. I want to pivot a little bit. I find this conversation just absolutely fascinating. I can talk about this for days, I’m sure. I love the stories and the things you’ve done. I mean, it’s fascinating to me, but I want to pivot and talk about from a safety perspective, and where we are particularly in pipelining.
We’ve got pipeline safety management starting to be a thing. It’s been around about six years now. A lot of people are looking at how do we get that ingrained into our culture and so forth? In all the accident investigations you have worked, what have you learned that you think has the most impact on a good safety culture and a high-performance safety workforce?
Gary: Culture of openness. Transparency and openness in decision making. You probably have experienced it. I mean, we have both probably gone into and worked for companies. After leaving Aramco, I’ve been a consultant since 1987/1988, so I’ve worked in all kinds of companies, all of the majors.
I’ve done work for major pipeline companies, major oil and gas companies, the midstream parts of them, the upstream parts, the downstream parts, etc.
I'm sure you have run into where you walk in and you've got a vice president sitting there, or a director, or a top engineer who is just adamant that this is what happened. Don't tell me anything different. This is what happened.
Russel: No, that’s never happened to me.
[laughter]
Gary: Yeah.
Russel: That would be a lie.
Gary: I have to admit, when I was younger, you get taken aback by that and whatever. Then after a while, with some of the experience that I’ve had, and I have to admit, more recently, I’ve had a few cases where that has happened. Very opinionated individual. This is it. I’ve looked at it. I would start to ask, “Well, what all have you looked at?”
“Well, I looked at this.” That was probably two percent of what we eventually ended up doing in regards to the investigation and they had formed their opinion. There have been times where I’ve gone in and just said, “Well, then why am I here? Do you guys want me to leave? If you already know what the answer is, what do you want me to do?” I’ve had cases where it’s just confirmed what we said.
Russel: That’s not what I do.
Gary: [laughs] Exactly. It’s not what I do. I’ve made that comment. The other side is, OK, once we get past that and we start to do the investigation, and we start to open it up, what I’ve found is because of that culture, people knew things were going on, but they just never felt that they were in a position to be able to challenge that level of authority.
Russel: Yeah. I don’t know what you would label that. I might call that closed thinking. When you’re in a leadership position and you have a mindset of closed thinking, it’s not safe for anybody to bring you anything that’s in opposition to your closed thinking.
Gary: Correct.
Russel: Particularly in technical leadership, we need to be able to receive all the information that people are giving us. There are two parts of this openness. One part of it is, when I know something that would be of value to others, I need to share it. At the worker level, if I see something, I need to report it, share it, and make sure people are aware of it.
At a leadership level, I need to share it amongst my peers in the community. The thing we don’t talk about as much is, I need to be able to receive it. Now, I don’t think you have to confuse receiving the information with agreeing with it or taking any conclusions from it. Just accepting it and integrating it into your thinking is super important.
When you’re a carpenter and you work with hammers, nails, and saws all day, it’s hard to understand welding. We all, every one of us, get up against that. I say this a lot on the podcast. Pipelining is a number of hyper-vertical, hyper-technical disciplines. We use the same language to say different things.
Gary: [laughs]
Russel: All of those things create complexity around a culture of openness, so I agree with that. Openness is a big thing. I think that we’re seeing big moves in the pipelining business in that way.
I had a great conversation just last week at the API Pipeline Conference with Shawn Lyon, who’s the President of Marathon. He was talking about a safety share that they did, that was an industry wide safety share, 10 days after a major event, related to a geotechnical earth movement and a pipe problem that occurred out of that.
They coordinated with PHMSA. They coordinated with the NTSB. Then PHMSA and NTSB stepped out and said, “You, operators, share, share, share. Have fun. Share freely. We encourage this.” I think we’re making progress, but, boy, I think we’ve got a long way to go.
Gary: That’s my feeling too. When I use the term openness, you use the term about receiving information. One of the things that I’ve seen in really good investigations, when I’ve been in it, is a bit like you.
I’ve got metallurgical engineers that have doctorates and fracture mechanics specialists. You name it. They know miles long of information down one little narrow path. Then we get into a meeting and just debate. They will be heated, passionate debates.
I think another part of it is, when you're doing an investigation and you're pulling all of this expertise together, there's a lot of debate that goes on, and you want to really encourage the debate and try to keep the conflicts out of it.
Russel: Try to keep the emotional obstacles out. You want the emotion into it because passion is helpful. What you don’t want is emotional obstacles. I want to have my passion in the conflict. I don’t mean like fighting, but I mean like trying. Iron sharpens iron, that kind of thing.
Gary: Exactly. When you’re in there and you’re having those kinds of discussions, keeping the debate open, keeping it on a neutral level as much as you possibly can. We’re not talking about personalities. We’re talking about an issue here. Personalities don’t need to come into it.
Russel: We’re seeking to understand.
Gary: Right. That’s what I would say needs to be addressed in a lot of the cultures in a lot of the organizations.
The other thing that I found – let me digress here a little – working in Aramco, we had our pipeline side. We had hundreds of miles of pipelines running from our fields, running from our gas oil separators, up to our terminals, to the refineries, etc.
We had the drilling side of the business. It was an integrated oil and gas company. We had drilling. We had E&P. We had production. We had refining. We had terminalling. We had pipelines and that kind of stuff.
When people come to me and say, “This is the culture of this company,” I go, “Wait a minute. I’ve been there. You’ve got multiple cultures in there, in your company.”
Russel: That’s right.
Gary: You don’t have a single culture.
Russel: Getting the safety culture, there’s a lot of attention being paid to that in pipelining right now. It’s a non-trivial thing. I do want to spend a little bit of time and ask this question because I have a premise that I think is really critical in this idea of safety and what do you learn out of accident investigations.
I think procedures are critical to being able to do a next level order of magnitude improvement in safety performance. I’m coming at this from my aviation experience. I spent a bunch of years in the Air Force.
There is a tech order for everything around an airplane in the Air Force. They follow those procedures. They follow them carefully. There’s all kinds of controls to make sure you follow them.
When there’s a failure, they tend to look at the systems, the processes, the procedures. They don’t tend to look at the people, other than do the people have the training, do they have the competencies, do they have the capabilities, do they have the understanding to follow the procedure. They don’t look at the people and their motivations, typically.
Gary: That’s interesting. One of the things that I’ve been concerned about in what I’ve seen in a lot of companies, we have an accident. Somebody goes out and writes a new procedure. That covers really that little micro issue that was the immediate cause or the immediate triggers or maybe two or three steps away from it, but it doesn’t really address this long tail that sits there.
One of the things that I've become concerned about is really companies becoming over-proceduralized at the loss of the competency of the individuals. To me, an ideal, and this is a Disneyland-ish aspect of it, is you have the perfect marriage between the competency of your individuals and procedures.
You shouldn’t take away – this is me, from my experience – you should not over-proceduralize your company to the aspect of de-emphasizing the competency of the individuals themselves. A lot of that occurs.
Russel: You think about the pilot, Sully, who landed the airplane in the Hudson River and had a movie made about it and everything. He's testifying to the accident investigation board. Everything he did from about a minute and a half after the bird strike was off any procedure that had ever been written. He had to make all that up based on the information he had.
So your point is well made. There's a level at which the people who are operating within the procedures need to know what the procedures are designed to do and where they don't work anymore.
Gary: I've been talking recently with people in the aircraft industry, in aviation and aerospace. I think another issue that we're going to be facing – they're already facing it there, and it's also going to come into our industry – is this whole use of machine learning, this whole use of artificial intelligence.
I don't know how many people realize it, but most of the landings now by commercial aircraft are on autopilot. It's gotten to the level that the United Kingdom Civil Aviation Authority has really been looking into this, saying the pilots are losing the competency to land planes because they're doing it so few times.
There, they're really trying to work out what is a good ratio in regards to a pilot actually taking over the controls and landing when you've got a fully operational plane, not in a simulator kind of deal, versus doing it on autopilot. I think that's going to become quite an issue.
To me, that's the heart of this issue I'm talking about: that loss of competency of the individual, or of the marriage of competency in particular.
[crosstalk]
Russel: You're seeing this throughout oil and gas. We've got a lot of people that are leaving the business. They're retiring. We're losing a lot of expertise. The younger generation didn't learn it the same way the older generation learned it. They don't have as much of that hands-on, how-it-actually-works-in-the-pipe kind of experience. That's going to have consequences.
On the other hand, they have a whole different set of tools and capabilities around analysis and automation, and all that kind of stuff. It’s a valid point. We should leave it there.
Gary: Could I ask you a question?
Russel: Yeah, sure.
Gary: Is a podcastee allowed to ask a question? [laughs]
Russel: Yeah, absolutely. I like it when you flip the script.
Gary: Here’s a question for you, and again, this comes from my connections over in the UK. I was working on some various projects over there for about a year prior to the pandemic and leaving the UK.
One of the things that they're becoming concerned about, and I want to ask you about this from your experience, is that when instrumentation fails, the technology for replacing that instrumentation is advancing at such a rate. Say we have part number 1000, and it plugged in and did all of this kind of work, and that part has now failed.
We've replaced it from the same manufacturer with not really part 1000 but 10001, or 2, or A, or E, or something in that particular range, which is supposedly totally compatible with everything that was in place 10 years ago. What the UK HSE is starting to see is that 90 percent of the time, 95, 99 percent of the time, that's right, but it isn't always right.
There are little glitches going on now, and they're seeing this more and more, not so much in accidents, but they're starting to see it in what they would call the trips that are going on. They're seeing that this advancement, this technology, the retrofitting, and that kind of stuff is something we've got to keep our eyes on.
Russel: What’s the question?
Gary: The question is, how backward compatible is it? In your experience, are you starting to see the potential for this backward incompatibility?
Russel: It’s a real issue. It’s a real issue. Here’s the problem that we have. Gary’s pouring a final glass of wine as I answer this final question, so it’s getting more creative as we go on. It’s a real issue.
One of the primary differences in what's happening today versus even 10 years ago is almost everything has firmware on it, meaning it's got a chipset and software. That chipset and software is doing analysis of hysteresis and performance, and is maintaining its calibration, and it's keeping ID information. It's doing a lot of things that simplify the O&M.
The problem is every chipset has a little bit of a different piece of software on it, and if we go and buy an instrument from the manufacturer, and the manufacturer says it's like-kind, we just put it in. Anybody who's dealt with software knows that as you make changes to the software over time, it's not necessarily compatible. I don't know.
It’s a real issue. I’m not aware of anything material that’s been a problem, but I can see how that could become a problem as we get more towards the future. Of course, I also believe that in control systems, one of the big uses of AI is going to be to ensure quality and reliability of the instrumentation.
Gary: Machine learning, where basically you're putting the equipment in and it starts to learn by itself and starts to autocorrect – where do you see that going in the future?
Russel: I was just on a podcast with a company called CruxOCM that is doing something that they call industrial process robotics. What this software is designed to do is, for example, if I want to start up a long-haul liquids pipeline, there are probably 50 to 100 things I need to do to start that pipeline.
This software will automate that entire process. All I'll do is say, start the process. Start the pipeline. All the other things that happen – the sequencing, monitoring the pressures, all that other stuff – will be done by this robot.
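As a rough illustration of that kind of sequencing (the step structure and function names here are hypothetical, not CruxOCM's actual product or API), an automated startup can be modeled as an ordered list of steps, each with a command to issue and a condition to confirm before moving on:

```python
import time

class StartupAbort(Exception):
    """Raised when a step fails to reach its expected state in time."""

def run_startup(steps, check_interval=5, timeout=300):
    """Execute an ordered startup sequence.

    steps: list of (name, action, is_complete) tuples, where action() issues
    the command (open a valve, start a pump, ...) and is_complete() -> bool
    reports whether the expected state (pressure, flow, valve position) holds.
    """
    for name, action, is_complete in steps:
        action()
        waited = 0
        # Hold at each step and monitor until its condition is confirmed,
        # instead of blindly firing the next command.
        while not is_complete():
            if waited >= timeout:
                raise StartupAbort(f"step '{name}' did not reach its expected state")
            time.sleep(check_interval)
            waited += check_interval
```

The operator's single "start the pipeline" command hands this sequence to the automation, while the engineering effort goes into defining the steps, the checks, and what happens when a check fails.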
I think there’s going to be a lot of fit and opportunity for those kinds of things, but all AI, all industrial robots, all have to learn. They only do what they do within the context of what they’ve learned.
What's going to happen is the requirement for the engineers is going to become, I need to understand. Engineering historically has been about understanding the successful completion case, what's going to happen, like in manufacturing. We're going to have to begin to understand all of the unsuccessful completion cases.
Gary: Correct.
Russel: All the abnormal conditions. We’re actually going to have to become better engineers to use this technology. Then that technology will offload the things that we would do that were repetitive tasks, and we’ll get to be doing more interesting engineering stuff. That’s my take on it.
Gary: You probably know the term “Black Swan.”
Russel: Sure.
Gary: That was written by Nassim Taleb.
Russel: Familiar with it.
Gary: He came up with it. He did Black Swan. Some of the things that he's currently doing – there's a term that he's using, and a lot of people have problems with it, but a term he's coined called antifragile. How do we make our systems, how do we make our people, how do we make our companies antifragile?
Russel: How do we make them robust and flexible, not brittle?
Gary: He differentiates antifragile from resilience and robustness. His view is that when we say robust, we know the issue that's going to hit the system, so we need to build that known issue into the system. Antifragility, as he describes it, is about the unknown issue. How do we make our system such that it can handle the unknowns when they hit the system?
Russel: There’s some interesting work being done, and a friend of mine named Doug Rothenberg has written some stuff about weak signals, the same idea. It’s like I don’t know what’s going wrong, but I’m looking at something and my gut is telling me something’s going wrong. What do I do with this information? That’s a direct corollary to accident investigation. Right?
Gary: It is.
Russel: It’s even more art and less science because the science is currently manifest. Listen, Gary, I could go on talking like this forever, but at some point we have to end the podcast.
Gary: OK. [laughs]
Russel: Thanks for being a guest. I really appreciate it. It’s been great. We need to get together because I could geek out and go a lot further. Besides, you provide very good wine to the podcast folks, so that’s good.
Gary: [laughs] Thank you, Russel. Yeah, if you find it, let’s do another.
Russel: All right. I hope you enjoyed this week’s episode of the Pipeliners Podcast and our conversation with Gary. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win and enter yourself in the drawing.
If you’d like to support the podcast, the best way to do that is to leave us a review. You can do that on Apple Podcast, Google Play, Stitcher, wherever you happen to listen. You can find instructions at pipelinepodcastnetwork.com.
[background music]
Russel: If you have ideas, questions, or topics you’d be interested in hearing about, please let me know on the contact us page at pipelinepodcastnetwork.com or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
[music]
Transcription by CastingWords