This week’s Pipeliners Podcast episode features Jan Hayes from RMIT University in Melbourne, Australia, and author of the book “Nightmare Pipeline Failures,” who returns to talk about the regulatory approach used in Australia, which centers on as-low-as-reasonably-practicable (ALARP) risk management.
Jan and Russel discuss the external, and sometimes uncontrollable, factors that play a role in pipeline operations, as well as the distinction between compliance and safety.
As Low As Reasonably Practicable Risk Management with Jan Hayes Show Notes, Links, and Insider Terms:
- Jan Hayes is the author of “Nightmare Pipeline Failures” and a Professor at RMIT University in Melbourne, Australia. Connect with Jan here.
- Nightmare Pipeline Failures examines a collection of pipeline failures that have occurred in the United States, going into detail about what went wrong and how each failure could have been prevented.
- RMIT is a world leader in Art and Design; Architecture; Education; Engineering; Development; Computer Science and Information Systems; Business and Management; and Communication and Media Studies.
- Australian Pipelines and Gas Association (APGA) is the peak body representing Australasia’s pipeline infrastructure.
- Future Fuels Cooperative Research Centre is the industry focussed Research, Development & Demonstration (RD&D) partnership enabling the decarbonisation of Australia’s energy networks.
- Public Safety in the Pipeline Industry: an engineering practice guide, published 31 January 2022 by APGA and developed by engineers at the Future Fuels Cooperative Research Centre.
As Low As Reasonably Practicable Risk Management with Jan Hayes Full Episode Transcript:
Russel Treat: Welcome to “The Pipeliners Podcast”, episode 256, sponsored by Gas Certification Institute, providing standard operating procedures, training, and software tools for custody transfer measurement and field operations professionals.
Find out more about GCI at gascertification.com.
[background music]
Announcer: The Pipeliners Podcast, where professionals, bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations.
Now, your host, Russel Treat.
Russel: Thanks for listening to The Pipeliners Podcast. I appreciate you taking the time.
To show that appreciation, we give away a customized Yeti tumbler to one listener each episode. This week, our winner is Andrea Wara with Marathon Pipeline. Congratulations, Andrea. Your Yeti’s on its way. To learn how you can win this signature prize, stick around until the end of the episode.
This week, Jan Hayes from RMIT University in Melbourne, Australia, and author of the book “Nightmare Pipeline Failures” returns to talk about the regulatory approach used in Australia which focuses on an approach called “As-Low-as-Reasonably-Practicable” risk management.
Jan, welcome back to The Pipeliners Podcast.
Jan Hayes: Thank you so much, Russel. I enjoyed our first conversation.
Russel: Me, too. I’m very glad to have you back. I wanted to start, because there’s a quote in your book that I thought was on the nose and goes to one of the thematic elements in the book that you put together, which is, “Engineered systems obey the law of nature, not the laws of man.”
You might want to attribute those remarks. We were talking about that off mic a little bit. What does that mean?
Jan: [laughs] Thanks, Russel. It’s an interesting place to start.
That’s a quote that I hear reasonably frequently from the old-timers in the Australian pipeline sector. They attribute it to a guy called Ken Bilston, who used to be the chair of the committee for the Australian pipeline standard AS 2885.
I have to say, I never met Ken myself, but I think he’s a great guy based on this one quote. He is alleged to have said reasonably often in discussion around the standard that we always need to remember that the pipeline systems obey the laws of nature and not the laws of man.
I think what he means by that and what my colleagues mean when they quote this back to me is that it’s all very well for us to write procedures, to write standards, to write legislation, all these different written rules.
At the end of the day, the pipelines and the fluids in the pipelines, all that stuff out there, the physical stuff in the field, doesn’t read procedures and follow procedures. It obeys the laws of nature, the laws of physics, chemistry, etc. We can write whatever we like, but the system is going to do what the system will do.
Russel: I marked all over your book when I read it. That was one of those things I read, and I had to stop and I had to go back and read the paragraph before and read the paragraph after again and say, “OK, what are we trying to say here?”
Because that’s one of those things that on the surface of it, you’re like, “OK, that makes sense,” but, to me, there’s a deeper conversation there because people also obey the laws of nature, not the laws of man. The way I say that is, this is just something I made up, but where does gravity take us?
We actually have to work to get up and walk around. We have to work to have these systems behave. We have to work to have our organizations work effectively. Human systems don’t work well without good leadership and good management, I guess, is a way to frame it.
Jan: Sure. I’m not sure that this is what Ken had in mind when he said this, but if I can paraphrase what you’re saying, maybe you’re saying also that our brains in terms of making decisions also obey the laws of nature. It’s the neurons in the brain that make these choices, etc., and we’re not robots.
Just because something’s written in a procedure doesn’t mean that it necessarily works for us, or that we will follow it every time, or that it’s going to get around the fact that at the end of the day, we need judgment, which is perhaps itself a law of nature, to make the best choices, and we can never completely eliminate that. Although, some engineering people try as hard as they can.
Russel: That’s a good thing, too, right?
Jan: Yeah, absolutely.
Russel: You shared with me a paper called “Public Safety in the Pipeline Industry” that’s produced by the Australian…I think it’s Australian Public Gas Association. Have I got that correct?
Jan: It’s the Australian Pipelines and Gas Association, APGA. That’s our industry association, but also our professional engineers association in Australia.
Russel: It was very interesting to me because it lays out an industry practice for pipeline safety, and it’s chock-full of examples. I have not seen a document like this before, where it’s chock-full of examples.
I thought that was a really interesting way to put this kind of document together, because it lays out some situations and they’re all real-world, this is what really happens in the pipelining business situations. It frames them around, well, how do the organizational systems contribute to the way that someone does their job if they find themselves in the situation?
It lays out case studies related to disasters. I just thought that was really interesting because I’ve not seen something put together that way before.
Jan: Well, that document, the engineering practice guide that was published jointly by the APGA and also the research organization, Future Fuels CRC, that’s kind of my link to the pipeline industry, that was published last year.
We put it together essentially to summarize about a decade of research in this general field of what makes for good judgments around complex engineering decisions, to put together a practical thing and move away from research reports that don’t have all those examples in them and are full of much more esoteric discussion, and to make it much more readable and usable for people on the ground who have to actually make these decisions.
The examples, as you say, are two kinds of things. We put in a whole lot of little vignettes of situations that people might find themselves in, where they had to apply certain principles that we were advocating and come to a decision about what to do. Also, some accident case studies about, well, if you don’t get this right, how bad can it be?
They were the kind of examples that went in there, but it’s based around a number of principles like speaking up for safety, and taking a long-term view in your decision-making, and things like that. The engineering practice guide came out about six months ago. We’ve had some pretty good feedback about conversations that people are having as a result of what’s in there.
Russel: Well, one of the things that we’re going to do is we’ll link this practice guide. It’s a PDF document. We’ll link it up to the episode, and we’re going to put it in our resources section on the website.
I want to try to encourage some pipeliners in the US to read it because I’d really like to hear their take when they read this. I’d like to get a conversation going on around all that.
Jan: That would be fabulous.
Russel: Jan, it’s been very interesting to me, as we’ve got to know one another. It’s interesting. You read a book, and you get your thinking going, and you start coming to some conclusions.
Then I’m having a conversation with you and there have been a fair number of, I’d categorize them as ‘yeah, buts’. Yes, but did you think about it this way? That kind of thing, which is awesome.
One of the big points that you make is there is a distinction between compliance and safety. They’re not actually the same thing. Could you elaborate a little bit as to the point you were trying to make in your book about what is the distinction between compliance versus safety?
Jan: Sure. Some people think that safety and compliance are the same thing. That all we have to do to achieve safe outcomes is to make sure everybody complies. Certainly, I would agree, and people that have the same view as me would agree, that compliance is super important. We don’t want people to go off and make stuff up.
We want to make use of that collective knowledge about what’s the right thing to do. Whether we’re talking about a field procedure, or a standard, or even up into regulation. Standards and rules of various kinds are like the collective knowledge about how to do things right. They’re super important, and we do want people to comply.
There’s an issue with thinking that compliance covers everything when you’re talking about high-consequence and low-frequency accidents, as we started to do in the first episode of this. The bottom line is that not everything has happened yet. Things can go wrong that we’ve not thought about.
Also, systems change, so what was the right procedure last year or five years ago may not be the right procedure now because the situation has changed in some way. What we’re after is mindful compliance.
Compliance, but bearing in mind the reasons why we’re doing this, never forgetting or never losing sight of the end goal so that you can always be a little skeptical, and if necessary, apply the necessary judgment around whether what you’re doing is the right thing.
To have mindless compliance, where your goal becomes meeting the regulation, obeying the standard without thinking about that broader context can send you down rabbit holes that, on occasions, can get you in big strife when it comes to those high-consequence, low-frequency accidents. Does that make sense?
Russel: Yeah, absolutely. I would say that when you get myopically focused on compliance, you create blind spots. You’ve used this term a lot, and I want to underscore it, you’ve used the word judgment a number of times here.
I think the pinnacle of engineering is engineering judgment. There comes this point where you begin to operate a little beyond everything you know, and yet you still have to make a decision and operate. That requires you to use your engineering judgment. There’s an interesting dynamic.
In fact, I did a whole podcast on leadership and talked about how do you become a facilitator or an instrument of change without coming off as a threat? I talked about the need to have two minds.
One is your analytical engineering mind, operating within your training, and the standards, and the policies, and procedures.
Then, there’s a higher level mind that sits on top of that from time to time and asks the question, “Do I have it right? Am I missing something?” It checks in with my emotions, it checks in with my gut feel. I’m asking, “Does this feel right?” Trying to make sure that all that lines up.
That’s kind of what you’re getting at, although I might be a lot more esoteric about it than maybe what you would be.
Jan: I agree with everything that you’ve said, Russel. It also reminds me of some research I did back when I was doing my PhD, so quite a few years ago now, where I was talking to people in other industries about decision-making.
If you’re trying to make a decision about whether a plant modification is appropriate, whatever, you have all your systems of doing your design reviews, you have all of that laid out, engineering stuff that you need to do.
People talk to me about also using story-based imaginative tests. They would sit there and think, “I’ve got a five-year-old son. Would I pick him up and hold him in my arms and walk and stand next to this thing? How does my gut respond to that?”
People were using story-based tests like that to tap into their gut feel for whether or not something was the right thing to do, alongside ticking every engineering box.
Of course, we’re not talking about doing engineering designs based on, “Would I be happy to stand next to this?” but also drawing on that gut feel and the emotional side of us and how that taps into our decision-making to support what we decide to do.
Another similar test was, “If this went wrong and someone was hurt, how would I feel ringing that person’s wife or husband and explaining what happened? How would I feel about that? Would I feel that I made the wrong choice and I’m having to justify it, or would I feel we did everything and it was just one of those things?”
People putting themselves in uncomfortable situations in their imagination and drawing on that.
Russel: That’s fascinating. It’s very interesting. When I was a young engineer, my mindset was emotions were obstacles. Really, emotions are resources. They inform. They can be used to inform your decision-making.
They can be used to inform your judgment. That’s a little outside of classic engineering training. I was in the Air Force, not a pilot. I was an engineer in the Air Force. There’s a certain amount of being around pilots, and particularly test pilots.
A lot of people don’t know that most test pilots in the military, they’re flying aircraft that are coming out of maintenance. They’re making sure the aircraft is safe to fly, and you have special qualifications to do that.
Those guys often are talking about how it felt to fly the aircraft. Did it feel right? They’re using that as a resource, and the good ones, they’re oftentimes aeronautical engineers and pilots. They’re connecting the experience of flying the aircraft with what they know from an engineering perspective about flying the aircraft.
It’s the same kind of thing that we’re talking about here. It’s like, “How do I feel about the system and how it’s going to operate versus the technical knowledge to do the things I do?” It’s a both/and. It’s not an either/or, it’s a both/and, and it’s a very important thing to understand.
Jan: There’s a whole body of theory around how experts under time pressure make decisions. This is a whole field. If anyone’s interested, it’s called naturalistic decision-making.
Classic decision-making theory says, we’re faced with a choice, we come up with options, we evaluate the options, and then we decide on the criteria, and then we choose the best one against that criteria.
If you’re an expert operating under time pressure, if you’re a firefighter going into a building, or if you’re a surgeon faced with some quick choice that needs to be made, you don’t have time to go down that process.
This body of theory has developed around studying people in those types of situations and looking at how they make decisions, and so how they can be better supported. Their model is called recognition-primed decision making.
It’s about people identifying patterns in the situation, and this is often subconsciously identifying patterns. Going into a situation and thinking, “What we need to do is this.” You don’t have time to say, “We could do A, B, C, D. Let’s evaluate them.” It’s about making a choice.
The research shows that experienced people make those choices drawing on their body of experiences that they’ve had previously.
The classic case is the firefighters going into the building and they know that the fire is going to behave in a certain way, or that the ceiling’s going to collapse, or whatever based on what seems like intuition but is, in fact, their past knowledge of fires. They can see, they intuit what’s going to happen next.
Russel: It’s informed intuition.
Jan: Correct.
Russel: It’s based on education and experience.
Jan: And you can train for that.
Russel: Have you studied Captain Sully, the airline pilot that ditched the plane in the Hudson River? I think that’s a great illustration of what you’re talking about because they had a double bird strike, lost both engines, and they took the time to evaluate where they were at, which was a minute and a half. It was a very short period of time.
Taking that minute and a half completely changed the analysis, the decision-making, and so forth. You’ve got to think that you have a very experienced, very highly trained pilot. He’s operating completely off the checklist.
Jan: Absolutely.
Russel: He had to know airspeed, pitch. To be able to put that aircraft down and to have everybody walk out of it, it’s pretty amazing to me, because he was off script, for sure.
Jan: He was certainly off script. That’s a good example of that kind of experts under pressure decision making that we’re talking about.
Somewhat of an aside, I actually met Chesley Sullenberger several years before that incident, completely by chance. He was at a high-reliability organizations conference that I went to.
Russel: Oh, my gosh. I did not know that.
Jan: I met him. I had a meal with him. There you go, an active on-duty pilot who’s taken the time to go to a conference and learn about high-reliability organizations, which is exactly the kind of thing that we’re talking about here. Then, two or three years later, I see his name pop up in the news because he’s had this incredible experience of successfully landing this aircraft.
Russel: Gosh, I’m getting a little emotional just having this conversation. I had no idea that he was a student of high-reliability organizations and high-reliability systems. I am not surprised.
If you watch the movie, it’s also clear that he has a pretty clear understanding of the human aspects of what he was doing too. I mean this in a good way. In the midst of the situation, he was a stone-cold decision-maker.
At the same time, very clear, after he got the aircraft down and all of the stuff afterwards, about the human aspects of all that. That’s actually great. That’s a great segue. You talk a lot about high-reliability organizations and the characteristics of that in your book.
One of the things that I would assert is that a high level of compliance is a prerequisite for high reliability.
Jan: That’s absolutely true. Along with that high level of compliance has to go systems for keeping procedures up to date. You can’t have compliance with rules that haven’t been checked or edited for years and years.
Part of that whole compliance in the high-reliability context has got to have with it systems for monitoring compliance to see whether people are following the rules. Also, not assuming that non-compliance means that the people are wrong.
It could be that the system is wrong, and that the rules need to be changed for some reason. Again, there’s this system component. It’s not seeing the rule followers as necessarily the problem. If the system is not working, which part of the system needs to be modified?
Russel: Even if you have people who aren’t following the procedures, that’s still a systems problem. [laughs]
Jan: Correct. So often we see incident investigations where people say, “Oh, so and so wasn’t following the procedure.” What’s the corrective action? Training. It’s seen as the problem is the person, not the problem being the system as a whole.
Russel: That’s exactly where I was going, Jan. Even if the person doing the work is not following the procedure, the response should not be punitive, just giving them more training. It should be more, why aren’t they following the procedure?
Have we updated them recently? Are they appropriate? Did we give them different tools? Was there some other contributing factor? There’s a whole level of, “Did we hire the right person? Did we give them the right training in the first place? When’s the last time they got refresher training?”
Vince Lombardi…I don’t know if you know Vince Lombardi. He was a famous American football coach in the ’60s. He used to start every training camp with, “We here at the Green Bay Packers, we study fundamentals. Gentlemen, this is a football.”
What I’m driving at is there’s lots of potential contributing factors there. I think that is a nice segue to what we’re talking about too. You mentioned in your book, “Least allowable risk.” You talk about that versus risk prioritization. I think the term you use is “as-low-as-reasonably-practicable risk.”
Can you unpack for me what that concept and idea is and how it’s used in the Australian pipelining world?
Jan: We haven’t really talked much about risk. Risk is super important, but there are questions about what’s acceptable regarding risk. Why are you using risk? Where are you heading in doing your risk assessments?
One common way of using risk is as a method of prioritization. You might say, “This is how much budget we’ve got to spend on pipeline inspections for the next year or two,” or whatever. Where are we going to spend this?
You might say, “All right, we should spend it on the things that are posing the highest level of risk. Let’s go and risk assess all of our pipeline segments and decide where we’re going to spend our money.” That’s one way of using risk.
Sometimes people would frame that, even, as being about continuous improvement, which kind of forgets the fact that the system is also potentially declining and degenerating and whatever.
You talk about spending money to make improvements and you’re on this cycle of hopefully continuous improvement, but it does depend on that balance between how much you’re spending and how much things are degrading, following those laws of nature that we mentioned earlier on.
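To make the prioritization idea concrete, here is a minimal, illustrative sketch in Python. It is not from the episode or from any operator’s system; the segment names, risk scores, and costs are invented, and real risk assessments under AS 2885 or similar standards are far more involved.

```python
# Illustrative only: rank hypothetical pipeline segments by a risk score and
# allocate a fixed inspection budget to the highest-risk segments first.
# All names and numbers here are invented for the example.

from dataclasses import dataclass


@dataclass
class Segment:
    name: str
    risk_score: float      # e.g. likelihood x consequence from a risk assessment
    inspection_cost: float


def prioritize(segments: list[Segment], budget: float) -> list[Segment]:
    """Return the segments funded this cycle, highest risk first."""
    funded = []
    remaining = budget
    for seg in sorted(segments, key=lambda s: s.risk_score, reverse=True):
        if seg.inspection_cost <= remaining:
            funded.append(seg)
            remaining -= seg.inspection_cost
    return funded


if __name__ == "__main__":
    segments = [
        Segment("Urban crossing A", risk_score=8.5, inspection_cost=120_000),
        Segment("Rural segment B", risk_score=3.2, inspection_cost=40_000),
        Segment("River crossing C", risk_score=6.7, inspection_cost=90_000),
    ]
    for seg in prioritize(segments, budget=200_000):
        print(f"Inspect {seg.name} (risk {seg.risk_score})")
```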
Another way of thinking about risk and why we’re looking at risk is to think, “OK, people who work on and around our systems shouldn’t be exposed to unacceptable levels of risk.” There should be limits. We should say, “We’re not going to expose our people or the general public to significant levels of risk.”
We’re going to do risk assessments on all of our facilities, our activities, etc., etc., designed to decide whether or not the risk that we’re exposing workers and the public to is low enough. We’re going to keep making changes, and putting in additional risk controls, and making things better. Making the system operate more effectively until we decide that risk is low enough.
To take that to its extreme, obviously we can’t completely eliminate risk. You can’t say, “Let’s keep making improvements until the risk is zero,” because it’s never zero if we’re doing these activities. You need to come to some level which is as low as we’ve all decided collectively in society is an acceptable level of risk. After that, you’ve got no obligation to do more or to spend more money.
That’s the idea of reducing risk to a level that is as low as reasonably practicable. It originates in UK law and has made its way into Australian law from the UK. It’s commonly used in all countries that are part of the British Commonwealth, but a similar principle applies in quite a lot of other jurisdictions as well.
It means that companies have an obligation to take action to reduce risk further unless the cost of doing more changes is grossly disproportionate to the benefit that you get from making those changes.
You’ve got to keep spending money until you’ve reduced the risk far enough. Super importantly, in this system you can’t say, “We’re not going to spend the money because we can’t afford it.” Making these changes becomes the cost of doing business.
The way the whole regulatory system is framed is that the companies have to demonstrate to the regulator their case for safety, meaning that we’ve spent an appropriate amount of money and put an appropriate amount of effort into reducing safety risk, to the point where if we were to spend more money, it would effectively be wasted.
We’re not going to spend more because there’s nothing more we can do without wasting money on this.
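As a rough illustration of the “grossly disproportionate” test Jan describes, here is a minimal sketch with assumed numbers. In practice, an ALARP demonstration is a documented, largely qualitative argument rather than a single calculation; the monetized risk reduction and the disproportion factor of 10 below are purely assumptions for the example.

```python
# Illustrative only: a stylised version of the ALARP "gross disproportion" test.
# The disproportion factor and all dollar figures are assumptions for the example.

def reasonably_practicable(control_cost: float,
                           monetised_risk_reduction: float,
                           disproportion_factor: float = 10.0) -> bool:
    """A further risk control is required under ALARP-style reasoning unless its
    cost is grossly disproportionate to the benefit it delivers."""
    return control_cost <= disproportion_factor * monetised_risk_reduction


# Example: an extra inspection run costing $250k that removes an estimated $50k
# of monetised risk would still be required with a factor of 10, because
# 250_000 <= 10 * 50_000.
print(reasonably_practicable(250_000, 50_000))   # True  -> do it
print(reasonably_practicable(900_000, 50_000))   # False -> grossly disproportionate
```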
Russel: I’m processing. The wheels in my brain are turning as you’re talking about this. I’m thinking about what I know about U.S. pipeline operators and their approach to things. There are many operators, and each operator has their own culture and approach to these things. By and large, I would say there are two broad approaches.
One would be we do what’s required, and the other would be we do a little more than what’s required. Both of those are fundamentally different than what you just laid out, which is, I have a program.
One of the things that comes up for me, there might be things that are required from a pure compliance perspective, that when you look at the competition for dollars to improve the system, it might not be the place you want to spend your money.
The place where you get the biggest risk reduction impact is someplace else. It might be someplace completely outside the regulations or completely outside of what currently exists as compliance.
Jan: You’re absolutely correct. It doesn’t necessarily mean companies have to spend more money, although it might, which is why, in an ideal world, this has to apply to the whole industry, so that individual companies are not put at a commercial disadvantage.
Also, it might not mean spending more money. It might just mean spending the money in smarter places.
Russel: Right. [laughs] That would be a big change for the operators, but it would also be a huge change for the regulators, because their whole approach to how they do inspections shifts.
I’m no longer inspecting to see if you’re doing these things that are required. I’m now inspecting to say, “Are the things you’re doing and the way you’re spending your budget the most effective they could be?” That’s a very different conversation.
Jan: You’re right, it does require smarter regulators that have a different view of the system.
It’s done in two pieces. First of all, there’s the desk-based, document-based argument for what we say we’re going to do. Our risk assessment that says, “These are the riskiest parts of our system, so this is what we’re undertaking to do. We’re not doing any more.”
The next thing we would say would be, “This other thing isn’t justified, so we’re not doing it, but here are all the things we are doing.” That’s an analytical desktop exercise informed by field data.
Once that case is accepted by the regulator and in place, then the thing against which you’re audited in the field are effectively your own undertakings as to what you were going to do.
It’s different for every company. Obviously, there’s a lot of commonality, but there are also differences from company to company. You’ve made the case that this is your highest risk. You said you were going to do these things, show us.
Russel: There’s lots of valid reasons for those differences, depending on where you’re operating, the nature of the weather, the nature of the geology, the nature of the terrain, the products you’re moving, how close are you to population, and other things that could affect consequences of an adverse event and so forth.
Again, I would be really interested to hear from listeners about what do you think about these ideas? I’m curious, because when I read the book, some of these ideas, that makes a huge amount of sense, but man, how would you make the shift to move from where we are currently in the U.S. to that kind of framework?
What would that take, and how do you do it in a way that the transition, the period of time when you’re making the change, doesn’t elevate your risk? Because it’s a major change.
Jan: It’s a major change that quite a few industries have gone through, and that, in fact, the pipeline sector has gone through in other countries. We all used to have prescriptive rules and focus on compliance.
The shift to this, the as-low-as-reasonably-practicable idea and all of that, came about as a result of major accidents that happened despite compliance with the prescriptive requirements.
Russel: Doesn’t this approach have its roots in the report coming out of Piper Alpha? Isn’t that where this has some of its roots from?
Jan: Yes. Piper Alpha was the impetus behind introducing this kind of regulatory approach into the UK offshore oil and gas sector. It existed in other industries and in other countries before that, but certainly, the UK offshore oil and gas moved to this after Piper Alpha. You’re correct.
Russel: I’m going to shift the conversation again. I appreciate, Jan, you letting me jump around like I’ve been doing.
One of the other things I took out of your book, one of those things I read that was a big aha, is that you talk about the need in critical infrastructure and heavy industry to have professionals whose accountabilities extend beyond just their company.
We’ve talked about pilots. Certainly, when you go and you get your pilot’s license, there’s a certain code of conduct and a certain expectation about what it means to be a pilot and what your responsibilities are as a pilot. You talk about the need for that in any professional. I’m like, again, that makes a huge amount of sense.
I got to wondering about what would it look like if we created a professional certification for a pipeline professional? What would be the requisite things that they would need to know, or do, or whatever? I’ve noodled on this, I have an idea. I’d like to hear your thoughts about it.
It’s a two-step thing. One piece is you ought to have a professional engineer registration, or certification, depending on what country you’re in. Then it’s a step beyond that, because I need to have some kind of understanding and orientation around what it means to have a high-reliability organization.
I probably need to understand and be able to articulate a dozen major incidents and what led to those incidents occurring. That’s what informs my decision-making in the future.
What’s your thoughts about all this? Is there anything going on in Australia or other places that you know of, where there’s some pipeline professional program?
Jan: I’m not the expert on this, but here in Australia, this takes us back to the beginning of our conversation. The APGA has a technical competency framework around pipeline engineering.
Broader than that, for engineers in Australia, the system across the different states is gradually moving towards registration of engineers. There are a number of different specializations that are possible. The criteria for becoming a pipeline engineer in that system are something that APGA have had a project to develop for many years.
I’m not using quite the right language to talk about it because I don’t have the jargon at my fingertips. We can certainly put links to that material as well in the show notes from the podcast.
Broader than that, that only covers the technical competencies. Super important, but not sufficient in its own right. That’s where the practice guide comes in that we were talking about at the very beginning.
It’s not directly linked to certification at the moment, but it’s around developing professional values for pipeline engineers that go across the profession, rather than being something that’s aligned with a particular company.
As we said at the beginning, that practice guide is very much based around not only how you might respond in particular cases, but developing a background of cases that engineers can draw on.
Yeah, I think that’s super important. I like your idea.
Russel: Kind of like an attorney. When they’re working on a case, they go to other cases, and they use that to inform their arguments. This would be more about I go and I look at what are the other systems that were out there. How did they break down? Does that have correlation to what I’m looking at or what I’m doing now?
Jan: I think we’ll find a lot of our pipeline engineering colleagues out there are doing that already, but they do it informally. They share with their friends. They share at professional meetings.
This is just bringing some of those professional development activities a bit more out into the light and making sure, in these days where everyone’s under time pressure, that these things still happen.
Russel: I know, for me, that some of the best education I’ve gotten is going to lunch and going to dinner with other senior pipeline operators and hearing their stories. The way you share around a dinner table versus the way you share when you write a report and present it to industry has a different impact.
I agree with you. I do think that there is a lot of interest in getting this kind of stuff and learning it and so forth, but it’s hard to access. I think we could benefit from some formality.
Now, this whole idea for me is notional. It just comes out of reading your book, but it’s like wow, that’s a really interesting idea. Maybe that becomes part of what we’re doing in the U.S. around pipeline safety management.
Jan: That’s an interesting idea. I should also point out that this has links back into the whole high-reliability organization piece because one of the qualities of high-reliability organizations is around deference to experts in making decisions.
If you think about this, you’ve talked about pilots a few times. The same happens with a surgeon in an operating theater. There are certain situations where those experts have more authority to make decisions than the CEO of the company.
If you’re just trying to fly the plane, you don’t ring the CEO of the airline to ask what to do. Once you’re in the cockpit and you’re flying the plane, the pilot has absolute authority to do things. In a control room, the most senior professional in the control room usually has the highest level of authority.
Even if the CEO of the company comes into the control room, the most senior person in the control room is still the one in control. This idea of having a certain set of critical decisions where you defer to expertise and you defer to professional skills is completely consistent with the whole high-reliability organization piece as well.
Russel: It’s interesting. Whenever I have one of these conversations, we end up spending a couple of minutes talking about a bunch of subjects, any one of which we could spend an hour talking about. [laughs] It’s just a matter of how deep do you want to dig into it.
The whole conversation about who’s in charge and who has the authority to make the decision becomes really complex because, unless those things are ground into culture, like they really manifest in culture, it’s very hard for them to operate correctly when you’re under the pressure of an impending incident and time.
You tend to revert to more classic hierarchical decision-making, and that’s not good in an organization that needs higher-reliability decision-making.
Well, look, Jan, I am so glad that you’ve been willing to do this and get through the difficulties we’ve had scheduling between Houston and Melbourne, because I have really enjoyed this conversation, and I have found it very interesting and very informative.
I’m very hopeful that at some point, as we’re getting back to traveling around the world and going to events, I have a chance to actually meet you in person and say hello. I’d love to have you back whenever you have an idea you think pipeliners would be interested in hearing us talk about.
Jan: Thank you so much, Russel. I do hope we get to see each other face to face at some point. I’d love to hear what your listeners think about some of these ideas we’ve been discussing.
Russel: I’ll certainly share whatever I find out or whatever we hear. It’ll be interesting.
Jan: Fabulous.
Russel: All right, well, thank you again.
Jan: OK, bye.
Russel: I hope you enjoyed this week’s episode of “The Pipeliners Podcast” and our conversation with Jan. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit PipelinePodcastNetwork.com/Win and enter yourself in the drawing.
If you’d like to support this podcast, please leave us a review wherever you happen to listen: Apple Podcasts, Google Play, Stitcher, and many others. You can find instructions at PipelinePodcastNetwork.com.
[background music]
Russel: If you have ideas, questions, or topics you’d be interested in, please let me know on the Contact Us page at PipelinePodcastNetwork.com or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords