This week’s Pipeliners Podcast episode features first-time guest Christopher De Leon of ROSEN discussing the recent 2021-2022 NTSB Most Wanted List recommendation for pipeline operators to optimize pipeline leak detection.
In this episode, you will learn about how to run the right inline inspection tools to detect threats, the importance of having ILI systems and not just tools, how to support the human element of leak detection to drive toward zero incidents, the importance of how you’re collecting and using data to support leak detection, and more important topics for operators to consider.
NTSB Most Wanted List for Pipeline Leak Detection: Show Notes, Links, and Insider Terms
- Christopher De Leon is the Head of Integrity Solutions for ROSEN. Connect with Christopher on LinkedIn.
- ROSEN is the current episode sponsor of the Pipeliners Podcast. Learn more about ROSEN — the global leader in cutting-edge solutions across all areas of the integrity process chain.
- NTSB (National Transportation Safety Board) is an independent U.S. government agency responsible for safety across the aviation, highway, marine, railroad, and pipeline modes of transportation. The agency investigates transportation incidents and accidents and makes recommendations for safety improvements.
- Read the 2021-2022 NTSB “Most Wanted List” for safety improvements, including recommendations that affect the pipeline industry focused on pipeline leak detection.
- Leak Detection Systems (LDS) include external and internal methods of leak detection. External methods are based on observing external factors within the pipeline to see if any product is released outside the line. Internal methods are based on measuring parameters of the hydraulics of the pipeline such as flow rate, pressure, density, or temperature. The information is placed in a computational algorithm to determine whether there is a leak.
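The internal, computational side of leak detection described above can be sketched in a few lines. This is a minimal illustration of a volume-balance check, assuming totalized meter readings over a common time window; the 2 percent threshold is an arbitrary assumption for illustration, not a regulatory or vendor figure.

```python
# Minimal sketch of an internal (computational) leak-detection check:
# compare metered inflow and outflow over the same time window and
# alarm if the imbalance exceeds a threshold. Threshold and units are
# illustrative assumptions only.

def volume_balance_alarm(inflow_bbl, outflow_bbl, threshold_pct=2.0):
    """Return True if the in/out imbalance exceeds the threshold.

    inflow_bbl / outflow_bbl: totalized volumes (barrels) over the
    same time window from the upstream and downstream meters.
    """
    if inflow_bbl <= 0:
        raise ValueError("inflow must be positive")
    imbalance_pct = 100.0 * (inflow_bbl - outflow_bbl) / inflow_bbl
    return imbalance_pct > threshold_pct

# Example: 10,000 bbl in, 9,700 bbl out is a 3% imbalance -> alarm
```

Real CPM (computational pipeline monitoring) systems add pressure, density, and temperature compensation plus statistical filtering to suppress false alarms, but the balance comparison above is the core idea.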
- Integrity Management (IM) (Pipeline Integrity Management) is a systematic approach to operate and manage pipelines in a safe manner that complies with PHMSA regulations.
- ILI (Inline Inspection) is a method to assess the integrity and condition of a pipe by determining the existence of cracks, deformities, or other structural issues that could cause a leak.
- API 1163 is the industry standard for inline inspection systems qualification. The standard covers the use of inline inspection (ILI) systems for onshore and offshore gas and hazardous liquid pipelines.
- API 1160 (Managing System Integrity for Hazardous Liquid Pipelines) provides a process for establishing safe pipeline operations, including robust assessments of potential risks and the establishment of systems to safely and sustainably manage them in day-to-day operations.
- Magnetic Flux Leakage (MFL) is a magnetic method of nondestructive testing that is used to detect corrosion and pitting in pipelines.
- AGA (American Gas Association) represents companies delivering natural gas safely, reliably, and in an environmentally responsible way to help improve the quality of life for their customers every day. AGA’s mission is to provide clear value to its membership and serve as the indispensable, leading voice and facilitator on its behalf in promoting the safe, reliable, and efficient delivery of natural gas to homes and businesses across the nation.
- The PRCI (Pipeline Research Council International) is the preeminent global collaborative research development organization of, by, and for the energy pipeline industry. [Read more about the PRCI collaborative research projects, papers, and presentations.]
- Listen to PRCI president Cliff Johnson on the Pipeliners Podcast discussing the latest pipeline research projects and PRCI initiatives that are advancing the industry forward.
- Find out more about the threats to pipeline integrity that PRCI has identified.
- The Bellingham Pipeline Incident (Olympic Pipeline explosion) occurred on June 10, 1999, when a hazardous liquid pipeline carrying gasoline ruptured near Whatcom Creek in Bellingham, Wash., causing deaths and injuries. The three people killed were 18-year-old Liam Wood and 10-year-olds Stephen Tsiorvas and Wade King.
- The NTSB accident report attributed the cause of the rupture and subsequent fire to a lack of employee training, a faulty SCADA system, and damaged pipeline equipment. [Read the NTSB Pipeline Accident Report]
- Listen to Larry Shelton describe his first-hand experience from the incident on Pipeliners Podcast #79.
- ASME (American Society of Mechanical Engineers) develops codes and standards for industrial use to create a safer world. ASME has been defining piping safety since 1922.
- ASME B31.8S (Managing System Integrity of Gas Pipelines) is the engineering standard created through the ANSI consensus standard process to manage natural gas transmission pipeline system integrity.
- NACE (National Association of Corrosion Engineers) or NACE International is a membership group whose stated goal is to “equip society to protect people, assets, and the environment from the adverse effects of corrosion.”
- SP-0102 (Inline Inspection of Pipelines) is a standard that outlines a process of related activities that a pipeline operator can use to plan, organize, and execute an ILI project. This standard is intended for use by individuals and teams planning, implementing, and managing ILI projects and programs.
- INGAA (Interstate Natural Gas Association of America) is a trade organization that advocates regulatory and legislative positions of importance to the natural gas pipeline industry in North America.
- Pipeline Pigging and Integrity Management Conference (PPIM) is the industry’s only forum devoted exclusively to pigging for maintenance and inspection, as well as pipeline integrity evaluation and repair. The event draws engineering management and field operating personnel from both transmission and distribution companies concerned with improved operations and integrity management.
- International Pipeline Conference (IPC) is organized by volunteers representing international energy corporations, energy and pipeline associations, and regulatory agencies. The IPC has become internationally renowned as the world’s premier pipeline conference that supports educational initiatives and research in the pipeline industry.
- PHMSA (Pipeline and Hazardous Materials Safety Administration) is the federal agency within DOT responsible for providing pipeline safety oversight through regulatory rulemaking, NTSB recommendations, and other important functions to protect people and the environment through the safe transportation of energy and other hazardous materials.
- HCA (High-Consequence Areas) are defined by PHMSA as a potential impact zone that contains 20 or more structures intended for human occupancy or an identified site. PHMSA identifies how pipeline operators must identify, prioritize, assess, evaluate, repair, and validate the integrity of gas transmission pipelines that could, in the event of a leak or failure, affect HCAs.
- PIR (Potential Impact Radius) is defined by PHMSA (49 CFR § 192.903) as the radius of a circle within which the potential failure of a pipeline could have significant impact on people or property.
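For natural gas lines, the PIR is commonly computed with the formula from ASME B31.8S that 49 CFR 192.903 references. A short sketch, with illustrative example values:

```python
import math

# Potential impact radius per the ASME B31.8S formula referenced by
# 49 CFR 192.903:
#   PIR (ft) = 0.69 * d * sqrt(p)
# where d is nominal diameter in inches and p is MAOP in psig.
# The example values below are illustrative only.

def potential_impact_radius_ft(diameter_in, maop_psig):
    return 0.69 * diameter_in * math.sqrt(maop_psig)

# e.g. a 30-inch line at 1,000 psig:
# potential_impact_radius_ft(30, 1000) is about 654.6 ft
```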
NTSB Most Wanted List for Pipeline Leak Detection: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 196, sponsored by ROSEN, the global leader in cutting-edge solutions across all areas of the integrity process chain, providing operators the data they need to make the best Integrity Management decisions. Find out more about ROSEN at ROSEN-Group.com.
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. We appreciate you taking the time, and to show the appreciation, we give away a customized YETI tumbler to one listener every episode. This week, our winner is Brian Jackson with Columbia Gas of Kentucky. Congrats, Brian, your YETI is on its way. To learn how you can win this signature prize, stick around till the end of the episode.
This week, Christopher De Leon, head of integrity solutions at ROSEN, is joining us. He’s going to talk about the recently-published NTSB Most Wanted List and using the right ILI system for the threat. Christopher, welcome to the Pipeliners Podcast.
Christopher De Leon: Thanks for having me. It’s always a great chance to talk to you.
Russel: Tell me a little bit about your background, Christopher. Where do you come from? How did you get into integrity management, pipelining, and all that?
Christopher: Yeah, that’s always a fun barstool talk. Born and raised in Houston, grew up in Southwest Houston, went to Bellaire High School if any listeners know about that area. Went to the University of Houston. Studied electrical engineering. Like every, or at least most of the people I know in pipelines, we had no intentions of being a pipeline professional.
I was interning at a power generation plant as an electrical engineer. While I was roaming a career fair, a recruiter jumped out in front of me and said, “Hey, give me your resume.” I was like, “Why?” He said, “Because that’s what we’re here for, right?” I said, “Sure.”
I gave him my resume and he was like, “Do you know what pipelines are?” I was like, “No, not really.” Then one thing led to the next. I’m co-oping with Southern Union, which was the owner of Panhandle Eastern Pipeline, right?
Russel: Yeah, absolutely.
Christopher: Yeah, so that’s how I had my ingress into the pipeline world.
Russel: Some aggressive in-your-face recruiter changed your life.
Christopher: [laughs] Sometimes God just puts you where you need to be, Russel.
Russel: I’ve certainly gone to some recruiting fairs and had to grab people by the arm and make them talk to me because they had no idea what we did. It’s good to know that that works sometimes.
Christopher: Sometimes. That’s how I came into pipeline. Then, similarly as in electrical, I had no idea I’d end up in pipeline integrity. The director at the time was a gentleman by the name of Jerry Rau. He gave me a shot. I thought I’d go into compression. He’s like, “No, we’re going to bring you in integrity.”
He put me on to a couple of pipelines that I managed, what’s now called Energy Transfer, and focused mainly on integrity assessments through inline inspection. That’s kind of where my passion for ILI came in.
Russel: Interesting. I asked you to come on and want to talk about the recent NTSB Most Wanted List as it relates to pipelining. I think most people in the pipeline world know what the NTSB is, but I’ll ask you what’s the NTSB and what’s the Most Wanted List?
Christopher: Sure. Here’s my take on it. The best way we think about it is they are a federal agency that has a congressional mandate to help make transportation safer. They do that normally through two main activities as I understand it. Independent accident investigations, and then making recommendations based on their investigations.
How that ties into the Most Wanted List — so that’s been around for a few years. One of the two more recent ones that I can recall was 2019, where they were basically doing a big push for closing the gaps in some of the recommendations that they had made related to pipeline safety.
Russel: Yeah. It was a lot of control room and alarm management stuff in the previous bunch.
Christopher: Yeah. This year, what we find is basically a focus on leak detection and mitigation of that. That’s their Most Wanted List. The reason why they put that out there was a call to action by industry. Pay attention to the things that they’ve noticed.
I think they stand behind the statement around, “If we do these things, it will save people’s lives.” It’s one of the messages that they send. That’s a little bit on their Most Wanted List.
Russel: Yeah. I think the NTSB is kind of a fascinating agency because they don’t really do any kind of advocacy for anything, one way or the other. Other than when an accident happens, their job is to go to that accident, find the root cause, and look for the things that could be changed that would improve safety performance. They do that across aircraft, trucking, shipping, trains, pipelines, all that stuff. They are very competent, very capable folks.
Let’s talk a little bit about the specific recommendations on their Most Wanted. What are the key things in the Most Wanted around ILI or integrity management?
Christopher: We ought to separate that a little bit. One of the things that I think is really relevant about some of the recent NTSB work is they published their factual report on one of the pipeline incidents that occurred in Danville, Kentucky, back in 2019. There was another one that they’re also working on. We haven’t gotten the factual report on that one yet. That’s a geohazards-related failure that happened in Hillsboro, Kentucky.
There is a common denominator in those, and it’s that if we look at how those incidents happened, and we look at the theme of pipeline integrity management, tying back that Most Wanted List from 2019 and this now factual report, what we find is that inline inspection can be a great tool for finding data and making decisions, but it’s not an end-all, be-all, if that makes sense.
Russel: Yeah. You made a comment off-mic as we were getting ready for this. You talked about, it’s not really about choosing the right ILI tools. It’s more about choosing the right ILI systems. Maybe you could talk a little bit about what do you mean? What’s the difference between an ILI tool and an ILI system?
Christopher: Great question. This is where I often get on my soapbox, so feel free to interrupt or interject, because I’ll be here a long time on this topic. [laughs]
Traditionally, what we think of is inline inspection, ILI. The majority of the effort — the physical effort that goes into ILI — is inspecting. A tool shows up, you put it in the pipeline, you hope it doesn’t get stuck, come out, then it goes on its way. Then sometime later, we’ll deliver an ILI report to you.
We focus a lot of our efforts on the tool. Often, when we’re talking about pipeline integrity and integrity assessment for specific threats to pipelines, we talk about a tool, but what we find is we as an industry come together through consensus organizations, namely ASME or API.
API gives us an ILI validation standard. It’s API 1163. In it, they have definitions, and the ILI tool is just the tool itself. What we need to focus on more as an industry and even in the way we communicate is…I heard on one of your previous podcasts around safety management systems and certain terminologies. That’s an important part in our industry.
When I look at what an ILI system is, what we find is it’s the tool and that tool’s ability to go into a pipeline, navigate the pipeline, and come out, all under operating conditions that are agreed with an operator: tool velocities, bend radiuses, wall thicknesses, and all that.
Another important part, too, is the technology, the sensors that are on board that tool, and what those sensors can do. What that brings to the table is what we would consider POD [probability of detection] and resolution. The probability that it can detect something, and then at what resolution it can do it.
Downstream of that, now you have evaluation procedures and algorithms. That’s also part of an ILI system as defined by 1163. That’s where you start thinking about how an analyst is able to do POD and sizing. The probability that they’ll identify what you’re looking for, and then their accuracy in describing that thing. Think length with depth, etc.
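The POD-and-sizing distinction Christopher draws can be made concrete. ILI depth-sizing performance is typically quoted as a tolerance at a certainty level, for example ±10% of wall thickness at 80% certainty. The sketch below shows how tolerance, error spread, and certainty relate, assuming a zero-mean, normally distributed sizing error; the 7.8 %WT standard deviation is an illustrative assumption, not any vendor's specification.

```python
import math

# Hedged sketch of interpreting a depth-sizing spec such as
# "±10 %WT at 80% certainty": assuming a zero-mean normal sizing
# error, the certainty is the probability that the true depth lies
# within the stated tolerance of the reported depth.

def sizing_certainty(tolerance_pct_wt, sigma_pct_wt):
    """P(|sizing error| <= tolerance) for a zero-mean normal error."""
    return math.erf(tolerance_pct_wt / (sigma_pct_wt * math.sqrt(2)))

# A sigma of about 7.8 %WT reproduces the familiar spec:
# sizing_certainty(10, 7.8) is approximately 0.80
```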
Then on the back end, it’s procedures and people. What procedures are in place to maintain quality and consistent work? Then, the people that you need to have the knowledge and the competencies to do a good job.
I really encourage, whenever we’re talking about ILI, you always hear me purposely say, “What ILI system did we deploy?” “What ILI system is being used for your threat management on that pipeline?”
Russel: Christopher, you just said way more than just a mouthful, is what you just said. [laughter] I say this all the time on the podcast, that I am not an integrity management guy. I know enough about it to be really dangerous. I think one of the things I do know about integrity management is it’s a very diverse, very complex, very technical domain. It’s very easy to get lost in the minutia of what is the nature of this specific threat and its probability of failure?
What is the size of it? What is its growth rate? All those kinds of things. All of that is work that has to be done but that, by itself, is not enough. It’s really trying to understand the net effect of all those things working together.
Sometimes, I actually wonder how we’re able to do it. I know we do it and I know we do it well. Sometimes I wonder how we’re actually able to do that. Does that make sense?
Christopher: It makes a lot of sense. An important part there, if we go back to this factual report on that Danville failure. Integrity management was in place. Intentions were good. ILI technology had been in the line since the late ’80s. They had inspected the line with different technologies, everything from MFL tools to hard spot tools.
Specifically, a hard spot was deemed as one of the interacting threats that led to this failure — one of the failures that the line had experienced. The operator did run a hard spot tool to be diligent and to try to address this threat to their pipeline.
Yet, in 2019, as an industry, because we are an industry and we’re in this together, we find that a line that had been assessed with what one would deem the right ILI technology still found themselves in a situation where they’re dealing with an incident. Unfortunately, in this incident, there was a fatality.
We look back at the industry and say, the program was in place. The right tools had been deployed. The ILI system is what, potentially, failed in this case. Again, we say potentially, depending on what the scope of work was and all of that fun stuff, things that can always be debated. In the end, the point there is that integrity management has evolved a lot since the early 2000s, both on the hazardous liquid side and the gas side. The NTSB has made recommendations to industry to help close those gaps.
ILI is evolving. I think one of the big points out of there is, we’re learning a lot as we’re going through pipeline integrity management. We’re using data that we’ve collected over years and we’re trying to make the best decisions possible.
Russel: I’m reading some of the information you provided me about the Danville report, in particular. It’s talking about a tool run in 2011 that identified 16 features. Then, they ran again in 2019, and they identified 441. Was that because the tool was better or was that because the pipe had actually grown that many more features?
Christopher: It’s another good discussion point. Actually, the way I understood that report is that the same data from 2011 was actually re-analyzed. There’s always a lot of circumstances there. Technology has evolved. People have more knowledge. Different algorithms can be applied to previous data sets.
What it appears to be is that that same inspection data set from 2011, when it was re-analyzed with modern practices, thinking about that ILI system, not just the tool, but that same tool with improved algorithms and more experience on that type of threat and that tool technology, they actually ended up saying that there were more than 400 features on that line that that tool could have detected.
Russel: That’s like a big “ah ha” for me, Christopher. That’s a big “ah ha” because what you’re saying is they, basically, got the same results in 2011 and 2019 from the data the tool produced, but they got very different results in the data analysis that was done in 2011 versus 2019.
Christopher: That’s the “ah ha.” That’s the point that I want to drive home. There’s a difference when we think about a tool, an inline inspection tool, and how service providers, now, that look at ourselves as what we believe to be integrity partners because it’s not just a tool. It’s the ILI system, and part of that is all of the things you just said. It’s different algorithms that we learn and modify over the years. A more recent one is machine learning, right?
ROSEN, specifically, we’re looking at all the history of inspection data that we have and supplementing potential human error and driving consistency through machine learning evaluation protocols that use pattern recognition, spatial recognition — neighbor analysis is what we call it. All of these things are in proximity. It really improves analysis ability.
Back in the day, I used to always hear people say they used to scroll through their ILI log. You can imagine this piece of paper with what looks like EKG symbols on it and they’re scrolling through paper, trying to identify features.
Russel: I’m old enough to remember that. [laughter]
Christopher: Now, what we’re saying is there could be a lot of human error in there. Fatigue is a big one. That was actually on the NTSB’s Most Wanted List for different parts of our industry in transportation — driver fatigue, etc. — but that’s real for us. We’re people. We’re looking at ILI data. We’re making decisions on pipeline integrity. These modern analytical processes enable us to make better decisions. That’s part of this whole concept of an ILI system.
Russel: Let me ask some more questions about that. We tend to act like machine learning is new. It’s not. Machine learning has been around for decades. What’s different is the amount of data we can crunch through the algorithm. That’s the primary thing that’s different.
I think the other thing that’s different is the ability to tweak the algorithm, improve it, and train it. We’re able to do that at a higher cycle rate than we used to be able to do. Again, it just has to do with the systems and their limitations that we were using. To what degree is machine learning improving the analysis? What is machine learning not able to do that the engineer still has to do?
Christopher: Sure. A couple of things there. When we look at what we’ll call AI, that’s the broader term, artificial intelligence, a component of that is machine learning. What’s fundamental there, as you said, is you have to have enough data. And, enough is generally described as dependent on what you’re trying to achieve.
The second one is the quality of the data. You have to have a clear objective, and a lot of people you know who are getting more interested in machine learning, that’s a big part for them. As a user, if you don’t understand machine learning, you don’t have to. Things you can start asking is, “What’s your objective and what are you trying to achieve?” Then you can ask the next question, which is, “So, why do you think you have enough data for that?”
For us, the way we’re trying to close gaps is this: we’ve been inspecting pipelines in the U.S. for over 30 years, and as a company globally, for longer than that. If we’re able to take all of this inspection data, all of the field verifications, and the huge data set that we’ve accumulated over the years, and actually put that data through different routines, we get to the second part that I was going to bring up, which is knowledge. That’s that other part of the ILI system, the people side of it. We say, normally, these are the type of features that fail. Whenever we record a log, we see that these type of features are the ones that are either undercalled or overcalled, or where our interaction rules between two features usually fail in these circumstances.
We can point machine learning towards those types of scenarios and have it help us begin to identify them, so that an analyst knows where to focus their effort and knows when to raise their hand and ask for help and say, “Hey, I see this. I’m not sure if I understand it. Maybe you need to talk to somebody else and find out what’s going on here.” Maybe you call the operator, the customer, and say, “Hey, I’m seeing this in your ILI data.” That’s one way that we can enable identifying threats in ILI data that are maybe needles in the haystack.
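The "neighbor analysis" Christopher mentions can be illustrated with a toy version: flag reported metal-loss features that sit close enough to interact, since clustered pits can behave like one longer defect. The one-inch axial interaction distance below is an assumption for illustration, not ROSEN's actual rule set.

```python
# Toy sketch of proximity-based "neighbor analysis": find pairs of
# reported features close enough axially that they may interact.
# The 1-inch interaction distance is an illustrative assumption.

def find_interacting(features, axial_limit_in=1.0):
    """features: list of (axial_position_in, depth_pct) tuples,
    sorted by axial position. Returns index pairs of adjacent
    features within the interaction distance."""
    pairs = []
    for i in range(len(features) - 1):
        if features[i + 1][0] - features[i][0] <= axial_limit_in:
            pairs.append((i, i + 1))
    return pairs

# Two pits half an inch apart get flagged; an isolated pit does not.
```

A production system would work in two dimensions (axial and circumferential), apply industry interaction rules, and hand flagged clusters to an analyst rather than make the call itself.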
I’ll bring up one more point, if you don’t mind. One of the themes in this factual report is this: what made a big difference between the 2011 report, which identified under 20 features, and the 2019 re-analysis, which produced over 400, was this concept of gain. The best way to think about it — we don’t have to get too technical — is that it’s just a setting in the data.
Russel: I’ve got an analogy for that. What I was thinking about, it’s kind of like if you remember high school biology. Everybody had microscopes and you would focus those microscopes. Depending on where you put the focus, you would see something different, right?
As I focused in more tightly, I’d see more detail around something, but something else would go out of focus. Then I’d change the focus again, and something would come in and something would go out. It’s kind of like that. You find what you’re looking for. You don’t find what you’re not looking for.
Christopher: That’s it. That’s a great way of interpreting sometimes what ILI data can feel like. Once you know that you have a feature that you really want to investigate that you change your focus, it can distort the view. Again, I don’t want to get too much into machine learning, but it’s this concept, this ILI system, right?
As our analytics evolve, as our knowledge increases as to how different morphologies of threats are represented in ILI data, we use these algorithms and knowledge to highlight the things that are more relevant. Those needles in the haystack — trained by knowledgeable people because they know what to look for — become more visible. It’s not this arbitrary algorithm that says, “Oh, here, I found 17 things you want to look at.” Rather, an SME is saying these are the things we need the algorithm to help us find, because those are the needles in the haystack.
Russel: Having knowledge of my material properties and the environment I’m in and my operating conditions that tell me these are the likely kinds of things I should be looking for.
I think one of the challenges with these new algorithms and the machine learning or whatever you want to call that, the data science, is that one of the things that can happen is you find more things you need to look more carefully at, right?
Christopher: [laughs] Yeah.
Russel: One of the things about that old-school way of just flipping through the charts or flipping through the X-rays, is those guys that did that, they got to where they would quickly see something that needed to be looked at. All the stuff on the edges might get missed, but they would quickly… You end up with a short list of things you really need to pay attention to.
Where, with the flip side of this, you ended up getting a long list and you have to do some additional, “Okay, so I identified all these features. Where are the features I care about?” It actually creates a need for another level of analysis.
Christopher: That’s a great topic, right? I think that’s where, as an industry, we’re coming more together in this relationship of service provider and operator. When we think of an inline inspection, it’s not just the system and how well the analyst is able to report features. There’s another part to that, which is, what do you do with the data once you have it?
You have to respond to data in different ways. I’ll give you an example. Let’s say an operator runs an MFL tool — that’s one of the more common — and it reports a specific feature. If the integrity engineering is not associated with that feature, then you can miss something. What I’m trying to get to, it’s not just the ILI report, but it’s understanding the integrity engineering side of it.
If you go out to the field and say, yes, you indeed found this feature — let’s say it was reported at 50 percent deep — but the person in the ditch doesn’t capture the morphology, the geometry of how that corrosion sits on the pipeline and how the tool responds to that morphology, you’re missing a lot of information that should be fed back to the analyst so they know whether it’s a bigger threat or a lesser threat.
Closing that loop is really important. A lot of times, you can be focused on the tool and the analyst, but what operators are doing once they receive that data, if that’s not coming back to us as an ILI service provider, we’re handicapping ourselves there in the value that’s in the data itself.
Russel: That’s right. I think, Chris, that’s a really good point. I think of this too as…I remember when I first started the podcast about four years ago, the first thing I did is I got a guy who has a Ph.D. in ILI tools on, and we walked through all of these definitions, and all these different tools, and how the tools worked in some pretty excruciating details. Really good, I learned a lot.
One of the things that I still get twisted up about is this idea of identifying features versus identifying defects, and the distinction between a feature requiring evaluation versus it’s just a feature and I don’t need to evaluate it.
Christopher: Can I jump in there?
Russel: Go ahead. What I want to get at is one of the things that goes on with these tools — and they’re performing better — is I’m actually identifying more features, and it’s becoming a lot more important that I understand which features require evaluation and which features do not.
Christopher: Yeah, it’s great. I’ve been listening to your podcast, so I’ll speak to one of your more recent ones, where you talked about normalization of deviance, a big one.
I’ve been pretty fortunate, Russel, to be able to hire a handful of graduates from different universities. A big thing that I drive into the organization as head of integrity solutions is using the right terminology. We have a basis for that in 1163 where we have these definitions. Part of that is an ILI tool provides indications, and those indications then get evaluated. In that evaluation, you then identify features. Those features represent something. It can represent metal loss. It can represent a linear indication, which could often be considered crack-like. It’s those things. It could be a geometry feature.
It’s not until you physically examine it, until you go and dig it up and you get your eyes on the pipe that you know if the pipe has a defect, because if you don’t put your eyes on it, it’s an anomaly. It’s not until you physically excavate the line, you physically are standing over the pipe that you see this is a defect. It’s not an imperfection. It’s not an inclusion…
Russel: A manufacturing anomaly.
Christopher: Yeah. It’s a defect. I’ve put my eyes on it. ILI tools report features, and once you examine it physically, it’s now considered a defect.
1163 also offers some guidance there as the separation between an imperfection, and feature, and defect, but we’ll leave that for a more technical podcast. [laughter]
Russel: Come on, man. Let’s redline the geek meter. We could do it.
Christopher: I don’t know if I’ll do that. I do want to stay on this idea of features and go back to the in-the-ditch practices. I’ll give another example of the importance of that relationship between the activities that happen in the field and tying those back into the ILI system.
An example of that is, I’ll go back to that idea of the operator going in and physically examining the pipe in the ditch. Too often, we expect a specific ILI system to do more than it’s capable of. One of the things I want to highlight is the importance of each project getting the time, energy, and focus that it deserves.
Pipeline integrity management, as we all know, is not just about compliance. It’s really this move, as one industry, towards this goal of zero incidents. For that, we really need to close up this loop, and an ideal example of that is this. PRCI has a pretty well-known report out there that identifies 22 threats to pipeline integrity. One of them is unknown, so really it’s 21. They’re classified under the normal buckets — time-dependent, time-independent, and stable.
If I stay on this road of an MFL tool, which is one of the more common ones, it says you can have external corrosion or internal corrosion. We think that broad. It’s unlikely that I’ll have a metal loss incident on my pipeline because I’ve run an MFL tool and the MFL tool is designed for that. Here’s where I’ll circle it back to the specification.
Each MFL tool will be good at specific geometries of defects that are on the pipeline. We represent that in our performance specifications. What you’ll find is that we actually have seven different industry-recognized geometries in the standard — different morphologies of corrosion.
If we don’t understand what morphology or geometry of defect you have on your pipeline, that ILI tool is likely to miss some of those. You could end up in a situation where I ran an MFL tool, but the line actually had a morphology of metal loss or corrosion on it that that tool is just not good at seeing — and it’s in the specification.
If you do have an incident, or you went and did some digs five years ago but you didn’t feed that information back to your ILI service provider, they may not be able to tell you, “Hey, I looked at what you dug up and this maybe isn’t the best tool to identify that threat.”
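To make the point above concrete, here is a minimal sketch of what checking a field-measured defect against a tool performance specification might look like. The morphology names and thresholds below are entirely illustrative — they are not ROSEN’s or any vendor’s actual specification, and real specs (per API 1163) are far more detailed:

```python
# Illustrative only: checks whether a field-measured defect falls inside a
# hypothetical MFL performance specification. These numbers are made up;
# real specifications vary by vendor, tool, and corrosion morphology.

HYPOTHETICAL_SPEC = {
    # morphology: minimum reliably detectable depth, as a fraction of wall thickness
    "general": 0.10,
    "pitting": 0.20,
    "axial_grooving": 0.30,  # MFL is typically weaker on axially-oriented metal loss
}

def within_spec(morphology: str, depth_fraction: float) -> bool:
    """Return True if the tool is expected to reliably detect this defect."""
    min_depth = HYPOTHETICAL_SPEC.get(morphology)
    if min_depth is None:
        return False  # morphology not covered by the specification at all
    return depth_fraction >= min_depth

# A 25%-wall pit is inside this hypothetical spec; a 25%-wall axial groove is not.
print(within_spec("pitting", 0.25))         # True
print(within_spec("axial_grooving", 0.25))  # False
```

The point of the sketch is the last two lines: the same depth of metal loss can be inside or outside the specification depending on its geometry, which is why feeding dig findings back to the ILI service provider matters.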
Russel: I think that’s really the human element in this whole process, is the intelligent decision making around what threat should I be looking for and what tools — and I’m not talking about just ILI tools — but what tools should I be using to identify and evaluate those threats?
Again, Christopher, I think you’re making a bunch of really good points. You said something earlier, and kind of moved on, and I want to point it out. I think this is a big deal. Every organization has a certain amount of man-hours that they can apply to this activity. With the kinds of tools that we have, we’re identifying more things than we can evaluate. Really understanding where you’re going to put those man-hours to get the best safety impact is huge in our world.
Christopher: It’s not easy. I’ll be the first to say that.
Russel: That’s why senior integrity management people that are responsible for pipelines sometimes have trouble sleeping at night because they know all the details.
Christopher: Again, I want to turn back to this factual report. The accident investigator really brought this to my attention. I was in an AGA Transmission Integrity Committee meeting and she, the principal investigator, came on and summarized some of this and said, “Hey, keep your eyes open for this factual report — it’ll be out soon.”
As I read it, I almost felt bad. The reason why I say I almost felt bad was because you could tell that this pipeline had been in an Integrity Management plan almost from the go, since the inception of Integrity Management. Back to a discussion that you had with Rhett Dotson a couple of weeks ago, where threats are sometimes finding operators, and operators aren’t finding the threats — one could argue that may have happened on this specific 30-inch pipeline as it relates to hard spots. They reacted to it. They ran an ILI tool that had a performance specification and was designed to find that. Yet it wasn’t until eight years later, with modern analytic methodologies and with more competence, that — as the NTSB report states — nine hard spot features were surfaced under the new evaluation protocol in the joint that failed. Whereas in 2011, none were reported at that failure.
Russel: Everybody in our business is doing the best they can and they all understand the significance and the seriousness of it.
If you’ve listened to the episode I did on Bellingham, I had a gentleman by the name of Larry Shelton, who’s an Integrity Management guy who’d been in senior leadership with some pipelines. He ended up on the board of one of the owners of the Olympic Pipeline the day after the event occurred and was involved in dealing with the families and the after-effects and such. For years after that, until he retired, he had like seven different feature reports that he kept on the wall in his office. He’d bring the new integrity guys in and say, “Okay, you’ve got enough budget to dig two of these. Which two do you dig?” One of the ones up there was the feature report for Bellingham. He said in all his years, nobody ever picked the Bellingham feature to dig. That’s kind of compelling, right?
Christopher: I’ll use that as a segue to another topic. Another big part of picking the right ILI tool is industry knowledge. How do you quantify that in different operators’ risk modeling?
If we go back to this topic of pipeline integrity management, code requires us to use some risk assessments to establish what threats you’re susceptible to and what the consequence of failure would be and then prioritize accordingly.
At some point, you’d say, we really want to move towards this quantitative modeling where we’re using data to tell us what threats we have and where and what the consequences are. You’ve got to get back to that concept of knowledge is fundamental and we have to leverage it.
If I stay on that a little bit around industry knowledge, and back to this Danville incident — if I ask us as an industry, pulling away from that operator: in 2004, INGAA reported on hydrogen cracking specific to this type of pipe that was in the ground. In 2007, there was a report about hard spots for this type of pipe. When you look at it, these are years prior to when this incident happened. You have to step back and say, one, “What exposure do I have as an organization to knowing that this information is out there?” and two, “How do I harness it for my Integrity Management plan?”
Russel: That’s right. I think what that leads to, Christopher, is just the whole bigger conversation of how do we do a better job of sharing the data so that we can all be learning together versus each operator learning independently.
Christopher: As an industry, I think we do a really good job, Russel, around sharing information in case studies. Two big ones that come to mind are here in Houston, the PPIM Conference, and obviously one up there in Calgary, IPC, where we try to bring technical case studies and advances to industry and start sharing that knowledge.
I’ll also go back to that other example of running an MFL tool. In 2012, TC Energy presented on this idea of complex corrosion. One way to describe it is if we say there’s seven different geometries of metal loss corrosion, what happens when you have two of them in the same place? It becomes complex.
How does an ILI service provider — their ILI system — address that, and to what ability? That’s been out there since 2012. As an industry, if I just look at API 1163, NACE SP0102, ASME B31.8S, and API 1160, it’s difficult to harness that in consistent standards.
We now know that there’s a threat out there, and it’s real. How has that been incorporated into our normal decisions, both as a service provider here at ROSEN, but then also as an operator saying, “How am I harnessing that industry knowledge to move the needle forward, from integrity management and maybe more towards this idea of pipeline safety?” Which is definitely what it feels like. There’s a lot of inertia moving in that direction.
Russel: Oh, I think we’re at the infancy of what Pipeline Safety Management is going to be all about. That would be my opinion. It’s something that provides an opportunity for transformational improvement in our safety performance in the industry.
I mean, you’ve seen that in the airline industry over the last 30 years or so, and I think there’s a lot we can learn from the airlines. Obviously, there’s a lot about pipeline that’s different.
No, I think you’re right. Safety management and the idea of continuous improvement — continuous improvement as an industry, down in the technical details — there’s a lot of opportunity there.
Christopher: Just on that topic of Integrity Management and pipeline safety, one of the other points that we find in this Danville NTSB report is that our regulation for gas lines has been primarily focused on Subpart O, which is HCAs. The report clearly indicates that while there was one fatality, Integrity Management wasn’t required because there was an insufficient number of structures intended for human occupancy within that PIR, that potential impact radius.
If we think of pipeline safety — that process of looking at how many structures are there, having that knowledge or information about whether there are people on or around the pipeline — is Integrity Management going to be able to address all those things, or do we need regulation for that? And how do we begin to manage that?
It’s a little bit of a different topic, maybe for a different podcast…
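For reference, the potential impact radius (PIR) Christopher mentions is a defined quantity for gas transmission lines: PIR = 0.69 × d × √p, where d is the pipeline diameter in inches, p is the maximum allowable operating pressure in psig, and the result is in feet (the 0.69 factor applies to natural gas; see ASME B31.8S and 49 CFR 192.903). A quick sketch:

```python
import math

def potential_impact_radius(diameter_in: float, maop_psig: float) -> float:
    """Potential impact radius in feet for a natural gas pipeline:
    PIR = 0.69 * d * sqrt(p), per ASME B31.8S / 49 CFR 192.903.
    The 0.69 combustion factor is specific to natural gas."""
    return 0.69 * diameter_in * math.sqrt(maop_psig)

# For example, a 30-inch line at 1,000 psig MAOP:
print(round(potential_impact_radius(30, 1000), 1))  # 654.6 (feet)
```

The 30-inch/1,000-psig inputs are illustrative only, not the parameters of the Danville line; the point is that a count of occupied structures inside this radius is what determines whether a segment falls under Subpart O Integrity Management.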
Russel: It’s related to the idea of not only how many features do I have, but where are these features relative to the potential consequences of failure, right?
Russel: All that lends itself to some of these advanced risk modeling things, but ultimately, no failure’s good. The only thing that’s acceptable is zero failures. Certainly, if you look at what’s been going on in the last few years, we’ve got a ways to go.
I think the other thing that’s also true, and I wouldn’t even begin to be able to say what I think we’ve got to do as an industry about this. I do think that the fact that we tend to have a flurry of issues in newly-constructed pipe, more so than pipe that’s been in the ground for 10 or 15 years, that is really problematic for us as an industry.
Christopher: That road to zero incidents is definitely a journey. Obviously, in that journey, there’s a lot of lessons learned. We’re going to have to do it together.
Knowledge sharing, those that are responsible for innovative technology, and those that are ultimately responsible for budgets and knowing where to put those resources to make sure that the environment, the public, and operations are all safe. That journey is one that I think we’re all on. We all agree to be on it. I think all of us have the right intentions.
Russel: Yeah, I certainly agree with that. I certainly agree with that. I think too we’re seeing improvements on all those fronts. Christopher, if I may, one of the things I normally do or I often do is I try to come away with my takeaway comments. I’m going to try to see what my takeaway comments are, then I’m going to ask you to give me a letter grade, okay? Does that sound fair?
Christopher: Go ahead.
Russel: I think a couple of comments would be, number one, you think about the system — the tools I run, my processes and procedures, the tools I use for evaluating the data, and then the systems and processes I use for round-tripping data: the data I collect versus the data I evaluate.
You also need to take that, just like you mentioned: somebody gets data in the field when they do a dig, and it needs to come back to the guys that are running the ILI tools. Likewise, when you’re running ILI tools and you find that data, you need to feed it back to the guys in the field, too. It goes both ways. That’s one key takeaway.
I think the second key takeaway is this is one of those subject domains that the more I know about it, the less I know about it.
Christopher: [laughs] I think you did a good job with summarizing it. I’ll just end with this. Ask for help. Ask for help.
Russel: Yeah, buddy. Find somebody you can ask for help. That’s awesome. That’s a great place to leave this, Chris. Truly, one of the reasons I did the podcast is I learned the business by talking to other smart guys, and I felt like there would be an appetite or an interest in our business for listening to smart guys talk. We’ll try to find some smart guys and talk to them.
Christopher: You’ve been talking to a lot of the guys on my team, and they’ll tell you my motto. As a team lead, I work hard at hiring people who are more experienced and smarter than me. You’ve already gotten to talk to two of them. There’s a couple more downstream of me. You can grade me on my job well done at the end of the podcast. [laughs]
Russel: Yeah. I tell people, I don’t have any desire to be the smartest guy in the room. My desire is to be the dumbest guy in a room of extremely smart people.
Christopher: There you go. That’s how you learn the most.
Russel: Exactly. Hey, this has been awesome, man. I appreciate it.
Christopher: Hopefully, we get to do it sometime soon.
Russel: I hope you enjoyed this week’s episode of the Pipeliners Podcast and our conversation with Christopher. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win to enter yourself in the drawing.
Russel: If you have ideas, questions, or topics you’d be interested in, please let me know on the Contact Us page at pipelinepodcastnetwork.com or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords