This week’s Pipeliners Podcast episode features Gary White, former president and CEO of P.I. Confluence, discussing how to apply Pipeline SMS in pipeline operations. This is episode two in a three-part series on the subject of Pipeline SMS.
In this episode, you will learn about how to implement the Plan Do Check Act (PDCA) cycle, an additional step that operators should take in the PDCA cycle, the importance of reviewing leading and lagging indicators in Pipeline SMS, the need for a mechanism that facilitates the process of checking for understanding, and more timely topics.
Application of Pipeline SMS: Show Notes, Links, and Insider Terms
- Gary White is the retired former president and CEO of P.I. Confluence. Connect with Gary on LinkedIn.
- P.I. Confluence (PIC) provides a complete GMS (Governance Management System) that manages process, workflow, communications, and information exchange among stakeholders to help operators align with the fundamentals of Pipeline Safety Management Systems (PSMS).
- Pipeline SMS (Pipeline Safety Management Systems) or PSMS is an industry-wide focus to improve pipeline safety, driving toward zero incidents.
- The Plan Do Check Act Cycle (Deming Method) is embedded in Pipeline SMS as a continuous quality improvement model consisting of a logical sequence of four repetitive steps for continuous improvement and learning.
- PHMSA (Pipeline and Hazardous Materials Safety Administration) is responsible for providing pipeline safety oversight through regulatory rulemaking, NTSB recommendations, and other important functions to protect people and the environment through the safe transportation of energy and other hazardous materials.
- PHMSA published Advisory Bulletin ADB-2012-10 to inform owners and operators of gas and hazardous liquid pipelines that PHMSA has developed guidance on the elements and characteristics of a mature program evaluation process that uses meaningful metrics.
- PHMSA Docket No. 2012–0279 captures the pipeline safety topic of using meaningful metrics in conducting integrity management program evaluations. PHMSA’s integrity management regulations require operators to establish processes to evaluate the effectiveness of their integrity management programs. Program evaluation is one of the key required program elements as established in the integrity management rules. For hazardous liquid pipelines, 195.452(f)(7) and 195.452(k) require methods to measure program effectiveness.
- Integrity Management (Pipeline Integrity Management) is a systematic approach to operate and manage pipelines in a safe manner that complies with PHMSA regulations.
- Distribution Integrity Management is looking at the various threats that can cause an unintended release of gas from the pipeline.
- DIMP (Distribution Integrity Management Program) activities are focused on obtaining and evaluating information related to the distribution system that is critical for a risk-based, proactive integrity management program that involves programmatically remediating risks.
- ILI (Inline Inspection) is a method to assess the integrity and condition of a pipe by determining the existence of cracks, deformities, or other structural issues that could cause a leak.
- Pigging refers to using devices known as “pigs” to perform maintenance operations.
- HCA (High-Consequence Areas) are defined by PHMSA as a potential impact zone that contains 20 or more structures intended for human occupancy or an identified site. PHMSA identifies how pipeline operators must identify, prioritize, assess, evaluate, repair, and validate the integrity of gas transmission pipelines that could, in the event of a leak or failure, affect HCAs.
- BAP (Baseline Assessment Plan) is the plan that a pipeline operator must develop to assess the integrity of gas transmission lines included in their Integrity Management Program.
- In 2004, PHMSA issued the final rule, “Pipeline Safety: Pipeline Integrity Management in High Consequence Areas (Gas Transmission Pipelines).”
Application of Pipeline SMS: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 178, sponsored by P.I. Confluence, providing software and implementation expertise for pipeline program governance applied to operations, Pipeline Safety Management, and compliance, using process management software to connect program to implementation. Find out more about P.I. Confluence at piconfluence.com.
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time, and to show that appreciation we give away a customized YETI tumbler to one listener each episode. This week our winner is Peter Cunningham with Brunel. To learn how you can win this prize, stick around until the end of the episode.
This week Gary White returns to talk about the application of pipeline SMS. Gary, welcome back to the Pipeliners Podcast.
Gary White: Thank you.
Russel: We did our first conversation. We talked a little bit about an introduction, what is SMS, and your perspective on it. Today, I want to do a little bit about application, about “how do you actually apply this stuff?”
I think the first question to get us kicked off is in your perspective, the way that SMS is written, is it a performance standard or a prescriptive standard? I shouldn’t say standard. I should say recommended practice.
Gary: Based on the fact that it’s supposedly predicated on the Plan Do Check Act cycle, it would seem like it’s designed to be performance-based. But if you dig into it and look at what it says, it’s got 253 shall statements. Of those, 95 percent are “do something,” not necessarily check something or act on something.
Russel: Do you have an opinion about why that is — why it came together that way?
Gary: Honestly, one of my first clients many, many years ago, when I was still getting started, a pipeline operator with many years of experience, said to me, “Gary, pipeline operators plan and do all the time. But they’re not so good at checking and acting.”
This is 20 years ago, and I don’t think it’s changed. I think that since the recommended practice was written by operators, they didn’t understand the Plan Do Check Act methodology when writing it.
Russel: I have a little different take. This is not really based on direct experience, more based on having conversations with others. It’s more about in this first generation, if you will, of Pipeline SMS, a lot of it is just getting the operators to play from a more consistent playbook across all the operators.
One of the things I hear many people say is, “Well, the operators are already doing all this.” It’s just a matter of they call it something different, or they have it organized differently. I think there’s truth to that. I think the focus has been more on capturing what we’re already doing than what are the gaps in Plan Do Check Act.
Gary: At the end of the day with that statement, yeah, every operator for the last number of years has had operations. They’ve had emergency response, they’ve had training, and they’ve had risk. Yeah, they’re doing it. They’re a pipeline operating company. Of course they’re doing it.
The SMS purpose was not to say, “Do the things you’re already doing.” It was to say, “We created a safety management system,” which I’ll repeat from last time, is the systemic implementation of quality management methodology to the various elements.
Russel: That’s a great transition, Gary. That’s a great transition. You talk about the various elements. The elements would be the 10 program elements that are listed in the guideline, starting with leadership. Right?
Gary: Yes, mostly.
Russel: You talked about doing a review of the statements and how many shalls and all that kind of stuff. If you look at the total program, not just what’s in the SMS, you’re going to have more do statements. You’re going to have more doing than you have checking in terms of tasks.
Russel: But at some level I would assert you ought to be doing Plan Do Check Act all across the board on every single element. Do you agree with that? Is that a fundamental premise?
Gary: Yeah. Each element, not all 10 of them, but most of them should have within it somewhere the overall goal or objective of planning it, doing it, checking it, and acting upon the checks. Every element should have those four pieces in them. It doesn’t exist that way.
Russel: Yeah, I think that’s right. I think also, too, that when you get focused on doing, you miss one of the critical elements. We were talking about this before we got on the microphone, but ultimately what are you trying to do? What are we trying to do?
Gary: I think what we’re trying to do is be safe. Try to have a more quality work environment and quality product with less risk.
Russel: Proof performance is the way I would say that. We want to hit our business objectives, do that in a more effective way, more efficient way, and do it without causing any harm. Ultimately, that’s performance in the pipeline world.
When you’re talking about check and act, what are you checking?
Gary: When you check something, you have to go back to the word performance. Performance has two types of indicators. Leading indicators and lagging indicators. Leading indicators are what you did. Lagging indicators simply are what you end up with, what result you got.
When you do a check on something, you want to check first of all the leading indicators: was it done by the right person, at the right time, the right way, in the right place? Those kinds of things. If it was, then that gives you a strong, high level of confidence in the results. The results have more meaning. Then you check the results and you say, “Do they tell me something?” Or, “What do they tell me? What actions can I take based on these results in order to make my systems safer?”
If you checked leading indicators and you’re not doing the right thing, the right way, the right place, the right person, the right training, and so forth, then the act becomes fix that.
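Gary’s check sequence, verify the leading indicators first and only then weigh the lagging results, can be sketched in a few lines. This is an illustrative sketch only; the field names and decision rule are invented for the example, not taken from the recommended practice:

```python
# Illustrative sketch of the check logic: trust the lagging result only
# if the leading indicators (how the work was done) check out first.
# All field names and the decision rule are invented for this example.

def check_task(task: dict) -> str:
    leading_ok = all([
        task["done_by_qualified_person"],
        task["done_on_schedule"],
        task["followed_procedure"],
    ])
    if not leading_ok:
        # The act becomes: fix how the work is being done.
        return "act: fix how the work is being done"
    # Leading indicators pass, so the lagging result is meaningful.
    if task["result_within_target"]:
        return "continue: keep doing what you're doing"
    return "investigate: the work was done right but the result is off"

task = {
    "done_by_qualified_person": True,
    "done_on_schedule": True,
    "followed_procedure": False,   # a leading indicator failed
    "result_within_target": True,
}
print(check_task(task))  # -> act: fix how the work is being done
```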
Russel: [laughs] Yeah. That seems so blatantly obvious, right? In the midst of the struggle sometimes those things are not that blatantly obvious.
Gary: At the end of the day, if the leading indicators are not being properly fulfilled, the questionable validity of the results could drive you to take action in the wrong place, take the wrong action, or take no action. Or whatever the case may be.
If you do what you say you’re going to do and it’s working, great. If it’s not working, change what you do.
Russel: I guess you would say the other thing, too, is if I’m doing my check and there are no leading indicators, my act is to figure out the leading indicators and get them in place.
Gary: If you’re checking results, for example I’ve got a pipeline and I’ve got a high number of leaks on it. Why is that? Let me think. “Am I surveying it? Am I protecting it? What am I doing?” If I don’t know what I’m doing, I have no way of relatively gauging my control over the event.
Russel: Even something like a leak, sometimes a leak is not a leak. Sometimes a leak is a faulty instrument or a measurement imbalance or a poor proving or calibrating procedure.
Gary: That’s a good point. Because when you talk about acting, instead of Plan Do Check Act, what it really should be is plan, do, check, investigate, then act. When you do your checks that come up with some premise of something that requires attention, you need to go out and investigate that.
Was it a leak? Was it a bad tool? Was it this? Was it that? Was it misperception, miscommunication? What was it? Then, once you actually validate that through investigatory methods, then you can clearly define your action.
When we did risk in DIMP, one of the things we did was we drove investigation. The data said we had a certain type of threat at a certain place. We didn’t just say, “Okay, let’s go fix it.” We went out to that place and talked to everybody. “Is this true? Is this really what we’re seeing? Why is this happening?”
Once that information was gathered, then a more intelligent decision on the corrective action could be put to play. Plan Do Check Investigate Act.
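The extended cycle Gary proposes, Plan Do Check Investigate Act, can be written out as a simple ordered loop. This is a minimal sketch; the handler structure is invented purely for illustration:

```python
# Minimal sketch of the cycle described in the conversation: Plan, Do,
# Check, Investigate, Act, with Investigate inserted between Check and
# Act. The handler mechanism here is invented for the example.

STEPS = ["plan", "do", "check", "investigate", "act"]

def run_cycle(handlers: dict) -> None:
    """Run one pass of the cycle; each handler receives the running state."""
    state = {}
    for step in STEPS:
        handlers[step](state)

log = []
run_cycle({step: (lambda s, name=step: log.append(name)) for step in STEPS})
print(log)  # -> ['plan', 'do', 'check', 'investigate', 'act']
```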
Russel: Yeah, investigate, verify, get to root cause, however you want to call that. But, certainly, that’s an important part of it. That makes sense to me when you talk about the functional elements, things like integrity management, control room management. I can see that cycle running out.
When you start talking about things that are soft elements, things like leadership, how do you do “Plan Do Check Act” on leadership?
Gary: Well, what do you want leadership to do? You’ve got to start with understanding what’s the objective of leadership. The first thing of leadership is to culturally suggest in a very firm manner, “This is what we’re going to do, boys and girls.” This is not just, “Would you please do this work?”
Russel: Yeah, clearly communicate the objective.
Gary: Yes, but buy into it. Make it a fact. Then, for example, when you do checking on things, if you have 100 metrics, and you roll up to a certain level of manager, and he looks at 25 of those, and you roll up to the next level, he looks at 10, and you roll up to senior leaders, they look at two, you have to gauge who you’re talking to.
Leadership, at the end of the day, they have to be aware, because if the mandate is, “We’re going to improve this,” and they’re aware that we are improving or not improving, they’re the ones that drive this change.
How we measure them is, “Are they in the loop?” How we check them is “Are they reviewing what we’ve given them? Are they giving us feedback on this, and giving us support and direction to make these changes necessary, or to make these improvements?” And those types of things.
It’s the same Plan Do Check Act, and if they’re not doing them — now, how you act against an executive; somebody at the top line’s going to have to be the one that says, “Bob, you’ve got to do this.”
Russel: Right. I think there’s certainly an aspect of that, being very clear about, “Here’s the objective, here’s the expectation, and here’s the systems to follow up and make sure it happens.” All that’s part of leadership.
I think the other part about leadership is listening. Am I actually getting feedback from the organization that tells me where I’m at?
Gary: In my opinion, that feedback filter gets tighter and tighter the higher you go up the ladder. I have seen the cases where I worked with seven levels inside the same organization, and I was amazed at what the reality was on the ground and what the senior leaders were actually hearing.
I’m not going to talk to why that is or how that all works, but at the end of the day, it seems to me pretty obvious that people don’t want to tell the people above them all the details, and that just gets worse and worse and worse.
Russel: A lot of times, the people upstairs don’t want the bad news, right?
Gary: Well, the question it begs is, the people upstairs, how much do they really need to know? What they need to know is, “Is it working? Are we making progress?” The down-the-line people need responsibility for communicating, discussing, sharing, organizational skill sets, and working out solutions. That’s why they’re in the job they’re in.
Russel: The knowledge is there, right? The knowledge is where the work’s being done.
Gary: Yeah. In the old days, the knowledge was all the way up and down, but these days, maybe not so much.
Russel: Yeah, I think that’s probably true. I think that’s probably true. As you’re talking about this, one of the things that’s certainly key is understanding what performance is and understanding what the indicators, both leading and lagging, are for that performance.
Gary: Now, think back to the Advisory Bulletin, ADB-2012-10 [PHMSA Docket No. 2012–0279]. It was about meaningful performance metrics. Meaningful, it’s a hot topic. Meaningful, what were they?
There’s pages and pages of meaningful performance metrics. Honestly, a lot of them were all lagging, lagging, lagging, lagging things. They weren’t really the whole concept of leading. You’re going to be what you do. It’s like, “You are what you eat,” the same kind of a thing. Whatever you’re doing is going to be ultimately what outcome you’re going to see. If you’re not exercising and eating right, there’s a good chance you’re going to gain weight.
That’s just the way it works. Meaningful performance metrics is, “How much did I weigh yesterday? How much did I weigh last month? How much do I weigh now? How many pounds am I gaining per week?” Those are all lagging indicators. They’re not telling me what I’m doing.
Russel: What did I eat today? How much did I eat today? How much exercise did I get today? What kind of exercise did I get today?
Gary: And how much will I do tomorrow?
Russel: Yeah, good point, good point. I guess the next thing I want to talk about is, as you’re starting this, and you’re really trying to lean into it, and really get this Plan Do Check thing going, certainly, we’ve identified the performance indicators are key. How do you start? How do you get an initial measurement?
Gary: I think that, without going into detail, at a high level, you talk about measuring the maturity of your SMS. A lot of people have these spreadsheets and these things that show how you measure maturity. “Yeah, I was doing 200 of the 253 dos. Now I’m doing 210, and now I’m doing 220.” It all comes back to the objective, the overall “Why are we doing this? What are we here for?”
The first thing you have to do, you have to understand your take on what the SMS is and what’s it for. If you buy into my premise that it’s the systemic implementation of the Plan Do Check Act methodology, then you have to ask yourself, “Are we doing any sort of planning, doing, checking, and acting at any level,” and then you subdivide that by, “in any element?”
You have to do what I consider a cultural quality management assessment. You have to ask the people. I’m not talking about asking the leadership. I’m not talking about asking the senior VPs. I’m talking about asking the guys on the ground, the guys who do the work.
“Is anybody checking this stuff? Is anybody telling you how to get better at it? Is anybody communicating it?” It goes back to a lot of the organizational bits we talked about before. You start off with a quality management assessment, and crafting that. Now, this is where stakeholder engagement comes into play. That’s one of your 10 elements, but that’s the one you use to engage your stakeholders — both contractors and employees — to learn “Who are we? What are we eating? What exercise are we doing? All this stuff, what are we doing?”
Once you figure that out, then you can sit back and say, “Okay, now, where are the gaps?” For example, element number three: we do all kinds of planning, and we do the work, but we don’t know what we did, we don’t write it down, and we don’t check.
I tell clients, “If you don’t plan on checking anything, don’t write it down.” If you’re not going to ever look at it, you’re not going to check it, don’t waste your time documenting it. The whole premise of documentation and recordkeeping is so key, but it’s only key, because it’s the driver or the enabler of checking. You have to go in and check.
Russel: It also goes to you need to get the right thing written down. You don’t need to write down just anything.
Gary: Yeah, this goes back. I tell a story about the public awareness metric. It was to go wear your company’s blue pants, your red hat, your orange shirt, knock on doors, and hand them a very important document that talks about safety.
Five years from now, I want to know what that document said and who got it. I don’t care what color pants you wore. However, the procedure we used at the time said wear these pants, this shirt, this hat. There’s many things that we have procedural or process-based that we want to do, but not all of them have this long-term value.
Russel: They don’t all relate to the Plan Do Check Act cycle. They’re part of what’s necessary to effectively do the task, but they’re not part of what is necessary for the Plan Do Check, I guess, is what you’re saying.
Gary: They’re not at an informational level that would give you something to look at.
Russel: Again, if you take the public awareness thing where you’re going out and distributing brochures door-to-door, what you really want to know is, five years from now, do the people living in that house know what you gave them? Do they even remember that you came by?
Gary: That’s 100 percent true. One of the big, big, big [laughs] issues I had with that whole thing was, “Did you mail out your stuff?” “Yeah, we mail direct mail.” I had clients who would mail out to 1.5 million customers along the pipeline.
I asked, “Did they get it” … “I don’t know.” “Did they read it?” … “I don’t know.” “Did they understand it?” … “I surely don’t know that one either, but I know that I mailed 1.5 million brochures along the pipeline from Texas to Florida,” for example.
Russel: There’s some interesting work being done on that and revising the standards and looking at other mechanisms for checking and getting that data. The reality of a performance-based standard, particularly in this domain, is that many times you’re trying to measure performance that you do not directly control.
I don’t directly control if the postal carrier delivers the mail, if the people get the mail, if they read the mail and they retain it. I don’t directly control that.
Gary: This is why I built a tool called Pipeline Watch many years ago. The whole idea behind that was you put the messages online. You know who read them. You know how long they read them. You follow up with a questionnaire to prove comprehension, and then you measure it. You say, if I reached 1,000 people and 13 got on there and read it this year, that’s a performance metric. It’s not the best, but at least it’s real.
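The kind of metric Gary describes, measured reach and comprehension rather than brochures mailed, might be computed along these lines. The numbers and function names are illustrative, not from Pipeline Watch itself:

```python
# Illustrative sketch: turn "who actually read it and understood it"
# into measurable rates, instead of counting brochures mailed.
# The figures and names here are invented for the example.

def outreach_metrics(audience: int, opened: int, passed_quiz: int) -> dict:
    return {
        "reach_rate": opened / audience,  # who actually read the message
        "comprehension_rate": (passed_quiz / opened) if opened else 0.0,
    }

m = outreach_metrics(audience=1000, opened=13, passed_quiz=9)
print(f"reach: {m['reach_rate']:.1%}, comprehension: {m['comprehension_rate']:.1%}")
# -> reach: 1.3%, comprehension: 69.2%
```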
Russel: Yeah, it’s better than just “I did the task.” It at least gives you some idea of the efficacy of the task, how effective the task was.
Gary: Exactly. The reality, Russel, is that type of approach is what’s got to be done here. Otherwise you’re just talking the talk.
Russel: I think you’re absolutely right, Gary. I’m always a little careful to try and put myself in the shoes of the operator. I realize how big a task this is.
This has probably been a couple years ago now, but I sat down with an alarm management guy — was having a coffee with him — for one of the big pipeline operators. I asked him, “Are you aware of Pipeline SMS?” He’s like, “No, I’ve never heard of it.” Alarm management is one of the standards that’s listed in the Pipeline SMS. That just tells me where we are in the maturity of this.
Gary: Yeah, I wouldn’t say maturity. I would just say in the infancy of it. It’s not what you’d call maturity because at the end of the day, the Plan Do Check Act, you’re always jumping back into the manufacturing realm with that statement. It applies everywhere, Russel. It applies to everything you do along the whole lifecycle of a pipeline.
Russel: It’s not just that, man. We do that in our personal lives, too. If I look at how I’ve managed my money, I’m doing a Plan Do Check Act in the way I manage my money. Most of us probably do. We’re putting together a budget. We’re getting the money in. We’re spending the money. We’re looking back to see, did we get it in and spend it the way we thought we were going to? We might be looking at our investments and then we’re doing some analysis and investigation and then we’re making changes.
Gary: If you do it right, you might not make the same stupid stock market mistake that I made three times in my life. I should have made it once and been done with it.
Russel: Yeah, that’s what somebody told me one time. When you get out into the real world, you keep getting the exam until you get the education.
Gary: Right. Back to the question though about how do you start this thing. From my perspective, you have to take stakeholder engagement. You identify your stakeholders within your various elements. You identify the objectives of those elements. You identify a series of questions that give very simple indications of planning, efficacy, doing, checking, acting. Not talking about SMS or not talking about Deming, just basic questions so that all the population can then feed back. You look at that and you say, “Okay, now what do we learn?” That’s your check.
Then you investigate. Again, it’s important to go out to those groups and sit down, have conversations to figure out what they meant by this. Just because they said something doesn’t mean it’s the prevailing attitude or it’s the proper measurement, if you will.
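Aggregating the kind of cultural quality-management assessment Gary describes, simple plan/do/check/act questions answered by the people doing the work, could look something like this sketch. The element names and responses are invented for the example:

```python
# Illustrative sketch: tally yes/no assessment answers per element and
# per PDCA phase to expose gaps (e.g., lots of planning and doing, no
# checking). Elements and responses are invented for this example.
from collections import defaultdict

responses = [
    # (element, phase, answered_yes)
    ("training", "plan", True), ("training", "do", True),
    ("training", "check", False), ("training", "act", False),
    ("emergency_response", "plan", True), ("emergency_response", "check", True),
]

scores = defaultdict(lambda: defaultdict(list))
for element, phase, yes in responses:
    scores[element][phase].append(yes)

for element, phases in scores.items():
    for phase, answers in phases.items():
        rate = sum(answers) / len(answers)
        print(f"{element:20s} {phase:12s} {rate:.0%}")
```

A gap report like this is the “check” Gary mentions; the follow-up conversations with each group are the “investigate” step.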
Russel: Not only that. One of the things we commonly do in our business is we use the same words to mean different things.
Gary: I don’t, but PHMSA does that a lot. [laughs]
Russel: Even within an organization. I do a lot of work in alarm management, and alarm is used by everybody and everybody has a different idea of what that means. That’s just one example. There’s a lot of others. It’s the nature of our business.
Gary: As you get the responses of these inquiries back and slice them and dice them and do analysis on them, you get a flavor for how deep do we do it or how much we don’t do it and how we do it inside each element or each business unit inside of an element.
Then you tackle it by prioritizing on an element-by-element basis which one’s first. You go out, and it takes the next steps to somehow get a Plan Do Check Act process management environment of some sort working inside that element so that they know what they’ve got to do.
They’re doing it. It’s being documented. It’s being looked at. It’s being analyzed. Investigation is taking place accordingly and actions are being taken. Then, when you take an action, if you don’t create a performance metric at the time of the action to circle back and see “Did the action work?” then you don’t know what good it was.
Russel: That’s right.
Gary: It’s a circle. It’s a vicious, vicious circle. But, it’s got to start somewhere.
Russel: It’s also true that once I get in at the element level and I’m clear there, then I’m going to take another bite out of the apple and I’m going to go a little deeper. Then I’m going to take another bite and I’m going to go a little deeper.
I’m going to drive this all the way down to those specific things I’m doing in the ditch or those specific things I’m doing as I’m starting up or shutting down a pipeline.
Gary: Then you’re also going to do them in an integrated fashion between elements. When this information from emergency response gets fed back to risk to drive operations, that’s the real nirvana.
Russel: When does information from pipeline operations related to pressure cycles get fed back to integrity management so that they can look at their life calculations, that kind of thing?
Gary: When does risk evaluation start using the fact that we’re not training very well? Or the fact that we’re not planning very well? Or the fact that we’re not executing operational functions very well or consistently across the board?
Russel: There’s always an area to improve. This actually tees up another question for me, Gary. One of the things we do a lot of in pipelining, particularly around risk management, is we calculate a probability of failure.
Often, those probability of failure calculations are related to some kind of data that we gather from an ILI tool or a corrosion history or cathodic protection or something like that. It’s very data-centric. When you talk about SMS, you talk about these soft elements, wouldn’t there be a probability of failure related to those soft elements as well?
Gary: Yeah, that’s what I was saying before. I think the soft elements exacerbate the probability of failure of the asset. When the protocols first came out for gas back in 2004, the way our company had them, they were all marked with Ps and Is, for process and implementation.
There were things you did and things you had to have a process for and had to execute, so they measured it. Inside there, we had on-the-pipe elements and we had off-the-pipe elements. On-the-pipe ones were the HCA threat risk, BAP, so on and so forth.
The original protocols for transmission had quality assurance, communications, change management, recordkeeping, and performance. They were all elements in the A through N cycle. Those were the soft ones. Those were around in 2004. This stuff in SMS is nothing new.
I remember when they first came out, the operators didn’t have a strong sense of what the soft elements were. The regulators surely didn’t have a sense of what the soft elements were for. They kind of blew through them, right?
It was never really a big part of the audit push at the time, but the concepts should be there. They’ve always been there. The soft elements — our SMS, our quality management, quality assurance, communications, recordkeeping, scheduling, documentation — these kinds of things, and not doing those well, 100 percent affect your probability.
You got a piece of pipe out there. It’s a piece of pipe, and every year, it’s one year older. It’s got thickness, diameter, pressure. It’s got environment. It’s got these physical characteristics, right? What are you doing to it? You’ve got a piece of pipe that is 80 years old and is properly maintained that is as good as gold. Or, you’ve got piece of pipe that is six months old that is poorly maintained that is not worth a darn. That soft side or that quality side of that human side is 100 percent affecting those probabilities of failure.
Russel: How do you quantify that, though? How do you get to a performance standard around those things? You talked about leadership. How do you get to the performance standard around things like communications effectiveness?
Gary: Let’s just say something happens in my domain, and I’m supposed to tell somebody. “Well, did that happen?” If that happened 100 times, and it got told to somebody three times, you put that in numbers and all this stuff. If you have a mechanism in place to document and capture whether communication was made or not made, you can measure it.
The task is to tell them, “Okay, first question is, did this happen?” …”Yes, it did.” “Did you tell so and so?” … “Yes I did. I attached the email. Here it is.” I look back in time to last year, and that happened 100 times, and the guy attached the email twice, so, no, we’re not doing it. You can measure all these things, but you have to have a mechanism in place that facilitates that process.
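The measurement Gary walks through, 100 events that should each trigger a communication with only a couple actually documented, reduces to a simple compliance ratio. The event data here is invented for illustration:

```python
# Illustrative sketch: for each event that should trigger a
# communication, record whether the notification was documented
# (e.g., an email attached), then compute the compliance rate.
# The event data below is invented for the example.

events = [{"id": i, "notified": i in (17, 63)} for i in range(100)]

happened = len(events)
communicated = sum(1 for e in events if e["notified"])
compliance = communicated / happened
print(f"{communicated} of {happened} events communicated ({compliance:.0%})")
# -> 2 of 100 events communicated (2%)
```

The point is not the arithmetic but that the mechanism captures the data as a side effect of doing the work, so the number is there when you check.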
Russel: That’s really important what you just said. You’ve got to have a mechanism in place that facilitates that because if it starts to take the administrative workload and ramp it up, it’s never going to work. Because people won’t do it. They don’t have the time. They don’t have the interest.
Gary: The process management platform has to be one where it doesn’t make you do more work. It just manages your work. It gives you a place to attach stuff.
Russel: It should actually help you to get the work done, and should keep the records just as a matter of doing the work.
Gary: I have always said if the process tells you to go out and build a spreadsheet, it’s never going to happen. If the process takes a spreadsheet you’ve already built and lets you stick it here, it takes two seconds. Move on.
Russel: That’s true. One other thing I want to talk a little bit with you — we’ve talked about this a lot from a safety perspective — but I think it’s important for the industry to understand that this approach is not just a safety issue. This approach can be used for performance.
I can take this approach and apply it to electric consumption around the pumps on a liquid pipeline. I can do the same thing around fuel consumption for gas pipeline compression. I can apply this to things that are not safety issues, but are purely about operational effectiveness and financial performance.
Gary: Efficiency, right?
100 percent. The story I heard was someone ran a pig, they found corrosion, they dug it up, and they put a sleeve on it. Then, they ran a pig five years later, they found corrosion, they dug it up, and, “Oh, look, there’s a sleeve on it.” Five years later, they did it again.
How much money is wasted doing things? There’s never enough time to do it right, but there’s enough time to do it over again. How much money is being wasted on these types of inefficient things that are unnecessary?
Russel: My answer to that would be less than yesterday and more than tomorrow.
Gary: Can you measure that? [laughter]
Russel: No, that’s notional. That was not really measurable. That was more notional. I want to wrap this conversation up. Gary, we’re going to have you come back and talk again about the systems needed. What are the aspects needed in the systems to implement SMS? That’s not well understood in the industry. It’d be good to talk about. We’ll wrap this one up right here and we’ll bring you back.
Gary: All right, thank you.
Russel: Hope you’ve enjoyed this week’s episode of the Pipeliners Podcast in our conversation with Gary. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win to enter yourself in the drawing.
If you’d like to support the podcast, please leave us a review on Apple Podcasts, Google Play, or on your smart device podcast app. You can find instructions at pipelinepodcastnetwork.com.
Russel: If you have ideas, questions, or topics you’d be interested in, please let me know on the Contact Us page at pipelinepodcastnetwork.com or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords