This week’s Pipeliners Podcast episode features Jim Francis discussing risk analysis and its recent emergence in the industry, the difference between functional and probabilistic risk analysis, and how it can help find the root cause of issues.
In this episode, you will learn about the importance of a good compliance program, ways to see whether new changes made a difference, and what EEI has in store for the future.
Functional Risk Analysis Show Notes, Links, and Insider Terms
- Jim Francis is the Vice President of SMS Consulting at EN Engineering, or ENTRUST Solutions Group. Connect with Jim on LinkedIn.
- ENTRUST Solutions Group provides comprehensive and dependable engineering, consulting, and automation services to pipeline companies, gas and electric utilities, and industrial customers.
- Pipeline SMS (Pipeline Safety Management Systems) or PSMS is an industry-wide focus to improve pipeline safety, driving toward zero incidents.
- The functional risk assessment is performed using the Failure Mode and Effects Analysis (FMEA) method. The overall risk resulting from the assessment identifies all potential failures requiring mitigating actions or controls.
- Total quality management (TQM) is a management approach that focuses on continuous improvement. Organizations engage all members in improving processes and products to increase customer or user satisfaction.
- AGA (American Gas Association) represents companies delivering natural gas safely, reliably, and in an environmentally responsible way to help improve the quality of life for their customers every day. AGA’s mission is to provide clear value to its membership and serve as the indispensable, leading voice and facilitator on its behalf in promoting the safe, reliable, and efficient delivery of natural gas to homes and businesses across the nation.
- PHMSA (Pipeline and Hazardous Materials Safety Administration) is the federal agency within USDOT responsible for providing pipeline safety oversight through regulatory rulemaking, NTSB recommendations, and other important functions to protect people and the environment through the safe transportation of energy and other hazardous materials.
- API (American Petroleum Institute) represents all segments of America’s natural gas and oil industry. API has developed more than 700 standards to enhance operational and environmental safety, efficiency, and sustainability.
- API 1173 established the framework for operators to implement Pipeline Safety Management Systems. The PSMS standard includes 10 core elements. The API Energy Excellence Program followed this model to establish its 13 core elements.
- The PDCA (Plan-Do-Check-Act Cycle) is embedded in Pipeline SMS (API RP 1173) as a continuous quality improvement model consisting of a logical sequence of four repetitive steps for continuous improvement and learning.
- PRA (Probabilistic Risk Analysis) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity or the effects of stressors on the environment.
- The Edison Electric Institute (EEI) is an association that represents all U.S. investor-owned electric companies. Its members provide electricity for 220 million Americans, operate in 50 states and the District of Columbia, and directly employ more than one million workers.
- QMS (Quality Management System) is a continuous process that involves implementation, maintenance and improvement.
Functional Risk Analysis Full Episode Transcript
Russel Treat: Welcome to the “Pipeliners Podcast,” Episode 290, sponsored by the American Petroleum Institute, driving safety, environmental protection, and sustainability across the natural gas and oil industry through world class standards and safety programs.
Since its formation as a standard setting organization in 1919, API has developed more than 800 standards to enhance industry operations worldwide. Find out more about API at API.org.
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate that you’re taking the time. To show that appreciation, we give away a customized YETI tumbler to one listener each week.
This week, our winner is Jared Williams with Texas Eastern Transmission. Congratulations, Jared. Your YETI is on its way. To learn how you can win this signature prize, stick around till the end of the episode.
This week, we’re going to speak to Jim Francis, who returns to talk to us about functional risk analysis. Jim, welcome back to the Pipeliners Podcast.
Jim Francis: Hey, Russel. Great to see you again. Good to be back.
Russel: For those that haven’t heard you because you haven’t been on in a while, maybe you could do a little introduction and tell us a little bit about yourself and your background, what you do.
Jim: Sure. Jim Francis. I am the Vice President of SMS Consulting at EN Engineering, or ENTRUST Solutions Group now. I’ve been with them for about a year and a half now, consulting with utilities all over the country on SMS and all things related to it.
Prior to that, I spent 30 years as an operator working in both the gas and electric sides of the business, doing all things, compliance, engineering, operations, HR, whatever else. You name it, I got to touch it, which I think fits well with what an SMS actually does and the purpose of my life now.
Russel: I think that’s right. Having that really broad base of experience is very important to really understanding how to do safety management, for sure.
Jim: It’s funny. The one thing that’s always been fascinating to me about it is there’s this commonality or connection, not only between the operational things we do, but the human side of things we do. Even getting into diversity and equity and inclusion and those sorts of things has a role to play within SMS, so anyway.
Russel: If you do it correctly, it’s a holistic thing.
Jim: That’s right.
Russel: It’s a holistic thing because there’s a lot that goes into having good safety performance beyond just knowing how to do the work.
Jim: Yeah, absolutely. The people’s side of it is right up there.
Russel: Yeah, no doubt. I’ve done a number of podcasts on the whole human factor side, and/or the organizational, and just how you write position descriptions and how you do incentive programs and how you get teams to work together, all of that impacts safety.
Jim: Yeah, absolutely. Right on.
Russel: I asked you to come on and talk about functional risk analysis because I saw you at the API pipeline conference. I sat in your presentation, and I went, “I want to know more about that. I’m going to get him on the podcast and quiz him.”
Jim: Always happy to talk about it. The functional risk assessment is interesting. The reason and the timing around that was because there were some new regulations coming about.
As I started thinking about how do operators start to function there and what do they have to do and how do they start to understand their operations and the impact from something like that. That functional risk assessment and the approach behind it was the trigger there.
Russel: Maybe you could give us a definition of functional risk analysis. Before you dive in, let me build a little context for the question. Being a guy who has worked in doing the work but has never worked in risk analysis, I always struggle a little bit about what that is.
I have an educational background in my master’s around total quality management, and there’s some corollaries there, but I always have a hard time making the leap. For those of us that know that we don’t know, how would you define functional risk analysis?
Jim: It’s funny. When you talk to anybody, an operator, whoever, we all know what we do day to day, but there may be two people that do exactly the same thing. When I go talk to one person, they’ll tell me, “Here’s the things that I struggle with. Here’s the things that keep me up at night.”
Then I talk to the second guy, and he has a completely different set of things. We experience this when we go out and talk to different operating centers. You assume when you walk in that, since they all perform the same functions, the same risks and the things that bother them or keep them up at night will be a part of the conversation.
It’s fascinating how often they’re specific to their own experiences. What’s sometimes lacking is a view of the processes that they own, and doing more of a deep dive into a focused piece of that.
Let’s just say they’re the leak survey tech: dissecting that process down into all its finer components and evaluating risk from that perspective, because you often go out and talk to somebody, and they say, “Hey, look, my biggest issue is maps and records,” or their biggest issue is something else. That’s part of their process.
They never dissect it from the beginning to the end to understand what are all the things that are involved in what they do and how does that start to surface the risks so that you can start from the beginning in the process and start to whittle away at those things and make improvements so that ultimately their process flows a little more smoothly and has less risk to it.
Russel: I’m going to ask a clarifying question about this. Jim, I got to tell you, this is me trying to learn because I know that I don’t know risk analysis, and it’s something I want to learn. I’m quite familiar with measurement and measurement uncertainty.
The idea of measurement uncertainty is you look at all the things related to a metering station, and you put that together as one number that gives you an uncertainty. That goes to how did you design it? How did you install it? What equipment did you purchase? How did you do it? What are your procedures and processes for calibration and all that?
Level of training in the people, and then things you’re doing on the backside to check and verify your measurement. All of that process gets numbers put on it, and it gets rolled up into an uncertainty factor. What you’re saying is that functional risk analysis is the same thing. It’s just looking at, “Well, what are all the things you’re doing, and what could go wrong?”
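A minimal sketch of the rollup Russel describes, assuming independent component uncertainties combined by root-sum-square; the component names and numbers are purely illustrative:

```python
import math

# Illustrative component uncertainties for a metering station, in percent.
# Names and values are hypothetical; real numbers come from the station's design,
# equipment specs, calibration procedures, and back-end verification data.
components = {
    "meter_calibration": 0.25,
    "pressure_transmitter": 0.10,
    "temperature_transmitter": 0.05,
    "flow_computer_calculation": 0.05,
    "installation_effects": 0.15,
}

# Root-sum-square rollup: combine independent component uncertainties into one number.
overall = math.sqrt(sum(u ** 2 for u in components.values()))
print(f"Overall measurement uncertainty: +/-{overall:.2f}%")
```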
Jim: Ultimately, the lens of the functional risk assessment is more from a safety perspective as opposed to the accuracy piece, but your concept is exactly the same. What you’re getting at is all the controls. What are all the things that keep that number or make that number achieve a level of accuracy that you want or expect?
Similarly, from a safety perspective, what are the outcomes that you expect from it? Part of a functional risk assessment in the very beginning of it is when you break down your process. Secondly, you look at what the expected outcomes are. Ultimately, you want to have a safe process, and you want that, but there’s other things that you’re trying to achieve from your process.
Am I doing a leak survey appropriately? What are the outcomes that you’re trying to achieve from a leak survey? If I’m doing construction, what am I trying to achieve? I’m trying to install a piece of pipe. I’m trying to get gas to the customer, those kinds of things. What are the outcomes?
Same way with what you just described, and then you start to work backwards into looking at what are those controls and the things that you need to have in place to make sure that those outcomes happen as intended.
Russel: That is the first piece of that. Like what is the work I’m doing and the outcome I’m trying to get to. That’s a very TQM kind of thing, a Total Quality Management kind of thing.
Jim: Yeah.
Russel: Understanding what are all the elements of the process and what are the outcomes, and then what are the things you measure to ensure you’re getting to the outcome. Then, when you start, you take that from a quality assurance to a risk analysis. It’s “OK, well, if I don’t get the quality I’m looking for, what are the potential negative outcomes?”
Jim: That’s right. Generally, there’s a seven-step process just to get to the point of identifying the risk. We haven’t even gotten into the risk assessment piece where you’re doing the controls analysis. First, you identify your processes.
You work through what those outcomes or expected outcomes and outputs are in the process. You map your process out so you understand where those identifiable touch points are. You start to then look at your failure modes associated with each step in the process.
Ultimately, from that, people will start to articulate what those risks are, and then you’re populating your risk register, you’re doing the scoring, you’re evaluating those. This is where you get that connection to the probabilistic risk model. What are the likelihood and the consequence of these events occurring?
Ultimately, you need to prioritize where you’re going to spend your time and effort and the things that are going to be most impactful to those expected outcomes. Then, you normalize your scoring.
Ultimately, you say, “Hey, these are the things that we’re going to go work on. Now let’s go do risk assessments. Let’s go do a bow tie analysis. Let’s do other things to start looking at those controls and the effectiveness of those.”
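A minimal sketch of the identification steps Jim lists, assuming a simple FMEA-style likelihood-times-consequence score on 1 to 5 scales, normalized against the maximum; the process steps, failure modes, and scores are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    process_step: str   # a step from the mapped process
    failure_mode: str   # what could go wrong at that step
    likelihood: int     # 1 (rare) to 5 (frequent)
    consequence: int    # 1 (minor) to 5 (catastrophic)

    @property
    def score(self) -> int:
        return self.likelihood * self.consequence

# Hypothetical register entries for a leak survey process.
register = [
    RiskEntry("Plan survey route", "Maps and records out of date", 4, 3),
    RiskEntry("Walk the survey", "Segment skipped and not flagged", 2, 5),
    RiskEntry("Record readings", "Reading logged against the wrong asset", 3, 3),
]

# Normalize against the maximum possible score (5 x 5) and prioritize descending.
MAX_SCORE = 5 * 5
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score / MAX_SCORE:.2f}  {entry.process_step}: {entry.failure_mode}")
```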
Russel: This is the part that always blows my mind up a little bit because I might be able to get clear about that in one element of the work I do, one area of work, but when you start trying to address that holistically across the entirety of a gas utility or a liquid pipeline, that is a lot of information. It’s a lot of definition.
Jim: This is exactly why you have a central organization that manages your safety management system, because you think about the structure of the utility and all the different departments: the measurement department, the construction department, you’ve got service, you’ve got engineering, you’ve got integrity management. The list goes on and on and on.
They each have their own processes, but there’s a connection between them. Independently they can go work and look at their processes, but they have to start to understand those connections, which is why you start to raise that upper level, and you have somebody that can sit and see where those risks are.
Some risks and issues may reside within multiple departments. Now you’ve got a systemic issue, and you got to start to factor those together. Bring them together. Figure out as a corporation where you’re going to spend your time and effort, and that same process fills out.
The functional risk assessment may be the thing that they do at the department level. Looking at a very specific thing, a very specific process. Something that they do day in and day out, which nobody else may perform.
Whereas as a corporation, you’re starting to look at, “Hey, where do I understand where my risks are? Let’s bring those together in a common risk register in a common process and make sure that we’re understanding where those corporate risks are so you can put your resources to it.”
You can’t do it all. You got to do the things that are most important and things that are going to drive the most risk out of your business.
Russel: Yeah. Ultimately, we’re going to be called to do it all over time. This teases another question I wanted to ask. I’ve had people come on the podcast from the airline industry. Airline industry, nuclear power industry, they have mature safety management systems. They’ve been doing safety management for 30 plus years.
They have pretty well understood, at the industry level, risk matrices. Now, every facility, every organization is going to make modifications to that, but they have a well understood starting place. Where are we in that level of maturity as an industry?
Jim: Let’s see. God created whatever on day whatever.
It’s like we are at the beginning of time to some extent. It’s funny you say that. I remember years ago, I was at an AGA function, and one of the pipeline administrators from PHMSA was there. I said to him, “Hey. I’ve got an idea that I’d like to talk about, which is how do you start to take…”
Just like we talked about at a company level, every department has a risk register. Every department has their own risk. As a company, you have to bubble them up to a level where you start to understand that.
Now you take that same concept where maybe I’m within an individual state, and you go, “Look, I’ve got eight, 10, 15, 48 utilities within that state, and they all have the key risks and the things that they’re working on operationally.”
“Their risk registers, their processes, they should come together as an entity within those states and start having that same conversation because then they can see where there’s commonality across utilities. Then you could do the same thing at a national level.” It’s daunting, which is why it hasn’t happened.
Where we’re at is many utilities are still at that beginning phase of trying to figure out what are their risks and where do they spend their time, and do they have adequate process around it. It’s important to have that structure. This is where things like the functional risk assessment and having a very intentional structure around that is important.
A lot of what we do is we help utilities build those processes, making sure because it’s got to be something that’s repeatable. It’s got to be something that they can validate. It’s got to be something that can be held accountable to.
It meets the compliance requirements and the standard. All of that has to happen in order for it to function in an efficient way. To be defendable ultimately when you’re making decisions about what you’re going to do.
Russel: I want to make a couple of declarative statements. I want to get your opinion on what I’m saying. One of the declarative statements would be that compliance is a predicate for safety management. Meaning I’ve got to have a good compliance program before I can start moving to an effective safety management program.
Jim: Yeah. If your compliance processes are inefficient and inadequate to meet the compliance requirements of something that you’re being held accountable for today, I don’t know how you’re going to do the safety management system as well. That same skill set and discipline is important for driving SMS.
Russel: Good. That means I’m not hallucinating that idea. I do that sometimes.
The other declarative statement, and again, putting it out there for your comment, is that SMS is a first step and a fairly high level step, but it’s comprehensive.
Jim: Yeah. It is a game changer when you do it well, and you have the structure in place. I don’t know that I would necessarily say it’s the first step because I do think back to your prior comment about the compliance piece, that stuff, particularly in the gas industry, it’s lived for a long time. There’s a basis there.
There’s a basis there for making decisions about how to mitigate risk and that sort of thing. There’s processes that exist, but I don’t think the comprehensive piece has been prevalent in the way people start to function.
They live in their own silos and their own worlds and their own business processes rather than taking this more comprehensive and holistic approach, which is what the SMS has intended.
Russel: I should clarify that. That’s good feedback, Jim, because it forces me to clarify what I was trying to say. If compliance is a predicate, and the next step is safety management, then SMS, in other words, API 1173, is a good first step, but it’s fairly high. 1173, to my mind, is fairly high level.
It’s talking about, “These are the programs you should have.” It’s addressing it comprehensively, but it’s not driving that. There’s no standards or specifications or guidance about how to drive it into the details.
Jim: That’s exactly the challenge: how do you do this as an industry, and how do you start to drive the right change like you might see in nuclear or in the aviation industry, where maybe there’s a more common approach or platform to it? Every utility is given their own leeway as to how they want to go do this and where they start.
That’s where there’s more variability. I always tell our clients, “Start with risk management because when you read the standard, it’s all about reducing risk. Why would you not start with risk management?” A lot of companies, what they’ll say is, “Well, we have processes. We do integrity management. We do DIMP, whatever.”
Those are valuable and they have a purpose. What they lack is the identification of all those process issues. Which, back to the functional risk assessment, you have a structure there.
If you start to apply that and you ask your integrity folks, they’ll tell you where all their process gaps are. They’ll tell you the things that drive them crazy that show up, and the end results that they start to see but don’t have control over.
A lot of times they go, “I don’t have a landing place for somebody to help me solve this.” Which is what the SMS starts to be able to do for them.
Russel: Yeah. Because integrity management is complicated enough on the surface. It is challenging, when you’re doing the process and you’re staffed to do the process, to have work done on the process.
It’s the difference between doing the work and thinking about the work. It’s a different way of thinking about, it’s a different skill set.
Jim: Yeah. I agree. I also would say that the one good thing about integrity is that there’s an expectation that you’re going to evaluate the effectiveness of your program. Which is part of what the SMS tells you to do. You got a management review, you got to do effectiveness assessments, you got to audit.
All those things happen naturally where it didn’t happen before. I do think the integrity programs are a good model for how you can exercise that PDCA concept in an individual program.
Once again, if you go back to the functional risk assessment, if I broke my processes down and I started looking at the risks associated with them, and then I exercised my program through the course of the year, and I take the results of that effectiveness assessment and apply it back into that risk model or that risk assessment that I did previously, I can see whether or not my improvements made a difference.
If they have, great, then I can maybe move on to the next thing and if they haven’t, then I’ve got to try something different.
Russel: Would you say that the functional risk analysis is a predicate to doing the PDCA cycle?
Jim: Here’s what’s interesting. The reason why I did that presentation at API was because, in Austin, when we get a new regulation, people dive right into the operational controls. They go, “I got my O&M plan. I got to go update this procedure.” What they don’t do is step back and go, “Let’s look at the whole process.”
Russel: Oh, yeah. I absolutely agree with that reality.
Jim: Really, from a management perspective, I would want to know. I think back to the early 2000s when the TIMP regulations came out, and we had no idea what we were doing. Compared to that, where people are today is light years advanced.
Anytime a new regulation came out, if I was management, I’d want to have a real level of comfort that somebody understood all of that and was breaking it down in that way. That’s where the SMS comes in. If you start applying it, literally 8 of the 10 elements apply when a new regulation comes out.
I don’t care how simple or complicated it is. If you apply it in the right way, you will make sure not only that your operational controls are right, but that you have governance around it, your data systems are right, your training and quality programs are right, and you have adequate controls. You’re doing QA to make sure that it’s effectively implemented.
All those things have to happen, where today we go, “Let’s just focus on the operational controls and we’ll update our O&M plan.” Then we’ll execute, and you forget about all the other things because it’s really complicated.
Exercising the SMS in a disciplined way and breaking that down through some sort of functional risk assessment ahead of time is the key to making sure that you understand that comprehensive approach to implementing those.
Russel: Again, you’re making my mind blow up a little bit. I’m just trying to process what you said. That all makes sense to me. Whenever I have this kind of conversation with somebody like yourself, where I start going is to PERT diagramming in my head, primarily because I was steeped in that in my master’s program.
A PERT diagram is about understanding all of the pieces of the process and their interconnections, and how the outputs of one process and the quality of those outputs impact the inputs of another process.
It seems to me that functional risk analysis is similarly the same thing. It’s understanding what are all the specific things you’re doing and how they interrelate, and how they as a system get you to the outcome you’re trying to get to.
Jim: That’s where the failure mode assessment comes in. Yeah, exactly. It’s really interesting that you said that. I’ve been in continuous improvement events where we’re looking to improve the process.
We’ve already figured out here’s the issue, here’s what we’re trying to solve. Inevitably what we find out, we do like an informal, “Hey, tell me about this experience. Tell me about what you guys got out of this.” You’ll have somebody that will say to you, “I had no idea that what I did impacted this many people or this much of the process downstream.”
Because they don’t understand those connection points. That’s where the risks start to show up. Those are the failures or the gaps that have left a hole in the cheese.
Russel: It’s in the handoffs from process to process.
Jim: Yeah, exactly. That’s why when you break it down in an individual process and do it from a functional perspective, then you get to see those connection points all the way through and then where those gaps or those holes in your controls are.
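A minimal sketch of the connection-point mapping Russel and Jim describe, assuming each process step simply lists its downstream consumers so every handoff can be enumerated and reviewed; the step names are hypothetical:

```python
# Hypothetical process map: each step lists the downstream steps that consume its output.
process_map = {
    "Field data collection": ["Data entry"],
    "Data entry": ["Records update", "Integrity assessment"],
    "Records update": ["Leak survey planning"],
    "Integrity assessment": ["Dig program"],
}

# Every edge is a handoff: the output quality of one step becomes the input quality
# of the next, so each edge is a place to look for gaps in controls.
handoffs = [(src, dst) for src, consumers in process_map.items() for dst in consumers]

for src, dst in handoffs:
    print(f"Review handoff for gaps and controls: {src} -> {dst}")
```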
Russel: A lot of people don’t realize this, but it’s possible if I’m working on a particular part of a process, if you take an integrity management program, and I’m the person who is in the ditch during the dig doing the hand gathering of data, it is possible that if I optimize my process, I can de-optimize others.
That’s a pretty big leap for people, the boots on the ground, doing the work. That can be a pretty big mental leap, particularly if you don’t have any training or orientation that teaches them, “Here’s the bigger picture, and here’s how you fit in.”
Jim: You don’t want to solve that problem while you’re in the ditch, right?
Russel: No. That’s not my point. It’s…
Jim: Yeah. No, I know. Sometimes those things come up. People make decisions in the moment. Like, “Why do I do this? Why do I collect this piece of data? Why am I following this part of the process? It just seems a little odd to me.” There’s a reason why that structure is built into that, and it becomes the basis for continuous improvement.
You have those conversations after the fact. It’s part of how you do post project reviews and get the lessons learned back into your process. If you build that functional risk assessment out, you can continue to use that PDCA cycle and take the input from an operator or whoever, it’s executed pieces of that, and then you could evaluate whether or not it makes sense.
Those then become teaching moments maybe for them, and maybe you have to adjust the way that you’re educating those folks before they go out there and do the work.
Russel: Again, the people part of this is part of the functional risk analysis. What are the competencies they need to have, and what’s the context they need to have before they go work? It’s interesting. If you find yourself coming into an organization that’s had a lot of turnover, it’s very easy to look at their processes and go, “This is crazy.”
What I always try to tell people whenever that conversation is occurring, I say, “Look, slow down for a minute because the people before you were smart, and they were trying to do as good a job as you’re trying to do, so there’s a reason they’re doing it the way they’re doing it.”
“You really need to try and find the reason why they’re doing it the way they’re doing it now before you try to change it. What’s the exception to that?”
Jim: Now you’re getting into a whole nother podcast around management of change.
Russel: What I’m trying to do is just talk about what is necessary in the humans doing the work, because if you can’t get that historical context, if it’s disappeared, if it’s not written down someplace or whatever, well, you have to recreate it, and that always creates risk because you don’t know what you don’t know.
Jim: That’s the reason why you’re seeing this, I say, more of a shift. The pipeline industry has a lot of regulations, a lot of controls as a result of that, so a lot of stuff that’s documented.
I think back to the earlier parts of my career, and frankly, there were arguments that we shouldn’t have all these things documented in a process or a procedure because it creates a gotcha moment for the regulator, or “I can’t trust that my people are going to go execute these things.” But when you don’t document, you get those organizational changes, you get the new people.
Just think about how much turnover we’ve had in our workforce in the last five years. Without that adequate detail, they lack what they need to go execute.
Russel: Yeah. I would assert strongly that one of the things we need as an industry is more openness and sharing. This comment may upset some people, so I’m going to preface my remarks by saying I apologize in advance.
We are in a litigious society, and when we have a negative outcome, there is a lot that happens in the legal domain that can be punitive. I would assert that we can’t run our businesses by trying to manage that in advance. We have to manage our business in a way that we don’t have to have that outcome.
Jim: My experience is, one, it’s dependent on the state you’re in. That surfaces the relative impact or effect of the legal environment. From an SMS perspective, though, our job is to make sure we’re continuing to push for the learning side of that and making sure that there’s open dialogue and communication around it.
Far too often, an incident occurs, and then it feels like everybody’s head goes in the sand during the legal process. Understandably, it’s important, but there’s got to be a balance because something can happen next week, next month, and some of these things take years before they resolve. Your ability to affect your controls and mitigate the next risk, potentially, is limited by that.
Russel: I would say too that if you relied solely on the accident reports and the legal process, and then what becomes part of public domain in order to modify industry procedures, well, then you’re taking 10 years to make changes you could make in two.
Jim: Yeah, I agree. Just think about how long the process takes to put a new regulation out.
Russel: Well, just the process to do an accident investigation and get to a final report. There’s a reason it takes so long. They’re very, very thorough in what they do. Yeah, no, it’s interesting. I want to come back to functional risk analysis. How is functional risk analysis different than or related to probabilistic risk analysis?
Jim: In my mind, it is the precursor to doing the probabilistic risk analysis. It is more about identifying the risks and the things that you or your organization, or your business process…I don’t know. Identifying those things that keep you up at night and where you’ve got to spend your time and focus.
The probabilistic risk analysis then comes after that, where you start to understand, “Well, where’s my priority? How do I understand the likelihood of these things occurring and the consequences associated with them?” Making sure that you’re applying that structure to it so that you can, one, make a determination of where you’re spending time.
Two, measure whether or not your mitigations were effective. That’s where the PDCA cycles in. That’s where, when you start to implement your controls, you have to identify the metrics associated with those different risks. How do you use that to validate the probability of those things occurring?
You use your actual data to do some validation around that. That’s also the tie to the QA/QC process because you may not have a clear outcome or clear metric that you’re measuring. Measure your process. Measure the effect of that. You have to have some way to make that determination.
To me, that’s where the relationship occurs. Figure out the risk, measure the probability around those things occurring. Use that to determine where you spend your time, focus your efforts, and then put a measurement process in place to make sure you can convey to somebody, “We’ve reduced risk.” Or, “No, we haven’t, and here’s what we have to do about it.”
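A minimal sketch of the check Jim describes, assuming risk is scored as likelihood times consequence and re-scored with observed data after a mitigation; the event rates, consequence value, and effectiveness threshold are hypothetical:

```python
def risk_score(annual_event_rate: float, consequence: float) -> float:
    """Expected loss per year: likelihood (events/year) times consequence per event."""
    return annual_event_rate * consequence

# Hypothetical before/after data for a single risk in the register.
baseline = risk_score(annual_event_rate=0.8, consequence=100_000)
after_mitigation = risk_score(annual_event_rate=0.3, consequence=100_000)

reduction = 1 - after_mitigation / baseline
EFFECTIVENESS_THRESHOLD = 0.25  # illustrative cutoff for deciding the mitigation worked

if reduction >= EFFECTIVENESS_THRESHOLD:
    print(f"Risk reduced {reduction:.0%}: move on to the next priority.")
else:
    print(f"Risk reduced only {reduction:.0%}: try a different control.")
```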
Russel: Right. Again, so pivoting again just a little bit. I know I’m hopping around on you a bit, but we had talked about the Edison Electric Institute and some of the things it’s doing. Could you tell the listeners a little bit about what’s going on at EEI and why that’s important for us to know about as pipeliners?
Jim: One of the things that we get the advantage of doing is we’re not just working in 1173, so we support electric utilities as well, putting SMS and processes in place to evaluate that. EEI has done some things working with Matt Hallowell from the University of Colorado and others.
They’ve had teams put in place to start to focus on high risk activities or high risk, high energy situations, where they identify where all the serious injury and fatality occurrences are. What that allows them to do is, as a utility, you can go back and look at your business processes and the things that your operators do and how they tie to those high energy sources.
Clearly, working in a pipeline industry, the commodity that we’re pushing through the pipelines is a high energy situation. The beauty of what they’ve done is they’ve tied mathematics behind it, and so you can start to quantify whether you do or do not have a high energy situation.
These are typically where those SIF events occur. It’s a way for you to start to prioritize which processes and which functions you perform and how they align to those high energy situations where you’re most likely to have a significant or catastrophic safety incident.
It’s a nice structure around making sure that you can look at your processes. Now you can start to hone in where you need to spend your time. If you’re going to do a functional risk assessment, what are the things that are going to allow you to avoid those? It’s a good model to be able to do that.
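A minimal sketch of the kind of quantification Jim alludes to, assuming a simple gravitational-energy calculation compared against a fixed foot-pound threshold; the 500 ft-lb cutoff and the scenario are illustrative assumptions, not figures from the episode:

```python
HIGH_ENERGY_THRESHOLD_FT_LB = 500  # illustrative cutoff; confirm against the EEI model

def gravitational_energy_ft_lb(weight_lb: float, height_ft: float) -> float:
    """Gravitational potential energy of a suspended load, in foot-pounds."""
    return weight_lb * height_ft

# Hypothetical scenario: a 300 lb pipe section suspended 6 ft above workers in a ditch.
energy = gravitational_energy_ft_lb(weight_lb=300, height_ft=6)
is_high_energy = energy >= HIGH_ENERGY_THRESHOLD_FT_LB

print(f"Stored energy: {energy:.0f} ft-lb -> high-energy situation: {is_high_energy}")
```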
Russel: Interesting. Let me attempt a functional risk analysis novice’s summary of this conversation. First off, doing this and doing it well is a lot of work, and it requires the company to build certain competencies and muscle to be able to sustain those competencies and sustain those efforts. That’s one takeaway.
That being said, there is a maturity model. Starts with compliance, moves to having an understanding of the processes you’re performing and the outcomes you’re trying to get to. From there, it’s an understanding of what are the things that could go bad. That’s the functional risk analysis part.
Then, from there, it’s putting probabilities on those negative outcomes, and that’s the probabilistic risk analysis part. If I were starting at a utility that had none of that, and most people have some, I might take an approach where I look at where the high energy or high consequence activities are, and I start with those first.
Jim: Absolutely. Yeah. We spend a lot of time talking about DARTs and preventable vehicle collisions and people’s sprains and strains, and while those things are not trivial, and they certainly are prevalent, they’re not the things that keep people up at night.
We’ve got to spend our time on the things that are serious. That model helps you get to that. You can go in eyes wide open with a determined approach, supported by data. Give yourself the opportunity to focus your resources on those things that are most significant.
Russel: If I were a startup midstream pipeline, and I’m having to build all this from scratch, where would I go to start? Is there a place I could go and find a model that gets me 20, 60, or 80 percent of the way there?
Jim: There’s books out there that you can go buy. There’s a variety of things, but it’s like reading an engineering textbook in many cases. There’s resources that are online. From an SMS perspective, pipelinesms.org.
There’s other forums that are out there that can help you get going from an SMS perspective. There’s a LinkedIn group, a PSMS LinkedIn group, just a great group for discussion. It’s not going to get you to the details. It’s going to get you to the resources.
Russel: There’s no off the shelf framework that you can start with and modify. You’re going to have to build your own.
Jim: The off the shelf framework is API 1173. It’s not the only one. There’s other models, other SMS models that are out there. You could use a QMS as well. QMS maybe lacks a little bit of the risk management process that SMS has. They’re all related.
Russel: The devil’s in the details, running it all the way down to “How do I execute a dig program type detail” versus “I’ve got to have an integrity management program.” It’s a lot.
Jim: Russel, one of the beauties of being in the job I’m at now is I get to talk to utilities all over the country. Sometimes what you see are departments within the company that do things exceptionally well, and they effectively exercise SMS.
I suspect even within a company that maybe hasn’t jumped into it as much, they probably have that example. They may not have parts and pieces, but the intent behind it is there, and you can see that as a model, and having somebody maybe to come in and help organize or understand that that’s the case.
Then how do you apply that to the broader network and utility? There’s pockets of that all over.
Russel: Yeah, no, that’s absolutely true, and that certainly conforms with my experience. Whenever you have the opportunity to work with a lot of different pipeline operators, and you get to work with multiple departments within those pipeline operators, you find that we’re all human systems.
There are places of excellence and places that need improvement, and that’s just the nature of human systems.
Jim: Yep. Absolutely.
Russel: Jim, this has been great. I feel like I can at least talk intelligently about functional risk analysis now, so that’s good. We’ve made progress.
Jim: That’s all right.
Russel: I’m not ready for the exam yet though, so don’t be sending me any multiple choice questions.
Jim: I had an old boss that told me one time, “Jim, sometimes you’re trying to teach a master’s degree level engineering course to a bunch of sixth graders. You got to bite it off a little bit at a time, and eventually we all get there.”
Russel: Absolutely. Listen, thanks for spending your time with us and we’re definitely going to have to get you back.
Jim: Yeah, sounds good. Appreciate it, Russel.
Russel: I hope you enjoyed this week’s episode of the Pipeliners Podcast and our conversation with Jim. Just a reminder, before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit PipelinePodcastNetwork.com/Win and enter yourself in the drawing.
If you’d like to support the podcast, the best way to do that is to leave us a review. You can leave us a review wherever you happen to listen. You can find instructions at PipelinePodcastNetwork.com.
If you have ideas, questions, or topics you’d be interested in hearing about, please let me know on the Contact Us page at PipelinePodcastNetwork.com, or reach out to me directly on LinkedIn. Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords