This week’s Pipeliners Podcast episode features Jill Watson continuing her discussion about probabilistic risk analysis, how to build a probabilistic risk analysis model as a pipeliner, and the many ways a PRA model can be beneficial to the pipeline industry.
If you missed last week’s episode, you can listen to it HERE.
In this episode, you will learn what goes into the probabilistic risk analysis model, how humans can interfere with the process, and what models are considered strong.
Probabilistic Risk Analysis Model Show Notes, Links, and Insider Terms:
- Jill Watson is the manager of the Technical Safety and Risk division at Xodus Group in Houston. She has more than 25 years of experience in process safety and risk analysis in the oil & gas, energy, and industrial sectors. Growing up as the daughter of a nuclear engineer, she gravitated to the nuclear industry, where she developed expertise in Probabilistic Risk Assessment (PRA). Jill holds an MS in Chemical Engineering from the University of Colorado at Boulder and BS degrees in Chemical Engineering, Chemistry, and Applied Mathematics from North Carolina State University. Connect with Jill on LinkedIn.
- Xodus Group is a global energy consultancy that unites unique and diverse people to share knowledge, innovate, and inspire change within the energy industry.
- PHMSA (Pipeline and Hazardous Materials Safety Administration) is responsible for providing pipeline safety oversight through regulatory rule-making, NTSB recommendations, and other important functions to protect people and the environment through the safe transportation of energy and other hazardous materials.
- PRA (Probabilistic Risk Analysis) is a systematic and comprehensive methodology to evaluate risks associated with a complex engineered technological entity or the effects of stressors on the environment.
- Risk in a PRA is defined as a feasible detrimental outcome of an activity or action.
- PHMSA Risk Modeling Guidance
- Nuclear power is the use of nuclear reactions to produce electricity. Nuclear power can be obtained from nuclear fission, nuclear decay and nuclear fusion reactions.
- SMYS (Specified Minimum Yield Strength) is the minimum yield strength specified for steel pipe manufactured in accordance with a listed specification.
- Burst pressure refers to the internal pressure that causes a pipe to burst or fracture.
- Fault tree analysis (FTA) is a top-down failure analysis in which an undesired state of a system is examined, using Boolean logic to combine lower-level failure events.
- Consequence analysis is the analysis of the potential consequences of hazardous incidents, such as injuries, fatalities, and damage to assets and the environment.
- Uncertainty Analysis
Probabilistic Risk Analysis Model Full Episode Transcript:
Russel Treat: Welcome to the “Pipeliners Podcast,” episode 276, sponsored by Gas Certification Institute, providing standard operating procedures, training, and software tools for custody transfer measurement and field operations professionals. Find out more about GCI at GasCertification.com.
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. We appreciate you taking the time, and to show the appreciation, we give away a customized YETI tumbler to one listener every episode. This week our winner is Nathan Stolper with Magellan Midstream. To learn how you can win this signature prize, stick around till the end of the episode.
This week, Jill Watson with the Xodus Group returns for part two of our Introduction to Probabilistic Risk Analysis. Jill, welcome back to the Pipeliners Podcast.
Jill Watson: Hi, Russel. Thanks for having me.
Russel: Last week, we were talking about probabilistic risk analysis, and I got to tell you, you hurt my head. For the listeners, we just took a short pause, and now, we’re recording the second part of this episode, and my head is still hurting. Anyways, thank you for coming back.
I want to dig in a little deeper and really talk about how you go about building a probabilistic risk analysis model and why you might want to do that if you’re a pipeliner. Tell us a little bit about the risk modeling working group and kind of the guidance that’s coming out of PHMSA.
Jill: That’s a great question. In February of 2020, PHMSA issued their risk modeling guidance document. It brought together industry people and leaders in integrity management to review what the different types of risk models were, the strengths and weaknesses of these models, and how effective they were in addressing the integrity management challenges.
As a result of that risk modeling group, they put together a document that walks through all the different types of risk models that you could use. Then, it evaluates how well each model addresses the integrity management challenges.
One of the models they looked at was the probabilistic risk assessment model. The risk modeling working group deemed that to be the only model that was best practice for making all of the decisions in the integrity management program. PRA stood out with the big gold star.
Russel: Interesting. What was the rationale, or why did they come to that conclusion? Did they elaborate on that in the report?
Jill: It’s called risk allocation. You mentioned it on the last podcast. Again, the more information you have from the standpoint of risk and consequences, the better you can decide where to put your efforts, but if you’re only looking at two different things, you can’t really effectively allocate those resources.
The idea is that you want to put your money where you really need to and put your efforts there, too, and that’s where that risk allocation kind of concept comes from.
Russel: That actually makes a lot of sense. That makes a lot of sense. If I’m a pipeliner and I’m wanting to start to build a probabilistic risk analysis model, how do I go about doing that?
Jill: Most operators probably are pretty close. People have models, and they also have very large databases. What you really need to do is to rev up your approaches.
When you’re looking at, say, a burst pressure calculation, you might want to use your operating pressure to determine when that defect is going to burst based on your SMYS or what have you, but you could also do a structural analysis, looking at the material properties. You could take that deterministic calculation, and you could turn it into a probabilistic solution.
That’s just by accounting for the uncertainty in, say, your material properties with regard to that burst pressure calculation. It’s a very easy way to go from a deterministic calculation to a probabilistic calculation.
Russel: Just take that example, if you would, Jill. Could you take it another level? Walk me through more specifically what that means.
I understand the deterministic burst calculation. You have inputs into the algorithm, material properties, and so forth. Then, I have pressure, and then I calculate a burst pressure.
Jill: Exactly, exactly. With regard to the calculation, you get values for, say, the material properties. They give you a number, but the vendor also has a confidence level associated with that value, so it’s a best estimate, for lack of a better word.
There is a distribution because if you say something is 10, it’s not always 10. Sometimes, it’s 9.9. Sometimes, it’s 10.1. The probabilistic approach doesn’t assume it’s 10. It assumes it’s some distribution represented by the uncertainty.
That’s where you end up getting many more potential values for the pressure at which you would fail. It’s not just 10. It could be a big range based on the operating pressures and then the material properties.
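To make Jill’s example concrete, here is a minimal Python sketch of turning a deterministic burst pressure check into a probabilistic one. It uses Barlow’s formula (P = 2·S·t/D) as a stand-in for whatever burst model an operator actually uses, and every number in it is illustrative rather than real pipe data.

```python
# A minimal sketch, not a real integrity calculation: sample the uncertain
# inputs instead of plugging in single best-estimate values.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                               # Monte Carlo samples

# Deterministic inputs become distributions (best estimate plus uncertainty).
smys = rng.normal(52_000.0, 2_000.0, n)   # yield strength, psi (illustrative)
wall = rng.normal(0.375, 0.010, n)        # wall thickness, in (illustrative)
diameter = 24.0                           # outside diameter, in (treated as known)
p_operating = 1_000.0                     # operating pressure, psi

# Barlow's formula for each sampled pipe: P_burst = 2 * S * t / D
p_burst = 2.0 * smys * wall / diameter

print("deterministic best estimate:", 2.0 * 52_000.0 * 0.375 / diameter, "psi")
print("5th-95th percentile of burst pressure:", np.percentile(p_burst, [5, 95]))
print("P(burst pressure < operating pressure):", np.mean(p_burst < p_operating))
```

The deterministic calculation returns one number; the probabilistic version returns a distribution and a probability of failing at the operating pressure, which is the extra information Jill is pointing to.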
Russel: Interesting. Those are the things that make my mind hurt, because doing this integrity management, doing the risk analysis is tough enough as it is, and now I’m adding a whole other level of complexity to the model. There has to be value in that complexity. What’s the value in adding that complexity?
Jill: For the operator, again, I will go back to the fact that when you do something probabilistic, you’ve learned so much more from it, and you gain confidence when you’re making your decisions.
From a regulatory standpoint, if you’re using a probabilistic solution and a probabilistic risk model, that opens a door with the regulator. When the model tells you something is risk significant versus not risk significant, you already have the evidence that allows you to say this is significant or this isn’t.
Then when you can identify things that are not significant, then you don’t have to do anything extraordinary to treat those things. That’s exactly how it works in the nuclear industry.
Russel: If I had a particular parameter I’m looking at, and the uncertainty is high, does that make that parameter carry more risk than if the uncertainty was low?
Jill: Not necessarily. Having a large uncertainty is OK. What would end up happening is, if that feature ended up being risk significant in your model and it was low confidence, you would figure out where the lack of confidence was coming from – was it missing data of some nature, or wrong data? – and you would go ahead and address that, because you don’t have the confidence that it’s an exact number.
Russel: You might, for example, do a dig to collect data to improve your model, not just to repair a defect?
Jill: I can give you a pretty good example if I can talk about nuclear space.
Russel: Sure.
Jill: In the nuclear industry, we have this event in the steam generators of pressurized water reactors. If a tube in the steam generator ruptures and the water level is below that rupture, it’s an automatic release to the outside world. We call these thermally induced steam generator tube ruptures.
The industry has put together guidance. There are a lot of things that happen late in the accident sequence that we don’t have a lot of confidence in. Everybody has to look at these accidents because they’re significant and we have them in our model.
If we had to make a decision about thermally induced steam generator tube ruptures, we’d go back to our model, and we would see that we had low confidence in those numbers to start with.
We wouldn’t be able to use our model to make those decisions. We’d have to go outside of the model to get more information and do a deeper dive to figure that out, so we can’t let our model drive us in those situations.
Russel: What the model drives you to do is get the data you need to make the model work?
Jill: These scenarios change the outcomes of the model significantly, which means they’re important, but we don’t have all the data. We have to recognize that they’re in there and that we’re unclear about them, but we can’t use our model to make those decisions.
It’s like using an approximation, and it is still in there because it still shows these are bad and they’re high, but there’s nothing that we can do about them.
Russel: Oh, man. I got to tell you that this stuff is very hard for me to conceptualize. I don’t know why it’s very hard for me to conceptualize. I’m normally pretty good at that, but in this domain, I’ve had this conversation with other risk folks that it’s hard for me to mentally get there, if that makes sense.
Jill: No, I get it. I get it.
Russel: What are the things you have to do to build your model? What are the various kinds of things that you’re putting into the model?
Jill: You definitely need all of your data, and it would depend on what it is. Nuclear plants require different stuff, but for a pipeline, it’s probably all the same information that we already have. The treatment of the data is where the probabilistic approach differs.
Then also, our model is a fault tree analysis, and that is a highly structured analytical tool. Again, in the nuclear space, and also when I’ve done a probabilistic risk model for a pipeline, it’s a humongous model, but everything is uniquely represented in the model.
Every pipeline segment is in the model. Then, when you push the button to get the results, it just rolls out everything that’s happening for your entire pipeline section. You start with the data.
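As a rough illustration of the fault tree structure Jill mentions, here is a tiny sketch with made-up basic events and probabilities for a single hypothetical segment. Real pipeline and nuclear fault trees are enormously larger; this only shows the AND/OR gate logic, and it assumes the basic events are independent.

```python
# A toy fault tree for one hypothetical segment (all probabilities invented):
# loss of containment requires a through-wall defect AND a failure to catch it,
# where "failure to catch it" means the ILI program misses the defect AND the
# mitigation fails (leak detection fails OR the operator does not act).

def and_gate(*probs: float) -> float:
    """AND gate: every input event must occur (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs: float) -> float:
    """OR gate: at least one input event occurs (independence assumed)."""
    none_occur = 1.0
    for p in probs:
        none_occur *= (1.0 - p)
    return 1.0 - none_occur

p_through_wall_defect = 1e-3    # per segment-year, illustrative
p_ili_misses_defect = 5e-2      # illustrative
p_leak_detection_fails = 1e-1   # illustrative
p_operator_no_action = 2e-2     # illustrative

p_no_mitigation = or_gate(p_leak_detection_fails, p_operator_no_action)
p_top = and_gate(p_through_wall_defect, p_ili_misses_defect, p_no_mitigation)
print(f"P(loss of containment, segment) ~ {p_top:.1e} per year")
```

In the full model, each pipeline segment gets its own uniquely identified events like these, which is what lets the results roll up across the whole system when you push the button.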
Russel: Yes, the segments and sizes and properties.
Jill: Right, depth of cover, you name it.
Russel: Alignment.
Jill: Exactly.
Russel: Well, run histories, all that stuff, so all the physical information about the environment the pipe is in. What about human factors? Is that part of the fault tree analysis?
Jill: Absolutely, it is. We use human reliability analysis, and that is calculating the probability that an operator, or maybe a maintenance fellow, is doing something incorrectly.
We also talked about looking at incident investigations. In most incident investigations, you will find there’s some contributing factor coming from human interactions.
That’s the ability to predict what your people, your operators, are going to do and how successfully they’re going to be able to do it, particularly in times when they’re not running at a steady state in normal operations.
They’re thrown into a situation where, “Oh, my gosh, we’ve got something going on. You need to go and run and do this.” That’s when you have the most stress on your people, and that is also when you would expect them to perform less competently.
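The flavor of what Jill describes can be sketched as a nominal human error probability scaled up by performance shaping factors such as stress and time pressure. The structure loosely echoes how HRA methods work, but every multiplier and probability below is a made-up illustration, not a value from any published method.

```python
# A minimal sketch of the human reliability idea: a nominal human error
# probability (HEP) for a task, scaled by performance shaping factors.
# All numbers are illustrative only.

def adjusted_hep(nominal_hep, shaping_factors):
    """Multiply a nominal HEP by each performance shaping factor, capping at 1.0."""
    hep = nominal_hep
    for factor in shaping_factors.values():
        hep *= factor
    return min(hep, 1.0)

# The same manual isolation task under two sets of conditions.
routine = adjusted_hep(1e-3, {"stress": 1.0, "time_pressure": 1.0})
upset = adjusted_hep(1e-3, {"stress": 5.0, "time_pressure": 4.0, "poor_indication": 2.0})

print(f"HEP, normal operations: {routine:.1e}")
print(f"HEP, upset conditions:  {upset:.1e}")
```

The point of the sketch is the comparison: the same person doing the same task is far more likely to get it wrong during an upset than during routine operations, and the model carries that difference explicitly.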
Russel: Certainly. Certainly. I’ve got all the information about the pipe. I’ve got all the information about the environment. I’m doing human reliability analysis. Am I doing consequence analysis? If it occurs on this part of the pipeline, what’s the consequence of that versus someplace else like high consequence area analysis and all of that?
Jill: Yeah, absolutely. That’s pretty much what most of the operators are doing, too. The industry is in pretty good shape on that. A lot of people are using the PHAST model, but those release classifications and those release scenarios also can be a probabilistic distribution for the consequences associated with a pipeline release.
Russel: Right. The thing that’s coming to me here is that this model, this data set is immense, because it’s all the information we have. Then, is there any additional information I need to build?
Jill: No, but it’s what you do with your data. In a PRA, we have to find what those initiators are. Those are things that don’t directly lead to an accident, but they’re a perturbation in your process.
Say you’ve got a break on your pipeline. You have shut-in valves, so there’s the opportunity for those valves to shut. Also, depending on the size of the break, maybe your control room is not getting the information that you’ve got a leak going on, because it takes a while for small breaks to show up in some of those control systems.
You would look at the pipeline from the standpoint of, “Well, OK, if there’s a rupture, the control room is going to know right away, and they’re going to be able to shut those valves.” We might be in better shape with a large break than with a little one that goes on for a while before we even know it.
The timings of those kinds of accidents also play a role in the probabilistic scenarios. Again, for the small breaks, you might have a leak that’s running for three or four days, and for the large ones, you shut that valve in and you should be in good shape.
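Here is a back-of-the-envelope sketch of the timing effect Jill describes: a large rupture is detected and isolated quickly, while a small leak can sit below detection thresholds for days. The release rates and times are illustrative assumptions, not modeled values.

```python
# Crude released-volume estimate: leak rate times the hours until the line is
# isolated. All rates and times below are invented for illustration.

def released_volume_bbl(leak_rate_bph, hours_to_detect, hours_to_isolate):
    """Barrels released before the segment is shut in."""
    return leak_rate_bph * (hours_to_detect + hours_to_isolate)

# Full-bore rupture: huge rate, but SCADA flags it in minutes and valves close fast.
rupture = released_volume_bbl(leak_rate_bph=5_000, hours_to_detect=0.1, hours_to_isolate=0.2)

# Pinhole leak: tiny rate, but it runs for days before anyone knows it is there.
pinhole = released_volume_bbl(leak_rate_bph=20, hours_to_detect=96, hours_to_isolate=2)

print(f"rupture release: ~{rupture:,.0f} bbl")
print(f"pinhole release: ~{pinhole:,.0f} bbl")
```

In this made-up case the slow leak ends up releasing more than the quickly isolated rupture, which is the kind of counterintuitive result the timing branches in a probabilistic scenario are there to capture.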
Russel: One of my takeaways from this conversation is that there’s a big data management challenge around all this, because the kind of data you’re talking about, typically, at least in my experience, exists in multiple places and multiple systems in a pipeline company.
Jill: Yes. Right.
Russel: I have to gather all that stuff up in some way. Then I’ve got to feed it into a model and crunch it.
Jill: Yes, but it’s not wholly different from what they’re doing now.
Russel: Yeah, I acknowledge that.
Jill: You’re still touching the same data. It’s just what you do with it.
Russel: It makes sense to me. Of course, I’m not a guy that lives in this. I’m sitting here thinking it feels like a big lift to me. It might just be the difference between what I understand and what the reality is, but it feels like a pretty big lift to implement this as my risk approach.
Jill: I would agree, but I also go back to the nuclear industry. A couple of years ago or 20 years ago, this was new, too, and now, it’s not new anymore.
Then, the other thing that I would point out is that there is a big shift, with regulatory agencies looking at this probabilistic approach. We see a lot of regulatory agencies interested in this methodology because of the results that it yields and the confidence that they, as regulators, have that those operators are working safely and adhering to the requirements. Also…
Russel: I’m sorry, go ahead.
Jill: Also, science and technology are moving us along, again, like I mentioned, that digitalization.
We’re going to want to see our pipelines. We’re going to want to see where all of this stuff is. We’re going to end up taking that data, maybe aggregating it up a little bit, so we’re already doing that with our data. It is a lift, but it’s not as big a lift as it would have been coming from where we were 10 years ago.
Russel: Sure. Certainly, our ability to build big datasets, our abilities to work with big datasets, the things we can do on the cloud and we can do with analytics, there’s a lot happening there. Certainly, the folks that live in those worlds, they know how to pull data together and process it quickly, so there’s certainly a lot happening in that domain.
The other thing, though, that I’m hearing in this is that there’s a pretty big payoff potentially for the operator.
Jill: Yeah, I would agree.
Russel: Talk to us a little bit about what that looks like.
Jill: There are risk-averted costs. When you use the probabilistic risk model, again, one of the features of the model is that it generates these risk importance measures.
What those do is give you a measure of the safety significance of the model attributes, or of a component within that model. If we did it for a pipeline, we could determine which segments of our pipe are the most significant.
We could go back and look at the ones that didn’t make the cut. We might still do something with them, but rather than just prioritizing, it gives us a different way to chase risk reduction measures. As I talked about, maybe there’s the big rupture out there, but there might be a thousand little cuts out there, too, and if you could fix those thousand little things, you’d get that same risk reduction.
That’s when we go back to that risk allocation. Do you want to throw your money at your largest pipe with the highest pressure, or what else can you do? What are your options? The PRA model is that optioneering tool.
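To show the screening idea concretely, here is a toy version of ranking segments by their share of total risk and flagging the ones that fall below a threshold. In a real PRA the importance measures (for example, Fussell-Vesely importance) come out of the fault tree itself; every number and segment name below is invented.

```python
# Toy risk allocation: rank hypothetical segments by their share of total risk
# and screen out those below a threshold. All values are invented.

segments = {
    # segment: (failure frequency per year, consequence score)
    "A-01": (2e-4, 900),
    "A-02": (5e-5, 400),
    "B-07": (1e-3, 50),
    "C-12": (1e-6, 800),
}

risk = {seg: freq * cons for seg, (freq, cons) in segments.items()}
total = sum(risk.values())
threshold = 0.05   # below 5% of total risk screens out (illustrative cutoff)

for seg, r in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
    share = r / total
    verdict = "risk significant" if share >= threshold else "screens out"
    print(f"{seg}: {r:.2e}/yr  ({share:5.1%} of total)  -> {verdict}")
```

The “screens out” bucket is exactly what Russel asks about next: the items a defensible model lets you show you do not need to treat in any extraordinary way.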
Russel: Yeah. Would it be fair to say that if you built out a robust PRA model, you could identify those things that you didn’t need to focus on?
Jill: Yes, and you could prove it.
Russel: That’s pretty compelling. That’s pretty compelling, because a lot of times, that’s a hard thing to prove to a regulator.
Jill: That’s the purpose of these risk importance measures. As I say, there’s a value, and everybody recognizes that if it’s below this value, you’re good.
Again, there’s just this understanding across regulators, because obviously it’s a mathematical computation that has some validity in and of itself. You can just say, “OK, here’s the cutoff, and that’s that,” and then we don’t have to worry about some of these other methods of assuming risk tolerances and risk levels and what have you.
Russel: Interesting. If I wanted to learn everything there is to learn about probabilistic risk analysis, where would I go? What book would I read? What seminar would I attend?
Jill: Wow. I don’t know if I could answer that question. There’s a lot of resources. Obviously, there’s lots of nuclear regulatory stuff, and there’s also a lot of NASA stuff, so I would start there. What’s interesting is the history, how this came to be, because it wasn’t just something they selected. It was developed.
Russel: Yeah, that’s right. That’s right. It’s a science that evolved in these industries with a very low tolerance for negative outcomes.
Jill: Yes, and when things really matter, right?
Russel: Right. How do I manage that? When NASA first started and when nuclear first started, those were not questions that anybody had really ever tried to answer before.
Jill: Yeah.
Russel: It’s interesting. Maybe I could ask you to do this. We put together show notes and links and all that kind of stuff on the website when we release a new episode.
Maybe, you could offer some links to resources and such, and we’ll put that into the show notes. People can listen to this, or they can go to the website and go to the transcript and the links, and maybe find more to read and dig in a bit more if they’re so inclined.
Jill: Absolutely. I’d be happy to.
Russel: That would be awesome. Look, Jill, I really appreciate you taking the time to have this conversation with me. I got to say I’m very intrigued. I’d like the opportunity to actually get hands on with one of these models. This is probably a rabbit hole, and I’d be happy to go all the way down it.
Jill: I can tell you, if you see a model, you’ll never go back. You’ll never do it another way.
Russel: That’s fascinating. I believe you. I believe you. Thank you so much. It was great to have you here.
Jill: Thank you, too, Russel. I appreciate it.
Russel: I hope you’ve enjoyed this week’s episode of the Pipeliners Podcast and our conversation with Jill. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit PipelinePodcastNetwork.com/Win and enter yourself in the drawing.
If you’d like to support this podcast, please leave us a review and you can do that wherever you happen to listen. You can find instructions at PipelinePodcastNetwork.com.
If you have ideas, questions, or topics you’d be interested in, please let me know either on the Contact Us page at PipelinePodcastNetwork.com, or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords