This month’s Pipeline Technology Podcast episode, sponsored by Pipeline & Gas Journal, features Black & Veatch’s Michael Nushart discussing his recent Pipeline & Gas Journal article on the different layers of pipeline security.
In this month’s episode, you will learn how securing a pipeline requires looking beyond cybersecurity to other factors, including understanding the nature of the threat, performing a criticality assessment, and protecting against vandalism.
Enhancing Pipeline Security: Show Notes, Links & Insider Terms
- Michael Nushart is a principal consultant with the Black & Veatch Global Advisory Group. Connect with Michael on LinkedIn.
- Black & Veatch is an employee-owned engineering, procurement, consulting and construction company with a 100-year legacy of innovations in sustainable, critical human infrastructure.
- Pipeline & Gas Journal is the essential resource for technology, industry information, and analytical trends in the midstream oil and gas industry. For more information on how to become a subscriber, visit pgjonline.com/subscribe.
- Read Mike’s Pipeline & Gas Journal article here.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at remote locations.
- TSA (Transportation Security Administration) is a U.S. government agency responsible for security in transportation, especially air travel.
- Process Hazard Analysis (PHA) is a set of organized and systematic assessments of the potential hazards associated with an industrial process.
- NGO stands for non-governmental organization. While there is no universally agreed-upon definition of an NGO, typically it is a voluntary group or institution with a social mission, which operates independently from the government.
- SVA (Security Vulnerability Assessment) aims to uncover vulnerabilities in a system and recommend the appropriate mitigation or remediation to reduce or remove the risks. On the network side, a vulnerability assessment typically uses automated security scanning tools.
- PLCs (Programmable Logic Controllers) are programmable devices placed in the field that take action when certain conditions are met in a pipeline program.
- Risk is the likelihood that an attack, incident, or exposure could occur, combined with the severity of its impact on your systems or your organization. Risk is the combination of likelihood and consequence (see the short sketch after this list).
- Threat is anything that can exploit, use, or cause a vulnerability, whether intentional or unintentional.
- RTUs (Remote Terminal Units) are electronic devices placed in the field. RTUs enable remote automation by communicating data back to the facility and taking specific action after receiving input from the facility.
- NTSB (National Transportation Safety Board) is a U.S. government agency responsible for transportation safety across the aviation, highway, marine, railroad, and pipeline modes. The agency investigates transportation incidents and accidents and makes recommendations for safety improvements.
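To make the likelihood-and-consequence definition of risk above concrete, here is a minimal Python sketch. The scenarios, the 1-to-5 scales, and the simple multiplication are illustrative assumptions only, not a scoring method prescribed by TSA or discussed in the episode:

```python
# Illustrative risk scoring: risk as the combination of likelihood and
# consequence. The scenarios and 1-5 scales below are made up for the example.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    consequence: int  # 1 (negligible) to 5 (catastrophic)

    @property
    def risk(self) -> int:
        # One common convention: risk score = likelihood x consequence.
        return self.likelihood * self.consequence


scenarios = [
    Scenario("Unauthorized SCADA set-point change", likelihood=2, consequence=5),
    Scenario("Gate chain cut at a remote valve station", likelihood=3, consequence=4),
    Scenario("Vandalism to an exposed regulator control line", likelihood=3, consequence=3),
]

# Rank highest risk first so mitigation budget goes where it matters most.
for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    print(f"{s.name}: risk = {s.risk}")
```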
Enhancing Pipeline Security: Full Episode Transcript
Announcer: Welcome to the Pipeline Technology Podcast, brought to you by the Pipeline & Gas Journal, the decision-making resource for pipeline and midstream professionals. Now your host, Russel Treat.
Russel Treat: Welcome to the Pipeline Technology Podcast, episode 25. On this episode, our guest is Michael Nushart with Black & Veatch. We’re going to talk to Mike about his Pipeline & Gas Journal article entitled “Enhancing Pipeline Security: Does Cybersecurity Alone Provide Adequate Protection?” Mike, welcome to the Pipeline Technology Podcast.
Michael Nushart: Thank you very much. Glad to be here.
Russel: I’m glad to have you. As we were chatting before we started recording, we touched on your article and a different take, maybe, on cybersecurity than what some others have done: looking at cybersecurity as part of your operations reliability. Maybe you could tell us a little bit about what’s going on right now with cybersecurity in the pipeline space.
Michael: I’ve been working on a number of different projects, especially from the SCADA upgrade, SCADA install standpoint. Obviously, that’s a big issue. Cybersecurity is a big issue with that, on both transmission and distribution. Much of what folks are looking at, and rightly so, is the IT aspect of cybersecurity.
What we’re trying to emphasize as well is looking at cybersecurity from the standpoint of the downstream effects that an operator does not want to happen as a result of an intrusion. That applies both to the IT cybersecurity part and to the physical aspect of security as well.
Russel: Can you talk about that a little bit more? What are some of those downstream aspects that you’re referring to?
Michael: When you look at some of the TSA requests that came out (they were requests at the beginning), those begin to address the things that will affect, or have an impact on, the operation of the system. They’re really about operational disruption.
We want to make sure that we’ve looked at the downstream effects, whether that be intrusions that cause overpressure issues, curtailment issues that were unplanned, or other types of disruption to the system. It’s especially aimed at looking at, obviously, safety to the public but also inconvenience to the public. TSA is also concerned about disruption to government facilities as well.
Russel: I’ve done a number of podcasts on cybersecurity. I don’t consider myself a cybersecurity SME, but I’m pretty knowledgeable about the issues and the details. I think this take is really interesting. Fundamentally, a lot of the focus, at least as I’ve heard it, is making sure that nefarious actors can’t get into your network and interact with your SCADA system.
Really, I don’t know that that changes, but what you’re pointing out is that the real focus is operations reliability: my ability to continue to deliver to my customers, my ability to continue to do that safely, and the kinds of operational upsets that would cause me grief.
Michael: That’s correct. I’m glad that you picked up on that as well. Granted, as you mentioned, the nefarious actors that can get into your systems can wreak havoc with a lot of things.
When it comes to operational reliability and the pieces that go with public safety and so forth, as I started out saying with the SCADA system example, all of those will eventually require some change of state to the system, whether that’s opening a valve, closing a valve, or changing a set point.
It’s all of those pieces with a change of state. Changes of state can happen to your system beyond the cyber realm as well. They can happen at remote locations. Those are some of the things that I’m suggesting operators take a look at when they look at this overall piece of pipeline safety and pipeline reliability.
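A rough way to picture this “change of state I didn’t cause” concern is a watcher that compares what the operator commanded against what the field reports. This is only a sketch; the point names, states, and polling shape are hypothetical, not any particular SCADA product’s API:

```python
# Hypothetical example: flag field states that differ from what was commanded,
# i.e., changes of state the operator did not initiate.

# What the control room last commanded (hypothetical tag names and values).
commanded = {"inlet_valve": "open", "outlet_valve": "open", "setpoint_psig": 550}


def uncommanded_changes(observed: dict) -> list:
    """Compare an observed field snapshot against commanded state."""
    alarms = []
    for point, expected in commanded.items():
        actual = observed.get(point)
        if actual != expected:
            alarms.append(f"UNCOMMANDED CHANGE: {point} = {actual!r}, expected {expected!r}")
    return alarms


# A field poll showing a valve closed that no one commanded closed.
for alarm in uncommanded_changes({"inlet_valve": "closed",
                                  "outlet_valve": "open",
                                  "setpoint_psig": 550}):
    print(alarm)
```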
Russel: It’s akin to what you do in a PHA, process hazard analysis, but adding in the cyber or unplanned actuation. Gosh, I’m struggling for a way to say this, actually. The changes of state that I didn’t cause, some other person or some other agent caused.
Michael: Yes, that’s correct. Again, those changes of state can be caused, obviously, through changing a set point or a command in SCADA, but they can also be caused at the station itself, from a physical standpoint.
If that’s a concern, that’s what the original TSA guidelines set out to do: look at some of the vulnerabilities in your system that could cause situations that, again, affect pipeline safety, public safety, and, as TSA is concerned, potentially government facilities.
Russel: If I were a pipeline operator and looking at that, I’d look at what other critical infrastructure is relying on me for deliverability and that sort of thing. It’s all the same analysis. It also goes, I would think, to mitigation. Some of that mitigation might be in things completely unrelated to my automation or network. Would that be true?
Michael: I believe it is. For example, what level of intrusion alarming do you have built in on the cyber side? At the same time, what type of physical intrusion prevention or mitigation do you have at the remote sites?
Having been an operator in a past life, I know that many of the stations are basically protected with a single chain around the gates and four padlocks on it. It would not be difficult to push the gate with your car and pop the chain.
Is that what you really consider to be security at that point? If that’s a critical station, you’re really going to want some other measures, whether that be bollards, other types of physical security, or security intrusion alarming.
Russel: I would think too that it depends on what you have at that station. If all I had was a meter run, that’s a whole different level of threat or risk than a valve station, a regulator station, or a pump station.
Likewise, once I actually get into the station, what can I actually interact with to do something? Are there things I can do to lock up cabinets and so forth to make sure people can’t get into them to make changes?
Michael: Absolutely correct. That’s a good observation on your part. That is part of the issue. When you look at the issues from the cyber standpoint, yes, you have the software at the front end, at the control end, but you also have the OT, the operational technology, at the station.
Whether that be remote operators, motor controls, or sensors for pressure, temperature, and so forth, all of those things can be accessed as well if they’re not well secured, along with the other pieces that you mentioned, such as pressure regulators and valves.
With the pressure regulators, I’m sure the folks listening to this know how much damage can be done just by closing a quarter-inch or three-eighths-inch valve on a control line on a regulator. That can give you either complete outage problems or complete overpressure problems as well. Exposed control lines can also be subject to any number of calibers of projectiles.
Russel: This way of thinking really does cause you to do a whole rethink on how you design pipeline stations.
Michael: Definitely. The mitigation that you mentioned earlier comes down to whether we need to put up other types of barriers that can handle threats at the vandalism level or slightly above, with the full understanding that if it were full-out terrorism, the likelihood of an operator being able to completely defend against it is relatively low.
From malicious vandalism up to that level, those are things that probably can be mitigated with relative ease.
Russel: That’s another good point you’re making, Mike, is it’s not just what are you trying to secure, but it’s also what’s the nature of the threat. If I’m operating in a geography where there’s active NGOs that are protesting against my pipelines, it is very different than if I’m operating in West Texas, where most people support oil and gas.
Michael: Good point.
Russel: Understanding the nature of that threat, and then understanding whether this pipeline station is critical to these customers, this government facility, and so on, getting an inventory of risks from that perspective is another part of this whole mindset, I guess.
Michael: That’s where the TSA Pipeline Security Guidelines talk about doing, for example, criticality assessments: what are the possible downstream effects, as you mentioned. Their next step is going into what they call the SVA, the security vulnerability assessment. It’s how critical that facility is and how vulnerable that facility might be.
To me, that is part and parcel of this whole overall, as we’re generally terming it, cybersecurity. It is making sure that the whole pipeline system is safe.
Russel: This is really beyond what is typically thought of as cybersecurity. It’s more operational reliability or operational security, cyber being a key piece of that, but not the only piece. If you would, maybe you could walk me through what a criticality assessment is and what kinds of things an operator would need to do to perform one.
Michael: Let’s take some of the obvious situations. Operators definitely want to make sure that they are avoiding overpressure conditions. That would probably be the largest one, although it’s not to say that the opposite end of that scale is not important either.
Again, depending upon the conditions, a change of state causing overpressure to the system, or a change of state causing the system to be shut in without the operator planning to do so, those are some of the things that an operator would need to look at.
How critical is this, or do we have additional points of supply that, even though there may be an intrusion or an incursion in that case, will we still be able to maintain what needs to be maintained?
Looking at some of those situations, is it valves? Is it the inlet valve to the station, the outlet valve to a station, or, as I mentioned earlier, even things like control lines on pressure and flow controllers? Those are some of the things that you would look at from the criticality standpoint. How critical is a particular facility? What impact can it have on jeopardizing public safety, pipeline safety, and, as TSA is concerned, government supply as well?
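As a sketch of how such a criticality screen might be tabulated, the following toy example scores a facility on the downstream consequences Michael lists. The factors, weights, and tiers here are assumptions for illustration, not TSA’s methodology:

```python
# Toy criticality screen. Factors, weights, and tiers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Facility:
    name: str
    can_cause_overpressure: bool  # could a change of state overpressure the system?
    can_cause_shut_in: bool       # could a change of state shut in supply unplanned?
    serves_government: bool       # downstream government facilities (a TSA concern)
    has_alternate_supply: bool    # other supply points that keep the system fed


def criticality_tier(f: Facility) -> str:
    score = 0
    score += 2 if f.can_cause_overpressure else 0
    score += 2 if f.can_cause_shut_in else 0
    score += 1 if f.serves_government else 0
    score -= 1 if f.has_alternate_supply else 0  # redundancy lowers criticality
    return "high" if score >= 3 else "medium" if score >= 1 else "low"


city_gate = Facility(
    name="City gate feeding two power plants",
    can_cause_overpressure=True,
    can_cause_shut_in=True,
    serves_government=False,
    has_alternate_supply=False,
)
print(f"{city_gate.name}: {criticality_tier(city_gate)}")  # -> high
```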
Russel: I’m reading the article. The thing that’s jumping out for me as we’re talking about this criticality assessment is an analysis of the computer systems and physical assets that can contribute to one or more of the operational consequences.
That’s something I’ve never really thought about in this context. I’ve thought about the overpressure issue, but for most systems that have been well designed and been through a PHA, there’s going to be physical relief. They’ll have pressure safety valves and such, so that if they get above certain pressures, those open up and protect the integrity of the system.
You don’t want to be operating those things, but that’s very different than if I’ve got a city gate. It’s got a couple of power plants behind it. If I lose this city gate, I lose power to a city. That’s a very different kind of thing. I would assert that protecting against unplanned curtailment is a much more challenging issue than protecting against overpressure.
Michael: I would say, in many circumstances, it is. Yes, I would tend to agree with you.
Again, in my past life dealing with both transmission and distribution, I would say that there are conditions in distribution where overpressure is potentially more hazardous, especially for operators that run, for example, low-pressure systems, where there are no pressure regulators or internal relief valves at the customer meter.
You could have a situation there as well, somebody operating a bypass that they shouldn’t be operating and overpressurizing a system. There are any number of ways that, with a little bit of knowledge, someone can create a hazard to your system.
Russel: Certainly, we saw that issue occur, for completely different reasons, in Massachusetts a few years ago, where an overpressure event delivered too much gas to some homes. Those homes, because of pilots and other issues, caught fire. Certainly, that’s a real threat.
Same kind of question. Tell me a little bit, if you would, about the security vulnerability assessment. What is that?
Michael: Once an operator has determined where their highest criticalities are, where their critical systems are, it goes back to some of the things we’ve discussed already. What are the vulnerabilities of those particular critical systems? Meaning, are the greatest vulnerabilities through the software and so forth?
Again, we’re not trying to downplay that, by any means. On the true cybersecurity point, you want to make sure you don’t have a vulnerability that lets someone get in and change state without your knowledge.
Then moving on to the physical pieces, the simple pieces, like intrusion alarms and physical site security, and then going beyond site security, into, as you mentioned, the cabinets for operational technology. Are those well secured and, at least to the extent possible, removed from the ability for someone to vandalize them?
Then moving on to things like valves being locked and control lines being protected. It’s a matter of looking at that full spectrum of the things that can go wrong.
One of the things I’ve tried to emphasize is having a cross-functional team looking at this. That’s one of the key pieces: it’s not only the software folks looking at it but also the operational folks who have an understanding of what could happen if a change of state occurred at an undesirable time.
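One way to picture the full-spectrum review Michael describes is a layered checklist that a cross-functional team walks for each critical facility. The items below are drawn from the examples in this conversation; the structure itself is just an illustrative sketch, not a standard SVA template:

```python
# Illustrative SVA checklist walk, grouped by the layers discussed above.

SVA_CHECKLIST = {
    "cyber": [
        "Remote SCADA access requires strong authentication",
        "Set-point and command changes are logged and alarmed",
    ],
    "site": [
        "Perimeter resists vehicle push-through (e.g., bollards, not just a chain)",
        "Physical intrusion alarms report to the control center",
    ],
    "equipment": [
        "OT cabinets (RTUs, PLCs, motor controls) are locked",
        "Manual and bypass valves are locked or handwheels removed",
        "Regulator control lines are shielded from tampering",
    ],
}


def report_gaps(findings: dict) -> list:
    """Return every checklist item the assessed site failed or skipped."""
    gaps = []
    for layer, checks in SVA_CHECKLIST.items():
        for check in checks:
            if not findings.get(layer, {}).get(check, False):
                gaps.append(f"[{layer}] {check}")
    return gaps


# Example: a station with solid cyber controls but weak physical security.
site_findings = {"cyber": {check: True for check in SVA_CHECKLIST["cyber"]}}
for gap in report_gaps(site_findings):
    print(gap)
```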
Russel: The other thing that you talk about in your article is the remote facility physical vulnerabilities assessment. We’ve probably talked about this already, but maybe you could elaborate about that a little bit as well.
Michael: We’re talking about not only the SCADA systems but also the remote terminal units (RTUs) and the PLCs (programmable logic controllers) at the stations, and so forth.
When we look at that not only from a criticality standpoint but from a vulnerability standpoint, could someone at that site create a change of state at that remote location, operating a remotely operated valve, for example, that the operator doesn’t wish to have operated at that particular point?
Russel: Something as simple as can I walk up, switch a valve into manual, push some buttons, and change a setting. That could fall into just vandalism.
Michael: Again, with a little bit of knowledge (that) could be dangerous.
Russel: Versus I have all that secured and locked up. Vandals couldn’t get to it as easily.
Michael: Well put.
Russel: You have the same issue with valves that can be operated by hand. Manual isolation valves and such, those are some of the other things that would need to be looked at in this context.
Michael: That’s correct, yes. Not only isolation valves, but also bypass valves. Bypass valves can cause a significant problem for you if operated incorrectly.
Russel: The typical way that a pipeline station would be implemented is that all of those valves have wheels or handles to allow them to be manually operated by anyone who walks up.
You might do something as simple as just taking the crank off the valve stem and putting that in the back of your truck. That is something that creates another level of obstacle for somebody who’s trying to do something nefarious.
Michael: That’s a very good point, and a very good example as well. Many operators depend on some quarter-inch chain from Home Depot and a padlock to lock that wheel. Cutting that doesn’t take as much effort as dealing with no wheel there at all. That’s a good example of things that can be done when folks are doing this security vulnerability assessment.
Russel: I’m sitting here. I’m thinking about this. I’m thinking about what some of my operator friends might say if I was talking to them. They’d say, “Well, you know, if I find myself there in an emergency situation – I need to close that valve – what do I do if I don’t have that on my truck?”
A lot of these things, on the surface of them, I guess, could seem simple, but when you begin to really unpack them operationally, it’s non-trivial.
Michael: That’s exactly correct. Those are things that the operations folks will have to think through thoroughly in this overall process. Again, that’s why a cross-functional team would be valuable in doing this, so that it’s not just the cyber folks on the software side of it.
Russel: I actually did a podcast once with a PHA guy, a process safety expert. We were talking about alarm management and PHAs and how those processes are both similar and different.
With this kind of cybersecurity and operational reliability conversation, it’s the same thing. All of the things you do to get those people together and build the context in order to have the conversation are the same.
Michael: That’s correct. I compare it to, what do they say, a parallax view. Everybody’s looking at the same thing, but they’re all looking at it from their own angle. When orchestrated properly, that’s not a bad thing. That is a good thing, to bring all of the disciplines to bear on this situation.
Russel: The hardest thing about PHA is getting everybody in the room and getting the context built. Actually asking the questions is relatively straightforward. Getting everybody in the room and getting the context built, that’s difficult. It’s expensive. Once you get them there, you tend to want to keep them there until you’re done.
Pulling all that expertise out of daily operations is non-trivial. I understand what you’re getting at here. I see the value of it, but I also have some sensitivity to the reality of doing it in practice.
Michael: At the same time, again, having been an operator in a previous life, one of the things that I continue to ask myself to this day (is), ‘Do I want to have the struggles getting everybody in the room at the same time and the expense involved with that, compared to an NTSB investigation?’
Russel: We ought to have a beer over that conversation right there. Again, it’s really not an easy conversation.
Michael: No, it’s not.
Russel: You’re talking about things that are highly unlikely but have immense consequences if they were to occur. Those are the hard things to manage.
Michael: Very often, as in my role now with Black & Veatch, as I deal with our clients and operators, it gets down to a question of how much risk appetite do you have. You can design your system so that a crash of a 747 on a town border station doesn’t wreck anything. [laughs] Is that logical? At the end of the day, how much risk can you afford? How much risk do you have an appetite for?
Russel: This is just one of many risks. The challenge for any operator is I’ve got a limited budget to expend for risk mitigations. Where do I spend it?
This has really been interesting. I really appreciate the time, Mike. It’s caused me to think about these TSA guidelines a bit differently. I’ve read them and understood them from a purely cyber standpoint, but this is a little bit more than just the purely cyber aspect of these guidelines.
Michael: Thank you. Again, what I try to look at is that concept of beginning with the end in mind. Unfortunately, if your thought process is strictly on the IT side, that is probably not the end result that should be looked at.
When you’re operating a pipeline system, how is it going to affect the public? How is it going to affect the safety of your system, and so forth? That, to me, is the end that should be looked at. There are a lot more facets than just the software.
Russel: I agree. I think I now have a clear idea of what you mean by looking at it with the end in mind. Perfect.
[background music]
Russel: Look, I appreciate your time. This has been awesome. Have to get you back again sometime in the future.
Michael: Sounds good. Thank you very much for having me on.
Russel: I hope you enjoyed this month’s episode of the Pipeline Technology Podcast and our conversation with Mike. Did you know that it’s time to submit your nominations for the 2022 Pipeline & Gas Journal Awards? Simply go to the episode page and click the link to submit. [Note: the nomination period closed on July 15. Join us for the awards event on November 17 in Houston.]
If you’d like to support this podcast, please leave us a review on Apple Podcasts, Google Play, or wherever you happen to listen. If there’s a Pipeline & Gas Journal article where you’d like to hear from the author, please let me know either on the Contact Us page at PipelinePodcastNetwork.com, or you can reach out to me on LinkedIn.
Thanks for listening. I’ll talk to you next month.
[music]
Transcription by CastingWords