In this week’s Pipeliners Podcast episode, Russel Treat welcomes back cybersecurity expert Clint Bodungen to discuss the latest developments in cybersecurity, specifically cyber vulnerabilities and quantifying risk.
You will learn about the CVSS scoring system that Clint’s team is building upon to create the IVSS system for comprehensive risk quantification calculation for industrial cybersecurity, how to think about the current industrial standards for cybersecurity, and more critical topics.
Cybersecurity Risk: Show Notes, Links, and Insider Terms
- Clint Bodungen is an ICS cybersecurity guru, the author of “Hacking Exposed: Industrial Control Systems,” and he teaches at the Gas Certification Institute (GCI). Connect with Clint on LinkedIn.
- ICS (Industrial Control Systems) refers to the control systems and instrumentation used for industrial process control. These systems are used in oil & gas and other key industries.
- IIoT (Industrial Internet of Things) is the use of sensors and connected devices for industrial purposes, such as communication between network devices in the field and a pipeline system.
- S4x19 was an ICS cybersecurity conference that took place in January 2019 in Florida. The conference brings together industry leaders, guest speakers, and the overall cybersecurity community to learn about the latest trends and information.
- CVSS (Common Vulnerability Scoring System) is a standard used to assess the severity of a system’s security vulnerability. CVSS assigns severity scores to vulnerabilities, creating a hierarchy of what should be responded to first.
- IVSS (Industrial Vulnerability Scoring System) is currently a beta system built off the CVSS concept to assess risk from a more specific industrial perspective. [Find the scoring system on Clint’s website.]
- The US-CERT (United States Computer Emergency Response Team) provides a vulnerability database and resources for industrial control systems stakeholders.
- NIST SP 800-30 is a government paper that serves as a Risk Management Guide for information technology systems.
- NIST SP 800-39 is a companion NIST publication that provides guidance on managing information security risk at the organization, mission/business process, and information system levels.
- The ISA99 standards development committee brings together industrial cybersecurity experts to develop ISA standards on industrial automation and control systems security.
- The ISA/IEC 62443 Cybersecurity Fundamentals Specialist certificate program is designed for professionals involved in IT and control system security roles who need to develop a command of industrial cybersecurity terminology and an understanding of the material embedded in the ISA99 standards.
- ISO 27005 refers to a published set of standards for information security risk management that includes security techniques.
- PSM (Process Safety Management) is an approach to manage industrial hazards and to reduce the frequency and severity of incidents. [Read the OSHA standard.]
- HAZOP (Hazard and Operability Study) is a structured and systematic examination of a complex planned or existing process or operation, conducted to identify and evaluate problems that may represent risks to personnel or equipment.
- PHA (Process Hazard Analysis) is a structured analysis process, drawing on methods from safety-related industries, used to determine the real risks of cybersecurity incidents. [Watch this SANS webinar on Combining Process Safety & Cybersecurity.]
- CISSP training is a form of training and education on quantitative risk analysis and risk management that focuses on identifying threats and vulnerabilities while implementing controls.
- The RIPE (Robust Industrial Control Systems Planning and Evaluation) Program was created by The Langner Group to provide a framework for effective and sustainable ICS cybersecurity. [Read this whitepaper: A RIPE Implementation of the NIST Cyber Security Framework.]
- The ICS Cybersecurity Conference took place in October 2018 in Atlanta, covering critical cybersecurity topics. Clint Bodungen spoke on Red Team/Blue Team Industrial Cybersecurity training.
- ThreatGEN is a virtual reality (VR) industrial cyber-physical range for physical threat response training, process improvement, and team events.
- ThreatGEN Red vs. Blue is an online multiplayer training platform version of ThreatGEN.
- Maxxsure is an independent cyber risk assessment platform providing quantitative framework-based (NIST and others) detailed assessment of processes, hardware/software, and network vulnerabilities.
Cybersecurity Risk: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 60, sponsored by EnerSys Corporation, providers of the Pipeline Operations Excellence Management System compliance and operations software for the pipeline control center. Find out more about POEMS at enersyscorp.com/podcast.
[music]
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations.
Now your host, Russel Treat.
Russel: Thanks for listening. We appreciate you taking the time. To show that appreciation, we’re giving away a customized YETI tumbler to one listener each episode. This week, our winner is Jenny Childers with WaterBridge Resources. Congrats, Jenny, your YETI is on its way.
This week, we have Clint Bodungen returning to the show. Once again, we’re going to talk about cybersecurity and the quantification of cybersecurity risk. Clint Bodungen, welcome back to the Pipeliners Podcast.
Clint Bodungen: Thank you. Good to be here again.
Russel: Are we ready to redline the geek-o-meter?
Clint: Let’s do it. There’s going to be some math.
Russel: [laughs] Oh, gosh. I love math. That may scare some people off. I don’t know.
Clint: I failed math in high school and college. Now I’m doing it. That’s what we’re going to talk about. That should scare people even worse.
Russel: [laughs]
Clint: You’re having me do math in industrial control systems by somebody who failed math.
Russel: Everybody does at one point or another. That's just the nature of it. Anyway, let's jump into it. We brought you on to talk about vulnerability and risk quantification. Maybe you can tell me why this is topical. What's going on in the cybersecurity world that makes this conversation timely?
Clint: I just recently did a panel discussion on this at the S4x19 conference, a couple weeks ago in Miami South Beach, Florida. That's Dale Peterson's conference. We discussed adding a modification, or doing a modified version, of the Common Vulnerability Scoring System, or CVSS, because the industrial community feels that the CVSS…just let me back up.
The CVSS, the Common Vulnerability Scoring System, is a scoring calculator. It produces the number that you see on every vulnerability advisory and every alert. Basically, if you go to the CVE entries in the vulnerability database, or to US-CERT, any time you have an alert, there's a number on there. That's your CVSS score.
That’s the Common Vulnerability Scoring System score that either CERT or the researchers have designated this to be. It’s based off of a severity and consequence, mainly just basing off of severity. I’m just going by what the community says here. I’m not going to give my opinion right now.
The CVSS has not served industrial control systems vulnerabilities very well because it scores against confidentiality, integrity, and availability. Of course, availability and integrity do matter in industrial control systems security, but the CVSS focuses on things like data theft, the confidentiality aspect, and privacy.
Even the temporal and environmental sections of the calculator, which try to take into account some environmental localization to help customize the score for the user, aren't very ICS applicable. There's been a lot of talk and debate in the ICS community that we need a calculator or a scoring system that's more applicable to industrial control systems. That's where this all spawned from.
A bunch of colleagues and myself have started working on trying to create a modification. Another couple panelists at S4, Art Manion and Billy Rios, also had their own versions. It seems to have gotten pretty good reception. There’s one camp that says, “Yes, we need this.” There’s another camp that says, “Ah, the CVSS is good enough if you just use it right.”
Either way, that’s what’s going on in the industry and why it’s a pretty timely topic right now.
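[Editor's note: For readers who want to see the mechanics behind the score Clint describes, below is a simplified sketch of the published CVSS v3.1 base-score formula in Python. It assumes Scope: Unchanged and omits the temporal and environmental metric groups discussed above; the metric values are taken from the FIRST specification.]

```python
import math

# CVSS v3.1 metric values from the published FIRST specification.
# Simplified: Scope is assumed Unchanged; temporal/environmental omitted.
ATTACK_VECTOR = {"network": 0.85, "adjacent": 0.62, "local": 0.55, "physical": 0.2}
ATTACK_COMPLEXITY = {"low": 0.77, "high": 0.44}
PRIVILEGES_REQUIRED = {"none": 0.85, "low": 0.62, "high": 0.27}
USER_INTERACTION = {"none": 0.85, "required": 0.62}
CIA_IMPACT = {"high": 0.56, "low": 0.22, "none": 0.0}

def cvss_base_score(av, ac, pr, ui, conf, integ, avail):
    """Compute a CVSS v3.1 base score (Scope: Unchanged)."""
    # Impact sub-score: driven entirely by confidentiality, integrity,
    # and availability -- the IT-centric part Clint is describing.
    iss = 1 - (1 - CIA_IMPACT[conf]) * (1 - CIA_IMPACT[integ]) * (1 - CIA_IMPACT[avail])
    impact = 6.42 * iss
    # Exploitability sub-score: how reachable the vulnerability is.
    exploitability = (8.22 * ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac]
                      * PRIVILEGES_REQUIRED[pr] * USER_INTERACTION[ui])
    if impact <= 0:
        return 0.0
    # Round up to one decimal place, capped at 10, per the spec.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# A remotely reachable, low-complexity flaw that only affects availability:
print(cvss_base_score("network", "low", "none", "none", "none", "none", "high"))  # 7.5
```

Note how an ICS-relevant outcome like loss of view or loss of control has to be squeezed into those three CIA impact values, which is the gap the IVSS discussion later in the episode addresses.]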
Russel: We’ve talked about this on some of our other conversations, even going back as far as December 2017, where we talked about risk, quantifying risk, and how risk from a ICS standpoint is different than risk from other types of cybersecurity standpoints.
One level of cybersecurity is, I want to protect my credit card information; I don't want my photos to get loose in the ether. Another level is, I don't want a bad actor to take control of the refinery down the street. Those are fundamentally different classes of risk. I'm not saying one is bad, or that one is comparatively more or less risky. They're just very different classes.
Clint: It’s like insurance, actuarials. The risk to your life is different than the risk to your car and the risk to your house, but you still have to have a policy for all of those. Let me say this. I’m going to preface what I’m going to say early in this conversation so that we don’t lose a bunch of people because there’s a crowd out there that will say this and be concerned about this.
When I say vulnerability scoring, I'm not saying that that is risk quantification. Vulnerability scoring and quantification are just one piece of an overall risk quantification. I want to get that out there right now and differentiate those two before we go further.
Russel: I know on a couple of our previous episodes, some of the conversation has blown up a little bit. We'll try to mitigate that risk, too. [laughs]
Clint: Yeah, since we’re mitigating risk here. I’ll reiterate one thing that I did mention in a previous episode, just to get the risk conversation started. We don’t do cybersecurity just to do cybersecurity, and I’ve said this before. We do cybersecurity because cyber is a vector to risk. We do cybersecurity because of risk.
Everybody really wants to know where their risk is, what the severity or impact of that risk could be, what the likelihood is that the risk could be realized, and how to deal with that risk with the resources they have available.
The difference between the cyber risk in the IT world and the industrial world, you touched on it, is that, in the IT world, you’re mostly talking about the risk to my private information, the risk to my credit cards, the risk to my money, that sort of thing.
In the industrial world, there is still a risk to money, because production equals money, but there's also risk to health and safety, human life, and those sorts of things.
Russel: And environment.
Clint: Exactly, and environment.
Russel: I think that’s important to understand, is that the nature of the risk is just different. Clint, you brought something up we ought to spend a little time on before we actually go on the microphone. That was that, to some degree, risk is a bad word in the cyber community.
Clint: Right.
Russel: Maybe you could tell me why that is or why you think that is.
Clint: I think that, in a lot of ways, you could explain why risk is a bad word more in the pipeline community. In general, risk has become a bad word because, like compliance, a lot of people think compliance doesn’t equal security. There’s a lot of tedious calculations and things that go along with compliance.
Similarly, risk has been made tedious, and there's a lot of controversy surrounding risk due to the number of different standards out there that help people identify, assess, and quantify it. For example, NIST SP 800-30 and 800-39 have one way of identifying risk.
If you look at it from the industrial community, ISA99 or the IEC 62443, they have a way of identifying risk. Even the ISO series 27005 has a different way of identifying risk. There’s overlap. There’s so many different ways.
Another reason, especially in the industrial community, why risk assessment and risk quantification analysis have a bad name is overly vague and ambiguous heat maps. Basically, "This is low; this is high," on a heat map. That really doesn't help you do any type of predictive analysis. It really doesn't help you prioritize what systems or what vulnerabilities you should be tackling first.
In the end, that's what you want to do. There are two types of risk analysis. One, and it's what we look at in terms of insurance or financial risk, is predictive analysis. We want to know, based off of the data that we're given, what's the likelihood that I have some sort of exposure? What's the impact of that?
The other type of risk assessment is, I just want some way to distinguish these risks from one another and prioritize them against one another, to help me form a more targeted, cost-effective risk management strategy. The latter is a lot more practical for your everyday user and owner/operator than trying to do a predictive risk analysis.
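[Editor's note: The second, comparative style of risk assessment Clint describes can be as simple as ranking findings against each other by a localized score rather than predicting loss. A minimal hypothetical sketch, with invented findings and scores:]

```python
# Comparative prioritization: no prediction involved, just ranking
# findings against one another by a localized score. The findings
# and scores below are invented for illustration.
findings = [
    ("historian server, unpatched remote service", 6.1),
    ("safety PLC, hardcoded credentials", 9.3),
    ("corporate laptop, outdated browser", 4.0),
]

# Tackle the highest-scoring finding first.
for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
    print(f"{score:4.1f}  {name}")
```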
Russel: You just said a whole lot there. I want to try and put that in terms that might be more understandable or relatable for folks who are not necessarily cybersecurity folks. There is a big push in the pipeline industry, and API’s running point on this, called pipeline safety management systems.
They have a whole process, and they have some tools on their websites that you can use to evaluate your program for pipeline safety. One of the retired pipeline operators I know made a comment to me that I think is really material: there is a fixed amount of resources to be applied to risk mitigation.
One of the biggest challenges faced by an operating executive is, “Where do I apply my limited resources to get the best result in terms of mitigating my risk?” There’s cyber risk, which is one type, but there’s also integrity risk, there’s operating risk. There’s a lot of different kinds of risks.
The beauty of quantification is it gives you an easily understandable result of analysis that can be used to support decision-making, which is the point you're making. For people who are familiar with PSM, process safety management, it's the same kind of thing.
Clint: Right, and that’s important to remember when you’re talking about assessing risk, especially in the industrial side of things. I think I mentioned this in a prior episode. The industrial engineers, operators, and asset owners have been doing industrial process risk management for decades. It’s called a process hazards analysis or HAZOPs.
That is the method that should continue to be used as opposed to trying to take these IT-centric risk assessment methods and force them upon industrial. Basically, just view cyber, like you said, cyber is just another avenue to risk. It’s just another type of risk that has to be considered in your HAZOPs.
Russel: Exactly. That's exactly the point I'm trying to make. It's all the same thing, and if you're dealing with industrial control systems, ICS, then you should think about this in the context of industrial safety management. That's the new buzzword we've invented today, Clint. It would be cyber safety management.
Clint: Yeah. We need to go back to a couple of our prior episodes. We’ve come up with phrases that we think we’ve coined before, and I think this might be one of them. We need to double check because every time we sit there and think we’ve come up with something…
Russel: [laughs]
Clint: …I think we’ve already done it. If not, then we’ve got two or three we’ve coined here.
Russel: Exactly. We should create a catalog.
Clint: I have heard the term cyber safety. It’s only recent, so be careful about thinking you’ve coined it. I think there are a lot of people right now starting to catch on to the fact that cyber is an element to a PHA, and safety being an element of that. I’ve seen that somewhere, the whole cyber safety thing. It’s starting to get around.
Russel: It makes sense. If I’m running an operating company, I’m thinking pipelines, but this could likewise apply to the process operators, the refineries, the petrochemicals, and others, cyber is just another type of operating risk that has to be addressed.
Clint: Right.
Russel: It’s really part of an overall program. I guess the point we’re trying to make here is you don’t want to think of this as a one-off. It’s part of the overall program.
Clint: Right. Even in IT, you have your ongoing risk management or risk assessments. Too many people consider a risk assessment a one-off thing. If they’re really on top of things, they’ll have a little mini-risk assessment every time they make a change to their network.
If you have a change request, you have to fill out this little risk assessment form that has maybe four or five different variables you check off, which is fairly meaningless anyway, and that's if they're really good. Instead of one-off risk assessments, risk always needs to be evaluated. It always needs to be a part of change management.
It needs to be a constant part of your monitoring system, if you have one, or your IDS. All of your threat intelligence and your cybersecurity monitoring, everything should always be feeding into your situational risk, your risk management system. I don't think there are enough people out there doing it.
That’s one of the elements to where I was talking about risk is a bad word. If people would be using risk properly, using all of the data points and all of the situational awareness that they have to feed into a constantly always on risk evaluation, then that’s going to be a lot more meaningful for people than to be using these calculations that use speculative and presumptuous numbers.
For example, a lot of risk calculations have likelihood in them. The CISSP risk analysis formula especially drives me nuts. It says, "Over the course of a year, this has a likelihood of happening twice." In other systems, you have ratings like, "What's the likelihood of this happening, one through five?" People are guessing at these numbers. They have no basis for coming up with them.
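[Editor's note: The CISSP-style formula Clint is objecting to is annualized loss expectancy: single loss expectancy (asset value times exposure factor) multiplied by an annualized rate of occurrence. A worked example follows; every number in it is made up, which is exactly the problem Clint is pointing out.]

```python
# Classic CISSP quantitative risk formulas:
#   SLE (single loss expectancy)     = asset value * exposure factor
#   ALE (annualized loss expectancy) = SLE * ARO
# The ARO ("this happens twice a year") is precisely the speculative,
# presumptuous number Clint is criticizing. All figures are illustrative.

asset_value = 2_000_000   # replacement cost of the affected system ($)
exposure_factor = 0.25    # fraction of the asset's value lost per incident
aro = 2                   # annualized rate of occurrence: "twice a year"

sle = asset_value * exposure_factor   # $500,000 per incident
ale = sle * aro                       # $1,000,000 expected loss per year
print(f"SLE = ${sle:,.0f}, ALE = ${ale:,.0f}")
```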
Russel: Very different than what’s done in pipeline integrity where there’s science behind how they come up with the likelihood of failure. There’s actually research and data that goes behind how they came up with those calculations.
Clint: Exactly. There’s something that the IT folks can actually learn from the industrial side of things. It’s how you come up with your probability factors and your likelihood of an incident.
Russel: That’s a good segue, too, because what we had talked about as we were visiting before we got together here was the idea of consequence-driven risk management. That’s a pretty easily understood thing to people who come out of the process safety or the pipeline safety background because they’ve got a mechanism for scoring different types of outcomes that could occur.
How does that get applied in ICS cybersecurity?
Clint: The consequence-driven risk management piece borrows from a bunch of other industrial risk frameworks, like Ralph Langner’s RIPE System, the Bow Tie system. It’s basically enumerating what the consequences of a process failure are and then figuring out the impact.
Before you get into likelihood or even impact, you really need to understand all of the individual consequences, and even cascading consequences, of a process failure before you can start to get into the impact. It’s like the consequence piece fits in between the likelihood and the impact. It’s part of that overall equation.
Anybody that’s been doing HAZOPs or PHAs in the industrial sector, you understand the meaning of consequence. You understand that a consequence is not an impact. For example, the best way to explain this is a plant shut down is a consequence, not an impact. How much money it’s going to cost you or damage, that’s the impact.
That’s lost in the IT risk assessments. They go straight to, “What is this going to cost us?” or, “What’s the impact? What’s the likelihood?” Most IT risk assessment frameworks don’t get into, first of all, it’s basically another step in the equation. This is what I like about a consequence-driven approach.
If you break down the consequences of each failure, action, or incident, which could be positive or negative, as opposed to trying to jump straight to impact or likelihood, it actually helps you build a threat model or a risk model with that in mind. Unfortunately, we're not using graphics here, so I can't show this. I did a talk at the ICS Cybersecurity Conference in Atlanta.
I was explaining that, for every attack vector you have, an action has certain consequences. Each individual consequence leads to one or more other attack vectors or consequences. You can take a much more granular approach to your quantification, which adds a lot more accuracy.
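[Editor's note: Since the graphics from that talk can't be shown here, note that the structure Clint describes maps naturally onto a directed graph: each attack vector or consequence points to the further consequences it enables. A small sketch follows; every node name in it is invented for illustration.]

```python
# A hypothetical consequence graph. Each attack vector or consequence
# maps to the further consequences it enables; all names are invented.
consequence_graph = {
    "phishing foothold on business network": ["engineering workstation compromised"],
    "engineering workstation compromised": ["HMI credentials stolen"],
    "HMI credentials stolen": ["setpoint manipulation"],
    "setpoint manipulation": ["plant shutdown", "equipment damage"],
    "plant shutdown": [],    # terminal consequences; their impact ($,
    "equipment damage": [],  # safety, environment) is assessed separately
}

def cascading_consequences(graph, start, seen=None):
    """Enumerate every consequence reachable from a starting vector."""
    seen = set() if seen is None else seen
    for nxt in graph.get(start, []):
        if nxt not in seen:
            seen.add(nxt)
            cascading_consequences(graph, nxt, seen)
    return seen

print(sorted(cascading_consequences(consequence_graph,
                                    "phishing foothold on business network")))
```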
Russel: If you’ve got something that you’ve got on a website someplace and we can link to it, then we’ll put that in the show notes.
Clint: I do have a link when we talk about the risk quantification and scoring. I will be posting, pretty soon, a white paper on consequence-driven risk management, which is a spin-off of the talk that I did in Atlanta.
Russel: Cool. That’s, again, a good segue. That goes to your Industrial Vulnerability Scoring System. I want to know what the IVSS is.
Clint: Like we mentioned at the beginning of the show today, this was a spin-off of a community debate that Dale Peterson facilitated at his S4 conference. As for the Industrial Vulnerability Scoring System, we'll provide the link. It's currently in a beta state.
It takes into account not only all of the factors in the original Common Vulnerability Scoring System, all these things that help to contribute to the severity of the vulnerabilities such as, is there a public exploit available? What’s the difficulty level of the vulnerability, of the exploit? What’s the user level required, the attack vector, access levels?
It takes into account a lot of things that need to be looked at to determine the severity of the vulnerability. However, there are other things we want to take into account for industrial, beyond just confidentiality, integrity, and availability. These are called multipliers or modifiers.
You have these basic scores, ratings one through five, whatever. It’s on a 10-point scale. There are certain things that are aggregators. Basically, all these things add up to a piece of a score. There are certain things that are called multipliers. Things that are more important on your criticality radar are going to be multipliers. They’re going to have a higher weight to them to increase or decrease the score.
For example, in the Common Vulnerability Scoring System, the multipliers take into account to what severity, or to what depth, a vulnerability affects confidentiality, integrity, and availability. We kept that concept intact in the IVSS, but we didn't make confidentiality, integrity, and availability the multipliers. We put them into the base cyber consequences.
For example, data manipulation and data extraction, these are confidentiality and integrity kinds of things, as well as denial of service and taking control of the system. What we've made our multipliers, though, the things we care about in the industrial arena, are safety, reliability, and production.
We’ve made the impact multipliers, “How does this affect your production, your safety, or your reliability?” As consequences, we want to say, “To what severity, to what degree is this incident or this vulnerability capable of affecting process visibility, process control, process monitoring?”
Russel: I’m listening. I’m thinking this through as you’re talking. The only thing I hear that’s missing is it ought to be safety, environment, reliability, and production. Environment ought to be in there.
Clint: You’re exactly right. That’s one thing we should put in there. Okay, noted. Like I said, it’s in beta status.
Russel: [laughs]
Clint: We did put something like that in there. Instead of spelling out environment, we have another multiplier in there for cascading consequences and collateral damage. Collateral damage could be part of environmental, so we do have that.
Russel: The point I would make to you, Clint, is to the extent you can put this stuff in language that leadership will understand.
Clint: That’s a good point.
Russel: It becomes more usable to the industry. Management’s not going to really understand depth of penetration, vulnerability, exploit, or those kinds of words. They do understand the consequence related to your ability to operate safely, reliably, meet your production targets, and no releases. Everybody understands triple zero, no incidents, no releases, no injuries.
Clint: What’s important to note is that we’re really not trying to replace the Common Vulnerability Scoring System because I don’t think that the industry or the world is ready to completely uproot an entire system that’s already put in place just to have a separate system for industrial.
This is really meant for the operators and the asset owners. This is meant for you to be able to take it and implement your own values. It's very localized. In the base criticality score, there are pieces and variables that a general researcher who doesn't know your system or your environment can score, and that becomes your base score.
A lot of times, we're finding out that, the way we've got it calculated, that base score is very close to the original CVSS score anyway. What's different is all of the industrial-related consequences and impact, which are up to the local asset owner/operator to modify, to localize the score and make it more targeted.
What this is for, like I mentioned before with the two different types of risk assessments, is to add a score to your overall risk assessment that is going to help you differentiate vulnerabilities against each other and understand what you need to tackle first.
I'll say it again: this is not a risk assessment. This is only the tip of the iceberg of a much larger risk quantification. It still needs to be localized and to take into account at least some sort of environmental factors, consequences, and impact. This is going to feed into a much larger calculation.
Russel: That’s right, and it’s a process, too. You do these things. The big thing I think for operators is, “Where do I stop? Where’s the first bite I take out of the elephant? Where do I start this process? What can I do now? Where’s the low-hanging fruit? What are easy things I can do that have big consequences?”
Clint: That’s a key. I want to touch on something real quick. You said, “What’s the low-hanging fruit?” Too many people get hung up on low-hanging fruit and quick wins. It doesn’t make any sense to expend resources on low-hanging fruit or quick wins if it’s not that impactful.
At some point, there is an imbalance between the resources it still takes to address low-hanging fruit and quick wins and the impact you get from doing so. Whereas, all of a sudden, if you look, you might have a critical vulnerability that has easy access and is critical to the process, with severe or catastrophic impact. You should probably be taking care of that first, even if it's not a quick win or low-hanging fruit.
Russel: We should probably talk a little bit about semantics. In my mind, that’s part of what a quick win is. It’s moving the needle. How do I move the needle the most, the quickest, with the least effort?
Clint: I agree. I just want to make sure that everybody listening understands that "quick win" and "low-hanging fruit" don't necessarily mean impactful. A lot of times, low-hanging fruit, especially in vulnerability assessments, means knocking out the things that are easy to fix, but they're not always necessarily really, really critical.
Sometimes it gets political where people just want to do some things to show management, “Hey, I closed these gaps really quick.”
Russel: I did an episode recently on leak detection program management. I was interviewing a group of guys. One of the things that was really interesting to me is they felt the most impactful thing they did was put together a philosophy document that they were using for governance for the program.
It allowed them to focus on what they needed to do next. I found that kind of interesting. Typically, you wouldn't think of that task as being valuable.
Clint: I think that is right in line with one of the first things that you need to do when you're performing an overall risk assessment. By the way, a risk assessment is not one of these little bitty exercises that just assesses the risk of one system. A risk assessment is a complete, comprehensive series of tasks and projects that all contribute to assessing the risk.
It includes vulnerability assessments, pen testing, asset identification, policy and procedure review, etc. One of the first things you do is list out your operational objectives, your business objectives, and your mission statements. At the very beginning of a risk assessment, you're outlining, "What are our objectives?"
To that end, your philosophy and your objectives all work together to help guide what path you’re going to take, very much what you’re saying.
Russel: In this whole conversation, I like using a really simplistic illustration. Cybersecurity and whatever you’re doing in that program, you can think of that as putting locks on the house. If I’ve got a shed that’s 25 years old and there’s nothing in it of value, I might not even put a lock on the door. I might not even put a door on the shed.
If I’ve got precious gems, precious metals, and documents that can’t be replaced, I might have those in a safe in a hidden area behind a locked door, behind a locked fence with security around it. It’s the same kind of thing here.
If I’ve got an industrial process that really it’s ancillary and there’s not really anything that could physically go wrong there, that’s a very different thing than if I’ve got a critical process controller controlling the feed to a plant. If I don’t properly control the feed to the plant, then I lose the plant. It’s just a different context.
Clint: That’s a perfect context. To illustrate that, I literally just got through doing a penetration test for an industrial company to where their Windows servers and host on their control system network were so bad, so littered with vulnerabilities. You had it in a remote attack footprint that was through the roof, and it was easily exploited.
However, the overall risk factor of those systems was very low, because there was no way to access them. You had to have physical access. They had a data diode that was the best data diode I have ever seen implemented in the industrial environment. There was no access to it. No way in and no way out.
Even if someone had physical access to their local LAN, and could get to those computers through the local LAN, there's no way to exfiltrate data. There's no way to do anything. Anyway, the risk was so minimal that even though it was littered with vulnerabilities, there was no access. It was low impact.
There was another situation where there was access to a computer that had tons of critical vulnerabilities, but it wasn't tied to any critical process of any consequence. Therefore, it was a low risk, to your point.
Russel: What we’re really talking about is having a way or a mechanism to communicate. The other thing I think that’s important about this is anybody who’s doing industrial cybersecurity is doing an industrial process and has other risk that they have to manage. There’s operating risk. There’s corrosion.
There’s all these other things that they have to be looking at for the leadership, for the people that are trying to figure out how to operate the enterprise effectively, safely, profitably. They’re always trying to figure out what are the things that need to be done versus what are the things that can wait. That matter doesn’t need to be done at all.
Clint: It ties right back to the point we made at the beginning, that that's where industrial risk differs from IT risk and plain cyber risk. When you're talking about the overall risk to the business and what it is you're trying to protect, you can't lose sight of the fact that we're not just looking at malicious attacks. We're looking at an all-hazards approach, an all-hazards assessment of what types of risks we have to the process and to the business.
Like you said, it’s corrosion; it’s equipment failure. It’s not just malicious attacks. You have to focus on the new attack vector, the cyber aspect of malicious attacks to include in that. If you view it like that, an all-hazards approach, and don’t get so caught up on the fact that all we’re looking at is malicious behavior, then you’re headed down the right path.
Russel: That’s exactly right. I’d agree with that 100 percent. It’s a good place to wrap up our episode. Clint, as always great conversation. We’ll link up the resources as you mentioned in the show notes. If any listeners want to do some research and see some of the things we’re talking about, they’ll be available via show notes. Again, thanks for coming aboard and joining the conversation.
Clint: Thank you very much. There's one last thing I want to mention. All of this talk about risk and risk scoring, I said, is the tip of the iceberg. I just want to mention real quick that I'm working with an organization called Maxxsure right now, helping them come up with an entire risk quantification mechanism and calculation for industrial risk assessments.
If you do happen to go look at the IVSS system that we’re doing and you like what we’re doing there, again, that’s the tip of the iceberg. What we’re working on in the broader picture is a much more comprehensive risk quantification calculation for industrial. I just want to put that out there.
Russel: Clint, I hadn’t asked you this in a while but I’ll ask you again. What’s the best way if somebody wants to reach out and find you for them to reach out and find you?
Clint: The best way to find me is you can find me on Twitter, but my Twitter handle is a little bit obscure. We’ll go with my email address, which is clint@threatgen.com. We’ll talk about that name later. My personal website for all of this stuff and all my projects is pretty simple. It’s securingics.com.
Russel: Thanks again for joining us.
Clint: Thanks, Russel. Take care.
Russel: I hope you enjoyed this week’s episode of the Pipeliners Podcast and our conversation with Clint Bodungen. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win to enter yourself in the drawing.
If you would like to support the podcast, please leave us a review on Apple Podcasts, Google Play, or whatever smart device you happen to use. You can find instructions at pipelinepodcastnetwork.com.
[background music]
Russel: If you have questions, ideas, topics you’d be interested in, please let us know on the contact desk page at pipelinepodcastnetwork.com or you can reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
[music]
Transcription by CastingWords