This is a special edition of the Pipeliners Podcast covering the recent high-profile pipeline cybersecurity incident affecting Colonial Pipeline.
In the episode, host Russel Treat is joined by IT and cybersecurity subject matter experts Pascal Ackerman and Clint Bodungen to discuss the latest news reports covering how the threat was launched, the adversary's use of ransomware, what the bad actors are seeking from this event, and the steps pipeline operators should take to protect their data and systems.
Colonial Cybersecurity Incident: Show Notes, Links, and Insider Terms
- Pascal Ackerman is a Principal Analyst in Industrial Threat Intelligence & Forensics and the author of Industrial Cybersecurity. Pascal is also part of the ThreatGEN team. Connect with Pascal on LinkedIn.
- ThreatGEN is a virtual reality (VR) industrial cyber-physical range for physical threat response training, process improvement, and team events.
- Clint Bodungen is an ICS cybersecurity guru, the author of “Hacking Exposed: Industrial Control Systems,” and he teaches at the Gas Certification Institute (GCI). Connect with Clint on LinkedIn.
- Colonial Pipeline Cybersecurity Incident: Read this article from Pipeline & Gas Journal recapping the latest developments on the cyberattack against Colonial.
- May 12 Update: Colonial began to restart systems in the wake of fuel shortages.
- May 13 Update: Colonial reportedly paid an approximately $5.0 million ransom to restore the systems.
- Florida Water Treatment Plant Attack: Read this article from ThreatGEN on how an attacker gained access to the TeamViewer remote access software of a Florida water treatment facility. The TeamViewer access allowed the bad actor to interact with an operator station, which in turn allowed the attacker to manipulate the setpoint for a chemical dosing control of the Industrial Control System.
- Ryuk is ransomware deployed by a group of bad actors who have used malware such as TrickBot and Emotet to gain initial access and exploit vulnerabilities.
- DMZ (Demilitarized Zone) is a physical or logical subnetwork that contains and exposes an organization’s external-facing services to an untrusted network, usually a larger network such as the Internet.
- HAZOP (Hazard and Operability Study) is a structured and systematic examination of a complex planned or existing process or operation in order to identify and evaluate problems that may represent risks to personnel or equipment.
- PHA (Process Hazard Analysis) is the process of drawing on practices from safety-related industries to determine the real risks of cybersecurity incidents. [Watch this SANS webinar on Combining Process Safety & Cybersecurity.]
- PHMSA (Pipeline and Hazardous Materials Safety Administration) is responsible for providing pipeline safety oversight through regulatory rulemaking, NTSB recommendations, and other important functions to protect people and the environment through the safe transportation of energy and other hazardous materials.
- FERC (Federal Energy Regulatory Commission) regulates, monitors, and investigates electricity, natural gas, hydropower, oil matters, natural gas pipelines, LNG terminals, hydroelectric dams, electric transmission, energy markets, and pricing.
- OT (Operational Technology) is the hardware and software dedicated to detecting or causing changes in physical processes through direct monitoring and/or control of physical devices such as valves, pumps, etc.
- IT/OT convergence is the integration of IT (Information Technology) systems with OT (Operational Technology) systems used to monitor events, processes, and devices and make adjustments in enterprise and industrial operations.
- Alarm management is the process of managing the alarming system in a pipeline operation by documenting the alarm rationalization process, assisting controller alarm response, and generating alarm reports that comply with the CRM Rule for control room management.
Colonial Cybersecurity Incident: Full Episode Transcript
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. And now, your host, Russel Treat.
Russel Treat: Welcome to the Pipeliners Podcast, a special edition to talk about the recent Colonial cyber incident. We put this together at the last minute with Clint Bodungen and Pascal Ackerman of ThreatGEN.
Our purpose here is simply to — as quickly and effectively as possible — get information out about the Colonial incident and what pipeline operations could and should be doing to make sure they’re prepared for the current threat. Clint, Pascal, welcome back to the Pipeliners Podcast for this special edition.
Clint Bodungen: Thank you. Good to be here.
Pascal Ackerman: Thanks for having us.
Russel: I’m going to tee this up for the listeners. This particular episode, we’re doing it a little off our normal routine. We’ve pulled this together very quickly, at the last minute. Specifically, we’re here to talk about the recent Colonial Pipeline cyber incident that occurred over the weekend.
As we sit here recording, it’s Monday, May 10th, about 5:45 in the evening. I’ve got Clint Bodungen and Pascal Ackerman, both of whom have been on the podcast before and are my go-to guys on all things cybersecurity.
Our goal here is to pass information to the best of our knowledge. We don’t necessarily have better information than is available in the press, but we might have a better ability to interpret it, and we have experience with similar events with other operators.
Our goal here is to try to provide as good of information as we can in a very timely way. We’re going to wrap this up by talking about what pipeline operators should be doing in response to this particular cyber incident. Sound good, guys?
Clint: Sounds good.
Pascal: It’s a safe statement.
Russel: [laughs] Let me ask this question. What’s happening here? What’s going on?
Clint: Pascal, you want to lead, or you want me to talk about it?
Pascal: Yeah, go for it. I’ll fill in the blanks.
Clint: Sure. What we know so far is, and I saw this, I don’t know exactly when it started, but I think I saw the first reports on Friday. Ultimately, there’s a pipeline operator, Colonial — and I don’t think it got reported that it was actually ransomware until this weekend — but they were reportedly hit by a ransomware incident.
Make no mistake about this. As far as we know right now, the ransomware did not directly cause the outage. The ransomware hit their systems, I think their IT systems. It was reported that possibly some of their control system network systems were touched, but not production, not anything critical.
As a result, they shut down. I don’t know if they attempted to go manual first or just went for a full shutdown, but it caused the operator to shut down the system for safety precautions, I imagine. That’s where we are so far.
The actor was DarkSide, and this was supposedly, what they believe to be, a Russian-speaking actor. They operate like a business, and they have a website that has an ethics page. They specifically mentioned countries and other organizations that they will not attack, according to their “ethics.”
It included a lot of post-Soviet Union countries, and so that’s why they believe it’s a Russian organization. I literally just heard, over the last two hours, that they have now issued an apology. I haven’t seen that yet. I don’t know what this means.
Pascal: I saw that, too.
Clint: Anyway, that’s what we know. The ransom amount I saw was, I believe, that they’re asking for something like $2 million. They downloaded a bunch of the data. They said, “You’ll get access to your data, and we’ll turn your systems back on if you pay it. If not, we’re going to release all of this data.”
They have a repository of data that they stole, and they say they’re going to release it. That’s where we stand. That’s the high-level view of where we stand right now. You want to add something, Pascal?
Pascal: I was just going to point out that this looks like a typical ransomware attack, where they go in by any means necessary, they take your systems, and they start demanding ransom. The problem with this, and this is probably why they apologized as well, is that they got into something that can cause a lot of downtime for a lot of people down the pipe, pun intended.
Clint: I would like to think that the perpetrators of this have enough humanity to actually say, “Oh, this is causing bigger incidents than just us getting money or somebody losing money. We’re sorry.” I would like to think that.
Russel: Yeah, I don’t. I don’t know about all that kind of stuff. You guys are closer to that than me. I’ll run down what I understand occurred. I don’t know that I have information any different than you guys.
My understanding, and this is what’s been in the press, is that it was an attack initially on the business network and that, given the attack and out of an abundance of caution, Colonial decided to shut down its critical operation systems in order to assess and mitigate whatever they needed to do in order to address the attack.
I have heard that they have some of their terminals and spur lines already back in service. I think the thing that’s interesting about this to me is that it’s a two-pronged attack. There are two mechanisms they’re using to get paid their ransom. One is, “We’re going to shut your systems down.” The other is, “We’ve stolen this data and we’re going to release it publicly if you don’t pay up.”
For a lot of pipeline operators, there’s a lot of information they have that is sensitive. It’s information you wouldn’t necessarily want to have out in the public domain.
Pascal: People wised up because of all the ransomware that has been rampant over the years. They did their due diligence and started to back up their systems so they could recover quickly without having to pay the ransom.
The bad guys are following suit. They’re changing their tactics. Now they’re like, “Okay, if you’re not going to pay to recover your data, then you’re going to pay for me not releasing the data that I stole in the first place. Either way you’re going to pay me.”
Clint: That is a tactic we’ve been seeing more and more in ransomware. The tactic has existed, but it’s a bit new in terms of prevalence. Before, it was always, “We’ve got this ransom. Pay us or we’re not going to give you your data back. We’re not going to give you access to your systems back.”

The advice has always been, “Don’t negotiate with terrorists. Don’t pay it. You’re just going to encourage future activity.” That defense, don’t negotiate, don’t pay, backed by good backups, has actually worked.

So the new norm is, “Okay, if you don’t pay, then we’re going to release all of your data publicly.” That is the turn of events that has been happening with ransomware as of late.
Russel: Who knows what information they got, whether it’s critical or sensitive or not. I don’t guess you necessarily know. You just know that they got some information.
Let’s talk a little bit about DarkSide. What do we know about DarkSide and what do we know about how this particular ransomware might be injected into a system?
Pascal: I’ve got to be honest. I haven’t looked at the exact sample they’re using, but these are the guys that we knew before from the Robin Hood attacks. Apparently when they steal money, when they extort people, they actually give a little to charity. That has not been proven, but that’s apparently on their website. It is their motto.
Clint: I think we should be clear that DarkSide is not the ransomware itself. I guess Dragos would call them an activity group, and the government would call them a threat actor. I’m going to call them bad guys. The bad guys are DarkSide, not the malware that they use.
Russel: I’m actually reading the flash report and they’re calling the malware DarkSide.
Clint: Okay. They’re probably calling the malware DarkSide because I think the actor was calling themselves DarkSide. I could be wrong. Like you, like everyone else, this is unfolding since Friday. We’re all reading this real-time.
Russel: Yeah, we’re all trying to read the same stuff and get caught up. What was the vector? What do you think the attack vector was for this ransomware? I have my guesses, but I’d be interested to know what you guys think.
Clint: I’ll take the first stab at this. Again, it was Mother’s Day weekend, so I wasn’t exactly on top of this from the start. In fact, I appreciated a tweet from Rob Lee saying, “You know what, guys? All this stuff happening with Colonial? I get it, but I’m barbecuing this weekend, so I’ll see you Monday.” I really appreciated that.
Like the rest of the community, I was trying to keep an eye on this while also attending to my mother and my wife, who is also the mother of my children. I was dealing with that, and then we did this today, trying to see and watch all this unfold.
I haven’t seen anything definitive, but I had two thoughts. One, it could have been related to the same attack vector as the Florida water plant incident, where they got in through TeamViewer and tried to poison the water there.
I did see some reports speculating that they purchased leaked credential and password lists and got in through weak TeamViewer passwords, the same way as in the Florida water plant incident.
If it wasn’t a direct injection into the process control network, then that vector probably wasn’t it. If it came in through the IT network, then it probably arrived through an email phishing campaign or something like that, like TrickBot, like what the Ryuk actors did.

I would not be surprised to find out that this is another evolution or a descendant of the same types of attacks that the Ryuk actors used to get in, using things like TrickBot.
Russel: For the pipeline listeners that may not be familiar with some of those buzzwords, I’m going to try to dumb it down a little bit.
Basically, if it went into the business network, it was probably a spear phish. Meaning, somebody opened an email and clicked on something they shouldn’t have clicked on, that downloaded malicious software to the network, and it took off from there. To my mind, that’s most likely. Certainly, spear-phishing campaigns continue to get more elegant, more believable, and so forth.
The other possibility, using some kind of credential or password material that’s been released out there, like the TeamViewer hack, to directly access a network and drop something, I think that’s less likely.
For pipeliners that are listening, it certainly points out two things. Don’t use the same password for multiple sites. Change your passwords. Use complex passwords. That’s for all the things that might be done via direct access. The other thing is, if you get an email and you don’t know who it’s from, don’t click on anything. Just delete it. Don’t even click the unsubscribe link.
Clint: Use multi-factor authentication if you can, but also don’t provide direct access into your control systems. Regardless of whether or not this was injected directly into the control systems network, don’t allow things like TeamViewer to go directly into your control systems network without an intermediary like a DMZ.
Russel: There’s still a lot of that kind of thing out there, but I think one of the things this does, hopefully, one of the good outcomes of this event is that it’s really going to elevate in people’s mind the seriousness of cybersecurity and just how impactful a breach can be.
Clint: One would think, right? On one hand, this isn’t the first time an incident in an industrial process has happened.
Also, our own success is our own failure. What I mean by that is the exact type of kinetic consequence that people believe a cyber incident should cause or could cause is the exact type of kinetic consequence that operators have been assessing and preventing for decades with safety instrumented systems and with safety protocols, etc.
Quite frankly, it doesn’t matter whether it’s a cyber hazard or any other hazard. The failure or the consequence is the same according to a PHA or a HAZOP. For the most part, operators are pretty darn good at safety assessment and mitigation, incident mitigation for industrial processes.
Therefore, as a result, we haven’t really had major catastrophic damage or incidents from a cyber vector. Let’s be clear. Cybersecurity in an industrial setting equates to a cyber hazard vector into the overall safety process.
Because we haven’t had a major incident caused by a cyber hazard, we feel like it’s not really a thing. But that’s just because it hasn’t happened yet. I’ll stop there for comment.
Russel: I think the way I’d frame that, Clint, is just because your house has never been broken into doesn’t mean you shouldn’t lock the doors and set the alarm when you leave. That’s the analogy, I think.
Pascal: On the IT side, they have a joke where an IT security person goes to his manager when nothing is going wrong, everything is running, everything is right. The manager says, “What am I paying you for?”

The flip side of that is when everything is burning, everybody is breaking in, and the company is being breached, he goes to his manager and the manager says, “What am I paying you for?” Until you see that you need it, you don’t know that you need it.
Russel: Yeah, it’s just like being an offensive lineman. You don’t get your number called on the PA until something goes wrong.
Clint: Until you’re holding. [laughs]
Russel: That’s right. What are the implications, guys, of this hack? I think that from just a straight pipeline safety standpoint, Colonial has done a good job of addressing that.
They’ve operated in the best interest of the public from a safety standpoint because they weren’t going to operate the pipeline if there’s any risk that there’s a nefarious actor in their system. Kudos to Colonial for that for sure.
Clint: Agreed. My phone has been ringing off the hook today, and I’ve been telling people the same thing. On one hand, this is a safety success. On the other hand, it is a cybersecurity failure.
Russel: It’s an operations reliability failure.
Clint: Yeah, I think, good point.
Russel: It’s a safety success, but it’s an operations reliability failure, because you really don’t want to have to take a pipeline offline to address a cybersecurity incident.
Clint: Right.
Pascal: Ideally, you want to be able to keep running while you address this. That’s probably something we’re going to talk about next, but you need to be in a really good situation and a really good position to be able to do that. Not many companies out there can do that.
Russel: I’m certain that at the end of the day, Colonial’s going to look at this and say, “What were the financial consequences of this incident?” This is going to be looked at even broader than that.
PHMSA is going to look at it. TSA is going to look at it. FERC is going to look at it. This is going to have a major impact on the pricing of fuels in the Northeast, because this pipeline delivers about 45 percent of the market demand for diesel, aviation fuel, and gasoline in the Northeast. It has big economic implications for the people that are reliant on that pipeline.
I think one of the other things that’s interesting about this, as a pipeline guy, is that pipelines are getting all this bad rap, bad rap, bad rap, “We don’t like pipelines.” Well, this is going to pretty clearly illustrate what happens when you lose a piece of critical infrastructure.
Pascal: Yeah, how many trucks do you have to ship out now to catch up for this?
Russel: You physically couldn’t move it by truck. It’s not possible. We don’t have that many trucks.
Clint: What is it, the latest executive order or whatever from the administration, where we’re going to allow truck drivers to stay on the road longer hours and all this stuff? Yeah, you’re right, this is why pipelines exist.
Russel: They’re also going to allow tankers to take these products from the Gulf Coast around to the Northeast, but again, there’s a couple of problems with that. One is, by the time you get a tanker scheduled, you get the tanker full, and you get it around, they’ll probably have this pipeline back in service.
If it was going to be out for a long period of time, that’s going to help, but it’s not going to help in the short term. It’s more of a trucking and train problem, at least in the short term.
Let’s talk about what operators should do, because I’ve already had a number of conversations today about this. I’ve got operating executives that are calling and asking, “What do we do about this? What actions do we need to be taking?”
Let’s talk about that, because I think that’s really critical for pipeline operators right now. They’re all asking the question, “Well, what do we do so this doesn’t happen to us?”
Pascal: Let’s split that up. In my humble opinion, if I was Colonial, I would think twice before turning everything completely on. I would do a breach assessment or a threat hunt to make sure that everything is out.
We want to be sure that when we turn the key back on, when we turn switches back on, plug in cables, and get everything back up and running, it doesn’t come back. That’s threat hunting.
That’s somebody going out, collecting artifacts, looking at logs, capturing network traffic, and reaching a conclusion on whether the bad guys, or their tools, or whatever it is, have been taken out of the environment before we get back up.
It’s difficult. It’s easier said than done, because you do want to get your production up and going. There might be a middle ground in there where you bring things up partially, where you start things disconnected, stuff like that. That’s a really tough call to make, but that would be one side.
If you are infected, if you think you’re infected, or even if you want to know, at this point, “Hey, do I have something like that in my environment?” I would say look into a threat hunt, look into a breach assessment, and get to the bottom of that.
For the long term, how do you prevent this, or at least detect it? In my opinion, in general, ransomware comes into your environment in two major ways. One, it’s an opportunistic exploit. You go to the wrong website, you open up the wrong email, or your credentials were stolen or easily guessable, and somebody breaks in.
Two, you’re targeted as part of an APT. Somebody has been in your environment, and on the way out (we’ve seen it with Ryuk) they leave a nice little present for you in the form of ransomware, just to do as much damage as they can and hopefully hide their tracks. Those are the two.
You can protect against both of those kinds of ransomware infiltration with proper segmentation. We’ve talked about this before. You should make sure that your IT business networks and your OT networks are separated as much as you can.
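[As a rough illustration of the segmentation point Pascal makes above, here is a minimal Python sketch of a check that could be run from a machine on the OT segment to confirm that paths which should be blocked really are blocked. The hostnames and ports are hypothetical placeholders, not details of Colonial’s environment or of any ThreatGEN tooling.]

```python
# Minimal sketch: verify that hosts on the OT segment cannot reach
# services they should never see (the internet, IT file shares, etc.).
# Run it from a machine on the OT network. All hosts/ports below are
# illustrative placeholders.
import socket

# (description, host, port) tuples that SHOULD be unreachable from OT
FORBIDDEN_PATHS = [
    ("public internet", "example.com", 443),
    ("corporate mail server", "mail.corp.example", 25),
    ("IT file share", "fileserver.corp.example", 445),
]

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_segmentation() -> None:
    for label, host, port in FORBIDDEN_PATHS:
        if can_connect(host, port):
            print(f"FAIL: OT segment can reach {label} ({host}:{port})")
        else:
            print(f"OK:   {label} ({host}:{port}) is blocked, as expected")

if __name__ == "__main__":
    check_segmentation()
```

[A failing line here does not mean you have been breached; it means a door between OT and the outside world is standing open.]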
Russel: Yeah. I think the other thing, though, is even more basic than that, Pascal, because you basically said there are two primary ways this kind of ransomware attack gets delivered.

It’s either through an email or a website where someone clicks on something they shouldn’t click on, number one. Or, number two, their credentials get stolen. Those are basically the two ways these attacks manifest.
For most operators, the most critical thing is training everybody. Train, train, train. They need to know the nature of the threat. They need to know how the threat is executed. You need to train, train, train. That to me is the first line of defense.
Clint: If you want to break it down to the most simple, basic steps for operators, what to do for this sort of thing, it comes down to this. Number one, yes, first and foremost, everybody needs to be educated on what the threat is and how it can affect operations and safety, everything. That’s easier said than done, I know.

Number one is training. I know everyone does the quarterly or annual awareness training, etc. The training needs to be targeted specifically to the operations or to those individuals, so that they know exactly how these incidents and consequences affect their job and what they do.
Number one, targeted training. Number two, understand your most critical assets and the consequences of them failing. When you understand that, then you need to look at, “Okay, is there a cyber hazard, a cyber vector? Either am I compromised there, or is there a vulnerability there? Then, do I have the most basic mitigations in place to prevent that consequence?”
I’m trying to put it in operator perspective, operator language. Number one, understand the threat that applies to you. Number two, does it look like there’s a compromise in my most critical assets after you’ve identified your most critical assets? Is there evidence of a compromise or a failure?
Number three, have I done the most basic things to mitigate that consequence if that asset fails? I’m trying to say it without a cyber perspective, but, ultimately, that’s what you do in this case. We’re talking about a cyber hazard or a cyber vector.
Russel: I absolutely agree. It’s interesting. In my role, I have conversations with the cyber guys and I also have conversations with the operations guys. I’ve found myself in a lot of my career bridging the technical conversation to an operations conversation.
To me, you should have a plan. This is how we do cybersecurity. You should have training to make sure everybody understands the threats and what the expectations are. Mostly they just need to understand the threats. That’s saying a lot. Then thirdly you’ve got to have a response plan. If we have an incident, what do we do?
It’s the same thing that you do if you’re operating. You have an operating plan. You have operator qualification. You train people to the operating plan. Then you have an emergency response plan. You need the same basic things in place. The difference is the skills technically are different.
The other thing is you need to understand your network. You need to understand your network. Did I say that you need to understand your network? Where is my network? Where is it sitting? How is it configured? What’s open and what’s closed?
Pascal: What’s supposed to be open and closed?
Russel: Right, exactly.
Pascal: Know what it should look like and then you’re in a position you can notice the differences.
Russel: Anybody that’s ever run a business knows the last thing you do is a lockup check. You go around the building, you make sure everything is locked up. When you’re running the business, you make sure you’ve got controlled entrance and exit.
It’s the same thing. You’ve got to do the same thing in your network. You’ve got to make sure that when you’re not using it, it’s locked up. When you are using it, you’re controlling who gets in and gets out and through which door.
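[A minimal sketch of that “lockup check” idea applied to a network: scan a host for open TCP ports and flag anything not on an approved list. The target address, port range, and approved set below are illustrative assumptions only; a real check would cover every host on the segment and run on a schedule.]

```python
# Minimal sketch of a network "lockup check": find open TCP ports on a
# host and compare them against the list of doors that are supposed to
# be open. The target and approved ports are placeholders.
import socket

TARGET_HOST = "192.0.2.10"          # hypothetical SCADA server address
APPROVED_PORTS = {22, 443, 502}     # e.g. SSH, HTTPS, Modbus/TCP
PORTS_TO_CHECK = range(1, 1025)     # well-known port range

def is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connect to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def lockup_check() -> None:
    open_ports = {p for p in PORTS_TO_CHECK if is_open(TARGET_HOST, p)}
    for port in sorted(open_ports - APPROVED_PORTS):
        print(f"UNEXPECTED: port {port} is open and not on the approved list")
    for port in sorted(APPROVED_PORTS - open_ports):
        print(f"NOTE: approved port {port} is not answering")
    if open_ports <= APPROVED_PORTS:
        print("Lockup check passed: only approved doors are open.")

if __name__ == "__main__":
    lockup_check()
```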
Pascal: I’m going to steal that line. That’s well put.
Russel: [laughs] It’s interesting, Pascal, because I’ve spent a lot of time in conversations trying to explain cybersecurity to people. They get all twisted up, I think, about the software and the jargon and all that stuff, when it’s really pretty simple.
It’s where is my network? Where does my network touch the Internet? What doors am I opening? What traffic am I letting through? Who am I letting through and what credentials do they need to get through?
It’s really that simple. The problem is that nobody generally really knows where the fence is and where the gate is in the fence.
Pascal: To compound that, a lot of vendors muddy the waters as well. Right now, my inbox is overflowing with people saying, “Hey, if you had installed this XYZ software, you could have prevented what happened to Colonial.”
No, software is the last thing you should think about if your architecture and your fundamentals are not in order. That’s what you do first.
Russel: Preach. [laughs] That’s all I can say is preach. You’re right, it’s not about the software. It’s really about the human beings and managing the infrastructure.
Clint made an excellent point earlier. What he was saying about this is a process safety thing. It really is. It’s a process management thing.
Pascal: Yeah, that’s a fantastic way of looking at it because a lot of our customers, a lot of our business already know how to deal with safety. They’ve done that for years. Now seeing security in the light of safety, that opens up eyes. It makes people understand it better.
Clint: Cyber is literally just relatively speaking a new hazard in the entire safety equation.
Russel: It’s actually a new lens more so than a hazard. It’s a new lens to look through. I’ve had a similar conversation about alarm management where alarm means something different. Everybody uses the word alarm, but it means something different to everyone that uses it. That causes problems.
But fundamentally, cyber is a different lens, so when I look at the process, that’s one lens. When I look at the automation, that’s another lens. Cyber is just a different lens for looking at the same system. The analysis you need to do is very similar.
Clint: Absolutely.
Russel: Look, guys, we’re probably coming to the end. We probably ought to wrap this up. How would you like to summarize this? Clint, I’ll let you go first. How would you want to summarize this?
Clint: I think the best way to summarize this is that this is nothing new. It’s the same old problem, the same old situation, with the same old solutions, which are what we just talked about. You need to educate people on the threats and the issues. You need to have a process. You need to have a plan.
You need to understand what your critical assets are, what your consequences are. Do you have a breach? Do you have a solution with the most basic blocking and tackling in place? This is nothing more than yet another illustration in another chapter in the same old book.
Russel: Pascal, what do you have to add to that?
Pascal: First of all, I completely agree. I want to take it a little bit differently, because a lot of people are going to get a lot of calls and a lot of emails with a lot of promises. Let’s step back and look at what is fundamentally necessary here.
There are two things we need to do to secure ourselves. One, make it as difficult as possible for an attacker to compromise a network and reach its objective. Be it training, be it segmentation, be it firewalls, be it software if you’re at the point where you can implement it, make it as difficult as possible.
Secondly, start looking at your logs. All of your devices, all of your systems are pumping out logs, and nobody is looking at them. Spend some time looking at those logs and start looking for discrepancies.
If you make it hard enough for an attacker and in the meantime you’re looking at your network, you will pick this up before something really bad happens. You’re not going to prevent it because there’s always going to be something new, something different in your environment that can be compromised.
If you make it difficult to actually do any big harm and you’re looking at your environment, you will pick this up. I guarantee it.
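[A minimal sketch of Pascal’s “look at your logs for discrepancies” advice: build a baseline of which accounts normally log in to which hosts during a known-good period, then flag any first-time combination. The CSV log format and file names are assumptions made for illustration; a real environment would pull this from a SIEM, a syslog server, or a Windows event collector.]

```python
# Minimal sketch: baseline normal logins, then flag logins that have
# never been seen before. Assumes a CSV log with columns
# timestamp,user,host,event (an illustrative format, not a standard).
import csv

def load_logins(path: str):
    """Yield (user, host) pairs for successful logins found in a CSV log."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["event"] == "login_success":
                yield row["user"], row["host"]

def build_baseline(path: str) -> set:
    """Record every (user, host) combination seen during a known-good period."""
    return set(load_logins(path))

def find_discrepancies(baseline: set, path: str) -> list:
    """Return logins in a newer log that never appeared in the baseline."""
    return [pair for pair in load_logins(path) if pair not in baseline]

if __name__ == "__main__":
    baseline = build_baseline("auth_log_known_good.csv")
    for user, host in find_discrepancies(baseline, "auth_log_today.csv"):
        print(f"Discrepancy: {user} logged in to {host} for the first time")
```

[Knowing what normal looks like is what makes the discrepancies visible; the same idea extends to firewall logs, DNS queries, and controller events.]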
Russel: Here’s my take on all this. I think that, Clint, you’re right. This is not new. All the people on this podcast have experience with major pieces of energy infrastructure being taken offline by cybersecurity incidents.
What’s different about this one is not the nature of the attack or the response. What’s different is how quickly it got into the press and how broadly it’s being covered.
The only reason — in my opinion, I have nothing to base this on other than whatever education I have to make the guess — that this is in the news is because of the impact it’s going to have in fairly short order on oil and gas pricing in the Northeast.
Clint: Supply chain, right.
Russel: That’s what’s causing it to get into the news. That’s number one. Number two, all the things we said about human factors and training and so forth are true.
Another way to think about this, and Pascal alluded to it when talking about the logs, is that I need to know what normal looks like to be able to identify abnormal. That’s just as true in my process as it is in my network.
If I can get clear on what my network should look like normally, then I can come up with approaches to identify abnormal behavior in the network and head these things off before they become a problem.
That’s where I’d like to leave it. Clint, let me ask you to share your company’s information, how people might find you and Pascal if they want to reach out.
Clint: Sure. You can find us at ThreatGEN.com. We’re an OT security firm. This is what we do.
Russel: Guys, thanks for putting this together so quickly. Hopefully this is a value to the pipeliners out there. If you have questions, reach out to me on LinkedIn. You can reach out to me on the Contact Us form on pipelinepodcastnetwork.com or reach out to Clint or Pascal through ThreatGEN.com. Thanks, guys, I appreciate you doing this.
Pascal: Thank you.
Clint: Thank you.
Pascal: Until next time.
Russel: I hope you have found this special edition of the Pipeliners Podcast helpful. Again, if you’d like information or if you need support please feel free to reach out to me at pipelinepodcastnetwork.com at the Contact Us page. Or you can reach out directly to our guests. You can find their information at pipelinepodcastnetwork.com. Thanks for listening. Talk to you soon.
Transcription by CastingWords