This week’s Pipeliners Podcast episode features Jeff Whitney discussing what the Edge is, how it is evolving, and trends within SCADA.
In this episode, you will learn about what the goal of the Edge is, how to secure it and properly structure the gathered data, the benefits of knowing data in real time, and how it is going to become more effective in the industry in the future.
Trends in SCADA and the Edge: Show Notes, Links, and Insider Terms
- Jeff Whitney is the founder of Berkana Resources Corporation. Connect with Jeff on LinkedIn.
- Berkana Resources Corporation provides operational and information technology consulting, integration, IIoT implementation, digital transformation support, managed services, security, and compliance solutions to customers in the oil & gas and electric utilities markets. Clients include major oil & gas companies, midstream MLPs, and electric utility companies.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at remote locations.
- OT (Operations Technology or Operational Technology) refers to the hardware and software systems that perform critical functions — such as monitoring and controlling equipment and processes — to support pipeline operations.
- AI (Artificial Intelligence) is intelligence demonstrated by machines in contrast to the natural intelligence displayed by humans.
- Edge Communications is a method of building out the architecture for structured communication from edge devices in the field to a host server using connectivity to transmit the data.
- Edge Computing differs from traditional SCADA in that dynamic computing workloads can be moved from the cloud out to the Edge, optimizing bandwidth and improving functional efficiency.
- TSA SD2 requires both pipelines and utilities to conduct assessments of assets and operations to determine if they meet new criteria defining critical energy infrastructure.
- ML (Machine Learning) is an application of AI (artificial intelligence) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
- PLCs (Programmable Logic Controllers) are programmable devices placed in the field that take action when certain conditions are met in a pipeline program.
- RTUs (Remote Telemetry Units) are electronic devices placed in the field. RTUs enable remote automation by communicating data back to the facility and taking specific action after receiving input from the facility.
- Layer 2 consists of the devices that are capturing the information from the actual instruments.
- Layer 3 consists of receiving the data from layer 2 and having an operator issue a control based on the information given.
- Cyber Vision is a Cisco security platform that gives you full visibility into your industrial control system (ICS), including dynamic asset inventory and real-time monitoring of process data.
- DMZ (demilitarized zone) is a subnetwork that presents an organization’s external-facing services to the Internet while providing an additional layer of security for the organization’s LAN. The organization’s private network sits behind a firewall, and only public-facing services are exposed through the DMZ.
- MQTT (Message Queuing Telemetry Transport) is a publish-subscribe protocol that allows data to move quickly and securely through the system and does not bog down the system with unnecessary requests.
- HMI (Human Machine Interface) is the user interface that connects an operator to the controller in pipeline operations. High-performance HMI is the next level of taking available data and presenting it as information that is helpful to the controller in understanding the present and future activity in the pipeline.
- Pub/Sub (publish-subscribe) is a messaging pattern built around a central source called a broker (also sometimes called a server) that receives and distributes all data. Pub/sub clients can publish data to the broker, subscribe to get data from it, or both.
- Poll response is a message that is sent out to a remote telemetry unit (RTU) and waits for a reply. The response back contains the specific values the poll is requesting. The poll can contain a general request, or it can be focused on very specific metrics.
- 49 CFR 195.404 requires operators to maintain current maps and records of the pipeline system, including specific information.
- OPC (Open Platform Communications) is a data transfer standard for communicating device-level data between two locations, often between the field and the SCADA/HMI system. OPC allows many different programs to communicate with industrial hardware devices such as PLCs. The original system was dependent on MS Windows before shifting to an open platform.
- OPC DA (or OPC Classic) is a group of client-server standards that provides specifications for communicating real-time data from devices such as PLCs to display or interface devices such as HMIs and SCADA.
- BSAP (Bristol Standard Asynchronous/Synchronous Protocol) is a poll-oriented communication protocol for horizontal, vertical, and multi-layer networks. BSAP is the foundation for a proprietary network that has a tree-structured topology.
- CruxOCM enables the autonomous control room of tomorrow, operating within the safety constraints of today. Combining advanced physics-based methodologies with machine learning, CRUX software helps clients increase throughput production and energy efficiency (up to 10%), improve safety, and ensure operators stay safe while contributing to a seamless, continuous operation.
- API (American Petroleum Institute) represents all segments of America’s natural gas and oil industry. API has developed more than 700 standards to enhance operational and environmental safety, efficiency, and sustainability.
Trends in SCADA and the Edge: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 242, sponsored by Burns & McDonnell, delivering pipeline projects with an integrated construction and design mindset, connecting all the elements, design, procurement, and sequencing at the site. Burns & McDonnell uses its vast knowledge, the latest technology, and an ownership commitment to safely deliver innovative, quality projects. Burns & McDonnell is designed to keep it all connected. Learn more at burnsmcd.com.
[background music]
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations.
Now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time and to show that appreciation, we give away a customized YETI tumbler to one listener every episode. This week, our winner is Madisyn Cates with Memphis Light, Gas, and Water. Congratulations, Madisyn, your YETI is on its way.
To learn how you can win this signature prize, stick around till the end of the episode. This week, Jeff Whitney with Berkana Resources joins us to talk about trends in SCADA and the Edge. Mr. Whitney, welcome to the Pipeliners Podcast.
Jeff Whitney: Well, thank you, Russel. I appreciate the opportunity to be a part of it.
Russel: It’s about dadgum time, that’s all I can say.
[laughter]
Well, look, before we get going, you and I know each other pretty well, we’ve certainly been talking for years and years, but just as a way to introduce yourself to the listeners, could you tell us a little bit about who you are, and your background, and what you do?
Jeff: Sure. I’m a serial entrepreneur. Berkana is my fifth company. Basically, I started off as a starving entrepreneur and started my first company with a borrowed 1,000 bucks and sold it to Daimler-Benz about four and a half years later. Mercedes-Benz, guys.
Then ran a division for them, and then decided I didn’t like working for big business, so I went out and started more companies. Berkana’s an OT system integrator. We’re in the energy sector, and we do a lot with oil and gas, alternative energy. We’ve done some water, some nuke, a few others, but our primary focus is really in the energy space on oil and gas.
We have a lot of clients in that space, and we focus on basically providing integration services but also security managed services, things like that, AI, and we are top heavy. We have a lot of very senior people who are overpaid and [laughter].
Russel: I asked you to come on to talk about trends in SCADA, and particularly how SCADA is evolving at the edge. Maybe a good place to start is to just ask the question, what is the Edge, and why should we care?
Jeff: Sure. The Edge has multiple definitions, depending on who you talk to, but basically the edge is about moving computer capability or compute capability closer to the process, so an operational technology.
I think it’s funny that we’re talking about the edge as a new concept when we’ve had PLCs and RTUs out there forever, and they’re basically computers running ladder logic. Now we’re saying, “Oh, well, we can put more compute power out at the edge.”
I guess the theory is there’s a lot more data that clients want, so that you want to put something out at the Edge that has the ability to capture that data, and then do something with the data right at the process so you don’t suffer legacy application delays, things like that. You don’t have that issue because you’re running the application right at the process.
Russel: I guess, Jeff, one of the things that always comes up for me when I talk about the Edge is, it makes sense to me that I want to put more computing processing power at the equipment where I can look more closely at what’s going on with the equipment and make decisions about the equipment.
I would think that that has some pretty significant security implications because now I’m changing the nature of where my network ends or where my logical business network ends, and I’m moving it all the way out to where the instrumentation exists, where it’s by itself. It’s remote, it’s low power, and all those kinds of things. That’s got to create a lot of challenges, I would think.
Jeff: It does. It’s a really good point. Before we got into edge as a company, I had four or five challenges that I saw with the whole edge computing implementation. Really, from my perspective, I had to get those resolved in my own mind before I could move forward and actually start taking on edge solutions, as it were.
One of the problems we faced was security. How do I secure that edge device? I’m going to put more capability out there at the actual process. I’m also incorporating things like foreign digital instruments, things like that, so I need to have the ability to process all that.
As I process that, I have to now worry about security at the edge because at the edge, as you know, especially with the new TSA SD2, the Purdue model, you really can’t connect an untrusted network to your trusted network out at layer 2. That’s a big issue. Security has become a much bigger issue, especially in light of things that have happened lately in the industry, so that’s become an issue.
Another thing from the standpoint of edge is, what is my actual goal here? What am I trying to accomplish? Am I trying to just lower my communication infrastructure costs because I’m doing RBE (report by exception) instead of poll response? Am I trying to create historization at the edge?
Am I trying to implement AI at the edge, or ML if I’m somehow connected to the cloud, or maybe with a data diode or something pushing stuff up? Am I trying to send that data out to some application either within layer 3 or up above layer 3? Because once you’re pub/sub, you can push it anywhere. Security is a huge component of that. That, for me, is important.
Russel: This conversation right here, what you said just the last few minutes, is just chock full of buzzwords. We always put together show notes, and we decode all this language and stuff so that people that listen to an episode like this, they can go to the website and decode all the language.
I kind of know what you’re saying, but there’s a couple of key things you’re saying that I want to unpack a little bit. The first one is the idea of layer 2 of the network versus layer 3 of the network. Talk to me a little bit about what is layer 2 of the network and what’s layer 3 of the network.
Jeff: Basically, layer 2 is where your PLCs and RTUs are. That’s basically the devices that are capturing the information from the actual instruments.
You have a pipeline, or you have a compressor, a pump, whatever it is that you’re trying to get information from, whether it’s the temperature, flow rate, whatever it is you’re trying to grab. You’re going to have a PLC, RTU or something grabbing that data, interfacing to that at layer 2.
Up at layer 3, what you’re trying to do is you’re trying to get the data from that layer 2 device, and a lot of times we go out to things like PLCs, RTUs, and we basically poll them.
We say, “Hey, give me that data,” and you go to a register on that device, and you pull that information, and you push it up to the layer 3 platform. That’s more where you’ve got your supervisory control and data acquisition.
That’s where you’re going to have that system up at layer 3 that allows an operator to actually not only look at the data and visualize it on the screen, but he can also issue a control command and get a valve to close, or reduce pressure, turn off a pump, turn on a pump, turn on a compressor, turn off a compressor, those types of things.
Russel: I would assert that historically we’ve thought of layer 3 of the network as inherently secure because it’s custom protocols, and custom wiring to communicate, and all that kind of stuff. We’ve typically thought of that part of the network as secure just because you need a lot of special knowledge to get into it, but with the edge and with these new computing devices, all that stuff is now IP, and it’s all opened up.
Jeff: It is. In the old days, they used to not have the SCADA systems or process control systems connected to anything. They weren’t connected to IT. You just threw dumps of data over the wall, and you printed it or whatever. You walked it over and gave it to people. With the advent of this connectivity and everything eventually going IP, I’m old enough to have worked on token ring and stuff.
Russel: [laughs]
Jeff: When you took the IP stuff and started connecting things, yeah, you created security problems. As they say in security speak, you can’t really have an untrusted network talking to a trusted network like a layer 3 network in process control or OT (operations technology). That, to me, was a huge issue.
Partnering with Cisco as a design partner helped solve some of those problems because they have tools like Cyber Vision that allow you to protect the edge. It’s a security tool. There are tools out there that will allow you, to your point exactly, to secure that edge. That was a big one for me to help us with that decision to move forward and create edge solutions.
Now we had some ideas on how to work on the security, which becomes a bigger issue when you take the computer and put it down right by the process. Yeah, you’ve opened yourself up, especially if it’s IP, to a lot of vulnerabilities.
Russel: All of the things that can be done in a Windows network that are nefarious are now available to somebody at the edge.
Jeff: A lot of the SCADA manufacturers, as you know, moved away from things like UNIX, and they went into Windows, and so you got a lot of Windows applications in process control environments.
I remember dealing with guys that had UNIX systems or whatever, and they were saying they’ll pry those systems out of their cold, dead hands. They were never going to go to Windows, but the manufacturers made that decision for everyone and said, “Yeah, all right, we’re going to Windows.”
Russel: It’s like the Borg. You will be assimilated. Resistance is futile.
Jeff: Yes, sir. That’s exactly what happened.
Russel: [laughs] What are some of the approaches for securing the edge?
Jeff: There’s a number of things that you can do. Obviously, if you can isolate the edge, that’s great, but that’s hard to do when you’re connected to it. Obviously, you’re pushing data out.
From the standpoint of securing it, obviously, if you’re looking at different security standards for how you approach that, they’re all saying the same thing. You basically want to only be able to communicate up to your layer 3 or down to your layer 1. When you set up your infrastructure, you’re going to set up that security around things like firewalling off different components.
At layer 2 to layer 3, you can create a DMZ between layer 3 and layer 2, and you can also do things like put security-oriented applications out there, like a Cyber Vision, where you can actually load a security application right on your edge. That’ll help you as well with implementing security at layer 2.
Russel: That’s one of the things that comes up when people start looking at AI and the edge and all the really cool capabilities that gives you, and the things you can do to improve efficiency and effectiveness, and so forth: you often don’t fully consider the cost of all the related infrastructure needed to do that in a secure way.
Jeff: Exactly right. Cost is a huge factor. What we’re seeing is if you want to take the edge, and you want to take that data, and you have a protocol enabled that allows you to do pub/sub like an MQTT, you can literally send that data anywhere where you want the data to go.
If you’re going to send it up to the enterprise layer, hopefully the layer is DMZed off. You know what I mean? You’re going up through layer 3 to 3.5 and up to 4, and 3.5 is your DMZ. You want to make sure you create that hierarchy where the closer you get to the process, the more secure you are, like a defense-in-depth approach.
Same thing you do with operators. The closer you get to the operator in a process control environment, the more secure you want to be.
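For readers following along with the layer talk, here is a minimal sketch of that defense-in-depth idea in Python: a simple allow-list of which Purdue-style layers are permitted to exchange traffic, with the DMZ (3.5) sitting between OT and the enterprise. The layer numbers and the adjacency rule are illustrative assumptions, not a standard or a vendor configuration.

```python
# Illustrative sketch of a defense-in-depth allow-list for Purdue-style layers.
# Layer numbering and the "adjacent layers only" rule are assumptions for
# illustration, not a standards or vendor reference.

ALLOWED_FLOWS = {
    (1, 2),    # instruments -> PLCs/RTUs
    (2, 3),    # PLCs/RTUs -> SCADA
    (3, 3.5),  # SCADA -> DMZ
    (3.5, 4),  # DMZ -> enterprise/IT
}

def flow_permitted(src_layer: float, dst_layer: float) -> bool:
    """Allow traffic only between adjacent layers, in either direction."""
    return (src_layer, dst_layer) in ALLOWED_FLOWS or (dst_layer, src_layer) in ALLOWED_FLOWS

print(flow_permitted(2, 3))    # True: edge device up to SCADA
print(flow_permitted(2, 4))    # False: field device straight to the enterprise
print(flow_permitted(3.5, 4))  # True: DMZ up to the enterprise layer
```

The point is simply that anything trying to jump layers, say layer 2 straight to layer 4, should be blocked by the architecture itself, not just by policy.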
We did have one client that told us he wanted to be able to operate his facility from a boat out fishing. I said, “Well, you might. I mean you can view data, but you can’t issue a control command if you’re out on a boat somewhere fishing around.”
He really wanted to be able to do that, operate. We had to obviously talk to him about security policies and procedures, and say no, that’s not okay.
Russel: Having the ability is not necessarily the same as doing it. Having the ability to do that from your boat, yeah, sure, yeah, we can do that, but there’s a security infrastructure cost associated with that if you want to do it safely.
Jeff: Yeah, that’s right. That’s exactly right. There’s operational procedures as well that have to be considered. Security’s a part of everything now. You have to really be mindful of security at all layers.
Whether you’re at the enterprise layer on the IT side at layer 4, at the DMZ at layer 3.5 where a lot of these applications sit, or of course at layer 3 where your SCADA and systems like that sit, you have to be very mindful of that.
Russel: There’s a whole organizational and human resource element around all this too. I want to shift the conversation a little bit. You’ve used the term a couple of times, pub/sub, and I know what pub/sub is. I’m going to try and tee this up a little bit for folks that aren’t as familiar with the technology.
Classical polling in SCADA is poll response, meaning I send a message out to the remote device, and I talk to one item in a channel. I wait for it to respond. I process that response, and I go to the next one. It’s a very serial talk to number one, talk to number two, talk to number three kind of thing.
Pub/sub is something really fundamentally different. Pub/sub is I’ve got the data, and I push it up to somebody and say, “OK, anybody that wants it, there’s the data. It’s right there.” That’s a pretty radically different way of thinking about a communications infrastructure from a SCADA perspective.
Jeff: A lot of the legacy platforms and even a lot of the current platforms that aren’t quite on board with some of the new protocols still do the old poll response. You send out a poll, and then you wait for the response to come back.
What asset you’re running will determine how often you do that poll response. If you’re liquid, you’re in the five-second range. It’s seconds. You know how it is.
When you’re doing a poll response, you want to ensure that the actual device you’re polling gives you the appropriate response in the time frame that you’ve set up for it to respond. If it doesn’t respond, then you get an alarm or whatever message, saying, OK, my device didn’t respond, or I got a response that doesn’t make sense to me.
With pub/sub, yeah, you’re absolutely correct. You publish the data, and you have subscribers. You can have subscribers anywhere you want to set one up. Whoever gets that message is going to have the data available to them because you published it and broadcast it to all the subscribers.
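To make that contrast concrete, here is a toy, in-memory sketch of the broker idea in Python. It is not MQTT or any particular product, just a few lines showing a publisher pushing a value to a broker once while every subscriber to that topic receives it.

```python
# Toy in-memory pub/sub broker -- illustrative only, not MQTT or a real product.
from collections import defaultdict
from typing import Callable

class Broker:
    def __init__(self):
        # topic -> list of subscriber callbacks interested in that topic
        self.subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, value) -> None:
        # The publisher neither knows nor cares who is listening.
        for callback in self.subscribers[topic]:
            callback(topic, value)

broker = Broker()

# SCADA and a historian both subscribe to the same hypothetical tag.
broker.subscribe("station01/pump01/flow", lambda t, v: print(f"SCADA received {t} = {v}"))
broker.subscribe("station01/pump01/flow", lambda t, v: print(f"Historian received {t} = {v}"))

# The edge device publishes the value once; every subscriber gets it.
broker.publish("station01/pump01/flow", 412.7)
```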
Russel: The analogy is like email. I’ve got an Exchange server. If I hook up to the Exchange server with my mailbox, everything that’s addressed to me, I’m going to get. If I disconnect, I don’t get anything. When I connect back up, I get everything that has come since the last time I was connected. Now we’re doing that with data bits versus email messages.
Jeff: We’re basically sending out that data. The subscribers are looking for that data in whatever format it is. That’s another key aspect of the edge. You have to make sure that when you get the data, data in and of itself doesn’t always provide a lot of value.
You have to contextualize it. You have to make sure, if I’m sending it up to SCADA, that the data’s in the format SCADA’s looking for. If I’m sending it around the SCADA environment up to the enterprise, maybe up to SAP or something, if I’m sending that data, the data has to make sense for something like SAP to bring that data in. It has to be formatted.
That’s another function you can perform at the edge: data standardization, data manipulation, so that you can basically get the data in the format that everybody’s looking for, whether that’s an application or a SCADA system, which is basically an application. You have to get that data structured and formatted correctly.
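As a small illustration of that structuring step, the sketch below shows the kind of thing an edge device might do to a raw register value before publishing it: attach a tag name, engineering units, a timestamp, and a quality flag so SCADA, a historian, or an enterprise application can all consume the same payload. The field names and tag convention are assumptions for the example, not a standard.

```python
# Minimal sketch of contextualizing a raw field value at the edge.
# Payload fields and the tag naming convention are illustrative assumptions.
import json
from datetime import datetime, timezone

def contextualize(tag: str, raw_value: float, units: str, scale: float = 1.0) -> str:
    """Turn a raw register value into a structured, self-describing JSON payload."""
    payload = {
        "tag": tag,                                   # e.g. "station01/pump01/flow"
        "value": raw_value * scale,                   # apply engineering-unit scaling
        "units": units,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "quality": "GOOD",                            # the edge can flag stale or bad reads here
    }
    return json.dumps(payload)

print(contextualize("station01/pump01/flow", 4127, "bbl/hr", scale=0.1))
```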
Russel: Yeah, and there is a whole lot bound up in just that conversation. A lot of people would tell you, well, if the data’s in the SCADA system, it’s well structured, but I don’t know that that’s really true in a lot of cases, because SCADA systems aren’t really designed for managing a structured data set. They’re designed for quickly moving data from A to B.
Jeff: Yeah, and the thing about SCADA that is interesting is that they’ve historically been the OT system of record. You send it into the SCADA and you have a real-time database, and that data goes into the real-time database, and then you take that data out and you can visualize it, you’ve got an HMI component, or you can historize it.
There are some regulatory requirements around that, especially for things like liquid, I think it’s 49 CFR 195.404. You have to keep that data for three years.
They’re pretty prescriptive about what data you have to keep, but that all ties into the SCADA system and what’s required of that SCADA system to gather the data, collect the data as it were, and then give the operators the ability to take action based on that data.
Then of course, you have your alarm packages that allow you to alarm if something’s not right, but everything was SCADA centric. Now what you’re doing with the edge, which is an interesting concept, is separating it out. Instead of supervisory control and data acquisition together, you’re doing supervisory control, because the actual data acquisition is now happening out at the edge.
Russel: There’s no longer SCADA; there’s SCA and DA. We’ve decoupled them.
Jeff: Yeah. One of our architects was talking about that, and I thought, I think he’s right. You could argue the point that that’s not really true. I think it is in a way. One of the solutions we’re building for a client is around an edge for the particular problem that they’re having.
We were looking at the issue and saying, how do we solve this? Well, the first thing we have to do is be able to connect with everything. We have an application that’s part of our baseline for our SCADA solutions. This particular application speaks about 300 protocols. Literally, with an edge box, we can talk to anything. It doesn’t matter if it’s OPC DA or BSAP, it doesn’t matter what the protocol is, we can speak it.
Now we can go talk to that instrument or whatever it is we’re talking to, and pull that into the edge, and then from the edge, we can take that data and start doing some really creative things because we have a little more compute power out there at the edge. We can massage the data, or we can basically just pass it through as it were.
What is your goal? If you’re trying to lower your communication infrastructure costs, poll response uses a lot of bandwidth, relatively speaking, compared to RBE. What we’re seeing out there is if you implement RBE over poll response, we’re seeing about a 50 percent reduction in overhead.
You don’t have to pay the Verizons and AT&Ts of the world that much money because your cost for that is low.
Russel: Particularly in these cellular centric networks, I move a lot less data because I only moved to change data. I don’t move data every time it’s asked for whether it’s changed or not.
Jeff: You don’t have a poll going on every eight seconds, so it hardly affects your polling. You literally are waiting for an RBE. If nothing changes, why do you care? I mean, you obviously care.
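Here is a small sketch of the report-by-exception idea: the edge keeps the last value it reported and only publishes again when a new reading moves outside a deadband. The deadband value and the readings are made up for the example.

```python
# Report-by-exception (RBE) sketch: publish only when a value moves outside a
# deadband. The deadband and sample readings are illustrative assumptions.
from typing import Optional

class RbeFilter:
    def __init__(self, deadband: float):
        self.deadband = deadband
        self.last_reported: Optional[float] = None

    def should_report(self, value: float) -> bool:
        if self.last_reported is None or abs(value - self.last_reported) > self.deadband:
            self.last_reported = value
            return True
        return False

rbe = RbeFilter(deadband=0.5)
for reading in [100.0, 100.1, 100.2, 101.0, 101.1, 99.9]:
    if rbe.should_report(reading):
        print(f"publish {reading}")   # only 100.0, 101.0, and 99.9 go out
    # otherwise nothing is sent, which is where the bandwidth savings come from
```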
Russel: That has to do with all the things that have to happen in the protocol to give you a level of confidence and reliability that you actually didn’t have a change that you missed. Right?
Jeff: Correct, yeah.
Russel: Over the last 40 years, we’ve maximized what we can do with poll response networks to make sure we get everything, versus now we’re doing that with pub/sub.
It opens up a whole lot more opportunities, not to mention the fact that when I separate the data acquisition and the supervisory control, supervisory control now just becomes one system that’s accessing the data or interacting with the data.
I can have other interactors like accounting, and machinery analysis, and they’re all using, and I’m doing air quotes, very helpful on an audio podcast to do air quotes, but air quotes, standardized data. Right?
Jeff: Yeah. What’s nice about it is that you truly can send the data anywhere. Of course, you’ve got the security concern down at layer 2. Let’s say I want to implement the cloud, which you don’t want the cloud connected down at layer 2, especially with the TSA directive talking about that a little bit. You don’t want that untrusted network.
What you can do is put in a data diode like a Waterfall, or Owl, or Fox. You can put one of those in there, and then it’s truly an air gap because it’s two fiber barriers and you cut one.
You ship the data out to the cloud, and then you can do some cool things and take that data, ship it up there and do some ML, machine learning, or something, or you can take that data, and you can shove it up to your enterprise layer one way if you just want to get data, real-time data.
We have one client that’s looking at something that I thought was really interesting from a solutions perspective based on the edge, and that was they want to be able to see a change in their financial applications when something happens in the field in real time.
So a pump shuts off or a compressor shuts down, they want to see that reflected in SAP. I thought that was a really interesting challenge because now you’re doing what we’ve talked about for years. You’re going from the plant floor to the boardroom. You literally are.
Russel: You’re in real-time commercial optimization.
Jeff: Yes. Exactly right. You’re going all the way up to SAP and saying, OK, hey, a pump shut down, or one of my wells is out, or a block of wells is out. How’s that going to impact my finances or whatever?
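A sketch of that kind of plant-floor-to-boardroom hook might look like the following: a subscriber notices an equipment state change and forwards it to a business system over HTTPS. The endpoint URL and the event fields are hypothetical, purely to show the shape of the integration; a real SAP or ERP connection would go through that vendor’s own interfaces.

```python
# Hypothetical sketch: forward a field event to a business system.
# The endpoint and event schema are illustrative assumptions, not a real API.
import json
import urllib.request

ERP_ENDPOINT = "https://erp.example.com/api/field-events"  # placeholder URL

def forward_event(asset: str, state: str, timestamp: str) -> None:
    body = json.dumps({"asset": asset, "state": state, "timestamp": timestamp}).encode()
    req = urllib.request.Request(
        ERP_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # will only succeed against a real endpoint
        print("ERP acknowledged:", resp.status)

# Example: called by a subscriber when a pump status tag changes.
# forward_event("station01/pump01", "SHUTDOWN", "2022-08-01T14:03:22Z")
```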
Russel: Being able to say hey, I’ve lost wells on this leg of my gathering system, and what is that going to mean at the tailgate of the plant, and what does that mean at the inlet to the fractionator, and what does that mean in terms of volumes I’m going to have in tanks and able to shift tomorrow morning? That’s really handy data right there.
That’s where this kind of stuff can really have some power because I can be hooking right up to meters, and analyzers, and all of that kind of stuff, and allocating all that type of stuff in real time. Very, very doable.
Jeff: It’s one thing to visualize the data. You can send the data out. There are a lot of visualization packages. You can go visualize that data, but to actually do something like that, and send it into a financial accounting package and show a change in a financial number, now you’re talking a whole different level.
That ties back to the conversation at the edge about how you have to structure that data because I need that data structured so that it makes sense to the financial application.
At the edge, if I’ve got compressor data, a lot of that’s not going to translate to my financial app. I just need to know certain things about that particular compressor, or pump, or whatever, that you would translate into the financial app, saying, oh OK, if that pump goes down, what does that mean for the throughput of my assets, whatever.
Russel: Right. What’s going to happen with my hydraulic profile upstream and downstream of that pump, and how’s that going to impact what my blends are if I’m moving gas? What’s that going to do to my analysis because now instead of having rich gas off of this leg, I’ve got lean gas off of this other leg, and what does that mean?
Those kinds of things and knowing that in real time can be very valuable and can create a significant commercial advantage.
Jeff: No question. That’s really what we’re driving to. A lot of this data capture and data normalization to get it into the formats that we need for certain applications, it’s really coming down to driving efficiency. What you want to do is create efficiency with the assets you have.
If you can take that data, and one example is CruxOCM, I know you had Vicky on, but if you can take that data from the field, and you can put it into SCADA, and then you can run it over to an AI program that can create efficiency at the console like that, that’s a great example where you can say OK, I’m operating a pipeline.
I’ve got these controllers doing all of these commands, and they’re looking at all the data, and they’re making decisions, a lot of decisions, and trying to determine what’s the most efficient way to move crude into refinery, refined product out, whatever you’re doing, and then you put in this AI program, and all of a sudden things get a little more efficient.
Even if you’re only creating a four or five percent efficiency, it translates to millions of dollars per year. It’s just a huge benefit. You can sell things like that on an ROI basis.
That’s really just data capture. You’re just taking the data and doing something different with it. Instead of just giving it to the operators to make control decisions, you’re putting it into an AI program and trying to use that AI algorithm to create efficiencies in the operation of the asset.
We thought that was really cool, so we partnered up with Crux as an SI and said that’s really neat because we do a lot of control center work, and you’re right at the console. Of course, that’s only going to get enhanced with the edge as you get access to more and more data.
Russel: To me, those two things are very directly connected. A lot of times if I’m looking at a pipeline and I’ve got a fleet of pumps, making decisions in the control room about what pump to start and what set point to put on what pump doesn’t make a lot of sense.
That’s all stuff that can be done at the edge with intelligence and applied AI to say, “OK, I want to move from this current set point to this new set point from a flow perspective and let the AI figure it out.” What’s the best pump, and should I turn this one on, or turn this one off, or tune this one out, or all of that.
What you can do is you can smooth that control algorithm. That’s got all kinds of pipeline integrity and operational benefit if you can just smooth things out when I’m making control changes.
Jeff: Sure. Like the startup and shutdown. There’s lots of things you can do that would create a lot of efficiency.
The other thing you can’t forget about is safety. At the end of the day, these things are also providing a little bit more safety when you’re talking about moving hydrocarbons. It’s volatile stuff, so anything I can do to create better efficiencies when I’m doing things that could eventually cause an issue.
Russel: Just something simple in a liquid system, like being able to start up a slack line and minimize the hydraulic water hammer when I do that, if not eliminate it, simply by the way I’m sequencing and starting up a set of pumps. That’s very hard to do from the control room, but you can do that with an AI system. It’s relatively straightforward.
Jeff: If you have a good one, they learn. You create more and more efficiency as the algorithms learn. It’s the future coming at us 100 miles an hour.
Russel: [laughs]
Jeff: That’s where we’re going–
Russel: That’s the story of the last 30 years of my life.
[laughter]
Jeff: It’s cool. It’s an interesting time for oil and gas. I really enjoy being in this space because it’s just getting fun. The old thing about setting it and forgetting it, these old, antiquated, SCADA systems and stuff, that’s changing very rapidly. The new technology’s fun, exciting. It’s not leading edge, but we’re a little closer to the leading edge with things like AI and ML.
Russel: And the kind of skill sets that engineers coming out of school have. Pretty much any engineer these days has some level of automation and programming. They’re familiar with Python. They have competencies and capabilities that they bring to the table day one that really drive the need for doing things at the edge and applying computing power more intelligently.
Jeff: Looking forward, we’re going to see an acceleration of the capability of this new technology to create more and more efficiencies. It’s just a question of as a system integrator, how do we actually apply that? How do we integrate that with all the different issues?
Obviously, from the client’s perspective, there are cost issues. Also, there’s the technology layer, because somebody over there’s got to support it. And once you put it in, there’s the training and all that stuff.
As an integrator, you have to actually try to make all of this play in an environment where you really have a lot of security concerns. You have to make sure you don’t breach any security policies.
Russel: There’s a very big difference between building a new system and operating a system.
Jeff: Yes, sir.
Russel: If you build it right, you create a lot of opportunity to improve, and plug in, and optimize. If you build it wrong, you create a lot of opportunities to drive cost in the wrong direction.
Jeff: Exactly. You have to factor in, especially if it’s a new system, you have obviously the business layer aspects, and you’ve got the operational aspects to consider. You’ve got all of the security aspects, the safety, which is number one. You have to worry about safety. Impact to the environment, impact to humans, that’s what you’re trying to minimize.
On top of that, there’s technology, and it’s always at the bottom of the stack. The technology’s there, but it’s people, process, technology for a reason. Technology’s kind of tough because you have to get the people up to speed on what you’re doing.
Russel: It’s purpose, people, process, technology.
Jeff: Good point.
Russel: The purpose is what gets everybody focused on the direction we’re going, and then from there you can move.
Look, Jeff, this has been fun. I could sit here and wax philosophic like this with you for hours and hours like we’ve done before at other times and opportunities.
Jeff: [laughs]
Russel: What would you say pipeline operators ought to be contemplating or doing as it relates to the edge, and pub/sub, and all that? What would you be, particularly those that are looking to either modernize their SCADA system or building a SCADA system in house for the first time, what would you be encouraging those people to take a look at and think about?
Jeff: They should be looking at, from my perspective, more of the current state of the art. In other words, where are we now compared to where we were?
I’m working with a number of clients right now on upgrades. Pretty much all of them are looking at doing something at the edge. Sometimes it’s driven by a specific business case that they’re trying to address. They’ve got an issue. Other times, it’s just to create efficiency. Other times, they want visualization. It depends on what goal they’re trying to achieve.
I would say yeah, take a look at the technology. There’s new protocols. There’s new edge devices out there. There are things that address security.
There is a whole different layer of technology available that wasn’t available 10 years ago that you can look at and sit down with maybe a system architect, and then obviously, a network architect, and really sort out what would be the optimal implementation for us to meet our operational and safety and security objectives, and obviously, meet the business requirements for the overall company.
Russel: I was just going to say, that’s really well said. The way I would contextualize that is, realize you’re building an operations management system. You’re not putting in place a SCADA technology.
Jeff: No, that’s correct. People get it backwards. They focus a lot on technology. We’ve certainly seen that, but remember who your customer is. It’s operators. Operations is your client. Your job is to make their life easier and create more efficiencies for them. When you’re building this technology layer, always keep them in mind.
The one thing I would just say that my team’s tired of me saying is, projects of inclusion succeed, projects of exclusion fail.
Russel: Ooh, whoa, I’ve got to get you to unpack that right there because that’s a deep concept. What do you mean by that?
Jeff: What I’m saying is, if you’re going to make a change... We operate at the deltas, right? When things change, that’s when they call us. They’re upgrading, moving, they’re building new, something’s happening with their OT infrastructure. That’s when they call us.
What I would say is that, if you’re going to move forward on a project, try, to the best of your ability, to get all the stakeholders in the room. Then make sure that they’re invited to the meetings and that they understand the goals of what you are trying to achieve. Then everybody’s buying in on going in that direction. It also helps to have a high-level sponsor.
Projects of inclusion we’ve seen pretty consistently succeed. Projects of exclusion do not. We have seen some really big – I hate to say it – pretty big failures because certain individuals or groups weren’t included.
When they come in late, you’ve created a difficult situation. It’s a big challenge when somebody who will be impacted by the new solution you’re providing isn’t included in that process. It can really cause challenges. I would definitely say projects of inclusion work. Projects of exclusion don’t.
I hate to say it, Russel. We have seen this firsthand.
Russel: Oh, yeah. That’s why I wanted you to unpack that. I have seen so many times where the more time you spend upfront getting everybody oriented into what you’re doing and why you’re doing it, and why it’s different than what you’ve done before, why that’s important, it is so, so critical to having a project turn out successful.
Jeff: There’s no question. We had one client that spent a lot of money. It was millions. They really didn’t have that project of inclusion where the goals were clearly identified. The vendor was doing a good job to the best of their ability, but they weren’t really getting to the end goal.
At the end, the project was canceled, not because people weren’t trying or working hard. It’s that the people that really needed to be in the room, weren’t. When they came over and looked at the project, they said, “Well, that’s not going to meet any of our requirements.” It got killed, which is really sad when that happens.
Russel: Oh, man.
Jeff: We were brought in to look at the project because it was several years on and wasn’t complete. We just said, “Well, I think we’ve got a project of exclusion going here.” The team that was excluded came in, and unfortunately, that project was terminated.
Russel: That happens sometimes. Sometimes the smartest thing you can do is just admit defeat and start over. [laughs]
Jeff: Yeah, basically, it is time to reset. You’ve got to level set expectations upfront. Everybody needs to understand. Again, it never hurts to have a high-level sponsor so they can smooth out some of the rough edges.
Russel: Listen, this has been great. Appreciate you coming on. We need to do it again without waiting so long.
Jeff: Sounds good to me. I really appreciate being invited, Russel.
Russel: Great to have you, Jeff. You have a great weekend.
Jeff: You too. Thank you, sir.
Russel: I hope you enjoyed this week’s episode of the Pipeliners Podcast and our conversation with Jeff. Just a reminder before you go. You should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit PipelinePodcastNetwork.com/Win and enter yourself in the drawing.
If you’d like to support the podcast, please leave us a review on Apple Podcasts, Google Play, or wherever you happen to listen. You can find instructions at PipelinePodcastNetwork.com.
[background music]
Russel: If you have ideas, questions, or topics you’d be interested in, please let me know on the Contact Us page at PipelinePodcastNetwork.com or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
[music]
Transcription by CastingWords