This week’s Pipeliners Podcast episode features Scott Williams of EnerSys Corporation discussing the latest trends in data analytics and edge technology with host Russel Treat.
In this episode, you will learn about Python, the C family of languages, Docker, and more. You will also get a deeper insight into data analytics and how the conversation around edge technology pertains to pipeline operators in the oil and gas industry.
Show Notes, Links, and Insider Terms
- Scott Williams is the Manager of Development and SCADA for EnerSys Corporation. Find and connect with Scott on LinkedIn.
- Edge Communications refers to the architecture for structured communication from edge devices in the field to a host server, using connectivity to transmit the data.
- Python is an interpreted, high-level, general-purpose programming language.
- C# is a general-purpose, multi-paradigm programming language encompassing strong typing, lexical scoping, and imperative, declarative, functional, generic, object-oriented, and component-oriented programming disciplines.
- C is a general-purpose, procedural computer programming language supporting structured programming, lexical variable scope, and recursion, while a static type system prevents unintended operations.
- C++ is a general-purpose programming language created by Bjarne Stroustrup as an extension of the C programming language, or “C with Classes.”
- VMware provides cloud computing and virtualization software and services.
- Microsoft Hyper-V, codenamed Viridian and formerly known as Windows Server Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows.
- ARM is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments.
- Linux is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds.
- Raspberry Pi is an ultra-small and affordable computer that runs on the Linux operating system. The main industrial functionality is to attach the computers to edge devices for more efficient, reliable, and cost-effective data collection.
- Windows 10 IoT, formerly Windows Embedded, is a family of operating systems from Microsoft designed for use in embedded systems.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at remote locations.
- PLCs (Programmable Logic Controllers) are programmable devices that take action when certain conditions are met in a pipeline program.
- RTU (remote terminal unit) is a microprocessor-controlled electronic device that interfaces objects in the physical world to a distributed control system or SCADA system by transmitting telemetry data to a master system, and by using messages from the master supervisory system to control connected objects.
- Pentium is a brand used for a series of x86 architecture-compatible microprocessors produced by Intel since 1993.
- Xeons is a brand of x86 microprocessors designed, manufactured, and marketed by Intel, targeted at the non-consumer workstation, server, and embedded system markets.
- Core i7 is Intel’s line of microprocessors intended for high-end users.
- CPU (central processing unit) is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions.
- Shareware is a type of proprietary software which is initially provided free of charge to users, who are allowed and encouraged to make and share copies of the program.
- IBM Db2 is a family of data management products, including database servers, developed by IBM.
- Metadata is data that provides information about other data.
- Inlet gas is the gas delivered into a processing plant for treatment.
- EFM (Electronic Flow Measurement) is the use of field devices to electronically measure, record, and transmit flow data, such as gas volumes and pressures, for measurement and custody transfer.
Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 94, sponsored by EnerSys Corporation, providers of POEMS, the Pipeline Operations Excellence Management System, SCADA compliance, and operation software for the pipeline control center. You can find out more about POEMS at enersyscorp.com.
[background music]
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects and pipeline operations.
Now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time. To show that appreciation, we give away a customized YETI tumbler to one listener each episode.
This week, our winner is John Buflod, at the American Petroleum Institute. John, congratulations, your YETI is on its way. I’m very hopeful that you will share that and show it off so that we’ll get more listeners from the API. To learn how you can win this signature prize pack, stick around until the end of the episode.
This week, we have Scott Williams, one of my fellow Bubba geeks, coming back on the podcast. We’re going to talk about the edge and data analytics, and what’s going on with technology around all of that. Scott, welcome back to the Pipeliners Podcast.
Scott Williams: Glad to be here, Russel.
Russel: You realize that you were the guest for episode 11. That was like over a year and a half ago now. [laughs]
Scott: It has been that long.
Russel: I know. It doesn’t seem like it, right?
Scott: That’s a lot of episodes.
Russel: Yes, it is. I asked you to come on because we’ve been having chats now and again about data analytics, and the Internet of Things, and the edge, and all of that. I thought it might be interesting for the podcast listeners to hear us get geeky. [laughs]
As always, I guess we’ll do show notes so that we’ll decompile, explain, or define some of these terms we’re going to use, but the idea here is for you guys to hear some of the conversations we’re hearing about all this cool stuff.
Scott: Sounds good.
Russel: The first thing I wanted to ask you about is Python. All I know about Python is that it’s a coding language, and it’s what all the data analysts use. What do you know about Python?
Scott: I started noodling around with Python in my spare time because it looks interesting. As a developer, you see these various websites that purport to have a, “Here’s the list of the top languages of 2019,” or lists of that nature.
I use C#, and C# is up there, but it’s not at the top. Python’s at the top, which is really interesting. If a lot of people are favoring this language, I ought to take a look. A lot of that popularity is driven by the areas it’s being used for.
Apparently, it’s not just the language itself. It’s the tools that surround it. The Python language and tools are apparently very good at data analytics, and artificial intelligence, and machine learning. There’s a lot of applications in some of the new academic and nonacademic environments that are focusing on Python.
I’ve also seen that Python is not too hard for beginners to learn, which sounds great. More people doing programming, the better off we’ll be.
Russel: [laughs] Yeah, that would be one perspective. It’s probably been a few months ago, but I was noodling on the idea of getting a certificate in data analytics.
There are some big name schools like Harvard and MIT and some others that offer a two-year data analytics program. At the end of this, you get a certificate, and in some cases, depending on the program you select, you can maybe even get a master’s degree in data analytics.
For every single one of those, the prerequisite is, you got to know Python. It’s interesting for me to hear you say it looks like it would be easy for beginners to use because even though I’ve been working in software forever, I’m not a coder. It’s been a very long time since I’ve tried to write any code. I’m curious just how steep that learning curve would be.
Scott: It’s a modern programming language. Learning programming is never easy. It’s all about barriers, at least with, let’s say, the C family of languages — C originally, then C++, and now C#. They’re different languages, so don’t take that the wrong way, but the syntax is rather rigorous.
There’s a lot of, let’s call it special punctuation that’s required, and if you’re new to programming, that’s a barrier that you don’t need.
I’ve been doing it for ages, so it comes natural to me, but in the little bit I’ve been goofing around with Python, the structure of the language, the way you type it, is much cleaner. A lot of that special punctuation is just gone.
From that perspective, I think that’s probably why they’re saying it’s simpler. The classic “Why doesn’t this program run? You forgot a semicolon” problem is largely gone; Python just doesn’t have it.
Russel: Interesting. That takes away some of the fear and trepidation about trying to get into it, because I have a high degree of interest in statistics and have always…I’m just kidding. [laughs] I’m giving away just how big a geek I am.
I went through a phase for about three years where I was reading all kinds of books and magazines on technical analysis of stocks and commodities, and basically all that is, is statistics.
I built spreadsheets, and algorithms, and approaches to crunch data, to look at it, and all kinds of stuff around that. When I first started hearing about analytics, my mind went to this place of, “Well, that will be easy for me because I know that math.”
I started to investigate it, and it’s like, “Well, you don’t just need to know the math. You also need to know this programming language,” and I’m like, “I’m not sure at this point in my life I want to go and learn a programming language.” [laughs]
You’re taking a little of the concern off by saying that the language, and the syntax, and the approach to writing that kind of code is more straightforward, would be the way I would say it.
Scott: Yeah, a lot of the notions of modern object-oriented languages — Java, C#, C++ — are still in there. One of the barriers they took away is the syntax. It’s a lot simpler. It’s streamlined.
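As a minimal sketch of the “special punctuation” point being made here, the following Python snippet (with made-up names and values, purely for illustration) shows blocks defined by indentation, with no semicolons, braces, or type declarations required:

```python
# Illustrative only: a few made-up pressure readings (psig).
pressures = [412.7, 415.3, 408.9, 421.0]

def average(values):
    # No braces or semicolons; the indented block is the function body.
    return sum(values) / len(values)

for p in pressures:
    if p > average(pressures):
        print(f"{p} is above the average of {average(pressures):.1f}")
```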
Russel: Have you actually tried to do anything with Python, or have you just been noodling on it?
Scott: Just goofing. Right now, I’m at the stage of…I’ve used enough different languages, so learning a new one is more about, “How is this different? How is it better? How is it worse? Where does it fit?” I’m still in that phase. I haven’t written any big, interesting applications yet, but it’s coming soon. I’ve got to get the overhead first.
Russel: I think one of the other reasons it may be simpler, and I only know this from what I’ve read about the language, not anything I’ve actually done with the language, is that its function set is smaller because it’s pretty much dealing with the math part of analytics. It’s not dealing with all the other kinds of things that you might do with a programming language.
I don’t know if that’s true, but that’s one of the conclusions I came to.
Scott: I don’t know about that one. It’s possible, but I think Python’s been around long enough that that may not have been the original impetus. I’m not sure how old it is, but it’s not new. It’s been around for a little while.
Russel: Exactly. We have now probably exhausted the full extent of our knowledge about Python and should move on.
[laughter]
Scott: I think we’re pretty close.
Russel: The other thing that came up recently in conversation is Docker. I know virtually nothing about Docker. It’s only recently come up, and we were talking about this earlier before we got on the mic. Maybe you can tell people what is Docker. From a standpoint of what’s going on in automation, and analytics, and edge, why would I even care about Docker?
Scott: Stated simply, Docker is an awful lot like virtual machine technology, except that it’s a lot lighter weight.
Consider a typical virtual setup. You have VMware or Microsoft Hyper-V — there might be some other products, but those are the two biggies — installed on a server, and then you create virtual machines. But your virtual machine is a whole, complete, separate install. I install Windows, I install my apps, I install my tools. It’s a full installation of Windows inside of every virtual machine that’s running on the server.
Whereas Docker is a little leaner than that, in that your Docker image is really just your application parts. All the Docker images running on a particular server share the operating system resources of that server more directly. In your actual Docker image, you’re just dealing with the technologies you need to run your program and not the operating system.
They’re a lot smaller because you don’t have to install Windows and do all that business. You just install the tools you need to run your program, and that goes in the image. Now, all of a sudden, they’re a whole lot smaller, and a machine can run more of those.
I’m pretty sure various cloud providers have a mechanism to run Docker images. That’s going to be at a lower cost point than a virtual machine would be.
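As a rough, hypothetical sketch of what that leaner image looks like in practice, here is a minimal Dockerfile for an imagined Python data-collection script; the file names and packages are assumptions for illustration, not something from the episode:

```dockerfile
# Hypothetical Dockerfile: the image carries only a slim Python runtime,
# the script's dependencies, and the script itself, not a full guest
# operating system, which is what keeps it small.
FROM python:3.11-slim

WORKDIR /app

# Install only the libraries the collector needs (illustrative file name).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code (illustrative file name).
COPY collector.py .

CMD ["python", "collector.py"]
```

Building and running it would look like `docker build -t edge-collector .` followed by `docker run edge-collector`, and several such containers can run side by side on one host, each isolated from the others while sharing the host’s kernel.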
Russel: Interesting. The conversation I was having was with a manufacturer that has an edge device. They were talking about their chipset and enabling Docker on their edge device. [laughs] When you’re a nerd and you think you’re on top of everything and then somebody throws a buzz word at you, you’re like, “Huh, I haven’t heard that one.” [laughs] That’s what my reaction was.
I wasn’t really clear what Docker was. I guess if I’m talking about a chipset or an edge device where I’ve got to write embedded firmware for the processor, then Docker would be something where I could have several different embedded firmware programs running, and they could be isolated from one another. Would that be a way to frame that?
Scott: Going back to the virtual machine analogy, because Docker is much lighter weight, they still share the resources. You have multiple Docker images running at once. They share the resources in the host computer they’re on. Just like a virtual machine does.
There’s no replication there. The Docker image has what it needs, nothing extra. The operating system is separate; that’s shared. With Docker, the isolation isn’t as complete as with a virtual machine, and even a virtual machine’s isolation is not as complete as a dedicated server’s. It’s all about what you’re trying to get done.
Russel: That makes sense.
Scott: Like you’re alluding to, Docker will run on ARM chips. It will run on Intel chips. It will run on a Linux system. It will run on a Windows system. I believe I read recently that somebody got Docker running on a Raspberry Pi with the newer, higher horsepower Raspberry Pis.
Russel: That’s interesting.
Scott: Raspberry Pi costs you 35 bucks, plus hardware. The main board is 35 bucks.
Russel: Interesting.
Scott: When you’re thinking Raspberry Pi, you’re thinking Linux, because that’s the native operating system. But there is a Windows 10 IoT that runs on Raspberry Pi, a Windows 10 subset running on an ARM chip, not an Intel chip like all your desktop machines use. It’s getting very cross platform.
Russel: [laughs] I was just listening to all the things you’re saying, listening in the way I normally listen. At the same time, during that conversation we just had, I was having this conversation with myself, wondering how the podcast listeners are navigating through all those buzz words. [laughs]
Scott: There’s that. [laughs]
Russel: We probably have to unpack that a little bit. Raspberry Pi is basically a controller. It’s a board-level controller, a printed circuit board that does all the things that a controller could do, like a PLC, an RTU, or something like that.
Scott: It’s even beyond that. It’s a little bit bigger than that. Consider the size of a 5×7 index card; it’s smaller than that. It’s a single circuit board with HDMI for a monitor and USB for a keyboard and mouse. It’s a computer.
Russel: It’s a full computer on a very, very small profile not only in terms of size, but also in terms of power and everything else. A lot of processing. It’s the kind of thing that a lot of people start with when they’re developing for the edge. That’s interesting. That’s what a Raspberry Pi is.
The other thing we talked about was chipsets, ARM, Intel, and etc. Most people know what an Intel chip is because you go and you buy your new computer and it says, “Powered by Intel,” and it’s got some fancy marketing stuff about how that’s the latest, greatest, fastest chip that ever existed.
Intel is not the only chip manufacturer. ARM and others manufacture chips as well. What chipset does a Raspberry Pi run?
Scott: Raspberry Pis run a flavor of ARM.
Russel: A flavor of ARM.
Scott: Just like Intel. You go to Intel’s website and you have the Pentiums, you have the Xeons, you’ve got the different speeds, and the i7s, all these numbers. They’re all different flavors of Intel chips. ARM has the same thing. There are a lot of different flavors of ARM chips.
Russel: What would cause me to pick one chipset over another if I’m trying to figure out what I’m going to use as a platform or something?
Scott: In the past, you’d think about the horsepower of the chip itself: how much computation can the chip do. ARM chips are coming along. The modern ARM chips are getting pretty close in power to your desktop computer’s Intel processors. The performance edge is going away. There are architectural reasons for this that I won’t go into because it will be worse than that last conversation.
Russel: [laughs] Or better, it depends on your perspective.
Scott: [laughs] That’s true. In general, ARM chips use less power than Intel chips. There are a thousand variants to that, but broadly speaking, that’s the thing.
Russel: That’s exactly right.
Scott: You’ll find ARM chips on a lot of lower horsepower machines, like the Raspberry Pi. The Raspberry Pi is not going to match up with your desktop computers. It’s not that kind of horsepower. It’s maybe your desktop computer from a couple years ago, which is not bad. It’s not your modern desktop computer.
Now, take that argument and look at all of Apple’s products. I think since around 2010, they decided that the core CPU in the Apple products was going to be ARM.
They have the names. You may have seen the advertisements: “We’re using the A10 something, something, the A11 something, something.” That’s just Apple’s chip, which incorporates a lot of different stuff. One of the things it has on it is the ARM processor.
Russel: It’s very clear if I’ve got an iPhone, an iPad, or something like that, or even a MacBook, the advantage of running an ARM, if it’s lower power, is I can get more out of the battery.
Scott: I’m not sure about MacBooks. They might still be using Intel. All the mobile devices use ARM.
Russel: Exactly. My point is that battery life is becoming a bigger and bigger deal. Now, it’s interesting because when you start talking about the edge…I’ve been using the buzz word edge. I should define edge.
Edge is, if you think of what a current PLC or a flow computer does, that device is operating at the edge. It’s the first computer that’s attached to the instruments. That, by definition, is the edge. When you start talking about building a new platform for the edge, you’re talking about putting a computer out there, like a Raspberry Pi, versus putting a controller out there.
When you start talking about low power, particularly in oil and gas and particularly in pipelining, there’s more to it than just batteries, because in order to get to division-classified or explosion-proof installations, I’ve got to get the power down. It’s by lowering the power, by lowering the voltage, that I clear some of those division classification hurdles.
I think that in terms of what’s going on out at the edge, there’s some preference for the ARM chip over the alternatives, because it’s low power and also tends to have better environmental tolerance, meaning it can run at much lower and much higher temperatures. Does that fit with your understanding as well?
Scott: I’m not sure about the temperature specs. Low power, for sure. Obviously, if you’re running a solar system, that leads to smaller batteries and smaller solar panels, and all those things matter. If you’re running AC, you might not be constrained on power, but anything short of that is something to think about.
If the AC goes out, you’re on battery backup, and with ARM units, you don’t need as much battery backup. Again, processing capability isn’t the issue it used to be. There’s plenty of horsepower in ARM chips to do whatever you need.
Russel: Exactly. I think this is interesting because, to do a quick recap on Python, there are a lot of people out there selling analytics platforms. Most of those analytics platforms are taking things that are available as shareware, Python code, and putting some application on top of it that makes it easier for a user to interact with the code.
What that means is, for a lot of people, they could get to a deployment at a better price point just by using Python. To me, that’s interesting because there’s a lot of products out there that when you start unpacking what’s really available underneath, it’s stuff that’s widely available at very low cost.
Scott: I’m going to suggest a different word for you there, though. Shareware is an old term for something else. The word you’re really looking for today is open source. Even with open source, you still have to keep an eye on the license models.
There are open source projects that you can use freely for your own open source project. There are other open source projects that you can use freely on your own commercial project. There’s that limitation. You just got to be careful when you’re looking for libraries you’re going to use.
Russel: Basically, what the application providers are doing is they’re getting you beyond that. Plus, they’re making it easier to interact with the capability of that code.
Scott: Libraries do encapsulate a whole lot of functionality to make it really easy for you to use.
Russel: The other thing I want to talk about in this whole what’s-going-on-at-the-edge, what’s-happening-as-new-development conversation is the idea of data management. To me, one of the things that’s going on that’s really not being addressed yet is data management at the edge. I’m old enough to remember software and databases on mainframes and IBM Db2. For anybody who understands what that is, you know exactly how old I am.
[laughter]
Russel: When we moved off of mainframes and started moving to what at the time were called minicomputers, what we’d now call servers (it’s not the same thing, but it’s a fair analogy), one of the big challenges was that on those mainframes, the mainframe vendors sold their software packages, and the way they structured and organized data was extremely rigid and bound up in the application.
When we moved off the mainframe, we began to separate the data from the application. I think the same thing is going to have to happen at the edge if we’re going to really get to the value proposition. Let me unpack that a little bit. I want to tell you what I mean by that, and I want to hear your opinion about it.
Right now, most of the data at the edge exists in a PLC, an RTU, or a flow computer and is not well structured. There’s not common naming. There’s not common units. There’s a lot of challenges around how the data…The data’s flat. Meaning, it’s a list of datapoints with names. There’s no organization to it. It’s not, “Well, here is all the data around a pump and this is what the…”
To elaborate that a little bit more, a pressure is not always a pressure. Sometimes, a pressure is a pump suction. Sometimes, it’s a pump discharge. Sometimes, it’s a compressor suction. Sometimes, it’s a pipeline line pressure. Sometimes, it’s a meter body pressure. Sometimes, it’s a vessel pressure. All those pressures mean something different.
I think there’s a big need to create tools like relational databases but for use at the edge such that I can manage the entire enterprise of real time data. I’m not aware of anything on the market doing that, although I do know some people that are working on it. I’m wondering what your thoughts are about that.
Scott: What you mentioned is the key bit. Every PLC program, every RTU vendor, the naming of the different pressures is all over the place, which is fine. They did their thing for their device, and that’s great. When you bring it all together, you want to normalize that somehow.
Just like you said, static pressure for measurement versus a pipeline pressure versus suction pressure, discharge pressure. They’re all different. They all mean something different. To each of these datapoints, you do attach some metadata. Metadata meaning data about the data. You need to identify it.
You said it’s a pressure. Which kind of pressure? Suction or discharge? What are the units? Is it psia? Is it bar? Is it something else? Is there a time factor associated with it? Is it a live pressure? Is it a daily average? Is it an hourly average? If it’s a daily average or hourly average, what hour is it for? What day is it for? You have to pull those together.
Is that the stage one suction pressure of compressor A, or is it the stage two suction pressure of compressor A? What actually is the suction pressure on pump six? If you can normalize that, now all of a sudden, you have a mechanism for looking across your system: I want to see all my pumps and look at my suction pressures. You can do a lot of things once you’ve applied common identification.
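To make the normalization idea concrete, here is a minimal Python sketch of attaching metadata to raw field datapoints so that “a pressure” becomes a specific, comparable measurement. The tag names, units, and structure are invented for illustration only:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical normalized datapoint: the raw RTU/PLC tag is kept, but
# metadata identifies what the value actually is, its units, and its
# time context so values can be compared across the whole system.
@dataclass
class DataPoint:
    raw_tag: str          # tag name as it exists in the field device
    asset: str            # e.g., "compressor_A" or "pump_6"
    measurement: str      # e.g., "stage1_suction_pressure"
    value: float
    units: str            # e.g., "psig", "bar"
    aggregation: str      # "live", "hourly_avg", or "daily_avg"
    timestamp: datetime

points = [
    DataPoint("AI_1047", "compressor_A", "stage1_suction_pressure",
              512.3, "psig", "live", datetime.now(timezone.utc)),
    DataPoint("PT_006", "pump_6", "suction_pressure",
              88.1, "psig", "live", datetime.now(timezone.utc)),
]

# Once everything is identified the same way, "show me every suction
# pressure across the system" becomes a simple query.
suction = [p for p in points if p.measurement.endswith("suction_pressure")]
for p in suction:
    print(f"{p.asset}: {p.value} {p.units}")
```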
Russel: One of the key things that’s required to do broad based enterprise level analytics is you’ve got to have control over the data. The data has to be reliable. It has to be well organized. It has to be accurate.
Scott: Exactly.
Russel: That’s a fundamental requirement before you can do anything meaningful with analytics. If you go to a single-run flow computer from a major manufacturer, then it’s pretty easy to get an understanding of what that data represents, although there’s still complexity around whether it’s a Coriolis meter, an orifice meter, an ultrasonic meter, or a turbine.
Those differences aren’t always clearly easy to ascertain by the way the data’s set up in the RTU. I actually think that’s one of the big problems that needs to be addressed in our business. We’ll see how that shakes out.
Once you start getting all this bigger power at the edge, and once the edge becomes a computer versus a proprietary device, now all of a sudden there’s a lot I can do. Really, there’s not any technical obstacle to doing it now, other than infrastructure obstacles.
Scott: You go from a PLC, which I would say is not a general purpose computer. You can do a lot with it with your program, but it’s not a general purpose computer. Whereas when you put your edge device out there, you can write any program you can imagine and get it out there right next to your PLC.
Russel: There’s a lot that goes into actually making that viable. I think as an industry, we’re a ways away from getting there. It’s a very interesting conversation.
Here’s how I want to wrap this conversation up. A lot of times, I do three key takeaways. I want to do this a little different because we’re talking about a whole bunch of technologies and a whole bunch of things that many people are playing with or experimenting with, and may not yet have completely figured it out.
My question is this. Why should I as a pipeline operator care about this conversation [laughs] we just had? What do you think?
Scott: I think it’s really interesting. With the cost of these things coming down, like we mentioned earlier, the Raspberry Pi at 35 bucks, a general purpose computer with reasonable horsepower for 35 bucks, we all of a sudden have the capability of substantial computational horsepower that uses very little power and that you can put anywhere you want. The possibilities are endless.
There are operational or maintenance issues that will come up where, if you had that capability, you might have been able to discover the situation before the equipment failed, or get better information about misconfigurations. There’s a large class of problems you could solve by having a targeted but general purpose computer next to your PLC looking for something else. The PLCs are running the systems safely; this computer can be looking for other things.
Russel: I’ll try to give a specific example, because I think specific examples are often helpful. I’ll give a couple. The first one I’ll give is a process example. Let’s say I’m running a gas plant, and I’ve got inlet gas coming into the gas plant, and I’ve got a chromatograph there, and then I don’t have chromatographs upstream of the plant inlet.
That would be pretty common, but I have measurement, and the measurement has detailed analysis at the field. Once a month, I’m updating the flow computers in the field, but at the plant, I’m looking at the real-time gas stream.
I could actually create an algorithm that would tell me that the wellhead composition is changing because it’s deviating from what I think it ought to be. I could do that in real time, if I use this kind of capability, and by knowing that, I might be able to operate my plant more efficiently and consequently drive more profit.
Further to that point, I might do that in a way that my algorithm becomes proprietary, and then that becomes a way for me to outperform my competition. That would be one example, right?
People who know what I’m talking about would say, “Well, I can kind of do that now,” but there’s a difference when you’re doing that on millisecond data versus every-few-minutes data. It matters.
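A crude sketch of the kind of algorithm described here, flagging when the live plant-inlet composition drifts from what the monthly field analyses would predict, might look like the following in Python; the component list, numbers, and tolerance are purely illustrative assumptions:

```python
# Hypothetical example: compare the live inlet chromatograph analysis
# against the monthly field analysis and flag components that drift
# beyond a tolerance. All values are made up for illustration.

FIELD_ANALYSIS = {"methane": 0.89, "ethane": 0.06, "propane": 0.03, "co2": 0.02}
TOLERANCE = 0.01  # absolute mole-fraction deviation allowed

def composition_deviation(live: dict, expected: dict, tol: float) -> dict:
    """Return the components whose live fraction deviates from expected by more than tol."""
    return {
        comp: round(live.get(comp, 0.0) - expected[comp], 4)
        for comp in expected
        if abs(live.get(comp, 0.0) - expected[comp]) > tol
    }

live_sample = {"methane": 0.86, "ethane": 0.08, "propane": 0.03, "co2": 0.03}
drift = composition_deviation(live_sample, FIELD_ANALYSIS, TOLERANCE)
if drift:
    print("Wellhead composition appears to be changing:", drift)
```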
The other example I’d give would be something like leak detection where, by putting a computer in the field and by looking at a pipeline pressure, and how that pipeline pressure is moving in very small increments of time, I might be able to see things that I wouldn’t otherwise be able to see. That would be another example.
I’m with you. I think the examples are endless. There’s lots of them out there. Ultimately, what this does is, it creates an opportunity, one, to do further optimization and further asset integrity type things with real-time data.
Secondly, it creates an opportunity to create an algorithm that creates a competitive advantage and deploy it in a way that before now you wouldn’t have the ability to deploy it.
Scott: Even a simpler model: you have vibration sensors on your compressors, and you’re monitoring those in your SCADA system. If they get high enough, you’ll alarm, and you’ll do something about it.
Now, imagine you have a device out there in the field that can collect those vibration values once a second, or twice a second, something really fast, and then it can package them up, and it can send them up to a back office system, and they can be compressed and you can do a lot of pre-analysis in the field.
You can also do more modern computational stuff, compress it and get it small so you can get it to the back office, because traditional SCADA systems don’t deal with compressed data streams. They’re dealing with real-time data. Even EFM collection is not really compressed. It’s just full-size data.
Russel: That’s exactly right. I haven’t even thought about that.
Scott: That way, you can have a much richer data set in the office than you had before without burning up your communication bandwidth.
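As a minimal sketch of that pattern (sample fast in the field, summarize, compress, and ship a small package to the back office), the following Python snippet uses synthetic readings and an invented tag name as illustrative assumptions:

```python
import json
import math
import statistics
import zlib

# Hypothetical example: a few minutes of once-a-second vibration readings
# (synthetic values), summarized in the field and compressed so the batch
# can be shipped to a back-office system over a constrained link.
samples = [round(0.12 + 0.03 * math.sin(i / 30), 4) for i in range(300)]

summary = {
    "sensor": "compressor_A_vibration",   # illustrative tag name
    "count": len(samples),
    "mean": round(statistics.mean(samples), 4),
    "max": max(samples),
    "stdev": round(statistics.stdev(samples), 4),
}

payload = json.dumps({"summary": summary, "raw": samples}).encode("utf-8")
compressed = zlib.compress(payload)

print(f"raw {len(payload)} bytes -> compressed {len(compressed)} bytes")
# The compressed blob would then be transmitted to the host by whatever
# transport the site uses (not shown here).
```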
Russel: The other thing that this does, too, along the same vein is, if I’m running on a standard computer, then a whole bunch of things open up to me in terms of data compression, encryption, firewalling, and security — all things I can do in software to harden up my process system.
Scott: Right, because that’s one of those other areas. The right way to put this is: if your control network is on open radio frequencies and you don’t have firewalls, that’s a potential risk. With this kind of unit in the field, if you have a firewall, you can’t get to it from the host, or it can transmit to the host only. There’s a whole lot of things you can do by having a proper firewall in the field.
Russel: Right, and you can do it in software, so you don’t have to install, support, and power another piece of hardware.
Scott: That’s right.
Russel: This is awesome. [laughs] At the top of the show, we always do the intro, and we welcome all the Bubba geeks. What’s clear in this conversation to anybody that’s listening is, Scott and I are definitely geeks.
[laughter]
Russel: What would also become clear if you hang out with us on the weekend sometime, we’re likewise Bubbas.
[laughter]
Scott: There you go.
Russel: This is like Bubba geeks conference. If we were to go hunting together, we would be sitting out in the blind having coffee as the sun comes up and talking about edge computing. That’s what a Bubba geek does right there.
[laughter]
Scott: Yes…
Russel: Scott, thanks so much for coming back. It’s been too long. This was fun. I definitely learned some things, and we need to do this more often.
Scott: I look forward to it.
Russel: I hope you enjoyed this week’s episode of the Pipeliners Podcast, and our conversation with Scott Williams. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win to enter yourself in the drawing.
If you would like to support this podcast, the best way to do that is to leave us a review. You can do that on iTunes/Apple Podcasts, Google Play, Stitcher, or whatever smart device podcast app you happen to use. You can find instructions at pipelinepodcastnetwork.com.
[background music]
Russel: If you have ideas, questions or topics you would be interested in, please let me know either on the Contact Us page at pipelinepodcastnetwork.com, or reach out to me on LinkedIn.
Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords