In this episode of the Pipeliners Podcast, host Russel Treat welcomes Nicholas Guinn of Summit Offshore Systems, Inc. to discuss the role of IT in building SCADA systems and the logistics of managing compressors in pipeline operations.
You will learn about the importance of understanding who will be using data and how the data will be used in a pipeline operation to define the SCADA system.
You will also learn about various options for compressor optimization and how to prepare to adopt emerging technology that will help operators develop a more robust system to meet future challenges.
Logistics of Managing Compressors: Show Notes, Links, and Insider Terms
- Nicholas Guinn is a Senior Consultant for Summit Offshore Systems, Inc. Connect with Nicholas on LinkedIn.
- A compressor station is the location where natural gas is compressed to increase its pressure, causing the gas to move through a pipeline.
- A compressor automation or optimization project addresses the needs of pipeline operators to improve reliability, response time, and data collection in the field.
- Telemetry is an automated communications process. During this process, measurements and other data are collected at remote locations and transmitted to receiving equipment for monitoring and data analysis.
- Field automation and field historian are important elements of a comprehensive SCADA host platform to reduce downtime and save costs.
- The Compaq Portable 386 was a pre-laptop era portable computer released in 1987 that was used to gather data in the field, take a snapshot, and return the data to the office for analysis.
- Downsampling is the process of reducing the resolution, and therefore the volume, of gathered data so that it can be transported across a communications network to a recipient.
- Polling rates refer to the frequency at which a SCADA host requests data from a station, unit, or communication device in the field.
- Modbus is an older protocol that enables communication among many devices connected to the same network. The drawback is delays in the communication, oftentimes creating timestamp discrepancies.
- DNP3 (Distributed Network Protocol) is a set of communication protocols used between components in process automation systems. The protocols are crucial for SCADA systems.
- RTUs (Remote Telemetry Units) are electronic devices placed in the field. RTUs enable remote automation by communicating data back to a facility and taking specific action after receiving input from the facility.
- Decline curves or decline curve analysis helps predict future oil or gas production based on past production history, which helps SCADA system developers and IT consultants understand how to build or improve a system.
- Fortran (or FORTRAN) is a programming language that is especially suited to numeric and scientific computing.
Logistics of Managing Compressors: Full Episode Transcript
Russel Treat: Welcome to the “Pipeliners Podcast,” episode 19.
[background music]
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. And now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. We appreciate you taking the time, and to show our appreciation, we’re giving away a customized YETI tumbler to one listener each episode. This week, our winner is Roger Reetz, with Enbridge. Roger, your YETI tumbler is on the way.
If you’d like to learn how you can win this signature prize pack, stick around for the announcement at the end of the episode. This week, our guest is Nick Guinn.
Nick is an information technology and management information systems dude, with a strong background in automation and SCADA, and is currently working in project management, putting in an enterprise compressor analytics and optimization platform. He's here to talk to us about what it looks like to do that kind of project. Nick, welcome to the Pipeliners Podcast.
Nick Guinn: Thank you very much.
Russel: So glad to have you. I think you’re going to bring an interesting, and maybe a little different perspective to this conversation. As a start off, let me ask you to tell the listeners a little bit about yourself and your background.
Nick: Absolutely. I started out by graduating from college with an MIS degree. After getting that management information systems degree, I started working desktop support, moved on to some server administration, worked as a network engineer for a little bit, and then ultimately started working more as a team lead for those different teams.
During that time, when I was trying to figure out what my next step was going to be, I'd had a little bit of exposure at that point to SCADA and SCADA control systems, and had an opportunity to start working internationally offshore on new builds: mobile offshore platforms, semi-submersibles, drill ships, that kind of thing, commissioning and doing QA/QC on those rigs as they were coming out of the shipyard.
I did that for four or five years. Then around the time of the downturn, started working in Denver, in oil and gas companies, as either a business analyst, or project manager.
Russel: Interesting, so you’re one of those, pardon my language, horrible IT guys that makes life difficult for us automation guys.
Nick: Absolutely.
[laughter]
Russel: That's actually one of the reasons I wanted to have this conversation with you, because I think the challenge is, in the automation world, what we're trying to do is get something working. In the IT world, what you're trying to do is implement standards and appropriate constraints and controls, so that when it's working, it's working for the right people, it's reliable, and it's supportable.
That raises a whole different set of issues than just getting something working. It's interesting. I'm a civil/structural engineer by education. I learned computers from the ground up, and learned automation from the ground up. What do you think is different about coming at this business of automation and control from an IT inspection-and-quality perspective, versus a development-and-commissioning perspective?
Nick: In a nutshell, I think one of the easiest observations to make is that, like you said, from the operational side, the goal is to get things working. Coming from an IT side, we have the same focus, we want to get things working.
A lot of times, we get bitten when we don't get it working right the first time, a lot more quickly and, at least in the IT environment, in a lot more costly manner right away, so I think we get into the habit of slowing things down and trying to put a little more rigor around getting it right the first time.
I think the end result is the same in both cases. The only difference is, when you’re compressing gas, you want it done now.
Russel: That's right. That's exactly right. I tend to have more of an engineering approach to stuff, versus a "get it done" approach. I think both are valid. It's inexpensive to re-engineer a project when you're engineering.
Nick: When you're at the FEED level. [laughs]
Russel: Yeah. [laughs] It’s very expensive to re-engineer a project when you’re commissioning.
Nick: Absolutely.
Russel: I think that’s probably the difference. Let’s talk. I know you’re working on a fairly good-sized compressor automation project, so let’s segue into that part of the conversation. As you’re working through this, what are you finding are the key components of a compressor automation, or a compressor optimization project?
Nick: Obviously, you're starting with telemetry. You're looking at automation around the valves, having the solenoids, and control over the safety system, that sort of thing. You're looking at what you're going to do for data management and the operations view of the data.
At least, again, going back to the IT perspective, making sure that all that stuff is done so you don't actually have guys walking out with a pad of paper to record SCADA numbers, which makes absolutely no sense in light of the fact that we're in the 21st century.
Russel: [laughs] Even though we still do a lot of that. If you look at legacy technology, and I would define legacy technology as a PLC, you bring the data back to some central host, typically a SCADA system, and then some kind of enterprise historian. That's the legacy approach, if you will, versus the approach that everybody's putting out there as the new approach.
I think part of the way I'll talk about it is, "Is this real yet, or not?" The idea is: I'm going to grab all that data at a very high rate at the edge, at the site, and I'm only going to push back to the host what's needed or necessary. Are you finding that conversation difficult to navigate?
Nick: Yes. I think having a little more granularity at the local site, like maybe even a local historian and the control systems, does make a lot more sense. You start getting the information back to a central historian, and it’s not so bad if you’re managing a handful of sites, maybe a small company.
As the companies start getting a lot larger, then you're starting to track, especially if you keep your data resolution down to second-by-second, what could potentially become terabytes of data really fast.
Russel: Petabytes.
Nick: Easily. [laughs] Then, being able to navigate that information and make that useful again is going to require an entirely different platform.
Russel: Years ago, I don’t want to date myself, but more than 10, significantly more than 10 years ago, I worked in compressor automation. We had…This is going to date me. I’m going to talk a little bit about the technology. We had very high-accuracy, very high-rate data collection instruments that we hooked up to a Compaq Portable 386, if you remember those.
Nick: [laughs] Yeah.
Russel: We would gather data and snapshot it. Then, we would bring it back to the office to do analysis. The beauty of this tool was, because we were getting very high-rate data, and we were actually looking at the firing timing on the cylinders and the suction curves, and the timestamping on all that data was highly accurate, you could actually see gas slipping by the compressor.
You could see ring problems, firing problems, and timing problems in this data that you could graphically analyze, because it was very high-rate and very high-resolution. You couldn't, in that time, capture that much data and manage it. You went and snapshotted it and then provided a report to the customer. They used that to do their maintenance planning.
I think that that’s the kind of thing that the Internet of Things promises, but what is the value of the additional data versus the cost of the additional data?
Nick: That is the key question. I think if you're looking at capturing every tag, obviously, you start getting watchdog tags and things that aren't necessary to capture, and certainly not necessary, in some cases, at second-by-second resolution.
One of the things that I've heard as an option is downsampling the amount of information that's coming out of the local station. What that would mean, for most of us, is that the local station captures everything at a hundred percent, probably for event framing or incident management, that kind of thing, so you can actually look back and tell what happened and do a root cause analysis based on the data.
Once it leaves the compressor station, maybe that's not necessary. At the point where it's leaving the compressor station, what you're wanting to do is start building forecasts and maybe even predictive maintenance tables. When it comes out, sample it down to the minute or, instead of minute-to-minute, maybe every 15 minutes or every hour, depending on what you're going to use the data for.
Right now, I don't know that there's a best practice. Where we keep getting hung up in some of our discussions is that there is no perfect answer. It's part of what you just have to analyze for the business and for the business model, and see what is going to work best. Some of the organizations I've worked with have polling rates in the field.
Again, this is probably more on the upstream side, anywhere from 15 minutes to four hours apart. Call those normal polling rates. Obviously, for a compressor station, that's not very functional. You want second-by-second data so you can really make sure that the system is operating optimally.
Getting all of the information back, I don’t think that’s necessary. Minute to minute is probably fine in most cases. But I’m sure that’s up for argument.
Russel: To respond to that, I'm very familiar, because we've done a couple of compressor monitoring projects, different than compressor optimization, but compressor monitoring. The reason those poll rates are slow is because of the quantity of sites that you put on a particular circuit. With your typical Modbus round-robin protocol, you're only going to get the data when you ask for it.
Are you looking at anything where you're getting the data with something like DNP3, a report-by-exception protocol?
Nick: In some cases, we do have some DNP3, more of the report by exception. At least so far as the projects that I’m involved with now, we’re still in the analysis phase of determining exactly what direction to go, what’s going to fit the company best.
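To make that distinction concrete for readers: with a round-robin protocol like Modbus, the host asks every device for everything on every cycle, while a report-by-exception protocol like DNP3 lets the field report only what changed. Here is a minimal Python sketch of the two patterns; the Device class, its tag names, and the deadband value are hypothetical stand-ins, not a real Modbus or DNP3 library.

```python
import random
import time


class Device:
    """Simulated field device with a couple of tags. Purely illustrative;
    this is not a real Modbus or DNP3 implementation."""

    def __init__(self, name):
        self.name = name
        self.tags = {"suction_psi": 150.0, "discharge_psi": 900.0}

    def read_all(self):
        """Modbus-style round robin: the host gets data only when it asks,
        and it gets everything, changed or not."""
        self._drift()
        return dict(self.tags)

    def changes_since(self, last_seen, deadband=5.0):
        """DNP3-style report by exception: only tags that moved more than
        the deadband since the host last saw them cross the wire."""
        self._drift()
        return {
            tag: value
            for tag, value in self.tags.items()
            if abs(value - last_seen.get(tag, 0.0)) > deadband
        }

    def _drift(self):
        """Simulate process values wandering between reads."""
        for tag in self.tags:
            self.tags[tag] += random.uniform(-3.0, 3.0)


devices = [Device(f"station_{i}") for i in range(3)]

# Round robin: with many stations on one circuit, the cycle time grows and
# the effective update rate per station drops.
for device in devices:
    print(device.name, device.read_all())

# Report by exception: most cycles carry nothing, so the same circuit can
# support many more stations at a given update rate.
last_seen = {d.name: d.read_all() for d in devices}
time.sleep(0.1)
for device in devices:
    changed = device.changes_since(last_seen[device.name])
    if changed:
        print(device.name, "exception report:", changed)
```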
Russel: Do you have a sense of where you think that’s headed?
Nick: I do. The operations side has, I think, pretty much determined that they're going to do a local historian capture. Then we've started working recently on feeding a lot of that information back to a central historian. At the same time, we're also setting up some more security, like a segmented network for SCADA security and things like that, so all of it's moving together.
Russel: It's the kind of thing where every time you touch one thing, everything else is impacted.
Nick: Exactly.
Russel: Years ago, I worked around the space program. It was ridiculous, because any time they added one pound to anything, everything changed, because they needed more fuel. Everything changed. It's like that when you say, this is the way we're going to do telemetry, and then you start designing your system within those constraints. You can either live with them or not.
Nick: Right. The IT perspective from a lot of the communications side is you tell us what you need and we’ll build around it so long as money is not an issue.
Russel: [laughs] How well is that working out?
Nick: Most companies don’t usually like to hear that. [laughs]
Russel: Certainly, been my experience.
Nick: We can get fiber to almost any location. You can get copper to most locations. You can get satellite to a lot of locations.
Getting the data back out again over satellite, it's gotten better, but you still have to worry about heavy cloud cover, antennas for radios turning because of a hard wind, any number of other issues, or fiber just getting broken because somebody decided to use a backhoe on it.
Russel: Anything that gets buried gets broken. That’s a rule.
Nick: Absolutely. It’s a common problem.
Russel: One of the things we did on a project, again, this has been a while back, we actually built a system where the local system was historizing the data three different ways. This was all around the constraints of the RTU we were using. We would get millisecond data. We would keep that for, I think it was, four hours, just a four-hour rolling window of millisecond data.
We had one second data for 24 hours. Then we had other data that we’d snapshot once every 15 minutes. What would happen is if there was a shutdown on the compressor, it would cry out. There was nothing going on on the telemetry until there was a problem. Then an analyst could demand poll and could get the very rich data. They would use that to determine what was the nature of the failure.
Do I need to send a mechanic? Do I need to send an electrician? Do I need to send an I&E guy in order to resolve whatever the issue is that I think is going on with this compressor? Just basically trying to reduce the drive time to go to the site. What they were doing before is they'd drive to the site, figure out the problem, drive back, and make a second trip with maybe a different skill set and the right stuff on the truck.
Of course, that’s not really what optimization is all about. It’s a different problem.
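For readers who want to see the shape of that three-way historization scheme, here is a minimal sketch using fixed-size ring buffers. The window sizes follow the ones Russel describes; the function names and the single-tag simplification are illustrative, not the original RTU's design.

```python
from collections import deque

FOUR_HOURS_MS = 4 * 60 * 60 * 1000
ONE_DAY_SECONDS = 24 * 60 * 60

# Rolling windows: appending past maxlen silently drops the oldest sample.
ms_window = deque(maxlen=FOUR_HOURS_MS)     # ~4 hours of millisecond data
sec_window = deque(maxlen=ONE_DAY_SECONDS)  # 24 hours of 1-second data
snapshots = []                              # 15-minute snapshots, kept


def ingest(t_ms, value):
    """Historize one millisecond sample three different ways."""
    ms_window.append((t_ms, value))
    if t_ms % 1000 == 0:                    # once a second
        sec_window.append((t_ms, value))
    if t_ms % (15 * 60 * 1000) == 0:        # once every 15 minutes
        snapshots.append((t_ms, value))


def demand_poll():
    """On a cry-out (say, a compressor shutdown), the analyst pulls the
    rich millisecond window to diagnose the failure before rolling a truck."""
    return list(ms_window)


# Feed a few seconds of simulated millisecond data.
for t_ms in range(5000):
    ingest(t_ms, 900.0 + 0.01 * t_ms)
print(len(ms_window), len(sec_window), len(snapshots))  # 5000 5 1
```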
Nick: Absolutely. That's the beginning of optimization: identifying the issues, having at least a solid monitoring plan, and seeing it working before you can even optimize anything.
Russel: I think one of the things that is consistent with what we were doing then versus what I hear you saying now is that it seems to make a lot of sense to do the high-rate data locally.
Nick: Agreed. If you don't have a manned station, for whatever reason, obviously that's not going to be as valuable.
If you do have somebody, or at the very least a rover that's working within a handful of stations and has access, and this goes back to the IoT issue you mentioned earlier, if you have an I&E tech or some other operator working in a general area, monitoring say three stations all together, they don't have to be on site.
They can get alerts straight to their phone or tablet or whatever and then know to go to that site and go take a look at whatever is going on.
Russel: Right. Let's talk a little bit about data management. We've been talking around that, more from a telemetry perspective. I think one of the challenges is, if I'm looking at an enterprise solution and I want to try to optimize, there's some need to normalize the data. What are you all looking at or evaluating? What are you finding as you're doing your analysis in that area?
Nick: Right now, we’re still trying to determine what the historian’s going to look like and where the data’s going to be stored. I think we’re talking about probably a three-tier system where we have local storage that’s the highest resolution. We’re looking at another potential storage that’s still in the operational area.
Being that it's all time-series data, it's a pretty large, flat file. Then, of course, we downsample any of that going back to a central historian, where all of the information could actually be attached to an analytics platform. That's the model we're following right now.
At each step past tier one, which would be at the compressor station, we're probably downsampling a little bit. It's second-by-second or millisecond at the station. Tier two, the operational tier, might still be a localized historian gathering for a particular region, and then there's a central historian gathering across the enterprise. The one that's gathering across the enterprise might be just minute-to-minute or every 15 minutes.
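Here is a minimal sketch of that tier-to-tier downsampling, assuming simple averaging as the reduction; real historians might instead forward min/max, last value, or interpolated points, depending on what the next tier uses the data for. The intervals mirror the ones Nick mentions.

```python
from statistics import mean


def downsample(samples, interval_s):
    """Collapse (timestamp_s, value) samples into one averaged point per
    interval_s-second bucket."""
    buckets = {}
    for timestamp, value in samples:
        buckets.setdefault(timestamp // interval_s, []).append(value)
    return [
        (bucket * interval_s, mean(values))
        for bucket, values in sorted(buckets.items())
    ]


# Tier one (compressor station): one sample per second for an hour.
tier1 = [(t, 900.0 + (t % 7)) for t in range(3600)]
# Tier two (regional historian): one point per minute.
tier2 = downsample(tier1, 60)
# Tier three (enterprise historian): one point per 15 minutes.
tier3 = downsample(tier1, 15 * 60)
print(len(tier1), len(tier2), len(tier3))  # 3600 60 4
```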
Russel: Interesting. I want to do a quick definition for the listeners that might not understand it. We're using the term downsampling. That's kind of been explained but, to more directly explain it, if I'm getting data every 100 milliseconds at the field site and I'm keeping the data every second, I'm basically keeping 1 out of every 10 samples. I'm downsampling by a factor of 10.
That’s what that means. I’m keeping less data. That’s interesting. There are actually three tiers. I hadn’t thought about it that way. The people in the field closest to the need to analyze are getting more data. Then the enterprise is getting a lesser set of data. What would be the needs of the uses of the data? How are they different from tier two to tier three? Have you gotten that far?
Nick: I think that’s a mixture of actual function and perception. If you’re looking at the model for the physical level of the sensors, the telemetry and everything, down at the lowest physical level, then next level up is your supervisory level, then management and then enterprise above that.
If at the physical level you're looking at having to manage everything operationally, and you've actually got the I&E tech in the field maybe looking at a problem, they're going to need pretty close to real-time data, from an incident's start to its end, to figure out exactly what maybe even led up to it.
That’s not necessary at the very highest level, at tier three level, because at an enterprise level, you’re making strategic decisions, not operational decisions. I think the biggest difference is that it’s operational, tactical. Tactical level, the tier two level, would be your engineering.
Maybe they’re looking at optimization. Maybe they’re looking at a couple other things. At the enterprise level, your tier three, that’s where you’re actually trying to tie it all back to financials.
Russel: At the operational level, I’m looking at that compressor’s not pushing what it should be pushing. Why is that? Or that compressor’s going down, why did it go down? Let’s figure that out. That’s an operational issue.
Nick: It’s overheating, whatever.
Russel: Versus the financial issue would be, how much are we moving? What's the cost of moving that? Should we be making a different decision about the package we're using to move the gas?
Nick: This starts tying back to the upstream side, but you start looking at decline curves across the field, instead of across just a single pad or whatever, and how all that is being fed to a particular compressor station. By tying all that together at the enterprise level, do you really need it millisecond by millisecond? No, probably not.
Russel: That makes good sense. That's a new way of thinking about it to me, but it makes a ton of sense. That actually gets to the next question. The buzzword du jour is data analytics. We need to do data analytics. Data analytics isn't new. I was doing it 30 years ago in college, programming in Fortran on punch cards. We were doing data analytics.
Nick: [laughs] Yep.
Russel: Don’t laugh too hard about that, Nick.
Nick: I’ll try not to.
[laughter]
Russel: Now the issue is data analytics. What is the challenge with figuring out what the analytics platform needs to be or do? Because there are a ton of products on the market in this space. There’s a lot of noise about analytics, tools and all that stuff. What are you guys finding? What are you using as your guiding principles to navigate through those decisions?
Nick: Those discussions really have just gotten started. For me, the KISS method is probably the easiest way to get started. [laughs] Really, really looking at it from the operational view first because the analysis of the data is not going to do anybody any good at the enterprise, at the top level, the C-Suite level, if it’s not contributing to the bottom line operationally.
It's gotten to the point now, and I can't tell you how many articles have talked about just what you said, where everybody's got their own opinion, everybody's got their own perspective. There's a lot of discussion over what needs to be done, but not a lot about what's being done.
I think it's: start simple. What analytics can you place around a particular issue, like analyzing an event frame? A particular compressor goes down. Let's grab the time from the beginning of what we believe is the incident's start to where it was finally resolved, and look at that information just for that one issue. That ties it back to the operational or tactical level.
You start looking at it from an enterprise view, and again you start tying in things like decline curves and hauling. If you've got produced water, you've got to haul it off. How you tie that into gas or oil, those kinds of analytics are going to be a completely different issue. There's not a whole lot of point in getting into that until you start fine-tuning the first stage.
A lot of the solutions that we've found so far, at least with the recent projects, offer some analytics from the very beginning. Maybe it's simple: totalizers, averaging and summing of certain things, or maybe minor calculations based on specific gravity or gas quality, that kind of thing.
Then you get a little further down the line, and you start looking at analytics for a lot more, even to the point of starting to tie in AI. Now you're looking at a lot of prediction and forecasting, and that gets pretty hairy. [laughs]
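As a concrete version of that "start simple" approach, here is a minimal sketch that cuts an event frame out of a time series and computes totalizer-style aggregates over it. The discharge-temperature signal and the incident window are fabricated for illustration.

```python
from statistics import mean


def event_frame(samples, start_s, end_s):
    """Return just the (timestamp_s, value) samples inside the incident
    window, from suspected onset to resolution."""
    return [(t, v) for t, v in samples if start_s <= t <= end_s]


def summarize(frame):
    """Simple first-pass analytics over one frame: totals, averages, peaks."""
    values = [v for _, v in frame]
    return {
        "samples": len(values),
        "total": sum(values),
        "average": mean(values),
        "peak": max(values),
    }


# Two hours of one-second data showing a slow overheat before a shutdown.
discharge_temp = [(t, 180.0 + 0.05 * t) for t in range(7200)]
frame = event_frame(discharge_temp, start_s=3000, end_s=3600)
print(summarize(frame))
```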
Russel: That’s why there’s becoming such a demand for mathematicians and statisticians because they can figure out what algorithms make sense to solve different kinds of problems.
You made a comment about having read a lot of papers and there being no consensus. I think we’re in an interesting place in how that technology’s maturing because the people who are offering the solutions, they need data to make their solutions deliver value.
Nick: Oh yeah. Absolutely.
Russel: There’s not a lot of drive to share data with those guys. Likewise, the guys who need the analytics, they want to figure out how to get a competitive edge by doing analytics. There’s not a lot of drive for them to create something that is going to get shared with their competitors. I think this is inevitable, but I think we’re really early.
The other problem you have is you're doing a project, and this project is a big integration project. You're looking at field automation and a field historian. You're developing a multi-tiered strategy around how you're managing your data. You're having to build out telemetry infrastructure, network infrastructure, and server infrastructure, and to pick software tools to manage all of it.
There's nothing out there, at least not that I'm aware of, that is an off-the-shelf solution that just solves the problem. Somebody with a garage and lots of free time might want to do that.
Nick: [laughs] Unfortunately you’re still going to run into the same response that we get with some of the discussions that we have where somebody might put all that stuff together and they still won’t like it.
Russel: The other issue about building something that works is, is it scalable? Making something work for a single compressor and demonstrating that value proposition is way different than doing it for an enterprise that’s got a fleet of several hundred if not several thousand compressors.
Nick: I think it also ties to that. Delivering the value probably has to come with a little bit of education for the user and for the company that it's being delivered to, which is a difficult proposition for any company trying to make a sale. I don't mean to say that this is an industry-wide issue, but I think there is a lack of understanding of what could be possible.
Coming from an IT background, I can definitely see where things can go, because I've seen where they've been. The people in oil and gas who are maybe less familiar with IT solutions, and certainly less familiar with IT history, have no idea what's possible now, let alone in 5 or 10 years.
Russel: In the pipeline world, we're risk-averse. We are late adopters of new technology because we're risk-averse. We're trying to adapt technologies that other people have figured out and been using for 10 years. Ten years, in the pipeline world, is not a long period of time.
Nick: I read a whitepaper recently. It talked about SCADA development over time and how it really stalled in the '80s. While IT continued to grow and advance, SCADA solutions, SCADA communications, the different protocols just stalled.
Russel: I think that’s a really interesting observation. I hadn’t thought about that before. Sorry, I interrupted you. Go ahead.
Nick: That's okay. I was just going to say it was only when the automation industry really started pushing forward in robotics and all that kind of thing that you saw a lot of advancement in SCADA. I feel like it's come around.
Three years ago, I was still seeing Windows XP on rigs. To put that in perspective for everybody, that was at least 10 to 15 years after XP's release. They are risk-averse. They're also slow to change. I don't blame them, because if it's not broke, don't fix it.
Russel: That’s right, but the reality is that this technology is coming. It is going to make big changes in our industry. Those of us that can figure out how to figure it out, adapt it early, we’re going to achieve a value proposition that’s better than and in advance of our competitors.
Nick: I think it’s going to tie back to things like being able to see a compressor overheating before it actually gets to shut down.
Russel: That’s a real simple example, but making that actually work in practice and changing the way people think about how they operate, that’s huge.
Nick: It is.
Russel: We're already making changes. When I started in the business, people were still largely driving around in pickup trucks, writing things down on clipboards, and faxing that back to a central office. They had some SCADA, but that was for critical monitoring. We're way beyond that. Nobody's sending guys around…I shouldn't say nobody. Many fewer people are doing it that way.
[laughter]
Russel: Sometimes, when you're sitting here in front of a microphone trying to podcast, the words just don't come out in an eloquent way. I ask the listeners to forgive that.
Nick: As my wife is fond of saying, English is hard.
Russel: [laughs] True, particularly for engineers.
Nick: [laughs] That’s right.
Russel: Nick, I think this is probably a good place to wrap up the conversation. One of the things I like to do is I like to try and sum it down to three key takeaways. I’m going to try to do that. Then I’ll ask you to make commentary about it.
I think what I hear you saying in all of this is, number one, there's not an off-the-shelf solution for doing compressor optimization, analytics, and all that. If I want to do this, I'm going to have to build something by pulling various things together.
The second thing I think is a key takeaway is you really need to understand who's using the data and how they're going to use the data, because that's material to defining what the system needs to be. There are three uses: the down-in-the-details use, the operator use, and the financial or enterprise use. Those are different contexts. That's the second key takeaway: understand the data use.
Then the third key takeaway is you need to do a little bit of analysis of where the technology's going and what you can do with it this year and five years from now, and build a plan around it.
To recap those takeaways: one, there's nothing off-the-shelf; you have to build it yourself. Two, you need to figure out who the users of the data are and how they're going to use the data. Three, you need to do an assessment of technology and where it's headed.
The corollary to that is that means you’re not building a solution and it’s fixed. What you’re doing is you’re putting in a capability and you’re going to improve it. Putting in a capability’s a different kind of project than putting in a tool.
Nick: Yep. If I can tie that last bit together and also wave the IT banner just a tiny bit, business does have to drive SCADA. I do not disagree with that even a tiny bit. Business has to drive SCADA, not IT. IT can help tie it all together in a way that is perhaps a little slower in getting to that point, [laughs] but certainly in a more robust way, in a way that only has to be done one time, with any luck.
I know from my experience working either as a project manager for offshore, some of the offshore projects where I was actually managing electrical and mechanical inspections, plus managing the actual control systems inspection for drilling systems and things like that, if you can get the team in the same room and get them working together, you can do some amazing things.
Russel: Yeah, particularly if you get them all starting from the same understanding of where we’re at now. A lot of people tend to start the conversation about where can we get to, and really, you’ve got to start the conversation with “Where am I now?” Any journey requires two things: a destination and a clear understanding of where I’m starting from.
Nick: Absolutely. Having everybody on the same team, with a guiding strategy of where it is we're going to go and what the destination is, and building a roadmap to that point, is going to be a huge step in the right direction for getting any of your analytics back out.
Russel: Just like SCADA, you spend a lot of money building infrastructure in order to get numbers on a screen. Then making changes to screens is relatively cheap. You’ve got the same kind of issue here of getting numbers back to an analytics engine is expensive. Doing the incremental analytics is relatively cheap, but you’ve got to put the infrastructure in that allows you to do that.
Nick: Absolutely, that’s right.
Russel: Nick, thank you so much for being on the Pipeliners Podcast. I’m certain we’re going to want to come back and talk to you again as you get a little further down this. Maybe you’ll share with us some of the other things you’re learning as you walk through the process.
Nick: Sounds great, I’ve really enjoyed it.
Russel: I hope you enjoyed this week’s episode of the Pipeliners Podcast. I enjoyed the opportunity to talk to Nick and certainly learned some new stuff, some new things about compressor automation, compressor analytics, and some of the challenges associated with doing a large scale project.
Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win to enter yourself in the drawing.
If you have ideas, questions, or topics you’d be interested in, please let me know on the Contact Us page at pipelinepodcastnetwork.com, or you can reach out to me directly on LinkedIn. Just look for my profile. It’s Russel Treat. Thanks again for listening. I’ll talk to you next week.
[music]
Transcription by CastingWords