In this episode of the Pipeliners Podcast, Russel interviews Stuart Mitchell from PipeSense, discussing an innovative approach to pipeline leak detection using pressure events.
Stuart explains the evolution of their technology, starting with the application of negative pressure waves and the challenges associated with false positives. The conversation covers the use of high sample rates, machine learning techniques like convolutional neural networks, and the integration of edge processing and cloud analysis to enhance accuracy and eliminate false positives.
Listen to the episode now to learn more about the complexities of leak detection technology, the importance of accurate leak location, and the broader potential of leveraging data for pipeline performance analysis.
Direct Leak Detection Using Pressure Events Show Notes, Links and Insider Terms
- Stuart Mitchell is the President of PipeSense. Connect with Stuart on LinkedIn.
- PipeSense is focused on providing technology-driven solutions to integrity management challenges within the oil & gas industry, helping operators enhance pipeline performance through data analysis. PipeSense has created leak detection technology that identifies and locates spontaneous leaks within seconds, enabling pipeline operators to rapidly mitigate product loss and minimize environmental damage.
- Leak Detection is the process of monitoring, diagnosing, and addressing a leak in a pipeline to mitigate risks.
- Negative Pressure Wave (NPW) is a method of leak detection that can locate the position of leakage by collecting the negative pressure induced by the sudden leak.
- ML (Machine Learning) is an application of AI (artificial intelligence) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.
- Edge Computing moves processing from the cloud to devices at or near the data source, optimizing bandwidth use and enabling local, low-latency analysis in the field.
- Cybersecurity is the state of being protected against the criminal or unauthorized use of electronic data, or the measures taken to achieve this.
- OPC (Open Platform Communications) is a data transfer standard for communicating device-level data between two locations, often between the field and the SCADA/HMI system. OPC allows many different programs to communicate with industrial hardware devices such as PLCs. The original standard was dependent on MS Windows before shifting to an open platform.
- OPC UA (Unified Architecture) is a platform independent service-oriented architecture that integrates all the functionality of the individual OPC Classic specifications into one extensible framework.
- Pigging refers to using devices known as “pigs” to perform maintenance operations. The tool used for inline pipeline inspection has become known as a Pipeline Inspection Gauge (PIG).
Direct Leak Detection Using Pressure Events Full Episode Transcript
Russel Treat: Welcome to the “Pipeliners Podcast,” Episode 318, sponsored by EnerSys Corporation, providers of POEMS, the Pipeline Operations Excellence Management System, compliance and operations software for the pipeline control center to address control room management, SCADA and audit readiness. Find out more about POEMS at EnerSysCorp.com.
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now your host, Russel Treat.
Russel: Thanks for listening to The Pipeliners Podcast. I appreciate you taking the time, and to show that appreciation, we give away a customized YETI tumbler to one listener every episode. This week our winner is Desi Urias with Vaquero Midstream. Congratulations, your YETI is on its way. To learn how you can win the signature prize, stick around till the end of the episode.
This week, we’re going to speak with Stuart Mitchell from PipeSense, and talk about direct leak detection using pressure events. Stuart, welcome to the Pipeliners Podcast.
Stuart Mitchell: Hi, Russel. Good to be here. I’m looking forward to the discussion we’re going to have.
Russel: Absolutely. Before we dive in, if you would, tell us a little bit about who you are and your background and how you came to be working in leak detection.
Stuart: I’m Stuart Mitchell. My background, I guess, I’ve been working now 27 years in the oil & gas industry, mostly focused in the area of research and development. Done a lot of work both onshore and offshore in this area.
How I got into leak detection was that I’ve always been interested in developing new technologies. Having spent most of my time offshore, a really interesting onshore pipeline tech came up around leak detection. Having just exited one business and being a serial entrepreneur, I kind of started up another business to go and do pipeline leak detection. That’s how we got here.
Russel: My grandpa called that serial entrepreneur thing being a businessman.
Russel: That’s just a fancy name we made up to feel better about it.
Stuart: It does. It makes us all feel better.
Russel: Anyways, look, I want to talk to you about your tech and what you’re doing. Tell us a little bit about the company that you’re with and give us just a quick overview of your technology, what you’re doing.
Stuart: Sure. The company that I’m with now is called PipeSense. PipeSense, we started the business here to help people enhance their pipeline performance through data analysis. Part of that technology that we apply that data analysis to is pipeline leak detection.
Our technology has evolved through a couple of stages, key stages. For those people out there that are not hugely familiar with pipeline leak detection, the first technique that we applied was called negative pressure wave. Just as a real quick explanation of that, think of sonar within a pipeline.
When you get a pipeline leak, you get a pulse in the pressure within the pipeline. That pulse travels up and downstream and can be detected by pressure sensors. That’s the basic technology. Generally, there have been problems around using negative pressure waves in the industry. It’s been around for a while.
The main problems with it are that if you want to pick up smaller leaks, you’re very, very prone to having false positives. False positives are bad because you’re effectively telling the operator they’ve got a pipeline leak when they haven’t.
What we looked to do was try and, first of all, enhance that technique. Couple of ways we do that. Do you mind if I get into that, Russel, now?
Russel: No. Please, this is great. Just keep going. I’ll jump in when I have a question.
Stuart: Sure. No problem. The first of those techniques is applying a very high sample rate. We sample pressure on the pipeline at about 1,000 samples a second. What that enables us to do is get a very detailed, very granular view of any pressure events on the pipeline.
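As a rough, hypothetical sketch of what flagging sudden drops in a 1,000-sample-per-second pressure stream might look like (the window and threshold values are purely illustrative, not PipeSense's actual detection logic):

```python
import numpy as np

FS = 1000  # samples per second, matching the rate described in the episode

def detect_pressure_events(pressure, window=50, drop_threshold=2.0):
    """Flag indices where pressure falls by more than drop_threshold
    (in the units of the trace) within `window` samples.
    A crude stand-in for NPW event detection; values are illustrative."""
    p = np.asarray(pressure, dtype=float)
    drops = p[:-window] - p[window:]  # pressure drop across each window
    return np.flatnonzero(drops > drop_threshold) + window

# Synthetic trace: a steady 500 psi line with a sharp 5 psi drop at t = 0.5 s
p = np.full(FS, 500.0)
p[FS // 2:] -= 5.0
events = detect_pressure_events(p)  # indices where the drop is first seen
```

At this rate, a real system also has to decide which detections are worth keeping, which is where the accuracy and timing checks Stuart describes next come in.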
Once we detect a pressure event on the pipeline, what we’ve got to establish is whether it’s really a leak event, just normal pipeline operation, or something else that’s going on. The couple of ways that we do that is, once we think we have a pipeline leak, we then look at accuracy and timing between pipeline sensors.
Our technology, let’s say we put sensors on the pipeline that are looking at pressure every 10 to 20 miles. In order to determine whether it’s actually a leak event, or whether it makes sense as a leak event, we do two things.
We look at the accuracy of the predicted leak location. By being able to accurately predict leak location, and we are accurate to about 20 to 50 feet, we can actually check on the pipeline map whether that location of a leak makes sense compared to what else is going on in the pipeline.
The second thing we do is we check the timing. A leak event typically picks up on more than one field processing unit, one of our pieces of hardware on the pipeline. What we’ll do is we’ll check the timing of the signal between multiple FPUs. If that progression makes sense in terms of timing, and timing is related to location, then we’ve got more confidence it’s an actual leak event.
What this technique doesn’t do is get away from pressure events on the pipeline that look very, very similar to leak events. Let’s say you rapidly shut a valve or you rapidly open a valve and you get a very quick, transient pressure change, that can still confuse a negative pressure wave system.
What we then looked to do, Russel, was moving on from there and applying machine learning to the problem. What we’re actually doing, as I said, is gathering an awful lot of data, so a thousand samples a second in real time on the pipeline in multiple places.
What we then do, for a new technique to try and improve accuracy and do away with false positives, is to run an ML technique, machine learning technique, called convolutional neural networks. What they’re basically doing is that they’re processing the data. In this case, we’re looking to be able to identify the visualization of a leak event.
We take a pressure reading, we turn that into an instantaneous frequency plot and then we look at that plot using convolutional neural networks. Then, we do most of that processing, for that part of it, what we call on the edge. Edge computing, which is taking the computing away from the cloud and putting it actually at the location of the pipeline.
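To make the instantaneous frequency idea concrete, here is a minimal sketch that converts a pressure trace into an instantaneous-frequency series using the analytic signal (Hilbert transform). The synthetic signal and all parameters are illustrative only, and the CNN stage that PipeSense applies to such plots is omitted:

```python
import numpy as np
from scipy.signal import hilbert

FS = 1000  # samples per second

def instantaneous_frequency(pressure, fs=FS):
    """Convert a pressure trace into an instantaneous-frequency series
    via the analytic signal (Hilbert transform)."""
    analytic = hilbert(pressure - np.mean(pressure))
    phase = np.unwrap(np.angle(analytic))
    # phase derivative (rad/sample) converted to Hz
    return np.diff(phase) * fs / (2.0 * np.pi)

# Synthetic example: a 5 Hz oscillation with a sharp step ("leak-like"
# transient) halfway through one second of data
t = np.arange(FS) / FS
pressure = np.sin(2.0 * np.pi * 5.0 * t)
pressure[FS // 2:] -= 0.5  # sudden pressure drop
inst_f = instantaneous_frequency(pressure)  # 999 frequency estimates
```

Plotting such a series over short windows yields the kind of image-like representation a convolutional network can classify.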
Once we’ve done that identification using the CNN locally at the pipeline, what we’ve determined is that we’ve got basically an event, something that’s different from normal pipeline operation. Then, when we’ve identified something that differs from normal pipeline operation, we send that data off to the cloud.
Once we’ve gathered a number of different sets of data from individual field processing units on the pipeline, we then do some further analysis in the cloud.
What we’re trying to do now is we’re taking what are just pressure events on the pipeline, and that could be a pipeline leak, but it also could be a pressure surge. It could be a control valve that’s faulty and chattering on the pipeline. It could be any number of pipeline events that could have happened and we’ve detected.
Then, we use secondary processing in the cloud to try and classify those. What we’re doing is we’re comparing different types of pressure events against a training set. In our case, we’ve gathered something around 25 to 30,000 pressure training sets. Those events we then can compare to what we’ve just received from the pipeline when an event has been detected.
Russel: Let me jump in here, Stuart, because this is a lot of information for somebody listening to grab and process.
Stuart: Yeah, true.
Russel: Certainly we have some listeners that are leak detection guys and are familiar with negative pressure waves. There’s a couple of things that you guys are doing that are different from what I have heard from others.
First off, what’s common in negative pressure waves is very high sample rates of pressure and looking for specific kinds of signatures. That’s common. One of the things I’ve seen some of the technologies do is they require two sensors, and they compare the data between the two sensors to find leak location.
What I think you guys are doing that’s unique, and correct me if I don’t have this right, one of the things you’re doing is you don’t require two signatures, you just require one.
The other thing you’re doing that’s unique is you’re combining edge processing to identify the data sample of interest, if you will, and then the data sample of interest gets passed to the cloud, and then you do further analysis in the cloud to eliminate the false positives. One of the other things you’re doing that’s unique is deliberately eliminating false positives.
Stuart: That’s correct. If we take the second part of it first, because I think that’s the most important part of the technique to what we’re doing here.
As I said, we’ve moved from just purely negative pressure waves, which is just looking at, as you say, a high sample rate at a specific type of signature. When you apply different filters to the data, that signature still stands out. That’s a traditional negative pressure wave.
What we’ve moved on to do is to take those leak events, the pressure events on the pipeline, whether leak events or otherwise, and turn those into instantaneous frequency plots. By doing that and sending that off to the cloud, we can then use cloud processing, a lot of horsepower there, to really look through a large data set of different pressure events.
It enables us to, my favorite phrase is, “pick the wood out from the trees.” What we’re trying to do here is determine that we definitely have a leak event, because the signatures of a leak event, when you use this type of machine learning technique, are very different from anything else on the pipeline.
Russel: It’s interesting. I’ve done podcasts on edge processing, and one of the things that is interesting about edge processing is you can deal with very large data sets, which allows you to do a certain kind of analysis. The downside of that is if you push all that data to the cloud, then it very quickly becomes uneconomic.
Russel: The ability to understand, “This is what I do at the edge, and this is what I do at the cloud, and this is how I move the data around and how I manage that data,” that’s pretty interesting and intriguing to me, because that’s something I’ve been talking about for a while.
There’s only two places where I’ve seen that done, where it makes a lot of sense, and this is the second one I’ve heard of. The other is in image processing, where you’re taking very large data sets of images and then performing analysis in the cloud to find the data sets of interest.
Stuart: We’re effectively doing the same. It’s a great analogy you used there, a great comparison, Russel, because effectively we are doing analysis of images. We’re turning the pipeline event effectively into an image, and we’re doing that image processing, or the turning of that event into an image, at the edge. We’re using edge processing to do that.
Then, what that enables us to do is we’re not kicking huge amounts of data to the cloud. We’re only passing data to the cloud, a small data set, when we get a pressure event. We still don’t know what that pressure event is, but we’re still only passing data when we see a pressure event.
That’s much less common. It takes away the load of sending a thousand samples a second, for instance, to the cloud. The real weakness, Russel, if you don’t do that and you don’t do your processing at the edge, is that you become very susceptible to data dropouts and loss of communication.
For a system where, let’s say you’re doing all of your processing remotely, or you’re relying on interaction between the units to do confirmation, then if one of those units drops out of comms or you cannot send that data for whatever reason to the cloud or common locations to do comparison or processing, then you’re effectively blind.
Your system is down. For the duration that your comms are down, your leak detection is down.
Russel: The way that’s typically mitigated is you put on the remote device a very accurate clock, and you’re time syncing all of that data. From a processing standpoint, that’s burdensome.
Stuart: Yeah, it is. Actually, intriguingly, Russel, we’ve managed to do that and maintain the ability to still process that data at the edge. In fact, all of our data is timestamped as well.
The way that we do that is each individual unit uses a GPS time server. All of the units are commonly connected to the same GPS satellite, so picking up the same time signature, and everything is very, very accurately timestamped.
Russel: For those of us that are knowledgeable of those details, you say that and you make that sound easy, and I know that’s not easy.
Stuart: Yeah. [laughs] It was a particularly complex process in developing the hardware.
Russel: Time syncing multiple devices to a millisecond and making sure that they don’t drift by very much [laughs] is nontrivial.
Stuart: We could get into the weeds here, but it involves a lot around managing data interrupts and processing interrupts in the correct way, so that you don’t stop the ability to take in data and process it whilst you’re searching for timestamps and timestamping at the same time. It’s something…
Russel: Exactly. I just want to point that out. My peers that work in that area will know exactly what’s involved in doing that.
Stuart: As you can imagine in developing this, Russel, there was an awful lot of testing, a degree of frustration, and a lot of going back and scratching our heads and working things out in the initial term, but we’ve managed to work it out now.
That system – the hardware and software system without the machine learning, so the more traditional negative pressure wave, our enhanced version of that – has been running reliably using that technique for some three years. That whole process, we’re extremely comfortable with.
Really, moving on to the machine learning was, A, to get rid of false positives, so move towards zero false positives, a true zero on pipeline leak detection systems. The second thing, and the interesting thing, is you’ve got so much data. You’ve got the ability to take that data and break it down into different types of events on the pipeline.
Russel: Before we go there, Stuart, there’s another question I want to follow in the leak detection. The other thing that comes up for me when I hear this is, “OK, so I’m doing the stuff I’m doing in the field. I’m identifying a signature for analysis. I’m sending that signature for analysis to the cloud. The cloud is performing that analysis. Then, I’ve got to get an alarm back to the control center.”
Russel: All of that has to happen in a timeframe. How are you managing that kind of, if you will, the backend part of that process?
Stuart: We built a pretty flexible backend. The first thing is that our system is initially cloud deployable. In its base form, it’s a cloud deployed system that has a Web interface.
Typically, it’s the sort of thing that you can pop up on your laptop or on your phone and have an interface there that will tell you basically that everything is functioning, all the units on the pipeline are functioning OK, and if you’ve got a leak event. We keep it very simple: everything’s functioning OK, you’ve got a problem, or X marks the spot with the GPS location for a leak event.
How do we ensure that that’s communicated to the operator effectively in a timely fashion? First of all, the leak detection process end to end is what I would call near real time. It’s as near as you’re going to get. For leak detection, typically within two to three minutes, we’re able to inform the operator through our system.
People get heartburn about deploying systems on an outside cloud. Some of the pipeline operators rightly are aware of cybersecurity. We’ve developed the system so we can deploy it in our cloud or in their cloud instance. We can even deploy it locally on a server for them within their own building.
That’s kind of the mechanics of it. We can take any of those methods and connect them to their control room using straightforward OPC UA connection. It gives us a very flexible way to integrate that into their control room process. Now, you’ve enabled, A, a very flexible deployment. B, you can communicate directly with the control room.
The other part of this that’s nice is we then set this up to communicate to individuals in the organization. We can have up to 20 individuals assigned per pipeline. What we do is we can set up to send those guys emails.
We can notify them by text message. We can send them a GPS location in the field via text. It’s an incredibly flexible way. We ensure that the information gets to the operator within, as I said, two to three minutes across their organization.
Russel: What’s really interesting to me about this is the fact that you, and this is just interesting to me about negative pressure waves in general, but the fact you can send a location is super useful.
Having a leak alarm on a segment that’s 20 miles long is very different than having a leak alarm on a segment that’s 20 miles long saying you have a leak at mile marker X. That’s a very different kind of thing. It impacts your whole response.
Stuart: Absolutely. As I said, leak location is very important to us for that reason, but also in being able to help to rule out false positives as well. It’s been something that we’ve been incredibly focused on.
For me, the important steps that we go through and the technique that we use, we actually have two ways now going about leak location. The more traditional way for us is we utilize those timestamps that we were talking about.
When we first put the system in, we go to the field with portable units and we do a calibration, what we call a pitch and catch. We’re putting a pressure sensor on the pipeline with a small bore quarter turn valve, a release valve. We’re doing it at one location, and then we’re putting on a pressure sensor at a second location a known distance away.
What we do is, and I’m very old school, I don’t particularly put much stock in simulation work, we simulate a leak. We release a small amount of fluid, maybe five seconds – liquid or gas – at the leak threshold size the client wants to operate at, what they want to see it at. Effectively, we’re simulating the onset of a pipeline leak.
What we do is we catch that between the two sets of sensors and accurately timestamp when the first sensor sees it and the second sensor sees it. Now we have a time between sensors. We have a propagation rate because we know the distance. What we pair with that data is we obviously have pressure data.
We pair that data then with temperature readings on the pipeline at each location. We pair that, if available, with density readings. What we’re trying to establish here is a correlation between the speed of sound of the fluid – the liquid or gas within the pipeline – and the propagation rate that we measured at the time.
What we do is, when we get a leak event that’s been detected, all of that data goes off to the cloud. We also resample temperature and density if they’ve got it at the time, and we use those to recalibrate that propagation rate on the fly.
Why is that important? If you didn’t do that, let’s say you’re working with NGLs, effectively compressible fluids. At times, they can be squishy. Effectively, the density of those fluids, and therefore the speed of sound, is very susceptible to pressure and temperature change, so your propagation rate is very susceptible to those things.
If we’re talking about trying to be very accurate with location, 20 to 50ft is what we achieve, you cannot do that without recalibrating your propagation rate on the fly. You’re simply relying too much on having a very, very steady state from when you first calibrated to when you’re now operating the system.
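The arithmetic behind this kind of two-sensor timing location is straightforward once arrival times are synchronized and the propagation rate is calibrated. A minimal sketch (the sensor spacing and wave speed here are hypothetical, and the on-the-fly recalibration from temperature and density is omitted):

```python
def locate_leak(x1_ft, x2_ft, t1_s, t2_s, wave_speed_fts):
    """Estimate leak position between two sensors from NPW arrival times.

    x1_ft, x2_ft   : sensor positions along the pipeline (feet), x1 < x2
    t1_s, t2_s     : synchronized arrival timestamps at each sensor (seconds)
    wave_speed_fts : calibrated propagation rate (feet per second)
    """
    midpoint = (x1_ft + x2_ft) / 2.0
    # An earlier arrival at sensor 1 pulls the estimate toward sensor 1
    return midpoint + wave_speed_fts * (t1_s - t2_s) / 2.0

# Example: sensors 15 miles apart, wave speed 3,300 ft/s (a hypothetical
# calibrated value). A leak at 30,000 ft reaches sensor 1 after 30000/v
# seconds and sensor 2 after (79200 - 30000)/v seconds.
x1, x2, v = 0.0, 15 * 5280.0, 3300.0
t1, t2 = 30000.0 / v, (x2 - 30000.0) / v
print(locate_leak(x1, x2, t1, t2, v))  # recovers roughly 30000.0 ft
```

In practice the wave-speed term would be recomputed from live temperature and density readings at the time of the event, as Stuart describes.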
Russel: In practice, for most pipelines running, particularly liquids, that density is changing. Even if it’s the same liquid, it’s still changing.
Russel: To the extent you’re batching on a pipeline or anything else, then that just raises that complexity. If you’re able to calculate speed of transmission between two points, that’s going to be able to get you back the density.
Stuart: Yeah, absolutely. We’re not giving an absolute value, but what we are doing is using the same parameters as when we put the system in on the pipeline and first got that propagation rate to recalibrate it at the time of the leak event. That allows us to be incredibly accurate in location.
Russel: It’s fascinating.
Stuart: The second way that we’re doing that now is actually something that we’ve learned through the huge amount of data that we’ve gathered and the large number of leak events and simulated leak events we’ve gone through. We’re now also able to do a second check, which is basically trending on event intensity.
Because we’re changing this now into a frequency plot, we’ve got an intensity of that response. As you can imagine, the closer you are to a pipeline event, a spontaneous event like a leak event, so it’s initiated away from a sensor, the closer you are to that, the higher the response. The magnitude of the response is going to be greater.
As you move away from that, you get a degradation in that magnitude. Through all our analysis now, we’re able to actually trend the magnitudes at different sensor locations to be able to also pinpoint leak location that way. We’ve now got two ways of going at this.
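One simple way to illustrate the magnitude-trending idea, purely as a sketch since the decay model PipeSense actually fits is not described here, is to assume exponential attenuation of the event magnitude with distance and invert it for the origin:

```python
import math

def locate_by_intensity(x1_ft, x2_ft, m1, m2, alpha):
    """Cross-check leak position from event magnitudes at two sensors,
    assuming exponential decay M(x) = M0 * exp(-alpha * |x - x0|).
    alpha (1/ft) would be fitted from field data; values are illustrative."""
    return (x1_ft + x2_ft) / 2.0 - math.log(m1 / m2) / (2.0 * alpha)

# Round trip: synthesize magnitudes for a leak at 30,000 ft between
# sensors at 0 and 79,200 ft, then invert them back to a location
alpha = 1e-4
x0, x1, x2 = 30000.0, 0.0, 79200.0
m1 = math.exp(-alpha * (x0 - x1))  # stronger response at the nearer sensor
m2 = math.exp(-alpha * (x2 - x0))
print(locate_by_intensity(x1, x2, m1, m2, alpha))  # recovers roughly 30000 ft
```

Having two independent estimates, one from timing and one from intensity, is what lets the system cross-check a predicted leak location.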
Russel: It’s really interesting what you guys are doing and how you’re doing it. The other thing, as we were having our conversation prepping for this call, that came up for me is you use the term “pressure event detection” versus “negative pressure wave” to make what you’re doing distinct.
What comes up for me when I start thinking about that is, just from an operations and maintenance standpoint, there’s a lot of information you could get that you otherwise could not get because there’s other kinds of pressure signatures for things like valves that are leaking or flow control valves that are not properly tuned or that are chattering or…
Just all kinds of other things that are just mechanical things happening on the pipeline that this system could detect, because once you get that pressure signature, you can look for other things other than just leaks.
Stuart: Yeah, absolutely. That’s what we found to be very important. In just focusing on leak events, you’re actually missing out on effective or efficient use of all of that data that you’ve gathered. A thousand samples a second – I keep coming back to that point. That’s a lot of information about how that pipeline is operating.
By moving away from just a focus on leak events to pressure events, and the way that we can use machine learning to categorize those pressure events now, we can actually work with the operator and say, “Hey, look, outside of leaks, what else is important to you in operating your pipeline? What would help you with your pipeline operations, your pipeline efficiency, even?”
We can take samples of those type of events, teach the system to look for them, and notify the operator when those are occurring as well.
In the end, what we’re trending towards here, Russel, is being able to do an end-to-end, lifetime pipeline performance analysis for the operator, using the huge amount of data that we’ve gathered to feed that. As I say, whether that’s pipeline leak events, which are incredibly important – detecting those quickly, very, very important.
It’s also everything else that’s going on, on the pipeline. To be able to put those into buckets for the operator, and to help them to understand what’s happened throughout the life of that pipeline and what affects pipeline life, is where we’re going with this. That’s the ultimate goal, I feel.
Russel: I think that the opportunity for improving mechanical reliability for all the rotating equipment and control equipment is immense here, because that’s the kind of thing that there’s not a lot of…I don’t know, I’m getting a little over my skis here.
When I think about the alternative means, vibration analysis and things like that, I don’t think you necessarily eliminate that, but you actually can get confirming signals.
Stuart: Yeah, absolutely.
Russel: That can help you prioritize your maintenance activities.
It’s a small thing.
Stuart: It’s a fascinating area to get into. The neat thing with the way that we’ve set the hardware up as well is that, despite the complexity of what we’re doing and all the pressure on the hardware to perform those complex activities, we’ve actually got spare capacity.
We build our hardware with spare high speed and low speed sample input channels. We can actually take other data from the operator as well. If they wanted us to look at that and do that, we could do additional processing for them as well on that data.
What you can then get is a complete combined data set from multiple different sources and sensors along the pipeline to really build, as you said, “Let me know how I improve mechanical performance or mechanical lifetime of my pipeline.” We can do that through gathering all these different data sets.
Russel: Stuart, before we wrap up here, I just want to tell the listeners that you and I met at an event where I was chairing a panel talking about product commercialization. You were one of the members of that panel.
Russel: Where would you say that you guys are in the commercialization maturity, and what are you working on now in that process?
Stuart: I guess the first part of this technique we started developing back in 2018. We’ve gone through, probably, all those years now, five years, of pain and suffering in developing a new product and testing that. We do a lot of our own confirmation testing on our own test loop facility that we have. Actually, our facility is in Houston.
We’ve also been able to test this equipment now and the techniques in the field for probably the last two to three years as well. We’ve got an awful lot of experience operating the system near commercial.
We’ve actually had an installation of the system, and you mentioned running the system one-ended as well. We’ve actually got an example of that offshore, and that’s been offshore for some two to three years as well now.
We’re starting to work into our first full commercial projects onshore, so the system, I would say, is fully commercialized at this point. We’ve got enough data and trust since 2018 to know that the system does what it says on the tin.
Where we are now is just into deployment and commercialization. The focus for us, Russel, in doing that is the important part that’s often missed: how do we integrate this with the operator? You can have a very mature technology, but if you don’t work with the operator on how you deploy it and how they benefit from it, it just falls on its face. It doesn’t get used.
I guess that’s the area that we’re most focused on now in the operators that we’re starting to work with, is not only how do we digitally integrate this system with them, how do we operationally integrate the system with them?
Russel: Yes, that’s the thing that I added to Chris Alexander’s maturity matrix was, what I call, the operations maturity, which is the part that the operator has to do to be able to actually deploy, operate, and maintain the technology.
Stuart: It’s absolutely key. We’ve had experience, when we first put these systems in, of the operator or a guy in the field having picked up an issue and suddenly picking up the phone to one of the guys in the office here, because we were the guys in the field that helped put them in and they’ve still got our phone numbers.
It’s like, “This is the wrong way to go about this, guys. You need a detailed process in your control room, for the system and those that utilize it, to check that you haven’t got a false positive, to really put it into running and operating.”
Russel: Right, and the operator has to build a whole capability whenever they deploy a technology.
Stuart: Correct. They have to do that, otherwise they will not get the full benefit from the system.
Russel: From what you’re telling me, that’s where you are in your commercialization journey, is working with operators to get those kinds of things in place.
Stuart: That’s correct, yeah. I think that’s the most critical step.
Russel: Stuart, if somebody wants to learn more about PipeSense and get in touch with you guys, how would they go about doing that?
Stuart: Really simple. First of all, the first point of contact and best source of information is our website, SimplePipeSense.com. That’ll take you to the website. There’s a lot of information on there about different techniques that we’re using this for. A really quick example, Russel, just in very short sentences: we’re using this to support real-time leak detection during hydrotesting.
We can track pigs in real time using these techniques. We’ve even developed a technique to find pre-existing leaks on pipelines when you put the system in, and you’ve already got a leak on the pipeline to be able to find that as well. Go to the website, find out all that information. You can also contact us through the website. It’s got all our contact details on there.
Russel: As always, we’ll put all that information in the show notes. Stuart, look, thank you for taking the time. Thank you for walking us through the gory details – if you will – of what you guys are doing and how you’re approaching this. It certainly seems like you’re doing some really interesting things. It’ll be interesting to see where you are in another three years with us.
Stuart: Yep. I’m looking forward to the journey, Russel. It’s been a journey getting here, but we’re very excited about what this technology can do.
Russel: I hope you enjoyed this week’s episode of The Pipeliners Podcast and our conversation with Stuart. Just a reminder before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit PipelinePodcastNetwork.com/Win and enter yourself in the drawing.
If you would like to support the podcast, please leave us a review on Apple Podcasts, Google Play, Spotify, wherever you happen to listen. You can find instructions at PipelinePodcastNetwork.com.
If you have ideas, questions, or topics you’d be interested in, please let me know in the Contact Us page at PipelinePodcastNetwork.com, or reach out to me on LinkedIn. Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords