This week’s Pipeliners Podcast episode features Sean Donegan and Allan Adams of Satelytics discussing the specific application of using satellite imagery to perform leak detection on an entire basin.
In this episode, you will learn about why the Satelytics data engine was formed, the future of data collection technology and why the algorithm continues to change over time, how Satelytics is working with oil and gas companies to share the cost of data collection, and the importance of cloud computing when collecting data.
Satellite Imagery for Leak Detection: Show Notes, Links, and Insider Terms
- Sean Donegan is the President and CEO of Satelytics. Connect with Sean on LinkedIn.
- Allan Adams is the Chief Scientist at Satelytics. Connect with Allan on LinkedIn.
- Satelytics is the foremost remote sensing leader with a full staff of Ph.D. level expertise. The company uses proven science, adept software, and powerful technology to meet the toughest business challenges.
- Jay Almlie is a Principal Engineer at the EERC and a leader of the iPIPE consortium. Connect with Jay on LinkedIn.
- EERC (Energy & Environmental Research Center) is a research, development, demonstration, and commercialization facility for energy and environment technologies development located in Grand Forks, North Dakota. EERC is a leading developer of cleaner, more efficient energy to power the world and environmental technologies to protect and clean our air, water, and soil.
- iPIPE (the intelligent Pipeline Integrity Program) is an industry-led consortium whose focus is to contribute to the advancement of near-commercial, emerging technologies to prevent and detect gathering pipeline leaks as the industry advances toward the goal of zero incidents.
- Leak Detection is the process of monitoring, diagnosing, and addressing a leak in a pipeline to mitigate risks.
- ILI (Inline Inspection) is a method to assess the integrity and condition of a pipe by determining the existence of cracks, deformities, or other structural issues that could cause a leak.
- Bakken Formation is one of the largest contiguous deposits of oil and natural gas in the United States. It is an interbedded sequence of black shale, siltstone, and sandstone that underlies large areas of northwestern North Dakota, northeastern Montana, southern Saskatchewan, and southwestern Manitoba.
- Great Plains Software (Microsoft Dynamics GP) is a mid-market business accounting software or ERP software package marketed in North and South America, U.K. and Ireland, the Middle East, Singapore, Australia, and New Zealand.
- Brian Epperson is the Senior Manager of Environmental & Regulatory at Hess Corporation. Connect with Brian on LinkedIn.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at a remote location.
- AWS (Amazon Web Services) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis.
- AI (Artificial Intelligence) is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and animals.
Satellite Imagery for Leak Detection: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 125, sponsored by Satelytics, a cloud-based geospatial analytics solution processing multi and hyperspectral imagery from satellites, aircraft, drones, and fixed cameras to lower the cost and improve the timeliness of identifying leaks, encroachment, ground movement, and other pipeliner concerns. To learn more about Satelytics, visit satelytics.com.
[background music]
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now, your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time, and to show that appreciation, we’re giving away a customized YETI tumbler to one listener each episode. This week, our winner is Edward Naranjo with Honeywell. Edward, if I didn’t get your last name correct, I apologize, but you’ve got a YETI coming regardless. To learn how you can win this signature prize pack, stick around to the end of the episode.
This week, Sean Donegan and Allan Adams will return to talk about using satellite-based imagery to monitor an entire basin. Sean, Allan, welcome back to the Pipeliners Podcast.
Sean Donegan: Great to be with you again, Russel.
Allan Adams: Thank you for having us.
Russel: So we’re going to pick up where we left off. A couple of weeks ago, we had a conversation about the fundamentals of image analysis, and this time, I want to shift a little bit and talk about how does this actually work in practice.
We talked about how the sensors collect the data, and then you guys are applying analytics to the data to find things. The thing that’s interesting to me — and we’ve talked about this a little bit off-mic — is that you guys are actually doing this on a basin-wide approach. When you think about satellite imagery, that makes a lot of sense.
Let’s start by talking about, what is the approach if you’re going to do this on behalf of an operating basin?
Sean: So, Russel, again, great to be with you. This really is another North Dakota first, if you will. We love how forward thinking the operators in that area are, along with the regulators, who see the operators as customers, which is most refreshing.
You've done a great series on iPIPE, which was the beginning of this, if you will, marshaled by Jay Almlie at EERC. The concept was that a group of operators, I think it was nine originally, would get together. The biggest cost of monitoring with data from satellite, typically, is the data cost itself.
This group — they got together and formed this consortium that would share the cost of the data. Then they also agreed, which was really another great move and another first, to share their best practices.
They set about looking for liquid leak detection. That grew to encroachments and changes in the right of way, whether a facility or a pipeline, even things like produced water, measuring salinity in water and on land.
From that, it became very apparent that it was an overwhelming success. The next maturation of that, or the grown-up version if you will, would be to monitor the Bakken in its entirety each week.
I think there are currently 71 operators, gatherers, facilities of one make, shape, or form or another. There are the transmission companies, of which I think there are five. Don’t quote me on those numbers, but I’m roughly accurate. I’m pretty accurate.
The objective would be, here, to share, again, the cost. Each individual company would have their data. It’s their domain. It’s their right to set the alert levels and the alarm levels. It’s pretty straightforward.
We would gather that data from satellite once a week. Literally within a couple of hours, we will have run the data through the Satelytics engine, so that you, as one of those many participants, have alerts and alarms for conditions that are outside the thresholds you've set.
Whether that's leak detection, encroachment, a change in a facility, or maybe some movement of land, although, of course, the Bakken is fairly flat. It would be for a number of operational conditions, all at the same time, that would keep you up at night if you didn't have that form of monitoring and that capability to minimize any consequences.
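To make the alert-and-alarm model Sean describes a bit more concrete, here is a minimal sketch of threshold-based alerting, where each participant sets its own limits and only sees exceedances against those limits. The metric names, threshold values, and data structures are illustrative assumptions, not Satelytics' actual data model.

```python
# Minimal sketch of threshold-based alerting over weekly analytics results.
# Field names, metrics, and threshold values are illustrative assumptions,
# not Satelytics' actual data model.
from dataclasses import dataclass

@dataclass
class Detection:
    customer_id: str
    asset_id: str        # e.g. a pipeline segment or facility
    metric: str          # e.g. "hydrocarbon_index", "chloride_ppm"
    value: float
    lat: float
    lon: float

def raise_alerts(detections, thresholds):
    """Return the detections that exceed the customer's own alert thresholds.

    `thresholds` maps (customer_id, metric) -> limit, so each participant
    sets its own operating-integrity window independently.
    """
    alerts = []
    for d in detections:
        limit = thresholds.get((d.customer_id, d.metric))
        if limit is not None and d.value > limit:
            alerts.append(d)
    return alerts

# Example: two participants with different chloride thresholds.
thresholds = {("operator_a", "chloride_ppm"): 250.0,
              ("operator_b", "chloride_ppm"): 400.0}
weekly = [Detection("operator_a", "seg-12", "chloride_ppm", 310.0, 47.8, -103.2),
          Detection("operator_b", "seg-90", "chloride_ppm", 310.0, 48.1, -102.9)]
print([a.asset_id for a in raise_alerts(weekly, thresholds)])  # -> ['seg-12']
```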
Russel: That’s kind of radically different than how we approach these things now. Typically, every company has its own program. In this case, every company retains its own program, but they’re sharing the data infrastructure that drives their program. Is that a fair way to characterize what you guys are doing?
Sean: Yeah, it’s a great way to characterize it. I think what I would add to that is there needs to be some outside influences that work to make this possible.
Those outside influences are a regulator in North Dakota who's very customer-centric and wants to ensure that the operators, producers, and transmission companies have the right environment and are supported, because obviously, the oil and gas industry is, I think, the second-largest industrial segment in North Dakota and extremely important.
Then I think the other great factor working in favor of this group of visionaries is that the governor of North Dakota is an ex-Microsoft executive. I think he founded Great Plains Software. Here is a gentleman who immediately says there's got to be a technology way of solving some of our peskiest problems.
So, with those sorts of mindsets — you've met Jay Almlie. You see how forward thinking he is, as are some of the companies that are participating in the area. You've interviewed Brian Epperson from Hess, another good example of a visionary company.
So, getting those groups together, what they’re saying is, how can we solve and minimize consequences as early as possible? How can we do it in such a way that not only are we innovating from a technological perspective, but how do we make a dollar go a little further?
Because when oil prices are under pressure, efficiencies count. They make a huge difference to the bottom line almost immediately.
Russel: Oh, yeah. As we record today, we’re seeing some historically low prices for oil and natural gas, and a lot of questions are being asked about how long that’s going to be going on. So I think you’re right at it.
I wanted to ask, why once a week?
Allan: Once a week, for now, is where the satellites are acquiring the data. We look every day, but it seems to me that the sweet spot comes down to how the operators can even handle taking this data in. They need to change some things inside their companies as to how they handle this information.
One week seems to be doable from their standpoint, whether that's deploying field crews to the alerts to fix or mitigate problems, or ingesting the data into their SCADA system and figuring out how to bring it into their alerts and alarms and get programs set up.
So one week is kind of that sweet spot that we’re sitting at now. That’s not to say we can’t increase it.
Russel: Yeah, so I guess the issue is there’s a workflow behind all of this that has to occur inside the operator.
Sean: Yeah. I mean, you are fundamentally changing the way that oil and gas look at their right of way, their facilities, and the way that they react.
Somebody once said to me that they drive 69 million miles a year. That's 69 million more miles than they would like to drive, because of safety concerns for their employees.
There’s an underlying, serious note — and you’ve heard me say this before, Russel — that I’ve yet to meet an oil and gas operator where safety and the concern for their employees aren’t foremost at the front of their objective set.
So, once a week today. Here's where the vision is, though, for the future. There is talk that by 2024 or 2025, you will be able to revisit — because of the huge investment that is being made above the Earth's surface — anywhere in the world with a satellite almost intra-hour, and some would say every few minutes.
So it's fundamentally appropriate that the oil and gas industry and the people adopting this technology start to change their processes, because going forward, this particular data set will be almost just that — continuous.
That’s really sort of our vision as a company. That’s why we live on the leading and the bleeding edge. We’re not there yet, but how does a turtle make progress? It’s got to stick its neck out and that’s where we are from a development perspective. We see the vision, that it will be multiple times an hour to be looking over your infrastructure.
Russel: Yeah, we’ll actually come back to that before we wrap this episode up because I want to unpack that. I actually have some ideas about how that’s going to play out.
The other thing that I think is very interesting here is you guys are looking for multiple things. You’re looking for encroachment. You’re looking for ground movement. You’re looking for leaks. What’s interesting to me is that most pipelines, the way they’re currently organized, all those things are in separate departments.
Sean: Yeah.
Russel: One of the issues that you’ve got to be addressing from an execution standpoint is, well, this is only one data set, but there’s a lot of different groups within the company that are going to use it. How does that play out?
Sean: Yeah, to be frank, it's one of the toughest challenges. Not that it isn't well adopted once the concept is understood, but when you first start speaking with one of these operators, most of them are very myopically focused.
If somebody is focused on leak detection, that's their world, leak detection. If somebody similarly is dealing with the right of way for encroachments, that's their world. It's often a challenge to get them in the same room so that the silos are broken down.
Because if you’re spending a hundred dollars, and you’re dividing it by five problems, that’s a whole lot better than five of you each spending a hundred dollars to only solve one problem.
Russel: Or even more to the point, maybe I’m an operator and I have $500 to spend, and I’ve got 10 departments that each need $100. Where do I spend it?
Sean: Yeah. Either way you work out the mathematics of it, when we set out as an organization, one of Allan's goals was to develop algorithms that could operate simultaneously within a couple of hours and give you — the person in charge of leak detection, or Billy Bob, the person in charge of change detection — an adequate set of data to minimize the consequences of both of those scenarios.
Russel: Yeah, I actually want to talk about that two-hour thing, because what we're talking about here is moving through a huge data set in a relatively short period of time.
Being a guy that’s been in software development for many, many years, I know how challenging that can be. So Allan, what’s the magic?
Allan: Well, we run our data analytics in the cloud. Leveraging those two things together, our analytics and cloud computing, we're able to do massive computations on large data sets very rapidly, pushing our AI technology to do those analytics in a way that takes full advantage of the cloud's computing capabilities.
Russel: So what is it about cloud computing that facilitates that type of thing?
Allan: I can give you a little bit of the chain. Our data comes in from our vendors, where we receive the information, and it goes directly to the cloud, so we're minimizing the transfer. They put their data into the cloud, we transfer it from their cloud to our cloud, and we immediately start processing the information as it comes in.
We're reducing how much the data has to move. The lag time comes from how quickly the data can get from the satellite to the station. That's why we do the analytics in the cloud, where we can process on more cores and with more computing power within the cloud infrastructure itself.
Sean: To your direct question about why the cloud and why it makes a difference: it's because no matter how much data we absorb and no matter how much processing power we need, as long as we're willing to pay for it, we can expand both literally with the flip of a switch.
So, it really is infinite, which you just cannot do if we were trying to control that infrastructure ourselves.
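For readers curious how elastic cloud compute turns into a two-hour turnaround, here is a minimal sketch of the fan-out pattern Allan and Sean describe: a large scene is split into tiles, and the tiles are analyzed in parallel with the worker count sized to the job. The tile size and the analysis step are placeholders, not the actual Satelytics engine.

```python
# Minimal sketch of fanning a large scene out across many workers, the way
# elastic cloud compute lets you scale cores to the size of the job.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def split_into_tiles(scene, tile_size=1024):
    """Yield (row, col, tile) windows of a large image array."""
    rows, cols = scene.shape[:2]
    for r in range(0, rows, tile_size):
        for c in range(0, cols, tile_size):
            yield r, c, scene[r:r + tile_size, c:c + tile_size]

def analyze_tile(args):
    """Placeholder analysis; a real pipeline would run spectral algorithms here."""
    r, c, tile = args
    return r, c, float(tile.mean())

def process_scene(scene, workers=32):
    """Analyze every tile in parallel; more workers means faster turnaround."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_tile, split_into_tiles(scene)))

if __name__ == "__main__":
    scene = np.random.rand(4096, 4096)        # stand-in for one satellite scene
    results = process_scene(scene, workers=8)
    print(len(results), "tiles analyzed")      # 16 tiles for a 4096 x 4096 scene
```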
Russel: Right. Yeah, that's huge. The other thing I'm wondering about is the whole idea of what the cloud is and what you can do with it. This is an area I should be current and knowledgeable on; unfortunately, I'm not, and it's an area I'm trying to catch up on.
So you’re running this data, this huge data set once a week. Are you grabbing resources from the cloud to run the data set and then releasing those resources back to others?
Sean: It's really pretty straightforward. This is really the fundamental cost of our business. As one example, we ran a 6,000-mile right of way recently. That equated to 42 terabytes of data every time.
One of the other commitments we make to our customers is that we will never delete any data. So any member of iPIPE — which is a very good example, and which is now in its third year, as you know — can go back to any week that data was captured and rerun any new algorithms.
If they’re involved in any litigation, they can prove or disprove a particular allegation, which has become a very common occurrence for us to look back in time.
The fact of the matter is that the cloud — we happen to use AWS, or Amazon — is so large in terms of capacity and capability that it's a really simple equation. As long as we're willing to foot that bill and pay it, we have infinite computing power at our fingertips.
So the raw data is deposited to the cloud. Our sausage machine goes to work and runs our algorithms. What you as a customer are really interested in is the results: the alerts and alarms that tell you a certain area is outside of the thresholds you've set as an operating integrity window, if that makes sense.
Russel: Sure. For anybody who has a background in data management or data processing, just the idea of collecting, processing, and transporting results on 42 terabytes of data in two hours is a bit mind-boggling.
Sean: You know, Russel, last time we were talking about pixels. Allan, what was the pixel count over the 16 weeks last year when we ran iPIPE?
Allan: Oh, gosh. I don’t have the number off the top of my head, but I think we’re at two million pixels each time we collect. So 32…
Sean: I thought it was somewhere in the 80 billion pixels that we processed. That’s what I recall, but it’s a staggering number, Russel, as you can well imagine.
Russel: I'm sure that part of your secret sauce is how you're actually able to do that. I think, too, for a lot of people down at the worker level, when they hear that everything's going to the cloud, they don't get why. What you just did is explain why everything is going to the cloud.
Because as a small business, there's no way — the money you would have to raise and the infrastructure you'd have to stand up and support would just make this non-viable.
Sean: Yeah. I think, Russel, even some of the biggest clients will say 42 terabytes of data would be a little bit above their appetite. Now one of the things that comes with using the cloud is scrutiny.
A lot of our blue chip companies with some of the great names that we all know, they have done their own security analysis of our capability and I guess that of AWS or Amazon. We do constant penetration testing to ensure that we live up to the very high standards of security that they expect.
So once you've gotten past some of those initial worries — because they are good questions, and there are whole teams worried about cybersecurity — the cloud has become a very acceptable way to process very large volumes of data quickly.
Russel: Right, and I think we're going to see more of it. There are just a lot of very strong, quantifiable benefits, and I think it's interesting what you guys are laying out.
When I collect the data, and I have this large data set and I’m one of the users, is there a way that I’m only able to see the data that relates to me?
Sean: Yeah, well, that’s exactly the goal. As we receive the data, part of the implementation process and the design is that the customers will have each shared with us their infrastructure. Once we process that, it’s as if we were processing 70 different customers, not 1 customer in 70 different parts, if that makes sense.
Russel: Right.
Sean: So Customer A only gets Customer A’s alerts. Customer A only gets to view Customer A’s data. I mean, that’s the whole process. We’ve been doing that for many different customers along the way for a long time.
The difference here is that over a basin — where you've got a spaghetti of intertangled facilities and infrastructure — it makes a lot of common sense for the people spending the money: rather than each bearing the big cost on their own, they're sharing it.
Therefore, they can gain economies of scale and rapid turnaround. It really does work and it makes an awful lot of common sense as well as business sense.
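A rough sketch of how that per-customer isolation might be enforced in software: every processed result is tagged with an owner, and queries are always scoped to the requesting customer, so Customer A can never see Customer B's alerts. The class and field names here are hypothetical, not Satelytics' actual implementation.

```python
# Sketch of scoping one basin-wide result set to individual customers.
# Every processed result carries an owner; a customer can only ever query
# results tagged with its own ID. Illustrative only.
class ResultStore:
    def __init__(self):
        self._results = []   # list of dicts, each with an "owner" key

    def add(self, owner, result):
        self._results.append({"owner": owner, **result})

    def query(self, requesting_customer):
        """Return only the results belonging to the requesting customer."""
        return [r for r in self._results if r["owner"] == requesting_customer]

store = ResultStore()
store.add("customer_a", {"asset": "well-pad-7", "alert": "encroachment"})
store.add("customer_b", {"asset": "seg-41", "alert": "leak"})
print(store.query("customer_a"))   # customer A sees only its own alerts
```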
Russel: Sure. I’m sitting here, and I’ve got all these questions. I want to ask Allan about how do you actually move through that much data for that many customers in that period of time. Frankly, it’s blowing my mind. Maybe I’ll carve Allan off one-on-one and I can pump him for some of this data.
Sean: The secret sauce, you mean?
Russel: Sure.
Allan: I can sum it up in one little bit, and that's that we use artificial intelligence to make those decisions. What we learned from the iPIPE group members is that they actually provide us feedback, and all of our customers provide us feedback.
So we’re not only delivering the data, but we’re also taking that data back in and learning from each little alert that’s established. It goes back into everything we do.
An algorithm run today is not the same algorithm that was run in 2018 or 2019. Every day and every week, every piece of information we get iterates on itself in order to hone this thing in so it works more effectively and more efficiently.
Russel: So you’re actually using AI to improve how the algorithm runs.
Allan: Correct. Within our alerts and alarms, we actually have a feedback mechanism for customers to mark an alert resolved, or we'll take information from that. That information goes back into our AI to help train it again, almost using that ground validation, that confirmation of a positive or a negative detect.
So now we're into the millions of different categories that these things have learned from: the reflective signature, how it changes over time, and how the alert is established, sent out, and delivered to the customer. That's something we're very good at.
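Here is a minimal sketch of the feedback loop Allan describes, with customer-confirmed detections and false positives folded back in as labeled training data. scikit-learn's random forest is used purely as an assumed stand-in for the real AI engine, and the feature values are made up.

```python
# Sketch of folding customer feedback back into the detection model.
# Each resolved alert becomes a labeled example (spectral features plus a
# confirmed/false label) and the classifier is retrained on the growing set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class FeedbackTrainer:
    def __init__(self):
        self.features, self.labels = [], []
        self.model = RandomForestClassifier(n_estimators=100)

    def record_feedback(self, spectral_features, confirmed_leak):
        """Store one field-validated alert outcome for the next training pass."""
        self.features.append(spectral_features)
        self.labels.append(1 if confirmed_leak else 0)

    def retrain(self):
        """Refit on everything learned so far; run alongside each new collection."""
        self.model.fit(np.array(self.features), np.array(self.labels))
        return self.model

trainer = FeedbackTrainer()
trainer.record_feedback([0.82, 0.10, 0.33], confirmed_leak=True)    # field-confirmed leak
trainer.record_feedback([0.41, 0.55, 0.29], confirmed_leak=False)   # false positive
model = trainer.retrain()
```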
Russel: Interesting. I want to play with this stuff, man. It sounds like fun. So let me ask a little bit different question. This is a pretty radically different business model, both in terms of you guys, in terms of a vendor providing a service to the pipeline operators, but also for the operators themselves, it’s kind of a radical change.
You already mentioned some retraining or reorienting that's required in the operator. What have you guys had to do as a service provider to the operators that you didn't anticipate?
Sean: Yeah, we have a very good example of that, Russel, in the field. You refer to Bubba geeks, and we love that term, of course. We refer to ourselves as propeller heads.
The interesting thing is we sit in our offices and we’ve got great Internet connection and great cell phone connection, but here’s a very real-life example of something that we had to change in the field in our experience.
We generate alerts and alarms. When we present our software, we present those in three form factors. You could use a smartphone, you could use a tablet, or you could use a browser — Apple or PC, don’t care.
But what we found was that the folks going out in the field to remediate or investigate an alert had a very poor cell connection. Not many of us have spent time out in the middle of North Dakota in the field, and to us, that was foreign. What do you mean? We're used to turning on our cell phones and having all of that at our fingertips.
So what we developed, on the smartphone platform — which was the choice for most of our customers — was the ability to receive those alerts and alarms, go out in the field, record any details on the remediation, take pictures and photographs to corroborate what was being actioned, and then, only when they got back to their office, their hotel, or their truck where there was a decent connection, it would sync up the data.
We call that Satelytics for the non-connected world. But without that field presence, and without that knowledge gained from working alongside customers to improve their business processes, we would never have known.
Because we very foolishly said, "Oh, everybody's got to have cell phone coverage somewhere." But you don't find that out until you put it in the real world — and that's one of the reasons why we loved iPIPE and its maturing into this Bakken-wide deal: it was not just a petri dish.
It was a petri dish put in the real world, where real people pull on their boots every day and get out in the field, and you find the problems that are often overlooked and that would otherwise keep something from being a good answer. So these are really good real-life examples.
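For a sense of what "Satelytics for the non-connected world" implies in practice, here is a minimal offline-first sketch: field notes queue up locally on the device and are pushed only once a connection is available. The connectivity check and upload function are placeholders, not the real app's logic.

```python
# Sketch of an offline-first field workflow: remediation notes and photos
# queue up locally and sync only when a connection is available.
import json, os

QUEUE_FILE = "pending_sync.json"

def queue_locally(record):
    """Append a field record (notes, photo paths, GPS fix) to the local queue."""
    pending = []
    if os.path.exists(QUEUE_FILE):
        with open(QUEUE_FILE) as f:
            pending = json.load(f)
    pending.append(record)
    with open(QUEUE_FILE, "w") as f:
        json.dump(pending, f)

def sync_if_connected(has_connectivity, upload):
    """Push queued records once back on a decent connection, then clear the queue."""
    if not has_connectivity() or not os.path.exists(QUEUE_FILE):
        return 0
    with open(QUEUE_FILE) as f:
        pending = json.load(f)
    for record in pending:
        upload(record)           # placeholder for the real upstream sync call
    os.remove(QUEUE_FILE)
    return len(pending)
```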
Russel: Yeah, and I think, too, Sean, that for companies that are bringing a technology that's been useful elsewhere into this market space, there are some unique aspects to this market space.
Certainly, the fact that so many of these facilities are very remote and have little if any connectivity — and if they have connectivity, it's probably very narrowband — adds a level of complexity, and there are other things about the nature of our work and how it occurs that are kind of unique.
Likewise, you've got people who know all of that but don't understand all the technology, so you're always bridging gaps in learning. That's actually a great example.
I want to talk about the future. You talked a little earlier about where you think this is headed. We talked in the previous episode about how it doesn't matter what the platform is for the sensor that collects the data — whether that's a satellite, an aircraft, a drone, or some kind of handheld device. I actually think that what's going to begin to happen is that there's going to be a lot more need for deployable platforms to collect data, concentrated data, at a location.
I should unpack that a little bit. In other words, wouldn’t it be handy if I could put the sensor package on a drone and dispatch the drone when I needed to go take a closer look?
Sean: Yeah, that’s a really good example. So without getting carried away about satellites and what a large set of data they can collect very quickly, each of the data platforms — whether it’s drone, plane, fixed camera, some form of stratospheric balloon — all of these have roles to play, and you’re absolutely right in what you said.
In iPIPE's third year, one of the visions we have in that program is to fuse data from each of the platforms. So we're running hyperspectral aircraft, we're running satellite, and we're also running drones and some fixed cameras, and we are fusing all of that data together for the very point that you mentioned.
While satellites may quickly do the heavy lifting over 6,000 miles of right-of-way, as you said, I may want to become very granular over a specific facility, terminal, pipeline, infrastructure, river crossing, and I want to see it up close and personal.
Suddenly, the jigsaw puzzle has another piece of the puzzle to help you and inform you of what the next step might be from a better set of information perspective.
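A rough sketch of what fusing detections from several platforms could look like: observations are grouped by a coarse spatial bin, and the value from the highest-resolution source is kept while every contributing platform is recorded. The platform resolutions and field names are illustrative assumptions, not Satelytics' actual fusion method.

```python
# Sketch of fusing detections from multiple collection platforms over the
# same location: group by a coarse spatial key, then keep the observation
# from the highest-resolution source.
from collections import defaultdict

RESOLUTION_M = {"satellite": 3.0, "aircraft": 0.5, "drone": 0.05}  # assumed ground sample distances

def fuse(detections):
    """detections: list of dicts with 'lat', 'lon', 'platform', 'value'."""
    grouped = defaultdict(list)
    for d in detections:
        key = (round(d["lat"], 3), round(d["lon"], 3))   # roughly 100 m spatial bin
        grouped[key].append(d)
    fused = []
    for key, group in grouped.items():
        best = min(group, key=lambda d: RESOLUTION_M[d["platform"]])
        fused.append({"location": key,
                      "sources": [d["platform"] for d in group],
                      "value": best["value"]})
    return fused

obs = [{"lat": 47.8123, "lon": -103.2456, "platform": "satellite", "value": 0.62},
       {"lat": 47.8124, "lon": -103.2457, "platform": "drone", "value": 0.71}]
print(fuse(obs))   # one fused record; the drone value wins on resolution
```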
Russel: So have you guys looked at data other than imagery — things like pressures, data from inline inspection tools, or any of that — or is that completely outside of your domain at this point?
Sean: Yeah, it's a little outside of our domain. But when we run algorithms for land movement, or when we run algorithms for methane, we are bringing in other data sets that influence the results. For methane, for example, we have to know wind velocity, wind direction, and relative humidity.
Russel: Right.
Sean: When we look at land slips and landslides, we look at vegetation, we look at soil moisture content, we look at rainfall. All of those things have an impact on what the results might be.
So while they're not pressures or sensors, as you've said, what we become is a piece of the jigsaw puzzle, so that if you're sitting there with your SCADA system, you're using multiple data inputs and multiple sources as the person who makes the decision as to what the next step might be.
Russel: Right, I think this is a huge challenge. One of the things that’s true in image analysis is these sensors generate a common data set, fairly common.
When you start getting into SCADA data around pressures or weather data or that sort of thing, again, it's a fairly common data set. When you go beyond that and start talking about data off of inline tools, where the data sets are very tied to the vendors and their instrumentation packages, it gets more complex. All that being said, I think the future of all this is integration.
How do I take all of these different, disparate kinds of data and pull them together and do something meaningful with the integration and intersection of all that? That would be my take.
Sean: I think you’re, again, right on the money, Russel. When we first conceived of the design of our software, we were never arrogant enough to think that you would spend all day inside of Satelytics.
We’d love that. We really would, but when you think about the billions of dollars that these blue chip companies spend, they are using the most sophisticated inline tools. They’re using all sorts of data sets that they will have gathered; their own AI platforms.
We developed inside of Satelytics both web services and a full suite of API calls so that any of the data, whether it was the imagery, the analytics, or the alerts could be fed out into other software platforms that you are using as part of your decision-making basis.
I think you're right about the integration. As a vendor, as a creator of software, I know that when Allan and his counterpart, Dr. John Zhou, our Chief Technology Officer, sit down and design algorithms or the way that we're going to present things, the one thing that's at the very forefront of their minds is: how do we make it easy for our customers to take that data set and then use it with the other sets of data they're using to make the next decision?
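As an illustration of the kind of integration Sean is describing, here is a short sketch of pulling alerts through a REST API and handing them to another system. The endpoint, parameters, and authentication scheme are hypothetical, not Satelytics' published API.

```python
# Sketch of pulling alerts from an analytics platform's REST API and handing
# them to a downstream system (GIS layer, SCADA tag, work-order tool).
# The endpoint and field names are hypothetical.
import requests

def fetch_alerts(base_url, api_key, since):
    """Fetch alerts raised since a given ISO timestamp."""
    resp = requests.get(
        f"{base_url}/alerts",
        headers={"Authorization": f"Bearer {api_key}"},
        params={"since": since},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

def push_to_downstream(alerts, handler):
    """Hand each alert to another system via a caller-supplied handler."""
    for alert in alerts:
        handler(alert)

# Example usage (hypothetical endpoint):
# alerts = fetch_alerts("https://api.example.com/v1", "MY_KEY", "2020-04-01T00:00:00Z")
# push_to_downstream(alerts, print)
```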
Russel: That’s a huge question. I think that’s probably a good place to wrap this. I think the way I would put a dot or a period at the end of this conversation is to say that I sure hope I get to see the pipeline industry and how it’s operating 20 years from now. I think it will make what we’re doing now look very archaic.
Sean: I think, Russel, you've hit the nail on the head. Having spent some time with some of the great visionaries that this industry has, there's an awful lot going on behind the scenes where, like the questions you asked, they keep pushing and pushing, even the limits of a little company in Toledo, Ohio, as to how it can be part of the bigger jigsaw puzzle.
Russel: Absolutely. Allan, you didn’t get to say a whole lot during this episode. Do you have some final remarks you’d like to make before we wrap this up?
Allan: I'm looking forward to pushing our AI in order to push that element into oil and gas. Integrating our software into the industry is very key to me, because it's very true to everything that I and my team have worked hard for.
Russel: Great. Sean, Allan, thanks so much for being on the podcast. This has been a really interesting conversation. I look forward to seeing where this takes us in the future.
Sean: Thank you, Russel. Always a pleasure.
Allan: Thanks for having us.
Russel: I hope you enjoyed this week’s episode of The Pipeliners Podcast and our conversation with Sean Donegan and Allan Adams.
Just a reminder, before you go, you should register to win our Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win to enter yourself in the drawing. If you would like to support this podcast, please leave a review on Apple Podcast, Google Play, or whatever smart device you happen to use to listen to the podcast.
You can find instructions at pipelinepodcastnetwork.com.
[background music]
Russel: If you have ideas, questions, or topics you’d be interested in, please let me know on the Contact Us page at pipelinepodcastnetwork.com or reach out to me on LinkedIn.
Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords