This edition of the Oil & Gas Measurement Podcast features Michael Thompson of ElevenThirteen Solutions discussing using data analytics to drive measurement validation rules in oil and gas.
In this episode, you will learn about the use of data mining and statistical analysis tools to improve measurement data analysis and validation. Michael discusses the use of data analytics to help create efficiencies in your measurement operations and to improve the identification of data issues.
Michael also recounts his past learning experiences with measurement data analytics and provides advice on how to become an internal champion for efficient measurement data analysis in your operation.
Measurement Validation Rules: Show Notes, Links, and Insider Terms
- Michael Thompson is the co-owner of ElevenThirteen Solutions. Connect with Michael on LinkedIn.
- ElevenThirteen Solutions helps oil & gas companies maximize profit potential through measurement data analytics.
- Enable Midstream, now part of the Energy Transfer LP family, owns, operates, and develops strategically located energy infrastructure assets that serve as a critical link between major producing basins and downstream markets.
- FLOWCAL by Quorum Software is an oil and gas measurement software platform that is used by operators for the back-office validation, processing, and reporting of natural gas and hydrocarbon liquids.
- PPA (Prior Period Adjustment) is an accounting adjustment to oil & gas payments that have already been made, triggered by a change in commercial terms, the correction of a pricing discrepancy, or the correction of a measurement error.
- C6+ is a common means of reporting hexanes and heavier components, such as heptane, octane, and nonane, which are combined into a single reported value.
- Gas Chromatography (GC) is an analytical technique used to accurately determine the concentration and makeup of the components in a sample. GC analysis can be used for petroleum, natural gas, fuels, LPG, petroleum refined products, petrochemicals, and additional hydrocarbons and chemicals.
- AGA (American Gas Association) represents companies delivering natural gas safely, reliably, and in an environmentally responsible way to help improve the quality of life for their customers every day. AGA’s mission is to provide clear value to its membership and serve as the indispensable, leading voice and facilitator on its behalf in promoting the safe, reliable, and efficient delivery of natural gas to homes and businesses across the nation.
- AGA 8 (Compressibility Factors of Natural Gas and Other Related Hydrocarbon Gases) is an industry standard that presents detailed information for precise computation of compressibility factors and densities of natural gas and other hydrocarbon gases.
- Orifice Plate is a type of primary flow measurement device that creates a measurable pressure drop across a known restriction.
- Differential Pressure (DP) flowmeters determine flow by measuring the pressure drop across an obstruction inserted in the flow path. These devices include orifice plates, averaging pitot tubes, venturi tubes, and flow nozzles (see the flow relation note following this list).
- International School of Hydrocarbon Measurement (ISHM) provides instruction in both technical and non-technical measurement subjects for personnel in the industry. Problems that pertain to the measurement, control, and handling of both gaseous and liquid hydrocarbons are studied so that useful and accurate information can be developed and published for the benefit of the public.
- Measurement Analysts are back-office measurement employees who are responsible for reviewing and managing the incoming stream of raw measurement data, managing configurations within the measurement software system, and addressing issues identified within the data.
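A quick note on the orifice plate and DP meter entries above: every device in that family infers flow from the same basic physics, in which flow rate scales with the square root of the measured differential pressure. In simplified form (the full AGA-3 orifice equation adds discharge-coefficient, gas-expansion, and density corrections):

Q ≈ C × √ΔP

where Q is the volumetric flow rate, ΔP is the differential pressure across the restriction, and C lumps together device geometry and fluid properties.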
Measurement Validation Rules: Full Episode Transcript
Weldon Wright: Welcome to the Oil & Gas Measurement Podcast, episode 3, sponsored by GCI, the Gas Certification Institute, providing training, standard operating procedures, consulting, and field operations software to the oil and gas industry for over 20 years. For more info on GCI, visit GasCertification.com.
[background music]
Announcer: Welcome to the Oil & Gas Measurement Podcast, where measurement professionals, Bubba geeks, and gurus share their knowledge, experience, and likely a tall tale or two on measurement topics for the oil and gas industry. And now, your host Weldon Wright.
Weldon: Welcome to another episode of the Oil & Gas Measurement Podcast. I’m here today with Michael Thompson, a consultant and co-founder of ElevenThirteen Solutions.
We’re going to be talking about the use of data mining and statistical analysis as tools for helping set validation limits in alarm settings within your measurement system. Before we start and get really geeky, Michael, tell us a little bit about yourself, what you’ve been doing, and what ElevenThirteen is all about.
Michael Thompson: Thanks, Weldon. I appreciate being on the podcast. Again, my name is Michael Thompson. I’m co-founder of ElevenThirteen Solutions. Myself and two other co-founders met at Enable Midstream Partners. We worked there, and we shared a passion for figuring out how to drive automation into the measurement workplace.
We had a group of analysts that historically, like all the other midstreams, were working through their exceptions. They were looking at their meters. They were looking at their balance locations. The three of us really felt like there had to be a better way to do this. We spent several years together at Enable working through how to improve that, driving that continuous improvement. How do we use data to really change who we are?
Enable was an MLP, a master limited partnership, so growth was key. We’ll talk about that, I’m sure, today. With those kinds of changes, where we need to go and how we need to grow, we really felt like we had the data. We have this historic data. We have data analytics and data science backgrounds and practices, and we wanted to see what we could do to drive that out, to reduce the analysts having to spend their time on work that wasn’t value-added. Let a computer do the data analysis and then let the analysts pick up the endpoints, which allows for growth and also allows the analysts to do value-added work.
Here at ElevenThirteen Solutions, that’s what we’re working on. We’re partnering with different midstreams in the oil and gas industry to ask, “How can we use data and analytics to improve, to allow for more growth?” What we’re seeing a lot is that you don’t have employees who stay around 20, 30 years like we did before. I didn’t have the luxury I once had at Enable, where half of my group had 20-plus years of experience. So, how do you cycle through people and bring them up to speed, and how can we use data analytics to do that?
Weldon: That makes a lot of sense, Michael. To set the stage a little bit for our audience, some of the folks here are going to be in the back-office world, some of them are in the field, some of them are in the corner office. We’re trying to reach out to everyone in the oil and gas measurement world.
Let me set the stage a little bit for what we’re going to be talking about today. No matter what measurement software system you use, whether it’s FLOWCAL, PGAS, or one of the other very capable systems out there, configuring validation rules and individual meter limits is a key part of allowing a small number of analysts to monitor and support a large number of meters.
Those settings are the key to making sure your system helps you find and correct issues prior to your close. But adjusting those settings, what we call tuning your validation limits in the industry, has always been part past experience, part black magic, and a big part luck… it seems like.
You’re always fighting a battle between settings that are too tight, which means you bombard your analysts with nuisance exceptions and work, and too loose, where you miss important stuff that causes PPAs.
Back when I was with Energy Transfer, I had a group of 30-something analysts and managers, all focused on back-office measurement, running several different measurement systems. I know it’s something we always battle.
I myself dabbled a little in what you’re talking about, probably around 2000, and I failed miserably at it. I spent a lot of time and a fair amount of money, brought in some consultants, and we still were not able to get brute-force data analysis results that could set those validation limits better. First of all, I’ve wondered why we didn’t do a good job. And why did you do better?
When I first met you, probably about seven years ago over at Enable, you were what I considered a “younger” measurement manager compared to old guys with no hair like me, and you had some great ideas. This was one of them. Talk to us a little bit about how you got there. How did you decide you wanted to dive into the data?
Michael: Thank you. By no means are we going to say it’s perfect. We learned and grew. Fortunately, one thing that shapes my and my co-founders’ viewpoint on this is that we are data, data, data, but we keep a really strong change management thought process at the forefront. One of the change management practices we’ve used is ADKAR (awareness, desire, knowledge, ability, and reinforcement).
What’s kind of easy is that we have the data. We have a tool for analysis. We have statistical models and standard deviations. We’ll get into some of the machine learning with random forests and neural networks. We have all that, but if you can’t bring your people along, if you can’t create that awareness and desire for change, you’re destined to fail. Right?
Weldon: Right.
Michael: I will say that it was “sell the story,” or let me say “sell the vision,” “sell the vision,” “sell the vision,” constantly. That’s all I did, sell the vision for months, and months, and months before we ever really made any real progress.
That’s important whenever we’re working with companies: we need to bring the group along. What you don’t want is to implement some change and then just revert right back. “Hey, I’ve done it this way for 20 years. I’m going to keep doing it that way for 20 more because I never really understood why we’re doing something different.”
Weldon: I have heard that so many times, Michael.
Michael: Let’s say we start there, but really, what we had to face was that the old way we were doing it wasn’t working. I’ve been working in this industry for about 15 years, and a good portion of that was with a utility.
I got the chance to go to a power plant. I got to go into the board, and there were these lights lit up on the board. I said, “What’s that light lit up for?” The guy goes, “Oh, it doesn’t matter. It’s been lit up since 1987.”
Weldon: [laughs]
Michael: I go, “Okay, so I guess that light’s not important.” He was like, “Oh well, I don’t know. We use other things for that.” That’s what our measurement analysts are doing. That’s what our data analysts are doing. You talked about setting these limits, setting these alarms, setting these alerts. The majority of them are a nuisance. They’re just ignoring them.
We saw that. We started looking at the timing of resolving exceptions. Some people could resolve thousands of exceptions in just minutes. You’re like, “How did you do that? How did you do the analysis?” It’s because they’re ignoring the light on the panel.
Weldon: Right. Right.
Michael: They’re just saying clear it out. “I don’t need that. That’s not helpful to me.” Over time, it’s like you said. One of the benefits I had coming into measurement seven years ago, when I met you, is that I wasn’t a measurement guy. I had an IT, data analytics sort of background, so coming into measurement, I got to start learning measurement and look at it from a different perspective. I got to ask all the silly questions that maybe someone else wouldn’t ask, because I was new.
Weldon: New eyes on an old problem.
Michael: That’s right. We had this vision that was way downstream. We had this vision of, why can’t a computer do 80 percent of the work? We couldn’t start there. We started peeling back the onion, bringing the group along, helping them to start understanding data analytics.
One thing that’s interesting, and I know it’s not uncommon at other companies too, is that a good portion of the data analysts in the group were not doing any analytical work at all when they got hired. They were doing charts. A question on the job interview was even, “Do you sew?” Using that foot pedal was key to bringing over chart data.
Now we take those same people and ask them to look at way more data than a human can consume, make sense of it, and do something that adds value, accuracy, and dollars to the company. They may not have been brought along the path of how you analyze data.
Weldon: It’s exactly the same story as the old shade-tree mechanics. Not even the shade-tree mechanics, the old diesel mechanics, the guys that had been fixing the big engines for years and years. Somehow or another, a lot of those guys had the computer world, computerized engine controls, and fuel control modules… dropped on them with no explanation and no warning.
It’s the same thing with measurement out in the field. You’re very right. I’ve talked to many analysts that started in that chart world of getting two numbers at the end of the month and then, all of a sudden, being thrust into this constant stream of hourly data.
Michael: Where we started was, one, setting an expectation of continuous improvement. Let’s be better tomorrow than we are today, just one percent. Let’s keep getting one percent better. Then we started peeling back more detail: we looked at volume, then at the analysis, the gas composition, missing data, and the number of exceptions.
One thing that drove us is that not all meters are the same. There was this thought, “Well, I’ve got this many meters and you have this many meters.” The truth is that we knew not all meters were the same. Some meters barked a whole lot less than others. They put out nuisance alarms a whole lot less often, but when their alarms did come up, they were very important, because the volume was higher.
We started by setting up a relative meter score. We took volume and the gas composition. Is there rich gas in it? What’s the volatility of your sample? How far from a standard deviation are they each month? If they’re tight, you’re going to get a small risk score on your gas analysis.
If they’re wider, if your C6+ is bouncing up and down, if the gas composition is changing on the back end, then you’re going to get a higher score. Then there’s the number of exceptions for that meter.
What’s the percentage of missing data? If you’re never missing data through the month, you’ve got a lower risk score than if you had lots of missing data. Those components made up our relative meter score. Now we could score each meter, and we could then dole out the relative meter scores across all of our analysts to get an even workload.
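For readers who want to see the idea in code, below is a minimal sketch of a relative meter score along the lines Michael describes. The column names, component weights, and min-max normalization are illustrative assumptions, not ElevenThirteen's actual model:

```python
import pandas as pd

# Illustrative only: the weights are assumptions, not ElevenThirteen's model.
WEIGHTS = {"volume": 0.4, "composition_volatility": 0.3,
           "pct_missing": 0.2, "exception_count": 0.1}

def relative_meter_scores(meters: pd.DataFrame) -> pd.Series:
    """Score each meter's relative risk/workload from monthly statistics.

    Expected columns (hypothetical names):
      volume                 - monthly throughput
      composition_volatility - e.g., std. dev. of monthly C6+ values
      pct_missing            - fraction of hourly records missing (0-1)
      exception_count        - validation exceptions raised last month
    """
    score = pd.Series(0.0, index=meters.index)
    for col, weight in WEIGHTS.items():
        rng = meters[col].max() - meters[col].min()
        # Min-max normalize so components on different scales are comparable.
        normalized = (meters[col] - meters[col].min()) / rng if rng else 0.0
        score += weight * normalized
    return score

# Usage: rank meters, then deal them out so each analyst's total is even.
meters = pd.DataFrame({
    "volume": [1200, 150, 9000],
    "composition_volatility": [0.8, 0.1, 0.4],
    "pct_missing": [0.02, 0.0, 0.10],
    "exception_count": [40, 3, 12],
}, index=["meter_A", "meter_B", "meter_C"])
print(relative_meter_scores(meters).sort_values(ascending=False))
```

Sorting meters by score and dealing them out so each analyst's total is roughly equal is the workload-leveling step Michael describes next.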
Weldon: Fascinating. Your first start was to get the individual meters pigeonholed into smaller boxes.
Michael: That’s right. What we needed to do was give the analysts time to work on performance improvement. We couldn’t have some analysts maxed out and some low. What we wanted was for their knowledge to drive that, to build the awareness and desire for what we had coming.
We started with “we need to level out.” I need to know that everybody’s working with about the same amount of effort. Now I can carve out eight hours every two weeks, and we can put them in rooms to do performance improvement. How do we get better and cast that vision toward them? That took time.
Weldon: You did a lot of work upfront then. The classic thing: if you had a transportation pipeline analyst with 100 meters and a midstream analyst on a low-pressure gathering system with 100 meters, one of those analysts is sitting back bored most of the time and one of them is pedaling as hard as they can, 10 to 12 hours a day.
You took that, you analyzed that, and I’m assuming you must have divided your meter counts more along workload and not strictly, “Hey, are these people on a pipeline, or are they midstream?” That makes a lot of sense.
Michael: That’s right. Another category we ended up adding later was regulatory pressure. To your point, transmission analysts had way more regulatory pressure than the gathering side where we were. That was another one. That took time and effort. Transmission is lean gas. It’s much cleaner. We’ve got GCs out there. If something happens, your effort is going to be high.
Weldon: Response time is much more critical also.
Michael: That’s right.
Weldon: The length of time to find it, the end result cost if you don’t find it, is much higher on the transmission side. Makes a lot of sense.
I know from previous conversations that part of the way through this process, you had a real data scientist in your measurement group, on your measurement group payroll, just analyzing, helping you break all that down. Tell us a little bit about how you sold that idea. How did that guy begin his work?
Michael: Very good. It started with “data is key.” We are going to start making data-driven decisions.
Talking about everything we did to remove those nuisance alarms, to look at our data with statistical analysis, to start understanding what standard deviation was, what normal distributions were, all of that was key to the day, three years later, when I finally hired a data scientist. I wasn’t going to sell that on day one. You had to go prove that understanding your data was valuable.
Fortunately, I had a vice president that valued data and analytics. That helped me out for sure. I hired the first data scientist for Enable Midstream. Just getting the title, actually hiring someone called “data scientist,” was quite a battle.
Weldon: That had to be a battle.
Michael: It was a battle. I battled HR. I battled my bosses. I had to go prove that this is a real title in the industry and it means something. When we brought in the data scientist, the very first thing that I had him do was work on finding meter freezes. There were a few reasons for that.
One, to find a meter freeze, you’re going to have to create a data model that uses every attribute inside of an AGA calculation. We’re going to automatically start training our data model, our machine learning model, to understand, on orifice meters, the correlations between DP, temperature, and pressure, and all of that.
One thing we did need to do was go find some good data. You have to train your model, but you don’t want to overtrain it, right?
Weldon: Right.
Michael: That’s the challenge: making sure that you’re training your model and not overtraining it. Make sure you find good data, and you only use that 20 percent to train it, because you don’t want to overtrain your model to accept bad data and then start using that for predictive models.
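To make that concrete, here is a minimal sketch of that training discipline, assuming scikit-learn and pandas. The synthetic data stands in for analyst-vetted "good" records, and the feature names are hypothetical; the point is the one Michael makes: curate known-good records first, train on only a small slice, and validate on the rest so the model is not overtrained on bad data:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for vetted "good" hourly orifice records; in practice
# these would come from the measurement system, filtered to approved data.
rng = np.random.default_rng(42)
n = 5_000
good = pd.DataFrame({
    "static_pressure": rng.normal(500, 50, n),  # psig (illustrative)
    "temperature": rng.normal(60, 15, n),       # deg F (illustrative)
})
# Fabricated DP relationship so the example runs end to end.
good["differential_pressure"] = (
    0.1 * good["static_pressure"] - 0.3 * good["temperature"]
    + rng.normal(0, 2, n)
)

features = ["static_pressure", "temperature"]
target = "differential_pressure"

# Per Michael's point: train on only a small slice (20%) of the good data
# and hold the rest out for validation, so the model is not overtrained.
X_train, X_test, y_train, y_test = train_test_split(
    good[features], good[target], train_size=0.2, random_state=42)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.3f}")
```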
Weldon: That’s an excellent point, Michael. That is actually where I really failed when I dove into this back in ’99 or 2000, 20, 22 years ago now, after a lot of analysis. We tried to feed all the data we had to models and let them learn based on that. The reality of it was that “good data” was such a small percentage, and that shouldn’t have been what we had the models learning. I was not nearly advanced enough to do that. That’s what a data scientist brought to the table for you.
Michael: Right. I’ll tell you, we hired a data scientist who had worked for the University of Oklahoma. He pretty much was the measurement person there, but he was measuring a certain stratum of the atmosphere, very much the same thing we were doing, with 75 to 100 published articles.
We brought in someone that knew data science and we taught him measurement. I promise you, he was a smart dude. He caught on quickly, but I think where we started was important. I think that’s a good lesson for other oil and gas or midstream companies, whatever you are: start somewhere that’s broad.
The vision that I had before I even got him was that, eventually, we were going to have a data model. Look at the data live. Know where data is wrong. Project where it should have been. Put it back into the calculation. Recalculate it. Then, go and make the edit itself, with a tag on it showing that machine learning made the edit.
Let’s try to get 80 percent of the edits done by a computer and let the 20 percent that are big-dollar, major edits be done by a human. Where we started was meter freezes. Winter was coming. Meter freezes were always embarrassing. If a customer finds that the meter froze and you didn’t find it, it’s embarrassing to us as a measurement department.
It’s embarrassing because you have to re-invoice. We started there because, one, it was going to touch every data attribute. Then, we could start with gathering orifice meters. We didn’t have to worry about every meter type. That’s what we were concerned about: orifice meters freezing.
We did that, and we had 10 different models that we sent this through. At first, we required eight of the models to trigger before it would flag a freeze. Eight of the models had to tag the hourly record as a freeze; possibly all 10 would. We ended up pulling that down to seven.
Over time, I think we would have learned and could have pulled it down even more. Once we tuned this model, we were able to take our analysts through it and have them look at it. That’s the other thing: we’ve got this machine using more data than any human could ever consume, going through 10 different models, making sure 8 trigger, but then we did the due diligence of going back in. We sat down with the analysts, and we talked it through.
“Would you have made that edit? Would you have made this edit? Would you have done this? Would you have called this?” You start finding out that humans are not the same. Right?
Weldon: Right.
Michael: We have different thresholds for risk. We look at it differently. We may not even understand what causes a meter freeze. We may not even understand…
Weldon: You may not even identify it as a meter freeze. Right?
Michael: Right. You have this spectrum of knowledge. You have one of my co-founders saying, “Here’s how I do it,” and I go, “Yeah, I agree with you,” and then someone else going, “Oh, I just would have said your temperature went down and DP went up.” Or, “I don’t know. I just cross my fingers for the winter and hope no one catches my mistake.”
Weldon: [laughs]
Michael: That’s what we have. We know this: in the second year, this machine learning model caught 6,000 percent more meter freezes than all of our analysts could.
Weldon: 6,000?
Michael: Thousand.
Weldon: 60 times. 60 times.
Michael: Yes. Yes.
Weldon: 60 times the meter freezes.
Michael: That’s right.
Weldon: Wow. I thought I knew this story. I haven’t heard that number before. “60 times the meter freezes.”
Michael: Right. At this point, I’m a senior manager. I’m not the director yet, so I’m not making every decision. The next step was to convince my director and the vice president: let the computer make the edit. That was scary. That was hard.
What we did was we said, I’ll admit, 6,000 percent, but so many of those were pennies. We weren’t doing anything to really change the bottom line. There was very little chance the customer was going to find it.
We said, “Okay, where are you scared? What dollar amount? Let’s equate that volume and that energy to dollars. What dollar amount would you be comfortable with the model doing its own edit?”
We found a threshold, and we started allowing that model to make the edit, because it was doing prediction, too. It had to determine not just what the value was, but what it should have been. We just shoved that back in, did the calculation, put it back in, and started letting 80 percent of those just be edited, with a tag that said “machine learning edit,” so you knew the machine did it.
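Here is a minimal sketch of that vote-then-edit flow. The 7-of-10 vote and the "machine learning edit" tag follow what Michael describes; the dollar threshold value, record fields, and model interface are hypothetical stand-ins:

```python
from dataclasses import dataclass

VOTES_REQUIRED = 7      # started at 8 of 10 models, later pulled down to 7
DOLLAR_THRESHOLD = 250  # hypothetical auto-edit limit agreed with leadership

@dataclass
class HourlyRecord:
    meter_id: str
    measured_mcf: float
    price_per_mcf: float

def auto_edit(record: HourlyRecord, models: list) -> dict:
    """Flag a freeze by ensemble vote; auto-edit only below the dollar threshold.

    Each model is assumed to expose two hypothetical methods:
      is_freeze(record) -> bool     # does this hour look frozen?
      predict_mcf(record) -> float  # what should the volume have been?
    """
    votes = sum(m.is_freeze(record) for m in models)
    if votes < VOTES_REQUIRED:
        return {"action": "none", "votes": votes}

    # Average the models' predictions of what the volume should have been.
    predicted = sum(m.predict_mcf(record) for m in models) / len(models)
    dollar_impact = abs(predicted - record.measured_mcf) * record.price_per_mcf

    if dollar_impact <= DOLLAR_THRESHOLD:
        # Small-dollar fix: substitute the prediction and tag the edit.
        return {"action": "edit", "new_mcf": predicted,
                "tag": "machine learning edit", "votes": votes}
    # Big-dollar exception: route to a human analyst.
    return {"action": "escalate", "dollar_impact": dollar_impact, "votes": votes}
```

The escalation branch is what preserves the 80/20 split Michael describes: small, high-confidence fixes go through automatically, while big-dollar exceptions still land in an analyst's queue.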
Weldon: Wow. There are a lot of pieces to what you just said there. Let me unpack that a minute. You got somebody in. You built a set of models. You let them learn. You had 10 different models trying to predict where there’s a meter freeze. You used 8-of-10, then 7-of-10, polling to let it decide.
You evaluated that. You went through it with the individual analysts to verify the learning and tune the model. Then you went through the process of deciding at what monetary level you’re willing to let the system make the changes on its own and edit the data. That’s a lot of individual pieces to that puzzle.
Michael: To me, and to ElevenThirteen Solutions, and hopefully others, that’s just the beginning. The reason I thought meter freezes were the place to start is that you’re touching every component of an orifice meter. We started pulling in hydrocarbon dew points and local weather calculations, all of these other things that there’s no way any analyst could take the time to do.
Now, let’s project that out to every other fault type. It’s summertime. It’s good weather. It’s bad weather. It doesn’t matter. If we can trust it to fix meter freezes, can we trust it to fix day-to-day problems? Can we now start trusting it and training it on a transmission that does 30 VSA a day, something that is big?
That’s where you start getting that incremental value. You keep opening it up, keep training it, you bring people along, you show them what it’s doing, you add that level of transparency. I think so many people are concerned about black-box AI, like, “Hey, I got some AI machine learning,” whatever you want to call it, “Let me run against your database, and I’ll do things.” Well, how?
I think that’s the big difference in what we want to do: we want to bring everyone along. Bring your group along. Bring your leaders along so that you know, and you’re picking, what it’s doing and when. Based on your company’s risk profile, your goals, and your strategy, you can start implementing more and more to automate and, over time, to eliminate that headcount pressure.
As we’re in an environment where people don’t stay for 20, 30 years, like we talked about at first, you can ramp people up faster, and you can have more done on the back end and less on the front-end analyst side.
Weldon: That’s a whole lot you said there. Some of it is holy grail stuff. On the other hand, a lot of it is just plain common sense there, Michael. Really, thank you for sharing all that. It leaves me with about two pages of questions I jotted down here in the margin of the paper, thinking, “Man, we’ve got to have Michael in here to talk some more about that stuff.”
Michael: Love it.
Weldon: We probably need to think about winding this down. Anything else you want to add related to that?
Michael: Just to add, this is fun stuff. This is our passion, as you can tell from my voice when I’m talking about data analytics and measurement. It’s geeky. It’s nerdy, so it’s fun. There’s nothing better than the breaks after ISHM, or one of these schools, where you get to go geek out and talk measurement.
I really think that what we’re doing, and what can be done, on the data and analytics side, removing the black box, removing the “what is going on?,” and allowing everyone to go on that journey through change management, is powerful. That’s where we need to be, and that’s where we’re going to be.
We’re going to have to get there. The industry is changing. It’s going to change either with or without us, and we want to be part of the “change with.”
Weldon: That’s a great ambition. I think you and your partners have what it takes to get there, Michael.
Michael: Thank you.
Weldon: Thanks again for what you shared. I look forward to having you on here again sometime next year. We may dive into some of the other aspects and where this could go. I know there’s other companies working on the type of thing you’re doing here, but I haven’t heard any of them explained as well as you’re explaining it here. Thanks again, Michael. Have a great day.
Michael: Thanks, Weldon.
Weldon: Thanks again for listening. If you like our podcast, please leave a review on iTunes or wherever you get your podcast fix. Full transcripts of this and the other episodes are available in the Oil & Gas Measurement Podcast section of the PipelinePodcastNetwork.com website.
There’ll also be definitions of any geeky terms if we got out of control with those. New episodes of our podcast will be posted each month.
If you have suggestions for topics, questions, or if you’d like to volunteer as a guest, drop me a note on LinkedIn or click the Contact button on our website. Thanks again for listening.
Transcription by CastingWords