This month’s Oil & Gas Measurement Podcast episode features Marshall Webb discussing updates to the API Chapter 21.1 – Electronic Gas Measurement standard and how the update process works.
In this month’s episode, you will learn which three sections of Chapter 21.1 have been opened for review/revision, as well as the process that must be followed in revising API standards. The discussion includes the need to differentiate between QTRs and reports, the difference between linear and formulaic averaging, and how AI has the potential to improve our measurement data processing.
API 21.1 Update Show Notes, Links, and Insider Terms:
- Marshall Webb is a Sr. Manager for Field Measurement for Marathon Petroleum Corporation. Marshall currently supports MPLX’s West G&P Division. Connect with Marshall on LinkedIn.
- Marathon Petroleum is a leading, integrated, downstream energy company headquartered in Findlay, Ohio. Marathon also owns the general partner and majority limited partner interest in MPLX LP, a midstream company that owns and operates gathering, processing, and fractionation assets, as well as crude oil and light product transportation and logistics infrastructure.
- Listen to Marshall’s previous O&GM episode here.
- API (American Petroleum Institute) is a national trade association that represents all aspects of America’s oil and natural gas industry.
- API maintains a comprehensive Manual of Petroleum Measurement Standards (MPMS). The manual is an ongoing project that periodically releases new chapters and revisions of existing chapters.
- API MPMS Chapter 21.1 (API 21.1 or 21.1) describes the minimum specifications for electronic gas measurement systems used in the measurement and recording of flow parameters of gaseous phase hydrocarbon and other related fluids for custody transfer applications using industry-recognized primary measurement devices.
- Learn more about purchasing API standards.
- FLOWCAL by Quorum Software is an oil and gas measurement software platform that is used by operators for the back-office validation, processing, and reporting of natural gas and hydrocarbon liquids.
- AGA (American Gas Association) represents companies delivering natural gas safely, reliably, and in an environmentally responsible way to help improve the quality of life for their customers every day. AGA’s mission is to provide clear value to its membership and serve as the indispensable, leading voice and facilitator on its behalf in promoting the safe, reliable, and efficient delivery of natural gas to homes and businesses across the nation.
- GPA (GPA Midstream) is the primary advocate for a sustainable Midstream Industry focused on enhancing the viability of natural gas, natural gas liquids, and crude oil.
- SR3 or SR Cubed (API Standards Resource and Research Request Form) is the document that is completed by the policy committee of jurisdiction to request the development of a new standard or the revision of an existing standard.
- COPM (Committee on Petroleum Measurement) provides leadership in developing and maintaining cost effective, state of the art, hydrocarbon measurement standards and programs based on sound technical principles consistent with current measurement technology, recognized business accounting and engineering practices, and industry consensus.
- COGFM (Committee on Gas Fluids Measurement) is a subcommittee of COPM that develops, approves, and maintains standards for the measurement of natural gas fluids, including API 14, API 21.1, and API 22.
- QTR (Quantity Transaction Record) is the set of historical data and information supporting the quantity or quantities of volume, mass, or energy.
- DP is differential pressure.
- MCF is the acronym representing one thousand cubic feet, derived from the Roman numeral M for 1,000, combined with cubic feet (CF) for volumetric determination of natural gas.
- MSCF (often shortened to MCF) represents the basic unit of measurement for natural gas in commerce in the U.S. “One Thousand Standard Cubic Feet”, with the word “Standard” indicating that the reported volume of the compressible gas has been mathematically adjusted to a contractual standard pressure and temperature.
- Gas volume statement is a monthly statement showing gas measurement data, including the volume (Mcf) and quality (Btu) of natural gas which flowed through a meter.
- BS&W stands for basic sediment and water.
- Linear averaging (flow-time linear) is the arithmetic average of the sampled values (for example, differential pressure); when that average is used in a recalculation, the flow term becomes the square root of the average.
- Formulaic averaging (flow-time formulaic) averages the square roots of the sampled values (and squares the result), so the flow term used in a recalculation is the average of the square roots.
- AI (Artificial Intelligence) is intelligence demonstrated by machines in contrast to the natural intelligence displayed by humans.
- Learn more about participating on API’s standards committees.
API 21.1 Update Full Episode Transcript:
Weldon Wright: Welcome to 2023 and Episode 16 of “The Oil and Gas Measurement Podcast,” sponsored by GCI, the Gas Certification Institute, which has been providing measurement training, standard operating procedures, and consulting to the oil and gas industry for over 20 years.
GCI proudly partners with Muddy Boots to offer the industry a superior Field Operations platform. Visit GasCertification.com to find out how Muddy Boots can streamline your meter testing, witnessing, and sample tracking.
Announcer: Welcome to The Oil and Gas Measurement Podcast, where measurement professionals, Bubba geeks, and gurus share their knowledge, experience, and likely a tall tale or two on measurement topics for the oil and gas industry.
Now, your host, Weldon Wright.
Weldon: Hello, and welcome to episode 16 of the Oil and Gas Measurement Podcast. We’re here today with Marshall Webb, senior manager over at Marathon Petroleum. I’ve asked Marshall to come on and talk to us a little bit about the API, API 21.1 specifically and the revision process.
First, I’m going to get Marshall to introduce himself, tell us a little bit about how he got to his position with Marathon, and tell us why he’s my first repeat guest here on the Oil and Gas Measurement Podcast. Marshall, howdy.
Marshall Webb: Hey, Weldon. Good to be back. Yeah, Marshall Webb, senior manager for Marathon’s field measurement for G&P, for gathering and processing.
Coming up in the measurement industry there, I spent six, seven years in the field as a measurement tech.
Moved over to a measurement specialist out of Denver there. Covered many different assets everywhere from North Dakota to out east in the Marcellus down in the Permian, all that good stuff, and then moved over to the data analyst side and dealt with FLOWCAL and managed FLOWCAL teams for a number of years.
Now back over to the field side, a full circle, and really enjoying the measurement industries.
Weldon: As I said, you’re my first repeat guest. Back, I think, episode five or episode six, we talked a little bit about the challenges facing today’s measurement managers and some of the things that were changing how our workforce goes. That worked really well.
I had a number of folks that have hit me up and said, “Hey, we want to hear updates on the standards groups, API (American Petroleum Institute), AGA (American Gas Association), and GPA Midstream.” You’re the name that came up, because I believe you chair that working group for 21.1, is that correct?
Marshall: Correct. Yep.
Weldon: What I’d like to do is first of all, for those of you that wonder what we’re talking about with 21.1, API publishes the Manual of Petroleum Measurement Standards. It’s a group of books basically covering all things petroleum management, petroleum measurement rather.
21.1, which is also co-branded as AGA Report Number 13, although API is the owner, the revisor of it. API 21.1 is the flow measurement using electronic meter systems, electronic gas measurement.
Basically, the CliffsNotes version of that is that it’s the flow computer book from API. Can you tell us a little bit about what’s going on with API with this set of revisions and where you’re at in that process?
Marshall: Yeah. 21.1, essentially, what it’s attempting to do is provide your best practices and requirements in performing custody transfer measurement of hydrocarbons with flow computers and the various electronic instrumentation that’s integrated with those.
The standard has been revised a couple of times, I believe, the last time took six years to get that standard revised and updated. We have just opened this one up. I think we’re about a year into this new SR cubed, which for folks that don’t know, SR cubed (SR3) for API is the process by which you reopen a document under a working committee.
Each of the standards has a working committee. If you’d like to revise that standard, you have to present what’s called an SR3. Generally, that SR3 has very specific sections that you’re going to reopen and look at, potentially updating or revising. That has to go through the process of API and has to be approved by the various higher level groups.
This one had to be approved by COGFM and then eventually by COPM. We have this SR3, where we’re about a year in. We have some very specific revisions that we’re looking to potentially do. We’ve actually broken that work up into three sub-working groups that are looking at the various sections.
I can go through each of those working groups if you’d like in detail, but that’s the overall structure of what we’re doing with 21.1 as it is now.
Weldon: To regroup on that, that SR3 is basically a business case to API saying, “Here is why we need to make a revision to the following sections, and it’s a scoping document about what we’re going to revise.” Is that summarized pretty well?
Marshall: Indeed.
Weldon: Talk to us a little bit about what you’re doing in this set of revisions, and what each of those three working groups is handling.
Marshall: We have these three working groups. They’re chaired by individual working group members.
Working group 1 is led by Manuel Atencio. That’s primarily looking at Section 4 and a number of subsections of Section 4. It’s really looking at the various calculations and the averaging techniques.
Working group 2, chaired by myself. We’re looking at Section 5 and looking at some of the language there. There was a 50 parts per million language clarification we needed to do. Now, we’re also looking into QTR versus reporting, so the quantity transaction record versus actual reporting. I can get into details on that in a sec.
Working group 3 is led by Keith Fry. That’s looking at Sections 7, 8, and some of the associated annexes. They’re also looking at some of the red files and errata correction items.
That one’s very interesting, too. We’ve actually made pretty good progress there. That’s specifically looking at the instrumentation verification/calibration requirements, standards, and processes. If you have a DP transmitter, you have a static transmitter, how often should you be verifying that, and what does that process actually look like?
We can get into details on this, too. One of the biggest questions that a measurement tech, a field tech, might have when he goes to the field and sets up and connects into the flow computer and the instrument is, when he verifies those from zero to span, “Do I actually need to calibrate this instrument, or is it good?”
We’re looking at that and specifying a process to determine whether you need to calibrate or not. That’s the overall look at the three working groups.
Weldon: Actually, “Do you need to calibrate,” it’s a matter of, should you calibrate.
Marshall: That’s correct.
Weldon: Today’s hardware out there in the field exceeds what we used to have in the laboratory 25 years ago.
Today, the equipment in the field many times is so close to the accuracy and repeatability of our calibration equipment that it introduces that whole issue of, “Hey, if we hit calibrate, what are we doing? Was there a need to? Are we introducing more errors by hitting calibrate?”
That’s a very important piece, and that’s something that’s only come into discussion in the last five or six years, probably.
Marshall: Absolutely. It’s been a big one. We have our annual measurement summits within our company. We bring in our field techs, our analysts, supervisors, and all the rest. That’s a question that comes up consistently is, are we actually introducing additional error into this instrumentation by calibrating it too often? How often should we calibrate, and what is that process?
We’re making really good progress on working group 3 and trying to specify that and lay that process out. It’s very mathematically based. Again, we can get into those details once we get to working group 3.
Weldon: You have a better definition for that, and once you have a better definition, then you can provide information not only to companies like Marathon but also to the software vendors.
The software vendors can start to build that intelligence into their calibration software. Go in, enter your as-founds, and have your calibration software tell you, “Hey, you need to recalibrate this instrument,” or tell you, “Hey, hands off. Don’t touch it.”
I think, again, that’s something that’s only started to be talked about in the last few years, maybe five or six years. I’ve seen the negative results of that quite a bit over time.
I don’t know if you picked up on it now. I’m trying to talk you through working group 3 first and then talk about (working group) 2 a little bit because I think (working group) 1 is going to be most of the conversation.
Marshall: I would agree. Let’s jump into working group 3. If you take that fundamental question of, when should we calibrate or what is that process, a lot of folks use different baselines or foundations. It could be a percent span or 0.2 off of your DP, whatever it may be. Different companies use different standards.
What we’d like to do, and the progress we’ve made currently, is this: let’s take the specific instrument, the brand, make, and model of the instrument that we’re utilizing. Take that manufacturer’s data, so its stated uncertainty, as well as the stated uncertainty of your calibration equipment, and then the effect of temperature, ambient temperature from the last calibration versus current temperature. Add all that uncertainty together. Do the math. It will tell you whether that reading at that given spot up and down the span is within the uncertainty of all of the instrumentation you’re utilizing. It can tell you whether to calibrate or not.
Now that’s a lot of mathematical calculation. There are certain companies in the industry, we’re one of them, we already have that built into an Excel spreadsheet, some software built around it. We have some very, very talented specialists that have put that together with an enormous amount of work. Give kudos to those guys for sure.
A tech is able to actually go out, connect to the flow computer. Let’s say he’s calibrating his differential pressure transmitter. He can put all that information. He goes in. There’s drop-down menus. He can select the make and model of the instrument, the make and model of his or her calibration instrumentation, the last known ambient temperature from the previous calibration, all of these effects. It can calculate an uncertainty up and down his or her span that they’re looking for. It can tell him yes or no to calibrate.
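For readers who want to see the shape of that math, here is a minimal sketch of the kind of calibrate/don’t-calibrate check Marshall describes. It assumes a simple root-sum-square combination of the transmitter’s stated uncertainty, the calibration equipment’s uncertainty, and an ambient-temperature effect; the function names and the example numbers are hypothetical, and the actual spreadsheet logic and any language that ends up in 21.1 may combine these terms differently.

```python
import math

def combined_uncertainty(span_inh2o: float,
                         reading_inh2o: float,
                         xmtr_pct_span: float,
                         ref_pct_reading: float,
                         temp_pct_span_per_50f: float,
                         delta_temp_f: float) -> float:
    """Root-sum-square of the assumed uncertainty sources, in inches of water.

    xmtr_pct_span         -- transmitter stated accuracy, % of span
    ref_pct_reading       -- calibration standard accuracy, % of reading
    temp_pct_span_per_50f -- ambient temperature effect, % of span per 50 deg F
    delta_temp_f          -- ambient change since the last calibration
    """
    u_xmtr = span_inh2o * xmtr_pct_span / 100.0
    u_ref = reading_inh2o * ref_pct_reading / 100.0
    u_temp = span_inh2o * temp_pct_span_per_50f / 100.0 * abs(delta_temp_f) / 50.0
    return math.sqrt(u_xmtr ** 2 + u_ref ** 2 + u_temp ** 2)

def should_calibrate(applied: float, as_found: float, **uncertainty) -> bool:
    """Calibrate only if the as-found error exceeds the combined uncertainty."""
    tolerance = combined_uncertainty(reading_inh2o=applied, **uncertainty)
    return abs(as_found - applied) > tolerance

# Hypothetical DP verification point: 50.00 in. applied, 50.08 in. read back.
if should_calibrate(applied=50.0, as_found=50.08,
                    span_inh2o=200.0, xmtr_pct_span=0.05,
                    ref_pct_reading=0.025,
                    temp_pct_span_per_50f=0.1, delta_temp_f=20.0):
    print("As-found error exceeds the combined uncertainty: calibrate.")
else:
    print("Within the combined uncertainty: leave the instrument alone.")
```

The same yes/no answer, repeated at each verification point up and down the span, is what the drop-down tool Marshall describes hands back to the tech in the field.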
Now some other companies are not going to have that mathematical software really built out yet. What we’re looking at is potentially putting that into 21.1. Anytime you have an opportunity on the market, somebody is going to fill that vacuum. Whether that’s a home-grown system within a company or a third party develops that software, I think that’s the future here.
If we make that a best practice – I don’t know that we should make it a requirement, I think that’s part of the discussions right now in working group 3, whether that’s a requirement or if it’s best practice – somebody will fill that gap. I believe that’s the correct way to move, at least for me.
There is some debate going on in working group 3. It’s a needed debate. How detailed do we want to get? Not everybody can perform this. How much time do we have in the field? All of those are great questions. We’re hashing that out as we speak.
Weldon: I think it’s important for people that do not know about how the standard process works to understand how we arrive at these standards. It’s not one person saying, “Hey, we need to do this better.”
It’s the best minds from the industry getting together, volunteering their time to work on this, to discuss it, talk about opposing views, figure out, “Hey, do we need research before we work on this?” That’s the way all the pieces work, not just what you’re doing in working group 3.
That process can take a while. The time is almost always well spent because the industry learns and the industry gains value from those discussions, even the ones that are not the selected final outcome, right?
Marshall: Absolutely, yeah. Iron sharpens iron. The best minds in the industry coming together and arguing from a good faith perspective, having these discussions and really laying these things out, and basically attacking each position until what’s left is the most solid foundation or the most solid position we can have.
Weldon: Let’s not go with the word “argue.” Let’s talk about “discuss with devotion to their view.”
Marshall: Passion, very passionate discussions. That’s correct. That is correct. I absolutely love measurement folks because we can literally have a two-hour debate on the definition or the meaning or the connotation of a given word. It’s an amazing thing, very detail oriented, and some of the best minds in the industry on some of these discussions.
I learned more in some of these discussions than I do in months worth of regular work. It’s a fantastic process. I love the exposure to some of the smartest people in the industries.
Weldon: I can remember the argument about air with GPA Midstream. That one was crazy. Let’s not digress on that. Working group 2, you mentioned a couple of things when you made your little introduction about what Section 5 was doing and all.
Probably one of the biggest, you said there’s discussion about the QTRs versus reports. I’m not sure a lot of folks out there understand QTR if you’re not a student of 21.1.
QTR is that quantity transaction record: what is stored in the flow computer and what we pull from the flow computer and take to a downstream system. It’s data that is used at its full precision for doing recalculations, for doing math on it, as opposed to a report, which is what we print out in a format that’s human readable and makes sense, right?
Marshall: Right.
Weldon: We can sit and argue all day long about whether we want one or two decimal places on our MCF. That flow computer may have four or more decimal places involved in that calculation. For the most part, we as people don’t care.
That’s quite a discussion going on between what are we going to require of the QTR and what are we going to require of the report because that really wasn’t clear in previous versions of 21.1.
Marshall: That’s exactly right. I think there’s a lot of confusion in the industry of what is the difference. I’m going to compare the crude or the oil side for a second, too. When I go to a customer, general back-office teams, they’re handing reports to customers or different departments.
A lot of times, everybody has heard of the gas volume statement. That’s a standard statement that most back-office systems are delivering or submitting. A lot of times, people confuse that with a QTR. For the most part, not to say that a gas volume statement can’t be a QTR, but most gas volume statements are not a QTR as defined in 21.1.
If we look at what that means, what a QTR is, and what the difference is, the best analogy I usually give is around crude measurement. In a crude world, when you hand a customer a crude batch ticket or a batch volume ticket, the data that is supplied on that ticket, you should be able to take that data and recalculate or verify the volumes that you get.
You should be able to go from your IV (your indicated volume), apply your temperature, pressure corrections, your meter factor, your BS&W. You should be able to walk that calculation from IV all the way to net standard volume with the data on the ticket itself.
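As a rough illustration of that walk from indicated volume to net standard volume, here is a short sketch using the factors Marshall names (temperature and pressure corrections, meter factor, BS&W). The function name and the ticket numbers are made up purely for this example; real tickets carry the specific correction factors your contract and procedures call for.

```python
def net_standard_volume(indicated_volume_bbl: float,
                        meter_factor: float,
                        ctl: float,            # temperature correction factor
                        cpl: float,            # pressure correction factor
                        bsw_fraction: float) -> float:
    """Walk an indicated volume to net standard volume using the data on the ticket."""
    gross_standard_volume = indicated_volume_bbl * meter_factor * ctl * cpl
    return gross_standard_volume * (1.0 - bsw_fraction)

# Made-up ticket values purely for illustration.
nsv = net_standard_volume(indicated_volume_bbl=1000.0,
                          meter_factor=1.0012,
                          ctl=0.9935,
                          cpl=1.0004,
                          bsw_fraction=0.002)
print(f"Net standard volume: {nsv:.2f} bbl")
```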
A QTR for the gas side, for 21.1, should be similar. It should be the same. That record, that QTR (Quantity Transaction Record), should have sufficient data to recalculate or verify the volumes. A lot of times when people are handing off a QTR, they’re handing over a gas volume statement that isn’t sufficient.
Nobody can take that. You don’t have the data resolution, the proper decimal precision, to perform a recalculation. We’ve decided as working group 2, we stacked hands and said let’s split this out into two different buckets. Let’s define a QTR. What is that? What are the various formats of that? Is it daily? Is it hourly?
Basically, they are hourly and daily, and we’re defining both of those individually. Then, we’ll get to a section of reporting. “Here’s general reporting. This is what you can expect from a general report. You can use it to support your volumes from an accounting perspective, but it’s not meant to verify the actual calculations.”
We’re hashing through that a little bit. We got some good language. We’re moving some stuff around. It’d be beneficial to the industry so we can clarify the difference between those two things and not have folks thinking they’re submitting or delivering a QTR when they’re actually not. It’s going to be good for the industry.
Weldon: It’s needed. The more sanity – that’s the way I like to word it – that we can get into the conversation about, what is a report? What’s a QTR? What do we need in that report?
Because I have listened to people that have a lot of years in the industry, that are respected in the industry, I’ve sat there and heard them argue that every decimal place that comes from that flow computer needs to show up on that monthly volume statement.
It’s good work. Of course, everything that committees do is good work. What’s the most value? What’s the most visible? What’s the most visible is a good way to introduce your working group 1.
The changes to how we’re going to calibrate, when we will calibrate, when we will just verify, when we recalibrate, and how we do those calculations?
More people are going to see that change in the standard than any other because it’s going to hit every technician out in the field eventually.
QTR versus reports, how much precision on a daily volume statement? Analysts are going to see that in the back office. Measurement managers are going to see it. Customers are going to have one or two people in the office, but quite a few people are going to see that.
Your working group 1, working on calculations. People may be aware that change is happening to calculations, but the number of people that will understand what’s changed and see that on a regular basis is going to be almost infinitesimal. Talk to us a little more about what working group 1 is up to, and why they’re up to it.
Marshall: Working group 1 is probably the most intense, the most contentious of the working groups.
We’ve had many meetings where it feels like we’re maybe not making much progress because there’s some very strong opinions on how we approach this reorganization or this reconsideration of these calculations and should we add additional data points to them or not. It’s been a great learning experience for me, for sure.
Essentially, I think where we’re stuck a little bit, and I’ll go over this in a little bit of detail. If I get any of the specifics wrong, trust me the working group will let me know very quickly. Essentially, what it comes down to is the biggest contentious part right now is whether or not we allow additional averaging techniques back into the standard. Currently, there’s only one. The flow-time linear average is the only one allowed by 21.1. There’s three other averaging techniques that have been allowed in the past but in the last revision were excluded.
Those are all in the annexes. If anybody’s interested, go to the annexes and find those.
Right now, it’s really between just two, though. The flow-time linear, which is the current averaging technique, and the flow-time formulaic, which is the other one. Let me back up a sec, so everybody understands how the averaging techniques work, why we use them, and how they’re used.
Weldon: Sure.
Marshall: The QTR, we can go back to the QTR, that quantity transaction record, which is the basis for your data resolution to perform a recalculation. In the flow computer, especially for most modern flow computers that many people are using – not everybody, most people are using – are doing one-second calculations. Let’s take a DP meter, an orifice meter as an example. That’s where we’re at here in terms of this discussion, and the data and the analysis that we’re running.
In that flow computer, it’s taking the signal, or the pulses, or whatever it is from all the instruments, and performing that volume calculation once a second. Then, it’s going to roll that up into a minute, and then into an hour. If we have hourly QTRs, that volume calculation is accurate as long as all the instrumentation is good, because it was performed once a second. It summed the volume across those seconds for the hour, there you go.
The problem is, when you need to take that hourly QTR and perform a recalculation. Let’s say you have an error in your DP or your static transmitter, and you need to recalculate it. The DP, the static, the temperature, all your other data points are an average of that hour. It’s an average of all those one-second calculations rolled up into that hour.
When you need to make the correction using the averages of the DP, static, and temperature, it depends on which averaging techniques you’ve used. Here’s the basis of the argument. Is it the average of the square root, or the square root of the average?
The linear is the square root of the average, and the formulaic is the average of the square root. Depending on your flow profile and your flow rangeability, let’s call it – I don’t know if that’s a term or not – how volatile your flow is will make a difference in which averaging technique you use.
For an example, here’s an example. Let’s take an hour of flow, orifice meter. Let’s call it a well pad meter, and let’s say it’s extremely volatile. It’s on a plunger lift system. That thing’s going to kick on, and you get a spike in your DP. It’s going to shoot way up there, let’s call it, for half the hour.
Fifty percent of the hour is up at 100 inches of DP. Then, the other 50 percent of the hour – so the other 30 minutes – is at 1 inch DP. If you perform a linear average across that time, you’re going to get a DP of 7.10 inches. That’s the linear average.
However, if you were to use formulaic on that same hour’s worth of data – half the hour at 100 inches, half the hour at 1 inch – your formulaic DP average for that hour is going to be 5.5 inches. You have a significant difference in your DP average for that hour.
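To make the arithmetic behind those two figures concrete, here is a minimal sketch assuming one-second DP samples for the hour Marshall describes (half the hour at 100 inches, half at 1 inch), and assuming the usual definitions: the linear average is the arithmetic mean of the samples, while the formulaic average averages the square roots and then squares the result. The 7.10 and 5.5 figures in the conversation correspond to the square-root terms that end up in the flow equation for each style of average.

```python
import math

# One hour of hypothetical 1-second DP samples: 30 minutes at 100 in. H2O,
# 30 minutes at 1 in. H2O (the volatile plunger-lift example).
samples = [100.0] * 1800 + [1.0] * 1800

# Flow-time linear average: arithmetic mean of the DP samples.
linear_avg = sum(samples) / len(samples)                           # 50.5 in. H2O

# Flow-time formulaic average: average the square roots, then square the result.
mean_sqrt = sum(math.sqrt(dp) for dp in samples) / len(samples)    # 5.5
formulaic_avg = mean_sqrt ** 2                                     # 30.25 in. H2O

# The orifice flow equation uses sqrt(DP), so the effective terms differ materially.
print(f"sqrt(linear average)    = {math.sqrt(linear_avg):.1f}")    # about 7.1
print(f"sqrt(formulaic average) = {math.sqrt(formulaic_avg):.1f}") # 5.5
```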
If you find that you have an error – a plate size error, or whatever other error you might find – when you perform that recalculation, that two-inch difference is going to matter. It’s going to be a material difference.
That’s the big argument. Well, it’s not necessarily an argument. It’s the truth. Those two different averaging techniques on a high-rangeability or highly variable flow well get a materially different average.
Now, what happens when you have a pipeline or a transmission style meter where you have constant flow, very, very steady? Essentially, then those two averaging techniques are negligible. There’s not any difference between your average.
Here’s the debate. We have one-second data that we pulled off of a meter out in the field. It’s a wellhead meter. We have a couple of hours worth of one-second data. We have the reference material, or what we would consider reference material.
Here’s your actual reference. Here’s your one-second data for these couple of hours. You take one hour of that one-second data, and you can sum the volume. Then, you can perform your different averaging techniques across those data points. You can see how close you get to the actual volume if you perform a correction or a recalculation given an error.
That’s the analysis we’re performing right now, but there is a big debate on how that testing is performed. What are the parameters and the protocols of that actual testing? We’ve hit some walls there a little bit.
There’s a lot of other items that working group 1 is going to be looking at as well including the IV, the calculation of the IV – should we add temperature into the calculation of the IV? – and a couple of other items. The main one right now is those averaging techniques. How do they perform when you perform an edit?
That’s one piece. It’s, how do they mathematically perform when you do a correction? The other piece we all need to be aware of, and some of the debate is happening as well is, what does that provide for the industry in terms of…?
If you have a back-office team, how easy is it for them to get that detailed into the data and to know whether that averaging technique can be used for a correction? Or if you need to use historical data for the correction, how does the analyst handle that? Is this making it easier for them? Is this going to make it much more complicated? That’s another discussion that is occurring.
Weldon: You got to be careful in that discussion, and say, “Well, we don’t care if it’s more complicated if the answer is better.”
Marshall: That’s the crux of it.
Weldon: Closer to correct, I guess, is the way to say that, but that’s not the reality of it. In today’s world where we’re pushing to do more and more with fewer people, we no longer have a day when an analyst is responsible for 300 or 400 meters. Most companies, an analyst would think they had a gravy train if they were responsible for 1,000 meters. When you have an analyst, especially on the production and gathering side, that may be responsible for way north of 1,000 meters, they only have a certain amount of time per meter to get that work done.
Marshall: Absolutely.
Weldon: We’ve got to weigh how good is the result versus what does it cost to get that result. Isn’t that what you’re saying there?
Marshall: Absolutely. Somebody could probably do a monetary analysis of this and calculate, if you got into this depth, this deep resolution of the calculations, what that difference is from a volume correction standpoint versus the amount of time the company paid the analyst to figure that out. You can do this cost-benefit analysis. You could probably get to an answer.
That doesn’t necessarily mean that you shouldn’t move towards the right answer. Technology is always in the background filling vacuums.
If you were to make the standard push towards the more technically accurate process, then somebody’s going to fill that gap and potentially build something for the analysts to utilize that is much easier and more expedient to close that gap between the monetary impact of labor versus the correction on the volume.
Weldon: We already have companies out there today that are dabbling in AI for evaluating exceptions and issues.
Marshall: Sure.
Weldon: We had Michael Thompson on one of our earlier episodes talking about work they had done with machine learning to help identify meter freezes, which can be a tough one.
We’re talking about the same thing here. We’re talking about the need for software that becomes advanced enough to know how to make that selection. Not forcing an analyst to take 20 minutes to make that evaluation before they can make an adjustment. That’s what we’re talking about.
Marshall: I would say the other side of the argument would be, can we not figure out a way to allow the analyst to make those decisions in a very accurate way without the need for the additional software? How does that then tier the industry in terms of who can afford the new software and all of that?
You’re getting into some interesting territory there. I think there’s something here. There’s likely a compromise here somewhere that may get us to a more accurate answer while, at the same time, allowing for the training of the analyst to be able to perform the work in a sufficient way that still matches that more accurate answer.
We’re so reliant on software now that it makes sense that some of those things would be built into the software. I can tell you that some folks are a bit hesitant around the AI stuff, about handing over the keys to a learning system versus a human.
As we all know, if you’ve ever edited meters before in a back-office system and got into the analyst role, some of it is a bit of an art form versus a technically correct one. You have to use historical data. You have to trend it, all that. Not to say that an AI system couldn’t do that. There’s definitely a big debate on that and the risk there.
Weldon: Back in my time with Energy Transfer, I had, at one time, over 30 analysts reporting to me there. What you say there is absolutely correct. What you say there as being correct is also the strongest argument for an AI being able to do it.
I can tell you, there was no substitute for a 10-year or a 15-year analyst with that much experience being able to look at data, and not by analyzing the data, by looking at the data, and understand what their next steps needed to be.
At the same time, that is something that comes from seeing thousands and tens of thousands of problems in data, and figuring out how to address them.
First of all, we’re losing that experience at a rapid rate. We talked about this in our previous podcast episode with you, Marshall. We’re losing that experience at a rapid rate. When we start to backfill those positions, we’re in a labor market today where people hop jobs more often. They switch more often. Getting to the point that you have somebody that knows how to handle something that complex is not going to happen with two weeks in a training session, or a month on the job.
Marshall: Right.
Weldon: It’s the same thing. That’s the argument that says an experienced analyst can do it better and faster. That’s the same argument for saying, “We may need an AI to do it.”
Marshall: Sure. In terms of getting those experienced analysts, I totally understand the point that you can supplement that knowledge leaving the industry with a machine learning or an AI tool. Absolutely. That’s going to be a factor or an option for companies.
For companies that are going to stick with the more human route or the analyst route, and training and getting that experience, if there’s one piece of advice, one thing that can help reduce that learning curve for an analyst, it is to get that analyst out to the field with field technicians, at minimum, twice a year, if not every quarter.
Get them out there quarterly. Get them in a truck. Get them on site. Have them change a plate. Have them blow down a meter run. Have them calibrate some instrumentation. The more you can perform that, or more you can have your analyst do that, the better they’re able to correlate the equipment, the physical equipment in the field with the data on the screen when they get back to their regular job.
Weldon: Exactly.
Marshall: Yep, so that they see a pressure spike, or the temperatures doing this, or whatever it may be, if they’ve been in the field enough, they can say, “Oh, I know exactly what that is.” They can picture that equipment operating in the field, causing that data to spike or to do whatever it’s doing on their screen.
That’s where the magic happens. They can start getting that art form and getting more and more accurate edits. I couldn’t agree more, the AI thing is coming and companies are going to have to start looking at it for sure.
Weldon: Whether they like it or not may be a little different thing. I’m one of those people who don’t like change. You got to prove to me the change is for the better.
We’re at the stage where we’ve already introduced so much reliance on technology. I feel that making those next steps is not going to be as hard as it was in the initial days of saying, “Yeah, we can use that flow computer.”
Marshall: Especially, since you have all your baby boomers leaving the industry and now you have your Gen Xers, and we talked about this on the previous podcast, but your Gen Xers are now taking those executive leadership roles. They’re much more comfortable with that technology.
The Gen Xers probably still have a bit of – what’s the term I’m looking for? – wariness about the AI piece.
As millennials get into those higher level management roles as well, they’re absolutely comfortable. They’re completely comfortable with AI and that technology. I think it’s inevitable. That’s going to continue to expand.
Weldon: I just want to let you know, though. We value what y’all are doing on that working group, all of the stuff on 21.1.
Again, I want to make sure that our listeners understand that, not just with API but AGA, GPA Midstream, the vast majority of all of that work that goes into this is being done by individuals that, for the most part, are doing quite a bit of it on their own time.
Now, companies support their effort. Thanks to all of the companies that support the effort of their employees on these working groups, on the committees. Thank them for supporting their travel, as we do a lot of it online these days, but still, getting together a couple times a year is important.
The financial support from those companies is great, and we value it. But when it comes to the people that are on these committees doing that work, and I know a lot of those, there are a few of those people, very few, that can say, “I only do that work on the clock.”
But for the most part, what is happening is every one of those folks on those committees already has a real job, as I like to put it, and that real job is not a 40-hour-a-week job to begin with. Is your job a 40-hour-a-week job, Marshall?
[laughter]
Marshall: No. I wish.
Weldon: That is everybody, almost everybody, on these committees. When they agree to be involved in one of these committees, when they agree to take half a day once a month out of their schedule to spend the time on an online committee meeting, or when they take three days to fly to Houston and sit in some company’s office while we work through the latest things with AGA, all the rest of their work that they’re really getting paid for by their employer didn’t go away. That means longer hours, more evenings working on stuff. We really value and appreciate the folks that are working on these committees.
Marshall: I couldn’t agree more. Just to give a shout out to all the folks on the committee meetings, regardless of standard, whether it’s API, GPA, whatever it may be, most of all of that is volunteer outside of their normal working hours. They put in a tremendous amount of work to help the industry move forward, to find and resolve problems.
We wouldn’t be where we’re at today from a measurement industry without all of that tremendous amount of work from all those folks. It is amazing to see. Thankfully, those companies do support the travel and being involved in these working committees, because they understand it impacts them.
If a company’s operating properly, they understand that these industry standards organizations impact them directly, and if they’re not involved, it’s the whole adage, “If you don’t vote, you can’t be mad about who’s been elected.” If you’re not involved in the committees, you can’t be mad next time that standard affects you if you weren’t involved.
Weldon: I agree. I think that’s a pretty great wrap-up for this, Marshall. Anything else you want to say before we quit?
Marshall: No. I’m good. I appreciate the opportunity again. Thank you again to all the folks on these working committees across the industry. It is incredible. Thank you.
Weldon: Thank you for your time doing this recording, Marshall. Thanks for your work on the committees. We’ll have your information in the show notes. They will be posted on the website. I’m sure anyone out there that wants to know more about this or get involved with the discussion can give you or the other members of the committee a call.
Marshall: Absolutely. We’re always taking new members for the committee working groups. We can probably put, Weldon, maybe I’ll just send Patty with API, give her email address out maybe and we can have folks funneled through her.
Weldon: Sure. I’ll just add Patty’s email address then. (Patty Fusaro, fusarop@api.org) All right. Sounds great. Thanks again, Marshall.
Thanks again for listening, folks. We hope you found this episode interesting and informative. If you did, please leave us a review on iTunes, Google, or wherever you get your podcast fixes from. Those reviews help improve our relevance score for the search engine, which in turn helps more people find us.
We also encourage you to share our podcast with your co-workers, your boss, and others in the industry.
As always, we’ll have a full transcript for this episode, along with the info on our guest, that will be posted on PipelinePodcastNetwork.com.
If you have comments or questions about our episode, suggestions for future topics, or if you’d like to offer yourself up to the podcast microphone as a guest, send me a message on LinkedIn or go to the contact form at the bottom of every page on PipelinePodcastNetwork.com.
Transcription by CastingWords