In this month’s episode of the Oil & Gas Measurement Podcast, host Weldon Wright is joined by Matt Holmes to discuss the intricacies of averaging techniques, flow computer settings, and back-office system calculations.
Matt shares insights from his extensive experience, covering topics such as hourly averages, VCF (Volume Correction Factor), and the impact of varying flow rates on calculations. The episode also covers the importance of accurate data representation and the challenges that measurement professionals face in achieving precise results.
VCF and Averaging Techniques Show Notes, Links, and Insider Terms
- Matt Holmes is the Senior Product Specialist at Quorum Software. Connect with Matt on LinkedIn.
- Quorum Software facilitates collaboration and information sharing throughout the energy industry, having evolved over 25 years to support various roles and optimize business workflows. With a vision for a connected global energy ecosystem, Quorum emphasizes cloud-first software, data standards, and integration to serve as a trusted source of decision-ready data for over 1,800 companies, promoting improved efficiency and collaboration within the connected energy workplace.
- Back-office Measurement Systems are data processing systems designed to import, validate, summarize, and report hydrocarbon measurement data from multiple sources, such as various brands of flow computers and laboratory data. These systems can process daily and hourly data from tens of thousands of individual flow computer devices, allowing a relatively small number of measurement analysts to manage issues and reporting.
- FLOWCAL, by Quorum Software, is a back-office measurement system that is widely used by companies in all sectors of the oil and gas industry.
- API (American Petroleum Institute) represents all segments of America’s natural gas and oil industry. API has developed more than 700 standards to enhance operational and environmental safety, efficiency, and sustainability.
- 21.1 refers to the API Manual of Petroleum Measurement Standards Chapter 21.1 – Flow Measurement Using Electronic Metering Systems – Electronic Gas Measurement
- API 14.3 / AGA 3 describe the design and installation parameters for measurement of fluid flow using orifice meters and other devices, and provide a reference for engineering equations, uncertainty estimations, construction and installation requirements, and standardized implementation recommendations for the calculation of flow rate through orifice meters.
- The annual AGA Operations Conference is the natural gas industry’s largest gathering of natural gas utility and transmission company operations management from across North America and the world. During the conference, participants share technical knowledge, ideas, and practices to promote the safe, reliable, and cost-effective delivery of natural gas to the end-user.
- Averaging related calculation errors can occur when recalculating gas measurement volumes in a back-office system when high resolution data (typically 1 second readings) have been averaged into hourly or daily resolution data. Under certain conditions, such as rapidly changing flow rates, large errors may be introduced if simple averages are used to recalculate volumes.
- VCF or Volume Correction Factor is a validation and recalculation factor calculated by back-office measurement systems. API 21.1 informative Annex C defines the calculation of the VCF and the correction methodology used when applying it to recalculated volumes. (A minimal illustrative sketch of the idea follows this list.)
- PHMSA (Pipeline And Hazardous Materials Safety Administration) protects people and the environment by advancing the safe transportation of energy and other hazardous materials that are essential to our daily lives. To do this, the agency establishes national policy, sets and enforces standards, educates, and conducts research to prevent incidents. They prepare the public and first responders to reduce consequences if an incident does occur.
- GPA or GPA Midstream Association is a voluntary industry organization composed of member companies that operate in the midstream sector of our industry. GPA Midstream sets standards for natural gas liquids; develops simple and reproducible test methods to define the industry’s raw materials and products; manages a worldwide cooperative research program; provides a voice for our industry on Capitol Hill; and is the go-to resource for technical reports and publications.
- Flow Computer is an electronic device used in the oil and gas industry to measure and control the flow of hydrocarbons.
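To make the VCF idea above concrete before the conversation starts, here is a minimal, hypothetical sketch in Python. The function names and the simple ratio form are illustrative assumptions only; API 21.1 Annex C defines the actual calculation and correction methodology, and back-office systems such as FLOWCAL implement their own versions.

```python
# Minimal sketch of the VCF concept; illustrative only, not the exact
# API 21.1 Annex C procedure or any particular back-office system's code.

def volume_correction_factor(reported_volume: float, recalculated_volume: float) -> float:
    """Compare the flow computer's reported volume to an independent
    back-office recalculation from the hourly average P, T, and DP.
    A result near 1.0 suggests both systems are configured consistently."""
    return reported_volume / recalculated_volume


def apply_vcf(new_recalculated_volume: float, vcf: float) -> float:
    """Carry the original reported/recalculated relationship onto an edited
    recalculation (for example, after a gas quality or calibration update)."""
    return new_recalculated_volume * vcf


if __name__ == "__main__":
    vcf = volume_correction_factor(reported_volume=1021.4, recalculated_volume=1018.9)
    print(f"VCF = {vcf:.4f}")                                  # close to 1.0: systems agree
    print(f"edited volume with VCF applied = {apply_vcf(1010.2, vcf):.1f}")
```

Whether the ratio runs reported over recalculated or the inverse is a convention detail; the point in the discussion below is how far that ratio drifts from one, and why.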
VCF and Averaging Techniques Full Episode Transcript
Weldon Wright: Welcome to Episode 26 of “The Oil and Gas Measurement Podcast,” sponsored by GCI, the Gas Certification Institute. For more than 20 years, GCI has been providing measurement fundamentals training and measurement standard operating procedures to the oil and gas industry.
Now, they proudly offer the Muddy Boot Online field operations platform. Let GCI show you how Muddy Boots can streamline your field measurement operations.
Announcer: Welcome to The Oil and Gas Measurement Podcast, where measurement professionals, bubba geeks, and gurus share their knowledge, experience, and likely a tall tale or two on measurement topics for the oil and gas industry.
And now your host, Weldon Wright.
Weldon: Hello, and welcome to Episode 26 of the “Oil & Gas Measurement Podcast.” I’m here with Matt Holmes today. We’re going to talk a little bit about some of the complexities around averaging techniques, flow computer settings, and back‑office system calculations. I want to get Matt to introduce himself here.
Matt and I have known each other for a while. I left Quorum back in March of ’21, and Matt must have been coming in the front door when I left the back door at that time, because we swapped within three days or a week, Matt?
Matt Holmes: Yeah, I was right behind you, I’m pretty sure. I just missed you.
Weldon: Well, there, you are right. Tell us a little bit about yourself, Matt, and what you’re doing over at Quorum.
Matt: Thanks, Weldon. I appreciate you having me on today. My background is I’ve been in the industry a little over 15 years now. I worked with Quorum products – FLOWCAL and TESTit – for about eight years at ONEOK and then another seven or eight years at MarkWest and MPLX.
I started as a measurement engineer and worked my way up to managing the back office data analysts and ultimately, the whole measurement program for the G&P operations at MPLX. Then, about three years ago, I moved over to Quorum in their professional services group and have been helping FLOWCAL customers try to get the most out of FLOWCAL.
Helping new users get it up and running in a way to meet their project requirements and business needs, and helping existing customers try to get more out of new features, new enhancements. Get the most out of their analyst’s time on a day‑to‑day basis.
Weldon: We’ve probably followed a fairly similar path through our career, Matt, except for one thing. I guess I started before flow computers were cool.
Matt: [laughs]
Weldon: They were around. They just weren’t the answer to everything back then.
I started in the, “It’s a computer. I don’t trust it. Let’s prove it’s right.” We’ve moved through that circle to where now, especially, the younger folks in the industry, “It’s a computer. It must be right.” What we’re going to talk about today is one of those little niches that proves we’re still figuring this stuff out.
Matt: Absolutely.
Weldon: As I said earlier, we want to talk about averaging techniques. Why does that make a difference? What does it mean in the flow computer? What does it mean in the FLOWCAL system?
You have written, or co‑authored, at least, a great paper. I’ve heard you present it several times at AGA or maybe ASGMT on this topic. We don’t want to rehash that paper. I’d like to get your take on why this is important, and what we need to do. How does all this fit together with 21.1?
Matt: Data averaging. Because the flow computers are doing calculations once a second, and then they’re giving us the total volume on an hourly basis, that’s what 21.1 requires.
They have to represent the pressure, the temperature, and the differential as an hourly average instead of a summation like they do for the volume. That’s the number that makes sense, but how do you arrive at that average? There are multiple ways to get there.
Then, in the back office system, we’re using those hourly averages for recalculations, for calibration adjustments, for gas quality adjustments, and also for…
The ultimate end‑all, be‑all check in 21.1 is that VCF. If I take the average hourly data and recalculate the volume independently, how close do I get to what the flow computer calculated?
We use that to validate that our flow computer is set up correctly. That our back office system is set up correctly. We feel really confident that when we close those numbers and send them to accounting we’ve got the right answer.
The more we can do along those lines to make sure that VCF comes out within a reasonable tolerance of one, the more we can focus our analysts on troubleshooting issues that are actually going to have an impact.
Weldon: Matt, I would like to make one thing clear to our listeners: VCF has become almost synonymous with FLOWCAL, and FLOWCAL was looking at that before 21.1 ever said anything about it.
This is an issue across our measurement platforms no matter what back office system you’re using, or even doing it by hand. VCF, as you pointed out, it gives us this good feeling that things are going OK, right?
Matt: Absolutely.
Weldon: One of the problems with VCF, though, is it’s almost too good. When we take VCF, and say, “We’re going to calculate a volume for a new analysis or an edit,” or whatever, if we didn’t have a perfect calculation before, that VCF can be used to make it just as imperfect.
Matt: Correct.
Weldon: We kind of get complacent. We get used to doing calculations, but we should be shooting for that VCF as close to one as it should be.
Matt: Within a certain tolerance. We have to minimize the issues that we’re looking at to those that are going to have an impact. Typical day‑to‑day, if an analyst can keep that number between 0.99 and 1.01, they’re moving on to bigger fish to fry. Anything inside that 1 percent window, we’re happy, because we know it’s not going to be perfect.
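As a rough illustration of the tolerance window Matt describes, a validation pass might look something like the following sketch. The record structure and everything outside the 0.99 to 1.01 limits quoted in the conversation are assumptions for the example, not FLOWCAL’s actual implementation.

```python
# Illustrative only: keep analysts focused on hours whose VCF falls outside
# the agreed tolerance window instead of chasing every small deviation.
from dataclasses import dataclass
from typing import List

@dataclass
class HourlyRecord:
    meter_id: str
    hour: str
    vcf: float

def vcf_exceptions(records: List[HourlyRecord],
                   low: float = 0.99, high: float = 1.01) -> List[HourlyRecord]:
    """Return only the hours worth troubleshooting."""
    return [r for r in records if not (low <= r.vcf <= high)]

hours = [
    HourlyRecord("METER-001", "2024-01-01 07:00", 1.002),
    HourlyRecord("METER-001", "2024-01-01 08:00", 0.953),   # flag this one
    HourlyRecord("METER-001", "2024-01-01 09:00", 0.998),
]
for rec in vcf_exceptions(hours):
    print(f"{rec.meter_id} {rec.hour}: VCF {rec.vcf:.3f} outside 0.99-1.01")
```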
Weldon: I’d like to get you to talk a little bit about what led to your whitepaper to begin with, and what the real concern is. We talked in general about what a VCF does, but let’s talk about how averaging techniques impact it. That discussion of techniques has been going on for the last 10 years now, I guess, 15 years almost.
Matt: Let’s start with the paper and the genesis behind that. In my previous role, I was responsible for the back-office measurement team.
I had analysts coming to me, and saying, “Hey, we’ve got these VCF validation limits that we’ve set. We’ve looked at all the characteristics. We’ve looked at all the data that we can. We think everything’s correct. We’ve compared it to test reports. We’ve had technicians on‑site take a look at things. Everything’s correct.
“This VCF number, still, I get these crazy 0.95s, 0.8s one hour. Then, the rest of the time’s good, or several hours in a row.” I was like, “All right. The only thing that’s left is, is that average data representative of the volume that flowed during that time frame?”
I got my co‑author, Jim Maloney. He did most of the hands‑on work. We set up, took all the data inputs from one source, and plugged them into four AGA 3 meter runs in a flow computer, each doing a different averaging method. Then, we ran them back through FLOWCAL to see how those VCFs lined out based on the same data, one‑second inputs.
You can see in the average data, there’s a pretty significant difference when you have a high variation in your flow rate to what the hourly averages turn out to be. That was a good opportunity for us to investigate that. Then, when you think about that from the analyst’s perspective, all they get is that hourly average.
If it looks like the same number hour after hour, because it’s going up and down across the same span over and over again, they can’t tell that that average is not representative of what’s going on, or that that’s the number that needs…It’s causing their VCF to be wrong.
If we can eliminate that from being an issue, and make sure that that number that’s being fed into that calculation doesn’t cause an additional impact, we’ve helped eliminate a problem from their jobs, and still have very good volumetric data to send downstream for settlement.
Weldon: Two things look the same to that analyst. One of which is we have that meter that is always full flow or close to full flow. Yet the VCFs sometimes go wacky, and sometimes they look OK.
Then, sitting on the edge of that are those meters that aren’t full flow all the time ‑‑ maybe they’re not full flow most of the time ‑‑ and they can suffer the same issue. That’s where the analysts start looking at it and saying, “Hey, I recognize this is a flow time issue.” That was the start of their concern, the first hint, wasn’t it?
Matt: Yeah. I’m sure that you’re aware, too, that it became very common in the industry. “Oh, it’s an orifice meter. It doesn’t have a full flow hour. I’m going to ignore the VCF exception,” right? “It’s a known issue. That’s what it is. I’ll move on.” It eliminates a valid troubleshooting tool if it’s giving us those false positives.
If we choose the right averaging method, we’ve got the opportunity to eliminate that from being something that they have to look at. It becomes one of those nuisance alarms that they automatically ignore. We want that to be something that every time it pops up, it’s worth looking into.
Weldon: I’ve been guilty of that. At one point in time, I had a couple of dozen analysts working for me. We definitely got into that same mode of operation. “There’s bigger fish to fry. Let’s worry about something else. Be careful if you recalculate.”
Matt: Exactly.
Weldon: Of course, the problem, really, that we saw historically wasn’t that it was a problem when we were on and running. It’s when we needed to do a calculation that we couldn’t leave the VCF turned on for.
We had a calculation where VCF wasn’t valid. We had to do a full recalculation, set that VCF to one. All of a sudden, boom, our volumes are up. Our volumes are down. They’re crazy. We just went, “Hey, don’t recalculate,” like it was a dirty secret somewhere.
I consider myself somewhere between okay and pretty good at math, but to be real honest with you, this gets to be some complicated stuff. You’ve got to be focused to follow through on it. Some of it gets pretty complex.
Matt: Absolutely, it does. The previous version had four different averaging methods that were options. Just trying to understand what those options were could take some serious time and focus to understand how they might impact the data that are represented to you.
In this latest version, the 2013 edition of 21.1, they introduce this extension and recalculation‑from‑the‑extension concept, which can provide a very good tool for coming up with a VCF, but it adds a whole other level of complication in understanding the impact of that hourly average differential, pressure, or temperature on the volume.
When you do a recalculation, or if you need to do an adjustment from a calibration report, what numbers do you use?
It makes the process extremely complicated when, especially today, the meters where the biggest impact on VCF shows up are those orifice meters where we’re not running clean, dry gas through them. They’re smaller volumes.
They’re streams where we know there are going to be other errors, too. Why make the process so complicated just to get to a really, really good calculation when we can get pretty darn close if we just use the right averaging method?
Weldon: I guess that’s where part of the concern is. I wanted to talk about that specific issue a little later in our discussion, but let’s grab it now. When we start analyzing the two leading answers to addressing this, if you will, we have the accepted method in API 21.1. We have the alternate discussion, which is Durand’s method, I guess is what they call it there.
There are differences. One is obviously superior to the other when we have intermittent flow or widely varying DPs in our flow.
As you just alluded to, it can make things a lot more complicated for the analyst. No longer can you say, “Hey, I’m missing some data. I’m going to straight‑line average between point A and point B.” Talk to us a little bit about the headaches that come with that.
Matt: The analysts, and the industry in general, are used to looking at that differential pressure as an indication of flow. It corresponds ‑‑ the higher the value, the higher the flow ‑‑ on a square root basis, obviously, because that’s how it goes into the equation.
The extension, like you said, is a very, very accurate way to average differential pressure, maybe even temperature, and flow time, all into one big number that accounts for everything that happened during that hour. It allows you to back into that same volume that the flow computer reported on a more accurate basis.
Like you said, when it comes to figuring out “I’ve got to fill in these missing hours” or “I’ve got to do an edit just for my temperature,” how do I extract just the temperature from this extension value, update only that variable, plug it back into the extension value, and plug that back into the volume calculation? Even 21.1 doesn’t tell you how to go about that process.
Whereas it makes sense, if I’ve got the orifice flow equation and I’ve got differential pressure and temperature, I should be able to plug those hourly average values right back into the same equation that the flow computer used and get the same number.
I can follow through that process because it’s spelled out in AGA 3, API 14.3. It’s all there and spelled out for you. You know what the calculation process is. It’s just like you said. It provides a very, very good calculation, a recalculation of the volume.
Trying to back out, if I just need to update one variable, if I just need to update gas quality, temperature, pressure, do a calibration adjustment, fill in that missing data, doing that correctly becomes very, very difficult.
You actually have to have multiple differential pressure values visible to the analysts. They have to understand which one of those they need to edit to make their adjustment correctly. That’s just more opportunity for failure, more opportunity to get the wrong answer.
For the amount of information we’re talking about, I’m not sure that that’s necessarily doing the industry the best service to help us get through all the data that we have to review and get it correct by the end of the month.
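To show the trade-off Matt is describing, here is a small sketch under stated assumptions: the per-second orifice flow is treated as proportional to the square root of the differential times static pressure (the “extension”), and everything else in the equation is lumped into one constant. This is a simplification for illustration, not the AGA 3 / API 14.3 calculation or the API 21.1 procedure.

```python
# Sketch: recalculating an hour from the average extension versus plugging
# hourly average DP and pressure back into the equation. Assumes flow rate
# per second ~ K * sqrt(DP * Pf); K stands in for everything else.
import math

SECONDS = 3600
dp = [125.0 + 120.0 * math.sin(2 * math.pi * t / 120.0) for t in range(SECONDS)]  # 5-245 inH2O
pf = [200.0 + 20.0 * math.sin(2 * math.pi * t / 900.0) for t in range(SECONDS)]   # 180-220 psia
K = 4.0e-5  # hypothetical lumped constant, volume units per second

# What the flow computer effectively does: integrate second by second.
fc_total = sum(K * math.sqrt(h * p) for h, p in zip(dp, pf))

# Back-office option 1: plug the hourly averages into the same equation.
avg_dp = sum(dp) / SECONDS
avg_pf = sum(pf) / SECONDS
vol_from_averages = K * math.sqrt(avg_dp * avg_pf) * SECONDS

# Back-office option 2: recalculate from the hourly average extension.
avg_ext = sum(math.sqrt(h * p) for h, p in zip(dp, pf)) / SECONDS
vol_from_extension = K * avg_ext * SECONDS

print(f"flow computer total       : {fc_total:.3f}")
print(f"recalc from avg DP and Pf : {vol_from_averages:.3f}  (VCF {fc_total / vol_from_averages:.4f})")
print(f"recalc from avg extension : {vol_from_extension:.3f}  (VCF {fc_total / vol_from_extension:.4f})")
# The extension reproduces the integrated total exactly in this sketch, but
# editing only temperature or pressure afterward means rebuilding avg_ext,
# which is the complication discussed above.
```

In this toy case the extension-based recalculation lands exactly on the flow computer’s total while the averages-based one drifts; the cost is that a simple single-variable edit no longer has an obvious place to go.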
Weldon: That analyst with 1,000 meters is busy.
Matt: [laughs]
Weldon: That analyst with 1,200 meters is hopping. In today’s age, we find analysts somewhere that end up with 1,500, 1,600, 1,800 meters. Let’s face it. They don’t have that much time to spend on the average edit when they’re having problems.
AI is the big buzzword. Everyone’s going to solve all of our problems with AI. They’re going to do better validation. They’re going to be better estimating.
If there’s room for AI ‑‑ this is something that was discussed less than a year ago, I guess, in one of the AGA meetings ‑‑ it’s that the answer to being able to do this more complex process on the analyst side is a software answer. It’s something our measurement systems probably need to address. That seems to be an AI solution waiting to happen there.
Matt: [laughs] There are definitely some opportunities there to get AI plugged in. Just as you mentioned before about flow computers taking a long time to be accepted, I can only imagine how long the industry is going to take to let AI into their systems and make decisions for them.
I think there may be some opportunities first to help it flag some records that are out of line, as needing further review. Beyond that, the issue gets very complex as to what we’re going to allow that AI to do and what information we’re going to make available to it.
Weldon: Exactly. You know Michael Thompson, I believe. He’s been on here talking about some of his project work in trying to detect meter freeze ‑‑ there’s another validation issue ‑‑ with some great success. I know Bruce Wallace has done some of the same stuff.
I had a conversation, within the last two weeks, concerning data estimation and missing data estimation and maybe corrections being made by an AI. My side of the conversation was very much what you just echoed. The capability is probably there. Can it help us in many of the circumstances? It might/could, but that’s a long path to acceptance.
You’re not going to do it without getting acceptance. You’re not going to have acceptance without our standards recognizing it. You’re not going to get it into the standards until enough people understand what’s going on to have meaningful discussions.
Matt: Absolutely. You’ve got to open up and let people understand what exactly that AI or software is doing. It’s got to be a repeatable result. We can’t have one answer this time and another answer the next time.
That’s the point of standards, right, is we all expect to get the same answer out of the same input data. Until we can validate and verify that that’s going to happen, it’s definitely not going to be mainstream.
Weldon: Exactly. This particular person I was talking to took a little offense at a comment I made in a previous podcast that we were going to be 10 years or more away from AI really taking over the analyst job. AI is going to solve everything, they said. AI might, but the capability and the acceptance to let AI do it are two different things.
What I’d like to do is I want to wrap back around to what I wanted to get into before we started talking about the analyst part, Matt. First of all, is your whitepaper out there where people can easily find it? I’m not sure how I’m going to do that.
Matt: It should be. It’s been presented at ISTAM. That’s probably the easiest way to find it. If you go out and find the ISTAM proceedings for any of the past few years, so 2019 and on, you should be able to find it out there. It was originally published at the AGA Operations Conference about five years ago.
If you look back there, that was their 100th anniversary. I can’t remember the exact year, but that was a big deal they had out there in DC. That was the original one. It should be there if you can’t find it on ISTAM’s website.
Weldon: I will try to find and add a link to that to the show notes. We’ll do a transcript of this at the end of the…I mean on the website right after we finish the podcast.
The paper shows some really good examples. What I’d like to get you to talk about for a couple of minutes is how that data was generated. This wasn’t just sitting down and running two or three calculations in a spreadsheet. Explain the process used to generate the data.
Matt: Like I said, my counterpart, my co‑author, Jim Maloney, was the one who did all the hands‑on work. He took essentially what was a signal generator that could generate a sine wave going into a flow computer.
From whatever it was, a 4 to 20 milliamp signal, he was able to make it go down to 4 and up to 20 at a very fast rate, up and down. We fed that into the flow computer, the same input for four different meter runs in the flow computer, each meter run doing a different averaging technique.
Then he had a not‑so‑temperature‑controlled warehouse he was working in, so we had some ambient temperature we could work with. Then he was also able to do something similar with another signal input we did, like a 4 to 20 signal which was going from what we thought represented about 180 to 220 PSI for pressure.
The differential was going 0 to 250 inches full range, and then the pressure was 180 to 220. Worst case, this thing’s an intermittent meter or something going up and down all the time. How does that impact the average data? We tried that both by just doing the variation on differential, and then we threw in the additional pressure and temperature variations on top of that.
Then took that data that ran for a couple of days back into that back office FLOWCAL system to do the recalculations and validate which one of these methods is getting us the right answer. That was all done in that flow computer in his warehouse out there in Pennsylvania.
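As a rough, back-of-the-envelope companion to the experiment Matt describes, the sketch below feeds one simulated second-by-second differential signal (swinging across the 0 to 250 inch range, with some no-flow time mixed in to mimic intermittent wells) through several averaging options. The method names and formulas are paraphrases of averaging approaches commonly discussed around API 21.1, not quotes from the standard or from the paper, and the numbers will not match the published results.

```python
# Illustrative simulation: the same one-second DP stream averaged four ways.
import math

SECONDS = 3600
# A swinging differential that spends part of the hour at no flow.
dp = [max(0.0, 250.0 * math.sin(2 * math.pi * t / 300.0)) for t in range(SECONDS)]  # inH2O
flowing = [h > 0.5 for h in dp]   # crude no-flow cutoff for the example

def mean(xs):
    return sum(xs) / len(xs) if xs else 0.0

dp_flowing = [h for h, f in zip(dp, flowing) if f]

# Four ways to report "the hourly average differential" from the same data:
time_linear = mean(dp)                                       # all seconds, linear
flow_linear = mean(dp_flowing)                               # flowing seconds, linear
time_sqrt   = mean([math.sqrt(h) for h in dp]) ** 2          # all seconds, square-root style
flow_sqrt   = mean([math.sqrt(h) for h in dp_flowing]) ** 2  # flowing seconds, square-root style

print(f"flow time: {sum(flowing)} of {SECONDS} seconds")
for name, val in [("time-weighted linear", time_linear),
                  ("flow-dependent linear", flow_linear),
                  ("time-weighted sqrt-style", time_sqrt),
                  ("flow-dependent sqrt-style", flow_sqrt)]:
    print(f"{name:26s}: {val:6.1f} inH2O")
# The spread between methods can reach tens of inches even though every method
# saw exactly the same one-second data.
```

Each of these numbers is a defensible “hourly average DP” for the same hour; which one the flow computer reports is exactly the choice that drives the VCF behavior discussed here.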
Weldon: That’s the key I was looking for our listeners to understand there, Matt. This wasn’t just generating some numbers and plugging them into a spreadsheet. You generated inputs into a flow computer that were actually changing. Changing in two or three different patterns, if I remember the white paper correctly, right?
Matt: Yeah.
Weldon: It lets the flow computer do the actual averaging, the actual flow integration that would normally be done under changing inputs. This wasn’t strictly a theoretical exercise. It uses real input data. I’ve looked through those and all of it’s impressive. Some of it’s downright scary. I’ve also seen a few cases that have gotten worse than your worst‑case VCF in that paper, right?
Matt: Yeah. In the research we did, there was a 40‑inch difference in the hourly average differential from the worst case to the best case scenario. That’s a huge discrepancy for the same data being fed into an hourly average calculation.
The same volume is achieved from both, but when you’re trying to troubleshoot, where do I make my calibration adjustment? Do I need to change my plate, because I’m approaching too high of a differential for my transmitter? You may be way out in left field in your assessment, or you may be looking at realistic data.
Weldon: Y’all were using a 40‑inch DP swing. What about that well on a pump‑off controller, or a plunger lift that goes full scale down to zero, and repeats that on a regular basis?
Matt: Exactly.
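For the plunger-lift style case Weldon raises, the arithmetic can be shown with round, hypothetical numbers: suppose the differential sits at 250 inches for half the hour and at zero for the other half. The figures below are illustrative, not taken from the paper.

```python
# Worked example with round, hypothetical numbers for an on/off well:
# DP = 250 inH2O for half the hour, 0 for the other half.
import math

flow_fraction = 0.5
dp_flowing = 250.0

# Relative per-second flow while flowing ~ sqrt(DP); zero otherwise.
true_relative_volume = flow_fraction * math.sqrt(dp_flowing)       # ~7.91

# Recalculating from the simple time-weighted average DP (125 inches),
# as if the meter flowed all hour, overstates the hour badly:
naive_relative_volume = math.sqrt(flow_fraction * dp_flowing)       # ~11.18

print(f"overstatement: {naive_relative_volume / true_relative_volume:.2f}x")  # ~1.41x
# Using the flow-dependent average (250 inches) together with the recorded
# flow time gets back to the right answer; ignoring either one does not.
```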
Weldon: It makes some crazy numbers. Overall, the takeaway that I got from this is that we’ve got to understand what’s going on out there in the field better. If it’s pipeline-quality gas, and we make one incremental change when the control valve changes at nine o’clock in the morning, this is probably not our biggest headache.
Matt: Exactly.
Weldon: That varying, if not wild, flow coming from individual wells, especially declining wells, sets up something that’s ripe for this. Kind of an aside to this that I don’t even want to get into: one of the things I thought about several times reading this is I wondered how this kind of problem lies on top of low‑speed pulsation problems for low‑speed compressors.
Matt: That’s a great idea. Absolutely.
Weldon: I’m not volunteering to do that research.
Matt: [laughs]
Weldon: We get into this in the upstream and the gathering and processing parts of the industry.
We get focused on VC…not VCF. We get focused on pulsation and square root error for a while. Then, we forget about it, and sweep it under the table. We’re in one of those sweeping‑under‑the‑table modes right now. I’ve had to remind several of my customers that, “Hey, you need to be looking at this. This is likely part of your problems.”
Now, back over to VCFs and averaging and techniques. I guess it would be fair to say that this is an extremely hot topic for an extremely small number of people when it comes to our standards.
Matt: Absolutely. There’s been an active workgroup going on for, in parts and pieces, five years or better, trying to get 21.1 updated.
The averaging technique has always been a part of the conversation. “Which number is best for the industry? What is the best process for the industry? How do we clarify exactly how to use this data in your back office process?” etc.
There are less than a dozen people that are consistently at these meetings and involved in the discussions. There can be very heated, very passionate discussions, very detailed in‑the‑weeds discussions, trying to make sure we get the best answer for the industry.
Weldon: I’m going to loop back around to what you said there in about 30 seconds here, maybe a minute.
I do a little work consulting for companies. I look at their measurement problems and their balance problems, and do some audits, but I also teach a few classes here and there. It’s been very consistent in the classes we teach for the back office: I’m surprised how few people are aware of this particular issue.
We can be talking about back office problems and validation issues. I can ask briefly about, “Hey, do you see meters where the VCF isn’t consistent, where it’s up and down?” All of a sudden, half the heads are nodding. The hands are raised if it’s a remote session.
Invariably, when we start to talk about that, it’s rare that anyone on the call has ever heard of the concept of averaging techniques being involved in that. Most of the time, there’s no one on the call who has. That’s not necessarily because it’s all new analysts. These may be managers on the call. I’ve had some 15‑year analysts in my classes.
It’s something that’s not understood, so we need to do two things. We’ve got to increase our knowledge and understanding of this problem. You don’t have to deal with that varying VCF. You can address it.
Now, part of the problem I run into is you tell the tech in the field, “You need to change the flow computer,” and you get a, “Huh?” or, “How do I do that?” We need some education on both sides, starting with the back office measurement system folks.
FLOWCAL is one of those, but there are others out there. We need some education for folks in the field also on, “When you see this occurring, how do I address it? Even if I don’t want to change every meter on my system, how do I fix the ones that are bad?”
Matt: Absolutely. That conversation between the back office and the field when it comes to addressing this has to be there.
Weldon: Now, back to what I said I’d loop back around to: we need some more folks involved in these discussions. This is something you mentioned prior to us starting the recording that I’d like to reiterate here.
There have to be some other people out there who have not only seen this problem but have some real‑life experience with it. They’ve addressed it within their own organization, or a previous company they worked for. They have a plan, or they have some more research data on the topic.
I know we do not like to share our measurement problems. One thing that the control room management and control room operations folks, and PHMSA, have done for the pipeline operations industry is make sharing your problems, and learning from the experience of others, more acceptable.
We haven’t always got to that here on the measurement side. We’re talking directly about what turns into dollar bills in people’s eyes. If you’re out there, and you’ve got information on this, reach out to Matt or anybody else that’s on the API 21 committee, and get some conversations going.
We also need more people who want to know more about this, want to get involved in it, and want to understand it. That’s the only way we’ll get this topic up to having enough attention that it gets any action.
Matt: I can’t agree more. Anytime I get the opportunity, I try to do the same thing and encourage people to get more involved.
These standards ‑‑ whether it’s API, GPA, or AGA ‑‑ are all referenced in every company’s contracts and tariffs. They’re referenced by governmental agencies like the BLM and ONRR to make sure that they’re accurate and doing their best…Not just the most accurate thing always, but what’s best for the industry.
Where are we spending our time and focus to get the best answer that’s reasonable? We could take these calculations to NASA, and say, “Hey, help us figure out how to get the absolute, most accurate answer possible.” It’s going to come back and be more expensive than the answer is worth.
We want to get an answer in a reasonable time with a reasonable amount of effort that is close enough, that we can all live with and feel we’ve done the right thing at the end of the day.
That’s got to include both from the field perspective ‑‑ getting that flow computer out there that can do the calculations, that can retain the data, that can send the data through the process ‑‑ and on into the back office processes to where we’re doing recalculations, and validation, and editing.
All of that has to get married together in this particular process as well as in a lot of other standards.
Weldon: I couldn’t agree more, Matt.
Matt: Appreciate that. The more perspectives we get in those standards meetings, those workgroups, to say, “Well, this is the data I’m seeing. This is what we see with the type of data we have.” Whether it’s those pipeline operators that have that consistent flow, or…
One of the examples ‑‑ this isn’t mentioned in the paper, but it’s there ‑‑ is a situation where there was a control valve that was constantly hunting.
There was a lot of fluctuation as the control valve moved back and forth, but in the hourly data, it was a steady average. That control valve that was constantly hunting and not tuned very well, more of a pipeline‑style situation, had a big impact.
The more people, the more perspectives. One thing I’ve noted, and have told some of my colleagues who have gotten involved in this space more recently, is that the historic API, GPA, and AGA membership is your corporate engineering types.
They’re not, very often, the guys that have the experience, the real day‑to‑day experience of turning wrenches in the field, nor are they the ones sitting in front of the computer, reviewing all of the data day in and day out.
The more of those perspectives we can also bring to the table that says, “That’s a nice idea, but that’s not how it works when it comes to putting this together,” the better product that we can put out as an industry.
Weldon: I like it, I like it. I’ve got to agree, we could take it to NASA. We could build something too complicated. That’s the worry that a lot of people on the back office side, or the managers on the back office side are having right now.
As I say in the classes I teach, the last thing we need in our industry, the last thing our individual companies can afford to have is the best measurement possible. That is NASA stuff. What we need is the right balance of measurement accuracy and reduced uncertainty with the cost of accomplishing it, and the company’s risk profile.
Matt: Exactly.
Weldon: Anything else you want to add before we go, Matt?
Matt: No, that was a great way to end. I’d love to see y’all more involved. Feel free to reach out to me if you need to know how to get involved. I’m happy to get you connected or to share that paper if you’re interested.
Weldon: Great. Thank you so much for spending some time doing this recording with me, Matt. We’ll have a full transcript of this on our website. It’ll have your LinkedIn profile, so people can contact you through that. I will try to find a hotlink to that whitepaper, and add it on there also.
Matt: Sounds great. I appreciate you having me on, Weldon.
Weldon: Thanks, Matt. Have a great one. I want to thank each of you for listening. I hope you found this podcast both interesting and informative. If you did, please share our podcast with your co‑workers, your boss, and others in the industry.
We will have a full transcript of this episode, a link to the whitepaper we discussed, and Matt’s contact info in the show notes on our website, PipelinePodcastNetwork.com.
Your reviews help folks find our podcast, and let our sponsors know that we are reaching the right folks. Please, take two minutes to give us a review on iTunes, Google, or wherever you get your podcast fixes from. Reviews and likes make the world go around these days, folks.
As always, if you have comments or questions about the episode, suggestions for new topics, or if you would like to offer yourself up to the podcast microphone as a guest, send me a message on LinkedIn, or shoot me an email at weldon.wright@fpvprime.com.
Transcription by CastingWords