Pipeliners Podcast host Russel Treat welcomes first-time guest Adam Hill of Kepware/PTC to discuss the fundamentals of Open Platform Communications (OPC) used in pipeline automation systems.
You will learn about the most important terminology used in OPC, how OPC technology has changed over time since starting as a Microsoft platform, the introduction of a Unified Architecture to connect legacy systems in one framework, the role of the OPC Foundation in helping set industry standards for open communication, and much more.
Listen to this valuable technological discussion pertaining to pipeline operations.
Fundamentals of OPC: Show Notes, Links, and Insider Terms
- Adam Hill is a Strategic Account Manager for Kepware Technologies. Find and connect with Adam on LinkedIn.
- Kepware is a software development business of PTC, Inc. Kepware provides a portfolio of software solutions to help businesses connect diverse automation devices and software applications and enable the Industrial Internet of Things.
- PRESENTATION: Download Adam’s ISHM 2019 presentation, “OPC Overview.”
- PRESENTATION: Download Adam’s ISHM 2019 presentation, “Simplifying Real-time and EFM Data Collection.”
- IIoT (Industrial Internet of Things) is the use of connected devices for industrial purposes, such as communication between network devices in the field and a pipeline system.
- GIS (Geographic Information System) is a method of capturing the earth’s geographical profile to produce maps, capture data, and analyze geographical shifts that occur over time.
- PLCs (Programmable Logic Controllers) are programmable devices placed in the field that take action when certain conditions are met in a pipeline program.
- EFM (Electronic Flow Meter) measures the amount of substance flowing in a pipeline and performs other calculations that are communicated back to the system.
- RTUs (Remote Telemetry Units) are electronic devices placed in the field. RTUs enable remote automation by communicating data back to the facility and taking specific action after receiving input from the facility.
- A type of RTU is an EFM Flow Computer, which measures the flow of gas or liquid and reports the data back to the facility. It differs from a general-purpose RTU in that it is designed to compute flow using standard flow equations with specific timing and reporting requirements.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at remote locations. SCADA breaks down into two key functions: supervisory control and data acquisition. This includes managing the field, communication, and control room technology components that send and receive valuable data, allowing users to respond to the data.
- HMI (Human Machine Interface) is the user interface that connects an operator to the controller in pipeline operations. High-performance HMI is the next level of taking available data and presenting it as information that is helpful to the controller to understand the present and future activity in the pipeline.
- OPC (Open Platform Communications) is a data transfer standard for communicating device-level data between two locations, often between the field and the SCADA/HMI system. OPC allows many different programs to communicate with industrial hardware devices such as PLCs. The original system was dependent on MS Windows before shifting to an open platform.
- OPC DA (or OPC Classic) is a group of client-server standards that provides specifications for communicating real-time data from devices such as PLCs to display or interface devices such as HMIs and SCADA.
- DCOM (Distributed Component Object Model) is a Microsoft technology that OPC Classic relies on to communicate between devices across networks.
- UA (Unified Architecture) is a platform independent service-oriented architecture that integrates all the functionality of the individual OPC Classic specifications into one extensible framework.
- Tunnels are methods of transporting data between two geographic locations that run on separate networks.
- CommServer is a package of communication software used to manage data transfer. Its technology and algorithms provide intelligent data transmission that automatically adapts its parameters to the user's or process's needs.
- Edge Communications is a method of building out the architecture for structured communication from edge devices in the field to a host server using connectivity to poll and transmit the data.
- MQTT (Message Queuing Telemetry Transport) is a publish-subscribe protocol that allows data to move quickly and securely through the system without bogging it down with unnecessary requests.
- REST (Representational State Transfer) is a request/response, one-way connection to the server. The client connects to the server only when needed, either to push data up to the server or to pull data down to the client.
- Link tags are used to link two server tags. For example, Tag A from Device A can be linked to Tag B from Device B without requiring a third-party client connection. Read more in this Kepware manual.
- OPC Foundation is a global organization consisting of users, vendors, and companies that collaborate to create data transfer standards for multi-vendor, multi-platform, secure and reliable interoperability in industrial automation. OPC Foundation creates and maintains specifications, ensures compliance with OPC specifications via certification testing, and collaborates with industry-leading standards organizations.
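The MQTT and REST entries above describe two different interaction patterns: publish-subscribe versus request/response. The publish-subscribe side can be sketched in a few lines of plain Python. This is an illustrative in-process toy, not a real MQTT client; production code would use an actual broker and a library such as paho-mqtt, and all names here are made up for the example.

```python
class Broker:
    """Toy message broker: publishers push to topics, subscribers get callbacks."""
    def __init__(self):
        self.subscribers = {}  # topic -> list of callback functions

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        # Deliver the payload to every subscriber of this topic.
        for callback in self.subscribers.get(topic, []):
            callback(topic, payload)

# Publish-subscribe: the SCADA host and a historian both receive the same
# reading without the field device ever being polled twice.
received = []
broker = Broker()
broker.subscribe("pipeline/station1/pressure", lambda t, p: received.append(("scada", p)))
broker.subscribe("pipeline/station1/pressure", lambda t, p: received.append(("historian", p)))
broker.publish("pipeline/station1/pressure", 412.7)

print(received)  # [('scada', 412.7), ('historian', 412.7)]
```

The point of the pattern is the decoupling: publishers don't know or care how many consumers exist, which is what keeps MQTT traffic light compared to each client polling on its own.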
Fundamentals of OPC: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 75, sponsored by Gas Certification Institute, providing training and standard operating procedures for custody, transfer, and measurement professionals. Find out more about GCI at gascertification.com.
[background music]
Voiceover: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time, and to show that appreciation, we’re giving away a customized YETI tumbler to one listener each episode.
This week, our winner is Felipe Ramirez with New Mexico Gas Company. Congratulations, your YETI is on its way. To learn how you can win this signature prize pack, stick around until the end of the episode.
This week on the Pipeliners Podcast, Adam Hill with Kepware Technologies is going to join us. We’re going to talk about OPC. For you computer nerds, you should love this one. Adam, welcome to The Pipeliners Podcast.
Adam Hill: Great. Thanks, Russel. It’s great to be here.
Russel: Tell the audience, if you would, a little bit about your background and how you found yourself working in the oil and gas world.
Adam: Absolutely. That’s an interesting question. I’ll go ahead and paint the picture, if you will.
I’ve been working in the software industry for about 10 years now. I started out of college; actually, while I was still in college, I got into working with and learning about geographic information systems. I have a BA in economics from the University of Wisconsin-Madison and a graduate certificate in GIS.
From a very young age, I’ve been very interested in software, and using computers, and video games, and all that fun stuff. The GIS path got me, right out of school, into working for a software company. It was a mapping GIS software company.
The oil and gas industry essentially drove the business development of that product that we were selling. I was a member of the sales team. I was doing product management, as well. Wore quite a few hats with that company. Worked for them for about six years. Learned who the major players were within the oil and gas space, and specifically what they were using that software for.
From there, I joined Kepware in 2014. Came in with knowledge regarding oil and gas customers and, like I said earlier, who the major players are. Ported that over to learning what Kepware does and assimilating that into my previous history.
Russel: That’s an interesting background. I don’t know if we’ve talked a whole lot about your experience at GIS, but we’ll set that up for a conversation over cocktails at some point in the future.
Adam: That sounds fantastic.
Russel: I asked you to come on to talk about OPC and OPC fundamentals. Maybe you can start by telling the audience: what is OPC, and why, as a pipeliner, should I care?
Adam: I’m actually giving a talk next week on this very topic, so it’s definitely very timely to talk about these things. Basically, OPC today stands for Open Platform Communications, but it didn’t always stand for that. There really is, and was, an evolution of the OPC acronym.
Originally, OPC stood for OLE for Process Control, where OLE is object linking and embedding, so there were some elements linked to Microsoft very early on. Then it moved away from Microsoft OLE to become open process control, and ultimately what it stands for now is Open Platform Communications.
What we’re talking about here, essentially a data transfer standard, some of the elements from my talk are going to involve the OPC Foundation, what it is, what it does. Also, setting the stage of what it was like before OPC, and then what it is like now with having OPC.
You can think of OPC as a data transfer standard for communicating device level data or data from various sources, up to client applications.
Russel: I’m going to try and translate a little bit for the pipeliners. This is one of those subjects I know a little bit about. Basically, OPC is a standard for moving real-time data feeds between two locations. Think of it that way. That’s a very simple way to say it.
Typically, OPC doesn’t start at the field. That starts with some kind of native protocol. Everything from the communication server on up, the SCADA/HMI, the historian, any of those kinds of things, is often fed with OPC.
Let’s talk a little bit about why did the world come up with OPC and when did it first enter the market. What was it like before OPC?
Adam: That’s a good place to lay some groundwork. Before OPC, no standard really existed for communication between devices or data sources and computer interfaces or the data gatherers. You really had this situation where a client interface, HMI, or a client application would communicate down to a PLC or to a device using a proprietary software application developed by a hardware vendor or whatnot.
Hardware vendors developed the devices, and then out of necessity would develop the software application to communicate to that device. Everything’s fine in a situation where you have a single client communicating to a single device, but proprietary connections and toolkits are required in situations like this.
As users want to make more client connections to one device or add different hardware, that’s where the problem gets sticky. Again, as we’re adding devices into the mix, as well as additional client applications, everything becomes a little bit of a spaghetti type scenario. That’s really where the OPC Foundation was formed, to solve some of these challenges.
You mentioned earlier communicating data to SCADA applications and whatnot. Ultimately, like I said, as we’re adding more devices into the mix and more client applications they’re overburdening various devices. You need some sort of best-of-breed approach to bring in and ultimately develop the client-server relationship with implementing something like an OPC server in the middle of your architecture.
Russel: In the early days of OPC what a lot of people don’t realize is OPC was originally developed by and to some degree controlled by Microsoft. That was originally built on COM/DCOM, which meant OPC was a Windows technology.
Maybe you could talk a little bit about what are the problems or challenges with COM/DCOM, or what’s sometimes called OPC DA versus OPC UA.
Adam: That’s good. You are absolutely right about the Microsoft dependence.
Earlier I mentioned OLE for Process Control. OLE, object linking and embedding, was developed by Microsoft to allow embedding of documents and other objects. It’s closely related to the evolution of DDE.
There was a lot of reliance with DA and Microsoft. You mentioned DCOM. Any time we breathe the word DCOM to an engineer or to an individual, a customer you get a…
Russel: We all throw up a little bit in the back of our throats.
Adam: Exactly, that’s what I’m getting at. DA is still widely used, but what’s nice now is there are tools available to extract DA data from an OPC interface that may reside within a SCADA — a big behemoth SCADA application or something that’s been around for a while. There’s a DA interface that you can extract data from and then convert it to UA.
Russel: This is one of the things that’s not widely understood in the pipeline world, is that people that work in this kind of talk technology, they’ll say, “Well, we’ll just do OPC.” A lot of the software tools that say they support OPC, they actually only support OPC DA, which means it’s COM/DCOM based.
That means that if I’m going to do an OPC connection it’s probably going to work pretty well within a box, might work okay over a local area network, but over a wide area network it’s going to be problematic, and there’s security problems with it because of the nature of COM and DCOM and what you have to do to get those things to work.
Again, talking history a little bit, that made a lot of sense in the early days when I wanted a simple way to display real-time data in an Excel spreadsheet. It doesn’t make sense today when the level of complexity and the level of security requirements is something very different.
That’s my take on DA. The important thing that I would want the listeners to take away, even if you don’t know the technology and you don’t understand all these buzzwords, is if somebody’s saying, “Yeah, we’ll just plug it up by OPC,” it’s not really as simple as that.
Let’s talk a little bit about the OPC Foundation, what it is, and how your company’s involved with it.
Adam: The OPC Foundation formed in 1995 with five founding companies. There are over 450 corporate members in the foundation now.
One of the things that I mention during the talk is the foundation’s mission. What they focus on is creating data transfer standards for multi-vendor, multi-platform, secure, and reliable interoperability.
They do things like maintain specifications and create them. Ultimately, drive the direction of OPC. We’re talking about companies like Rockwell, Fisher Rosemount, Opto 22, and what have you. You can think of it almost as competing companies together. They have these workshops, as well. The foundation hosts workshops globally to bring competing companies together to figure out ways to communicate data between applications.
There’s different programs available as part of the foundation, as well, for software vendors, which is very important. You can self-test your application.
You can also become Gold Certified, making sure that your application adheres to various specifications for supporting OPC. Obviously, the best thing to do is to become Gold Certified, but there are options for self-testing your application.
One of the nice things about working in this space is keeping up with the latest and greatest from the foundation and making sure that our software application is adhering to some of the changes coming throughout the industry.
They also have, like I mentioned earlier, these interoperability workshops. It’s neat to think about. That’s not a bunch of sales people getting together and trying to sell each other products and whatnot, but it’s a bunch of engineers and application engineers getting together to solve technical problems and figure out ways to communicate data between applications that customers may be using.
Ultimately, what the foundation does is for the benefit of the customer, for the benefit of users globally of these applications that support OPC.
Russel: I actually think OPC’s done some great work. There’s probably two big things. One is by getting all the automation vendors to collaborate and agree to a specification it creates a lot more flexibility for the users to connect things together. That’s one.
The other is, not only have they collaborated with the automation vendors, they’ve also more recently been addressing all the operating systems. Now you can get OPC UA and you can run it on Windows. You can run it on iOS. You can run it on Linux.
Adam: Cross platform.
Russel: There’s a lot of work being done on actual firmware implementations of OPC UA where that opens up all kinds of possibilities about how I can get things to talk.
Adam: We addressed DA, but it’s sort of like UA to the rescue. It stands for unified architecture, I believe.
When UA came out we noticed as a software vendor we were one of the first to support it, but it wasn’t widely adopted yet, right away, whereas now it certainly is. It’s here to stay, obviously.
Cross-platform support. You get rid of the problems you mentioned earlier with DCOM where if you’re in a LAN you’re fine. The minute you go to a WAN with DCOM and have to adjust DCOM settings to traverse firewalls and what have you, forget about it. There’s tunneling capabilities with UA.
Russel: Define a tunnel. What is a tunnel?
Adam: There’s a great slide at the end of this talk that I refer to.
The way I understand it, and it’s a simple way to think about it is that I’ve got to get data from Point A, geographically, to Point B. We’re talking about different networks and a very wide area network. I’ve got to get the data from one geographic location and one network that’s completely different than another one halfway across the world, for example.
This involves firewalls. Security is certainly a concern when trying to move data from one location to another. You have communication barriers, like routers and other networks. I mentioned firewalls earlier.
A good application that we like to portray, if you will, is moving data that’s locked in a PLC across a tunnel using UA up to a client application that only supports DA. The idea, and what we’re really getting at here, is these building blocks of how do we take data locked in a PLC halfway across the world and move that using OPC servers to a client application in Houston, for example, that can only consume DA.
We’re wrapping around applications, instead of replacing them, that may have limitations using tunneling technologies and OPC UA.
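Adam's "wrap, don't replace" idea can be sketched in plain Python: a UA-style endpoint stands in for the tunnel's far side, and a local adapter re-exposes the remote data through the legacy DA-style call the old client already knows. This uses no real OPC libraries; class names, method names, and the node ID format are all hypothetical, purely to illustrate the shape of the architecture.

```python
class RemoteUaEndpoint:
    """Stands in for the tunnel's far side: secure UA transport over the WAN."""
    def __init__(self, tags):
        self._tags = tags

    def read_value(self, node_id):
        return self._tags[node_id]

class LegacyDaAdapter:
    """Local tunnel endpoint: presents remote UA data via a DA-style call."""
    def __init__(self, ua_endpoint):
        self._ua = ua_endpoint

    def read(self, item_id):
        # The only call the old DA client knows; the tunnel handles the rest.
        return self._ua.read_value(item_id)

# The DA-only client in Houston keeps calling read(), unchanged, while the
# data actually originates from a PLC halfway across the world.
remote = RemoteUaEndpoint({"ns=2;s=Station7.FlowRate": 1250.0})
local = LegacyDaAdapter(remote)
print(local.read("ns=2;s=Station7.FlowRate"))  # 1250.0
```

The design choice worth noting is that the legacy application's interface is preserved exactly, so nothing downstream has to be rewritten when the transport underneath changes.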
Russel: We’ve been working with OPC and UA at our company for quite a long time. We were doing UA tunneling back before anybody really knew what UA tunneling was. At that time people were making specialized software to tunnel OPC.
I’ll say this, that I did a project where we were having to do some pretty complex configuration of tunnels using DA. We moved it all to UA, and we went from a day to a day-and-a-half to get a tunnel configured down to an hour or two. Not only that, it was way more reliable and way more secure. It’s a big difference.
What is the current industry state around the adoption of UA? I know that the last time I had this conversation, which was probably four years ago, that a lot of people were still learning about what UA is. What would you say the current industry state is in oil and gas pipelining?
Adam: That’s a really good question. One of the trends we’ve seen is that getting data into SCADA via UA is certainly popular, but — obviously we’re talking about OPC — there are unique challenges in needing to get data into other applications throughout the enterprise, which may or may not involve OPC.
It’s all about where the data is coming from, getting it out of your device layer, rationalizing and aggregating it into a data collector, if you will, and then where it’s got to go. If it’s going to SCADA, typically we’re talking OPC, whether UA or DA. A good quality data collector application will obviously be able to support both.
You have to take into consideration where these applications reside — LANs or WANs, for example — whether you’re using DA or UA.
One of the things we’ve seen lately is more widespread use of UA, not only to expose data to SCADA, but also to get data out of other applications, so using client drivers within a data collector to collect data from an OPC DA server interface that may reside within a SCADA application to ultimately send that data somewhere else or expose that data somewhere else.
One of the other talks I’m giving next week, as well, is to describe and mention the ways in which we can simplify real-time and EFM data collection throughout an enterprise.
Russel: One of the things that’s challenging, even for IT guys who are reasonably competent and capable in the technical domain, is that this whole domain we’re talking about is even more specialized than that, because we’re talking about real-time data. We’re talking about automation data. You talk about server-side, client-side, and all that.
For a lot of the listeners to this podcast, their eyes might glass over. I’m thinking about when I was doing a podcast about pipeline integrity and smart pigging, talking to a PhD in that domain. I’m a pretty sharp guy, but I wasn’t necessarily getting it.
Maybe we ought to talk a little bit and break it down. What is this idea of client-server? What is server-side? What is client-side?
Adam: That’s good. That paints the picture and lays the groundwork.
This client-server-device concept comes into play when talking about OPC. Earlier in the discussion I talked about connecting to individual devices with a single client application. Once you add more clients and more devices into the mix, the problem starts to get a little hairy because…
Russel: Again, I’m going to ask you to do some more definition. What’s a device?
Adam: A PLC.
Russel: PLC, RTU, and EFM. A box in the field that’s between the instruments and the communications.
Adam: Exactly.
Russel: What’s a client?
Adam: SCADA, HMI.
Russel: Something that’s consuming the data, right?
Adam: Correct.
Russel: What’s a server?
Adam: An OPC server application resides in the middle of the data collector that’s brokering the communication from the PLC up to the SCADA application.
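The device, server, and client roles that Russel and Adam just walked through can be sketched in plain Python: the server polls the device once, caches the values, and serves any number of clients, so adding clients no longer adds load on the PLC. All class names, register addresses, and tag names here are illustrative, not any vendor's actual API.

```python
class Plc:
    """The device in the field: holds raw register values."""
    def __init__(self):
        self.registers = {"40001": 87.5, "40002": 1}
        self.poll_count = 0  # how many times the device has been read

    def read_register(self, address):
        self.poll_count += 1
        return self.registers[address]

class OpcServer:
    """Brokers communication: polls the device, caches, serves many clients."""
    def __init__(self, device, tag_map):
        self.device = device
        self.tag_map = tag_map  # tag name -> device register address
        self.cache = {}

    def poll(self):
        # One scan of the device refreshes every mapped tag.
        for tag, address in self.tag_map.items():
            self.cache[tag] = self.device.read_register(address)

    def read_tag(self, tag):
        # What a SCADA/HMI client calls; served from cache, not the device.
        return self.cache[tag]

plc = Plc()
server = OpcServer(plc, {"Station1.Pressure": "40001", "Station1.PumpOn": "40002"})
server.poll()

# Three clients (SCADA, HMI, historian) read the same tag: still one device scan.
readings = [server.read_tag("Station1.Pressure") for _ in range(3)]
print(readings, plc.poll_count)  # [87.5, 87.5, 87.5] 2
```

This is the "spaghetti" fix Adam described earlier: without the server in the middle, three clients reading two registers would mean six device reads instead of two.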
Russel: The reason I’m going through this with you a little bit, Adam, is one of the things that’s challenging, I understand the words you’re speaking, but in my world people don’t talk device. They don’t talk client. They don’t talk server. They don’t talk client side/server side.
What they say is PLC or EFM. They say SCADA, and that’s where they leave it.
This is really important, because there’s another place I want to go in this conversation. One of the things that’s really important is when you start talking about SCADA, in today’s world you often decouple the communications from the data presentation, where historically that was all part of the same block of software code.
Nowadays communications is getting decoupled. The reason is there’s a lot more places other than the control center that want the real-time data. I don’t want to run the data up to the control center and then feed it to everybody else. I want to get it to the CommServer, send it to everybody that needs it, and just send to the control center what the control center needs.
I use the term architecture. That’s an architectural conversation. It casts a different light.
Why this is important, and the reason I wanted to unpack this a little bit, is people are starting to wonder what is Industrial Internet of Things (IIoT), what is data analytics, and why do I care. This conversation goes directly to that, right?
Adam: Exactly.
Russel: Maybe you could tell me what you think is going to happen with Internet of Things, data analytics, and how does OPC play with that.
Adam: This is a really, really good topic, because we’ve seen the evolution of being a data collector software vendor that only supported OPC northbound, but has since adopted and implemented different tools within our data collector to expose data using more IT-centric protocols, like MQTT or REST and what have you.
One of the things you mentioned earlier is the idea of decoupling. Correct me if I’m wrong, Russel. You were referring to decoupling the polling engine from the SCADA application.
One of the talks I’m giving next week actually focuses on a customer example where they’ve got the option to move or to implement a data collector south of SCADA, meaning we are collecting data from PLC. The SCADA application traditionally did the polling, but here we’ve put in a data collector south of that SCADA application to handle the polling of PLCs, EFMs, flow computers, RTUs.
The data collector is south of SCADA, and then uses OPC to serve real-time data up to that SCADA application. Not necessarily more important, but unique, is that the data collector is also able to pipe measurement data from EFMs northbound to measurement applications (FLOWCAL, for example, via CFX files) and to get data into OSI PI or even big data analytics packages using MQTT.
Again, it’s all about where the data needs to go and where you need to get it from. Different options with respect to the polling engine, as far as where you put it in your architecture.
OPC still plays a pivotal role exposing data to certain applications, but now we’re seeing the crossover from this OT IT bridge, if you will, depending on IoT applications and ultimately in your enterprise where this data needs to go for decision making purposes and whatnot.
Russel: That whole conversation about the data gets really complicated. You made a comment earlier that I want to talk a little bit about before we wrap up here, the idea of putting some organization around the data, because the data without organization is useless.
If you think about accounting systems, business systems, and GIS systems, they all have fairly highly evolved and mature tools for organizing the data, where in automation, where I’m looking at time series data, we don’t really have those kind of tools, and particularly with IoT.
You get this issue of, there’s the data, the pressure value, there’s the data about the data — my standard deviations, my hysteresis, and other stuff that tells me things about the data — and then there’s millisecond data, one second data, one minute, one hour, one day, and one month.
All that’s the same data, but it’s used by different people, for different purposes, contextualized different ways. That ends up becoming a really big issue around all this.
What I think is happening is that we’re getting more data, the ability to move it, connect it is getting easier because of OPC and some of these other tools you’re talking about, or other protocols you’re talking about, but the actual ability to work with the data’s got a place to go yet.
Adam: I agree. I fully agree. One unique application that we’ve run into is now this concept of having your data collector or polling engine within your SCADA, but the option to put a data collector north of SCADA, and using MQTT to publish to a broker so that other applications can subscribe to that broker. Ultimately, where’s the data need to go?
These applications can do data conditioning and rationalization. Let’s say within the polling engine I’m interested in Tag C, which is a simple Tag A plus Tag B equation. I’m only interested in polling Tag C, rather than jamming all of Tag A and Tag B into a particular application and then sorting or rationalizing it within that tool.
I hear exactly what you’re saying. There’s a lot that needs to be done, but we are seeing a lot of different applications come into play and different architectures come into the mix.
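Adam's derived-tag example can be sketched in a few lines: compute Tag C = Tag A + Tag B inside the polling engine, so only the conditioned value travels north instead of both raw tags. This is illustrative plain Python; the tag names, the function names, and the table layout are hypothetical, not any product's configuration model.

```python
# Raw tags as polled from the device layer.
raw_tags = {"TagA": 310.0, "TagB": 42.5}

# Derived tags: name -> function over the raw tag table.
derived_tags = {
    "TagC": lambda tags: tags["TagA"] + tags["TagB"],
}

def values_to_publish(raw, derived, wanted):
    """Evaluate derived tags, then return only the tags the consumer asked for."""
    table = dict(raw)
    for name, fn in derived.items():
        table[name] = fn(raw)
    return {name: table[name] for name in wanted}

# The northbound application subscribes to TagC only; TagA and TagB never
# leave the polling engine.
print(values_to_publish(raw_tags, derived_tags, ["TagC"]))  # {'TagC': 352.5}
```

The payoff is bandwidth and simplicity: the subscribing application receives one conditioned value rather than raw inputs it would have to combine itself.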
Russel: Adam, I’m going to ask you to do something as we wrap up here. A lot of times I’ll do, “What are the three key takeaways?” or what are my three key takeaways, particularly when it’s technology or something I don’t know a lot about.
I think I’d like to ask you, you’re getting ready to do this class or this presentation. What would you want the people who are listening to this conversation to be the three key things that they would take away?
Adam: The basic one, and the focus of the talk and the class in general, is what you’re going to learn as part of the talk. The big one is, what does OPC stand for? Being able to walk away with maybe some background about what it stood for before, the evolution of the acronym, and what it stands for now.
Also, what is the purpose of the OPC Foundation? That’s a big one. Rounding things out, and we addressed it today during this talk, is OPC specifies the communication between what and what? So, client and server.
Those would be the big three key takeaways for the OPC fundamentals.
Russel: Cool. I think that’s awesome.
Thanks for participating. For the listeners that are interested, we’re going to have Adam back next week. We’re going to talk about something that’s maybe a bit more hands-on. That’s, how do I simplify data collection?
Adam: Good. Looking forward to it.
Russel: All right. Thanks, Adam.
Adam: Thanks, Russel.
Russel: I hope you enjoyed this week’s episode of the Pipeliners Podcast and our conversation with Adam Hill. Just a reminder, before you go, you should register to win our customized Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win to enter yourself in the drawing.
If you’d like to support the podcast, the best thing you can do is leave us a review on whatever podcast app you use to listen. You can find instructions at pipelinepodcastnetwork.com.
[background music]
Russel: If you have ideas, questions, or topics you’d be interested in, please let us know on the Contact Us page at pipelinepodcastnetwork.com or reach out to me on LinkedIn.
Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords