This week’s Pipeliners Podcast episode features the return of Dan Nagala to discuss what the new world of data acquisition in SCADA looks like in pipeline operations with host Russel Treat.
In this episode, you will learn about where data communications in SCADA is headed for pipeline operators, the history of data communications, and Dan’s technology predictions for the next five years.
Data Acquisition in SCADA: Show Notes, Links, and Insider Terms
- Dan Nagala is the President and CEO of UTSI International Corporation. Connect with him on LinkedIn.
- Listen to Mr. Nagala’s previous podcast appearance in Episode #16 – Cybersecurity Threats & Awareness.
- SCADA (Supervisory Control and Data Acquisition) is a system of software and technology that allows pipeliners to control processes locally or at remote locations.
- Fault Tolerance is the ability of a system, such as a SCADA system, to continue operating properly when one or more of its components fail, for example a failed RTU (remote terminal unit) or communications link.
- Control Room Management (CRM) is the process of safely managing controllers, control rooms, and SCADA systems used to remotely monitor and control pipeline operations.
- PI is a real-time data historian application with a highly efficient time-series database, developed by OSIsoft.
- Microsoft SQL Server is a relational database management system developed by Microsoft. As a database server, it is a software product with the primary function of storing and retrieving data as requested by other software applications—which may run either on the same computer or on another computer across a network.
- MQTT (Message Queuing Telemetry Transport) is a publish-subscribe protocol that allows data to move quickly and securely through the system and does not bog down the system with unnecessary requests.
- Sparkplug is a specification for MQTT enabled devices and applications to send and receive messages in a stateful way.
- IoT (Internet of Things) is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction.
- IIoT (Industrial Internet of Things) is the use of connected devices for industrial purposes, such as communication between network devices in the field and a pipeline system.
- IoTT (Internet of Trusted Things) is where all physical and virtual “things” — humans, machines, businesses, DApps — can securely exchange data and value at global scale.
- DDS (Data Distribution Service) for real-time systems is an Object Management Group machine-to-machine standard that aims to enable dependable, high-performance, interoperable, real-time, scalable data exchanges using a publish–subscribe pattern.
- HTTP (HyperText Transfer Protocol) is the underlying protocol used by the World Wide Web and this protocol defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands.
- HTTPS (Hypertext Transfer Protocol Secure) is an extension of the Hypertext Transfer Protocol (HTTP). It is used for secure communication over a computer network, and is widely used on the Internet.
- M2M technology connects sensors, devices, and appliances together through a cellular or wired network. Comparatively, IoT systems rely on IP-based networks to send data collected from IoT-connected devices to gateways, the cloud, or middleware platforms.
- Moxa Technologies is a Taiwanese technology company specializing in edge connectivity, industrial computing, and network infrastructure solutions.
- HMI (Human Machine Interface) is the user interface that connects an operator to the controller in pipeline operations.
- Abnormal Operating Condition (AOC) is a condition identified by the operator that may indicate a malfunction of a component or deviation from normal operations that may (a) indicate a condition exceeding design limits or (b) result in a hazard(s) to persons, property, or the environment.
Data Acquisition in SCADA: Full Episode Transcript
Russel Treat: Welcome to the Pipeliners Podcast, episode 112, sponsored by iPIPE, an industry-led consortium advancing leak detection and leak prevention technologies to eliminate spills as pipeliners move to zero incidents. To learn more about iPIPE or to become an iPIPE partner, please visit ipipepartnership.com.
Announcer: The Pipeliners Podcast, where professionals, Bubba geeks, and industry insiders share their knowledge and experience about technology, projects, and pipeline operations. Now your host, Russel Treat.
Russel: Thanks for listening to the Pipeliners Podcast. I appreciate you taking the time. To show that appreciation, we are giving away a customized YETI tumbler to one listener each episode. This week, our winner is Rafael Ruiz, with Holly Energy Partners. Congratulations, your YETI is on its way. To learn how you can win this signature prize pack, stick around to the end of the episode.
This week, we have Dan Nagala. He’s the principal with UTSI, and a very knowledgeable SCADA and data communications nerd, and I say that with affection, because he’s my kind of people. He’s coming to join us to talk about what’s going on with communications, and where that’s headed in the SCADA world. With that, let’s welcome Dan Nagala.
Dan Nagala, welcome back to the Pipeliners Podcast.
Dan Nagala: Thank you, Russel. It’s a pleasure to be here.
Russel: Do you remember when the last time you did an episode was?
Dan: It’s been quite some time, many episodes back, I believe.
Russel: You were actually on episode 16. The listeners may not know this, but when I was starting this whole thing up, I reached out to a group of friends who I thought would be very helpful in helping me build some content and interesting conversations. Dan was one of the early guys that said, “Yeah, sure, I’ll do that.”
Here it is, gosh, probably close to two years later, and Dan’s coming back to talk to us again. Dan, thanks for being an early guest, and thanks for coming back.
Dan: All right, I’m happy to be back, and hopefully we’ll have some more interesting things to say today.
Russel: I asked you on to talk about the nature of what’s going on with data in pipeline operations. Maybe to build the context a little bit, I’ll just say that, historically, all of the operations data has come through SCADA.
That’s beginning to change, and that’s what I asked you on to talk about. Let me ask you, in your experience, what’s going on with data and the control center?
Dan: I’ve been doing this for over 40 years now, so I’ve seen a lot of changes over that time.
Traditionally, the control center in a typical pipeline company had all of the infrastructure in the field, in communications, and also through their centralized control centers to bring in information that was not only necessary for monitoring and control of the pipeline, but also for certain types of ancillary applications and purposes that were deemed valuable by maybe other groups within the control center, or more importantly, groups outside of the control center.
One of the ways we’ve dealt with this over the years is somebody identifies information out in the field. For example, years ago, we did a job where a pipeline company had a lot of quality control analyzers, gas chromatographs, and whatnot. It was a chemical company, and that information was very interesting to their marketing group for advertising how consistent they were for quality and helping them to sell more customers on using their chemicals, based on their history of transport.
We developed a set of applications outside of the control center that extracted this information, which was acquired in the SCADA system, passed it over to databases, and then enabled these users in the marketing group to build reports, graphs, do analysis, and other stuff.
That was just the beginning. I think we started doing that, that project probably happened in the ’90s. Over the years, we’ve seen more and more needs like that. Pipeline companies, their management, and ancillary business groups have found it very, very interesting to have access to that information.
We’ve tried to solve it in a number of ways. Probably one of the most common ways right now is to bring data up from the field, still through the SCADA system, and pop it into some kind of historical database like PI, or a Microsoft SQL Server database, or something like that.
Russel: I would say that’s the state of the art, right?
Russel: You bring the data up from the SCADA system, you throw it into a historian, and then people that want the data connect up to the historian and grab it for whatever purposes.
Dan: Exactly, and that’s the traditional approach. This is what the majority of companies are doing, and it’s worked. It’s worked well, and it’s still working well, but the problem is, now we’re getting a lot more intelligent devices in the field.
They’re generating a lot more interesting information for business and planning purposes, but it’s not necessarily needed for the pipeline control center. They don’t need it to monitor and operate the asset in a safe and efficient manner. It’s more for other purposes.
Russel: Yeah, I think one of the biggest examples of that is machinery analysis that I’m using to feed into things that are doing predictive maintenance, if you will.
They’re using the data to run to an algorithm, and looking at machinery and saying, “Well, we need to get out to this compressor and do an overhaul,” or, “We need to get to this compressor, and it just needs an oil change.” That kind of thing.
We’ve been doing that for a long time, but the nature of that kind of data and what’s going on with the buzzwords du jour, if you will, the edge, and machine analytics, and predictive analytics, and all of that, we’re just generating a lot more data.
Dan: Yeah, we are. Actually, your example of rotating equipment is the common sort of example we use a lot, because every pipeline company has rotating equipment. That rotating equipment needs routine maintenance, and it needs to be monitored for degrading performance and whatnot.
Other things are things like the quality analyzers I mentioned, power information. A lot of companies have put sophisticated power metering out at their locations, and there’s a vast amount of information in those power meters that can help them understand what their consumption profiles are, and maybe help them optimize their operations as well.
Power cost is one of the biggest expenses a pipeline company has, for the most part.
Russel: I think one of the things that people don’t understand is, these machinery analytics kinds of tools where they were getting the data up to the host and then handing it off to the maintenance and reliability engineers, they were often looking at hourly averages or maybe one-minute data.
Now, they’re starting to look at one-second data or even millisecond data to do even more advanced types of analytics and analysis. We’re seeing this, it’s a step function increase in the amount of data. We’re not like doubling the data or adding 20 percent. We’re adding 10 times, 100 times the data quantities.
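The arithmetic behind that step function is easy to sketch. A hypothetical back-of-envelope calculation for a single tag (the intervals here are illustrative, not from any particular pipeline system):

```python
# Back-of-envelope: samples one tag produces per day at different
# polling intervals. Illustrative only; real systems multiply this
# by thousands of tags and by compression/exception settings.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def samples_per_day(interval_seconds: float) -> int:
    """Number of samples one tag produces in a day at a fixed interval."""
    return int(SECONDS_PER_DAY / interval_seconds)

hourly = samples_per_day(3600)  # hourly averages  -> 24 samples/day
minute = samples_per_day(60)    # one-minute data  -> 1,440 samples/day
second = samples_per_day(1)     # one-second data  -> 86,400 samples/day

print(f"hourly: {hourly}, one-minute: {minute}, one-second: {second}")
print(f"one-second vs hourly: {second // hourly}x more data")
```

Moving from hourly averages to one-second samples is a 3,600-fold increase per tag, which is why Russel’s “10 times, 100 times” framing is, if anything, conservative.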
Dan: There’s also the analysis of real-time data, time-series data in particular, and moving that data up to cloud services where you can apply advanced analytics and AI algorithms for more precise predictions, analysis of bottlenecks, detection of anomalies, and whatnot, to help you streamline and potentially operate more safely and efficiently.
All of these things are fueling the need and the demand within companies for more data, and consequently, SCADA systems are becoming substantially overloaded with a lot of data that isn’t necessary.
Russel: That’s actually right where I was going to go next. What’s the implication of this, when you think about the classic model of collecting the data, bringing it up through the SCADA system, and putting it into a historian?
Dan: The classic model is, you just bring it up, validate the communications, and propagate it up to a historical system. The problem we have right now is CRM, control room management, the rules and the regulations, and the enforcement of those regulations that go along with that.
As you well know, because of the work your company does is, with CRM, anything you change in the field or change in the SCADA system requires a change management process. It requires planning, documentation, testing, validation. All that recordkeeping requires time, money, and effort.
If these are outside groups asking for information from the field that the control center doesn’t particularly need, and that is not subject to any regulatory compliance, all of that work and effort to keep your SCADA environment compliant with the regulations just inflates the cost and time required to provide this information to your external groups.
Russel: When you start talking about these things, and for people that are not knowledgeable about the technology we’re talking about, I like to try and come up with analogies.
The analogy here would be, if you think about the data necessary to run a control room as thoroughbred racehorses, and you think about all the other data as wild horses that you want to know about, you don’t want to keep those horses in the same pasture because it makes everything harder.
That’s what we’re talking about here, that the simple fact that you’re merging together this, I’ll call it unregulated data with regulated data, it just adds all kinds of complexities and complications where, if you’re able to segregate that stuff, it gets easier.
Dan: Exactly. It’s like putting your race cars and your family station wagon on the same race track and trying to do a race around the track with them.
Dan: They don’t work together or at the same speed, and we have a problem.
Russel: Exactly. Good analogy. If the classic way of bringing the data to the control center and to the rest of the company is through the SCADA system, how would I do it a different way?
Dan: One of the things we’ve talked to a lot of companies about is a concept of having a parallel SCADA system or a non-critical data SCADA system, which would allow you to do all the same things you do with SCADA, but utilize different communications technologies that aren’t necessarily as robust or as…
Russel: Fault tolerant.
Dan: Yeah, fault tolerant, that’s a good term, as those that would be used in controlling a pipeline asset, and also using data centers that maybe don’t require secondary and tertiary backups. They don’t have to run at “five nines” of availability forever, but still give you the ability to have access to that data and bring it up and get it into the systems.
It’s basically a SCADA system, but it’s probably using newer technologies.
Russel: It’s not really SCADA. It’s just DA.
Dan: Yeah, it’s DA. That’s good. I like that. It’s just DA.
Russel: For those nerds out there that understand it and got that joke, that’s great. For those of you that didn’t, here’s what that’s about. SCADA is an acronym that stands for supervisory control and data acquisition. The pipeline aspect of that, from an operations perspective, the critical bit is the supervisory control.
What we’re talking about is taking the data acquisition and breaking it out and making it stand-alone, so it’s just simply DA. [laughs]
Dan: Yes, just DA, data acquisition.
Russel: Just, duh. [laughs]
Dan: What we’re talking about is bringing this information up. Actually, with some of the new technologies that are out there, with very little work in the field, you might be able to take advantage of technologies that are being widely used in the IoT and IIoT marketplace right now.
Russel: We ought to talk about what those technologies are, and why they make a difference.
Dan: Yeah, publish and subscribe technologies, and probably the one that people may have heard of the most is MQTT. MQTT was developed in the late ’90s, early 2000s by a couple of guys, one from a company up in Kansas City, a guy named Arlen Nipper, who some of you may know, working in a consortium with IBM.
They developed MQTT as a brokered messaging technology, publish and subscribe messaging technology. A couple of pipeline companies were very early adopters on this, and they’ve embraced it and used it for various things, including their traditional SCADA data.
Russel: Let’s talk a little bit about publish and subscribe. Quick lesson in data communications. Historically, all data communications has been poll/response, which meant there was a master sitting at the host, or at the central office.
That piece of software would say, “Device number one, I need X, Y, Z.” It would wait until device number one responded, and then it would say, “Device number two, I need X, Y, Z,” and it would wait until the device two responded.
Much of data communications is built that way. That’s called poll/response, request/answer. Publish and subscribe is a little different. It’s more like email. When I put an email out, if the person I’m sending to is disconnected from the Internet, they’re not going to get that email, but when they connect up, they’ll go to a broker, the server, and grab their email.
That’s more pub/sub and that provides a lot more flexibility, and it’s a much more optimal way to use the available bandwidth. That’s what pub/sub is.
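For readers who want to see the pattern in code, here is a toy in-process sketch of publish/subscribe. This is not MQTT itself, just an illustration of the idea; the Broker class, topic name, and subscribers are invented for this example.

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Toy in-process broker illustrating publish/subscribe.
    A real broker (e.g., MQTT) adds networking, QoS levels,
    retained messages, and session state on top of this idea."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, payload: object) -> None:
        # One publish fans out to every subscriber of the topic;
        # the publisher never polls or waits on individual consumers.
        for callback in self._subscribers[topic]:
            callback(payload)

broker = Broker()
received = []
# A historian and an analytics service each subscribe once...
broker.subscribe("field/station1/pressure", lambda p: received.append(("historian", p)))
broker.subscribe("field/station1/pressure", lambda p: received.append(("analytics", p)))
# ...and a single publish from the field reaches both.
broker.publish("field/station1/pressure", 612.4)
print(received)  # [('historian', 612.4), ('analytics', 612.4)]
```

Contrast this with poll/response, where the master would have to ask each device in turn and then forward the answer to each consumer itself.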
MQTT, in my understanding, and Dan, you might correct this because I’m still learning about MQTT, but really, it is a pub/sub, but it’s kind of a thin protocol because all it’s doing is moving data.
Dan: Yeah, MQTT is the publish and subscribe transport mechanism that other things are being built on top of. Most notably, there’s a technology called Sparkplug that’s, not to be too punny, but it’s catching fire in the IoT world.
Dan: Sparkplug puts a wrapper around MQTT and enables you to configure everything very simply, in a very straightforward manner, identify what all your critical components are and all that sort of stuff, and then it goes into your broker or server and allows that data to be acquired from the field through whatever mechanism you connect to the end device with, and then propagated up in a one to many sort of architecture…
Russel: This is what I call distributed communications.
Dan: Right. You have one access to the end device, but the data itself through the broker can be subscribed once and published many times to lots of different users or different places. A piece of data could go to a historical archive. It could go to SCADA, if SCADA is interested in it. It could go to a cloud server. It can go to any number of places.
Russel: It could go to a third party that’s doing machinery analysis as a service.
Dan: Exactly. There are a lot of these. There’s some technologies called distributed pub/subscribe. There’s data distribution service, DDS. HTTP is another one that’s used quite a bit. HTTP and HTTPS are used in place of something like MQTT in some implementations.
It’s probably the second most used transport mechanism for IoT and IoTT out there, but the point is that these protocols are standardized. It’s easier to plug and play with them, and you get that one to many, publish/subscribe sort of concept.
Also, they don’t overlap with your SCADA and take away bandwidth unless, for some technical reason, you’re using the same comm path out to your end devices. They also don’t consume point licenses or raise the CRM-related regulatory issues that you might encounter in your SCADA network.
Russel: Exactly. To me, this is where we’re headed. For people that are knowledgeable about SCADA, the thing that if you’re having problems with your SCADA system, it’s almost always related to the communications network.
Further, it’s almost always related to the last mile, and particularly the legacy communications network, because even things like modern IP radios, they still, in their guts, operate like an old poll/response network.
Russel: While they can keep a queue of transactions, the actual transactions going out to the radios are often still poll/response. These technologies basically take that communications complexity and move it all the way down to where that legacy kind of communications connects into the modern IP network infrastructure.
Russel: I was going to ask you, long comment, but I wanted to say if you agreed with that.
Dan: Yeah, exactly. Now, you can start buying embedded devices for the field that actually incorporate MQTT, or Sparkplug, or some of these other technologies that are similar. If you have IP to your end location, you can put in an embedded server.
I know Moxa and some of the other common industrial devices have MQTT already incorporated into them, so you can start doing pub/subscribe directly from your field locations, over an existing IP network by just changing out a box and doing some configuration. This is only going to increase. It’s only going to get better.
Russel: Yeah, it’s going to accelerate. Better is…That’s a relative concept, man.
Dan: [laughs] Sure.
Russel: Here’s what I think is going to happen. I’d be very interested to hear your opinion on this because historically, the SCADA group has been in the operations group. Historically, the way that companies have implemented SCADA is, it’s been in a dedicated group, typically reporting to an operational domain, because that data is so critical to how they operate the asset.
I think what’s going to happen with these new technologies is, we’re going to move…Let me give you a little bit more context.
You and I both have been doing this for a long time. Both of us, when we started in SCADA, SCADA had its own computer hardware, its own telecommunications, its own team to support all that stuff, and it was completely isolated from anything to do with the business network.
Dan: Yes, that’s true.
Russel: Over time, the informational tech, the IT guys in the company, they’ve taken over the network, and we started running SCADA over IP. Then, they started to take over the servers.
A lot of the servers are being moved to the cloud, or they’re being moved to infrastructure-as-a-service offerings. They’re being run on virtual machines, so now all the physical hardware infrastructure is typically managed by the IT group.
Dan: Yes, that’s true, too.
Russel: Now that we’re starting to talk about MQTT, and HTTP, and all these other things, these are more IT-centric technologies. They’re not operations specific technologies. Consequently, I think that IT is going to be looking at this and trying to move as much as they can, as quickly as they can to these new types of communications mechanisms.
Dan: I think they will. I did a project a few years ago for a foreign gas operator. What we did is, we designed a methodology and a mechanism for their measurement people.
It was a gas distribution company, so they were getting all their measurement data hourly over an HTTP protocol that they developed for their own purposes, but basically, that HTTP protocol interfaced with traditional electronic flow measurement technologies that were already deployed in the field.
They were doing that as an IT function, not as a SCADA function. It was completely external to the control center.
Russel: I think that’s the future.
Dan: I see that as well. I see the control center ultimately being a user of information that’s propagated from the field. We’re already seeing a little bit of movement that way in a few pipeline companies, but more in other industries that are not so regulated and so slow to change, because they don’t have the massive amounts of infrastructure they have to worry about.
Russel: There’s a couple of aspects there. One is, we have a lot of infrastructure. The other is, the effort to get around and touch all that infrastructure is so high…
Dan: That’s true.
Russel: …because it’s spread out geographically to such a great degree. There’s another factor here, and that is that change is risk, and risk is bad, so you’ve got to be very deliberate about making these kind of changes to those systems that support the critical 24/7 operations.
I think that’s going to cause this stuff to happen slow, but as a company begins to figure this out, there’s also some opportunities for huge cost savings and the opportunity for doing communications at the edge, and then just making the data available to whoever subscribes to it from the edge is pretty compelling.
Dan: Of course. We’re starting to see a lot more interest in it. We first started talking about this back in the early 2000s, and I actually did a paper on it in 2007; we called it M2M technology back then.
We didn’t get a lot of feedback from it, but as time has gone on, it’s becoming more and more of interest to operating companies because their departments outside of the control centers have realized that it’s possible to get vast amounts of data from the field.
That data has value, and they can put it to good use, and perhaps provide lots of benefit to the operating company outside of just the traditional use of the same data.
Russel: There’s big needs to get to this data for purposes like leak detection, leak prevention, operations effectiveness, cost reduction, all these kinds of things that are really critical for a pipeline being viable into the future.
Dan: Yeah, integrity management.
Russel: The only thing that’s interesting about the business model for pipelines is that they have these huge capital investments, really immense capital investments, and then they have their operating cost.
Their operating cost, often in comparison to their capital investments, are really pretty small, but a little bit of change in the operating cost can make a big difference in the overall profitability of the business.
Dan: This is true, and that run-and-maintain cost has traditionally been something companies have tried to minimize. [laughs] I think it’s starting to get a lot more visibility now.
Russel: I think that’s right. Again, it’s an analogy but the nature of what we’ve been able to do to optimize cost in rotating equipment, for example, has been kind of a blunt instrument.
Now, we’re going to get to a point with these new technologies, in the amount of data that we can get and the timeliness of that data, that instrument is going to become more surgical. Which means there’s a whole bunch of understanding, and infrastructure and such that has to go around all of that, but there’s huge opportunity in all this stuff.
Dan: Yes, I agree.
Russel: If you were a gambling man, what would you be betting on in the next five years in terms of these technologies?
Dan: First of all, I’d be betting on a substantially increased use of cloud and AI technologies to help understand and optimize operations, and potentially, if we adapt fast enough, to help improve the safety and reliability of the operating assets.
I think by having the ability to move information that’s not control and operations-oriented, but is ancillary to all of that, that can be fed back into the control system through methods that are flexible and can be adapted to very quickly, I think we’re going to feed back information that helps all parts of the operating company.
Russel: Yeah, I think there’s going to be big changes in the control room in terms of what’s available to them from a diagnostic standpoint.
Dan: Lower operational and maintenance costs, quicker turnaround on issues. Things like that.
Russel: If they get some kind of abnormal operating condition, the ability to go and mine through a rich data set and analyze it is going to be very different 5 or 10 years from now, than where we currently are.
Russel: One of the challenges of that is going to be, if I’m using that to diagnose an AOC, does all that fall under all the regulatory requirements?
Dan: I don’t know. Prescriptive and root cause analysis probably will, if it doesn’t already, so it’s quite possible.
Russel: Beyond just the technology, there’s some other things to work through there. If you were advising a small operator, or you’re advising somebody who works in this domain, what would you tell them to be doing to get educated and get prepped for these changes you see coming?
Dan: If we had especially a new operator, and believe it or not, there still are companies putting in brand-new control systems that have never had them before, they should be looking at modern technologies. I don’t think they should be following the herd into the traditional approaches with big, monolithic, isolated control center concepts.
I think they need to be thinking about more modern techniques that use up to date comm technologies. I’m really a big supporter of pub/sub technologies for communications. I’m a big supporter of more distributed processing, although you have to design that properly to make sure you don’t run into trouble with it.
These are ways that you can minimize the cost — or at least manage the cost — better than we have in the past, not only for the infrastructure but also for the software and maintenance and management of the control systems that support the operating end of the business.
Russel: I absolutely agree with that, Dan. I think that one of the things about these technologies is that the level of engineering that’s going to be required — maybe required is too strong a word — but engineering is going to be a bigger factor in the value that this can create.
Dan: But really a lot of the controls in the comm — the basis for that’s already out there. People have to start seeing it and adapting to it, and that’s slow. That’s always been slow in our industry.
Russel: Yeah, and for good reason, it’s slow. The thing I’m driving at is that, there’s a lot of people out there that are very knowledgeable in the way we’re currently doing things. The effort to make a change to a new way of doing those same things is going to need to be planned and managed. That’s going to take time.
Dan: Yes. One thing that happens, and you and I have seen it happen, and it’s going to happen again as we get to that stage as well: the old-timers, we’re going to maybe have some ideas about things that we’ve developed over 40 years or more of our careers. [laughs]
As we start to move out of the picture, younger guys coming out of school, who’ve been exposed to some of these things as emerging technologies, are going to start to think, “Hey, what if we do this?”
Dan: When I entered the industry, one of the assignments I was asked to consider was, could we do a full-color HMI for a SCADA system? I said, sure we could. We could do graphics too. They went, “Graphics? What?”
Dan: In the ’70s, graphics were just starting, and they actually didn’t work very well for controls. It was a concept that the old-timers weren’t thinking about, but the new class of people coming in were starting to think about things because they’d been exposed to them in the educational part of their lives.
Russel: To some degree, they don’t know any different.
Dan: Yeah, and that’s going to happen again. It’s already happening, actually.
Russel: I was out at A&M yesterday at the Engineering Career Fair because we’re looking for some new folks to add to the company. I had some conversations with some of those students about what I was doing when I started, and you can just see them going, “You were doing what, with what?”
Russel: When I registered for my classes at A&M, we did that on paper. [laughs]
Dan: That’s a case in point. Technology changes, and I think with generations, we’ll see that generational change in the control system area as well.
Russel: Absolutely, and you know what? That’s what keeps it fun. It’s what keeps it interesting. I love working with these new technologies and figuring out how to make them add value.
Russel: Dan, thanks so much for coming back and having this conversation. We need to not wait two years before we do it again.
Dan: [laughs] Okay, we’ll try to be better next time.
Russel: All right. Thank you, sir.
Dan: All right. Thank you very much, Russel. Take care.
Russel: I hope you enjoyed this week’s episode of the Pipeliners Podcast, and our conversation with Dan Nagala.
Just a reminder before you go, you should register to win our Pipeliners Podcast YETI tumbler. Simply visit pipelinepodcastnetwork.com/win to enter yourself in the drawing.
If you would like to support the podcast, the best way to do that is to leave us a review, and you can do that on Apple Podcast, Google Play, or many other applications that deliver podcasts. You can find instructions at pipelinepodcastnetwork.com.
Thanks for listening. I’ll talk to you next week.
Transcription by CastingWords