As you might expect, natural gas measurement has a big data problem. On average, today's IoT-equipped smart meter stations generate about 100 data points per meter per minute to process in real time. Not all of these data points are good data. The first step is to clean the data and remove the "noise." Who is in charge of that, and how is it done? Needless to say, measurement data management is only the starting place for the real, high-value work.
A typical pipeline company employs about one staff measurement engineer to cover roughly 60 smart meters, 25 flow computers, 120 pressure and temperature sensors, and 20 gas chromatographs, or about 225 devices in total. About half of that engineer's time is dedicated to reporting, system balance analysis, ticket management, and other administrative duties. That leaves 20 hours per week for analyzing the 37,800 hourly data points collected across those devices. Typically, about 10% of those points are simply bad data that have to be cleared out first, a chore estimated to consume as much as 70% of the available time, but let's ignore that for now.
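To make the scale concrete, here is a quick back-of-the-envelope sketch of that weekly workload; the device counts and the 20-hour analysis budget come from the paragraph above, and the rest is simple arithmetic:

```python
# Weekly workload for one measurement engineer, using the counts quoted above.
devices = 60 + 25 + 120 + 20               # meters + flow computers + P/T sensors + GCs = 225

hourly_points_per_week = devices * 24 * 7  # one archived value per device per hour = 37,800

analysis_hours = 20                        # half of a 40-hour week left for analysis
minutes_per_device = analysis_hours * 60 / devices  # ~5.3 minutes per device per week

print(devices, hourly_points_per_week, round(minutes_per_device, 1))
```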
With only about five minutes per device to dedicate to the entire task, the analyst will probably have to be content with spot checks of the data to confirm that everything is currently OK, reserving time for deeper analysis of the flags that show up in the real-time dashboard. In this way, the first 8 hours of available time can be used for a roughly two-minute-per-device sweep to determine which devices will require further analysis in the 12 hours remaining that week. As a practical matter, most of the 1,000 expected red flags in the database will be skipped so the analyst can focus on the roughly 336 that are picked up during the sweep period.
An experienced analyst will judge some red flags more important than others and use the time more effectively by zooming in on those "favorite" areas. The workload becomes more manageable, at the expense of missing some small and emerging errors. By spending the next hour identifying the most important 20%, the workload is reduced to the 67 red flags assumed to be most important, leaving 11 hours, or about ten minutes to spend on each one. If that is not enough time, only a fraction of the issues will be reviewed thoroughly that week.
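The triage funnel described above can be summarized the same way; the 1,000 weekly red flags, the 8-hour/12-hour split, and the 336 flags picked up in the sweep are the figures quoted in the text, and everything else follows from them:

```python
# Triage funnel for one week, using the figures quoted above.
weekly_red_flags = 1_000                    # expected red flags in the database
sweep_hours, follow_up_hours = 8, 12        # how the 20 analysis hours are split

flags_from_sweep = 336                      # flags picked up during the sweep period
prioritized = round(flags_from_sweep * 0.20)             # ~67 judged most important

minutes_each = (follow_up_hours - 1) * 60 / prioritized  # 1 hour spent prioritizing -> ~10 min
never_reviewed = weekly_red_flags - prioritized          # 933 flags get no deep review

print(prioritized, round(minutes_each), never_reviewed)
```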
The hidden cost of this process is impossible to know without a glimpse of the alternate reality in which there is enough time to review all red flags every week. How many of the flags that were screened out were indicative of actual or emerging measurement errors that would show up eventually in the screening process or system balance, but only after an error had persisted for a while? And how many errors never showed up as a red flag at all? We do know that while tremendous investment has been made over the last decade in information systems, data management and measurement accounting software, and analysis tools to increase data accessibility and improve staff efficiency, the industry's average measurement error, its lost and unaccounted-for gas (LAUF), has stubbornly remained at 0.4% of throughput and has shown no sign of improvement.
The combination of design measurement uncertainties for these types of stations suggests that the expected measurement error (LAUF) should be on the order of 0.02% for any reasonably sized system, which hints that there is unmined opportunity for improvement here. The value of that discrepancy for an average-sized midstream transmission company moving 2.7 BCF/day is over $12M/yr at the average gas price since 2017, so it seems like a problem worth investigating. There are only two possibilities: either we have optimistically calculated the uncertainties associated with our fancy and expensive measurement stations, or we are not getting the most out of our status quo methods of building dashboards and conducting manual analyses at our current staffing levels and abilities.
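A rough check on that dollar figure is shown below; the 2.7 BCF/day throughput and the 0.4% vs. 0.02% LAUF values come from the text, while the roughly $3.2/MMBtu average gas price is an illustrative assumption:

```python
# Rough annual value of the LAUF gap for an average-sized midstream company.
throughput_bcf_per_day = 2.7
lauf_actual, lauf_design = 0.004, 0.0002    # 0.4% observed vs. ~0.02% expected by design

gap_bcf_per_year = throughput_bcf_per_day * 365 * (lauf_actual - lauf_design)  # ~3.7 BCF/yr
gap_mmbtu = gap_bcf_per_year * 1_000_000    # ~1 MMBtu per MCF, so ~3.7 million MMBtu

assumed_price = 3.2                         # assumed average $/MMBtu since 2017 (illustrative)
print(f"${gap_mmbtu * assumed_price / 1e6:.0f}M per year")  # roughly $12M/yr
```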
But how can we know which of these is true? We could throw manpower at the problem and increase the staffing of experienced, highly trained professional measurement engineers, assuming we could find them, to work with our dashboards and data with fewer pre-screening shortcuts and see whether we could move the bottom-line measurement error. But adding $1.2 million to the annual overhead of our average-sized midstream transmission company just to find out the answer is not going to be met with enthusiasm from management, even though it could potentially return over 1000% on the investment! Why? Skepticism! That is a tremendous commitment to recurring overhead costs that require training and benefits and are hard to unwind if the hoped-for return does not materialize. And we know from the investment already made in data management software, improved dashboards, and tools that a solution is desired, even if it hasn't panned out yet.
But there may be another way! An automated analytics engine that could analyze all 38,000 data points per week, work through all 1,000 red flags, detect and quantify errors as small as 0.05%, and provide early warning of emerging errors would supply the information necessary to determine whether improvement is possible. If that analysis engine were made available as a service, it could be tried out without much investment or any change to staffing levels. Further, if that service had a 10-year track record showing that LAUF could be reduced to 0.02% to 0.05%, it would prove that the uncertainty estimates for the stations are correct, and that the problem is simply one of bandwidth for intelligently processing huge quantities of data.
Fortunately, this automated service solution exists today, has a ten-year track record, and is available from C-SMART Analytics through free trial periods or discounted introductory offers. After the trial period, lower measurement error can be maintained year in and year out at a small fraction of the cost of increased staffing levels.
How do we know it works? Because in addition to reduced LAUF, other KPIs demonstrate that more errors are detected, and once detected, they are fixed 40% faster for C-SMART users.