Three Data Strategy Oversights – Part Two

by Ziko Rajabali

April 4, 2017

“We’ll cross that bridge when we get there” only works if that bridge already exists. When the bridge in question is a strategic decision, it is a bridge built of data. Last week, we talked about unconventional sources of data, like emails. In a way, it was like unearthing a new raw material for the bridge. But what about refining the raw materials we already know? How do we maximize the value of data that comes from conventional sources?

In part 1 of this series, we mentioned how sensor measurements are often overlooked in data strategy. Sensors are frequently part of a closed-loop automated system, where the hot data is immediately consumed by the embedded system as input to an algorithm that keeps things running at optimal levels. Once the measurement has been taken and applied by the control system, that data point is often discarded.
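
To make that loop concrete, here is a minimal sketch of a single control iteration, written in Python with hypothetical function names. The point is the lifetime of the reading: it is consumed by a simple proportional correction and then goes out of scope.

```python
def control_step(read_sensor, actuate, setpoint, gain=0.1):
    """One iteration of a simple closed-loop controller (illustrative only)."""
    reading = read_sensor()    # hot data: consumed immediately...
    error = setpoint - reading
    actuate(gain * error)      # ...to keep the system at its optimal running level
    # `reading` goes out of scope here; the data point is effectively discarded
```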

For example, the readings from a pressure sensor inside a chamber might be used by the injection nozzle to control the flow of gas into the chamber and keep the pressure within a specific threshold. This measurement and the resulting control operation happen in real time, usually measured in milliseconds. But instead of discarding the data point, the system could pass it on to some form of local storage, such as a log file or a lightweight database. On a regular schedule, the accumulated data can be transmitted to an enterprise storage system like a cloud drive. This data is now cold – each individual data point is useless to the system that was using the hot data, but as an aggregation the data can be very helpful for observing patterns in the long-term maintenance of the system and for appropriately weighting the value of that one component within the overarching system.
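
As a rough sketch of what “passing it on to storage” could look like, assuming a Python environment on or near the device: the log path, field names and the `upload` callable below are all invented placeholders for whatever logging and cloud client your stack actually provides.

```python
import json
import time
from pathlib import Path

LOG_PATH = Path("/var/log/chamber/pressure.jsonl")   # hypothetical local log file

def record_reading(pressure_kpa: float) -> None:
    """Instead of discarding the hot data point, append it to local storage."""
    entry = {"ts": time.time(), "pressure_kpa": pressure_kpa}
    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def ship_log(upload) -> None:
    """Run on a schedule (e.g. nightly): push the accumulated readings to
    enterprise storage, then truncate the local file. `upload` stands in for
    whatever cloud-drive or object-store client is available."""
    if LOG_PATH.exists() and LOG_PATH.stat().st_size > 0:
        upload(LOG_PATH.read_bytes(), dest=f"chamber/pressure-{int(time.time())}.jsonl")
        LOG_PATH.write_text("")   # the data is now cold upstream; start a fresh log
```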

It’s plausible that, in this example, the container is itself a $200 item and a failure is engineered to have minimal impact on the rest of the system. In that case, an effective way to manage the container might be to let it fail, replace it, and carry on. However, in this hypothetical scenario the technicians replacing the container always happen to bump into a pipe that feeds a $20,000 component. This loosens the pipe, and every third $200 container failure leads to a $20,000 failure. Without knowing what the technician is doing, an engineer analyzing the $20,000 failure would conclude that the pipe fitting needs to be rejigged. If the cold data is analyzed for each failure, however, a trend might emerge where every three $200 replacements are followed by a whopping $20,000 cost, prompting a change in the maintenance strategy for the $200 container. This example is simplistic, with only one degree of separation; in reality, the causal chain of events is rarely so straightforward. It’s also important to remember that not all correlations are causal, but the data does offer objective starting points for a root cause analysis.
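
As a sketch of what that cold-data analysis might look like, assuming the maintenance events have been exported to a flat file with timestamp and cost columns (the file name and column names here are invented for illustration), a few lines of pandas can count how many $200 replacements fall between consecutive $20,000 failures.

```python
import pandas as pd

# Hypothetical cold-data export: one maintenance event per row.
events = pd.read_csv("maintenance_events.csv", parse_dates=["timestamp"])
events = events.sort_values("timestamp")

# Number the intervals between consecutive $20,000 failures (interval 0 covers
# events before the first big failure), then count the $200 container
# replacements that fall inside each interval.
events["interval"] = (events["cost"] >= 20000).cumsum()
replacements_per_interval = (
    events.loc[events["cost"] == 200]
          .groupby("interval")
          .size()
)

# A count that clusters tightly around three is the kind of pattern that points
# the root cause analysis at the maintenance procedure, not the pipe fitting.
print(replacements_per_interval.describe())
```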

Be sure to properly value the raw data generated by sensors, or by their closest equivalent in your company’s operations. Be careful not to discard data without weighing the pros and cons of chilling it for long-term storage. Advances in Big Data, IIoT (Industrial Internet of Things) and cloud technologies have significantly lowered the barriers for data strategies that capture and chill the output of systems generating massive amounts of hot data.

Click here for the conclusion!