Devising a methodology to manage, store, and leverage data is now the principal challenge for manufacturers in the digital age.
It has become clear that all businesses, to varying degrees of urgency, have a need for real-time insights across their critical processes, products, and people — both internal and external. Manufacturing 4.0 (M4.0), which is anchored primarily in interconnectivity, automation, machine learning (ML), big data analytics, and real-time data, provides the capabilities to deliver these insights. Advances in disruptive technologies, such as artificial intelligence (AI), ML, and Industrial IoT (IIoT), coupled with the exponential increase in available data, not only enable the translation of the physical into the virtual world, but also facilitate the link back from the virtual to the physical world for optimized decision-making. Data-driven insights are generated from smart manufacturing technologies by closely coupling physical production and operations with modern digital solutions to create more efficient, well-connected ecosystems for businesses that focus on manufacturing and supply chain management.
With near-ubiquitous connectivity and more devices generating data from both objects and people, it is no surprise that today’s manufacturing enterprises are inundated with data. Depending on how far they have moved into digitalization, many may still be in the process of incorporating data from legacy systems, as well. While faced with the task of managing this rising flood of data, manufacturers are also challenged to direct the flow of this information to support the process of uncovering insights from the data that can benefit their business and deliver value to their operations.
Worldwide, data doubles roughly every two years. From 2013 to 2020, the digital universe grew by a factor of 10, from 4.4 trillion gigabytes to 44 trillion. Recent Manufacturing Leadership Council (MLC) research also suggests that data volumes are growing exponentially, with more than a quarter of manufacturing companies expecting data volumes to surge by more than 500% over the next two years. And while most manufacturers are still learning how best to analyze all this data, 100% of survey respondents believe that data is either essential or supportive for future success.
“Manufacturing leaders recognize that data is power, but they also realize that unharnessed data is of little use.”
The benefits of embracing M4.0 technologies to glean insights from data are already evident. MLC’s recent survey showed that recent increases in manufacturing data have not only provided many companies with noticeable improvements in plant floor performance, particularly in productivity (77%), efficiency (67%), quality (65%), and cost reduction (58%), but have also spurred innovation. Figuring out how to manage, govern, store, protect, and leverage data is now the principal challenge for manufacturers in the digital age.
Readying Data for Analytics
Manufacturing leaders recognize that data is power, but they also realize that unharnessed data is of little use. It is only when it is accessed, cataloged, and organized that it can be put to work generating insights for an organization.
A first step toward reining in this data is to gain an understanding of what data is available. A data catalog is needed to get to the first level of data discovery. This organized inventory will identify an organization’s data assets: what data is available and how it can be accessed.
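A data catalog can start out very simply. The sketch below shows a minimal, hypothetical catalog entry structure: an inventory of what data exists, who owns it, and where it can be accessed. The asset names, owners, and locations are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a data catalog: an organized inventory of data
# assets. All names, owners, and locations here are hypothetical.
catalog = {
    "press_line_sensors": {
        "description": "Per-second vibration and temperature readings",
        "owner": "plant-ops",
        "location": "historian://plant1/press",
        "format": "time-series",
    },
    "erp_orders": {
        "description": "Customer order headers and lines",
        "owner": "it-enterprise",
        "location": "jdbc://erp/orders",
        "format": "relational",
    },
}

# First level of data discovery: what is available, and where?
for name, entry in catalog.items():
    print(name, "->", entry["location"])
```

In practice a commercial or open-source catalog tool would manage this inventory, but the underlying questions it answers are the same: what data exists, and how can it be accessed?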
A second step is getting the data to a location where it can be acted upon. Often this means mass data ingestion: the process of moving vast and diverse amounts of information from multiple locations into a centralized place. This process is often termed ETL, or Extract, Transform, and Load: data is extracted from a source, transformed if required to fit the new data store’s format rules, and then loaded into that new data store. A similar approach varies the order of the steps to Extract, Load, and Transform, or ELT.
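The ETL pattern described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline; the record fields and the Fahrenheit-to-Celsius conversion are hypothetical examples of a "transform" step.

```python
# Minimal ETL sketch: extract raw records from a source, transform
# them to fit the destination schema, then load them into the new
# data store. Field names and conversions are hypothetical.

def extract(source):
    """Extract: pull raw records from a source system."""
    return [dict(record) for record in source]

def transform(records):
    """Transform: rename fields and convert units for the target schema."""
    return [
        {"machine_id": r["id"], "temp_c": (float(r["temp_f"]) - 32) * 5 / 9}
        for r in records
    ]

def load(records, store):
    """Load: append the conformed records to the destination store."""
    store.extend(records)
    return store

warehouse = []
raw = [{"id": "press-01", "temp_f": "212"}]
load(transform(extract(raw)), warehouse)
print(warehouse)  # [{'machine_id': 'press-01', 'temp_c': 100.0}]
```

ELT simply reorders the calls: load the raw records first, then transform them inside the destination store, which is common when the destination (such as a cloud data warehouse) has the compute power to do the transformation itself.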
In recent years, manufacturers have opted to collect vast amounts of data, assuming that if they harvest enough of it, it will eventually give rise to relevant business insights. However, that approach alone will just create additional data silos and introduce other complexities.
STEP ONE: Understand Challenges of Data Availability and Collection
To get a sense of where your organization stands regarding available data and access to that data, consider how you are handling, or will handle, data in these areas:
Multiple Data Sources and Formats
Expanding on our earlier premise, data is not just everywhere; it is coming in from (or must be accessed from) multiple sources, in various locations, at various speeds (frequencies), and in various formats. If we consider just IoT data, much of it from sensors, a host of details must be considered. IoT data is:
- Formatted differently than business data, such as enterprise resource planning (ERP) data or customer demographic or order data;
- Transferred using different connectors than ERP or business data;
- Held in different data stores and historians than traditional SQL or NoSQL data stores;
- Provided in varying frequency intervals, ranging from immediate transmission of video to sensors that transmit information every second, minute, or hour.
The remote sensing method of light detection and ranging (LiDAR) provides a similar set of challenges – further evidence that multiple sources and formats create a situation that is complicated to resolve. But when this information is harnessed with analytics it can enable truly groundbreaking insights.
As an organization proceeds with the discovery and ingestion of data, issues of ownership arise. Who owns each data source? Are there rules in place to prevent unwanted or unauthorized access? How will access be accomplished? To meet data stewardship requirements, manufacturers would be well advised to partner with an organization that has guided solutions in these industries and can bring that institutional knowledge to bear.
The schema (format) of data often does not match the datasets that the business end user needs to work with. Formatting that data such that it works for the end user is key to bringing out value from the information. For example, data regarding revolutions per minute of varied aspects of key equipment, maintenance windows per week, and employee movements throughout the production process may be useful in some contexts. But if the business end user needs only to discern how quickly bulk orders move from start to finish on the shop floor, that excess data is just noise. The full schema is not necessary; rather, it will need to be reformatted to deliver only the required data. When essential data is delivered to the right business user it can inform better decisions and inspire innovation.
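The reformatting described above often amounts to projecting a wide record down to only the fields the business user's dataset requires. The sketch below uses hypothetical field names based on the shop-floor example in the text.

```python
# Sketch: reduce a full equipment record to only the fields a business
# user needs. Field names and values are hypothetical.
full_record = {
    "order_id": "B-1042",
    "spindle_rpm": 4200,
    "maintenance_windows_per_week": 2,
    "operator_moves": 17,
    "start": "2023-05-01T08:00",
    "finish": "2023-05-01T14:30",
}

def project(record, fields):
    """Keep only the fields the end user's dataset requires."""
    return {k: record[k] for k in fields}

# A user tracking order throughput needs only these three fields;
# for that purpose, the rest of the schema is noise.
view = project(full_record, ["order_id", "start", "finish"])
print(view)
```

The same idea scales up: in a real pipeline this projection would be a transform step applied to every record during ingestion, so the end user never sees the excess data at all.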
The typical inflow of IoT data, which is growing by leaps and bounds, can quickly become overwhelming. IoT data may be streamed in real time, every few seconds, or several times a minute or hour. If not handled and processed efficiently, this input can grow to enormous volumes, often unnecessarily. Timestamping allows selection of the frequency of data the business end user wants to see.
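Timestamp-based frequency selection can be illustrated with a simple downsampling routine. This is a sketch under assumed conditions (per-second sensor readings, a hypothetical one-minute target frequency), not a full stream-processing solution.

```python
# Sketch: use timestamps to thin per-second sensor readings down to
# the frequency the business user wants (one reading per minute here).
from datetime import datetime, timedelta

# Three minutes of hypothetical per-second readings.
readings = [
    (datetime(2023, 5, 1, 8, 0, 0) + timedelta(seconds=s), 20.0 + s * 0.01)
    for s in range(180)
]

def downsample(points, interval):
    """Keep the first reading in each interval; drop the rest."""
    kept, next_due = [], None
    for ts, value in points:
        if next_due is None or ts >= next_due:
            kept.append((ts, value))
            next_due = ts + interval
    return kept

per_minute = downsample(readings, timedelta(minutes=1))
print(len(per_minute))  # 3 readings instead of 180
```

Keeping the first reading per interval is only one policy; depending on the use case, an average, minimum, or maximum over each interval may be more informative.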
STEP TWO: Consider Challenges of Getting Data to a Suitable Location
Once the challenges of locating and collecting data have been tackled, the data must often be moved to a location from which it can be acted upon before its benefits can be reaped. Are effective tools in place to bring about this relocation? Good IoT data management tools automatically store important, low-latency data in high-performance (and more expensive) storage, differentiating it from less important or less urgent data that can be stored more cost-efficiently in cheaper, longer-term storage. Tagging must be allowed, and automated if possible, so that business terms and descriptors can be added to sensor data to enrich it further.
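The tiering and tagging described above can be sketched as a simple routing rule at ingestion time. The storage tiers, tag vocabulary, and sensor names below are hypothetical illustrations, not a specific product's behavior.

```python
# Sketch: route records to hot (fast, expensive) or cold (cheap,
# longer-term) storage based on latency needs, and tag them with
# business terms. Tier names, tags, and sensors are hypothetical.
hot_store, cold_store = [], []

def ingest(record, low_latency, tags=()):
    """Tag the record, then tier it by urgency."""
    record = {**record, "tags": list(tags)}
    (hot_store if low_latency else cold_store).append(record)

# An urgent vibration reading feeding a predictive-maintenance model.
ingest({"sensor": "vibration-07", "value": 0.82},
       low_latency=True, tags=["line-3", "predictive-maintenance"])

# A slow-moving ambient reading that can go to cheap archival storage.
ingest({"sensor": "ambient-temp-02", "value": 21.4},
       low_latency=False, tags=["plant-env"])

print(len(hot_store), len(cold_store))  # 1 1
```

The tags are what later make the data discoverable in business terms, so that an analyst can query for "line-3" or "plant-env" rather than raw sensor identifiers.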
All this sorting and storing — pre-analytics data massaging — must happen before any ML- or AI-based algorithms can be run on the data. It is these algorithms that enable an understanding of trends hidden in data, allowing extraction of insights that benefit business.
The Role of DataOps
Data management is an important function that needs to be operationalized and included in regular processes. We call this DataOps. Overall, it is an essential part of optimizing the process for performing agile data analytics to improve the quality and cycle time of real-time insights.
In manufacturing, DataOps can make the difference between overstocked inventory and meeting market demand where it stands. With an eye on the connection between data analytics and IT operations, DataOps offers an automated approach to support quality analytics while shortening the cycle time of data analytics. DataOps delivers edge intelligence to distinguish important data (resource levels and output levels) and to decide what is sent onward (perhaps to the cloud for further processing, such as correlation with historical data). DataOps methodologies also support efficient, secure, governed data ingestion. In other words, DataOps gives manufacturers awareness of, and the ability to stay on top of, rising and falling demand levels, competent or sluggish supply chain delivery, and on-time or missed-deadline fulfillment.
“Data management is an important function that needs to be operationalized and included in regular processes.”
Emphasizing communication and collaboration, DataOps supports the integration and automation of the ETL process with ETL tools. Data must be extracted (from a database or other source), transformed into another format (so it matches the destination database’s format requirements), and then loaded (stored). Keep in mind that loading may occur in a completely different location: another cloud, or a combination of multicloud environments.
DataOps solutions need to be flexible. They may need to be deployed according to customer requirements: in the public cloud, in a private cloud, on premises, or a combination of these in a hybrid cloud. To support the process of running analytic models on data, DataOps must first accommodate AI and ML models, which must be built, tested, and run in production volumes to extract insights from the data to make it actionable.
Versatile and intuitive reports and dashboards are essential for presenting analytics results in forms that are easy to export and share. Select from common tools, such as Tableau, Power BI, or CTools, to exhibit results.
How to Get Started
A clear assessment of the current IT environment is essential to clear the way for an organization to step up to real-time insights across its critical processes, products, and people. Concentrate on these areas:
Take stock of digital readiness. Often even a small change can ripple through an entire company, not just affect one specific function, such as manufacturing or production. For example, changes in the front office may have unexpected effects on the production floor. Are you ready for digital change in all areas?
Find key areas to optimize first. For many, this is taking the first step towards digitalization at the factory floor itself, such as using more digital and computerized processes versus manual paperwork.
Finally, operationalize and institutionalize. Embrace DataOps to set processes — analyze, diagnose, predict, and automate — for repeatability.
It certainly is not news to manufacturers that phenomenal levels of new data are flooding the industry. Neither is it news that M4.0-level technologies are available to address the deluge. What is newsworthy is the results that can be obtained from employing M4.0 technologies to rein in both legacy and new data, get it to a strategic location, and pull from it the insights that drive business profitability. Data silos, stagnant data lakes, and poor data management are conditions which may hinder your progress toward better decision-making today, but they need not hold you back tomorrow.
“Employing M4.0 technologies on both legacy and new data can help generate insights that drive business profitability.”
The right partner can help you with an assessment of your organization’s data as well as the steps to establishing data availability, moving data to an effective location, and employing DataOps to enable and speed analytics. A partner experienced in interconnectivity, automation, ML, and real-time data can help you take advantage of the smart technologies and automation of Manufacturing 4.0 and propel you toward data-driven digital transformation.
Best Practices to Maximize Results
When the work has been done to create the IT environment that will best manage, govern, store, protect, and leverage data, how does an organization make insights from data work more efficiently? The answer is in easy-to-use applications and intuitive, tailored micro-vertical solutions. Such tools are also exportable to common business intelligence (BI) tools for versatile reporting.
Many businesses are cautious about investing in custom-made data platforms that deepen vendor lock-in, whether to the software vendor or to a particular cloud infrastructure provider. The better approach is to use software that is vendor- and cloud-agnostic and delivers value by tackling specific problems. For example, an organization may wish to bring basic digital tools — such as digital Gemba boards — to the shop floor for use by technicians. Such a solution might discourage and eventually replace the use of inefficient paper and Post-it notes.
In addition, vertical and micro-vertical solutions can be used to address specific challenges, such as asset performance management, enterprise asset management, and field service management. These efficient, easy-to-use solutions preclude the need to learn a new IoT data platform. Rather than creating one more data silo, such solutions enable data to flow and be shared easily, allowing more collaboration and co-creation of business value. M