ML Journal August 2021


Making Big Data Consumable

These data strategies and next-generation tactics will help organizations embrace the new decision-making framework they need to fully operationalize data insights.   

Big Data isn’t new — but strategies for incorporating and consuming Big Data are starting to attract more attention. Increasingly, the conventional suite of intelligence software is evolving well beyond just business intelligence (BI). It is expanding to incorporate advanced capabilities like artificial intelligence (AI) and machine learning (ML). With new data management tools available, manufacturers can now catalog data, retain complexity, harmonize disparate sources, and apply low-code/no-code reporting applications. The result is a broad umbrella of insight, valuable for facing the future and building strong differentiators.

Harnessing Lost Opportunities

Historically, data has been vital to decision-makers as they assembled both short-term tactics and long-term strategies aimed at improving profitability. That potential has propelled the introduction of game-changing technologies, from condition-based sensors that collect machine data to cloud-based data warehouses that store vast amounts of raw data. How data is managed and consumed also needs to transform to keep up with the way users work and make decisions. To fully operationalize data insights, manufacturers need easy-to-use tools, context, and direct ties to factors they can influence.

It’s time to look closer at data strategies and adopt next-generation tactics that will help organizations embrace a new decision-making framework. Before the introduction of SaaS, cloud-native software, and managed services, industries could not scale analysis to enterprise-wide, multi-year data in a way that produced meaningful outcomes. We lacked the capacity and the workflows to evaluate and unravel novel types of insight.

The state of the cloud, coupled with the commoditization of storage and exploration techniques, now means never having to lose your data — or your opportunities. Given competing interests in balancing cost with performance, there has historically been a huge amount of information left on the proverbial cutting room floor because no one had identified or attached intrinsic value to retaining that data. Today, most manufacturers understand that data should be preserved, even if they don’t fully visualize its long-term application.

Today’s opportunity lies in recognizing the value of seemingly innocuous data, such as state changes and the evolution of how decisions are reflected in ERP transactions, and its hidden, intrinsic value in helping an organization fully model its past decisions and behaviors.

The Role of the Data Lake

The narratives around data lakes and their value have run the gamut. They have been lauded and supported, beaten up and attacked, and passionately defended. Such is the life of emerging technologies and innovations. However, 10 years since its advent, there’s no question that a data lake provides an economical way to store data. And so, manufacturers and businesses should recognize data lakes as not only a large component of next-generation data solution architectures, but also the point of entry for beginning a longer data journey. Commoditized pricing has introduced a host of new tools and technologies that focus on the challenging part of getting started — data acquisition, ingestion, and cataloging. Cloud deployment, with its vast elastic storage capacity, also has helped in capturing performance details of shop floor machinery — the heart of manufacturing operations.

Besides the increased volume of data being collected, the need for context is ever-present. Relationships and interdependencies between data points — or their context — are important not only in reaching meaningful insights, but also in deriving entirely new context models.

Typically, when we think of data analysis, we think of things like forecasting, planning, improving efficiencies, reducing scrap, and reducing waste or underutilized manufacturing capacity. That data on its own, though, doesn’t intrinsically solve business problems. Analyzed in the context of other variables like location or temperature, however, it can present very real opportunities to improve forecast accuracy, reduce waste and scrap, and lift labor and manufacturing efficiencies.

An investment in creating a connected enterprise requires a strategy for harnessing data and context catalogs. Without context, the data lake can be a data swamp.

“Cloud deployment, with its vast elastic storage capacity, has helped in capturing performance details of shop floor machinery — the heart of manufacturing operations.”

Tooling Up for Increasing Data Complexity

Another added complexity is the massive data derived from machine sensors and internet of things (IoT) technology. With the growing adoption of 5G networks, this is expected to increase.

Think of equipment, like rollers and stamping equipment on a manufacturing line, providing constant feedback about performance. There is obvious complexity, not only buried within the data itself, but in being able to define value and assign meaning to that data and correlate it back to the business.

There’s a need beyond the data capture and the tools to analyze that data. Manufacturing leaders need tools to help them understand that information complexity. We’re talking about a great deal of information continually flowing. Organizations need to create value and meaning from that data and apply it to day-to-day questions, such as what equipment you can take offline for maintenance without impacting delivery dates.

Increasingly, this means a pivot from conventional batch-oriented decision making toward frameworks featuring real-time decision engines. Leaving decisions to pre-calculated, conventional insights reduces the manufacturer’s agility and ability to react to subtle nuances and changes in data.
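To make the batch-versus-real-time contrast concrete, here is a toy sketch of a per-event decision rule; the machine names, vibration readings, and threshold are all hypothetical, and a production decision engine would of course be far richer.

```python
from dataclasses import dataclass

@dataclass
class SensorEvent:
    machine_id: str
    vibration_mm_s: float  # hypothetical vibration reading

VIBRATION_LIMIT = 7.1  # hypothetical alarm threshold

def decide(event: SensorEvent) -> str:
    """React to each reading as it arrives, rather than in a nightly batch."""
    if event.vibration_mm_s > VIBRATION_LIMIT:
        return f"schedule-maintenance:{event.machine_id}"
    return "continue"

# A batch process would accumulate these events and decide hours later;
# a real-time engine evaluates each one the moment it lands.
stream = [SensorEvent("press-4", 3.2), SensorEvent("press-4", 8.0)]
actions = [decide(e) for e in stream]
print(actions)
```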

This is where the software vendor industry is setting its sights. The traditional mashup of technology, including data lakes, AI/ML, and real-time reporting, is being replaced by one fully integrated system that is more responsive, more intuitive, and more accurate in predicting likely outcomes.

Making Data Consumable

Some software suppliers are stepping up to the challenge and helping manufacturers sort through the complexity of their data, providing new tools that go beyond simple business analytics and delve into the interaction of data from manufacturing machines, people, and processes. This is where AI and machine learning become valuable, generating inferences and efficiencies through predictive models, finding patterns that would be hard for a human to easily detect.

The next step is for the software vendor market to work at making AI and machine learning much more approachable and attainable to the offices of the CIO and other leaders. Adoption requires capturing attention and reducing friction or else the opportunity costs may lead decision makers toward more conventional capital investments.

Besides large volumes of data, manufacturers must also address disparate data coming from different sources. Different equipment suppliers embrace different technologies, and they differ in the level of interaction and the type of information they can provide. Manufacturers need a very competent solution that can store and start harmonizing the data, correlating or coalescing that information, while retaining the context.

This is typically the role of a data lake. With the data in one place, solutions can analyze, experiment, and start harmonizing data, looking at the data relationships from different, independent parts of the factory. From here, it becomes easier to exploit the economic value of that data, as conclusions can be captured and displayed in role-based dashboards.
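A minimal sketch of that harmonization step might look like the following: two invented feeds describe the same machine in different shapes, and a small function coalesces them onto one key while tagging each field with its source so context is retained.

```python
# Two hypothetical feeds describing the same machine in different shapes.
mes_feed = [{"machine": "press-4", "output_units": 120}]
sensor_feed = [{"asset_id": "press-4", "avg_temp_c": 61.0}]

def harmonize(mes: list[dict], sensors: list[dict]) -> dict:
    """Coalesce records onto one machine key, tagging each field's origin."""
    merged: dict[str, dict] = {}
    for row in mes:
        merged.setdefault(row["machine"], {})["output_units"] = {
            "value": row["output_units"], "source": "mes"}
    for row in sensors:
        merged.setdefault(row["asset_id"], {})["avg_temp_c"] = {
            "value": row["avg_temp_c"], "source": "sensor"}
    return merged

print(harmonize(mes_feed, sensor_feed))
```

Keeping the `source` tag alongside each value is the simplest form of the context retention the article describes: downstream dashboards can always trace a number back to the system that produced it.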

Other Modern Tools

Self-service reporting. While greater complexity exists, there is also a call for simplifying reporting to relieve bottlenecks. When the company has limited IT resources, managers may find themselves lined up at the IT team’s door waiting for specialized queries to be created. The answer is in reporting tools that use simple drag-and-drop interfaces to help users with little or no coding experience structure report queries. These self-service reporting tools truly put data analysis in the hands of the people closest to the use case and best able to achieve value.
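Under the hood, such tools typically translate the user's drag-and-drop choices into a structured query specification. The sketch below imagines a tiny version of that: a spec dictionary (as a hypothetical UI might emit) is run against some made-up plant data by a few lines of filtering and aggregation.

```python
# Hypothetical rows a reporting tool might pull from an ERP extract.
rows = [
    {"plant": "east", "scrap_kg": 12},
    {"plant": "west", "scrap_kg": 7},
    {"plant": "east", "scrap_kg": 5},
]

def run_report(spec: dict, data: list[dict]) -> float:
    """Filter rows by one field, then sum another, as the spec directs."""
    selected = [r for r in data if r[spec["filter_field"]] == spec["filter_value"]]
    return sum(r[spec["sum_field"]] for r in selected)

# What a drag-and-drop UI might emit: "show total scrap for the east plant".
spec = {"filter_field": "plant", "filter_value": "east", "sum_field": "scrap_kg"}
print(run_report(spec, rows))  # 17
```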

Data cataloging. By simply capturing everything in the enterprise, your data lake is at risk of becoming a data swamp, murky with tangles of unidentified data. Imagine going into a storage unit with thousands of filing cabinets but you have no idea what information is stored or where. How can you start to sift through it all? The task can be overwhelming, even for AI-driven software.
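A data catalog is, at its core, metadata about datasets that makes them findable. The toy in-memory catalog below (all names invented) shows the idea: register each dataset with an owner and tags, then search by tag instead of rummaging through the filing cabinets.

```python
# A toy in-memory catalog: each dataset is registered with metadata
# (owner, tags) so it can later be found instead of lost in a swamp.
catalog: dict[str, dict] = {}

def register(name: str, owner: str, tags: list[str]) -> None:
    """Record who owns a dataset and how it should be discovered."""
    catalog[name] = {"owner": owner, "tags": tags}

def search(tag: str) -> list[str]:
    """Return every dataset carrying the given tag."""
    return [name for name, meta in catalog.items() if tag in meta["tags"]]

register("line4_vibration", "maintenance", ["sensor", "vibration"])
register("monthly_scrap", "quality", ["scrap", "reporting"])
print(search("sensor"))  # ['line4_vibration']
```

Real catalog services add schemas, lineage, and access control on top, but the discovery mechanic is the same: metadata first, data second.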

Data governance. Data lakes, without some level of governance and intuitive metadata, can quickly spiral out of control, making it very difficult to manage them and create value. Data governance and data cataloging help users understand what all that data means, how it can be navigated, and, in a climate of growing focus on data privacy and data exposure regulations, who can tap into an enormous pool of insights with very real economic value.

“Whether you are considering using data insights for a revenue stream, a differentiator, or as tools to help you improve productivity and performance — now is the time to get started.”

 

Tooling software. In the business software space, there is an emerging need for tools that will help users build their questions, using metadata analysis and predictive capabilities to identify trends and patterns in the data — without having to explicitly ask the system to do so. Modern tools can help identify trends and insights before the business even has a glimpse of the possibility. The convergence of several previously independent domains, including no-code/low-code, AI/ML, business intelligence, performance management, and other value-add software investments, is evolving into a singular insights-focused platform.

Converging resources. These different resources must come together. Data needs to be consumable and presented in a meaningful way, so that business users across the organization can take away actionable insights. This means driving toward the lowest common denominator, making the data functional and useful to the people who make decisions. Complexity and scale are issues, but so is the need to compartmentalize the insights. Convergence delivers them in bite-size chunks that the business can examine through reports.

Obstacles to Adoption

Despite these modern tools and applications, companies still sometimes fail at their data strategies. A common reason is the lack of a clear objective. Another is difficulty understanding what the data is telling them. People don’t always know how to develop data models, what questions to ask, or which data sources to query to get their answers. This can lead to cognitive friction.

Friction is any resistance or hesitation in leveraging the data. Often, this happens when the tools are separate, unintegrated BI solutions; it feels out of context to go to a different solution to do the analysis. The tools need to be built into the ERP to encourage use and build trust.

So the delivery of that information, with contextualization, also improves absorption. If the intelligence is delivered through the ERP, anyone from the back office to the shop floor, the production lines, and the foremen will absorb it into their day-to-day workflows. This is when the true impact becomes undeniable.

Advice to Novice Data Collectors

Manufacturers need to start with a strategy. They need a big-picture view of what they want to achieve and how they are going to make it happen. They should also know how they plan to consolidate disparate data sources and how the data insights will be used. Before implementing Big Data projects, there’s value in examining the different components. Who are the consumers? Are the right tools in place, like data catalog procedures, data curation models, and segregation of duties in the business, to complement this investment in Big Data? Without those levels of governance, the data lake can very quickly become murky, offering a diminished return on investment.

Realizing the Vast Potential

Applying modern tools to manage your data yields meaningful business insights, and it can also open a revenue stream. Collected and harmonized data can be used to create benchmarks or guidelines that will be valuable to peers, customers, and channel partners. The insights can be packaged and sold as reference material.

Data can also be used to build and monitor differentiating features, helping you stand apart from the competition. You can pick features to exploit, such as speed of delivery to customers, quality standards, or resolution rates for after-market service. Data can support the claims that you make to customers, then help you protect these differentiators.

Whether you are considering using data insights for a revenue stream, a differentiator, or as tools to help you improve productivity and performance, now is the time to get started. Deriving data insights takes time and sufficient data to see patterns and anticipate future likely outcomes. So don’t put off embarking on the journey.

Data provides opportunities to build a competitive edge and offer unique capabilities. C-suites need to recognize that data management goes beyond storage of information; there’s also a very rich economic value to it.
