
The Do’s and Don’ts for Deploying AI

Success on the factory floor hinges on a measured, disciplined approach

By Lonnie Miller and Roger Thomas

Companies we work with often talk about “failing fast” – learning what doesn’t work and moving on without delay. Yet in the same breath, there’s still the real fear of simply not delivering what was promised or wasting valuable resources. While artificial intelligence has been touted as a promising technology for a range of business applications, it’s likely you are hesitant to make a significant investment in it precisely because of those risks.

The promises of artificial intelligence continue to receive a healthy spotlight across numerous industries, including our manufacturing communities. While AI comes in many forms, keep in mind that, at its core, AI is a host of technologies and algorithmic approaches that make it possible for machines to learn from new data, adjust to new inputs and perform human-like tasks. Here are a few examples of what AI is doing today:

  • Predictive adjustments of blast furnace temperature settings so teams can free themselves from manual monitoring and adjustments
  • Natural language processing to absorb and spot patterns in multilingual customer logs that may signal upstream problems on manufacturing lines
  • Computer vision to spot missing, minute components that line operators often can’t see

What do these AI use cases have in common? And further, how should manufacturers define AI? From an outsider’s view, AI allows computers to learn from new information and perform tasks requiring human-like intelligence. From an insider’s view, AI entails deploying a model that learns: you prime the model with known data, the model makes predictions and recommendations on new data, and feedback loops allow the model to learn new patterns. AI is less about a perfectly optimized model and more about optimizing the feedback loop that produces better outcomes.
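To make that feedback loop concrete, here is a minimal sketch in Python (using scikit-learn and synthetic data purely for illustration; the column meanings are hypothetical) of priming a model with known data, scoring new data, and folding confirmed outcomes back in so the model keeps learning:

    # Minimal sketch of the feedback loop: prime with known data, predict on
    # new data, then retrain as confirmed outcomes flow back in.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # 1. Prime the model with known (labeled) historical data.
    X_known = rng.normal(size=(500, 4))                        # e.g., sensor readings
    y_known = (X_known[:, 0] + X_known[:, 1] > 0).astype(int)  # e.g., defect flag
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_known, y_known)

    # 2. Score new data arriving from the line; predictions drive alerts or adjustments.
    X_new = rng.normal(size=(50, 4))
    predictions = model.predict(X_new)

    # 3. Close the loop: once true outcomes are confirmed, fold them back in
    #    and refit so the model can learn new patterns.
    y_actual = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
    X_known = np.vstack([X_known, X_new])
    y_known = np.concatenate([y_known, y_actual])
    model.fit(X_known, y_known)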

From a manufacturer’s perspective, the first implication is that AI does not constitute a traditional waterfall IT project. To find success with AI, manufacturing leaders must rethink their approach for deploying AI onto the manufacturing floor.

Supporting successful AI strategies within your manufacturing community will rely heavily on the disciplines that lead up to a model’s deployment. Drawing on practical and recurring field observations from our SAS Institute colleagues who have worked with our customers, we offer seven “do’s and don’ts” to consider as you evaluate your path forward to deploying AI technology.

Do’s and Don’ts of AI Deployment:

Do #1: Build an analytics annuity 

Rationale: Two key concepts in data science are model building and model deployment. Model building includes tinkering with algorithms, exploratory analytics and discovery. Model deployment happens when you put models into production through the processes of scoring and inference.

When thinking about these concepts, it helps to think about stocks and annuities in the marketplace. Stocks are sexy and fun. Annuities are slow and boring. Similarly, the world of AI attracts more headlines and papers about building new AI-enabled insights than it fosters discussion about actually deploying AI processes. But when you’re running at scale and playing the long game, small dividends in deployed process improvement yield huge returns through the power of compounding interest.

Sounds simple to fix, but it’s easier said than done. A recent study found that 77.1% of companies report business adoption of big data and AI initiatives remains a major challenge.¹ The tendency to stay focused on the model-building phase leaves money on the table for many businesses.

Deployment Implications: The mindset of building an AI annuity runs counter to the hype surrounding AI. It also may conflict with your chief data scientist’s aspirations to make a name for themselves. Investing all-in on the prospect of continued AI model building and discovery will soon feel like you’re trying to beat the market. The alternative is to build a balanced portfolio of experimental AI complemented by a more stable, long-term AI annuity.

Here is the formula:

  • Step 1: Build some exploratory models for a Proof of Concept.
  • Step 2: Deploy the models into production with a limited scale.
  • Step 3: Review your model’s performance and validate your assumptions on this limited scale.
  • Step 4: Continue to iterate on model building and deployment.
  • Step 5: Scale the production deployment of your findings. This is what we call the ‘analytics annuity’.

More often than not, companies iterate on exploratory model-building ideas and prototypes. This generates plenty of excitement and new discoveries, but not the bigger investments needed to deploy at scale. That brings us to one additional caveat: deployment requires hard work.

A second reason companies wander from the AI annuity strategy is that deployment requires discipline. It requires putting your findings into production and facing the results. It requires wrangling data and iterating on your models. And speaking realistically, it requires having an analytics platform in place.

These things don’t just happen on their own. Data science programs need to be incentivized in the right way to encourage the right type of behaviors. The good news for manufacturers is that this equation for success relies less on landing top-caliber PhD data scientists and more on instilling a culture of disciplined AI investments with an appreciation for the power of compounding interest.

Do #2: Treat it like a continuous improvement process  

Rationale: What do lean manufacturing, Six Sigma and AI all have in common? They all require a focus on the journey. Imagine a lean Six Sigma project that trained up your workforce, deployed some new software, and then dispersed the team upon going live. What would happen? What would the return on your investment look like? As manufacturers know, the true value in these types of projects comes from the shifted mindset of continuous improvement and continuous growth.

Deployment Implications: The good news for manufacturers is that continuous improvement programs have been ingrained into the manufacturing ethos over multiple generations of the manufacturing workforce. Let’s think about how the three types of waste in lean manufacturing apply to AI deployment. To recap, here are the three wastes:

  • Muda – The seven wastes of transportation, inventory, motion, waiting, over-production, over-processing, and defects
  • Mura – The waste of unevenness and inconsistency
  • Muri – The waste of overburden

Inherent in the lean process is a model that links waste back to its root cause. You start with the visible signs of waste in your process (Muda). Similarly, your AI strategy should start with things that are easily observed and grounded in common sense. Now think about your experience in discovering Muda. While the discovery plays a key role, the true challenge lies in tracing back to the root cause, which often entails core process inefficiencies that tie into Mura and Muri. This disciplined thought process of continuous improvement will give you the right mindset for your AI journey.

 

 

Supporting successful AI strategies within your manufacturing community will rely heavily on the disciplines that lead up to a model’s deployment.

Do #3: Codify for consistency

Rationale: When done right, deployed AI models provide an extension of a manufacturer’s underlying standard operating procedures. Any engineer, quality analyst or functional role needing consistency and reliability will appreciate this. A question to ask yourself: why would you let human judgment alone determine what is “acceptable or not” in a part defect? Manual inspection and scrutiny can be riddled with systematic biases (say, from years of doing a job the same way) or innocent misses (for example, not seeing what a zoom lens can see in pixels). For all the SOP (Standard Operating Procedure) rigor, these workflow challenges can lead to process variability and negative outcomes. If, for example, computer vision-based models can define what “good” looks like in a fabrication process, is it worth using these types of inputs and detection models? If high-fidelity audio data can detect out-of-tolerance vibrations in a mounting harness for an engine, is it worth letting predictive models help?

Deployment Implications: In line with continuous improvement strategies, reviewing SOP definitions prompts a healthy questioning of dated practices and reveals improvement opportunities. Wherever you can augment or codify human judgment with AI algorithms, there is an opportunity to derive benefits through evolved procedures, approval queues and decision making.

But what about your workforce? Many have written that AI will replace humans. Do not accept this as an inevitable truth. AI augments rather than replaces human skills. While AI will shape the manufacturing jobs of the future, AI will not replace them. For as amazing as AI has become at performing stupid human tricks, its intelligence is narrow.

This narrow intelligence will allow your workforce to focus on higher levels of thinking such as optimization trends and root cause patterns. Your workforce has unique value in their judgment. Humans know the workflows and critical outcomes that define SOPs. Humans know escalation procedures that should occur given different types of alerts, signals or reports. Simply put, your workforce understands your why.

Do #4: Manage model performance 

Rationale: A model’s purpose is to reflect reality and support decision making. When you hear the term “model degradation,” it is less about the code getting rusty than about the model’s understanding of reality changing. To that point, putting your model into production is a bit like driving a new car off the dealership lot: it starts to decay and lose accuracy once it’s chugging through real-world data.

But as defined at the top of this article, AI models are defined by their ability to learn. If AI models can learn, why worry about decay? The core answer: the ability to learn does not remove the need to monitor and score a model’s behavior, because new or refreshed data can change the model’s predictive power over time.

Model decay is a serious challenge. Assume a set of failure predictions are derived for a nozzle injector in a plastic injection mold used in plants in Mexico. What happens when new performance data enters that same failure model for the same part but from plants in other countries? Do you trust the model equally as before even though the new regional data was absent during the original model development? And how can you know if the model’s accuracy has dropped below acceptable levels?

Operations will require easily viewable metrics such as classification accuracy, area under the curve and logarithmic loss to determine when models need to be refreshed or replaced. The more you rely on AI, the more you will want to invest in the roles and the technology for evaluating model performance over time. Avoiding the costly, inferior predictions associated with model decay will require scrutiny of new data sources, predictor variables and the underlying algorithms themselves.
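To illustrate what that monitoring can look like in practice, here is a brief, hypothetical Python sketch (scikit-learn, synthetic data and made-up alert thresholds) that scores a recent batch of production data with confirmed outcomes and flags the model for review when accuracy or area under the curve falls below an agreed floor:

    # Sketch of monitoring a deployed classifier for decay using the metrics
    # named above: accuracy, area under the curve (AUC) and log loss.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, roc_auc_score, log_loss

    rng = np.random.default_rng(1)

    # Model trained on historical plant data (synthetic stand-in here).
    X_train = rng.normal(size=(1000, 5))
    y_train = (X_train[:, 0] > 0).astype(int)
    model = LogisticRegression().fit(X_train, y_train)

    # A new month of production data, with outcomes now confirmed. Note the
    # drift: inputs have shifted and the true relationship has changed.
    X_recent = rng.normal(loc=0.3, size=(200, 5))
    y_recent = (X_recent[:, 1] > 0).astype(int)

    proba = model.predict_proba(X_recent)[:, 1]
    metrics = {
        "accuracy": accuracy_score(y_recent, proba > 0.5),
        "auc": roc_auc_score(y_recent, proba),
        "log_loss": log_loss(y_recent, proba),
    }

    # Flag the model for refresh when performance drops below agreed thresholds.
    if metrics["accuracy"] < 0.80 or metrics["auc"] < 0.75:
        print("Model decay detected - schedule a retrain or review:", metrics)
    else:
        print("Model within tolerance:", metrics)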

Deployment Implications: Models that are tracked, tuned, rebuilt and sometimes retired support the goal of acting on manufacturing intelligence. This translates to several economic benefits, such as quality engineers receiving alerts ahead of a failure or procurement managers ordering optimal SKU-level quantities for parts warehouses based on daily forecasts. As mentioned, the purpose of modeling is to reflect reality. Living in a world where change is constant means that model management must be core to your analytics strategy from the outset. Deploying models into production without built-in mechanisms for measuring model performance and degradation is setting your organization up for failure.

Don’t #1: Execute on a data strategy absent an analytics strategy 

Rationale: AI and analytics are completely dependent on the data that feeds them. Garbage in, garbage out. So why wouldn’t it make sense to focus on cleaning up your data first? It’s a common question that leads to a common mistake.

The primary challenge ties to the reality that data quality and management is an ongoing journey itself. If you wait for your data to be perfect, then you will never launch your AI strategy.

The primary benefit of linking analytics to your data strategy comes from the powerful feedback loop analytics provides. Core to a data strategy is a prioritization exercise for where to focus. Analytics provides a flashlight for spotting which data holds the most promise for business insight. It can also provide a unique perspective on data quality that goes well beyond master data formatting and standardization.

It’s worth noting that we’re talking more about analytics than AI in this section. Remember that analytics is a building block for AI. While it is feasible to deploy AI for this process of establishing a data strategy, more direct analytical methods are often more practical here. Throughout your journey, know the different tools in your toolbox. As manufacturers, we all appreciate the importance of selecting the right tool for the right job.
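As a minimal illustration of that “flashlight,” the sketch below (Python with pandas; the columns and toy values are invented) profiles completeness and a rough signal strength per field so a data strategy can prioritize where cleanup pays off first:

    # Lightweight profiling: which fields are usable today, and which ones
    # appear to carry signal about the outcome we care about?
    import pandas as pd

    df = pd.DataFrame({
        "line_speed":  [120, 118, None, 125, 119, 121, None, 117],
        "oven_temp_c": [210, 208, 211, None, 209, 212, 210, 208],
        "shift":       ["A", "A", "B", "B", "A", None, "B", "A"],
        "defect_flag": [0, 0, 1, 1, 0, 1, 1, 0],
    })

    # 1. Completeness: share of non-missing values per column.
    completeness = df.notna().mean().sort_values()
    print(completeness)

    # 2. Rough signal: how strongly does each numeric field track the outcome?
    signal = df.select_dtypes("number").corr()["defect_flag"].abs().sort_values(ascending=False)
    print(signal)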

Deployment Implications: The idea of pairing your data strategy with your analytics strategy feeds directly into the concept of an analytics lifecycle. Its three elements – data, discovery and deployment – feed into one another for both individual data science projects and broad-based roadmap strategies. To give you an idea of how intertwined these concepts are, here is a methodology one of our veteran R&D leads walked us through for thinking through an analytics deployment:

Question 1: What’s the Purpose? Always start with purpose. What specific outcome do you hope to accomplish?

Question 2: Where’s the data? Focusing on the data you have vs. the data you don’t is a critical step in creating momentum and finding success.

Question 3: How is your data quality? Some models and techniques have higher tolerance for bad/missing data.

Question 4: What is your data latency? Do you require real-time decisions, or will it suffice to process decisions in periodic cycles or entirely offline? This question has big implications in your deployment architecture.

Question 5: What skills do you need to work with? Stay cognizant of the personnel who need to own the model going forward. Also, think holistically about potential downstream impacts for relying on this model.

Question 6: What is your plan for deployment? Plan backwards. Establish this plan from the beginning.

Question 7: How will you integrate AI into existing processes? Everything comes back to process. Don’t assume your manufacturing assembly lines will blindly embrace AI automation. How does AI help support the existing team, and what will AI allow your teams to do differently?

Question 8: What is your execution environment? Mixed code bases? Mixed environments? Think through any complexity factors upfront. Also, consider the long-term overhead and inefficiencies of maintaining mixed environments.

Question 9: How is the model useful? Most models never get deployed to production. After you have developed your AI strategy, be blunt in assessing its usefulness. Peer review is critical here to ensure you don’t fall victim to the endowment effect – ascribing more value to something simply because it’s yours.

Question 10: What is your plan for model governance and an ongoing feedback loop? See Do #4: Manage model performance, above.

Don’t #2: Underestimate your data preparation efforts 

Rationale: This is a foundational aspect echoed by nearly all our interviewed experts. For models to work in production, you must engineer the data prep steps relied upon during the model build phase. For example, let’s say you join three data sets reflecting:

  • a mix of 12,000 SKUs of industrial filters that failed and survived in the field over the last 18 months, along with …
  • related warranty claims costs, along with …
  • contract data that indicated renewed orders (or not) for the impacted customer accounts using the filters.

And on top of this, your analysts created 40 new attributes from this merged data, drawing on product image, numeric and text data (this is called “data fusion” – see sidebar). These attributes were created to ultimately predict an account’s propensity to renew a contract. Pushing this model into your production CRM system requires you to replicate whatever steps were used to create the initial data sets, on an ongoing basis, for as long as the model runs in production.

Enter your data management and IT teams! Ensuring data formats are correct, transposing rows into columns where needed, and successfully creating new attributes from the warranty claims file all must be repeatable. Without thinking through the data prep process upfront, you may find an attribute is inaccessible or creates too much of a performance hit for scoring. As challenging as it can be, it is a critical process to master in the overall workflow.
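One way to keep that prep repeatable is to express it as code that runs identically at model build time and in every production scoring cycle. The sketch below (Python with pandas; the tables and column names loosely mirror the filter example above and are hypothetical) wraps the joins and derived attributes in a single function:

    # Repeatable data prep: the same joins and derived attributes run at
    # build time and at every production scoring cycle.
    import pandas as pd

    def prepare_features(filters: pd.DataFrame,
                         claims: pd.DataFrame,
                         contracts: pd.DataFrame) -> pd.DataFrame:
        """Join the three sources and derive model-ready attributes."""
        df = (filters
              .merge(claims.groupby("sku", as_index=False)["claim_cost"].sum(),
                     on="sku", how="left")
              .merge(contracts[["account_id", "renewed"]], on="account_id", how="left"))
        # Derived attributes must be recreated the same way on every run.
        df["claim_cost"] = df["claim_cost"].fillna(0)
        df["cost_per_unit"] = df["claim_cost"] / df["units_shipped"].clip(lower=1)
        df["high_cost_flag"] = (df["cost_per_unit"] > df["cost_per_unit"].median()).astype(int)
        return df

    # Tiny worked example; in production the same call runs on fresh extracts.
    filters = pd.DataFrame({"sku": ["F1", "F2"], "account_id": [1, 2], "units_shipped": [100, 50]})
    claims = pd.DataFrame({"sku": ["F1", "F1"], "claim_cost": [250.0, 100.0]})
    contracts = pd.DataFrame({"account_id": [1, 2], "renewed": [1, 0]})
    print(prepare_features(filters, claims, contracts))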

Deployment Implications: Without systematically repeating data preparation for AI models, predictions will simply fail. Worse, such models can quietly undermine their core purpose of surfacing ranks, alerts or recommendations, with effective decisioning moments lost to inaccurate calculations and model outcomes.

Similar to Do #4 on model management, the most important takeaway is to build these mechanisms for automated data transformations, flows and management in from the outset. To do otherwise will set your AI initiative up for failure.

 

 

The equation for success relies less on landing top-caliber PhD data scientists and more on instilling a culture of disciplined AI investments with an appreciation for the power of compounding interest.

Don’t #3: Overlook full integration requirements 

Rationale: This is a premier hang-up in deployment. Many of our introductory discovery meetings sound like this: “We built a model, but it won’t work in production. We’re not worried about the methodology or algorithms we used; we’re just stuck getting the operations team to implement our logic.” Sound familiar? This is also known as the great IT/OT divide.

We find that successful deployments occur when both business owners and technical teams are at the table. Sounds naïve or overly obvious? Perhaps, but when all parties help one another think through the “what” and “how” of using data science in their environments, deployments are far smoother. One of our colleagues made this simple point: “Think through the labor required to integrate models in settings outside of where they were built.” This raises the need for a flexible platform that supports multiple languages and avoids re-writing logic (often thousands of lines long!). Can R-based scoring code be read into a Teradata database? Look for integration points that help your IT teams avoid days of testing new code that merely replicates what your data scientists authored.

Related to this, integration implies knowing where the model will live and execute. Can a REST API actually deliver 10,000 SKU-level forecast outputs down to a warehouse manager’s mobile device with acceptable performance? If not, the multi-SKU forecast might as well not exist. Without deployment, no data-driven decisions occur. In this case, stocking the wrong parts could result in rush shipments for high-demand parts and excess inventory write-offs of seasonal parts.
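To show what one such integration point can look like, here is a minimal, hypothetical Python sketch (Flask and pickle; the file name, route and payload shape are assumptions, not a prescribed architecture) of exposing an already-trained model behind a REST endpoint so downstream systems request scores instead of re-implementing the logic:

    # Minimal REST scoring wrapper around a model exported from the build
    # environment. Assumes forecast_model.pkl exists on the scoring host.
    import pickle
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    with open("forecast_model.pkl", "rb") as fh:
        model = pickle.load(fh)

    @app.route("/score", methods=["POST"])
    def score():
        payload = request.get_json()                  # e.g. {"features": [[...], ...]}
        predictions = model.predict(payload["features"]).tolist()
        return jsonify({"predictions": predictions})

    if __name__ == "__main__":
        app.run(port=8080)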

Deployment Implications: Technology can be your best friend or worst enemy. After teams find AI solutions for anticipating profitable outcomes, they must embrace the holistic IT/OT process for production deployment to realize the predictive power observed in their development environment. Making your AI work in the field is greatly enabled by agnostic technologies that can be understood and manipulated by others in your company.

As a final point, think through where the AI will physically execute and be relied upon. If you don’t, this last mile of AI can squander millions in investment dollars and opportunity costs.

Conclusion 

Speed kills. Competitively speaking, the adoption and use of new or refined technology is a differentiator for those who must make complex, data-driven decisions. Amid the broad promises that Industry 4.0 and the Industrial Internet of Things hold for manufacturers, one thing is assured: applying AI to some very basic operations is doable now. Should you wait? We think not, and we encourage a critical review of what we’ve seen work in the field. M

 

Practice “data fusion”

Data diversity is as real as the variety we encounter in the human population. When considering inputs into AI models, look hard around the enterprise for data types that can improve your ability to statistically narrow down the underlying factors contributing to recurring changes or outcomes (good or bad for your business). Don’t think in terms of “structured” vs. “unstructured” data – rather, ask your teams what data can be examined to best give explanatory or predictive strength in a deployed model.

From our experience, this includes not only conveniently formatted numeric data, but also text data from call center notes or image data from pictures of contaminated materials in a staging area. We consider audio data that records anomalous decibel spikes or drops a robust candidate for predicting, for example, whether an engine will seize up. In this latter example, the mere task of surfacing a problematic noise pattern requires techniques to identify which combinations or patterns of “ups and downs” in decibels are statistically indicative of pending doom.

The concept of “data fusion” is analogous to bringing together cultural recipes to create a new dish. As part of this, those building your AI models benefit from creating new features or attributes derived from your original data sources. And if the various data types themselves are relevant to the problem you’re solving, the features created from them to serve as model inputs have a greater chance of predicting future outcomes or making appropriate recommendations through the deployed modeling techniques.
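As a small, hypothetical sketch of that fusion (Python with scikit-learn and pandas; the toy records and column names are invented), numeric sensor readings and free-text operator notes are combined into a single feature matrix feeding one model:

    # "Data fusion" in miniature: numeric readings and free-text notes become
    # one feature matrix for a single predictive model.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline

    records = pd.DataFrame({
        "vibration_db":  [61.2, 74.8, 60.5, 78.1],
        "oil_temp_c":    [82, 95, 80, 97],
        "operator_note": ["running smooth", "grinding noise on startup",
                          "normal shift", "loud knocking near mount"],
        "failed_30d":    [0, 1, 0, 1],
    })

    fused = ColumnTransformer([
        ("numeric", "passthrough", ["vibration_db", "oil_temp_c"]),
        ("text", TfidfVectorizer(), "operator_note"),   # text becomes numeric features
    ])

    model = Pipeline([("features", fused), ("clf", LogisticRegression())])
    model.fit(records.drop(columns="failed_30d"), records["failed_30d"])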

One of our manufacturing clients needed to keep products from jamming up when they converged on a conveyor line. In this case, pictures of products on the staging line were a core input to a machine learning model that SAS built. The model needed to recognize whether units of the product were “together” or “apart.” In the development phase, we trained the model with pictures of different units in the staging area on that line. A signal was then developed to indicate whether the units were together or apart, and each signal was associated with a known jam outcome (or not).

Ultimately, when deployed, the model raised an alert to operators whenever the signal indicated that units had been together for an extended duration. In the deployment, cameras placed over the staging area automatically fed the production model new images so that signal calculations could trigger the necessary alert.
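The alerting logic itself can be quite simple. Below is a hypothetical Python sketch (the threshold, frame interval and signal values are assumptions for illustration) of raising an alert only when the “together” signal persists past a set duration:

    # Alert when the vision model's "together" signal persists too long.
    TOGETHER_LIMIT_SECONDS = 5
    FRAME_INTERVAL_SECONDS = 1

    def jam_alerts(signal_stream):
        """Yield an alert whenever 'together' persists past the limit."""
        together_time = 0
        for frame_signal in signal_stream:            # e.g. output of the deployed model
            if frame_signal == "together":
                together_time += FRAME_INTERVAL_SECONDS
                if together_time >= TOGETHER_LIMIT_SECONDS:
                    yield "ALERT: possible jam forming on the conveyor"
                    together_time = 0                 # reset after alerting
            else:
                together_time = 0

    # Example: five consecutive "together" frames produce one alert.
    frames = ["apart", "together", "together", "together", "together", "together", "apart"]
    print(list(jam_alerts(frames)))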

 


¹ Big Data and AI Executive Survey 2019: https://newvantage.com/wp-content/uploads/2018/12/Big-Data-Executive-Survey-2019-Findings.pdf

The authors would like to acknowledge the following SAS Institute colleagues for their contribution to this article based on their professional field experience and industry perspectives:

  • Kirk Chinavare, Senior Solutions Architect
  • Nate Cox, Senior Associate Systems Engineer
  • David Duling, Director, Advanced Analytics R&D
  • Gene Grabowski, Principal Solutions Architect
  • Abel Henson, Principal Solutions Architect
  • Jesse Lund, Senior Associate Systems Engineer
  • Diana Shaw, Manager of Americas AI Team, Global Technology Practice
  • Wayne Thompson, Chief Data Scientist & Senior Manager, Product Management
  • David Ungaro, Principal Engagement Manager
  • Varun Valsaraj, Senior Operations Research Specialist
