February 7, 2023 – Reading time: 9 minutes
AI has been changing the world since long before the ChatGPT wave swept over us. AI applications are challenging how companies work in labor-intensive areas: enormous productivity gains per employee are promised – and, conversely, a company's demise if AI is not used. Companies are currently working hard to evaluate the possibilities for themselves. One of the questions to be answered is where the potential of AI applications lies – especially in system and machine operation and in production, and not least for medium-sized companies.
Predictive Maintenance is an already well-known and relatively well-studied use case. But what about the opportunities of Predictive Quality? Does it carry similar opportunities and risks for a drastic change in the competitive situation? Christopher Seinecke, Director at INVENSITY, explores this question in an interview with our AI expert Dr. Marc Großerüschkamp.
Christopher Seinecke: Marc, so far, machine learning (ML) has not been widely established in medium-sized businesses in system and machine operation as well as in production. But you see great opportunities for production in particular. Why?
Dr. Marc Großerüschkamp: Material and production costs – and thus the savings potential – are simply highest in production. To invest in AI, customers need a positive and tangible business case. In production, material is processed and often a great deal of energy is consumed, so the business case is easy to calculate if you can save material or energy.
Even if you only save a few cents per part, it adds up. What's more, efficiency gains in production are not a one-off effect: savings accrue on every single part, and as production volume increases, so does the savings effect. That is how we came to the topic of ML for production: cutting costs by optimizing the production process, for example by using less material and/or energy or producing less scrap. And this brings us to the area of quality prediction, or quality monitoring.
Christopher Seinecke: Of course, the desired result sounds promising at first. But why should production, of all things, be suitable for the application of ML? Can you explain that in more detail? How is this supposed to work?
Dr. Marc Großerüschkamp: ML needs one thing above all as its basis: high-quality, representative data. Production systems are digitally controlled and therefore generate data. Often, this data is not yet being recorded because the benefits are unclear or the digital infrastructure does not allow it. In other words, connectivity may not have been established everywhere yet, but the data is there. The task now is to come up with suitable ML use cases for it. Let's take the example of production quality: precise data is often available on exactly the key figures you want to improve – throughput times, quality deviations (e.g., from optical quality control), scrap rates, and so on.
These are optimal conditions for using machine learning models sensibly and profitably. It is then important to bring the right experts together and, at the same time, to think in business terms. Without the know-how that production experts have built up over the years, no AI developer can improve production – and vice versa. One thing is certain, however: if production data is available, a sensible ML use case can almost always be found. Production therefore holds all the cards for creating value with ML.
Christopher Seinecke: To understand this correctly: It’s about, for example, collecting sensor data that accumulates during production, recognizing patterns in it in an automated way, and then adjusting production processes accordingly. Is that right?
Dr. Marc Großerüschkamp: Basically, yes. Let’s look at a use case that we discussed with a manufacturer from the process engineering sector.
Here, an extrusion line presses plastic profiles by the meter; these are then cut to length and processed further. There is also quality control in the form of an optical system that scans the surface to sort out rejects. The surface quality, which is influenced by a vast number of parameters and measured variables, is essential in this case. At the same time, the system offers the possibility of adjusting production parameters such as temperatures, pressures, or speeds to keep the quality constant.
Nevertheless, errors occur time and again. This is normal. There is no production without rejects. But why is that? It is simply due to the complexity of the equipment and processes.
The production staff can have all the experience in the world: there is still no analytical formula that continuously determines the perfect production parameters while taking every measured value into account. That is precisely what complexity means here. And this brings us to the topic of artificial versus human intelligence. Humans think analytically; ML models do not. The strength of ML models is that they can take all influencing parameters into account to predict a specific value without knowing the exact relationships. They learn these relationships in the training phase from historical production data and quality deviations. The result is not an analytical model but a statistical one – and it can predict error probabilities very well.
Suppose there are deviations in the production parameters in our extrusion-line example – for instance, the plastic profile is pressed through a little too quickly or too slowly. This can lead to different types of defects; one example is a surface that lacks the desired structure, i.e., is not entirely smooth. So far, it has not been possible to control the process so that such defects never appear; defective pieces can only be sorted out after the fact. When a fault occurs, the piece is cut out and either disposed of or recycled by being chopped into granulate and fed back into the machine. The challenge is that both options are complex and expensive.
How could ML help here? The idea is to train an ML model on two kinds of data. On the one hand, the model is given all the process data on production parameters that produce a perfect plastic profile – in our specific case, temperatures, speeds, pressures, fan speeds, and so on.
On the other hand, there is historical data on faulty production runs. A human could never analyze this data meaningfully and comprehensively; the ML system, however, can be trained on it using supervised learning. The result is an ML model that predicts error probabilities early enough to counteract in time.
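To make this concrete: a minimal sketch of such a training setup in Python could look as follows. The file name, column names, and choice of a gradient-boosting classifier are illustrative assumptions, not details from the actual project.

```python
# Minimal sketch: train a supervised classifier that predicts the defect
# probability of a piece from its extrusion process parameters.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical export of historical process data: one row per produced
# piece, labeled by the optical quality control (1 = reject, 0 = good).
df = pd.read_csv("extrusion_history.csv")
features = ["melt_temperature", "line_speed", "pressure", "fan_speed"]
X, y = df[features], df["defect"]

# Hold back part of the history for validation and testing (see below).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# The model outputs an error probability per piece, not a yes/no rule.
defect_probability = model.predict_proba(X_test)[:, 1]
```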
Christopher Seinecke: So, we're talking about a model that predicts the probability of errors occurring based on past data – and that must first learn to do so. Depending on the quality and quantity of the data, it sounds as though the model itself will also make mistakes.
Dr. Marc Großerüschkamp: Yes, that’s right, but it’s less problematic than it might initially seem. I have already said that business thinking is also essential for the meaningful use of ML. Ultimately, it’s not just about training a good AI model. Instead, it is crucial from the very beginning to develop a strategy for integrating such an AI model into a system or process in a meaningful way.
A simple example: it is not always a matter of an AI completely taking over a human's tasks, let alone making decisions independently from day one. Let's stay in the area of quality. An AI model could, for instance, be used to pre-sort parts with a high probability of error rather than rejecting them directly, adjusting the machine, or stopping production. A human then rechecks the flagged parts. This allows people to focus on fewer parts and inspect them more closely – because we must not forget that people make mistakes too. In the end, humans would inspect fewer parts in the same amount of time, resulting in a lower error rate. Especially when relatively expensive pieces are being inspected, this yields relevant savings.
In this way, AI models and humans can complement each other meaningfully.
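One way to picture this pre-sorting is a simple probability threshold on top of the model's output – a sketch under the assumptions above, where the threshold value is a made-up number that would have to be tuned per use case:

```python
# Sketch of human/AI complementarity: instead of rejecting parts
# automatically, route high-risk parts to a human inspector.
INSPECTION_THRESHOLD = 0.2  # hypothetical value, to be tuned per use case

def triage(defect_probability: float) -> str:
    """Decide what happens to a part based on the predicted defect risk."""
    if defect_probability >= INSPECTION_THRESHOLD:
        return "manual inspection"  # a human rechecks only these parts
    return "ship"                   # low risk: no manual check needed

print(triage(0.05))  # -> ship
print(triage(0.45))  # -> manual inspection
```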
As another example, however, an AI model can also enable fully automated quality checks in a meaningful way. The model will not always deliver a correct result, but we can evaluate how often which kind of error occurs.
To do this, only part of the data is used to train the model, while the rest is kept back for validation and testing. Since the history tells us whether an error actually occurred, we can characterize the model and its performance.
Typically, one creates a so-called confusion matrix with four cases: 1. there was an error, but it was not detected; 2. there was an error, and it was detected correctly; 3. there was no error, but one was flagged; and 4. there was no error, and none was flagged. Using this matrix, the accuracy of the model's predictions can be quantified. For example, I can estimate how often errors will be flagged in the future even though there are none – or, conversely, how often real errors will go undetected.
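Continuing the earlier sketch, these four cases and the two rates just mentioned can be computed with scikit-learn; the test labels and probabilities below are tiny stand-in values so the snippet runs on its own.

```python
# Evaluate predictions against held-back test data via a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

# Stand-ins for the y_test / defect_probability of the training sketch.
y_test = np.array([0, 0, 1, 1, 0, 1, 0, 0])
defect_probability = np.array([0.1, 0.7, 0.8, 0.3, 0.2, 0.9, 0.05, 0.4])

y_pred = (defect_probability >= 0.5).astype(int)  # 0.5 cut-off as an example
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

false_alarm_rate = fp / (fp + tn)  # case 3: flagged although there is no error
miss_rate = fn / (fn + tp)         # case 1: real error that went undetected
print(f"False alarms: {false_alarm_rate:.1%}, missed errors: {miss_rate:.1%}")
```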
If these figures are supplemented with information on how high the costs of undetected or falsely flagged errors are, it is possible to calculate quite precisely whether using an ML model is economically worthwhile. In practice, the question should even be turned around: how accurate does my model have to be for its use to pay off financially?
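As a rough illustration of this break-even reasoning – with cost figures, volumes, and rates that are entirely made-up assumptions – the expected yearly cost of running the model can be compared against a baseline:

```python
# Hypothetical cost model: every figure here is an illustrative assumption.
COST_MISSED_DEFECT = 50.0  # EUR: a faulty part slips through undetected
COST_FALSE_ALARM = 2.0     # EUR: a good part is needlessly rechecked
PARTS_PER_YEAR = 1_000_000
DEFECT_RATE = 0.01         # 1% of parts are actually faulty

def yearly_cost(miss_rate: float, false_alarm_rate: float) -> float:
    """Expected yearly error-handling cost when running the ML model."""
    defects = PARTS_PER_YEAR * DEFECT_RATE
    good = PARTS_PER_YEAR - defects
    return (defects * miss_rate * COST_MISSED_DEFECT
            + good * false_alarm_rate * COST_FALSE_ALARM)

# Baseline: without any check, every defect slips through.
baseline = PARTS_PER_YEAR * DEFECT_RATE * COST_MISSED_DEFECT
print(f"Model: {yearly_cost(0.05, 0.02):,.0f} EUR vs. baseline {baseline:,.0f} EUR")
```

Turning the question around then means solving for the miss and false-alarm rates at which the model's expected cost drops below the baseline.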
Christopher Seinecke: That sounds promising. However, the question remains why ML should be used for this prediction. Aren’t classical algorithms or other mathematical calculations also suitable for this purpose?
Dr. Marc Großerüschkamp: Machine learning methods are not always better than classical algorithms. The crucial difference is that classical algorithms are based on knowledge, whereas ML models are based on statistics. To develop a good algorithm, you need precise knowledge of how all the influencing factors and measured values relate to the occurrence of errors. If this knowledge is available and you can build an algorithm on it, that algorithm is usually excellent, and you do not need machine learning. However, the more complex the production processes and the more parameters that play a role, the harder it becomes to map the relationships correctly in an algorithm – sometimes it is simply impossible. In such cases, statistical methods and machine learning can help, provided the appropriate data is accessible.
We already talked about data at the beginning. Especially in modern production plants, the conditions are often very good, and there is great potential for further improvements with machine learning.
_ _ _
Part 2 of the interview will follow with our newsletter in Q2 2023: We will talk about training an ML algorithm, prejudices about “false learning,” and selecting the appropriate data set.
Background
Dr. Marc Großerüschkamp is responsible for the topic of Machine Learning at INVENSITY. With the “Data Value Report,” INVENSITY offers companies a cost-conscious yet customized service for using artificial intelligence (AI) in system and machine operation and production. Our goal is to identify the specific use cases of artificial intelligence for SMEs, quantify their benefits, and demonstrate their feasibility. Specifically, we use our own automated data analysis to analyze existing (sensor) data within just ten days and make concrete suggestions for the individual use of machine learning. In addition to the technical feasibility, we also consider the economic added value, for example, through material or energy savings, better system availability, or quality improvement.