In 2013, Goldman Sachs grouped five of the largest technology companies under the acronym FAANG to describe their influence on society and the economy. The acronym stands for Facebook, Amazon, Apple, Netflix and Google (now traded as part of the Alphabet group). Together, they were worth USD 3.1 trillion in March 2019.
Each of these companies uses artificial intelligence (AI) in a variety of applications in its services: chatbots and voice assistants (Apple's Siri, Amazon's Alexa, and Facebook's platform for chatbot solutions), personalized film and product recommendations (Netflix and Amazon), and personalized advertising and search suggestions (Google and Facebook). They also conduct AI research, such as Google's AlphaGo or Netflix's personalized movie trailers. What these applications have in common is that they require a powerful server infrastructure; they are therefore also called "cloud AIs".
The core competencies of German industry, however, lie in manufacturing. The German share index (DAX) includes SAP, whose core competence is software solutions, four companies from the automotive sector, and a total of seven from the chemical, medical technology, pharmaceutical and plant engineering sectors. 98% of the processors manufactured worldwide are used in embedded systems, where they perform control and regulation tasks.
In cars, several dozen control units coordinate, for example, airbags and brakes. In household appliances such as washing machines or dishwashers, embedded systems provide the control logic. In medical technology, embedded systems detect critical states during an operation, and in plant engineering they control, for example, the consistency of layer thicknesses. The complexity of these systems lies in their reliability and high availability rather than in the required computing power.
AI on embedded systems
An AI application consists of a mathematical model, e.g. an artificial neural network, that has been trained by an algorithm to make a prediction that solves a problem. During training, decision rules are derived from large amounts of data and corrected until the desired prediction accuracy is achieved. The model trained for a specific use case is called an agent.
The training phase requires a lot of computing power because of the usually large amount of training data and the repetitive learning process. Applying the trained rules to the problem, i.e. letting the agent solve it, requires only little computing power. The computationally intensive training is comparable to a child learning to ride a bicycle: once the necessary coordination has been internalized (the correct rules have been derived), riding the bicycle itself (the application) requires little mental capacity.
Figure 1: Schematic illustration of the learning process and the application of an AI, using the example of supervised learning
An advantage of using AI is that training and application can be separated. Training can take place on a central server; the fully trained agent can then, for example, be deployed to an embedded system and applied there. Using AI to solve problems in this way offers large and, so far, untapped potential for industry.
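The separation of the two phases can be sketched in a few lines of Python. This is a toy example under illustrative assumptions (a linear model trained by gradient descent, not the article's plant): the expensive part is the training loop, while the deployed "agent" is nothing more than the frozen parameters and a cheap prediction function.

```python
# Minimal sketch: training (expensive, server-side) is separated from
# application (cheap, embedded-side). The model y = w*x + b is illustrative.

def train(samples, epochs=2000, lr=0.01):
    """Gradient-descent training -- the computationally expensive phase."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b  # the fully trained "agent" is just these parameters


def make_agent(w, b):
    """The deployed agent: a cheap function of the frozen parameters."""
    return lambda x: w * x + b


# Server side: learn the relation y = 2x + 1 from example data
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = train(data)

# Embedded side: apply the frozen agent -- no training code needed here
agent = make_agent(w, b)
print(round(agent(4.0), 2))  # close to 9.0
```

On a real embedded target the frozen parameters would of course be compiled into firmware rather than shipped as Python, but the division of labor is the same.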
Example: Optimizing the control of compressor inlet blades to increase efficiency
In chemical plants, several compressors are often connected in series to process a fluid and are driven by a common shaft. The rotational speed is therefore predetermined, and the inlet blades of each compressor are controlled instead, since they can be used to influence the operating points of the individual compressors. Changing pressure and temperature conditions make continuous control necessary. By using AI, the previous PID controller could be optimized and the efficiency of the plant improved.
The agent for this purpose could be trained by means of reinforcement learning on a simulation of the plant. From speed, pressures and temperatures, the efficiency of the plant and its subsystems can be calculated at any point in time. During training, both good control quality and good efficiency are rewarded.
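Such a combined reward could be shaped as follows. This is a hypothetical sketch, not the plant's actual reward function: the weights `w_track` and `w_eff` and the example values are invented, and a real formulation would be tuned against the simulation.

```python
# Hypothetical reward shaping for the compressor example: the agent is
# rewarded both for tracking the setpoint (control quality) and for high
# plant efficiency. All weights and values are illustrative assumptions.

def reward(setpoint, measured, efficiency, w_track=1.0, w_eff=0.5):
    """Combined reward: small tracking error and high efficiency are good."""
    tracking_penalty = (setpoint - measured) ** 2
    return -w_track * tracking_penalty + w_eff * efficiency


# A step with slightly worse tracking but high efficiency can score better
# than a step with perfect tracking and poor efficiency:
print(reward(1.0, 0.98, 0.85))  # ~0.4246
print(reward(1.0, 1.00, 0.60))  # 0.3
```

The trade-off between the two terms is exactly the design decision discussed below: how much control quality the agent may sacrifice for efficiency.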
The agent pre-trained in this way is tested extensively in the simulation and then transferred to the real, still closely monitored plant. After initial tests over several weeks under close monitoring, the final optimization of the agent can be performed. After successful completion, the agent is transferred to a suitable embedded controller and permanently installed in the plant. In this plant, an efficiency improvement in the tenths-of-a-percent range pays for itself after only a few months.
In detail: Optimization of a controller by AI
With a classic PID controller, the controlled variable should follow the reference variable as closely as possible. By selecting the constants for the proportional, integral and derivative components of the controller, the quality criteria (control speed, overshoot and robustness to changes in the controlled system) can be tuned. Other quantities, such as the efficiency of the plant, cannot be included in the control by classical controllers.
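For reference, the classic approach looks like this. The discrete PID controller below is textbook material; the gains and the first-order toy plant are illustrative assumptions, not values from the compressor example. Note that efficiency appears nowhere in the control law.

```python
# Textbook discrete PID controller. Gains kp, ki, kd determine control
# speed, overshoot and robustness; efficiency cannot enter this control law.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt            # integral component
        deriv = (err - self.prev_err) / self.dt   # derivative component
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


# Drive a toy first-order plant (x' = u) toward setpoint 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
x = 0.0
for _ in range(200):
    x += pid.step(1.0, x) * 0.1
print(round(x, 3))  # settles near 1.0
```

A production implementation would add anti-windup and derivative filtering, but the structural limitation stays the same: the controller only ever sees the tracking error.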
Figure 2: Control loop with controller
A controller based on AI can include the resulting efficiency in its prediction. If necessary, the agent can sacrifice some control quality to achieve a better efficiency of the whole plant. To calculate the efficiency, further sensor values, such as speed, pressures and temperatures at different points of the plant, are made available to the agent. The result is a system in which the agent takes over the task of the controller.
Figure 3: Control loop with an agent trained by AI; the agent is shown as a schematic artificial neural network (ANN). Other models are conceivable.
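The deployed agent then amounts to a feed-forward pass of such a network over the extended sensor vector. The sketch below is purely illustrative: the network is tiny, the weights are hand-made placeholders rather than trained values, and the five inputs merely stand in for the setpoint and the additional sensor values mentioned above.

```python
# Illustrative sketch: the agent maps a sensor vector (setpoint, measured
# value, speed, pressure, temperature) to an actuator command via one
# feed-forward pass. All weights below are placeholders, not trained values.
import math

def forward(weights, biases, inputs):
    """One hidden layer with tanh activation, linear output."""
    w_hidden, w_out = weights
    b_hidden, b_out = biases
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out


w_hidden = [[0.1] * 5, [-0.2] * 5, [0.05] * 5]  # 3 hidden neurons, 5 inputs
w_out = [0.3, -0.1, 0.2]

control = forward((w_hidden, w_out), ([0.0, 0.0, 0.0], 0.0),
                  [1.0, 0.9, 0.5, 0.8, 0.7])
print(round(control, 3))
```

Such a forward pass involves only multiply-accumulate operations and a few activation-function evaluations, which is why inference fits comfortably on an embedded controller.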
A simulation of the plant, which is also used to design critical controllers, often already exists. The agent can be trained directly in this simulation using reinforcement learning, so the effects of efficiency optimization can be assessed in advance. One question, for example, is how far the agent may deviate from the optimal manipulated variable in order to optimize efficiency.
If no simulation exists, the agent can also be trained on previously recorded data. However, the values the agent is to optimize, e.g. the efficiency, must be derivable from the recorded data. Identifying these target values is called labeling and is a prerequisite for supervised learning.
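Labeling recorded data might look as follows. This is a hypothetical sketch: the recorded quantities, the numbers, and in particular the `derive_efficiency` formula are invented placeholders; the only real requirement, as stated above, is that the target value be computable from what was recorded.

```python
# Hypothetical labeling of recorded plant data for supervised learning.
# The "efficiency" formula is a placeholder, not real compressor physics;
# the point is that the label must be derivable from the recorded values.

records = [  # (speed [rpm], pressure_in [bar], pressure_out [bar], temp [K])
    (3000.0, 1.0, 3.2, 310.0),
    (3000.0, 1.1, 3.0, 315.0),
    (2900.0, 0.9, 3.1, 305.0),
]

def derive_efficiency(speed, p_in, p_out, temp):
    """Placeholder label function computed purely from recorded data."""
    return (p_out / p_in) / (speed / 1000.0) * (300.0 / temp)


# Each training example pairs the recorded features with the derived label
dataset = [((s, pi, po, t), derive_efficiency(s, pi, po, t))
           for s, pi, po, t in records]
print(round(dataset[0][1], 3))
```

If the target value cannot be derived like this, the recorded data cannot be labeled, and supervised learning is not applicable.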
In a final optimization loop, the pre-trained agent can be transferred to the real plant by means of transfer learning. There, the deviations between simulation and reality are compensated in a final training loop. This is one of the newest branches of AI research and is only just finding its way into applications.
Example: Sensor replacement
Another example of AI on embedded systems is sensor replacement. In a facility such as a large excavator, an agent can be trained to predict the value of a particular critical sensor, e.g. a temperature sensor, based on the values of other temperature, pressure or vibration sensors. If this critical sensor fails, the machine does not have to be stopped immediately but can, subject to safety measures, continue operating on the agent's prediction until the next suitable maintenance date. Modeling the system mathematically, and thus the relationship between the different sensor values, is no longer feasible due to the complexity. Here the strength of AI models, especially artificial neural networks, comes to the fore: they can learn the underlying relationships from example data.
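The idea can be sketched with synthetic data. The sensor names, the linear relationship and the noise level below are all invented for illustration, and a plain least-squares fit stands in for the ANN mentioned above; the principle of predicting the failed sensor from the remaining ones is the same.

```python
# Sketch of sensor replacement on synthetic data: fit a model that predicts
# a critical temperature from the remaining sensors, so the machine can run
# on the prediction if that sensor fails. Least squares stands in for an ANN.
import numpy as np

# Synthetic history: T_crit happens to be a noisy linear function of an
# auxiliary temperature and a vibration sensor (relationship is invented).
rng = np.random.default_rng(0)
t_aux = rng.uniform(60.0, 90.0, 200)
vib = rng.uniform(0.1, 0.5, 200)
t_crit = 1.5 * t_aux + 20.0 * vib + 5.0 + rng.normal(0.0, 0.1, 200)

# Train the "replacement agent" on the recorded data
X = np.column_stack([t_aux, vib, np.ones_like(t_aux)])
coef, *_ = np.linalg.lstsq(X, t_crit, rcond=None)

# If the critical sensor fails, the agent's prediction takes over
predicted = float(coef @ [75.0, 0.3, 1.0])
print(round(predicted, 1))  # about 1.5*75 + 20*0.3 + 5 = 123.5
```

In a real machine the relationship is nonlinear and high-dimensional, which is exactly why an ANN trained on example data is used instead of an explicit model.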
Common to all embedded AI applications is that the agent is transferred to the embedded system only after training is complete. From this point on, the system always behaves the same for the same inputs. Continuously learning embedded systems are conceivable but require continuous monitoring.
The possible use cases of AI on embedded systems are almost unlimited. Currently, the availability of compilers is the limiting factor. Artificial neural networks, which are often used because of their flexibility, are mathematically represented as graphs. Only for a few architectures do compilers exist that can convert this mathematical representation into machine code and optimize it for execution. To close this gap, cross-compilers are being developed that can address all microcontroller architectures via the intermediate step of a generalized representation.
We recommend that companies wanting to exploit the potential of AI evaluate early on which process and product data are relevant. The broad use of embedded systems offers great potential for German industry. It is crucial to build up competence now and to identify and leverage this potential.