Quality inspection with AI vision

A common question is whether AI vision is even suitable for a particular task, or capable of solving it at all. Unfortunately, this chicken-and-egg problem too often means the technology is never evaluated in the first place. Certainly, the technology still has to mature, especially in industrial environments, before it reaches the acceptance level of proven classical image-processing methods. On the other hand, user-friendly software tools already exist that enable even inexperienced users to evaluate their applications with AI vision and implement them intuitively.

Beneficially different

The fact that AI-based methods work in a completely different way from rule-based approaches is their greatest advantage. It enables providers to develop entirely new, far more intuitive tools for image processing. These tools already make it possible to transfer human quality requirements to AI-based image-processing systems through machine learning, in order to optimize and automate processes. Often, not a single line of source code needs to be written, which opens AI vision to entirely new target groups that no longer necessarily need programming skills. Feasibility analyses can thus be carried out by the employees who know the products and their special features best; companies are no longer necessarily dependent on programmers and image-processing experts during the evaluation phase.

Indescribably simple

Let's look at the strengths of AI vision with an application example from one of IDS's customers. Rotatable axles are often secured with snap rings, but only a ring fully engaged in the axle's groove guarantees a secure connection. A faulty fit can result in product damage. The task for quality assurance seems simple: check that the ring is properly engaged! In practice, however, this inspection is still performed by humans, because no reliable automation solution had been found. Tests with rule-based image processing could only determine whether the snap ring was present or missing. At best, it was possible to detect whether the "ears" of the snap ring were further apart than they should be. That alone, however, does not prove the ring is securely engaged; it could simply be lying on top! The subtle image differences in the error case were very difficult to describe with rules.

A feasibility analysis using machine learning methods showed that only a few example images of correct and incorrect cases, in this scenario just under 300, were required to train a neural network that could predict an incorrectly seated snap ring with high confidence. Manual visual inspection was then only necessary for the few uncertain results.
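The training loop behind such a feasibility study can be sketched in miniature. The snippet below is purely illustrative: it uses synthetic 8x8 "images" and a plain logistic regression instead of a real CNN and real camera data, but it shows the same train-then-predict workflow on just under 300 labelled examples.

```python
# Illustrative sketch only: synthetic data and a numpy-only logistic
# regression stand in for real snap-ring images and a trained CNN.
import numpy as np

rng = np.random.default_rng(0)

def make_image(engaged):
    """Synthetic 8x8 image: an engaged ring darkens the groove region."""
    img = rng.normal(0.5, 0.05, (8, 8))
    if engaged:
        img[3:5, 3:5] -= 0.3   # ring sits deep in the groove
    else:
        img[3:5, 3:5] += 0.1   # ring lying on top reflects more light
    return img.clip(0.0, 1.0)

# just under 300 labelled examples, as in the feasibility study
X = np.array([make_image(i % 2 == 0).ravel() for i in range(298)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(298)])
X -= X.mean(axis=0)            # center features for simpler training

# plain logistic regression trained by gradient descent
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = ((probs > 0.5) == (y == 1.0)).mean()
```

On this toy data the two classes separate cleanly; in a real project the hard part is collecting representative GOOD and BAD images, not the training code.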

Probably reliable

How well a trained neural network performs can be validated with tests on sample images. A test run with images of known error classes provides information about the learning accuracy and the quality of the AI results. The more clearly the predicted probabilities for GOOD and BAD cases differ from each other, the more clearly a decision threshold between GOOD and BAD can be defined, so that as few cases as possible are misclassified later in productive operation. The variance of the GOOD probabilities determined during the test also helps to optimize the production environment: the less the environmental conditions, and thus irrelevant image content, vary, the more concrete the quality statements the AI analysis can make about the relevant distinguishing features.
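Picking that decision threshold can be automated. The sketch below assumes two arrays of "GOOD" scores from a validation run on images of known classes (the numbers are invented for illustration) and scans for the threshold with the fewest misclassifications; the gap between the two distributions indicates how robust the chosen threshold is.

```python
# Hypothetical validation scores; in practice these come from running the
# trained network on test images of known GOOD and BAD classes.
import numpy as np

good_probs = np.array([0.97, 0.93, 0.99, 0.95, 0.91, 0.98])
bad_probs  = np.array([0.08, 0.15, 0.03, 0.22, 0.11, 0.05])

def best_threshold(good, bad):
    """Scan candidate thresholds; keep the one with the fewest errors."""
    candidates = np.linspace(0.0, 1.0, 101)
    errors = [(good < t).sum() + (bad >= t).sum() for t in candidates]
    return candidates[int(np.argmin(errors))]

t = best_threshold(good_probs, bad_probs)

# the wider the gap between the distributions, the more robust t is
separation = good_probs.min() - bad_probs.max()
```

With well-separated distributions, as here, a whole band of thresholds yields zero validation errors, which is exactly the margin one wants before going into productive operation.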

Figure 1 The validation of a trained CNN with test data of known error classes shows, on the one hand, how well the network identifies errors and, on the other, how much the results vary.

Incredibly explainable

The fact that AI quality decisions are not traceable to a clearly defined set of rules, and that the algorithm behaves more like a black box, does not mean the results cannot be explained. Tools such as attention maps or anomaly maps visualize where the pixels relevant for a prediction are located in the image and how strongly they contribute. In the case of our snap ring inspection, these overlays highlight the relevant features of the known defect classes, as expected. Anomaly detection in particular makes it possible to sort out unknown, and thus untrained, defect cases. This shows that machine learning methods can go beyond the trained knowledge of known features and precisely signal unknown, emerging problems. In one example, an out-of-focus camera image caused the anomaly map to mark deviations in several places.

Figure 2 Attention Maps show relevant image pixels and thus visually explain how AI predictions are produced.

Forward-looking

Anomaly detection thus brings a further advantage for quality assurance that would be hard to realize with rule-based image processing. The decisive factor is the ability to detect any deviation from the normal case, even deviations that are underrepresented in the training data, in other words, deviations that were not anticipated at all. So where other methods become uncertain about something "unknown", and sometimes even fail, this method ensures with high certainty that nothing remains hidden, including everything that may occur at some point during normal operation. Continuous data about the system condition, for example in the form of increasing product defects or deviations, i.e. anomalies, makes it possible to determine the optimal time to maintain a system before product quality drops too low or a worst case such as a plant failure occurs.
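Turning that idea into a maintenance trigger can be as simple as watching the trend of the anomaly scores. The sketch below uses simulated scores and an invented limit; the point is to act on the rolling average, i.e. on sustained drift, rather than on single noisy outliers.

```python
# Sketch of trend-based maintenance timing. Scores and limit are made up;
# a real system would feed in anomaly scores from live inspection.
import numpy as np

def maintenance_due(scores, window=5, limit=0.6):
    """Return the index where the rolling mean first exceeds the limit."""
    for i in range(window, len(scores) + 1):
        if np.mean(scores[i - window:i]) > limit:
            return i - 1
    return None

# slowly degrading plant: anomaly scores creep upward with noise
rng = np.random.default_rng(1)
scores = 0.2 + np.arange(40) * 0.015 + rng.normal(0, 0.03, 40)
alarm_at = maintenance_due(scores)
```

Because the rolling mean smooths the noise, the alarm fires when the plant condition has genuinely drifted, giving time to schedule maintenance before quality drops too far.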

Figure 3 Increasing anomaly errors may indicate degradation of a plant condition due to tool wear, dirt or other disturbances.

User-friendly tools

AI vision can be used in many ways in quality assurance and can extend or improve existing applications. It is important to proceed step by step. A feasibility analysis in advance helps clarify whether a task can actually be solved with AI vision before significant money and time are spent on expert personnel, knowledge building and AI systems. User-friendly software tools that enable an initial evaluation purely on the basis of images, even in the cloud, already help with this today. This requires neither a real vision system with AI capabilities nor a separate training platform, which greatly reduces the investment risk. Intuitive user interfaces and easy-to-understand workflows and wizards also create an easy entry point for users who do not yet have much experience with AI, image processing or application programming.

Nevertheless, AI vision requires a certain understanding of what suitable image material must look like for effective training. This is the prerequisite for drawing trustworthy conclusions later on that can be evaluated in a comprehensible way. It is also important to bring experienced partners on board who do not just promise the best AI system, but can examine and support the entire workflow of machine learning-based quality assurance. Full support from a single source is another component of success in the AI vision environment that should not be underestimated. The use of AI vision in quality assurance is therefore perhaps not quite as simple as one is told everywhere, but it is certainly simpler than is often assumed.