Real-Time Anomaly Detection For Visual Quality Control

Industrial production lines with computer vision

30 Sep 2022, by Anthony Cavin
In many industrial and commercial settings, product quality control is a critical task for ensuring that products meet customer expectations. To maintain quality, products often need to be visually inspected for anomalies. However, manually inspecting products for anomalies is a time-consuming and error-prone task.

To address this challenge, we need a real-time anomaly detection system that can automatically detect anomalies in products as they are being produced.

Recently, deep learning methods have been proposed for anomaly detection. These methods can automatically learn features from data and have shown promising results on various datasets.

For example, Anomalib is a great repository to benchmark state-of-the-art anomaly detection algorithms that leverage deep learning methods. This library has 8 algorithms (CFlow, DFM, DFKDE, FastFlow, PatchCore, PADIM, STFPM, GANomaly) as of September 2022.

So can we use those algorithms for real-time anomaly detection? Well, it depends on how strict our definition of “real-time” is.

These methods are often based on a pre-trained network such as Wide ResNet-50 or ResNet-18. Those networks are typically pre-trained on ImageNet, a dataset of 14,197,122 images that is commonly used to train deep neural networks.

Using a large pre-trained network such as Wide ResNet-50 makes it possible to detect small anomalies, but the inference time on an edge device can be quite long: roughly 69 million parameters have to be evaluated for every frame.
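As a quick sanity check, we can count those parameters with torchvision (assuming a recent torchvision is installed; no pre-trained weights are needed just to count them):

```python
from torchvision import models

# Build the Wide ResNet-50-2 architecture without downloading weights.
backbone = models.wide_resnet50_2(weights=None)

# Sum the number of elements of every parameter tensor.
n_params = sum(p.numel() for p in backbone.parameters())
print(f"Wide ResNet-50-2 parameters: {n_params / 1e6:.1f}M")  # roughly 69M
```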

Another approach would be to divide and conquer the problem: first extract meaningful features from the images, then use anomaly detection algorithms to detect outliers in the feature space.

The key idea is to learn a lower dimensional representation of the visual input and to use this representation to train a classifier that can distinguish between normal and anomalous inputs.

In this blog post, we will explore some of the most popular methods for visual feature extraction, including histograms of oriented gradients (HOG) and wavelet edge detection. We will also look at auto-encoders and Laplacian auto-encoders, which can be a powerful way to learn compact visual features.

Finally, we will have a look at two libraries that are extremely useful for benchmarking and implementing real-time anomaly detection methods in Python.

Enjoy!

Extract features from pictures

There are countless ways to compute and extract features from images. In this blog post, we will explore four popular methods: histogram of oriented gradients (HOG), wavelet edge detection, auto-encoders, and Laplacian auto-encoders.

Histogram of Oriented Gradients

The histogram of oriented gradients is a popular technique in image processing and computer vision. The HOG descriptor is able to capture the shape and appearance of an object in a picture. It is based on the idea that the edges of an object are a good indicator of its characteristics.

HOG representation of a cup (image by author)

The HOG descriptor is a vector of length N, where N is the number of bins in the histogram.

Each bin counts the pixels in a portion of the image whose gradient falls into a particular range of orientations.

The first step is therefore to compute the gradient of the image in the x and y directions for each pixel.
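As an illustration, here is a minimal sketch of how a HOG descriptor could be computed with scikit-image (the file name and parameter values are hypothetical examples, not tuned for any particular use case):

```python
from skimage import color, io
from skimage.feature import hog

# Load an image and convert it to grayscale before computing gradients.
image = color.rgb2gray(io.imread("cup.png"))  # hypothetical file name

# Compute the HOG descriptor: gradients are binned into 9 orientations
# per 8x8-pixel cell, then normalized over 2x2 blocks of cells.
features, hog_image = hog(
    image,
    orientations=9,
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    visualize=True,       # also return an image of the histogram grid
    feature_vector=True,  # flatten the descriptor into a 1-D vector
)

print(features.shape)  # one long feature vector per image
```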

Wavelet edge detection

A wavelet transform is a mathematical tool that can be used to decompose a signal into its frequency components. The wavelet image decomposition technique can be used in many different applications. For example, it can be used to improve the quality of images, denoise images, or even detect edges.

wavelet decomposition of a cup (image by author)

Once the image has been decomposed with the wavelet transform, the decomposed images can be analyzed to extract information. For example, the horizontal and vertical details can be used to detect the boundaries of objects in the image.

For more information about how to extract edges, the approach in “A Low Redundancy Wavelet Entropy Edge Detection Algorithm” [1] shows a systematic way to extract edges by combining the horizontal and vertical components out of the wavelet decompositions.
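Here is a minimal sketch of this idea with PyWavelets (the wavelet choice and the simple combination of details are only illustrative and not the exact algorithm of [1]):

```python
import numpy as np
import pywt
from skimage import color, io

# Load a grayscale image.
image = color.rgb2gray(io.imread("cup.png"))  # hypothetical file name

# Single-level 2-D wavelet decomposition: approximation plus
# horizontal, vertical, and diagonal detail coefficients.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")

# Combine the horizontal and vertical details into a rough edge map.
edges = np.sqrt(cH**2 + cV**2)

print(cA.shape, edges.shape)  # roughly half the original resolution
```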

Auto-encoder

A particularly powerful approach to unsupervised feature learning is auto-encoding. Auto-encoders are neural networks that are trained to learn a representation of the data that is efficient and compressed. That is, the neural network is trained to map the input data to a lower-dimensional representation, and then back to the original data.

auto-encoder example (image by author)

The hope is that the lower-dimensional representation will capture the essential structure of the data and that the reconstruction will be close to the original data.
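Below is a minimal sketch of such an auto-encoder in PyTorch, assuming the images have already been flattened into fixed-length vectors (the layer sizes are arbitrary examples):

```python
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        # Encoder: map the input to a lower-dimensional latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: map the latent vector back to the input space.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = AutoEncoder()
x = torch.rand(16, 784)                  # a batch of flattened images
x_hat, z = model(x)
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction error
```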

Laplacian auto-encoder

The Laplacian auto-encoder is a variant of the standard auto-encoder that can be used to learn a lower dimensional representation while preserving the neighborhood structure of the original data.

For this purpose, we need to construct the K-Nearest-Neighbor Graph (K-NNG). The K-NNG is a data structure that can be used to store the relationship between data points in a dataset. Each data point is represented as a node in the graph, and the edges between nodes represent the similarity between data points. More specifically, each point is connected to its K nearest neighbors.

The similarity between data points can be measured using a variety of distance metrics, such as the Euclidean distance, Manhattan distance, or cosine similarity, but also the Wasserstein distance if we are comparing histograms such as the HOG.

K-Nearest-Neighbor Graph (image by author)
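As a small illustration, scikit-learn can build such a graph directly from a matrix of feature vectors (here with the default Euclidean distance; other metrics could be plugged in):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Each row is the feature vector of one image (e.g. a HOG descriptor).
features = np.random.rand(100, 64)  # placeholder data

# Sparse adjacency matrix of the K-NNG: entry (i, j) holds the distance
# between point i and its neighbor j, and is zero otherwise.
knn_graph = kneighbors_graph(features, n_neighbors=5, mode="distance")

print(knn_graph.shape)  # (100, 100), with 5 non-zero entries per row
```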

So why is the K-NNG useful for the Laplacian auto-encoder?

The Laplacian auto-encoder has the same primary goal as any auto-encoder (minimizing the reconstruction error), but an extra term is added to the loss function to preserve the neighborhood relations between the original images and their embeddings, i.e. their representation in the lower dimensional space.

example of similar images in the encoded latent space (image by author)

The benefit of such an auto-encoder is that the K-NNG in the input is similar to the K-NNG in the lower dimensional representation.

We can therefore expect that similar pictures will end up in a similar area in the latent space, and we can use the lower dimensional representation of the pictures to detect anomalies.
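Here is a minimal sketch of what this extra loss term could look like in PyTorch, reusing the auto-encoder above (the weight matrix W would come from the K-NNG of the batch, and lam is a hypothetical regularization weight):

```python
import torch

def laplacian_ae_loss(x, x_hat, z, W, lam=0.1):
    """Reconstruction error plus a graph-regularization term that pulls
    the embeddings of neighboring inputs closer together."""
    recon = torch.nn.functional.mse_loss(x_hat, x)

    # Pairwise squared distances between the embeddings in the batch.
    dist = torch.cdist(z, z) ** 2

    # Penalize large embedding distances between points that are
    # neighbors in the K-NNG (W[i, j] > 0).
    graph_term = (W * dist).sum() / W.sum().clamp(min=1.0)

    return recon + lam * graph_term
```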

Real-time anomaly detection with Python

Anomaly detection is a process of identifying unusual patterns in data that do not conform to expected behavior. It is an important tool for monitoring and maintaining the health of systems and processes, as it can help detect and diagnose problems early on.

There are multiple approaches, algorithms, and libraries to detect anomalies in real time, each with its advantages and disadvantages. A nice way to get started is to use the following libraries:

  • PyOD is a Python toolkit for detecting outlying objects in multivariate data.
  • PySAD is a Python toolkit for detecting anomalies in streaming data.
example of real-time anomaly detection (image by author)

Both PyOD and PySAD are open-source projects released under BSD licenses and are available on PyPI, so they can easily be installed with pip:

pip install pyod pysad

PyOD provides 30 detection algorithms as of September 2022, which makes it easy to benchmark many methods, but it can be hard to decide which one is best for a particular use case.

To get started, we can train a simple distance-based algorithm such as KNN or a popular algorithm such as iForest (Isolation Forest) to create a baseline that can be used to benchmark other algorithms.
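Here is a minimal sketch of such a baseline with PyOD, using the feature vectors extracted earlier as training data (the placeholder data and the contamination value are illustrative assumptions):

```python
import numpy as np
from pyod.models.iforest import IForest
from pyod.models.knn import KNN

# Rows are feature vectors of normal products (e.g. auto-encoder embeddings).
X_train = np.random.rand(500, 32)  # placeholder data
X_test = np.random.rand(20, 32)    # new products to inspect

for name, model in [("KNN", KNN()), ("iForest", IForest(contamination=0.01))]:
    model.fit(X_train)
    labels = model.predict(X_test)            # 0 = inlier, 1 = outlier
    scores = model.decision_function(X_test)  # higher = more anomalous
    print(name, labels.sum(), "potential anomalies")
```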

iForest is a particularly versatile and useful algorithm for detecting anomalies in a data set. It is a fast and effective way to find outliers in high-dimensional data. iForest works by constructing several decision trees, each of which is trained on a random subset of the data.

The length of the path needed to make a prediction indicates how common a data point is. In other words, a point is likely an outlier if it can be isolated with only a small number of steps in the decision process.

For iForest, the path lengths of several decision trees are combined to give a final score, which is used to determine whether a point is an outlier. iForest is effective at detecting outliers in a variety of data sets and is also efficient, scalable, and easy to use.
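Since our setting is real-time, PySAD can score each new feature vector as it arrives. Here is a minimal sketch with the xStream model (the model choice, feature dimension, and alert threshold are illustrative assumptions):

```python
import numpy as np
from pysad.models import xStream

model = xStream()  # streaming anomaly detector with default parameters

# Simulate a stream of feature vectors arriving one at a time.
for _ in range(1000):
    x = np.random.rand(32)              # placeholder feature vector
    score = model.fit_score_partial(x)  # update the model and score the point
    if score > 50:                      # hypothetical alert threshold
        print("potential anomaly, score =", score)
```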

Conclusion

No matter how good our anomaly detection system is, it will never be perfect. There will always be false positives and false negatives. The goal is to minimize these errors as much as possible so that we can take appropriate action when an anomaly is detected.

There are many different ways to approach anomaly detection, and the best approach will vary depending on the specific application. In general, however, the following tips can help build a more effective anomaly detection system:

  1. Collect as much data as possible. The more data you have, the easier it will be to identify patterns and train machine learning models.
  2. Pre-process the data to remove noise and outliers. This will make it easier for machine learning models to learn the underlying patterns.
  3. Use multiple machine learning models. Each model will make different errors, so using multiple models can help to reduce the overall error rate.
  4. Use a combination of supervised and unsupervised machine learning methods. Supervised methods can be used to train models on known anomalies, while unsupervised methods can be used to identify new anomalies.
  5. Monitor the system constantly and fine-tune the models as more data become available.

Thanks for reading.

[1] Tao, Yiting, et al. “A Low Redundancy Wavelet Entropy Edge Detection Algorithm.” Journal of Imaging 7.9 (2021): 188.
