Why PANDA's Edge AI platform is built on microservices

PANDA | DRIFT, illustrated with the example of a computer vision application.

14 May 2021, by Michael Welsch

If infrastructure code and application logic are too interwoven in monolithic software, it becomes increasingly difficult to implement new functions after a certain point. For the development of a smart factory, however, it is essential that no infrastructural scaling problems occur. Several hundred AI models must be trained, organised, rolled out and networked in a smart factory without having to revise software components that are already in production. In addition, the ability to roll out security-relevant patches to all devices must be guaranteed at all times.

On top of this, there are two further challenges when applying AI in manufacturing: the connection of sensors cannot easily be virtualised, and low latency must be guaranteed when integrating AI into the real-time environment of the automation.

With these challenges in mind, we developed our DRIFT platform. DRIFT is designed from the outset as a cross-platform microservice architecture, so that individual AI software components can be freely rolled out and networked across the edge and the cloud.

Such an architecture is now state of the art in software development for complex cloud projects.

Setting up a microservice infrastructure and cleanly defining the APIs and protocols between the individual services involves a considerable amount of additional work in software development. In addition, the isolation and encryption of the software components leads to perceptibly lower overall performance, since sensor data, for example, must always be copied between services via a network protocol and cannot simply be passed on in memory as with a monolithic approach. However, these disadvantages do not matter if a monolithic approach would otherwise run into hard limits when scaling the software. The lost performance can be compensated relatively easily with additional computing power.

The following block diagram shows a typical AI application in production that uses a multi-camera system to monitor products on a conveyor belt.

Image acquisition is initiated by a trigger service, which in this case simply fires at fixed time intervals to provide a continuous video stream for subsequent processing. Based on this software trigger, a DRIFT hardware controller generates an exact real-time sequence that triggers the LEDs and the cameras, so that each photo is taken five times with precisely timed, different lighting setups of the LEDs. This precise timing would not be possible with a pure software solution in a microservice and control of the camera via USB alone.
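
To illustrate the split between the coarse software trigger and the precisely timed hardware sequence, here is a minimal sketch of what such a trigger service could look like, assuming a local MQTT broker, the paho-mqtt client library and a hypothetical topic name; the actual DRIFT trigger service and the real-time sequencing in the hardware controller are not shown.

```python
# Minimal sketch of a software trigger service (not the DRIFT implementation).
# Assumes a local MQTT broker and a hypothetical topic "drift/trigger".
import json
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style API

TRIGGER_INTERVAL_S = 0.2  # fixed time interval between acquisitions (assumption)

client = mqtt.Client(client_id="trigger-service")
client.connect("localhost", 1883)
client.loop_start()

frame_id = 0
while True:
    # The hardware controller subscribes to this topic and generates the
    # precisely timed LED/camera sequence; only the coarse trigger is software.
    client.publish("drift/trigger",
                   json.dumps({"frame_id": frame_id, "timestamp": time.time()}))
    frame_id += 1
    time.sleep(TRIGGER_INTERVAL_S)
```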

In the camera service, the captured images are first corrected for lens distortion and perspective.
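
A minimal sketch of this rectification step, assuming OpenCV and previously stored calibration data (the camera matrix, distortion coefficients and homography file names are hypothetical):

```python
# Sketch of the rectification step using OpenCV and assumed calibration data.
import cv2
import numpy as np

# Intrinsics and distortion coefficients from a prior camera calibration (assumption).
camera_matrix = np.load("camera_matrix.npy")
dist_coeffs = np.load("dist_coeffs.npy")
# Homography mapping the camera view onto the conveyor-belt plane (assumption).
homography = np.load("belt_homography.npy")

def rectify(image: np.ndarray) -> np.ndarray:
    # 1) remove lens distortion, 2) warp into a top-down view of the belt
    undistorted = cv2.undistort(image, camera_matrix, dist_coeffs)
    h, w = undistorted.shape[:2]
    return cv2.warpPerspective(undistorted, homography, (w, h))
```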

The rectified images are then passed to an object recognition service based on a deep convolutional network, which marks the individual objects. Because this step requires high performance, the algorithm uses a dual Coral Edge TPU as an AI co-accelerator in a mini PCIe slot of the edge device, if available.
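
As an illustration, a detection service along these lines could look roughly as follows, assuming the open-source pycoral library and a hypothetical TFLite detection model compiled for the Edge TPU; the actual DRIFT detection service may use different tooling.

```python
# Sketch of a detection step on a Coral Edge TPU (library choice is an assumption).
import numpy as np
from pycoral.adapters import common, detect
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("object_detector_edgetpu.tflite")  # hypothetical model
interpreter.allocate_tensors()

def detect_objects(image: np.ndarray, threshold: float = 0.5):
    # The image must already be resized to the model's input resolution.
    common.set_input(interpreter, image)
    interpreter.invoke()
    # Returns bounding boxes, class ids and scores for each detected object.
    return detect.get_objects(interpreter, score_threshold=threshold)
```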

A Kalman-filter-based tracking algorithm then assigns each detected object a virtual serial number. Each object is cropped out of the video stream and its position is corrected pixel by pixel using the averaged velocity from the Kalman filter. The stack of five individual images is then compressed and normalised using a wavelet and entropy encoder, which reduces noise and retains the features essential for evaluation.
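
A constant-velocity Kalman filter is one possible basis for such a tracker. The following minimal sketch (with assumed noise parameters and time step, not the DRIFT tracker itself) shows the predict/update cycle whose averaged velocity could be used to correct the crop position:

```python
# Minimal constant-velocity Kalman filter for tracking an object on the belt.
import numpy as np

dt = 0.2  # time between image stacks (assumption)
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only the detected position (x, y) is measured
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2           # process noise (assumption)
R = np.eye(2) * 1.0            # measurement noise (assumption)

x = np.zeros(4)                # initial state
P = np.eye(4)

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z):
    """z: measured (x, y) centre of the detected object."""
    global x, P
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

predict()
update(np.array([10.0, 5.0]))
velocity = x[2:]  # averaged velocity used to correct the crop position
```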

These image stacks are then passed in parallel to three evaluation services, each of which extracts only the information it needs (a sketch of the anomaly scoring follows the list):

- an anomaly detection service based on an autoencoder and a SOM-clustering-based density estimation algorithm,

- a 3D surface defect service based on a photometric stereo method,

- a 2D measurement service based on a gradient vector flow algorithm that fits a spline around the objects.
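
The autoencoder part of the anomaly detection can be illustrated by scoring the reconstruction error of each image stack. The following sketch assumes a previously trained Keras model and a hypothetical threshold, and omits the SOM-based density estimation:

```python
# Sketch of the reconstruction-error part of an autoencoder anomaly detector.
# Model file name and threshold are assumptions, not the DRIFT implementation.
import numpy as np
import tensorflow as tf

autoencoder = tf.keras.models.load_model("autoencoder.h5")  # hypothetical model file
ANOMALY_THRESHOLD = 0.05                                     # assumption

def anomaly_score(stack: np.ndarray) -> float:
    """stack: normalised image stack, shape (1, H, W, C)."""
    reconstruction = autoencoder.predict(stack, verbose=0)
    # Mean squared reconstruction error: high error means unfamiliar appearance.
    return float(np.mean((stack - reconstruction) ** 2))

def is_anomalous(stack: np.ndarray) -> bool:
    return anomaly_score(stack) > ANOMALY_THRESHOLD
```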

Together with the virtual ID from the tracking, the results of these services are streamed via an OPC server to a central process control centre in production.
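
As an illustration of this last step, a minimal OPC UA server exposing the results could look roughly like this, using the open-source python-opcua library with hypothetical namespace and variable names; DRIFT's actual OPC interface may differ.

```python
# Minimal OPC UA server sketch (python-opcua); names are hypothetical.
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/drift/results/")
idx = server.register_namespace("http://example.com/drift")

results = server.get_objects_node().add_object(idx, "InspectionResults")
object_id = results.add_variable(idx, "VirtualSerialNumber", 0)
anomaly = results.add_variable(idx, "AnomalyScore", 0.0)
defects = results.add_variable(idx, "SurfaceDefectCount", 0)

server.start()
try:
    # In a real service these values would be updated from the evaluation results.
    object_id.set_value(42)
    anomaly.set_value(0.01)
    defects.set_value(0)
finally:
    server.stop()
```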

In addition to this purely functional scheme, there are at least as many support and preview services for controlling and maintaining the edge device, such as a web app for displaying the AI functionality, which in turn is connected to a central user verification.

In total, nine microservices run on the device.

The flexibility of this approach is the basis for an adaptive and uncomplicated use of AI in production.

All nine software components run individually and, in principle, independently of one another. On a single device this would also work with a monolithic approach, but it could not be extended as easily. The services are connected to each other via an MQTT broker, and during operation each service can be individually integrated, disconnected and updated.

Additional services can be added at will. If the computing power of the 8-core CPU is not sufficient, another edge device is added and connected via a network connector; without any changes, additional services now run on the added edge device. In principle, the edge devices can also be replaced by central computing power in the data centre. Only the camera service is bound to the location of the camera.

Another major advantage of encapsulating the individual software components in microservices is their reusability. Individual services can be exchanged or reused for other purposes, and the services are deliberately kept simple. For the camera service, for example, there are many different variants that share the same external interfaces. DRIFT is a library of such services, supported by pre-configured hardware modules wherever real-time capability is required. The simplicity and the decoupling of data stream and AI logic allow customers to very easily implement their own algorithms alongside the DRIFT library, in the programming language of their choice and with the frameworks of their choice.

To do this, the relevant data is subscribed to at the broker and the evaluation result is published back just as easily. The rest organises itself, and the data scientist or ML expert can concentrate on the essentials.
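
A custom evaluation service can therefore be as small as the following sketch, assuming the paho-mqtt client library and hypothetical topic names and payload formats (these are illustrative, not the DRIFT specification):

```python
# Sketch of a custom evaluation service plugged into the broker.
# Topic names and payload format are assumptions.
import json

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style API

def my_algorithm(stack):
    # Placeholder for the customer's own evaluation logic, in any framework.
    return {"id": stack["frame_id"], "score": 0.0}

def on_message(client, userdata, msg):
    stack = json.loads(msg.payload)          # image stack metadata / reference
    result = my_algorithm(stack)
    client.publish("drift/results/custom", json.dumps(result))

client = mqtt.Client(client_id="custom-evaluation")
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("drift/stacks/#")
client.loop_forever()
```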

In productive operation, new AI algorithms can be tested in this way without endangering the productivity of the running software. Collecting data for training algorithms and applying those algorithms happens over the same data flow, which eliminates related sources of error and avoids data conversion overhead. After all, 80% of the time in data science is commonly said to be spent copying, cleaning and converting data.

In short, our Edge AI platform DRIFT is organised as microservices that take care of the data streaming of high-resolution sensor data in particular, so that algorithms can be deployed as quickly and easily as possible, regardless of whether they are built-in AI services from PANDA or methods developed in-house.
