In such applications, machine learning methods based on artificial neural networks, known as "deep learning", can be used to train an AI that reliably detects pedestrians, bicycles, cars, and road signs in complex traffic situations. Such an AI can be deployed on an edge device in passenger cars or as part of an intersection assistant system in trucks and buses.
A digital eye captures a traffic scene and sends the data to an intelligent control unit, the "brain". The brain processes and interprets the data, then generates a virtual image of the surroundings with classified objects. The resulting knowledge of the surroundings can be used to alert the driver to critical situations, or it can serve as an information source for the routing decisions of a self-driving car.
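The camera-to-brain dataflow described above can be sketched in a few lines of Python. Everything here is a hypothetical placeholder for illustration, not a real product API: `classify_frame` stands in for the neural-network inference step, and `assess_scene` for the logic that turns classified objects into alerts or routing inputs.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical object classes the "brain" distinguishes; placeholders only.
KNOWN_CLASSES = ("pedestrian", "bicycle", "car", "road_sign")

@dataclass
class DetectedObject:
    label: str                       # classified object type
    bbox: Tuple[int, int, int, int]  # x, y, width, height in pixels
    confidence: float                # model score in [0, 1]

def classify_frame(frame) -> List[DetectedObject]:
    """Stand-in for the CNN inference step running on the edge device.
    A real system would run a trained network on the camera frame here;
    this stub only illustrates the shape of the output."""
    return [DetectedObject("pedestrian", (120, 40, 30, 80), 0.97)]

def assess_scene(objects: List[DetectedObject],
                 threshold: float = 0.9) -> List[str]:
    """Turn classified objects into driver alerts or routing inputs,
    keeping only confident detections of known classes."""
    return [o.label for o in objects
            if o.confidence >= threshold and o.label in KNOWN_CLASSES]

alerts = assess_scene(classify_frame(frame=None))
print(alerts)  # -> ['pedestrian']
```

The point of the sketch is the separation of stages: perception produces structured objects, and downstream logic decides what the driver (or the planner of a self-driving car) needs to know.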
This level of image recognition is achieved through complex neural networks. The machine learning process used to develop such an AI draws on databases with millions of labeled images, from which the model must figure out what differentiates a car from a bus, or a child from a dog. Over time, the model learns which image features are associated with each object class, ultimately converging on the best possible set of weights for the target application's convolutional neural network.
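To make "convolutional" concrete: the core operation of such a network slides a small filter (kernel) over the image and computes weighted sums, and it is these kernel weights that training adjusts. A minimal sketch in pure Python with toy single-channel data (no framework, and a hand-picked edge-detector kernel rather than a learned one):

```python
def conv2d(image, kernel):
    """Valid 2D cross-correlation of a single-channel image with a kernel,
    the basic building block of a convolutional layer."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector: responds where brightness changes left to right.
edge_kernel = [[1.0, 0.0, -1.0],
               [1.0, 0.0, -1.0],
               [1.0, 0.0, -1.0]]

# Toy 4x4 image: bright left half, dark right half.
image = [[1.0, 1.0, 0.0, 0.0]] * 4

feature_map = conv2d(image, edge_kernel)
print(feature_map)  # -> [[3.0, 3.0], [3.0, 3.0]]
```

In a real network many such kernels are stacked in layers, and their values start random and are tuned by training on the labeled images until the resulting feature maps separate cars from buses and children from dogs.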
The heart of our AI-powered systems is usually a powerful System-on-a-Chip (SoC) by Xilinx. Our two chips of choice are the Zynq-7000 and the Zynq UltraScale+ MPSoC. Both series combine Arm CPU cores with programmable logic components: they are system cores with an FPGA built in!