By Girish Malipeddi, director of video and imaging solutions management at Xilinx
Edge applications such as advanced driver-assistance systems (ADAS) and autonomous driving (AD) in next-generation cars are fueling the need to capture and process large amounts of data from image, radar, and Lidar sensors in order to make intelligent decisions in real time. AD platforms are still in their infancy, with evolving architectures. These platforms are expected to come in many different configurations (number, resolution, and type of sensors), demanding flexible yet optimal architectures for edge use cases. Xilinx’s Versal™ AI Core ACAP supports many sensor interfaces and varied configurations through extensible I/O, with a programmable network on chip (NoC) and scalable memory subsystems that enable efficient data movement among key functional blocks. Its AI Engines and Adaptable Engines accelerate convolutional neural network (CNN) processing overlays and image sensor processing functions, providing a flexible yet optimal solution for next-generation automotive needs.
Four Camera Object Detection on a Single Versal AI Core ACAP
In the video, a single Versal AI Core ACAP captures four live MIPI camera feeds and performs image sensor preprocessing and machine learning (ML) inference in real time. Specifically, the demo shows TinyYOLO object detection running on four two-megapixel live camera feeds, implemented on 14 AI Engine cores of the Versal AI Core ACAP. Users can also implement other ML inference applications such as object classification or pixel segmentation, easily scale to more and higher-resolution image sensors, and add different sensor types such as radar or Lidar. Xilinx provides building-block IPs, embedded software, multimedia software plugins, and reference designs to give automotive system designers a head start on their project development.
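Regardless of where the TinyYOLO network itself runs, its raw grid output must be decoded into bounding boxes as a post-processing step. The sketch below illustrates that decode for a YOLOv2-style output layout; the grid size, anchor dimensions, class count, and dummy tensor are illustrative assumptions, not values taken from the demo.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_tiny_yolo(output, anchors, conf_thresh=0.5):
    """Decode a raw TinyYOLO-style tensor into detections.

    output: array of shape (S, S, B, 5 + num_classes) holding, per grid
    cell and anchor, the raw (tx, ty, tw, th, objectness, class scores).
    Returns a list of (x, y, w, h, score, class_id) in grid-cell units.
    """
    S = output.shape[0]
    detections = []
    for row in range(S):
        for col in range(S):
            for b, (aw, ah) in enumerate(anchors):
                tx, ty, tw, th, tc = output[row, col, b, :5]
                score = sigmoid(tc)  # simplification: objectness only
                if score < conf_thresh:
                    continue
                x = col + sigmoid(tx)          # box center, grid units
                y = row + sigmoid(ty)
                w = aw * np.exp(tw)            # anchor-relative size
                h = ah * np.exp(th)
                cls_id = int(np.argmax(output[row, col, b, 5:]))
                detections.append((x, y, w, h, float(score), cls_id))
    return detections

# Demo on a dummy 2x2 grid, one anchor, 3 classes: low objectness
# everywhere except one cell, so one detection survives thresholding.
rng = np.random.default_rng(0)
out = rng.normal(-5.0, 0.1, size=(2, 2, 1, 8))
out[1, 0, 0, 4] = 5.0  # boost objectness at cell (row=1, col=0)
dets = decode_tiny_yolo(out, anchors=[(1.0, 1.0)])
print(len(dets))  # one detection
```

In a multi-camera design like the one shown, the same decode runs once per camera stream; only the raw tensor fed in changes.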