We are very excited to announce that six AI products have been updated and publicly released. This is the first time we have released so many AI products simultaneously. We will post more details about each in future posts, but for now here is a summary.
In DNNDK v3.1, Python API support was added to make TensorFlow deployment easier, and Zedboard support was added for Zynq-7000 users. Several other enhancements improve ease of use, including a unified compiler, the ability to perform an advanced data dump, and a logging system for error messages.
ML Suite was upgraded to v1.5 to add TensorFlow support, and it now uses Docker images for both Caffe and TensorFlow. New models such as YOLOv3 and FPN are enabled, with support for upsampling and deconvolution layers. It also adds a face detection example (using live video input) targeting an Alveo accelerator card.
Since the release of the Xilinx AI SDK v1.0, we’ve collected a significant amount of valuable feedback from users. In the new AI SDK v2.0, we have made the tools easier to install and use. The number of supported models has increased to 37, including facial landmark detection and person re-identification. All the model libraries are open source, and several examples showing how to implement custom post-processing are included.
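To give a flavor of what a custom post-processing step involves, here is a minimal, generic sketch: converting a facial-landmark model's normalized output tensor into pixel coordinates on the source image. All function and variable names here are illustrative assumptions, not part of the AI SDK API.

```python
# Hypothetical post-processing sketch: maps a landmark model's normalized
# output (x, y pairs in [0, 1]) to pixel coordinates on the input image.
# Names are illustrative only, not the AI SDK API.

def landmarks_to_pixels(raw_output, img_width, img_height):
    """raw_output: flat list [x0, y0, x1, y1, ...] with values in [0, 1]."""
    points = []
    for i in range(0, len(raw_output), 2):
        x = int(round(raw_output[i] * (img_width - 1)))
        y = int(round(raw_output[i + 1] * (img_height - 1)))
        points.append((x, y))
    return points

# Example: five-point landmarks on a 640x480 frame (values made up).
raw = [0.30, 0.40, 0.70, 0.40, 0.50, 0.55, 0.35, 0.75, 0.65, 0.75]
print(landmarks_to_pixels(raw, 640, 480))
```

In a real pipeline this function would run on each DPU output tensor before drawing the points on the video frame.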
We have also made a major update to the DPU reference design. The improved design increases DPU performance by 6% to 200% depending on the model. A new low-power mode significantly reduces DPU power consumption. The design also supports more feature configurations, allowing you to customize the DPU to your exact needs.
Our AI Model Zoo is now publicly available on the Xilinx GitHub. It includes 37 models and is fully synchronized with the AI SDK v2.0 release. Each model has float and fixed-point files for quick and easy deployment, which also enables quantized-accuracy comparison with DNNDK v3.1.
AI Optimizer v1.0 (formerly the DeePhi pruning tool) is available now, with both node-locked and floating licenses for purchase. It delivers a further performance boost to AI inference while maintaining accuracy.