07-03-2020 08:11 PM
I am considering using DNNDK & DPU to accelerate my own custom CNN model. In the DNNDK user guide (UG1327, Page 4), only TensorFlow 1.9.0 is used.
Is it possible to use other, newer TensorFlow versions, like 1.15 for instance? Or do I have to stick to TensorFlow 1.9?
If it is possible to use newer versions, do I need to do anything different from what is already mentioned in the DNNDK User Guide UG1327, or should everything remain exactly the same? Should I expect conflicts as I proceed with the work, or is everything supposed to run smoothly?
07-05-2020 06:13 PM
Hi @asobeih ,
DNNDK is well tested on TensorFlow 1.9, so TensorFlow 1.9 is the version we recommend customers install in the UG.
For Vitis AI 1.1 the supported version is 1.15. So you may turn to Vitis AI 1.1 if you would like to use TensorFlow 1.15 in a more "official" way.
I would therefore recommend DNNDK 3.1 + TF 1.9 or Vitis AI 1.1 + TF 1.15. Otherwise you may give it a try, but I can't guarantee there won't be issues caused by the version mismatch.
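As a quick sanity check before starting, one could compare the installed TensorFlow version against the toolchain pairings mentioned above. This is only a sketch: the `TESTED_VERSIONS` table encodes the pairings from this thread (DNNDK 3.1 with TF 1.9, Vitis AI 1.1 with TF 1.15), and the helper name is hypothetical, not part of either toolchain.

```python
# Hypothetical version guard based on the tested pairings in this thread:
# DNNDK 3.1 -> TensorFlow 1.9.x, Vitis AI 1.1 -> TensorFlow 1.15.x.
TESTED_VERSIONS = {
    "DNNDK 3.1": "1.9",
    "Vitis AI 1.1": "1.15",
}

def matches_tested(toolchain, installed_version):
    """Return True if the installed TF version is in the tested release series."""
    tested = TESTED_VERSIONS[toolchain]
    return installed_version == tested or installed_version.startswith(tested + ".")

print(matches_tested("Vitis AI 1.1", "1.15.0"))  # True: tested pairing
print(matches_tested("Vitis AI 1.1", "1.9.0"))   # False: version mismatch
```

In practice you would pass `tensorflow.__version__` as `installed_version` inside the environment you set up.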
07-05-2020 06:18 PM - edited 07-05-2020 06:20 PM
Thank you, @jasonwu.
I know this may sound very basic, but what is the difference between them? My target is to accelerate my custom CNN model using the DPU on a Zedboard and a ZCU 104. To my knowledge, DNNDK is the engine used to compile the CNN model in order to accelerate it on the DPU.
Also, please consider the flow-complexity factor. To my understanding, the DNNDK flow is supposed to be really smooth. How does the Vitis AI flow compare?
07-05-2020 06:31 PM
Hi @asobeih ,
Yes, your understanding is correct. We are trying to make our ML solutions as smooth to use as possible.
If you are using common ConvNets, most of the time you can find reference designs on Xilinx's GitHub. Some operations are not supported on the DPU, but you can still find deployment code for them in these examples.
But ML is still developing, and so are our ML solutions, so we can't support all ConvNet layers yet. That is why you may sometimes need to write custom code for particular unsupported layers.
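To illustrate "custom code for unsupported layers": a common case is computing a final softmax on the CPU over the logits the DPU produces, when softmax is not offloaded to the DPU in a given configuration. A minimal plain-Python sketch (the max-subtraction trick is the standard way to keep `exp()` numerically stable; whether softmax runs on your DPU depends on your configuration):

```python
import math

def cpu_softmax(logits):
    """Numerically stable softmax, run on the CPU over DPU output logits."""
    m = max(logits)                           # subtract the max so exp() cannot overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = cpu_softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # → [0.659, 0.242, 0.099]
```

In a real deployment this would sit after the DPU task finishes, consuming the output tensor the runtime hands back.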
07-07-2020 06:25 PM
Hi @jasonwu ,
I just have a follow-up question. I was about to use DNNDK in my work, but after your explanation of TensorFlow 1.15 compatibility, I am considering switching to Vitis AI.
I would just like to make sure that Vitis AI can be used to compile my own CNN model (a model that I developed myself, not one included in the examples) to run on the Xilinx DPU 3.0 on the ZCU 104. I have made sure that the layers used within my CNN model are already supported by the DPU.
Given these details, is it possible to use Vitis AI to compile and deploy my own CNN model on the ZCU 104 so that it is accelerated by DPU 3.0?
07-07-2020 06:34 PM
Hi @asobeih ,
That is a good question!
Actually, we have already asked the dev team for a document that describes the supported layers clearly enough that customers would not need to run the compile flow to find out.
But for now that part of the documentation is under construction, so I am afraid you still need to run the VAI compile flow to check whether all the layers you are using are supported.
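For reference, the TensorFlow flow in Vitis AI 1.1 is roughly quantize-then-compile, and the compiler's output is where unsupported layers show up. A command sketch (to be run inside the Vitis AI docker; the file names, node names, shapes, and arch-file path below are placeholders, so please check UG1414 for the exact options for your setup):

```
# Quantize the frozen TF graph (names and paths are placeholders):
vai_q_tensorflow quantize \
    --input_frozen_graph my_cnn.pb \
    --input_nodes input \
    --output_nodes logits \
    --input_shapes ?,224,224,3 \
    --input_fn my_input_fn.calib_input \
    --output_dir quantize_results

# Compile the quantized graph for the ZCU104 DPU; the compiler
# reports any layers it cannot map onto the DPU:
vai_c_tensorflow \
    --frozen_pb quantize_results/deploy_model.pb \
    --arch /opt/vitis_ai/compiler/arch/dpuv2/ZCU104/ZCU104.json \
    --output_dir compile_results \
    --net_name my_cnn
```

Running this on your model is currently the practical way to answer the supported-layers question.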
07-07-2020 07:58 PM
Thanks for pointing that out.
Okay, in an attempt to reach a definite answer, below are the layers used within my CNN model. Is there a way to ask the development/documentation team whether these layers are supported by Vitis AI? And if so, what are the ranges [minimum-maximum] of their parameters (e.g. kernel sizes)?
07-08-2020 09:06 AM
Tables 12 and 15 of UG1414 list what is supported and the limitations on kernel sizes.
The link I provided is for Vitis-AI 1.1, and the 1.2 version of the doc will be released soon. Both 1.1 and 1.2 are based on TF 1.15.