Explorer
Registered: ‎02-13-2016

Using DNNDK with Other Tensorflow Versions

Hello,

I am considering using DNNDK and the DPU to accelerate my own custom CNN model. The DNNDK user guide (UG1327, page 4) only covers TensorFlow 1.9.0.

Is it possible to use newer TensorFlow versions, such as 1.15? Or do I have to stick to TensorFlow 1.9?

If it is possible to use newer versions, do I need to do anything different from what is described in the DNNDK User Guide (UG1327), or should everything remain exactly the same? Should I expect conflicts as I proceed, or is everything supposed to run smoothly?

Thanks. 

Explorer
Registered: ‎02-13-2016

Would someone help me with that, please?
Moderator
Registered: ‎03-27-2013

Hi @asobeih ,

 

DNNDK is well tested on TensorFlow 1.9, so that is the version we recommend customers install in the user guide.

For Vitis AI 1.1, the supported version is TensorFlow 1.15. So you may move to Vitis AI 1.1 if you would like to use TensorFlow 1.15 in a more "official" way.

In short, I would recommend DNNDK 3.1 + TF 1.9 or Vitis AI 1.1 + TF 1.15. Otherwise you may give it a try, but I can't guarantee there would be no issues caused by version mismatch.
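One practical way to follow this recommendation is to keep the two toolchains in separate environments so the pinned TensorFlow versions never conflict. A minimal sketch, assuming a conda setup; the environment names are arbitrary, and in practice the Vitis AI docker image already ships with the matching TensorFlow:

```shell
# DNNDK 3.1 flow: pin TensorFlow 1.9 in its own environment
conda create -n dnndk python=3.6
conda activate dnndk
pip install tensorflow==1.9.0

# Vitis AI 1.1 flow: pin TensorFlow 1.15 in a second environment
conda create -n vitis-ai python=3.6
conda activate vitis-ai
pip install tensorflow==1.15.0
```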

 

Best Regards,
Jason
-----------------------------------------------------------------------------------------------
Please mark the Answer as "Accept as solution" if the information provided is helpful.

Give Kudos to a post which you think is helpful and reply oriented.
-----------------------------------------------------------------------------------------------
Explorer
Registered: ‎02-13-2016

Thank you, @jasonwu.

I know this may sound very basic, but what is the difference between them? My target is to accelerate my custom CNN model using the DPU on a Zedboard and a ZCU104. To my knowledge, DNNDK is the engine used to compile the CNN model so it can be accelerated on the DPU.

Also, please consider the flow complexity factor. To my understanding, the DNNDK flow is supposed to be really smooth. How about the Vitis AI flow?

Moderator
Registered: ‎03-27-2013

Hi @asobeih ,

 

Yes, your understanding is correct. We are trying to make the ML solutions as smooth as possible.

If you are using common ConvNets, most of the time you can find reference designs on the Xilinx GitHub. Some operations are not supported on the DPU, but you can still find deployment code for them in these examples.

But ML is still evolving, and so are our ML solutions. We can't support all ConvNet layers yet, which is why you may sometimes need to write custom code for particular unsupported layers.
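As a concrete illustration of what such custom code looks like: layers the DPU does not implement are typically run on the CPU, either before or after the accelerated subgraph. A minimal sketch, assuming the DPU output arrives as a NumPy array of logits (the `dpu_output` values here are placeholders, not produced by a real DPU runner):

```python
import numpy as np

def cpu_softmax(logits):
    """Numerically stable softmax, computed on the CPU as a
    post-processing step for a layer the DPU does not implement."""
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

# In a real deployment the DPU runner would produce these logits;
# here they are placeholder values for illustration.
dpu_output = np.array([[2.0, 1.0, 0.1]])
probs = cpu_softmax(dpu_output)
```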

Best Regards,
Jason
Explorer
Registered: ‎02-13-2016

Hi @jasonwu ,

I just have a follow-up question. I was about to use DNNDK in my work, but after your explanation of TensorFlow 1.15 compatibility, I am considering switching to Vitis AI.

I just would like to make sure that Vitis AI can be used to compile my own CNN model (a model that I developed by myself that is not included in the examples) to run on Xilinx DPU 3.0 on ZCU 104. I made sure that the layers that I have used within my own CNN model are already supported by the DPU. 

Provided these details, is it possible to use Vitis AI to compile and deploy to run my own CNN model on ZCU 104 to be accelerated by DPU 3.0?

Thanks,

Moderator
Registered: ‎03-27-2013

Hi @asobeih ,

 

That is a good question!

Actually, we have already asked the dev team for a document that describes the supported layers clearly enough, so that customers would not need to run the compile flow to find out.

But for now that part of the documentation is under construction, so I am afraid you still need to run the Vitis AI compile flow to check whether all the layers you are using are supported.
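For reference, the Vitis AI 1.1 TensorFlow flow is a quantize step followed by a compile step, and unsupported layers are reported during compilation. A rough sketch of the two commands; all file paths, node names, and shapes below are placeholders for your own model, and the exact location of the ZCU104 arch JSON may differ in your Vitis AI installation:

```shell
# Quantize the frozen graph (node names, shapes, and the calibration
# input function are placeholders for your model)
vai_q_tensorflow quantize \
    --input_frozen_graph my_model.pb \
    --input_nodes input \
    --output_nodes output \
    --input_shapes ?,224,224,3 \
    --input_fn my_input_fn.calib_input \
    --calib_iter 100

# Compile the quantized model for the ZCU104 DPU; unsupported layers
# surface as errors or CPU-subgraph partitions at this stage
vai_c_tensorflow \
    --frozen_pb quantize_results/deploy_model.pb \
    --arch /opt/vitis_ai/compiler/arch/dpuv2/ZCU104/ZCU104.json \
    --output_dir compile_out \
    --net_name my_model
```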

Best Regards,
Jason
Explorer
Registered: ‎02-13-2016

@jasonwu 

Thanks for pointing that out.

Okay, as an attempt to reach a definite answer, below are the layers used in my CNN model. Is there a way to ask the development/documentation team whether these layers are supported by Vitis AI? And if so, what are the ranges [minimum-maximum] of their parameters (e.g., kernel sizes)?

ZeroPadding
Conv2D
BatchNormalization
ReLU
DepthwiseConv2D
Concatenate
GlobalAveragePooling2D
Dense

Thanks.

Xilinx Employee
Registered: ‎03-21-2008

Tables 12 and 15 of UG1414 list what is supported and the limitations on kernel sizes.

The link I provided is for Vitis-AI 1.1, and the 1.2 version of the doc will be released soon. Both 1.1 and 1.2 are based on TF 1.15.
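Once the real limits have been read out of Tables 12 and 15 of UG1414, a small pre-check script can save a compile round-trip for each layer. A minimal sketch; the numeric ranges below are placeholders to be replaced with the documented values for your DPU configuration, and the function interface is invented for illustration:

```python
# Hypothetical pre-check of Conv2D parameters against DPU limits.
# Replace these placeholder ranges with the actual values from
# Tables 12 and 15 of UG1414 for your DPU configuration.
KERNEL_RANGE = (1, 16)   # placeholder min/max kernel size
STRIDE_RANGE = (1, 4)    # placeholder min/max stride

def check_conv(kernel_size, stride):
    """Return a list of violation messages (empty list means OK)."""
    problems = []
    if not KERNEL_RANGE[0] <= kernel_size <= KERNEL_RANGE[1]:
        problems.append(f"kernel size {kernel_size} outside {KERNEL_RANGE}")
    if not STRIDE_RANGE[0] <= stride <= STRIDE_RANGE[1]:
        problems.append(f"stride {stride} outside {STRIDE_RANGE}")
    return problems
```

For example, a 3x3 stride-1 convolution passes the placeholder check, while a 32x32 kernel is flagged.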