Visitor
717 Views
Registered: ‎03-11-2019

InceptionV1 Quantization

Hello, I am trying to use the tools to deploy the Inception model onto the Ultra96, but the tool warns that loss3 is not supported by the DPU (see attached 12.png).

Can you tell me how to modify deploy.prototxt so that my trained model can be deployed? Thanks.

6 Replies
Moderator
686 Views
Registered: ‎08-16-2018

@shennian 

As the warning message mentions, only max-pooling is supported by the DPU; any other pooling type will run on the CPU instead. You can change it in the actual Caffe model and retrain, or you can safely ignore this warning.
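For illustration, this is roughly what the relevant Pooling layer looks like in the GoogLeNet-style prototxt (quoted from memory of the BVLC model; layer names in your own model may differ), with the change that would move it onto the DPU:

```
layer {
  name: "pool5/7x7_s1"
  type: "Pooling"
  bottom: "inception_5b/output"
  top: "pool5/7x7_s1"
  pooling_param {
    pool: AVE    # average pooling falls back to the CPU
    # pool: MAX  # only max-pooling runs on the DPU; changing this requires retraining
    kernel_size: 7
    stride: 1
  }
}
```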


/ 7\7     Meher Krishna Patel, PhD
\ \        Senior Product Application Engineer, Xilinx
/ /        
\_\/\7   It is not so much that you are within the cosmos as that the cosmos is within you...
Explorer
675 Views
Registered: ‎10-24-2008

@shennian Seconding @meherp's comments.  Please take a look at the main.cc file located in the ZCU102/samples/inceptionv1/src directory for more information.

--Quenton

Visitor
647 Views
Registered: ‎03-11-2019

Thanks for your reply. I had already changed the average pooling; in fact, the pooling layers are not mentioned in the warning. But I don't know how to remove the warnings that loss1, loss2, and loss3 are not supported by the DPU. Can you tell me how to make the loss layers work?

Moderator
624 Views
Registered: ‎08-16-2018

@shennian 

The loss layer is not supported by the DPU and can be implemented only on the CPU.

[DNNC][Warning] layer [loss] is not supported in DPU, deploy it in CPU instead.

 

@qhall 

Do you have any workaround for it?

 


Explorer
532 Views
Registered: ‎10-24-2008

@meherp @shennian The loss3/loss3 layer is actually a Softmax layer.  Softmax is effectively a floating-point operation.  The idea is that the Softmax operator squashes the output probability vector so that all elements in the vector sum to 1.  This makes it easy to compute the output probability as a percentage for every class contained in the vector: each element becomes a per-class probability that you simply multiply by 100% to obtain the probability the network is predicting for that class.

So, if I understand the question, I can simply tell you that in the current release of the IP, Softmax is computed in software running on the ARM.  The InceptionV1 main.cc file (ZCU102/samples/inceptionv1/src) provides an example of a Softmax vectorized to run on the NEON co-processor.  In this case, DNNC is simply warning you that it recognizes the layer and knows it can be deployed on the CPU.  If the layer were completely unknown, you would receive an error rather than a warning.

loss3/loss3 is the only one of these three loss layers that is deployed.  loss1 and loss2 are used during model training, but are removed from the final model prior to deployment.

Compare:

https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/train_val.prototxt

https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt
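To spare you some scrolling, the key difference at the end of the two files looks roughly like this (quoted from memory of the BVLC files; check the links above for the exact text). Training attaches SoftmaxWithLoss layers that need labels; deployment keeps only a plain Softmax:

```
# train_val.prototxt (training): loss computed against labels
layer {
  name: "loss3/loss3"
  type: "SoftmaxWithLoss"
  bottom: "loss3/classifier"
  bottom: "label"
  top: "loss3/loss3"
}

# deploy.prototxt (inference): plain Softmax, no label input
layer {
  name: "prob"
  type: "Softmax"
  bottom: "loss3/classifier"
  top: "prob"
}
```

loss1 and loss2 (the auxiliary classifiers) simply disappear from the deploy file along with their SoftmaxWithLoss layers.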

For an example float.prototxt that DNNC can successfully compile and deploy, see xilinx_dnndk_v2.08/host_x86/models/inceptionv1

--Quenton

Explorer
551 Views
Registered: ‎10-24-2008

@shennian @meherp

I think I can help.  Loss1 and Loss2 are layers normally used during training, not for deployment, so the prototxt can be modified to remove those layers.  You can see this if you compare the Berkeley deploy and train_val prototxt files:

 

https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/deploy.prototxt

https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_googlenet/train_val.prototxt

 

On the other hand, loss3/loss3 is the Softmax operator, which is used to generate the output probability predictions.  In the current incarnation of the DPU IP and software, this operator is deployed on the ARM.  Example code in the main.cc file located in the ZCU102/samples/inceptionv1/src directory illustrates how you can deploy Softmax as vectorized NEON code.

BTW, you should also review the Xilinx-provided float.prototxt file, which is included in xilinx_dnndk_v2.08/host_x86/models/inception_v1

Because the above float.prototxt has already been deployed, you can see more precisely what modifications the team has made to the model prior to running it through quantization and compilation.

--Quenton
