03-11-2019 02:27 AM
Hello, I am trying to use the tools to deploy the Inception model on the Ultra96, but I get a warning that loss3 is not supported by the DPU.
Can you tell me how to modify the deploy.prototxt so that my trained model can be deployed? Thanks.
03-11-2019 10:30 AM
As mentioned in the warning message, only max-pooling is supported by the DPU; any other pooling type will run on the CPU instead of the DPU. You can change the pooling type in the actual Caffe model and retrain it; otherwise, you can safely ignore this warning.
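For reference, the change is a one-word edit in the prototxt. A sketch of what this looks like for a GoogLeNet-style global pooling layer (the layer name here is taken from the public BVLC GoogLeNet prototxt and may differ in your model):

```
layer {
  name: "pool5/7x7_s1"   # name assumed from BVLC GoogLeNet; check your prototxt
  type: "Pooling"
  bottom: "inception_5b/output"
  top: "pool5/7x7_s1"
  pooling_param {
    pool: MAX            # was AVE; the DPU supports only max-pooling
    kernel_size: 7
    stride: 1
  }
}
```

After changing `pool: AVE` to `pool: MAX` you would retrain (or at least fine-tune) the model, since the learned weights downstream were trained against average-pooled activations.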
03-16-2019 12:31 AM
Thanks for your reply. I have already changed the average pooling; actually, the pooling layers are not mentioned in the warning. But I don't know how to remove the warnings that loss1, loss2, and loss3 are not supported by the DPU. Can you tell me how to make the loss layers work?
03-16-2019 09:27 AM - edited 03-16-2019 09:27 AM
The loss layer is not supported by the DPU and can be implemented only on the CPU.
[DNNC][Warning] layer [loss] is not supported in DPU, deploy it in CPU instead.
Do you have any workaround for it?
03-19-2019 03:49 PM - edited 03-19-2019 03:52 PM
@meherp @shennian The loss3/loss3 layer is actually a Softmax layer. Softmax is effectively a floating-point operation. The idea is that the Softmax operator squashes the output probability vector so that all of its elements sum to 1. This makes it easy to compute the output probability as a percentage for every class contained in the vector: each element becomes a per-class probability that you simply multiply by 100% to obtain the probability the network is predicting for that class.
So, if I understand the question, I can simply tell you that in the current release of the IP, Softmax is computed in software running on the ARM. The Inceptionv1 main.cc file (ZCU102/samples/inceptionv1/src) provides an example of a Softmax vectorized to run on the NEON co-processor. In this case, DNNC is simply warning you that it recognizes the layer and is aware that it can be deployed on the CPU. If the layer were completely unknown, you would receive an error rather than a warning.
loss3/loss3 is the only one of these three loss layers that is deployed. loss1 and loss2 are used during model training, but are removed from the final model prior to deployment.
For an example floating prototxt that DNNC can successfully compile and deploy, see xilinx_dnndk_v2.08/host_x86/models/inceptionv1
03-19-2019 04:00 PM
I think I can help. Loss1 and Loss2 are layers which are normally used in training, not for deployment, so the prototxt can be modified to remove those layers. This can be observed by comparing the Berkeley deploy and train_val prototxt files.
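As an illustration of that comparison, in the public BVLC GoogLeNet train_val.prototxt the auxiliary classifiers end in SoftmaxWithLoss layers like the one below, and the entire loss1/* and loss2/* branches are absent from deploy.prototxt (layer names taken from the BVLC model):

```
# train_val.prototxt only -- removed, along with the whole
# loss1/* and loss2/* auxiliary branches, for deployment:
layer {
  name: "loss1/loss"
  type: "SoftmaxWithLoss"
  bottom: "loss1/classifier"
  bottom: "label"
  top: "loss1/loss1"
  loss_weight: 0.3
}

# deploy.prototxt keeps only the final classifier, ending in a plain Softmax:
layer {
  name: "prob"
  type: "Softmax"
  bottom: "loss3/classifier"
  top: "prob"
}
```

Note that the deployed model ends in `Softmax`, not `SoftmaxWithLoss`: there is no label input at inference time, so only the probability computation remains.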
On the other hand, loss3/loss3 is the Softmax operator which is used to generate the output probability predictions. In the current incarnation of the DPU IP and software, this operator is deployed on the ARM. Example code is provided in the main.cc file located in the ZCU102/samples/inceptionv1/src illustrating how you can deploy Softmax as vectorized NEON code.
BTW, you should also review the Xilinx provided float.prototxt file which is included in xilinx_dnndk_v2.08/host_x86/models/inception_v1
Because the above float.prototxt has already been deployed, you can see more precisely what modifications the team has made to the model prior to running it through quantization and compilation.