07-03-2019 06:33 AM
I have quantized my model with decent_q (TensorFlow). The evaluation model that is generated gives sufficient accuracy for my problem, but when I deploy the model to the DPU, the accuracy I measure is terrible. Is it possible that the deployed model and the evaluation model behave differently, or am I doing something wrong in the C++ code?
07-03-2019 06:43 PM
10-09-2019 10:09 AM
We are facing a similar issue. During evaluation the results are fine, but after deployment on the board the results are bad. We expected the post-quantisation evaluation model and the deployed model on the board to behave identically. Is there a possibility that they differ?
10-15-2019 10:56 AM
The post-quantisation evaluation uses image data scaled by 1/255. In the C++ code for the DPU, we scale the image as (image_data/255 - 0.5) * 2 * scale.
This works for LeNet and miniVGGNet (i.e. the FC results from the DPU match the Python evaluation), but for VGG16 we see divergence. Any clue what the issue could be?