Different accuracy between evaluation and deployed model

Hello,

I have compressed my model with decent_q (TensorFlow). The evaluation model that is generated gives me sufficient accuracy for my problem, but when I deploy the model to the DPU the accuracy I measure is terrible. Is it possible that the deployed model and the evaluation model behave differently, or am I doing something wrong in the C++ code?

Xilinx Employee

Re: Different accuracy between evaluation and deployed model

(^^)/

 

Please check the post: DNNDK updated on Xilinx.com (Jun 24, 2019) and see if it solves your issue.

Observer

Re: Different accuracy between evaluation and deployed model

We are facing a similar issue. During evaluation the results are fine, but during deployment on the board they are bad. We expected the post-quantisation evaluation and the on-board deployment to be the same. Is there a possibility that they might not be the same?

Xilinx Employee

Re: Different accuracy between evaluation and deployed model

Hi,

Maybe the input data preprocessing is different between the evaluation script and your C++ code?

Observer

Re: Different accuracy between evaluation and deployed model

Hi @gguasti 

The post-quantisation evaluation uses image data scaled by 255. In the C++ code for the DPU, we scale the image as (image_data/255 - 0.5)*2*Scale.

This works for LeNet and miniVGGNet (i.e. the FC results from the DPU match the Python evaluation), but for VGG16 we see divergence. Any clue what the issue could be?
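
For context, a minimal sketch of how that scaling might look on the DPU side, assuming the DNNDK n2cube C++ API and an OpenCV BGR image (the node name and loop layout here are placeholders, not our actual code):

#include <dnndk/dnndk.h>
#include <opencv2/opencv.hpp>

// Placeholder name of the DPU kernel's input node.
static const char *INPUT_NODE = "input";

// Fill the DPU input tensor with (pixel/255 - 0.5) * 2 * scale,
// where scale is the fixed-point scale of the DPU input tensor.
void setInput(DPUTask *task, const cv::Mat &img) {
    int8_t *input = dpuGetInputTensorAddress(task, INPUT_NODE);
    float scale   = dpuGetInputTensorScale(task, INPUT_NODE);
    int height    = dpuGetInputTensorHeight(task, INPUT_NODE);
    int width     = dpuGetInputTensorWidth(task, INPUT_NODE);
    const int channels = 3;

    for (int h = 0; h < height; ++h) {
        for (int w = 0; w < width; ++w) {
            for (int c = 0; c < channels; ++c) {
                float pixel = static_cast<float>(img.at<cv::Vec3b>(h, w)[c]);
                float value = (pixel / 255.0f - 0.5f) * 2.0f * scale;
                // A robust version would also round and clamp to [-128, 127].
                input[(h * width + w) * channels + c] = static_cast<int8_t>(value);
            }
        }
    }
}

Whatever preprocessing the training and the decent_q input_fn used (mean subtraction, scaling, BGR vs RGB channel order) has to be reproduced exactly before the DPU scale is applied; VGG16-style models are often trained with per-channel mean subtraction rather than the (x/255 - 0.5)*2 mapping, so that is worth double-checking.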

Observer

Re: Different accuracy between evaluation and deployed model

@gguasti we got matching results after we changed some learning parameters in the script.

Xilinx Employee

Re: Different accuracy between evaluation and deployed model

Hello

In your analysis you are measuring the accuracy. However, we should also have an idea of the model's robustness to quantization: how much the accuracy varies when we quantize the model (i.e. apply small changes to the weights).

I have sometimes seen that a model trained with a small learning rate can be more robust to quantization. I figure that the minimum of the loss function J is less susceptible to the small variations of the weights w that happen during quantization.

Vice versa, it can happen that a high LR brings the model to a minimum where a small variation of w causes a big increase of J.
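
To make that intuition concrete (a standard second-order sketch, nothing specific to DNNDK): writing the quantization error as a small perturbation \Delta w around the minimum w^*,

J(w^* + \Delta w) \approx J(w^*) + \nabla J(w^*)^\top \Delta w + \tfrac{1}{2}\,\Delta w^\top H\,\Delta w \approx J(w^*) + \tfrac{1}{2}\,\Delta w^\top H\,\Delta w

since the gradient is (near) zero at the minimum. The increase of J after quantization is then governed by the curvature H: a flat minimum (small Hessian eigenvalues) tolerates the same \Delta w with little change in J, while a sharp minimum amplifies it.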

Are you achieving better results after quantization with models trained with small learning rate?

Visitor

Re: Different accuracy between evaluation and deployed model

Hi, I'm facing the same problem when training a model from TF Slim on a custom dataset. Which learning parameters did you change, and how does that reflect on the quantization eval accuracy?

Thanks
