Different accuracy between evaluation and deployed model

Hello,

I have compressed my model with decent_q (TensorFlow). The evaluation model that is generated gives me sufficient accuracy for my problem, but when I deploy the model to the DPU the accuracy I measure is terrible. Is it possible that the deployed model and the evaluation model behave differently, or am I doing something wrong in the C++ code?

Xilinx Employee

Re: Different accuracy between evaluation and deployed model

(^^)/

 

Please check the post "DNNDK updated on Xilinx.com (Jun 24, 2019)" and see if it solves your issue.

[Screenshot attachment: Capture.PNG]

Observer kvikramaxlnx

Re: Different accuracy between evaluation and deployed model

We are facing a similar issue. During evaluation the results are fine, but during deployment on the board the results are bad. We expect the post-quantisation evaluation and the on-board deployment to be the same. Is there a possibility that they might not be the same?

Xilinx Employee

Re: Different accuracy between evaluation and deployed model

Hi,

Maybe the input data preprocessing is different between the Python evaluation and your C++ code?
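
One way to check is to push the exact same image through the C++ preprocessing and through the Python input_fn used for decent_q, and compare the first few values. A minimal sketch, assuming OpenCV and a hypothetical normalize() that you would replace with whatever your input_fn actually does:

// Sketch: dump the first preprocessed values so they can be diffed against
// the Python input_fn used for decent_q evaluation.
// Assumes OpenCV; normalize() is a placeholder for the real preprocessing.
#include <opencv2/opencv.hpp>
#include <cstdio>

static float normalize(uchar v) {
    // Replace with exactly what the Python input_fn does,
    // e.g. (v / 255.0f - 0.5f) * 2.0f, or (v - mean) * scale.
    return v / 255.0f;
}

int main() {
    cv::Mat img = cv::imread("test.jpg");        // OpenCV loads BGR, 8-bit
    if (img.empty()) return 1;
    cv::resize(img, img, cv::Size(224, 224));    // same size as the input_fn

    // Print the first 10 preprocessed values in HWC order and compare them
    // with the same image printed from the Python side.
    const uchar *p = img.ptr<uchar>(0);
    for (int i = 0; i < 10; ++i)
        std::printf("%f\n", normalize(p[i]));
    return 0;
}

Channel order (OpenCV is BGR, many TensorFlow pipelines expect RGB), resize interpolation, and mean/scale values are the usual places where the two sides diverge.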

Observer kvikramaxlnx


Re: Different accuracy between evaluation and deployed model

Hi @gguasti 

The post-quantisation evaluation uses image data scaled by 255. In the C++ code for the DPU, we scale the image as (image_data / 255 - 0.5) * 2 * Scale.

This works for LeNet and miniVGGNet (i.e. the FC results from the DPU match the Python evaluation), but for VGG16 we see divergence. Any clue what the issue could be?
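
A minimal sketch of that scaling into the DPU input buffer, assuming the DNNDK n2cube input-tensor API (dpuGetInputTensorHeight/Width/Channel/Scale/Address; the exact names and signatures may differ between DNNDK versions) and an OpenCV BGR image:

// Sketch of the (v/255 - 0.5)*2*Scale preprocessing described above.
// The n2cube calls and node name are assumptions to adapt to your kernel.
#include <opencv2/opencv.hpp>
#include <dnndk/dnndk.h>
#include <cmath>

void set_input(DPUTask *task, const char *node, const cv::Mat &bgr) {
    int h = dpuGetInputTensorHeight(task, node);
    int w = dpuGetInputTensorWidth(task, node);
    int c = dpuGetInputTensorChannel(task, node);
    float scale = dpuGetInputTensorScale(task, node);  // DPU fixed-point scale
    int8_t *buf = dpuGetInputTensorAddress(task, node);

    cv::Mat img;
    cv::resize(bgr, img, cv::Size(w, h));
    // If the Python evaluation works on RGB, convert here so both sides match:
    // cv::cvtColor(img, img, cv::COLOR_BGR2RGB);

    for (int y = 0; y < h; ++y) {
        const uchar *row = img.ptr<uchar>(y);
        for (int x = 0; x < w * c; ++x) {
            // (v/255 - 0.5) * 2, then quantised with the DPU input scale
            float v = (row[x] / 255.0f - 0.5f) * 2.0f * scale;
            buf[y * w * c + x] = static_cast<int8_t>(std::round(v));
        }
    }
}

One thing to double-check for VGG16 specifically: the original VGG16 models are commonly trained with per-channel mean subtraction rather than scaling to [-1, 1], so if the input_fn used during quantisation does that, the C++ side has to apply exactly the same normalisation.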
