
problem with quantisation tool - decent

I am using the decent tool for quantisation of a ResNet model to 8-bit int.

The output is generated as deploy.prototxt and deploy.caffemodel.

The problem is that when I read the weights in deploy.caffemodel, they are still 32-bit floats. What does this mean? I expected the weights to be 8-bit integers.

Also, the input caffemodel (float.caffemodel) and the output caffemodel (deploy.caffemodel) are exactly the same size, i.e. 102.1 MB.
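For reference, here is a minimal numpy sketch (an illustration, not the decent tool itself) of "fake quantisation": weights are rounded to 8-bit levels but stored back as float32, which would leave both the dtype and the file size unchanged even though the values are quantised.

```python
import numpy as np

# Hypothetical weight blob; in practice this would come from the
# caffemodel (e.g. via pycaffe's net.params). Here we just use
# random float32 weights to demonstrate the idea.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)

# Symmetric 8-bit quantisation: map the largest magnitude to 127.
scale = np.float32(np.abs(w).max() / 127.0)
w_q = np.round(w / scale).astype(np.int8)   # true int8 representation

# "Fake-quantised" weights: int8 levels dequantised back to float32.
w_fake = w_q.astype(np.float32) * scale

print(w_fake.dtype)                  # still float32
print(len(np.unique(w_fake)))        # but only <= 255 distinct levels
```

A quantised-but-float32 blob like `w_fake` takes at most 255 distinct values, so counting unique values in a layer's weights is one quick way to check whether quantisation actually happened.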

Regards,
Shikha Goel
(Ph.D. , IIT Delhi)

@anz162112 Can you tell us what board you are running this on, and whether you are using the latest Stretch image and the latest version of DNNDK?

https://www.xilinx.com/products/design-tools/ai-inference/ai-developer-hub.html#edge

Does everything else appear to be as expected?

We are running this tutorial with v2.08 and everything is working as expected: https://github.com/jimheaton/Ultra96_ML_Embedded_Workshop

--Quenton
