Contributor

Quantisation and running of a 16-bit int ResNet model

Hi,

I want to quantise the 32-bit float ResNet model to 16-bit int and run it on the ZCU102 using the DeePhi tools.

The steps I followed are:

1. First, I used the DECENT tool to quantise the model to 16-bit int, by changing weight_bit and data_bit to 16 in the script (the invocations are sketched after this list). This generated deploy.prototxt and deploy.caffemodel.

2. Then I used the DNNC tool to generate the ELF file from them.

3. Then, on the ZCU102, I installed the DeePhi software, and in the samples/model folder I replaced the earlier model files with the new ones I generated. I kept the source code (main.cc) the same.

4. Ran the make command, then ran ./resnet.
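
For reference, the DECENT and DNNC invocations I used looked roughly like the sketch below. The model file names, output directories, net name, and DPU target are placeholders for my setup; the remaining flags follow the DNNDK user guide and may differ between versions:

    # Quantise the 32-bit float model to 16-bit weights and activations
    decent quantize -model float.prototxt \
                    -weights float.caffemodel \
                    -weight_bit 16 \
                    -data_bit 16 \
                    -output_dir decent_output

    # Compile the quantised model into a DPU ELF file for the ZCU102
    dnnc --prototxt=decent_output/deploy.prototxt \
         --caffemodel=decent_output/deploy.caffemodel \
         --output_dir=dnnc_output \
         --net_name=resnet \
         --dpu=4096FA \
         --cpu_arch=arm64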

But the accuracy of the 16-bit quantised model is very poor (almost zero) compared to that of the 8-bit quantised model.

I have attached the output of both the 8-bit and the 16-bit models on the same image.

This behaviour is strange. Am I doing something wrong? Do I need to change anything in main.cc for 16-bit inference?

Thanks

Regards,
Shikha Goel
(Ph.D., IIT Delhi)
Attachments: 16_bit_result.jpg, 8_bit_result.jpg
Moderator

Currently, the DPU processor is designed for 8-bit quantised models only.

Therefore, you are getting near-zero accuracy for the 16-bit model.
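
As a sketch of the fix (the model paths below are placeholders): re-run DECENT with the standard 8-bit settings and recompile with DNNC, and the DPU should execute the model with the expected accuracy:

    decent quantize -model float.prototxt \
                    -weights float.caffemodel \
                    -weight_bit 8 \
                    -data_bit 8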


Meher Krishna Patel, PhD
Senior Product Application Engineer, Xilinx
Contributor

Okay. Actually, the user guide says that we can quantise to any bit width, so I wanted to try it for my work.

Thanks

Regards,
Shikha Goel
(Ph.D., IIT Delhi)