Visitor iljaiv

Quantization problem with len(bin_edges)

Hello,

I'm trying to quantize a custom VGG model with 52 layers. Some of the layers show extremely high len(bin_edges) values during quantization, as shown below:

 

--------------------------------------------------------------------------------
Processing layer 10 of 52
Layer Name:conv2 Type:Convolution
Inputs: ['pool1'], Outputs: ['conv2']
Quantizing conv input layer ... conv2
Threshold in shape= ()
Quantizing conv weights for layer conv2...
Threshold params shape= (128,)
Min: 0 , Max: 1.0093126
n: 32768 , len(bin_edges): 34165
/usr/src/ml-suite/xfdnn/tools/quantize/quantize_base.py:49: RuntimeWarning: invalid value encountered in divide
/usr/src/ml-suite/xfdnn/tools/quantize/quantize_base.py:49: RuntimeWarning: divide by zero encountered in log2
Mean : th_layer_out: 0.9681146885301648 , sf_layer_out: 2.9545417295759904e-05
Threshold out shape= ()
Min: 0 , Max: 1.0093126
n: 32768 , len(bin_edges): 34165
Mean : th_layer_out: 0.9681146885301648 , sf_layer_out: 2.9545417295759904e-05
bw_layer_in: 16
th_layer_in: 0.06555800884962082
bw_layer_out: 16
th_layer_out: 0.9681146885301648
--------------------------------------------------------------------------------
...
Processing layer 28 of 52
Layer Name:conv4 Type:Convolution
Inputs: ['pool3'], Outputs: ['conv4']
Quantizing conv input layer ... conv4
Threshold in shape= ()
Quantizing conv weights for layer conv4...
Threshold params shape= (256,)
Min: 0 , Max: 1.0000967
n: 32768 , len(bin_edges): 1241800
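From the numbers above I can at least verify one relationship: sf_layer_out * (2^(bw_layer_out - 1) - 1) equals th_layer_out, which looks like the scale factor of a symmetric 16-bit quantizer. A quick check in Python (my own sketch, not the xfdnn source; the numpy.histogram assumption for bin_edges is mine):

    import numpy as np

    bw = 16                      # bw_layer_out from the log above
    th = 0.9681146885301648      # th_layer_out
    sf = 2.9545417295759904e-05  # sf_layer_out

    # Symmetric fixed-point scale: th / (2^(bw-1) - 1)
    print(th / (2 ** (bw - 1) - 1))  # 2.9545417295759904e-05, matches sf
    print(2 ** (bw - 1))             # 32768, matches the logged n

    # If the calibrator uses numpy.histogram, bin_edges has one more
    # entry than the number of bins, so len(bin_edges) = 34165 would
    # mean 34164 histogram bins for that layer.
    _, bin_edges = np.histogram(np.random.rand(1000), bins=34164)
    print(len(bin_edges))            # 34165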

 

The original VGG-16 starts with len(bin_edges) of 4009, which decreases toward the end of the model, while mine starts at 1026 (my input is 128x128 rather than VGG-16's 224x224) but spikes after pool1 and pool3.

After a couple of hours there was still no progress with the quantization of my model. I guess either my model is wrong, or the weights are, or both.

I would like to resolve this issue, but since the quantization.py script is not available I'm not sure what some of the variables represent. What do len(bin_edges), th_layer_out and sf_layer_out represent?

Kind regards,

Ilja

2 Replies
Xilinx Employee

Re: Quantization problem with len(bin_edges)

Hi Ilja,

 

What calibration size are you using? My first suggestion would be to decrease it, since the time it takes to quantize is proportional to the calibration size. As for what a reasonable amount of time is, that depends on your network, e.g., the number of layers and each layer's parameters. VGG-style networks generally take more time than networks like GoogLeNet or ResNet.
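To illustrate why it's proportional: each calibration image costs one forward pass plus a histogram update for every layer, so the total work grows linearly with the number of images. Roughly like this (a generic sketch with hypothetical names such as model.forward, not the actual xfdnn code):

    import numpy as np

    def calibrate(model, calibration_images, bins=2048, max_val=8.0):
        # One forward pass per calibration image; per-layer histograms
        # are accumulated across images, so cost is linear in the image
        # count. A real calibrator would also track each layer's value
        # range rather than use a fixed max_val.
        hists = {}
        for img in calibration_images:
            acts = model.forward(img)  # hypothetical: layer name -> ndarray
            for name, a in acts.items():
                h, _ = np.histogram(np.abs(a), bins=bins,
                                    range=(0.0, max_val))
                hists[name] = hists.get(name, 0) + h
        return hists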

Visitor iljaiv

Re: Quantization problem with len(bin_edges)

For my model I used only 1 image for calibration; for VGG-16 I used 5 images. With 5 calibration images, VGG-16 takes under 30 seconds to quantize.

I have attached the Caffe description of the model. Briefly, it looks like this:

 

Input_Dim128x128
4xConv1_64outputs(kernel_size:3x3)
Pool1(2x2)
4xConv2_128outputs(3x3)
Pool2(2x2)
4xConv3_256outputs(3x3)
Pool3(2x2)
4xConv4_256outputs(3x3)
Pool4(2x2)
4xConv5_512outputs(3x3)
Pool5(2x2)
2xFC_1024outputs
FC_6outputs(classes)

In total the model has 17M parameters and 16M activations, compared to VGG-16's 138M parameters and 288M activations, so my model is considerably smaller.
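The per-layer arithmetic behind those totals is standard; a quick sketch (my own helper functions, the authoritative counts come from the attached prototxt):

    def conv_params(in_ch, out_ch, k=3, bias=True):
        # one k x k kernel per (input channel, output channel) pair
        return k * k * in_ch * out_ch + (out_ch if bias else 0)

    def fc_params(in_features, out_features, bias=True):
        return in_features * out_features + (out_features if bias else 0)

    def conv_activations(out_ch, h, w):
        # elements in one layer's output feature map
        return out_ch * h * w

    # Example: first conv of my conv2 block (64 -> 128 channels, 3x3),
    # running on 64x64 maps after pool1:
    print(conv_params(64, 128))           # 73856 parameters
    print(conv_activations(128, 64, 64))  # 524288 activations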

 
