
Visitor sairahul321
Registered: ‎10-09-2018

Quantization support for batch normalization?

The ResNet-50 model included in the models downloaded from the ML lounge works fine with the quantization scripts, but the vanilla ResNet-50 model downloaded from the deepdest webpage produces warnings that quantization for batch normalization is not supported. From the revision information I learned that it is supported as of v1.0 and v1.1, so I downloaded the v1.2 branch, but it did not support quantization for batch normalization either. When will this be supported?


Also, how was the ResNet-50 model downloaded from the ML lounge created? Are there any instructions for supporting ResNet-101 and ResNet-152 networks?

1 Reply
Xilinx Employee
Registered: ‎09-11-2014

Re: Quantization support for batch normalization?



So the truth is that the quantizer doesn't support batch norm directly. 


In reality, we never want to run batch-norm layers on the FPGA, because it is wasteful: they only shift the mean and scale the distribution of the activations. What is done instead is batch-norm merging, where the learned BN parameters are folded into the weights of the preceding layer, typically a convolution.
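To make the merging concrete, here is a minimal numpy sketch of how BN folding works for a convolution (not the Xilinx tool's actual code; the function name and tensor layout are assumptions for illustration). Because BN at inference time is just a per-channel affine transform, it can be absorbed into the conv's weights and bias:

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a BatchNorm layer's learned parameters into the
    preceding convolution's weights and bias.

    W:     conv weights, shape (out_ch, in_ch, kh, kw)
    b:     conv bias, shape (out_ch,)
    gamma, beta, mean, var: BN parameters, each shape (out_ch,)
    """
    scale = gamma / np.sqrt(var + eps)          # per-channel BN scale
    W_folded = W * scale[:, None, None, None]   # scale each output filter
    b_folded = (b - mean) * scale + beta        # absorb the BN shift into the bias
    return W_folded, b_folded
```

After folding, the conv alone produces the same output as conv followed by BN, so the BN layer can simply be dropped before quantization.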


Mechanically, this is achieved by passing the network through a compiler stage: the compiler writes out a new prototxt & caffemodel using the "anew" flag, and that intermediate prototxt & caffemodel pair is then passed into the quantizer.
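A rough sketch of that two-stage flow follows. The script names, paths, and all flags except "anew" (which the reply mentions) are hypothetical placeholders; check the script names and options in your ML Suite release:

```shell
# Stage 1 (hypothetical invocation): compiler pass merges BN into the
# preceding conv layers and writes out a merged prototxt/caffemodel;
# the "anew" flag names the merged intermediate model.
python compiler_caffe.py \
    --prototxt deploy.prototxt \
    --caffemodel resnet50.caffemodel \
    --anew resnet50_merged

# Stage 2 (hypothetical invocation): quantizer consumes the merged,
# BN-free intermediate model rather than the original network.
python quantizer_caffe.py \
    --deploy_model resnet50_merged.prototxt \
    --weights resnet50_merged.caffemodel
```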


If you look at our Jupyter notebook tutorials you can see this happening, though perhaps it is not highlighted boldly enough.


Here is a diagram showing the flow: