hungdbk92 (Participant)

Does Vitis Quantizer fully support Convolution Transpose layer?

Hi everybody,

I have a graph that uses transposed convolution:

            layers += [
                torch.nn.ConvTranspose2d(inch, output_channels, kernel_size=4, stride=2, padding=1),
                torch.nn.BatchNorm2d(output_channels),
                torch.nn.ReLU()
            ]

It was trained in PyTorch and then converted to TensorFlow via Keras using pytorch2keras (https://github.com/nerox8664/pytorch2keras); the final graph looks like this:

[tp.png: attached screenshot of the converted graph around the ConvTranspose2d block]
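
For context, the conversion with pytorch2keras looks roughly like the sketch below. This is a minimal sketch, assuming a 3-channel input; the model builder, input size, and file name are placeholders rather than the original code, and the import path may differ between pytorch2keras versions.

import numpy as np
import torch
from pytorch2keras.converter import pytorch_to_keras  # import path may vary by version

model = build_model()  # hypothetical: the PyTorch model containing the ConvTranspose2d blocks
model.eval()           # run BatchNorm in inference mode while tracing

dummy_input = torch.from_numpy(np.random.rand(1, 3, 256, 256).astype(np.float32))
# change_ordering=True emits an NHWC Keras graph, matching the NHWC deployment target
k_model = pytorch_to_keras(model, dummy_input, [(3, 256, 256)], change_ordering=True, verbose=True)
k_model.save("converted_keras_model.h5")  # hypothetical output file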

Quantizing this graph with Vitis-AI 1.1 gives an error saying that the FusedBatchNorm node cannot be quantized:

INFO: Checking Float Graph...
INFO: Float Graph Check Done.
INFO: Calibrating for 10 iterations...
100% (10 of 10) |###########################################################################################################################################################| Elapsed Time: 0:00:07 Time:  0:00:07
INFO: Calibration Done.
INFO: Generating Deploy Model...
2020-05-29 01:20:02.384739: F tensorflow/contrib/decent_q/utils/deploy_quantized_graph.cc:768] Check failed: scale_node.attr().count("wpos") [DEPLOY ERROR] Cannnot find quantize info for weights: 240_1/FusedBatchNorm/scale
Fatal Python error: Aborted

Current thread 0x00007f076c210740 (most recent call first):
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tensorflow_core/contrib/decent_q/python/quantize_graph.py", line 232 in CreateQuantizeDeployGraphDef
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tensorflow_core/contrib/decent_q/python/decent_q.py", line 293 in deploy_frozen
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tensorflow_core/contrib/decent_q/python/decent_q.py", line 327 in quantize_frozen
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tensorflow_core/contrib/decent_q/python/decent_q.py", line 576 in main
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tensorflow_core/contrib/decent_q/python/decent_q.py", line 780 in <lambda>
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/absl/app.py", line 250 in _run_main
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/absl/app.py", line 299 in run
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tensorflow_core/python/platform/app.py", line 40 in run
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tensorflow_core/contrib/decent_q/python/decent_q.py", line 781 in run_main
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/bin/vai_q_tensorflow", line 11 in <module>
Aborted (core dumped)

After removing BatchNorm with the following code:

torch.nn.ConvTranspose2d(inch, output_channels, kernel_size=4, stride=2, padding=1),
#torch.nn.BatchNorm2d(output_channels),
torch.nn.ReLU()

the graph can be quantized, but parts of the conv_transpose layer cannot be deployed on the DPU:

INFO: Checking Float Graph...
INFO: Float Graph Check Done.
INFO: Calibrating for 10 iterations...
100% (10 of 10) |###########################################################################################################################################################| Elapsed Time: 0:00:10 Time:  0:00:10
INFO: Calibration Done.
INFO: Generating Deploy Model...
[DEPLOY WARNING] Node 217_crop_1/strided_slice(Type: StridedSlice) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 217_crop_1/strided_slice/stack. Please deploy it on CPU.
[DEPLOY WARNING] Node 218_1/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 217_crop_1/strided_slice. Please deploy it on CPU.
[DEPLOY WARNING] Node 219_crop_1/strided_slice(Type: StridedSlice) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 219_crop_1/strided_slice/stack. Please deploy it on CPU.
[DEPLOY WARNING] Node 220_1/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 219_crop_1/strided_slice. Please deploy it on CPU.
[DEPLOY WARNING] Node 221_crop_1/strided_slice(Type: StridedSlice) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 221_crop_1/strided_slice/stack. Please deploy it on CPU.
[DEPLOY WARNING] Node 222_1/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 221_crop_1/strided_slice. Please deploy it on CPU.
[DEPLOY WARNING] Node 227_1/mul(Type: Mul) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 224_1/Tanh. Please deploy it on CPU.
[DEPLOY WARNING] Node 209_crop_1/strided_slice(Type: StridedSlice) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 209_crop_1/strided_slice/stack. Please deploy it on CPU.
[DEPLOY WARNING] Node 210_1/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 209_crop_1/strided_slice. Please deploy it on CPU.
[DEPLOY WARNING] Node 211_crop_1/strided_slice(Type: StridedSlice) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 211_crop_1/strided_slice/stack. Please deploy it on CPU.
[DEPLOY WARNING] Node 212_1/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 211_crop_1/strided_slice. Please deploy it on CPU.
[DEPLOY WARNING] Node 213_crop_1/strided_slice(Type: StridedSlice) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 213_crop_1/strided_slice/stack. Please deploy it on CPU.
[DEPLOY WARNING] Node 214_1/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 213_crop_1/strided_slice. Please deploy it on CPU.
[DEPLOY WARNING] Node 225_1/mul(Type: Mul) is not quantized and cannot be deployed to DPU,because it has unquantized input node: 216_1/Sigmoid. Please deploy it on CPU.
INFO: Deploy Model Generated.
********************* Quantization Summary *********************      
INFO: Output:       
  quantize_eval_model: ./quantized_output_nhwc/quantize_eval_model.pb       
  deploy_model: ./quantized_output_nhwc/deploy_model.pb

So does Vitis AI fully support the conv2d_transpose layer?

Conflicting information

Below are the supported layers from the Vitis-AI 1.1 documentation (https://www.xilinx.com/support/documentation/sw_manuals/vitis_ai/1_1/ug1414-vitis-ai.pdf, page 59, or at https://www.xilinx.com/html_docs/vitis_ai/1_1/yhx1576126008370.html):

[doc.png: attached screenshot of the supported-operators table from UG1414]

It says that the ConvolutionTranspose layer is supported.

But others have reported that it is not a supported layer:

https://forums.xilinx.com/t5/AI-and-Vitis-AI/Does-vai-q-tensorflow-decent-q-not-support-Conv2DTranspose/m-p/1069653#M3192

https://forums.xilinx.com/t5/AI-and-Vitis-AI/Support-for-conv

Thank you!

Reply from graces (Moderator):

For BatchNorm, ensure that it is in inference phase when freezing the graph. For example, for models using tf.keras, call tf.keras.backend.set_learning_phase(0) before building the graph.
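
A minimal sketch of that workflow, assuming the converted model is available as a Keras .h5 file; the file names and output-node handling are placeholders, not from the original post:

import tensorflow as tf

tf.keras.backend.set_learning_phase(0)  # must be called before the graph is built

model = tf.keras.models.load_model("converted_keras_model.h5")  # hypothetical path

# Freeze variables into constants so vai_q_tensorflow sees an inference-only graph
sess = tf.compat.v1.keras.backend.get_session()
frozen_graph_def = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph_def, [out.op.name for out in model.outputs])
with tf.io.gfile.GFile("frozen_graph.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())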

For torch.nn.BatchNorm2d, the parameter looks to be affine=False.
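
For reference, affine controls whether BatchNorm2d carries learnable per-channel scale/shift weights (the default is affine=True); a short illustration:

import torch

bn_affine = torch.nn.BatchNorm2d(64)                    # default affine=True: has weight (scale) and bias (shift)
print(bn_affine.weight is not None)                     # True

bn_no_affine = torch.nn.BatchNorm2d(64, affine=False)   # no learnable scale/shift to quantize
print(bn_no_affine.weight is None)                      # True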

ConvTranspose2d looks to be supported per UG1414. From the warnings, the StridedSlice operation cannot be deployed to the DPU. Would you please try setting --output_nodes to a node prior to the StridedSlice operator?
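
To find a suitable node name for --output_nodes, one option is to inspect the frozen graph and look for the node feeding the StridedSlice ops. A minimal sketch, assuming the frozen graph is saved as frozen_graph.pb (the path is a placeholder):

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("frozen_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Conv2DBackpropInput is the TensorFlow op behind conv2d_transpose
for node in graph_def.node:
    if node.op in ("Conv2DBackpropInput", "StridedSlice", "Relu"):
        print(node.op, node.name, list(node.input))

The name printed for the node just before the StridedSlice ops can then be passed to vai_q_tensorflow as --output_nodes.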

 
