Participant
Registered: 04-06-2018

[Decent] Cannot quantize custom model

Hi,

I'm trying to quantize a model that consists of several dense layers followed by conv1d layers. When I run decent_q, it outputs the following warnings:

[DEPLOY WARNING] Node conv_0/BiasAdd(Type: BiasAdd) is not quantized and cannot be deployed to DPU,because it has unquantized input node: conv_0/bias. Please deploy it on CPU.
[DEPLOY WARNING] Node conv_0/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: conv_0/BiasAdd. Please deploy it on CPU.
[DEPLOY WARNING] Node conv_1/BiasAdd(Type: BiasAdd) is not quantized and cannot be deployed to DPU,because it has unquantized input node: conv_1/bias. Please deploy it on CPU.
[DEPLOY WARNING] Node conv_1/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: conv_1/BiasAdd. Please deploy it on CPU.

Another thing I don't understand is that the generated deploy_model.pb doesn't decrease in size compared to the frozen model, even though the quantized weights should make it smaller. This suggests that the Dense weights are not being quantized.

Furthermore, when I run DNNC, I get the following error:

[DNNC][Error] 'Const' op should be fused with current op [BiasAdd] by DECENT.

The only "unusual" operation in my model is this one, occurring after the dense layers:

layers.Lambda(lambda x: tf.reshape(x, [-1, N, 1]))(estimators[index])
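
For reference, here is a minimal sketch of the pattern (a hypothetical reconstruction; the layer sizes and names are placeholders, not my exact model):

import tensorflow as tf
from tensorflow.keras import layers

N = 16  # placeholder width

inputs = tf.keras.Input(shape=(32,))
x = layers.Dense(64, activation="relu")(inputs)
x = layers.Dense(N)(x)
# reshape the flat dense output into an (N, 1) sequence for the conv1d stack
x = layers.Lambda(lambda t: tf.reshape(t, [-1, N, 1]))(x)
x = layers.Conv1D(8, 3, padding="same", activation="relu", name="conv_0")(x)
x = layers.Conv1D(1, 3, padding="same", name="conv_1")(x)
model = tf.keras.Model(inputs, x)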

Any ideas?

Thank you,

Neutrinoxy

Xilinx Employee
Registered: 07-16-2008

Re: [Decent] Cannot quantize custom model

You may want to take a look at the network features supported by the DPU:

https://www.xilinx.com/support/documentation/ip_documentation/dpu/v3_0/pg338-dpu.pdf

p. 20, Table 7

Try --ignore_nodes to leave those nodes unquantized during quantization; see the example below.
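
A hypothetical invocation (the graph file, node names, input shape, and calibration function below are placeholders for your own):

decent_q quantize \
  --input_frozen_graph frozen_model.pb \
  --input_nodes input_1 \
  --input_shapes ?,32 \
  --output_nodes conv_1/Relu \
  --ignore_nodes "lambda/Reshape" \
  --input_fn calib.calib_fn \
  --calib_iter 100

The ignored nodes stay in float, so they must run on the CPU, which matches the [DEPLOY WARNING] messages above.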

Registered: 10-17-2019

Re: [Decent] Cannot quantize custom model

@graces If I use the --ignore_nodes option, will compilation with DNNC still work?

Xilinx Employee
Registered: 03-21-2008

Re: [Decent] Cannot quantize custom model

The size of the quantized frozen model is not reduced because the 8-bit quantized weights are still stored as FP32 values.
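
In other words, decent_q performs "fake" quantization: each weight is rounded to an 8-bit level but written back to the .pb as float32. A rough numpy illustration (the power-of-two scale below is an assumption for the example, not DECENT's actual parameters):

import numpy as np

w = np.random.randn(4).astype(np.float32)
frac_bits = 6                        # hypothetical fixed-point position
scale = 2.0 ** frac_bits
w_q = (np.clip(np.round(w * scale), -128, 127) / scale).astype(np.float32)
print(w_q.dtype)                     # float32: each weight still takes 4 bytes on disk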
