neutrinoxy
Observer
305 Views
Registered: 04-06-2018

[Decent] Cannot quantize custom model

Hi,

I'm trying to quantize a model that consists of several dense layers followed by conv1d layers. When I run decent_q, it outputs the following warnings:

[DEPLOY WARNING] Node conv_0/BiasAdd(Type: BiasAdd) is not quantized and cannot be deployed to DPU,because it has unquantized input node: conv_0/bias. Please deploy it on CPU.
[DEPLOY WARNING] Node conv_0/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: conv_0/BiasAdd. Please deploy it on CPU.
[DEPLOY WARNING] Node conv_1/BiasAdd(Type: BiasAdd) is not quantized and cannot be deployed to DPU,because it has unquantized input node: conv_1/bias. Please deploy it on CPU.
[DEPLOY WARNING] Node conv_1/Relu(Type: Relu) is not quantized and cannot be deployed to DPU,because it has unquantized input node: conv_1/BiasAdd. Please deploy it on CPU.

Another thing I don't understand: the generated deploy_model.pb is no smaller than the frozen model, even though quantized weights should shrink it. This suggests the dense weights are not being quantized.

Furthermore, when I run DNNC, I get the following error:

[DNNC][Error] 'Const' op should be fused with current op [BiasAdd] by DECENT.

The only "unusual" operation in my model is this one, occurring after the dense layers:

layers.Lambda(lambda x: tf.reshape(x, [-1, N, 1]))(estimators[index])
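
For reference, here is roughly how that Lambda sits in the model (a simplified sketch; N, the layer widths, and the layer names are placeholders for my real values):

import tensorflow as tf
from tensorflow.keras import layers

N = 64  # placeholder for the actual reshape length

inp = layers.Input(shape=(N,))
x = layers.Dense(N, activation='relu', name='dense_0')(inp)
# reshape (batch, N) -> (batch, N, 1) so Conv1D can consume the dense output
x = layers.Lambda(lambda t: tf.reshape(t, [-1, N, 1]))(x)
x = layers.Conv1D(16, 3, padding='same', activation='relu', name='conv_0')(x)
model = tf.keras.Model(inputs=inp, outputs=x)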

Any ideas?

Thank you,

Neutrinoxy

0 Kudos
3 Replies
Xilinx Employee
239 Views
Registered: 07-16-2008

Re: [Decent] Cannot quantize custom model

You may want to take a look at the network features supported by the DPU:

https://www.xilinx.com/support/documentation/ip_documentation/dpu/v3_0/pg338-dpu.pdf

See Table 7 on page 20.

Try the --ignore_nodes option to leave the unsupported nodes unquantized during quantization.
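
For example, something like this (a sketch only; the input/output flags are placeholders, so keep the ones from your existing quantize command; the node names for --ignore_nodes are taken from your warnings):

decent_q quantize \
    --input_frozen_graph frozen_model.pb \
    --input_nodes input_1 \
    --output_nodes conv_1/Relu \
    --input_fn my_input_fn.calib_input \
    --ignore_nodes conv_0/BiasAdd,conv_1/BiasAdd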

-------------------------------------------------------------------------
Don't forget to reply, kudo, and accept as solution.
-------------------------------------------------------------------------
109 Views
Registered: 10-17-2019

Re: [Decent] Cannot quantize custom model

@graces If I use the --ignore_nodes option, will compilation with DNNC still work?

0 Kudos
Xilinx Employee
74 Views
Registered: 03-21-2008

Re: [Decent] Cannot quantize custom model

The reason the size of the quantized frozen model is not reduced is that the 8-bit quantized weights are still stored as FP32 values.
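
In other words, the quantization is, in effect, "fake quantization": the weights are rounded to 8-bit levels but kept in float32 storage, so the .pb file does not shrink. A minimal numpy sketch of the idea (illustrative only, not decent_q's actual implementation):

import numpy as np

# Illustrative sketch: weights rounded to 8-bit levels (fake quantization)
# but still stored as float32, so the serialized size is unchanged.
w = np.random.randn(1024, 1024).astype(np.float32)
scale = np.abs(w).max() / 127.0
w_q = (np.round(w / scale).clip(-127, 127) * scale).astype(np.float32)

print(len(np.unique(w_q)) <= 255)   # True: at most 255 distinct levels
print(w.nbytes == w_q.nbytes)       # True: same storage size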

0 Kudos