Registered: ‎12-04-2019

vai_q_pytorch quant_mode 1 export quant config failed


I am trying to quantize a model that contains torch.nn.PixelShuffle().

Quantization initially reported an error caused by the PixelShuffle layer,

so I removed that layer for the quantization run.

With quant_mode 1, the tool generates the quantizable module, but it reports an error when exporting the quant config:

 

orkspace/SRHW$ python quantize.py
[NNDCT_NOTE]: Loading NNDCT kernels...
---------------------- Model Loaded  ----------------------
---------------------- Trace Started ----------------------
---------------------- End of trace ----------------------
---------------------- Start SRHW Quantization ----------------------
[NNDCT_NOTE]: Quantization calibration process start up...
[NNDCT_NOTE]: =>Parsing SRHW...
[NNDCT_NOTE]: =>Quantizable module is generated.(quantize_result/SRHW.py)
[NNDCT_NOTE]: =>Exporting quant config.(quantize_result/quant_info.json)
Traceback (most recent call last):
  File "quantize.py", line 170, in <module>
    dirc=args.data_dir,quant_mode=args.quant_mode,val=False)
  File "quantize.py", line 130, in quantization
    quantizer.export_quant_config()
  File "/home/htic/.conda/envs/pytorch-xilinx/lib/python3.7/site-packages/pytorch_nndct/quantization/torchquantizer.py", line 221, in export_quant_config
    self.organize_quant_pos()
  File "/home/htic/.conda/envs/pytorch-xilinx/lib/python3.7/site-packages/pytorch_nndct/quantization/torchquantizer.py", line 203, in organize_quant_pos
    (out_name, self.quant_config['blobs'][input_name][1]))
KeyError: None

 

Looking into the torchquantizer.py script

https://github.com/Xilinx/Vitis-AI/blob/master/Vitis-AI-Quantizer/vai_q_pytorch/pytorch_binding/pytorch_nndct/quantization/torchquantizer.py

at line 200, self.configer.quant_inputs() returns None.

Why is this occurring, and how can it be resolved?

Thanks in advance

 


Accepted Solutions

Performing a forward pass (evaluation) on the quantized model before exporting the quant config solved the issue.
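For reference, a minimal sketch of the working calibration flow, assuming the standard vai_q_pytorch `torch_quantizer` API of that release; the SRHW model, input shape, and dummy input here are placeholders, not the poster's actual quantize.py:

```python
import torch
from pytorch_nndct.apis import torch_quantizer

# Placeholder model and calibration input; substitute the real SRHW
# model and a representative input tensor from the dataset.
model = SRHW().eval()
dummy_input = torch.randn(1, 3, 64, 64)

# quant_mode=1 is the calibration step in this release.
quantizer = torch_quantizer(1, model, (dummy_input,))
quant_model = quantizer.quant_model

# Run at least one forward pass so quantization positions are
# recorded; skipping this is what leads to KeyError: None in
# organize_quant_pos() during export.
with torch.no_grad():
    quant_model(dummy_input)

# Only then export the quant config (quantize_result/quant_info.json).
quantizer.export_quant_config()
```

This sketch requires the Vitis-AI conda environment (pytorch_nndct) to run.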

