ru2308hoi
Visitor
Registered: 04-14-2021

Vitis AI deploy_model.pb or quantize_eval_model.pb?


After quantization, two .pb files are generated. I am targeting the DPUCZDX8G (Ultra96); which one should I use to generate the .xmodel?

The user guide seems to say I should use deploy_model.pb, but I get an error when generating the .xmodel from it.

quantize_eval_model.pb can generate the .xmodel, but the resulting model does not function correctly.

From the user guide:

Table 8: vai_q_tensorflow Output Files

deploy_model.pb: Quantized model for the VAI compiler (extended TensorFlow format), targeting DPUCZDX8G implementations.

quantize_eval_model.pb: Quantized model for evaluation (also the VAI compiler input for most other DPU architectures, such as DPUCAHX8H, DPUCAHX8L, and DPUCADF8H).


Accepted Solution
jheaton
Xilinx Employee
Registered: 03-21-2008

For Vitis-AI 1.3 or higher, use quantize_eval_model.pb as the compiler input.

For Vitis-AI 1.2 and earlier, use deploy_model.pb.
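For reference, a Vitis-AI 1.3+ compile step for a DPUCZDX8G target might look like the sketch below. The input/output paths, net name, and arch.json location are placeholders for your own setup; the Ultra96 in particular needs an arch.json matching the DPU fingerprint of your board image, since it is not among the stock arch files shipped for ZCU102/ZCU104:

```shell
# Sketch only: compile the quantized graph for a DPUCZDX8G target.
# Adjust all paths and names to your environment; arch.json must match
# the DPU configuration actually deployed on the board.
vai_c_tensorflow \
  --frozen_pb quantize_results/quantize_eval_model.pb \
  --arch ./arch.json \
  --output_dir compile_results \
  --net_name my_model
```

On success the compiler writes the .xmodel into the output directory, which you then copy to the target board for deployment.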

