Contributor
473 Views
Registered: ‎11-08-2018

Can I reuse quantized model working on tensorflow?

After compiling, I get a compiled model. After quantizing the compiled model, I get a quantized model (int8 or int16).

Can I reuse this quantized model in TensorFlow?

2 Replies
Contributor
442 Views
Registered: ‎11-08-2018

Re: Can I reuse quantized model working on tensorflow?

In addition to the above post, I would like to add some detail.

-> In Xilinx ml-suite, after quantizing the compiled model, I want to reuse the quantized model in TensorFlow before deploying it on the FPGA.

-> I want a .pb file, in addition to the .json (int8 or int16) file, as the output of the quantizer.

Xilinx Employee
432 Views
Registered: ‎11-20-2018

Re: Can I reuse quantized model working on tensorflow?

The quantizer in ml-suite does not change the input model. Rather, it produces parameters used to interpret the provided floating-point model and execute it with ml-suite in int8 or int16. In other words, there is no quantized .pb file to export back into TensorFlow; the original floating-point graph stays as-is, and the .json holds the quantization parameters the runtime applies.
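To illustrate the idea, here is a minimal sketch of how per-tensor quantization parameters (like those the quantizer writes to its .json output) map float values to int8 and back at execution time. The function names and the example scale value are hypothetical, not part of the ml-suite API:

```python
import numpy as np

def quantize_int8(x, scale):
    # Symmetric per-tensor quantization: divide by scale, round,
    # and clip to the int8 range [-128, 127].
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    # Recover an approximation of the original floats.
    return q.astype(np.float32) * scale

# Hypothetical scale, as a quantizer might record in its .json output.
scale = 0.03125
weights = np.array([0.5, -1.0, 0.03125], dtype=np.float32)

q = quantize_int8(weights, scale)        # int8 values: [16, -32, 1]
approx = dequantize(q, scale)            # back to [0.5, -1.0, 0.03125]
```

The float model itself is never rewritten; the scale parameters are all the runtime needs to execute it in integer arithmetic.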
