07-14-2019 01:51 AM
Hello everyone, I have a question about quantizing a floating-point model and its weights with DECENT from DNNDK.
My commands and their results are shown here.
I can't tell whether it completed successfully, and I can't find any output file, so I suspect something went wrong.
This is my first time learning DNNDK, and I would be very grateful for any answers to my questions!
07-15-2019 05:26 AM
Hi @liuyang151617 ,
What you are showing in the screenshot below is the help output of decent. I think you may have added a parameter to the command line that isn't quite right (weights_bit). Generally, you don't need to add the weights_bit parameter - it should automatically quantize to 8 bits. Otherwise your command line should be OK (you can also check the example command lines shown in the help output for reference).
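For reference, a typical decent quantize invocation looks something like the sketch below (based on the DNNDK user guide; the model/weights file names, GPU index, and calibration iteration count are placeholders you would replace with your own):

```
# Quantize a float Caffe model to 8-bit fixed point.
# float.prototxt must contain the calibration ImageData input layer.
decent quantize -model float.prototxt \
                -weights float.caffemodel \
                -gpu 0 \
                -calib_iter 100
```

If quantization completes successfully, the quantized deploy.prototxt and deploy.caffemodel should by default be written to a quantize_results directory under the working directory - that is the output to look for.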
You also need calibration images for the quantization process, and you need to ensure that your prototxt points to these, as well as to the calibration.txt, as part of the input layer. The calibration.txt file just needs to list the image names along with a number in the second column (it can simply be '0' for each entry). I've attached an example here for reference from the SSD tutorial.
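To illustrate the format: each line of calibration.txt is just an image file name followed by a dummy label, e.g. "image_001.jpg 0". Here is a small sketch of a script that generates such a list from a directory of calibration images (the function name and extension list are my own choices, not part of DNNDK):

```python
import os

def write_calibration_list(image_dir, list_path):
    """Write one '<image name> 0' line per image file, in the
    two-column format the calibration input layer expects
    (the second column is an unused dummy label)."""
    exts = (".jpg", ".jpeg", ".png")
    names = sorted(n for n in os.listdir(image_dir)
                   if n.lower().endswith(exts))
    with open(list_path, "w") as f:
        for name in names:
            f.write(f"{name} 0\n")
    return len(names)  # number of calibration images listed
```

You would then point the ImageData (or similar) input layer in your float prototxt at this list file and the image directory.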
Our edge AI tutorials, as well as the DNNDK user guide, include information on this.
Here are a couple of examples: