Visitor m.walz
Registered: 02-08-2018

SSD support in DNNC

Hi,

 

I am trying to compile the classic SSD+VGG16 network with DNNDK 2.06 beta. Both the slides and the User Guide claim "one-click compilation support" for SSD and VGG16.

However, when I try to compile it, I get:


[DNNC][Error] Unrecognized layer type [Normalize], Maybe you can delete it in deploy.prototxt and try again.

Is support for the Normalize layer planned for future releases, or is there some kind of replacement?

 

Best regards

Xilinx Employee
Registered: 02-18-2013

Re: SSD support in DNNC

Hi m.walz,

 

  We made some modifications to the original Caffe-trained SSD network, as follows:

 

  a) Change the Normalize layer to BatchNorm + Scale in train.prototxt and test.prototxt, then retrain or fine-tune (see the sketch after this list).

  b) Remove the MultiBoxLoss layer from train.prototxt before running decent.

  c) For the convenience of accuracy testing, train.prototxt and test.prototxt are merged into train_test.prototxt:

      - Copy train.prototxt to train_test.prototxt

      - Copy & paste the first layer of test.prototxt (the AnnotatedData layer) after the data layer of train_test.prototxt

      - Copy & paste the last 5 layers of test.prototxt, starting from the mbox_conf_reshape layer, to the end of train_test.prototxt,

         then add the following parameter to these layers:

         include {
           phase: TEST
         }
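
  For reference, a minimal sketch of what the replacement in step a) could look like in prototxt. It assumes the standard SSD-VGG16 layer names (a conv4_3 blob feeding a Normalize layer called conv4_3_norm); adjust the names and parameters to your own network before retraining or fine-tuning:

         # Hypothetical sketch: replace the original conv4_3_norm Normalize layer
         # with a BatchNorm + Scale pair. Keeping the same top blob name means
         # the downstream permute/flatten/mbox layers need no changes.
         layer {
           name: "conv4_3_norm_bn"
           type: "BatchNorm"
           bottom: "conv4_3"
           top: "conv4_3_norm"
           batch_norm_param { use_global_stats: false }  # use true for the TEST/deploy copy
         }
         layer {
           name: "conv4_3_norm_scale"
           type: "Scale"
           bottom: "conv4_3_norm"
           top: "conv4_3_norm"
           scale_param { bias_term: true }
         }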

 

  I hope this answers your question.

 

Regards,

Andy

Visitor m.walz
Registered: 02-08-2018

Re: SSD support in DNNC

Hey Andy,

Thank you very much for your efforts. Sorry for my late reply; I was stuck with other projects.

I tried your changes, but there are still issues:

1) With c), do you refer to the accuracy test in decent during quantization? I still get "Calibration iter: 1/100, loss: 0" here. Could you please post a train_test.prototxt for reference?

2) After running the decent tool, I get a deploy.prototxt/caffemodel and a fix_train_test.prototxt/caffemodel. When I try to feed them into dnnc, I get these messages:

deploy.prototxt:
    are you kidding me? I spent 24 hours to dig such bugs, please don't do such stupid things ever again.
    12 bits cannot represent value larger than 4095, but 6143 is given
But presumably deploy.prototxt is intended to be the pruned network and should not be fed into dnnc, right?


fix_train_test.prototxt:
    Message type "caffe.FixedParameter" has no field named "follow_data_layer".
But there is a follow_data_layer: true parameter in the generated FixedNeuron layer. When I remove the follow_data_layer parameter, I get:
    Check failed for condition [layer_id < model.layer_size()] in [/tmp/DNNDK_All_V010_Package/dnnc/src/parser/caffe/caffeparser.cc:129]: Failed to find parameter for [mbox_conf_reshape] in caffemodel, make sure name in prototxt is consistent with that in caffemodel.
And when I remove those layers, I get:
    Invalid caffemodel, fixinfo for layers are missing.

It is quite difficult for me to "debug" here. Could you please provide a basic example for SSD (or a modified version without Normalize etc.)? A tutorial similar to the resnet50 tutorial would be really great!

 

 
