10-08-2018 08:10 AM
I am trying to compile the classic SSD+VGG16 network with DNNDK 2.06 beta. The slides and the User Guide claim "one-click compilation support" for SSD and VGG16.
However, when I try to compile it I get:
[DNNC][Error] Unrecognized layer type [Normalize], Maybe you can delete it in deploy.prototxt and try again.
Is it planned to add support for the Normalize layer in future releases? Or is there some kind of replacement?
10-17-2018 08:01 PM
We made the following modifications to the original Caffe-trained SSD network:
a) Change the Normalize layer to BatchNorm + Scale in train.prototxt and test.prototxt, then retrain or fine-tune.
b) Remove the MultiBoxLoss layer from train.prototxt before running decent.
c) For convenience in accuracy testing, train.prototxt and test.prototxt are merged into train_test.prototxt:
- Copy train.prototxt to train_test.prototxt.
- Copy & paste the first layer of test.prototxt (the AnnotatedData layer) after the data layer of train_test.prototxt.
- Copy & paste the last 5 layers of test.prototxt, starting from the mbox_conf_reshape layer, to the end of train_test.prototxt,
then add the following parameter in these layers,
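For step a), the replacement might look like the sketch below. This is a minimal example of the standard Caffe BatchNorm + Scale pair; the layer names and the bottom blob (conv4_3, the usual input of SSD's Normalize layer) are assumptions you should adapt to your own prototxt.

```protobuf
# Hypothetical names; replaces the original "conv4_3_norm" Normalize layer.
layer {
  name: "conv4_3_norm_bn"
  type: "BatchNorm"
  bottom: "conv4_3"          # adapt to the blob your Normalize layer consumed
  top: "conv4_3_norm"
  batch_norm_param {
    use_global_stats: true   # true for test/deploy; false while training
  }
}
layer {
  name: "conv4_3_norm_scale"
  type: "Scale"
  bottom: "conv4_3_norm"
  top: "conv4_3_norm"        # in-place on the BatchNorm output
  scale_param {
    bias_term: true          # learnable per-channel scale and bias
  }
}
```

Keep the final top blob name identical to the original Normalize layer's top so the downstream mbox layers connect unchanged.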
I hope this answers your question.
11-12-2018 02:23 AM
Thank you very much for your efforts. Sorry for my late reply; I was stuck on other projects.
I tried your changes, but there are still issues:
1) With c), do you mean the accuracy test in decent during quantization? I still get "Calibration iter: 1/100, loss: 0" here. Could you please post a train_test.prototxt for reference?
2) After running decent, I get a deploy.prototxt/caffemodel and a fix_train_test.prototxt/caffemodel. When I try to feed them into dnnc, I get these messages:
12 bits cannot represent value larger than 4095, but 6143 is given
But deploy.prototxt is probably intended to be the pruned network and should not be fed into dnnc, should it?
Message type "caffe.FixedParameter" has no field named "follow_data_layer".
But there is a follow_data_layer:true parameter in the generated FixedNeuron layer.
When I remove the follow_data_layer parameter, I get
Check failed for condition [layer_id < model.layer_size()] in [/tmp/DNNDK_All_V010_Package/dnnc/src/parser/caffe/caffeparser.cc:129] :Failed to find parameter for [mbox_conf_reshape] in caffemodel, make sure name in prototxt is consistent with that in caffemodel.
And when I remove those layers, I get
Invalid caffemodel, fixinfo for layers are missing.
It is quite difficult for me to "debug" here. Could you please provide a basic example for SSD (or a modified version without the Normalize layer, etc.)? A tutorial similar to the resnet50 tutorial would be really great!