
Observer jiaoliu

dnndkv3 decent failed

I'm using MobileNetV2-SSDLite on Caffe. After running ./decent.sh, the output is as follows:

travis@PC:~/ssd-ssd/DNNDK_Project$ ./decent.sh 
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0516 11:34:54.908308  2311 gpu_memory.cpp:99] GPUMemory::Manager initialized with Caching (CUB) GPU Allocator
I0516 11:34:54.908716  2311 gpu_memory.cpp:101] Total memory: 11720130560, Free: 11224940544, dev_info[0]: total=11720130560 free=11224940544
I0516 11:34:54.908723  2311 decent.cpp:255] Using GPUs 0
I0516 11:34:54.908959  2311 decent.cpp:260] GPU 0: GeForce GTX 1080 Ti
[libprotobuf ERROR google/protobuf/text_format.cc:274] Error parsing text-format caffe.NetParameter: 2518:13: Message type "caffe.ScaleParameter" has no field named "dilation".
F0516 11:34:55.482614  2311 upgrade_proto.cpp:125] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: /home/travis/ssd-ssd/DNNDK_Project/float.prototxt
*** Check failure stack trace: ***
./decent.sh: line 25:  2311 Aborted                 (core dumped) $DECENT quantize -model ${model_dir}/float.prototxt -weights ${model_dir}/float.caffemodel -output_dir ${output_dir} -method 1
travis@PC:~/ssd-ssd/DNNDK_Project$ 
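The parse error above means the Caffe build that decent links against has no `dilation` field in its `ScaleParameter` proto, so the prototxt (likely produced by a model converter) uses a field the installed proto does not know. Rebuilding against the SSD fork of Caffe is one fix; another is to strip the offending field from the prototxt. A rough sketch of the latter (plain Python, no Caffe needed; assumes `scale_param` blocks contain no nested braces):

```python
# Sketch: strip "dilation" fields, which stock Caffe's ScaleParameter
# does not define, from the scale_param blocks of a prototxt.
# Assumption: scale_param blocks contain no nested braces.

def strip_scale_dilation(prototxt_text):
    out, in_scale = [], False
    for line in prototxt_text.splitlines():
        s = line.strip()
        if s.startswith("scale_param"):
            in_scale = True
        if in_scale and s.startswith("dilation"):
            continue  # drop the field the installed proto does not know
        if in_scale and s == "}":
            in_scale = False  # first closing brace ends scale_param
        out.append(line)
    return "\n".join(out) + "\n"

# Example use on the failing model:
# with open("float.prototxt") as f:
#     fixed = strip_scale_dilation(f.read())
```

Whether removing the field is safe depends on whether your converter actually relied on it; if the fork's Caffe defines it, rebuilding is the cleaner route.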

 

13 Replies
Xilinx Employee

Re: dnndkv3 decent failed

(^^)/

 

"[libprotobuf ERROR google/protobuf/text_format.cc:274]" seems to indicate a Caffe error and not a "decent" error. Do you have the Caffe version of SSD installed as shown here?

 

[Attachment: Capture.PNG]

 

Observer jiaoliu

Re: dnndkv3 decent failed

I installed https://github.com/chuanqi305/ssd as my Caffe version of SSD.

Xilinx Employee

Re: dnndkv3 decent failed

Observer jiaoliu

Re: dnndkv3 decent failed

https://github.com/chuanqi305/MobileNetv2-SSDLite

Note

2. MobileNet on TensorFlow uses a ReLU6 layer, y = min(max(x, 0), 6), but Caffe has no ReLU6 layer. Replacing ReLU6 with ReLU causes a small accuracy drop in ssd-mobilenetv2, but a very large drop in ssdlite-mobilenetv2. There is a ReLU6 layer implementation in my fork of ssd.
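The ReLU6 activation that note describes is just a clamp; a one-line sketch for reference:

```python
def relu6(x):
    """ReLU6 as quoted above: y = min(max(x, 0), 6)."""
    return min(max(x, 0.0), 6.0)
```

Because values are capped at 6, replacing it with an uncapped ReLU changes the activation statistics, which is why the note reports an accuracy drop.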

I think https://github.com/chuanqi305/ssd also works; maybe I should check whether I've installed it correctly.

I have another question: I noticed the AI SDK has SSD and MobileNet v2 models but no SSDLite. Is it possible that SSDLite is not supported?

Xilinx Employee

Re: dnndkv3 decent failed

(^^)/

 

Using Caffe only, please check if you can run inference without any errors.

 

Observer jiaoliu

Re: dnndkv3 decent failed

Inference is OK. I just changed DNNDK_Project/float.prototxt, then ran ./decent_ssd.sh and got this:

I0516 16:07:14.656232  5671 net.cpp:205] Memory required for data: 857732520
I0516 16:07:14.656236  5671 layer_factory.hpp:123] Creating layer conv_1/depthwise/margin2_fixed
I0516 16:07:14.656241  5671 net.cpp:140] Creating Layer conv_1/depthwise/margin2_fixed
I0516 16:07:14.656245  5671 net.cpp:481] conv_1/depthwise/margin2_fixed <- conv_1/depthwise/margin2
I0516 16:07:14.656249  5671 net.cpp:442] conv_1/depthwise/margin2_fixed -> conv_1/depthwise/margin2 (in-place)
I0516 16:07:14.656276  5671 net.cpp:190] Setting up conv_1/depthwise/margin2_fixed
I0516 16:07:14.656282  5671 net.cpp:197] Top shape: 10 96 75 1 (72000)
I0516 16:07:14.656286  5671 net.cpp:205] Memory required for data: 858020520
I0516 16:07:14.656291  5671 layer_factory.hpp:123] Creating layer conv_1/project
I0516 16:07:14.656298  5671 net.cpp:140] Creating Layer conv_1/project
I0516 16:07:14.656302  5671 net.cpp:481] conv_1/project <- conv_1/depthwise_slice_1_split_0
I0516 16:07:14.656308  5671 net.cpp:455] conv_1/project -> conv_1/project
I0516 16:07:14.656468  5671 layer_factory.hpp:123] Creating layer conv_1/project
I0516 16:07:14.656751  5671 net.cpp:190] Setting up conv_1/project
I0516 16:07:14.656759  5671 net.cpp:197] Top shape: 10 24 75 75 (1350000)
I0516 16:07:14.656762  5671 net.cpp:205] Memory required for data: 863420520
I0516 16:07:14.656767  5671 layer_factory.hpp:123] Creating layer conv_1/project_fixed
I0516 16:07:14.656771  5671 net.cpp:140] Creating Layer conv_1/project_fixed
I0516 16:07:14.656775  5671 net.cpp:481] conv_1/project_fixed <- conv_1/project
I0516 16:07:14.656780  5671 net.cpp:442] conv_1/project_fixed -> conv_1/project (in-place)
I0516 16:07:14.656807  5671 net.cpp:190] Setting up conv_1/project_fixed
I0516 16:07:14.656812  5671 net.cpp:197] Top shape: 10 24 75 75 (1350000)
I0516 16:07:14.656816  5671 net.cpp:205] Memory required for data: 868820520
I0516 16:07:14.656819  5671 layer_factory.hpp:123] Creating layer conv_1/project_conv_1/project_fixed_0_split
I0516 16:07:14.656824  5671 net.cpp:140] Creating Layer conv_1/project_conv_1/project_fixed_0_split
I0516 16:07:14.656828  5671 net.cpp:481] conv_1/project_conv_1/project_fixed_0_split <- conv_1/project
I0516 16:07:14.656834  5671 net.cpp:455] conv_1/project_conv_1/project_fixed_0_split -> conv_1/project_conv_1/project_fixed_0_split_0
I0516 16:07:14.656839  5671 net.cpp:455] conv_1/project_conv_1/project_fixed_0_split -> conv_1/project_conv_1/project_fixed_0_split_1
I0516 16:07:14.656868  5671 net.cpp:190] Setting up conv_1/project_conv_1/project_fixed_0_split
I0516 16:07:14.656874  5671 net.cpp:197] Top shape: 10 24 75 75 (1350000)
I0516 16:07:14.656877  5671 net.cpp:197] Top shape: 10 24 75 75 (1350000)
I0516 16:07:14.656882  5671 net.cpp:205] Memory required for data: 879620520
I0516 16:07:14.656884  5671 layer_factory.hpp:123] Creating layer conv_1/depthwise/relu
I0516 16:07:14.656893  5671 net.cpp:140] Creating Layer conv_1/depthwise/relu
I0516 16:07:14.656895  5671 net.cpp:481] conv_1/depthwise/relu <- conv_1/depthwise_slice_1_split_1
F0516 16:07:14.656903  5671 net.cpp:450] Top blob 'conv_1/depthwise' produced by multiple sources.
*** Check failure stack trace: ***
./decent_ssd.sh: line 13:  5671 Aborted                 (core dumped) decent quantize -model ${model_dir}/float.prototxt -weights ${model_dir}/float.caffemodel -output_dir ${output_dir} -gpu 0 -auto_test
travis@PC:~/ssd-ssd/DNNDK_Project$ 
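The check failure means two different layers both write the top blob `conv_1/depthwise` without doing so in-place, which Caffe rejects at net-construction time. A rough scanner for such conflicts (plain Python; treats `top == bottom` as an allowed in-place write, and scans line by line rather than truly parsing the prototxt grammar):

```python
import re

def find_duplicate_tops(prototxt_text):
    """Report top blobs written by more than one non-in-place layer.

    Rough sketch: tracks name/bottom/top lines per "layer {" block.
    """
    producers = {}
    layer_name, bottoms = None, []
    for line in prototxt_text.splitlines():
        s = line.strip()
        if s.startswith("layer"):
            layer_name, bottoms = None, []  # new layer block begins
        m = re.match(r'name:\s*"([^"]+)"', s)
        if m and layer_name is None:
            layer_name = m.group(1)
        m = re.match(r'bottom:\s*"([^"]+)"', s)
        if m:
            bottoms.append(m.group(1))
        m = re.match(r'top:\s*"([^"]+)"', s)
        if m and m.group(1) not in bottoms:  # in-place writes are fine
            producers.setdefault(m.group(1), []).append(layer_name)
    return {top: layers for top, layers in producers.items() if len(layers) > 1}
```

Running something like this over float.prototxt should point at the pair of layers fighting over `conv_1/depthwise`.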
Xilinx Employee

Re: dnndkv3 decent failed

(^^)/

 

Would you please attach float.prototxt to this thread?

Observer jiaoliu

Re: dnndkv3 decent failed

I changed .prototxt to .txt because the forum warned me: "The attachment's float.prototxt content type (application/octet-stream) does not match its file extension and has been removed."

Observer jiaoliu

Re: dnndkv3 decent failed

Maybe you need this.

Registered: ‎03-27-2019

Re: dnndkv3 decent failed

Hi, @:

When I execute ./dnnc.sh, it reports the error "9: 0-10-15 Field is too long!". I tried deleting layers from the prototxt file one by one, down to just an input layer and a conv layer, but the problem still exists. I also compared the caffemodel files but found nothing. Sometimes there is an error like this:

dnnc: /tmp/DNNC_V010_Package/dnnc/submodules/asicv2com/src/SlNode/SlNodeConv.cpp:82: void SlNodeConv::generate_convinit_op(const YAggregationType&, const YAggregationType&, uint32_t, uint32_t): Assertion `shift_cut >= 0' failed.

Can you give me some advice? I have spent several days on this but haven't made any progress. Thank you in advance.

Xilinx Employee

Re: dnndkv3 decent failed

(^^)/

 

Using Netscope on float.txt shows a recursive loop with "conv_1/depthwise".

 

[Attachment: Capture.PNG]

 

I don't think this recursion is currently supported.

 

This may be a by-product of the conversion from the original tensorflow model to this caffe model.

 

Maybe you can try with the original tensorflow model and see if that goes through the tools.

Observer jiaoliu

Re: dnndkv3 decent failed

Thank you for helping me find the problem. I'll try TensorFlow and then give you feedback.

Visitor dipak

Re: dnndkv3 decent failed


@jiaoliu wrote:

Thank you for helping me find the problem. I'll try TensorFlow and then give you feedback.


I got the following problem while using ssdlite_mobilenetv2 (TensorFlow version). Has anyone been able to compile the frozen graph with decent_q successfully?

2019-10-03 12:08:11.504320: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/ClipToWindow_1/Reshape (op: Reshape, T: 9)
2019-10-03 12:08:11.504336: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/MultiClassNonMaxSuppression/ClipToWindow/Reshape (op: Reshape, T: 9)
2019-10-03 12:08:11.504353: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond_3/cond/ExpandDims (op: ExpandDims, T: 3)
2019-10-03 12:08:11.504364: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond_3/ExpandDims (op: ExpandDims, T: 3)
2019-10-03 12:08:11.504372: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond_3/cond/concat (op: ConcatV2, T: 3)
2019-10-03 12:08:11.504379: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond_1/cond/ExpandDims (op: ExpandDims, T: 3)
2019-10-03 12:08:11.504390: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond_1/ExpandDims (op: ExpandDims, T: 3)
2019-10-03 12:08:11.504397: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond_1/cond/concat (op: ConcatV2, T: 3)
2019-10-03 12:08:11.504405: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond/cond/ExpandDims (op: ExpandDims, T: 3)
2019-10-03 12:08:11.504415: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond/ExpandDims (op: ExpandDims, T: 3)
2019-10-03 12:08:11.504425: W tensorflow/contrib/decent_q/utils/graph_quantizer.cc:93] Found node with non-quantizable T: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond/cond/concat (op: ConcatV2, T: 3)
Traceback (most recent call last):
  File "/home/user/decent/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 418, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 0 of node Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList_149/cond_3/cond/strided_slice/aquant was passed int32 from Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList_149/cond_3/cond/strided_slice:0 incompatible with expected float.
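Not an authoritative fix, but the warnings all point at the `Postprocessor/BatchMultiClassNonMaxSuppression` subgraph, whose int32 Reshape/ExpandDims/ConcatV2 nodes are not quantizable. A common workaround for SSD-style models is to quantize only up to the raw predictor outputs (passed to decent_q as `--output_nodes`) and run the box decoding and NMS on the CPU. A toy sketch of cutting the node list at that boundary (the prefix is an assumption taken from the log, and your actual output node names will differ):

```python
def pick_quantizable_outputs(node_names, stop_prefix="Postprocessor/"):
    """Toy sketch: keep nodes outside the (non-quantizable) postprocessing
    subgraph, as candidates for decent_q's --output_nodes.

    stop_prefix is an assumption based on the warnings above.
    """
    return [n for n in node_names if not n.startswith(stop_prefix)]
```

You would still need to inspect the frozen graph to pick the exact box/class predictor tensors to use as output nodes.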