neutrinoxy
Contributor

Wrong computations in dense layer

Hello,

I want to run a CIFAR-10 classifier on DNNDK for benchmarking purposes. I used DNNDK 3.0 with a TensorFlow model.

However, the output of my network is wrong almost all the time. I checked the output values of all layers in the dump directory and compared them to the ones I get using TensorFlow with the quantized model. The convolution layers' outputs are quite similar, but the output of the first dense layer is completely different.

The NN architecture is: conv -> maxpool -> conv -> maxpool -> conv -> conv -> maxpool (,1,1,256) -> Flatten -> dense (256) -> dense (10)

I had similar issues with Xilinx CHaiDNN, but I managed to patch that bug because the code is open source (the error was an incorrect truncation that only affected small CNNs like this one). Since the DNNDK code is not publicly available, I cannot patch it this time. I suspect a similar side effect once more.

I can share the input, output, weights, and biases of the dense layer if needed. I computed the output myself, and it simply doesn't match the dumped output (a sketch of the check is below).
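For reference, a minimal sketch of the kind of check I did, assuming int8 dumps and a simple fixed-point scheme. All file names, the weight layout, and the shift value are placeholders, not the exact DNNDK format:

import numpy as np

# Placeholder file names; point these at the DNNDK dump files.
x = np.fromfile("dump/flatten_out.bin", dtype=np.int8).astype(np.int32)  # (256,)
W = np.fromfile("dump/dense1_w.bin", dtype=np.int8).astype(np.int32)
W = W.reshape(256, 256)  # weight layout is an assumption
b = np.fromfile("dump/dense1_b.bin", dtype=np.int8).astype(np.int32)  # (256,)

# Integer accumulation, then an arithmetic right shift by a guessed
# number of fraction bits to bring the result back into int8 range.
acc = W.T @ x + b
shift = 7  # guessed; depends on the quantizer's fixed-point positions
ref = np.clip(acc >> shift, -128, 127).astype(np.int8)

dpu = np.fromfile("dump/dense1_out.bin", dtype=np.int8)
diff = ref.astype(np.int32) - dpu.astype(np.int32)
print("max abs diff:", np.abs(diff).max())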

I know that a CIFAR-10 application has little practical use in itself, but I think an AI inference tool should be able to handle it.

Best regards.

9 Replies
anaub408
Observer

Hello,

could you please tell me how you read the values in the .bin files in the dump folder? Is there an easy way to look at these? I can't open these files in a readable format...
I think I might have the same problem as you. The output of my network differs from that of the frozen and quantized model.

Thanks in advance.

neutrinoxy
Contributor

Hello @anaub408,

I just opened them with Python and converted the raw bytes to int8 values.

If you don't have a ReLU activation after a layer, you need to make a small adjustment due to the two's-complement representation of negative values:

# reinterpret an unsigned byte (0..255) as a signed int8 (-128..127)
if x >= 128:
    x = x - 256

Then you have to guess the quantization scale (I didn't find a way to retrieve it...).
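For example, a minimal sketch of reading one dump file. The file name and the number of fraction bits are placeholders, and numpy's int8 dtype already handles the negative-value conversion above:

import numpy as np

# Placeholder path; point it at one of the .bin files in the dump folder.
raw = np.fromfile("dump/conv1_out.bin", dtype=np.int8)

# Dequantize with a guessed fixed-point scale: float = int8 * 2^-fraction_bits.
# fraction_bits is a guess; try a few values until the range matches the
# TensorFlow quantized model's output.
fraction_bits = 5
print(raw.astype(np.float32) * 2.0 ** -fraction_bits)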

anaub408
Observer

Hello neutrinoxy,

thanks for your hint. I tried a lot of different things, but never just opening them in Python.
Sometimes it really is that easy...

shuaizh
Xilinx Employee

Hi,

Really sorry for the delayed response; I was preparing for the Xilinx sales conference. Could you send me the related files for the dense layer (input, output, weights, and bias of both the quantized network and the DPU run)? If possible, could you also share the pb files and your way of dumping the layer results?

We will do a detailed analysis and feed back a solution.

Thank you very much for your understanding.

Regards!

malie431
Visitor

Hi @shuaizh,

I have the same problem as @neutrinoxy. The output of my flatten layer after the conv2d is quite similar to the conv layer's dumped output computed on the DPU, but the output after the first dense layer is completely different from the one in the dumped output.

My architecture is:
conv2d > flatten > dense > dense

Is there already a solution to that problem?

Regards!

malie431
Visitor

Hey @shuaizh,

are there any updates on this issue?

Regards

neutrinoxy
Contributor

Hello,

Yes, this bug has been fixed in the latest DNNDK version.

shuaizh
Xilinx Employee

Thank you @neutrinoxy.
