rubenlesmes
Visitor

TensorFlow 2.0 model inference not working as expected

Hello,

I am trying to run inference with a compiled MNIST model. The model was built with the Keras Functional API, using only the permitted layers.

import tensorflow as tf

inputs = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(32, (3, 3), activation='relu')(inputs)
x = tf.keras.layers.MaxPooling2D((2,2))(x)
x = tf.keras.layers.Conv2D(16, (3, 3), activation='relu')(x)
x = tf.keras.layers.MaxPooling2D((2,2))(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(128, activation='relu')(x)
x = tf.keras.layers.Dense(64, activation='relu')(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)

mnist = tf.keras.Model(inputs=inputs, outputs=outputs)
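
For context, the float weights come from an ordinary Keras training run on MNIST; this is only a minimal sketch (preprocessing and hyperparameters here are illustrative, not the exact script), continuing from the model above:

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# integer labels, hence sparse categorical cross-entropy
mnist.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
mnist.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))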

 

I have run inference with the pre-compiled models (float and quantized) and they seem to do well, but after compilation with vai_c_tensorflow2 the DPU inference behaves as if it had loaded null weights.
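
For reference, the quantization step follows the standard Vitis AI TensorFlow 2 flow before the model is handed to vai_c_tensorflow2; a minimal sketch, assuming the vitis_quantize API and illustrative file names rather than my exact script:

from tensorflow_model_optimization.quantization.keras import vitis_quantize

# post-training quantization with a small calibration set
quantizer = vitis_quantize.VitisQuantizer(mnist)
quantized_model = quantizer.quantize_model(calib_dataset=x_test[:1000])
quantized_model.save('quantized_mnist.h5')   # this .h5 is what goes to vai_c_tensorflow2

The compiled xmodel is then loaded on the board and run like this: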

import numpy as np

# 'overlay' is the DPU overlay loaded earlier with the compiled xmodel (DPU-PYNQ)
dpu = overlay.runner

inputTensors = dpu.get_input_tensors()
outputTensors = dpu.get_output_tensors()

# buffer shapes expected by the DPU runner
shapeIn = tuple(inputTensors[0].dims)
shapeOut = tuple(outputTensors[0].dims)
outputSize = int(outputTensors[0].get_data_size() / shapeIn[0])

# allocate input/output buffers matching those shapes
output_data = [np.empty(shapeOut, dtype=np.float32, order="C")]
input_data = [np.empty(shapeIn, dtype=np.float32, order="C")]
image = input_data[0]

# copy one test image into the input buffer and run the DPU
image[0,...] = test_data[0]
job_id = dpu.execute_async(input_data, output_data)
dpu.wait(job_id)

 Output data looks like this:

[array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)]
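
(By contrast, running the same image through the float Keras model on the host gives a sensible probability vector; an illustrative check, reusing the names from the snippets above:)

# host-side sanity check with the float model on the same test image
float_probs = mnist.predict(test_data[0][np.newaxis, ...])
print(float_probs, np.argmax(float_probs))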

 

Is there any other processing required, or are there known bugs in TensorFlow 2 model compilation?

Any ideas would be appreciated.

10 Replies
chaoz
Xilinx Employee

Hi @rubenlesmes 

I don't see any related bugs in TensorFlow 2 model compilation.

Actually, I have made a similar design before. Could you try with the same steps and files and see if there is any difference?

https://github.com/lobster1989/Mnist-classification-Vitis-AI-1.3-TensorFlow2

Chao
rubenlesmes
Visitor

I get the same problem with your design. I had to re-target the compiler to the board I'm using, an Ultra96-V2 (Xilinx Zynq UltraScale+ MPSoC).
I believe the issue could be the architecture file arch.json, as the one I'm using is the one from the DPU-PYNQ repository.
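
For reference, re-targeting just means pointing vai_c_tensorflow2 at that arch.json, roughly like this (paths and names are illustrative):

vai_c_tensorflow2 --model quantized_mnist.h5 \
                  --arch <path-to-DPU-PYNQ>/arch.json \
                  --output_dir compiled_ultra96 \
                  --net_name mnist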

Is there any other arch.json file I could test for the platform I am currently using?

chaoz
Xilinx Employee

The arch.json comes from your DPU configuration. Is the arch.json consistent with your hardware design?

By the way, are you running the xmodel file on the board and getting that output?

Chao
rubenlesmes
Visitor

I have not gone into hardware design, and there is no default arch.json for my UltraScale+ board, so I'm using the one provided by the DPU-PYNQ repository. Is there any way to make sure it is consistent with my hardware?

Yes, I'm running the xmodel file on the board and getting null outputs; there is a preview of the printed output in the first post.

chaoz
Xilinx Employee

Normally we first make a Vivado hardware platform design, then we integrate the DPU into the hardware design according to the DPU-TRD, and then an arch.json file is generated for that particular DPU configuration.

But if the DPU configuration were not consistent, you would get an error in the first place when running the model on the board.

By the way, how did you integrate the DPU into your hardware design? Or did you just use a released platform design?

 

Reference for the DPU-TRD:

https://github.com/Xilinx/Vitis-AI/tree/master/dsa/DPU-TRD

 

Chao
rubenlesmes
Visitor

I was just trying to avoid making a Vivado hardware platform design, if that is possible, as it is not within the scope of my research. So I was trying the arch.json files found in the DPU-PYNQ repository, or the following configurations, which are the ones associated with DPUCZDX8G_ISA0_B1600_MIN and DPUCZDX8G_ISA0_B1600_MAX:

{"fingerprint":"0x1000020f2014404"}
{"fingerprint":"0x100002022010104"}

I have not gotten any errors while loading the xmodel into the platform.

chaoz
Xilinx Employee

What is your targeted board?

Did you use a prebuilt platform for the board?

Chao
rubenlesmes
Visitor

My targeted board is an Avnet Zynq UltraScale+ MPSoC Ultra96-V2.

I am not sure what you mean by prebuilt platform. If that is the image burned onto the SD card, then yes, it is the build ultra96v2_oob_2020_1_210303_8GB.zip found in the Avnet starter guide.

chaoz
Xilinx Employee

Ok.

So I guess ultra96v2_oob_2020_1_210303_8GB.zip should have the same DPU integrated into its platform; otherwise I think an error would be thrown.

Could you please attach the full log of running the application on the board? Have you made any customizations to app_mt.py?

 

Chao
rubenlesmes
Visitor

I have attached the entire log of the notebook running the inference on the board. During the creation of the xmodel I did not edit the app_mt.py file.
