francocapraro12
Observer
Registered: ‎09-27-2018

Error using vai_tensorflow2 with a custom model


Hello, I'm using the Vitis AI Docker image (with GPU and the conda tensorflow2 environment), so I have TF 2.3.0 and Keras 2.4.0.
I train my model and save the .h5. Then I quantize it and get a strange line in the output:

[INFO] Start CrossLayerEqualization...
10/10 [==============================] - 0s 37ms/step
[INFO] CrossLayerEqualization Done.
[INFO] Start Quantize Calibration...
WARNING:tensorflow:6 out of the last 11 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7fbd9fcc5d40> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
2/2 [==============================] - 0s 11ms/step
[INFO] Quantize Calibration Done.
[INFO] Start Generating Quantized Model...
[Warning] Skip quantize pos adjustment for layer quant_dense_17, its quantize pos is [i=None, w=5.0, b=7.0, o=0.0]
[INFO] Generating Quantized Model Done.
total 188
-rw-r--r-- 1 vitis-ai-user vitis-ai-group 188768 Apr 9 08:48 quantized_model.h5

 

After quantizing, I evaluate the model. When I try to compile it, I get errors:

**************************************************
* VITIS_AI Compilation - Xilinx Inc.
**************************************************
[INFO] Namespace(inputs_shape=None, layout='NHWC', model_files=['quantize_results/quantized_model.h5'], model_type='tensorflow2', out_filename='vai_c_output/rfClassification_org.xmodel', proto=None)
[INFO] tensorflow2 model: quantize_results/quantized_model.h5
/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/translator/tensorflow_translator.py:1809: H5pyDeprecationWarning: dataset.value has been deprecated. Use dataset[()] instead.
value = param.get(group).get(ds).value
[INFO] parse raw model : 0%| | 0/9 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/bin/xnnc-run", line 33, in <module>
sys.exit(load_entry_point('xnnc==1.3.0', 'console_scripts', 'xnnc-run')())
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/__main__.py", line 194, in main
normal_run(args)
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/__main__.py", line 178, in normal_run
in_shapes=in_shapes if len(in_shapes) > 0 else None,
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/xconverter.py", line 131, in run
xmodel = CORE.make_xmodel(model_files, model_type, _layout, in_shapes)
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/core.py", line 104, in make_xmodel
model_files, layout, in_shapes=in_shapes, model_type=model_t
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/translator/tensorflow_translator.py", line 97, in to_xmodel
model_name, raw_nodes, layout, in_shapes, model_fmt, model_type
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/translator/tensorflow_translator.py", line 163, in create_xmodel
xmodel = cls.__create_xmodel_from_tf2(name, layers, layout, in_shapes)
File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow2/lib/python3.7/site-packages/xnnc/translator/tensorflow_translator.py", line 381, in __create_xmodel_from_tf2
bottom: List[str] = [x for x in layer.get("inbound_nodes")]
TypeError: 'NoneType' object is not iterable
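For context, the failing line in the translator iterates over each layer's "inbound_nodes" config entry. A minimal sketch of why that can blow up (the dicts below are illustrative stand-ins for the parsed HDF5 config, not real layer configs):

```python
# Layers saved from a functional-API model carry an "inbound_nodes" entry
# in their config; layers saved from a Sequential model do not, so
# layer.get("inbound_nodes") returns None and iterating it raises the
# TypeError seen in the traceback above.

functional_layer = {"name": "conv2d", "inbound_nodes": [[["input_1", 0, 0, {}]]]}
sequential_layer = {"name": "conv2d"}  # no "inbound_nodes" key

def bottoms(layer):
    # same expression as tensorflow_translator.py line 381
    return [x for x in layer.get("inbound_nodes")]

bottoms(functional_layer)   # works: connectivity info is present
try:
    bottoms(sequential_layer)
except TypeError as err:
    print(err)  # 'NoneType' object is not iterable
```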

Has anyone had the same problem?


Accepted Solutions
francocapraro12
Observer

I changed the model and also changed the way I define it.
I changed from:

model = Sequential()
model.add(Conv2D(32, (3,3), strides=(1,1)...
....

......

 

To:

rf_input = Input(shape=(32,32,1))
x = Conv2D(num_filters, (3,3), activation='relu', padding='same')(rf_input)...

....

.....

 

Now it quantizes and compiles.
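One way to see why the switch matters (a sketch, not the poster's exact model; `num_filters` and the single conv layer are placeholders): the functional API records layer connectivity under "inbound_nodes" in the saved config, which is exactly what the xnnc translator iterates over, while a Sequential config omits it.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, Model

num_filters = 32  # placeholder value

# Sequential definition: config entries carry no "inbound_nodes"
seq = models.Sequential([layers.Conv2D(num_filters, (3, 3), padding='same')])

# Functional definition: config entries record connectivity
rf_input = layers.Input(shape=(32, 32, 1))
x = layers.Conv2D(num_filters, (3, 3), activation='relu', padding='same')(rf_input)
func = Model(rf_input, x)

seq_has = any("inbound_nodes" in e for e in seq.get_config()["layers"])
func_has = any("inbound_nodes" in e for e in func.get_config()["layers"])
print(seq_has, func_has)  # False True
```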

 


4 Replies
Ric_D
Visitor
Registered: ‎03-09-2021

Hello,

I am unfortunately seeing a similar warning and error. The warning that leads to the error is probably:

[Warning] Skip quantize pos adjustment for layer quant_dense_17, its quantize pos is [i=None, w=5.0, b=7.0, o=0.0]

This warning depends on the model weights/biases, as the warning text indicates.
Once you get rid of the warning, you can probably compile the model as well.

I have tried different trained versions of the same model; some produce this warning and some don't. Unfortunately, I wasn't able to fix the versions that don't work. I assume that if you can change the affected bias, you can get the model to compile. However, that does not explain why this issue exists in the first place; it is potentially a bug.

If you have any updates, please let me know here. I will do the same.

Regards,

Ric


Ric_D
Visitor

I had already implemented the model in your suggested way, so it appears to be a different issue for me.

 

francocapraro12
Observer

Ric, try this model, and then you can optimize it for your application.

I'm adding just the imports, the model definition, the quantization, and the compilation.
Best,
Franco
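(The attached code doesn't survive in this archive, so the following is only a sketch of a script of that shape, not Franco's actual code: the layer stack, `num_filters`, the dense head, the calibration dataset, and the arch.json path are all placeholders. The `vitis_quantize` API and the `vai_c_tensorflow2` flags are the ones from the Vitis AI 1.3 TF2 flow used earlier in this thread.)

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_model(num_filters=32):
    # Functional-API definition (Sequential configs broke the xnnc parser)
    rf_input = layers.Input(shape=(32, 32, 1))
    x = layers.Conv2D(num_filters, (3, 3), activation='relu', padding='same')(rf_input)
    x = layers.Flatten()(x)
    outputs = layers.Dense(10, activation='softmax')(x)  # placeholder head
    return Model(rf_input, outputs)

def quantize(model, calib_dataset):
    # Vitis AI TF2 quantizer; importable only inside the
    # vitis-ai-tensorflow2 conda environment of the Docker image
    from tensorflow_model_optimization.quantization.keras import vitis_quantize
    quantizer = vitis_quantize.VitisQuantizer(model)
    q_model = quantizer.quantize_model(calib_dataset=calib_dataset)
    q_model.save('quantize_results/quantized_model.h5')
    return q_model

# Compile afterwards with the vai_c_tensorflow2 CLI, e.g.:
#   vai_c_tensorflow2 \
#       --model quantize_results/quantized_model.h5 \
#       --arch <arch.json for your DPU> \
#       --output_dir vai_c_output \
#       --net_name rfClassification_org
```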

