05-10-2019 05:35 AM
Is there a way to quantize and compile a TensorFlow network without any Conv layer inside? Or why is it necessary to have at least one Conv layer in the network?
We are trying to get a non-convolutional network running on the DPU. The small network contains the following layers: flatten, dense, activation (relu / softmax).
But while compiling we get the fatal error: [DNNC][Fatal] Check failed for condition [infer_shape_handler != nullptr] in [/tmp/DNNC_V010_Package/dnnc/dnnc_impl/core/layer.cc:162] : Infer shape handler for [Conv] is missing.
It would be really nice to be able to compile networks without any Conv layer, because our input data are not images.
We are using DNNDK version xlnx_dnndk_v3.0_190430, a TensorFlow network, the TensorFlow CPU version, Ubuntu 16.04, and our target platform is a ZCU104 board.
Thanks in advance.
06-03-2019 09:06 AM
I have figured out a way to implement fully connected networks using the DNNDK framework.
1) Assume that you have an FC network with N layers, where the size of the 1st layer is D and your network input size is K.
2) Reshape your network inputs so that they are 2D (k x k). For example, if you have 100 inputs, reshape them to 10x10. Apply some form of zero padding if the input size cannot be factored.
3) Replace the 1st FC layer with a 2D convolutional layer with these parameters: D filters of size k x k, i.e. filters that have the same size as your input and are as many as your layer size. This makes the Conv layer's operation equivalent to a fully connected layer, with the only difference that it operates in 2D. Note: train your network with the 1st FC layer, not with the Conv layer. I had accuracy loss when training with the Conv layer, which may be due to different gradient computation and weight updates. After training, just reshape the weights of your 1st FC layer from KxD to kxkxD and set them on the Conv layer.
4) Apply a Flatten operation to the outputs of the Conv layer and feed them to the rest of your FC network.
5) If your network has operations that are not supported by the DNNDK, remove them after training and try to compute them on the CPU.
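The equivalence claimed in the reshape/replace steps can be checked with plain NumPy, without any framework. A minimal sketch with made-up sizes (K=100 inputs reshaped to k x k = 10x10, first FC layer of D=4 units); since a "valid" convolution whose kernel covers the entire input produces one value per filter, it reduces to exactly the dot product the dense layer computes:

```python
import numpy as np

# Hypothetical sizes: input K=100 reshaped to k x k = 10x10, D=4 units.
K, k, D = 100, 10, 4
rng = np.random.default_rng(0)

x = rng.standard_normal(K)          # flat input vector
W = rng.standard_normal((K, D))     # FC weights, shape K x D
b = rng.standard_normal(D)          # FC bias

# Plain fully connected layer: y = x.W + b
y_fc = x @ W + b

# Equivalent "convolution": reshape the input to k x k and the weights to
# k x k x D. A filter the size of the whole input yields a single output
# value per filter -- the same weighted sum as the dense layer.
x2d = x.reshape(k, k)
W2d = W.reshape(k, k, D)
y_conv = np.tensordot(x2d, W2d, axes=([0, 1], [0, 1])) + b

print(np.allclose(y_fc, y_conv))  # the two layers compute the same values
```

Row-major reshaping pairs input element i*k+j with weight row i*k+j, so no reordering of the trained weights is needed beyond the reshape itself.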
I am attaching a graph of the first layers of my network (thanks to the Netron tool). It originally had 81 inputs, which were reshaped to 9x9, and all of its hidden layers have size 400. My network was trained with Keras, and the model was converted to a TensorFlow frozen graph, which was fed to the Decent tool.
I hope you find this guide useful and it doesn't violate any of the DNNDK terms.
06-04-2019 11:51 PM
Great that you found a solution for this problem! I really appreciate that you posted this step-by-step guide.
Is it possible for you to share the code of your Keras model creation before and after this workaround? This would make it even clearer for Keras users like me.
Thanks a lot!
10-18-2019 12:36 AM
Thanks mparmpas122321 for the solution.
I tested it and it works, but it seems to be limited to an input size of up to 256. My input size is almost 5000. :-( Never mind, here is my code:
def FCreplace(modelOld):
    layers = [l for l in modelOld.layers]
    # original input shape of train_images is Nx1x1x100,
    # but you need to change it to Nx10x10x1
    x = tf.keras.Input(train_images.shape[1:], name="input")
    k = int(np.sqrt(train_images.shape[1] * train_images.shape[2] * train_images.shape[3]))
    net = tf.keras.layers.Conv2D(layers[2].units, kernel_size=(k, k),
                                 activation="relu", name='conv2D')(x)
    net = tf.keras.layers.Flatten(name='flat')(net)
    for i in range(3, len(layers)):
        net = layers[i](net)
    model = tf.keras.Model(inputs=x, outputs=net, name='myNetwork')
    model.layers[1].set_weights(
        [modelOld.layers[2].get_weights()[0].reshape((k, k, 1, layers[2].units)),
         modelOld.layers[2].get_weights()[1]])
    model.summary()
    return model
I had problems with using a Reshape layer, so you need to reshape the input data yourself. Be aware of the layer names.
Good luck to all.
10-18-2019 02:06 AM
The old one is InputLayer(Nx1x1x100) -> Flatten -> Dense0 -> Dense1 -> ... -> DenseM.
And the new one is InputLayer(Nx10x10x1) -> Conv -> Flatten -> Dense1 -> ... -> DenseM.
After the third layer it is just a copy of your network.
Is it clear?
10-18-2019 02:30 AM
Thank you for your reply!
That means if I have an input dimension of (N x 50 x 512 x 1), I should reshape it to (N x 160 x 160 x 1) and pass it to Conv2D, right?
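The arithmetic behind that reshape can be checked in a couple of lines (a quick sanity check, assuming only that the flattened input must factor into a square):

```python
import math

h, w = 50, 512            # original 2D input shape per sample
n = h * w                 # total number of input values
side = math.isqrt(n)      # candidate square side

print(n, side)            # total values and the square side that holds them
assert side * side == n   # 160 x 160 holds all 25600 values exactly
```

If the product were not a perfect square, zero padding (as suggested in the step-by-step guide above) would be needed before squaring it up.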
10-18-2019 03:02 AM
I am happy that it worked for you! Regarding Reshape, I had a similar issue when I used it as the first layer of my network. Apparently, the first layer of the network must be convolutional for the tool to work.
I haven't tested it for a network input size greater than 81, but I assume that if you reshape your inputs and weights properly, also exploiting the 3rd dimension, you can have up to 256*(16*16) = 65536 inputs (?), if we take into account the DPU IP product specifications. Correct me if I am wrong.
10-18-2019 03:28 AM
It works for me with 1x1x100 -> 10x10x1, where the kernel size is 10.
I also tested it with 30x30x1 and ended up with an error about the constraint (kernel_size - strides) <= (3*channel_parallel*strides).
So, given the supported parameters you mention, the maximal usable input should be 16x16x1 -> 1x1x256.
Is it possible to do the transformation with a 16x16xN size?
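The capacities being discussed can be laid out numerically. A sketch assuming the limits quoted in this thread (a max square kernel side of 16 and up to 256 channels; I have not verified these against the DPU datasheet):

```python
# Limits assumed from the discussion above, not verified independently:
max_kernel = 16       # largest supported square kernel side
max_channels = 256    # largest channel depth

# Single-channel case: a 16x16x1 input convolved with full-size filters
# gives a 1x1xF output, so the flat input capacity is
single = max_kernel * max_kernel                 # 16*16 input values

# Multi-channel case (16x16xN): each filter also spans the channel axis,
# so packing the input into channels multiplies the capacity:
multi = max_kernel * max_kernel * max_channels   # 16*16*256 input values

print(single, multi)
```

This matches both figures in the thread: 256 inputs for the single-channel layout, 65536 if the 3rd dimension can be exploited as suggested above.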
10-18-2019 03:52 AM
I have a model as follows:
Layer (type)          Output Shape       Param #
reshape_1 (Reshape)   (None, 50, 512)    0
flatten_1 (Flatten)   (None, 25600)      0
dense_1 (Dense)       (None, 512)        13107712
dense_2 (Dense)       (None, 1024)       525312
dense_3 (Dense)       (None, 512)        524800
dense_4 (Dense)       (None, 2)          1026
Is it at all possible to convert the first Dense layer to Conv2D so that the model can run through DNNC? If so, what should the input shape be? The input shape for the above model is (N x 50 x 512 x 1).
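For what it's worth, the numbers line up: 50*512 = 25600 = 160*160, so dense_1 could in principle become a Conv2D with 512 filters of size 160x160 on a 160x160x1 input. A sketch checking the parameter counts against the summary above:

```python
# Parameter counts recomputed from the model summary above.
inputs = 50 * 512                # flattened input: 25600 values
dense_1 = inputs * 512 + 512     # weights + biases
dense_2 = 512 * 1024 + 1024
dense_3 = 1024 * 512 + 512
dense_4 = 512 * 2 + 2

# A Conv2D with 512 filters of size 160x160 over a 160x160x1 input has
# exactly the same number of parameters as dense_1, since 160*160 = 25600:
conv_equiv = 160 * 160 * 1 * 512 + 512

print(dense_1, conv_equiv)
```

Whether DNNC accepts a 160x160 kernel is a separate question; the kernel-size limits reported elsewhere in this thread suggest it may not, in which case the input would need to be packed into channels instead.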
10-18-2019 04:10 AM
10-18-2019 04:20 AM
Updated version of the code here:
def FCreplace(modelOld):
    layers = [l for l in modelOld.layers]
    s = train_images.shape  # shape Nx7x7x100
    x = tf.keras.Input(s[1:], name="input")
    net = tf.keras.layers.Conv2D(layers[2].units, kernel_size=(s[1], s[2]),
                                 activation="relu", name='conv2D_1')(x)
    net = tf.keras.layers.Flatten(name='flat')(net)
    for i in range(3, len(layers)):
        net = layers[i](net)
    model = tf.keras.Model(inputs=x, outputs=net, name='myNetwork')
    model.layers[1].set_weights(
        [modelOld.layers[2].get_weights()[0].reshape((s[1], s[2], s[3], layers[2].units)),
         modelOld.layers[2].get_weights()[1]])
    model.summary()
    return model
For example with these models:
10-18-2019 04:34 AM
Ok so the model would be like:
model = Sequential()
model.add(Reshape((16,16,100), input_shape=(50, 512, 1)))
model.add(Conv2D(100,(16,16), padding="same", activation="relu"))
But how do I remove the Reshape layer, given that it is necessary for reshaping the input?
10-18-2019 11:50 AM
You can just reshape the inputs before you feed them to the network. For example,
import numpy as np
x = np.reshape(x, (...))
y = model.predict(x)
07-19-2020 05:33 AM