matthieu_elsys
Visitor
2,137 Views
Registered: ‎12-03-2018

DPU-TRD on ZCU104


Hello,

I have a ZCU104 eval board and I am trying to follow the DPU TRD explained in the DPU documentation:

https://www.xilinx.com/support/documentation/ip_documentation/dpu/v3_0/pg338-dpu.pdf

I have successfully generated a bitstream in Vivado 2019.2 thanks to the script provided here:

https://forums.xilinx.com/t5/AI-and-Vitis-AI/DPU-TRD-for-ZCU104/m-p/968612#M316

I created a boot image and a rootfs with PetaLinux 2019.2 as explained in the DPU product guide (p. 51) and in the PetaLinux documentation.

I used xilinx-zcu104-v2019.2-final.bsp to create the petalinux project.

Then I tried to run two resnet50 examples:

  • the first one is present in the Vitis-AI docker:

vitis-AI/mpsoc/dnndk_samples_zcu104/resnet50

  • and the other one is in the TRD:

zcu102-dpu-trd-2019-1-timer/apu/apps/resnet50

I obtain the same output with both:

[DNNDK_XRT] Cannot find device, index: 0

I saw that the dpuOpen() function needs a device node at /dev/dpu, but it does not exist on my system.

How can I fix this error?

Thank you

Best regards,

Matthieu

 

 

 

 

1 Solution

Accepted Solutions
jheaton
Xilinx Employee
2,106 Views
Registered: ‎03-21-2008

It looks like you are using the Vivado TRD that is paired with the older DNNDK tools.

For Vitis AI, we recommend that you use the Vitis flow. This is a bit different from the Vivado flow, and is our recommended path going forward.

Note that Vivado still gets called under the hood by Vitis when building the hardware platform.

Using the Vitis flow also gives other advantages, such as being able to accelerate software pre-processing in the PL (for things like image scaling) using the Vitis Vision Libraries.

The DPU flow for Vitis is here: https://github.com/Xilinx/Vitis-AI/blob/master/DPU-TRD/prj/Vitis/README.md

22 Replies
jheaton
Xilinx Employee
2,100 Views
Registered: ‎03-21-2008

One thing I forgot to mention is that the tutorial assumes you are using the ZCU102, and instructs you to download the Vitis ZCU102 base platform. You will want to download the Vitis base platform for the ZCU104 instead: https://www.xilinx.com/member/forms/download/design-license-xef.html?filename=zcu104_base_2019.2.zip

matthieu_elsys
Visitor
2,009 Views
Registered: ‎12-03-2018

Thank you for your help,

I will try the Vitis Flow

Best regards,

Matthieu

matthieu_elsys
Visitor
1,986 Views
Registered: ‎12-03-2018

I tried the Vitis flow.

I hit this issue when I try to run the resnet50 example from the DPU TRD:

"[DNNDK] DPU configuration mismatch for kernel resnet50 - parameter: RAM Usage, DPU kernel: Low, DPU IP: High."

By default the application is built for the ZCU102; how can I change that?

 

jheaton
Xilinx Employee
1,949 Views
Registered: ‎03-21-2008

I went through building the DPU TRD using the Vitis flow, and was able to run the resnet50 example.

Here is what I did.

1) Downloaded the zcu104 base platform

2) Enabled URAM support in prj/Vitis/dpu_conf.vh

3) Changed the following in the Makefile:

XOCC_OPTS = -t ${TARGET} --platform ${SDX_PLATFORM} --save-temps --config ${DIR_PRJ}/config_file/prj_config_104_2dpu --xp param:compiler.userPostSysLinkTcl=${DIR_PRJ}/strip_interconnects.tcl

4) Created the XILINX_XRT environment variable (/opt/xilinx/xrt in my case) and the SDX_PLATFORM environment variable pointing to the zcu104 platform

5) Ran: make KERNEL=DPU_SM DEVICE=zcu104

Once the hardware was built, I ran the resnet50 example and did not see any warning messages.
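Collected in one place, the environment setup from steps 4 and 5 might look like this (the paths below are illustrative examples, not canonical locations):

```shell
# Example environment setup for the Vitis DPU-TRD build (paths are examples)
export XILINX_XRT=/opt/xilinx/xrt
export SDX_PLATFORM=/path/to/zcu104_base/zcu104_base.xpfm

# The hardware build is then launched with (not executed here):
#   make KERNEL=DPU_SM DEVICE=zcu104
echo "SDX_PLATFORM=$SDX_PLATFORM"
```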

asobeih
Explorer
760 Views
Registered: ‎02-13-2016

Hi @jheaton,

I have been working on customizing the DPU on my ZCU104 for months, trying to get things working, and the experience has NOT been smooth at all.

 

I am going to try the steps you mentioned here; however, I would first like to know whether these steps work when changing the number of cores, the DPU architecture, or both. The reason I ask is that I am coming from the Vivado flow, where I ended up with many violations and errors when I tried to make customizations.

Looking forward to your support, and thanks in advance!

jheaton
Xilinx Employee
713 Views
Registered: ‎03-21-2008

@asobeih

For Vitis-AI 1.3.1 you can use the Vitis flow for the DPU_TRD.
To use a ZCU104 you can get the Vitis platform from https://github.com/Xilinx/Vitis_Embedded_Platform_Source/tree/master/Xilinx_Official_Platforms/xilinx_zcu104_base

asobeih
Explorer
707 Views
Registered: ‎02-13-2016

@jheaton 

I am so sorry, but it looks like you missed my question, which is as follows:

I just would like to know whether these steps work when changing the number of cores, the DPU architecture, or both. The reason I ask is that I am coming from the Vivado flow, where I ended up with many violations and errors when I tried to make customizations.

Furthermore, you mentioned that I can use the Vitis flow with Vitis AI 1.3.1. What about Vitis AI 1.2? Is it also recommended to use the Vitis flow with it instead of Vivado?

 

Thanks.

jheaton
Xilinx Employee
696 Views
Registered: ‎03-21-2008

@asobeih  Sorry I missed that. Yes, you can change the number of cores and the architecture. See https://github.com/Xilinx/Vitis-AI/blob/master/dsa/DPU-TRD/prj/Vitis/README.md for examples.

You can also use Vitis AI 1.2 with the Vitis flow for the DPU TRD. You will want to use Vitis 2020.1 to build the hardware, and use the 2020.1 zcu104 platform.
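As a sketch of where those knobs live: the DPU architecture is selected with a `define in prj/Vitis/dpu_conf.vh, while the number of cores comes from the prj_config file passed to v++. The macro names below reflect my reading of the TRD and should be checked against the branch you are using:

```verilog
// prj/Vitis/dpu_conf.vh (excerpt, illustrative)
`define B4096        // DPU architecture, e.g. B512 .. B4096
`define URAM_ENABLE  // use UltraRAM (required on ZCU104)
```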

asobeih
Explorer
564 Views
Registered: ‎02-13-2016

@jheaton Hi,

I attempted the exact steps you mentioned in your previous replies. I tried twice, and in each attempt I encountered the following error, which I cannot understand at all.

I am using Vitis Unified Platform 2020.1, Vitis AI 1.2.1 release from Github, and Ubuntu 16.04.

Attempt #1:

 

:~/dpu_work/Vitis-AI/DPU-TRD/prj/Vitis$ make KERNEL=DPU_SM DEVICE=zcu104 -j32
v++ -t hw --platform /home/abdelrahman.sobeih/dpu_work/zcu104_base/zcu104_base.xpfm -p binary_container_1/dpu.xclbin --package.out_dir binary_container_1 --package.rootfs xilinx-zynqmp-common-v2020.1/rootfs.ext4 --package.sd_file xilinx-zynqmp-common-v2020.1/Image
/media/hamamgpu/Drive2/Vitis/ins_dir/Vivado/2020.1/bin/vivado -mode batch -source scripts/gen_dpu_xo.tcl -tclargs binary_container_1/dpu.xo DPUCZDX8G hw zcu104
/media/hamamgpu/Drive2/Vitis/ins_dir/Vivado/2020.1/bin/vivado -mode batch -source scripts/gen_sfm_xo.tcl -tclargs binary_container_1/softmax.xo sfm_xrt_top hw zcu104

****** Vivado v2020.1 (64-bit)
  **** SW Build 2902540 on Wed May 27 19:54:35 MDT 2020
  **** IP Build 2902112 on Wed May 27 22:43:36 MDT 2020
    ** Copyright 1986-2020 Xilinx, Inc. All Rights Reserved.


****** Vivado v2020.1 (64-bit)
  **** SW Build 2902540 on Wed May 27 19:54:35 MDT 2020
  **** IP Build 2902112 on Wed May 27 22:43:36 MDT 2020
    ** Copyright 1986-2020 Xilinx, Inc. All Rights Reserved.

Option Map File Used: '/media/hamamgpu/Drive2/Vitis/ins_dir/Vitis/2020.1/data/vitis/vpp/optMap.xml'

****** v++ v2020.1 (64-bit)
  **** SW Build 2902540 on Wed May 27 19:54:35 MDT 2020
    ** Copyright 1986-2020 Xilinx, Inc. All Rights Reserved.

ERROR: [v++ 60-602] Source file does not exist: /home/abdelrahman.sobeih/dpu_work/Vitis-AI/DPU-TRD/prj/Vitis/binary_container_1/dpu.xclbin
INFO: [v++ 60-1662] Stopping dispatch session having empty uuid.
INFO: [v++ 60-1653] Closing dispatch client.
Makefile:76: recipe for target 'package' failed
make: *** [package] Error 1
make: *** Waiting for unfinished jobs....
source scripts/gen_sfm_xo.tcl
# if { $::argc != 4 } {
#     puts "ERROR: Program \"$::argv0\" requires 4 arguments!\n"
#     puts "Usage: $::argv0 <xoname> <krnl_name> <target> <device>\n"
#     exit
# }
# set xoname    [lindex $::argv 0]
# set krnl_name [lindex $::argv 1]
# set target    [lindex $::argv 2]
# set device    [lindex $::argv 3]
# set suffix "${krnl_name}_${target}_${device}"
# source -notrace ./scripts/package_sfm_kernel.tcl
source scripts/gen_dpu_xo.tcl
# if { $::argc != 4 } {
#     puts "ERROR: Program \"$::argv0\" requires 4 arguments!\n"
#     puts "Usage: $::argv0 <xoname> <krnl_name> <target> <device>\n"
#     exit
# }
# set xoname    [lindex $::argv 0]
# set krnl_name [lindex $::argv 1]
# set target    [lindex $::argv 2]
# set device    [lindex $::argv 3]
# set suffix "${krnl_name}_${target}_${device}"
# source -notrace ./scripts/package_dpu_kernel.tcl
ERROR: [Common 17-685] Unable to load Tcl app xilinx::questa
ERROR: [Common 17-69] Command failed: ERROR: [Common 17-685] Unable to load Tcl app xilinx::questa


    while executing
"source -notrace ./scripts/package_dpu_kernel.tcl"
    (file "scripts/gen_dpu_xo.tcl" line 31)
INFO: [Common 17-206] Exiting Vivado at Sat Apr  3 22:09:55 2021...
INFO: [IP_Flow 19-234] Refreshing IP repositories
INFO: [IP_Flow 19-1704] No user IP repositories specified
INFO: [IP_Flow 19-2313] Loaded Vivado IP repository '/media/hamamgpu/Drive2/Vitis/ins_dir/Vivado/2020.1/data/ip'.
WARNING: [IP_Flow 19-2162] IP 'fp_exp' is locked:
* IP definition 'Floating-point (7.1)' for IP 'fp_exp' (customized with software release 2019.2) has a different revision in the IP Catalog. * Current project part 'xc7vx485tffg1157-1' and the part 'xc7z020clg400-2' used to customize the IP 'fp_exp' do not match.
WARNING: [IP_Flow 19-2162] IP 'fp_add' is locked:
* IP definition 'Floating-point (7.1)' for IP 'fp_add' (customized with software release 2019.2) has a different revision in the IP Catalog. * Current project part 'xc7vx485tffg1157-1' and the part 'xc7z020clg400-2' used to customize the IP 'fp_add' do not match.
WARNING: [IP_Flow 19-2162] IP 'fp_acc' is locked:
* IP definition 'Floating-point (7.1)' for IP 'fp_acc' (customized with software release 2019.2) has a different revision in the IP Catalog. * Current project part 'xc7vx485tffg1157-1' and the part 'xc7z020clg400-2' used to customize the IP 'fp_acc' do not match.
WARNING: [IP_Flow 19-2162] IP 'fp_div' is locked:
* IP definition 'Floating-point (7.1)' for IP 'fp_div' (customized with software release 2019.2) has a different revision in the IP Catalog. * Current project part 'xc7vx485tffg1157-1' and the part 'xc7z020clg400-2' used to customize the IP 'fp_div' do not match.
WARNING: [IP_Flow 19-2162] IP 'fp_convert' is locked:
* IP definition 'Floating-point (7.1)' for IP 'fp_convert' (customized with software release 2019.2) has a different revision in the IP Catalog. * Current project part 'xc7vx485tffg1157-1' and the part 'xc7z020clg400-2' used to customize the IP 'fp_convert' do not match.
Makefile:62: recipe for target 'binary_container_1/dpu.xo' failed
make: *** [binary_container_1/dpu.xo] Error 1
INFO: [IP_Flow 19-5654] Module 'sfm_xrt_top' uses SystemVerilog sources with a Verilog top file. These SystemVerilog files will not be analysed by the packager.
INFO: [IP_Flow 19-5107] Inferred bus interface 'M_AXI' of definition 'xilinx.com:interface:aximm:1.0' (from Xilinx Repository).
INFO: [IP_Flow 19-5107] Inferred bus interface 's_axi_control' of definition 'xilinx.com:interface:aximm:1.0' (from Xilinx Repository).
INFO: [IP_Flow 19-5107] Inferred bus interface 'aresetn' of definition 'xilinx.com:signal:reset:1.0' (from Xilinx Repository).
INFO: [IP_Flow 19-5107] Inferred bus interface 'aclk' of definition 'xilinx.com:signal:clock:1.0' (from Xilinx Repository).
INFO: [IP_Flow 19-5107] Inferred bus interface 'interrupt' of definition 'xilinx.com:signal:interrupt:1.0' (from Xilinx Repository).
INFO: [IP_Flow 19-4728] Bus Interface 'aresetn': Added interface parameter 'POLARITY' with value 'ACTIVE_LOW'.
INFO: [IP_Flow 19-4728] Bus Interface 'interrupt': Added interface parameter 'SENSITIVITY' with value 'LEVEL_HIGH'.
INFO: [IP_Flow 19-4728] Bus Interface 'aclk': Added interface parameter 'ASSOCIATED_BUSIF' with value 'M_AXI'.
INFO: [IP_Flow 19-4728] Bus Interface 'aclk': Added interface parameter 'ASSOCIATED_RESET' with value 'aresetn'.
INFO: [IP_Flow 19-2181] Payment Required is not set for this core.
INFO: [IP_Flow 19-2187] The Product Guide file is missing.
INFO: [IP_Flow 19-795] Syncing license key meta-data
INFO: [IP_Flow 19-234] Refreshing IP repositories
INFO: [IP_Flow 19-1704] No user IP repositories specified
INFO: [IP_Flow 19-2313] Loaded Vivado IP repository '/media/hamamgpu/Drive2/Vitis/ins_dir/Vivado/2020.1/data/ip'.
WARNING: [filemgmt 56-99] Vivado Synthesis ignores library specification for Verilog or SystemVerilog files. [/home/abdelrahman.sobeih/dpu_work/Vitis-AI/DPU-TRD/prj/Vitis/packaged_kernel_sfm_xrt_top_hw_zcu104/src/DPUCZDX8G_v3_3_0_vl_sfm.sv:]
WARNING: [filemgmt 56-99] Vivado Synthesis ignores library specification for Verilog or SystemVerilog files. [/home/abdelrahman.sobeih/dpu_work/Vitis-AI/DPU-TRD/prj/Vitis/packaged_kernel_sfm_xrt_top_hw_zcu104/src/sfm_xrt_top.v:]
# if {[file exists "${xoname}"]} {
#     file delete -force "${xoname}"
# }
# package_xo -xo_path ${xoname} -kernel_name ${krnl_name} -ip_directory ./packaged_kernel_${suffix} -kernel_xml ./kernel_xml/sfm/kernel.xml
WARNING: [Vivado 12-4404] The CPU emulation flow in v++ is only supported when using a packaged XO file that contains C-model files, none were found.
INFO: [Common 17-206] Exiting Vivado at Sat Apr  3 22:10:00 2021...

 

 

Attempt #2:

 

 

:~/dpu_work/Vitis-AI/DPU-TRD/prj/Vitis$ make KERNEL=DPU_SM DEVICE=zcu104
/media/hamamgpu/Drive2/Vitis/ins_dir/Vivado/2020.1/bin/vivado -mode batch -source scripts/gen_dpu_xo.tcl -tclargs binary_container_1/dpu.xo DPUCZDX8G hw zcu104

****** Vivado v2020.1 (64-bit)
  **** SW Build 2902540 on Wed May 27 19:54:35 MDT 2020
  **** IP Build 2902112 on Wed May 27 22:43:36 MDT 2020
    ** Copyright 1986-2020 Xilinx, Inc. All Rights Reserved.

INFO: [Common 17-724] Xilinx Tcl Store apps are automatically updated.
source scripts/gen_dpu_xo.tcl
# if { $::argc != 4 } {
#     puts "ERROR: Program \"$::argv0\" requires 4 arguments!\n"
#     puts "Usage: $::argv0 <xoname> <krnl_name> <target> <device>\n"
#     exit
# }
# set xoname    [lindex $::argv 0]
# set krnl_name [lindex $::argv 1]
# set target    [lindex $::argv 2]
# set device    [lindex $::argv 3]
# set suffix "${krnl_name}_${target}_${device}"
# source -notrace ./scripts/package_dpu_kernel.tcl
WARNING: [IP_Flow 19-3833] Unreferenced file from the top module is not packaged: '/home/abdelrahman.sobeih/dpu_work/Vitis-AI/DPU-TRD/dpu_ip/Vitis/dpu/hdl/DPUCZDX8G.v'.
WARNING: [IP_Flow 19-5101] Packaging a component with a SystemVerilog top file is not fully supported. Please refer to UG1118 'Creating and Packaging Custom IP'.
CRITICAL WARNING: [HDL 9-806] Syntax error near "module". [/home/abdelrahman.sobeih/dpu_work/Vitis-AI/DPU-TRD/prj/Vitis/packaged_kernel_DPUCZDX8G_hw_zcu104/src/DPUCZDX8G_v3_3_0_vl_dpu.sv:101]
CRITICAL WARNING: [HDL 9-806] Syntax error near "generate". [/home/abdelrahman.sobeih/dpu_work/Vitis-AI/DPU-TRD/prj/Vitis/packaged_kernel_DPUCZDX8G_hw_zcu104/src/DPUCZDX8G_v3_3_0_vl_dpu.sv:162]
ERROR: [IP_Flow 19-259] [HDL Parser] Failed analyze operation while parsing HDL.
ERROR: [IP_Flow 19-258] [HDL Parser] Error parsing HDL file '/home/abdelrahman.sobeih/dpu_work/Vitis-AI/DPU-TRD/prj/Vitis/packaged_kernel_DPUCZDX8G_hw_zcu104/src/DPUCZDX8G_v3_3_0_vl_dpu.sv'.
WARNING: [IP_Flow 19-378] There are no ports found from top-level file '/home/abdelrahman.sobeih/dpu_work/Vitis-AI/DPU-TRD/dpu_ip/DPUCZDX8G_v3_3_0/hdl/DPUCZDX8G_v3_3_0_vl_dpu.sv'.
ERROR: [Common 17-39] 'ipx::package_project' failed due to earlier errors.

    while executing
"ipx::package_project -root_dir $path_to_packaged -vendor xilinx.com -library RTLKernel -taxonomy /KernelIP -import_files -set_current false"
    (file "./scripts/package_dpu_kernel.tcl" line 28)

    while executing
"source -notrace ./scripts/package_dpu_kernel.tcl"
    (file "scripts/gen_dpu_xo.tcl" line 31)
INFO: [Common 17-206] Exiting Vivado at Sat Apr  3 22:13:47 2021...
Makefile:62: recipe for target 'binary_container_1/dpu.xo' failed
make: *** [binary_container_1/dpu.xo] Error 1

 

 

 

asobeih
Explorer
519 Views
Registered: ‎02-13-2016

Hi @jheaton

I am still looking forward to your response; I hope you can help me with this issue soon.

 

Thanks.

jheaton
Xilinx Employee
490 Views
Registered: ‎03-21-2008

@asobeih 

Since you are using 1.2, make sure you are doing the following:

  • Using the 2020.1 version of the zcu104 platform, not the 2020.2 version.
  • Using the 1.2 branch of Vitis-AI for the DPU_TRD.

Also, do you have a valid Vivado license?

jheaton
Xilinx Employee
486 Views
Registered: ‎03-21-2008

@asobeih 
Not sure if this is the issue you are seeing, but please see the following post about tcl apps not loading:

https://forums.xilinx.com/t5/Installation-and-Licensing/Error-encountered-during-project-creation-Unable-to-load-Tcl-app/td-p/642255

asobeih
Explorer
455 Views
Registered: ‎02-13-2016

Hi @jheaton ,

Thanks a lot for your support. I managed to solve the problem. I found out that I needed to do the following:

  • Change the Vitis-AI branch from v1.2.1 to v1.2
  • Use version 2020.1 of the ZCU104 base files

I managed to generate the sd_card files that are used to program the FPGA with the DPU. However, I still have a question. To my understanding, the Vitis AI compiler takes a parameter called --arch, a JSON file, to generate kernels supported by the implemented DPU core(s). So my question is: where can I find this JSON file, or how can I generate it?

I followed the steps mentioned here: https://github.com/Xilinx/Vitis-AI/tree/v1.2/DPU-TRD/prj/Vitis

Looking forward to your prompt response.

jheaton
Xilinx Employee
451 Views
Registered: ‎03-21-2008

Hi @asobeih, glad you have made progress!

The dlet command is used to generate the arch.json file from the hardware handoff file.

Page 119 of the Vitis AI 1.2 User Guide explains how to do this: https://www.xilinx.com/support/documentation/sw_manuals/vitis_ai/1_2/ug1414-vitis-ai.pdf
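A minimal sketch of that flow, assuming the UG1414 convention where dlet extracts a .dcf from the .hwh handoff file and the arch JSON then points the compiler at it (all file names here are examples, not your actual artifact names):

```shell
# 1) dlet extracts the DPU configuration from the hardware handoff file
#    (not executed here; it emits a .dcf file):
#      dlet -f top.hwh

# 2) arch.json then references the generated .dcf (example content):
cat > arch.json <<'EOF'
{
    "target": "dpuv2",
    "dcf": "./custom.dcf",
    "cpu_arch": "arm64"
}
EOF
cat arch.json
```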

 

asobeih
Explorer
381 Views
Registered: ‎02-13-2016

Hi @jheaton

I generated the sd_card folder and copied the files to the SD card. When I tried to run the "dexplorer -w" command, I got the following error:

[DNNDK_XRT] Cannot find device, index: 0

In an attempt to solve this issue, I did the following:

1. Downloaded the Vitis AI Runtime 1.2.0,

2. Copied and untarred it on the board,

3. Installed the Vitis AI Runtime by executing the following commands in order:
#cd centos
#rpm -ivh --force libunilog-1.2.0-r<x>.aarch64.rpm
#rpm -ivh --force libxir-1.2.0-r<x>.aarch64.rpm
#rpm -ivh --force libtarget-factory-1.2.0-r<x>.aarch64.rpm
#rpm -ivh --force libvart-1.2.0-r<x>.aarch64.rpm
#rpm -ivh --force libvitis_ai_library-1.2.0-r<x>.aarch64.rpm


This did not solve the problem. Furthermore, I considered skipping "dexplorer -w" altogether and going through the flow with files that are known to work with the ZCU104 image provided in the Vitis AI repo on GitHub. However, in my implementation, when I run the same API code, it gives me the error message below:

Aborted!

These are the libraries that I use in my Python application:

 

from ctypes import *
import cv2
import numpy as np
import runner
import os
import xir.graph
import pathlib
import xir.subgraph
import threading
import time
import sys
import argparse
import math

 

#this is part of the code that is used to configure the DPUs with the model kernels
def runDPU(id, start, dpu, img):

    '''get tensor'''
    inputTensors = dpu.get_input_tensors()
    outputTensors = dpu.get_output_tensors()
    outputHeight = outputTensors[0].dims[1]
    print("output Height is \n" + str(outputHeight) + "\n\n")
    outputWidth = outputTensors[0].dims[2]
    print("output Width is \n" + str(outputWidth) + "\n\n")
    outputChannel = outputTensors[0].dims[3]
    print("output Channel is \n" + str(outputChannel) + "\n\n")
    outputSize = outputHeight*outputWidth*outputChannel
    print("output Size is \n" + str(outputSize) + "\n\n")

    batchSize = inputTensors[0].dims[0]
    n_of_images = len(img)
    count = 0
    write_index = start
    while count < n_of_images:
        if (count + batchSize <= n_of_images):
            runSize = batchSize
        else:
            runSize = n_of_images - count
        shapeIn = (runSize,) + tuple([inputTensors[0].dims[i] for i in range(inputTensors[0].ndim)][1:])

        '''prepare batch input/output '''
        outputData = []
        inputData = []
        outputData.append(np.empty((runSize, outputHeight, outputWidth, outputChannel), dtype=np.float32, order='C'))
        inputData.append(np.empty((shapeIn), dtype=np.float32, order='C'))
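Note that in the excerpt above, count is never advanced, so the loop as shown would not terminate; presumably the elided remainder of the function increments it. The batching logic itself can be checked standalone with a mock tensor (MockTensor and the image count below are invented for illustration and are not part of the VART API):

```python
import numpy as np

# Hypothetical stand-in for the runner's tensor objects (illustration only)
class MockTensor:
    def __init__(self, dims):
        self.dims = dims
        self.ndim = len(dims)

# Input tensor shaped (batch, height, width, channels)
inputTensors = [MockTensor([4, 224, 224, 3])]
batchSize = inputTensors[0].dims[0]

n_of_images = 10   # pretend we have 10 images to process
count = 0
shapes = []
while count < n_of_images:
    # The last batch may be smaller than batchSize
    if count + batchSize <= n_of_images:
        runSize = batchSize
    else:
        runSize = n_of_images - count
    shapeIn = (runSize,) + tuple(inputTensors[0].dims[1:])
    inputData = [np.empty(shapeIn, dtype=np.float32, order='C')]
    shapes.append(shapeIn)
    count += runSize   # advance, otherwise the loop never ends

print(shapes)  # [(4, 224, 224, 3), (4, 224, 224, 3), (2, 224, 224, 3)]
```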


I am not sure what is going on. This flow is not going as smoothly as it is expected to be. The on-board Linux even has problems with logging in to the board using SSH (I am sure that my network configuration is correct).

Looking forward to your support. 

asobeih
Explorer
378 Views
Registered: ‎02-13-2016

Hi @jheaton, I just have one more question. If I would like to implement a single-core DPU on the ZCU104, which configuration should I use?

 

jheaton
Xilinx Employee
361 Views
Registered: ‎03-21-2008

@asobeih, use the prj_config_1dpu.
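In the Makefile quoted earlier in this thread, that would presumably mean swapping the --config file in XOCC_OPTS to the single-core one (a sketch, not the verbatim line):

```make
XOCC_OPTS = -t ${TARGET} --platform ${SDX_PLATFORM} --save-temps \
            --config ${DIR_PRJ}/config_file/prj_config_1dpu \
            --xp param:compiler.userPostSysLinkTcl=${DIR_PRJ}/strip_interconnects.tcl
```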

asobeih
Explorer
358 Views
Registered: ‎02-13-2016

@jheaton You missed my earlier post. Please consider the one I posted regarding the problems I have running the DPU on the FPGA:

https://forums.xilinx.com/t5/AI-and-Vitis-AI/DPU-TRD-on-ZCU104/m-p/1227016/highlight/true#M7224

 

 

asobeih
Explorer
331 Views
Registered: ‎02-13-2016

Hi @jheaton 

Sorry if I caused you any confusion. To restate the question I meant: the inquiry concerns a dual-core DPU B4096 architecture on the ZCU104 using the Vitis flow (2020.1). As described in my earlier post, running "dexplorer -w" fails with "[DNNDK_XRT] Cannot find device, index: 0" even after installing the Vitis AI Runtime 1.2.0 packages on the board, and my Python application exits with "Aborted!" when I run the same API code that works with the prebuilt ZCU104 image from the Vitis AI repo.

jheaton
Xilinx Employee
182 Views
Registered: ‎03-21-2008

@asobeih,

I think you will get the dexplorer message you are seeing if you run dexplorer -w without first installing the DNNDK tools.

Install the DNNDK tools (see https://github.com/Xilinx/Vitis-AI/blob/v1.2.1/mpsoc/README.md) and then dexplorer -w should work.
Here is what you should see if you built the TRD with 2 DPUs.

 

 
