deepg799
Explorer

Need help to port the Xilinx 8-Stream VCU + CNN demo design to Vitis AI

Below is the Xilinx 8-Stream VCU + CNN demo design, implemented to work with the DNNDK APIs. I would like to port the same solution to the Vitis AI platform instead of using the DNNDK APIs.

https://github.com/Xilinx/Embedded-Reference-Platforms-User-Guide/blob/2019.2/Docs/overview.md

It would be very helpful if anyone could guide me on how to use the Vitis AI APIs in this solution. High-level steps would be fine for me.

Thanks in advance.

 

 

jasonwu
Moderator

Hi @deepg799 ,

 

DNNDK APIs are still supported on the Vitis AI platform; you can find more information here:

https://github.com/Xilinx/Vitis-AI/tree/master/mpsoc

You can select the v1.1 tag if you are using Vitis AI 1.1 + Vitis 2019.2.

Best Regards,
Jason

deepg799
Explorer

@jasonwu 

Thanks for your valuable inputs.

My requirement is to use the Vitis AI 1.0 library stack for face detection instead of DNNDK.

As you can see in the screenshot below, the Vitis AI Library can be ported to this solution, but I am struggling to make it work. It would be great if a high-level procedure is available to help me.

image.png

 

Below are the observations from this task.

______________________________________  

1 - Installed the required packages for Vitis AI support in the rootfs.

2 - Able to compile the Vitis AI facedetect code by replacing the DNNDK code in the gstsdxfacedetect application source file.

But when running the plugin, I get the error below for the line "auto model = vitis::ai::FaceDetect::create("densebox_320_320");"

(gst-plugin-scanner:2777): GStreamer-WARNING **: 15:39:58.610: Failed to load plugin '/usr/lib/gstreamer-1.0/libgstsdxfacedetect.so': /usr/lib/gstreamer-1.0/libgstsdxfacedetect.so: undefined symbol: _ZN5vitis2ai10FaceDetect6createERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEb
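A quick way to narrow down this kind of undefined-symbol error (the library name follows the Vitis AI Library packaging; the exact path is an assumption for this rootfs):

        ldd /usr/lib/gstreamer-1.0/libgstsdxfacedetect.so | grep facedetect
        nm -D /usr/lib/libvitis_ai_library-facedetect.so | c++filt | grep "FaceDetect::create"

If the first command prints nothing, the plugin was not linked against the facedetect library when it was built.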

Thank you in advance. 

jasonwu
Moderator

Hi @deepg799 ,

 

I haven't tried the step-by-step flow with DPU + VCU before.

You can find the facedetect Vitis AI Library example here: https://github.com/Xilinx/Vitis-AI/tree/master/Vitis-AI-Library/facedetect

And since the reference design you refer to is a VCU + ML demo, the GStreamer-associated packages still need to be enabled.

You could get the configurations from that reference design or from this VCU TRD:

https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/18841711/Zynq+UltraScale+MPSoC+VCU+TRD

And here is another example on ZCU106:

https://forums.xilinx.com/t5/AI-and-Vitis-AI/Vitis-AI-DPU-TRD-for-ZCU106/td-p/1099635

Best Regards,
Jason

deepg799
Explorer

@jasonwu 

Thanks for the valuable input.

The previous issue got solved by linking the Vitis AI library while compiling the source code.
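For reference, a minimal sketch of the extra linker inputs that resolve the symbol (library names are the ones listed later in this thread; the compiler name and sysroot path depend on your SDK, so treat them as placeholders):

        ${CXX} --sysroot=${SYSROOT} -shared -o libgstsdxfacedetect.so <object files> \
            -lvitis_ai_library-facedetect -lvart-util -lpthread \
            $(pkg-config --libs gstreamer-1.0 gstreamer-base-1.0)

In the Vitis IDE the same thing is done by adding these names under the g++ linker "Libraries" settings.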

Since the reference design you refer to is a VCU + ML demo, the GStreamer-associated packages still need to be enabled - I replaced the SYSROOT with my updated rootfs, which has all the required packages enabled.

When I call "model_ = vitis::ai::FaceDetect::create("densebox_320_320");" instead of "kernel = dpuLoadKernel(kernel_name_.c_str())" in the gst_sdx_face_detect_start() function, I get the error below while running the pipeline.

 

[ 86.459574] [drm] Pid 2635 opened device
[ 86.463518] [drm] Pid 2635 closed device
[ 86.467478] [drm] Pid 2635 opened device
[ 87.288811] [drm] Finding IP_LAYOUT section header
[ 87.288818] [drm] Section IP_LAYOUT details:
[ 87.293618] [drm] offset = 0x126ae18
[ 87.297884] [drm] size = 0xa8
[ 87.301629] [drm] Finding DEBUG_IP_LAYOUT section header
[ 87.304758] [drm] AXLF section DEBUG_IP_LAYOUT header not found
[ 87.310063] [drm] Finding CONNECTIVITY section header
[ 87.315972] [drm] Section CONNECTIVITY details:
[ 87.321016] [drm] offset = 0x126aec0
[ 87.325537] [drm] size = 0x94
[ 87.329287] [drm] Finding MEM_TOPOLOGY section header
[ 87.332421] [drm] Section MEM_TOPOLOGY details:
[ 87.337466] [drm] offset = 0x126ad48
[ 87.341986] [drm] size = 0xd0
[ 87.347948] [drm] No ERT scheduler on MPSoC, using KDS
[ 87.356631] [drm] scheduler config ert(0)
[ 87.356633] [drm] cus(2)
[ 87.360638] [drm] slots(16)
[ 87.363333] [drm] num_cu_masks(1)
[ 87.366291] [drm] cu_shift(16)
[ 87.369771] [drm] cu_base(0x80000000)
[ 87.372993] [drm] polling(0)
[ 87.381068] [drm] Pid 2635 opened device

Caught SIGSEGV
#0 0x0000007f8b0b5800 in waitpid () from /lib/libpthread.so.0
#1 0x0000007f8b0f3a60 in g_on_error_stack_trace ()
#2 0x0000005576604ac0 in ?? ()
#3 <signal handler called>
#4 0x0000007f8a37af18 in vitis::ai::XdpuRunner::XdpuRunner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
#5 0x0000007f8a37c144 in vitis::ai::DpuRunner::create_dpu_runner(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
#6 0x0000007f89778afc in vitis::ai::DpuTaskImp::DpuTaskImp (
#7 0x0000007f8977539c in vitis::ai::DpuTask::create (model_name=...)
#8 0x0000007f8977c620 in vitis::ai::init_tasks (model_name=...)
#9 vitis::ai::ConfigurableDpuTaskImp::ConfigurableDpuTaskImp (
#10 0x0000007f89779b34 in vitis::ai::ConfigurableDpuTask::create (
#11 0x0000007f8a2b9f18 in vitis::ai::TConfigurableDpuTask<vitis::ai::FaceDetect>::TConfigurableDpuTask (need_preprocess=true, model_name=...,
#12 vitis::ai::DetectImp::DetectImp (this=0x5577cbab80, model_name=...,
#13 0x0000007f8a2b9d54 in vitis::ai::FaceDetect::create (model_name=...,
#14 0x0000007f8a3b4314 in deephi::DenseBox::Init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) ()
#15 0x0000007f8a3b7284 in gst_sdx_face_detect_start(_GstBaseTransform*) ()
#16 0x0000007f8ad29a54 in ?? () from /usr/lib/libgstbase-1.0.so.0
#17 0x0000007f8ad29ce0 in ?? () from /usr/lib/libgstbase-1.0.so.0
#18 0x0000007f8b2e9be0 in ?? () from /usr/lib/libgstreamer-1.0.so.0
#19 0x0000007f8b2ea5a4 in gst_pad_set_active ()
#20 0x0000007f8b2c59b8 in ?? () from /usr/lib/libgstreamer-1.0.so.0
#21 0x0000007f8b2d9940 in gst_iterator_fold ()
#22 0x0000007f8b2c6528 in ?? () from /usr/lib/libgstreamer-1.0.so.0
#23 0x0000007f8b2c8748 in ?? () from /usr/lib/libgstreamer-1.0.so.0
#24 0x0000007f8b2c8a64 in ?? () from /usr/lib/libgstreamer-1.0.so.0
#25 0x0000007f8a2a3cc8 in gst_sdx_base_change_state (element=0x5577ca89b0,
#26 0x0000007f8b2cb14c in gst_element_change_state ()
#27 0x0000007f8b2cb898 in ?? () from /usr/lib/libgstreamer-1.0.so.0
#28 0x0000007f8b2a58d4 in ?? () from /usr/lib/libgstreamer-1.0.so.0
#29 0x0000007f8b2cb14c in gst_element_change_state ()
#30 0x0000007f8b2cb348 in gst_element_change_state ()
#31 0x0000007f8b2cb898 in ?? () from /usr/lib/libgstreamer-1.0.so.0
#32 0x000000557660286c in ?? ()
#33 0x0000007f8af59ce4 in __libc_start_main () from /lib/libc.so.6
#34 0x0000005576602f78 in ?? ()
Spinning. Please run 'gdb gst-launch-1.0 2635' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.
^C[ 97.460357] [drm] Pid 2635 closed device
[ 97.464339] [drm] Pid 2635 closed device

Any input will be helpful here.

Thanks in advance. 

 

jasonwu
Moderator

Hi @deepg799 ,

 

Thanks for your update.

It is strange that you can see both the VAI- and VCU-associated .so files in the printout.

I would suggest you test each part separately.

For example, test facedetect with a USB camera and check whether it works.

Then use the command examples in the VCU TRD to test the VCU part.

 

Best Regards,
Jason

deepg799
Explorer

@jasonwu 

Thanks for the quick update.

It is strange that you can see both the VAI and VCU associated .so files in the printout - The prints appear together because we implemented the face-detection application as a GStreamer plugin.

My platform is enabled with the VCU + Vitis AI stack, and I am able to run all Vitis AI sample applications with RTSP stream support.

I also validated the 8-ch VCU + CNN (face-detection) demo on my platform by referring to the link below. In that demo the DNNDK APIs are used to build the gstsdxfacedetect plugin.

https://github.com/Xilinx/Embedded-Reference-Platforms-User-Guide

Now I am just trying to use the Vitis AI APIs instead of the DNNDK APIs, because the above link mentions that the solution can also be ported to Vitis AI.

My video pipeline is the same as the reference design; the only change required is to use the Vitis AI APIs instead of DNNDK.

I am very new to this environment. It would be great if you could help me port it.

For your reference attaching the required files.

Thanks in advance

jasonwu
Moderator

Hi @deepg799 ,

 

If so, it is more like custom code debugging.

Would you please provide a step-by-step flow so that I can reproduce this issue on my side?

Best Regards,
Jason

watari
Teacher

Hi @deepg799 

 

I'm not familiar with VCU + CNN, but I'd like to check GStreamer's debug log to investigate the root cause.

Would you share the debug log with the environment variable "GST_DEBUG=3" or the "gst-debug-level=3" option, if possible ?

I guess you might need to add some caps in it.

 

Best regards,

deepg799
Explorer

Hi @watari , @jasonwu 

Sure, I will provide the same shortly.


deepg799
Explorer

@watari 

Please find the attached log.

@jasonwu 

Following are the steps I followed to target the Vitis AI library instead of DNNDK.

1 - Set the SYSROOT variable, generated from my custom BSP with the Vitis AI + VCU features enabled.
2 - Opened the Vitis AI reference project files by following the steps in the '6.2.2.1. Import Existing GStreamer Workspaces' section:

https://github.com/Xilinx/Embedded-Reference-Platforms-User-Guide/blob/2019.2/Docs/tool-flow-tutorials.md

3 - Removed the face-detection inference code and validated sdxfacedetect as a pass-through plugin. (Attaching the reference code for the same.)

     RTSP->dec->scaler->sdxfacedetect(passthru)->display

Booted the board and copied the files below to the rootfs:
        cp libgstsdxtrafficdetect.so /usr/lib/gstreamer-1.0 // Modified plugin.
        cp libn2cube.so /usr/lib // Not modified
        cp libdpuaol.so /usr/lib // Not modified
        cp libgstxclallocator.so /usr/lib // Not modified
        cp libgstsdxbase.so /usr/lib // Not modified

Ran the pipeline below to validate the pass-through design.

GST_DEBUG=6 gst-launch-1.0 filesrc location=demo_inputs/face_15fps.mp4 ! qtdemux ! h264parse ! omxh264dec internal-entropy-buffers=3 ! queue ! videoscale method=0 ! video/x-raw, width=480,height=360 ! videoconvert ! queue ! sdxfacedetect ! queue ! videoconvert ! autovideosink

4 - After validating the pass-through design, included the Vitis AI header files and added a single line of code in the DenseBox::Init(const string& kernel_name) function defined in densebox.cpp (a sketch of the full replacement follows at the end of this post):

model_ = vitis::ai::FaceDetect::create("densebox_320_320"); 

5 - Linked the libraries required for Vitis AI - "pthread, vart-util, vitis_ai_library-facedetect" - under:

c/c++ build settings->c/c++ Build->settings->ARM A53 Linux g++ compiler->libraries

6 - Re-generated the libgstsdxfacedetect.so file.

Booted the board and replaced the gstsdxfacedetect plugin at the following path:

cp libgstsdxtrafficdetect.so /usr/lib/gstreamer-1.0 // Modified plugin.

Attaching the modified source code, where you can simply comment out the line below in the DenseBox::Init(const string& kernel_name) function defined in densebox.cpp:

//model_ = vitis::ai::FaceDetect::create("densebox_320_320");
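For anyone following the same port, here is a rough sketch of how the DNNDK calls map onto the Vitis AI Library facedetect API. It is based on the public facedetect sample, not the exact demo sources; the class shape, member names, and the drawing step are only illustrative.

// densebox.cpp - sketch only, assuming the Vitis AI Library 1.1 facedetect API
#include <memory>
#include <string>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vitis/ai/facedetect.hpp>

class DenseBox {
 public:
  // Replaces: kernel = dpuLoadKernel(kernel_name.c_str());
  int Init(const std::string& kernel_name) {
    model_ = vitis::ai::FaceDetect::create(kernel_name);  // e.g. "densebox_320_320"
    return model_ ? 0 : -1;
  }

  // Replaces the dpuRunTask()-based inference in the plugin's transform path
  void Run(cv::Mat& frame) {
    auto result = model_->run(frame);            // DPU inference + post-processing
    for (const auto& face : result.rects) {      // coordinates are relative (0..1)
      cv::rectangle(frame,
                    cv::Rect(face.x * frame.cols, face.y * frame.rows,
                             face.width * frame.cols, face.height * frame.rows),
                    cv::Scalar(0, 255, 0), 2);
    }
  }

 private:
  std::unique_ptr<vitis::ai::FaceDetect> model_;
};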

 

jasonwu
Moderator

Hi @deepg799 ,

 

Thanks for your update, but I am a little confused by your description.

 Set the SYSROOT variable. generated from the custom BSP, enabled with Vitis-AI + VCU features

So did you create the platform by yourself? Have you tested your modified application on the reference design?

 

boot the board and copied the below files to rootfs.
        cp libgstsdxtrafficdetect.so /usr/lib/gstreamer-1.0 // Modified plugin.
        cp libn2cube.so /usr/lib // Not modified
        cp libdpuaol.so /usr/lib // Not modified
       cp libgstxclallocator.so /usr/lib //Not modified
       cp libgstsdxbase.so /usr/lib //Not modified

 Why are you copying so many files to the system manually?

Best Regards,
Jason

deepg799
Explorer

@jasonwu 


So did you create the platform by yourself? - Yes, I am using a custom platform. I also ported the Xilinx 8-Ch + CNN Vitis application project (the gstsdxfacedetect plugin) to my custom platform and it is working. Now I just want to use the Vitis AI facedetect APIs instead of DNNDK, and I am struggling to replace the DNNDK API code with the Vitis AI API code in the gstsdxfacedetect project files (shared with you).

Have you tested your modified application on the reference design? - No, I did not test on the reference design. I am testing on my platform because the application will behave the same on both platforms.

The Vitis IDE requires a cross-compiler to compile the plugin project, so I generated the cross-compilation SDK from my PetaLinux BSP by executing the command below.
          petalinux-build --sdk
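In case it helps with reproducing my setup, the rest of that flow is roughly as follows (install directory and script names are just examples from a 2019.2 BSP; adjust to your project):

          ./images/linux/sdk.sh -d /opt/petalinux-sdk                        # install the generated SDK
          source /opt/petalinux-sdk/environment-setup-aarch64-xilinx-linux   # cross-toolchain environment
          export SYSROOT=/opt/petalinux-sdk/sysroots/aarch64-xilinx-linux    # sysroot used by the Vitis IDE project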

Why are you copying so many files to the system manually?

Because these files are dependency files for the gstsdxfacedetect plugin, I am copying these binaries to my SD card. Please have a look at the link below for more detail.

        https://github.com/Xilinx/Embedded-Reference-Platforms-User-Guide/blob/2019.2/Docs/tool-flow-tutorials.md

Please let me know if I am doing anything wrong here in order to port the DNNDK APIs to Vitis AI.


jasonwu
Moderator

Hi @deepg799 ,

 

Got it.

I am afraid I haven't tested that design before, so I need to go through the tutorial and create the design/platform.

It may take me some time.

In the meanwhile, I would suggest you try the same as I did and check if the issue still occurs.

And please feel free to try other forum members' debug suggestions. That may save you some time.

Best Regards,
Jason

deepg799
Explorer

@jasonwu 

Thanks for your valuable input.

In the meanwhile, I would suggest you try the same as I did and check if the issue still occurs. - What do you mean?

And please feel free to try other forum members' debug suggestions. That may save you some time. - Sure, I will.

 

Meanwhile, I am also debugging the same. Surely, I will update you if I have any progress.


deepg799
Explorer

@jasonwu 

Instead of creating the BSP & FPGA design, I would suggest you use the sd_card folder binaries.

You can re-generate the Vitis plugin project files located in the path below to understand the flow, since it will only take a few minutes.

zcu104_vcu_ml_2019_2_demo.zip

         -> workspaces


jasonwu
Moderator

Hi @deepg799 ,

 

Thanks for your input.

In the meanwhile, I would suggest you try the same as I did and check if the issue still occurs. - What do you mean?

I would suggest you reproduce the issue on the reference design, not on your own platform.

 

Best Regards,
Jason

watari
Teacher

Hi @deepg799 

 

I checked the debug log file. However, there wasn't enough information to investigate the root cause.

Would you share the following log files ?

1)

GST_DEBUG=3 gst-launch-1.0 filesrc location=demo_inputs/face_15fps.mp4 ! qtdemux ! h264parse ! omxh264dec internal-entropy-buffers=3 ! queue ! videoscale method=0 ! video/x-raw, width=480,height=360 ! videoconvert ! queue ! sdxfacedetect ! queue ! videoconvert ! autovideosink

2)

GST_DEBUG=5 gst-launch-1.0 filesrc location=demo_inputs/face_15fps.mp4 ! qtdemux ! h264parse ! omxh264dec internal-entropy-buffers=3 ! queue ! videoscale method=0 ! video/x-raw, width=480,height=360 ! videoconvert ! queue ! sdxfacedetect ! queue ! videoconvert ! autovideosink

 

Best regards,


deepg799
Explorer

@watari 

Thanks for your support.

Attaching the same.


deepg799
Explorer

Hi @jasonwu 

I have also checked on the reference design (ZCU104) and I am getting the same issue.


deepg799
Explorer

Hi @jasonwu 

Just to inform you:

The Xilinx 8-channel stream reference design is integrated with Vitis AI 1.0 support, while my shared binaries & steps are for Vitis AI 1.1, so you need to modify the source code accordingly.

Thanks & regards,

Gagandeep 

watari
Teacher

Hi @deepg799 

 

From your shared GST_DEBUG=5 log file, autovideosink requires proper caps in the pipeline (see 1).

However, you didn't set them.

Also, DRM was brought to the proper state by autovideosink in the GStreamer pipeline (see 2). Because insufficient parameters were set on DRM, DRM crashed.

So, you encountered this issue.

 

1)

>0:04:46.923887050 2709 0x5599ee1a00 DEBUG GST_CAPS gstutils.c:3065:gst_pad_query_caps:<autovideosink0-actual-sink-kms:sink> query returned video/x-raw, format=(string){ BGRA, BGRx, RGBA, RGBx, RGB, BGR, GRAY8, UYVY, YUY2, YVYU, I420, YV12, Y42B, NV12, NV21, NV16, GRAY10_LE32, NV12_10LE32, NV16_10LE32 }, width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-raw(format:Interlaced), format=(string){ BGRA, BGRx, RGBA, RGBx, RGB, BGR, GRAY8, UYVY, YUY2, YVYU, I420, YV12, Y42B, NV12, NV21, NV16, GRAY10_LE32, NV12_10LE32, NV16_10LE32 }, width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], framerate=(fraction)[ 0/1, 2147483647/1 ], interlace-mode=(string)alternate; video/x-raw(memory:XLNXLL), format=(string){ BGRA, BGRx, RGBA, RGBx, RGB, BGR, GRAY8, UYVY, YUY2, YVYU, I420, YV12, Y42B, NV12, NV21, NV16, GRAY10_LE32, NV12_10LE32, NV16_10LE32 }, width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], framerate=(fraction)[ 0/1, 2147483647/1 ]; video/x-raw(format:Interlaced, memory:XLNXLL), format=(string){ BGRA, BGRx, RGBA, RGBx, RGB, BGR, GRAY8, UYVY, YUY2, YVYU, I420, YV12, Y42B, NV12, NV21, NV16, GRAY10_LE32, NV12_10LE32, NV16_10LE32 }, width=(int)[ 1, 2147483647 ], height=(int)[ 1, 2147483647 ], framerate=(fraction)[ 0/1, 2147483647/1 ], interlace-mode=(string)alternate

0:04:47.257489700 2[ 467.616369] [drm] Pid 2709 closed device

(snip)

>stelement.c:2899:gst_element_set_state_func:<autovideosink0-actu[ 467.796986] [drm] Pid 2709 opened device
>al-sink-kms> final: setting state from NULL to READY
>[ 467.805581] [drm] Pid 2709 closed device
>[ 467.815521] [drm] Pid 2709 opened device
>[ 467.819455] [drm] Pid 2709 closed device

 

2)

>0:05:00.308127900 2709 0x5599ee1a00 DEBUG GST_CAPS gstpad.c:2733:gst_pad_get_current_caps:<videoconvert1:sink> get current pad caps (NULL)
>0:05:00.308157880 2709 0x5599ee1a00 DEBUG GST_CAPS gstpad.c:2733:gst_pad_get_current_caps:<videoconvert1:src> get current pad caps (NULL)

 

## Solution

a) Make sure there are proper caps before autovideosink (see the example after b)).

b) Check the current DRM graph with the following command and set the proper parameters:

modetest -M xlnx
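For example, something along these lines for a); the format and size values are only placeholders taken from the caps list above, so pick whatever your sink actually reports:

gst-launch-1.0 filesrc location=demo_inputs/face_15fps.mp4 ! qtdemux ! h264parse ! omxh264dec internal-entropy-buffers=3 ! queue ! videoscale method=0 ! video/x-raw, width=480,height=360 ! videoconvert ! queue ! sdxfacedetect ! queue ! videoconvert ! video/x-raw, format=NV12 ! autovideosink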

 

Best regards,

jasonwu
Moderator

Hi @deepg799 ,

 

So from your latest comment I can see that you modified the Vitis version/libraries/SW code from this reference design.

To be honest, this kind of issue may be a little hard to debug.

And I can see @watari provided very detailed suggestions; would you please try them first?

Best Regards,
Jason

deepg799
Explorer

Hi @jasonwu 

Thanks for your valuable inputs.

I have successfully ported the Vitis AI stack to this solution. Please let me know if you would like to know the procedure for the same.

 

Hi @watari 

1 - Getting the below error while running the command "modetest -M xlnx"

      failed to open device 'xlnx': No such file or directory

2 - The pipeline fails to configure the video mode when I try to use kmssink instead of autovideosink. (Attaching the log with this reply.)

GST_DEBUG=3 gst-launch-1.0 filesrc location=demo_inputs/video.mp4 ! qtdemux ! h264parse ! omxh264dec internal-entropy-buffers=3 ! queue ! videoscale method=0 ! video/x-raw, width=420,height=420 ! videoconvert ! queue ! yolov3 ! fpsdisplaysink name=fpssink text-overlay=false video-sink="kmssink bus-id=fd4a0000.zynqmp-display fullscreen-overlay=1" sync=true -v -vm

GstMessageError, gerror=(GError)NULL, debug=(string)"../../../git/sys/kms/gstkmssink.c\(1435\):\ gst_kms_sink_set_caps\ \(\):\ /GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstKMSSink:kmssink0:\012failed\ to\ configure\ video\ mode";
ERROR: from element /GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstKMSSink:kmssink0: GStreamer error: negotiation problem.
Additional debug info:
../../../git/sys/kms/gstkmssink.c(1435): gst_kms_sink_set_caps (): /GstPipeline:pipeline0/GstFPSDisplaySink:fpssink/GstKMSSink:kmssink0:
failed to configure video mode

ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
0:00:02.852036540 2794 0x55762c60f0 ERROR kmssink gstkmssink.c:587:configure_mode_setting:<kmssink0> cannot find appropriate mode
0:00:02.852162490 2794 0x55762c60f0 WARN kmssink gstkmssink.c:1435:gst_kms_sink_set_caps:<kmssink0> error: failed to configure video mode

 

DP mode is set to 1024x768 resolution.

Could you please help me with how to set the proper video mode in order to use the kmssink element instead of autovideosink?

Thanks in advance

watari
Teacher

Hi @deepg799 

 

1)

>1 - Getting the below error while running the command "modetest -M xlnx"

>      failed to open device 'xlnx': No such file or directory

 

OK. Would you share the following result ?

 

modetest

 

2)

>DP mode is set to 1024x768 resolution.

>Could you please help me on how I can set proper video mode in order to use the kmmsink element instead of autovideosink.

 

2-a)

>0:00:02.301661860 2680 0x5599a140f0 WARN basetransform gstbasetransform.c:1355:gst_base_transform_setcaps:<videoscale0> transform could not transform video/x-raw(memory:GLMemory), format=(string)RGBA, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)sRGB, framerate=(fraction)15/1 in anything we support
>Got message #66 from element "pipeline0" (stream-start): GstMessageStreamStart, group-id=(uint)1;
>Got message #77 from pad "h264parse0:sink" (property-notify): 0:00:02.506303740 2680 0x5599a140f0 WARN basetransform gstbasetransform.c:1355:gst_base_transform_setcaps:<videoscale0> transform could not transform video/x-raw(memory:GLMemory), format=(string)RGBA, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, colorimetry=(string)sRGB, framerate=(fraction)15/1 in anything we support

>GstMessagePropertyNotify, property-name=(string)caps, property-value=(GstCaps)"video/x-h264\,\ stream-format\=\(string\)avc\,\ alignment\=\(string\)au\,\ level\=\(string\)3.1\,\ profile\=\(string\)high\,\ codec_data\=\(buffer\)0164001fffe1001a6764001facd9405005bb0110000003001000000301e0f183196001000668ebe3cb22c0\,\ width\=\(int\)1280\,\ height\=\(int\)720\,\ framerate\=\(fraction\)15/1\,\ pixel-aspect-ratio\=\(fraction\)1/1";

 

2-b)

>Got message #82 from pad "h264parse0:src" (property-notify): GstMessagePropertyNotify, property-name=(string)caps, property-value=(GstCaps)"video/x-h264\,\ stream-format\=\(string\)byte-stream\,\ alignment\=\(string\)au\,\ level\=\(string\)3.1\,\ profile\=\(string\)high\,\ width\=\(int\)1280\,\ height\=\(int\)720\,\ framerate\=\(fraction\)15/1\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ interlace-mode\=\(string\)progressive\,\ chroma-format\=\(string\)4:2:0\,\ bit-depth-luma\=\(uint\)8\,\ bit-depth-chroma\=\(uint\)8\,\ parsed\=\(boolean\)true";
>/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)high, width=(int)1280, height=(int)720, framerate=(fraction)15/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
>Got message #83 from element "omxh264dec-omxh264dec0" (latency): no message details
>Redistribute latency...
>Got message #84 from pad "omxh264dec-omxh264dec0:sink" (property-notify): GstMessagePropertyNotify, property-name=(string)caps, property-value=(GstCaps)"video/x-h264\,\ stream-format\=\(string\)byte-stream\,\ alignment\=\(string\)au\,\ level\=\(string\)3.1\,\ profile\=\(string\)high\,\ width\=\(int\)1280\,\ height\=\(int\)720\,\ framerate\=\(fraction\)15/1\,\ pixel-aspect-ratio\=\(fraction\)1/1\,\ interlace-mode\=\(string\)progressive\,\ chroma-format\=\(string\)4:2:0\,\ bit-depth-luma\=\(uint\)8\,\ bit-depth-chroma\=\(uint\)8\,\ parsed\=\(boolean\)true";
>0:00:02.535279550 2680 0x5599a11b70 ERROR kmssink gstkmssink.c:587:configure_mode_setting:<kmssink0> cannot find appropriate mode
>/GstPipeline:pipeline0/GstOMXH264Dec-omxh264dec:omxh264dec-omxh264dec0.GstPad:sink: caps = video/x-h264, stream-format=(string)byte-stream, alignment=(string)au, level=(string)3.1, profile=(string)high, width=(int)1280, height=(int)720, framerate=(fraction)15/1, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, chroma-format=(string)4:2:0, bit-depth-luma=(uint)8, bit-depth-chroma=(uint)8, parsed=(boolean)true
>0:00:02.840580140 2680 0x5599a11b70 WARN kmssink gstkmssink.c:1435:gst_kms_sink_set_caps:<kmssink0> error: failed to configure video mode

 

Before I reply, I'd like to ask you some questions.

Q)

Refer to 2-a) and 2-b). You can find the root cause there.

In this case, you must be clear about the input resolution and the output resolution (1024x768).

Would you share them ?

I assume that the input resolution is 1280x720p.

In this case, you must downscale the video stream, with the proper format, in the videoscale element.

Is it right ?

Also, you must convert to the proper video format with something.

I'm not sure; I guess you may use the videoconvert element.

Then, you must set the proper parameters on the DRM graph.

However, I don't know your DRM graph.

So, would you share the following result to understand your environment ?

 

ls /proc/device-tree/amba_pl@0/

 

After I get your result, I will explain in more detail.

 

Best regards,

deepg799
Explorer

Hi @watari

Thanks for the update.

Would you share the following result ?

$modetest

trying to open device 'i915'...failed
trying to open device 'amdgpu'...failed
trying to open device 'radeon'...failed
trying to open device 'nouveau'...failed
trying to open device 'vmwgfx'...failed
trying to open device 'omapdrm'...failed
trying to open device 'exynos'...failed
trying to open device 'tilcdc'...failed
trying to open device 'msm'...failed
trying to open device 'sti'...failed
trying to open device 'tegra'...failed
trying to open device 'imx-drm'...failed
trying to open device 'rockchip'...failed
trying to open device 'atmel-hlcdc'...failed
trying to open device 'fsl-dcu-drm'...failed
trying to open device 'vc4'...failed
trying to open device 'virtio_gpu'...failed
trying to open device 'mediatek'...failed
trying to open device 'meson'...failed
trying to open device 'pl111'...failed
trying to open device 'stm'...failed
no device found

would you share the following result to understand your environment?

root@vitis-ai:~# ls /proc/device-tree/amba_pl@0/

#address-cells #size-cells compatible interrupt-controller@80020000 misc_clk_0 misc_clk_1 misc_clk_2 name ranges vcu@a0000000 zyxclmm_drm

In this case, you must downscale the video stream, with the proper format, in the videoscale element. Is it right ? - Yes, currently I am using the videoscale element.

gst-launch-1.0 filesrc location=demo_inputs/video.mp4 ! qtdemux ! h264parse ! omxh264dec internal-entropy-buffers=3 ! queue ! videoscale method=0 ! video/x-raw, width=420,height=420 ! videoconvert ! queue ! yolov3 ! fpsdisplaysink name=fpssink text-overlay=false video-sink="kmssink bus-id=fd4a0000.zynqmp-display fullscreen-overlay=1" sync=true -v -vm

Would you suggest the right parameters for the same?

Then, you must set the proper parameters on the DRM graph. - Would you please help me do the same?

 


watari
Teacher

Hi @deepg799 

 

Hmm, it seems that there is no proper DRM device on your Linux.

So, you can't launch the GStreamer pipeline.

Would you share the full boot log to check the boot sequence and the DRM graph ?

 

Best regards,


deepg799
Explorer

Hi @watari 

Please find the attached log for the same.

 

I am able to run the GStreamer pipeline with the autovideosink element, but not with kmssink.


watari
Teacher

Hi @deepg799 

 

Sorry. I misunderstood your environment.

 

# About VCU + CNN Demo

Step 1.

Check the number of files in /sys/devices/platform/<amba_pl@0 or amba>/<zyxclmm_drm directory name>/

Step 2.

If you don't find the full set of DRM files in this directory, or the number is less than 6, you must execute insmod zocl.ko (see the example below).

=> Make sure the .ko file is in your library directory. (You may find it in /lib/modules/<kernel version name>/extra/.)

 

After that, I guess you can achieve what you want to do.
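As a concrete example of those two steps (the directory and module paths are assumptions; they differ per build):

ls /sys/devices/platform/amba_pl@0/zyxclmm_drm/
insmod /lib/modules/$(uname -r)/extra/zocl.ko
ls /dev/dri/    # a renderD* node should show up once zocl is loaded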

 

# About gstreamer

 

>I can able to run the gstreamer pipeline with autovideosink element, not with the kmmsink.

 

In your case, at least there is something wrong with the DRM graph. (KMS/DRM uses DP Tx as the DRM connector.)

Check the following result to see whether the DRM graph is fine or not:

 

ls /sys/devices/platform/amba/fd4a0000.zynqmp-display/drm

 

# Note

 

The autovideosink element searches for a proper sink device and constructs and connects the proper DRM graph.

So, you can run the GStreamer pipeline without any error.

 

I hope this helps.

 

Best regards,


deepg799
Explorer

@watari 

Thanks for the update.

I am able to use kmssink with the following pipeline.

 

gst-launch-1.0 filesrc location=<path_to_h264_file> ! qtdemux name=demux demux.video_0 ! h264parse ! omxh264dec ! queue max-size-bytes=0 ! fpsdisplaysink name=fpssink text-overlay=false video-sink="kmssink bus-id=fd4a0000.zynqmp-display fullscreen-overlay=1" sync=true -v

 

Here I am not using videoscale to scale down the video file.
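If the scaled branch is needed again for the detection plugin, one variant to try is sketched below. It is not verified on this design: the 420x420 model size comes from my earlier pipeline and the 1024x768 output caps are assumed from the DP mode mentioned above, so adjust both.

gst-launch-1.0 filesrc location=demo_inputs/video.mp4 ! qtdemux ! h264parse ! omxh264dec internal-entropy-buffers=3 ! queue ! videoscale method=0 ! video/x-raw, width=420,height=420 ! videoconvert ! queue ! yolov3 ! queue ! videoscale ! videoconvert ! video/x-raw, width=1024,height=768 ! kmssink bus-id=fd4a0000.zynqmp-display fullscreen-overlay=1 sync=true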
