Pull the VCU encoded data to PL and then push it back to the Gstreamer Pipeline

Hello everyone,

I am using the "Zynq UltraScale+ MPSoC VCU TRD 2020.1 - HDMI Video Capture" design to capture video from a camera, encode it with the VCU IP, and send it over Ethernet using RTP/UDP.

My current GStreamer pipeline looks like this:

$ gst-launch-1.0 v4l2src device=/dev/video0 io-mode=4 ! video/x-raw, format=NV12, width=3840, height=2160, framerate=60/1 ! omxh265enc qp-mode=auto gop-mode=basic gop-length=60 b-frames=0 target-bitrate=60000 num-slices=8 control-rate=constant prefetch-buffer=true low-bandwidth=false filler-data=true cpb-size=1000 initial-delay=500 periodicity-idr=60 ! video/x-h265, profile=main, alignment=au ! queue ! mpegtsmux alignment=7 name=mux ! rtpmp2tpay ! udpsink host= port=5004



Now I want to pull the VCU-encoded data into the PL for processing, and afterwards push it back into the GStreamer pipeline and continue through udpsink, as above.

So I want to have this pipeline: 

v4l2src device=/dev/video0 ==> encode with VCU IP ==> (pull/read the encoded data to PL ==> process with own IP ==> push back to Gstreamer Pipeline) ==> mpegtsmux ==> rtpmp2tpay ==> udpsink  

I have an idea of how this could be achieved using the Frame Buffer Read IP (to pull the encoded data into the PL) and the Frame Buffer Write IP (to push it back to memory / the GStreamer pipeline). I thought splitting the GStreamer pipeline into two pipelines might help:

Pipeline 1: v4l2src device=/dev/video0 ==> encode with VCU IP ==> which sink? (Frame Buffer Read / fakesink?)

--- between the two pipelines, the data is processed by my own IP in the PL and pushed back with a Frame Buffer Write ---

Pipeline 2: (Frame Buffer Write) ==> mpegtsmux ==> rtpmp2tpay ==> udpsink

Is this idea feasible? If so, how should I assign the addresses of the Frame Buffer Read and Frame Buffer Write IPs in the Address Editor?

To which addresses does the VCU IP write the encoded data?

And how does the block design in Vivado relate to the GStreamer pipeline? Does it control the data flow between the memory areas of the IP cores used in the pipeline?


Thanks in advance

1 Reply


You need v4l2convert for that; check the mem2mem design flow in the wiki.
You could split the pipeline in half, but you will run into trouble trying to keep the two pipes in sync, so you won't get any decent performance out of it. You might also be tempted to do it in userspace with appsink/appsrc elements, but don't do that either: the high CPU usage will make it unusable.

The hardware would look like: VFB Read --> PL stream IP --> VFB Write.
Then, in the device tree (system-user.dtsi), create a mem2mem node like:

	video_m2m {
		compatible = "xlnx,mem2mem";
		dmas = <&v_frmbuf_rd_0 0>, <&v_frmbuf_wr_0 0>;
		dma-names = "tx", "rx";
	};


If you are building on an older version (before 2020.2), the device tree generator will miss the reset pins for the video frame buffers.
You must connect the VFB reset pins to the EMIO GPIO on the Zynq (for reference, look at how the VCU reset is connected in the zcu104 BSP).
Then add reset properties to the device tree (system-user.dtsi) for both video frame buffers:

	&v_frmbuf_rd_0 {
		/* the offset of the GPIO output pins starts after the input pins of the EMIO GPIO */
		reset-gpios = <&gpio 79 1>;
	};

	&v_frmbuf_wr_0 {
		reset-gpios = <&gpio 80 1>;
	};

Your GStreamer pipeline would look like:

gst-launch-1.0 videotestsrc ! v4l2convert capture-io-mode=4 output-io-mode=4 disable_passthrough=1 ! fakesink
gst-launch-1.0 videotestsrc ! v4l2convert capture-io-mode=4 output-io-mode=5 disable_passthrough=1 ! fakesink

(Set the io-mode properly if you connect to a plugin that supports dmabuf, like the OMX decoder.)


Workaround for sub-device mode:

Unfortunately the mem2mem driver is buggy and can't support the "with sub-device" mode, but there is a workaround for it.
The easiest way is to set up your pipeline in no-subdev mode and then control your IP with a non-V4L2 driver. (Anything will work, so just go with whatever is neater for you: a UIO driver, an ioctl-based driver, or simply mmap the register space.)
Keep in mind that the mem2mem driver expects your stream IP to work in free-running mode, so configure and start your IP first, then launch the GStreamer pipeline.

Another (and the proper) way is to write a V4L2 subdev driver for it, but no matter what I tried, I couldn't get xilinx-m2m to work with subdev drivers, either on the VPSS pipeline or on custom pipelines we developed. The workaround works fine without any performance loss, though it's a little inconvenient.

There's another issue with mem2mem: when the resolutions of the sink and source differ, it can't set the active window properly. I've patched the driver but haven't made time to send it to the maintainer yet (sorry).
If you are going to do scaling, you must separate the active-window variable in xilinx-m2m.c.
The current implementation uses one active v4l2_rect for both sides and messes up when two different resolutions are set (no wonder you see so many VPSS scaler related questions around here).

if you got any further questions feel free to ask.

mksafavi [at]

