Participant shrtique

Petalinux VDMA s2mm mm2s + Python1300

Hello, guys!

I have a project that includes the OnSemi PYTHON1300 image sensor connected to a Zynq-7020.

As a starting point I used the MicroZed.org FMC-HDMI-CAM + PYTHON-1300-C Vivado HLS Reference Design (http://microzed.org/support/design/6251/51). From this project I learned how to work with the PYTHON1300 and the S2MM VDMA channel under the V4L2 framework. Now I need to bring up a slightly more complex system:

system.png

So the architecture of the system is very similar to the FMC-HDMI-CAM + PYTHON-1300-C Vivado HLS Reference Design. The difference is that my Image Processing module doesn't have a Linux driver, because it no longer needs to be configured from the OS.

  • The first problem: how do you use the VideoCAP input when you don't have a sub-device in Linux?

I tried to deceive Linux by adding a "fake" AXI-Switch IP block (it has a V4L2 driver). I instantiated it without any physical connection to the other blocks and wrote the following device tree:

For Python1300:

&amba {

	axi_vdma_4: axivdma@43000000 {
		compatible = "xlnx,axi-vdma-1.00.a";
		reg = <0x43000000 0x10000>;
		xlnx,flush-fsync = <1>;
		xlnx,num-fstores = <1>;
		#dma-cells = <1>;
		
		dma-s2mmchannel@43000030 {
			compatible = "xlnx,axi-vdma-s2mm-channel";
			interrupt-parent = <&intc>;
			interrupts = <0 52 4>;
			xlnx,datawidth = <0x40>;
		};
	};

	python_spi0: spi@43c10000 {
		compatible = "xlnx,python-spi-3.1";
		reg = <0x43c10000 0x1000>;
		status = "okay";
		clocks = <&clkc 15>;
		clock-names = "ref_clk";
		num-cs = <1>;
		#address-cells = <1>;
		#size-cells = <0>;
		
		python1300_sensor_0: python1300_sensor@0 {
			compatible = "onsemi,python1300-1.00.a";
			reg = <0>;
			spi-max-frequency = <100000>;

			ports {
				#address-cells = <1>;
				#size-cells = <0>;

				port@0 {
					reg = <0>;
					xlnx,video-format = <XVIP_VF_MONO_SENSOR>;
					xlnx,cfa-pattern = "mono";
					xlnx,video-width = <8>;
					python_sensor_source: endpoint {
						remote-endpoint = <&python_rxif_sink>;
					};
				};
			};
		};
	};

	python1300_rxif_0: python1300_rxif@43c20000 {
		compatible = "xlnx,v-python1300-rxif-3.1";
		reg = <0x43c20000 0x10000>;
		clocks = <&clkc 16>;
		
		regv18-gpios = <&gpio0 57 0>;
		regv33-gpios = <&gpio0 55 0>;
		regvpix-gpios = <&gpio0 56 0>;
		resetn-gpios = <&gpio0 59 1>; /* specify unused GPIO */

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@0 {
				reg = <0>;
				xlnx,video-format = <XVIP_VF_MONO_SENSOR>;
				xlnx,cfa-pattern = "mono";
				xlnx,video-width = <8>;
				python_rxif_sink: endpoint {
					remote-endpoint = <&python_sensor_source>;
				};
			};
			
			port@1 {
				reg = <1>;
				xlnx,video-format = <XVIP_VF_MONO_SENSOR>;
				xlnx,cfa-pattern = "mono";
				xlnx,video-width = <8>;
				python_rxif_source: endpoint {
					remote-endpoint = <&vcap_python_in>;
				};
			};
		};
	};


	
	vcap_python {
		compatible = "xlnx,video";
		dmas = <&axi_vdma_4 1>;
		dma-names = "port0";

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@0 {
				reg = <0>;
				direction = "input";
				vpipe-names = "python1300";
				vcap_python_in: endpoint {
					remote-endpoint = <&python_rxif_source>;
				};
			};
		};
	};
	
	
};

And for ImgProc w/ fake AXI-Switch:

&amba {

	axi_vdma_0: axivdma@43010000 {
		compatible = "xlnx,axi-vdma-1.00.a";
		reg = <0x43010000 0x10000>;

		xlnx,flush-fsync = <1>;
		xlnx,num-fstores = <1>;

		#dma-cells = <1>;
		dma-channel@0x43010000 {
			compatible = "xlnx,axi-vdma-mm2s-channel";
			interrupt-parent = <&intc>;
			interrupts = <0 35 4>;
			clocks = <&clkc 15>;
			clock-names = "axis";
			xlnx,datawidth = <0x40>;
		};
		dma-channel@0x43010030 {
			compatible = "xlnx,axi-vdma-s2mm-channel";
			interrupt-parent = <&intc>;
			interrupts = <0 36 4>;
			clocks = <&clkc 15>;
			clock-names = "axis";
			xlnx,datawidth = <0x40>;
		};
	};


	axis_switch_0: switch@0x43c00000 {
		compatible = "xlnx,v-switch-1.0";
		reg = <0x43c00000 0x10000>;
		clocks = <&clkc 15>;

		#xlnx,inputs = <2>;
		#xlnx,outputs = <1>;

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@0 {
				reg = <0>;
				switch_in0: endpoint {
					remote-endpoint = <&vcap_gamma_out>;
				};
			};
			port@1 {
				reg = <1>;
				
			};

			port@2 {
				reg = <2>;
				switch_out0: endpoint {
					remote-endpoint = <&vcap_gamma_in>;
				};
			};
			
			
		};
	};

	vcap_gamma {
		compatible = "xlnx,video";
		dmas = <&axi_vdma_0 1>, <&axi_vdma_0 0>;
		dma-names = "port0", "port1";

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			port@0 {
				reg = <0>;
				direction = "input";
				vcap_gamma_in: endpoint {
					remote-endpoint = <&switch_out0>;
				};
			};

			port@1 {
				reg = <1>;
				direction = "output";
				vcap_gamma_out: endpoint {
					remote-endpoint = <&switch_in0>;
				};
			};
		};
	};
};

  • The second problem concerns the initialization of these two VDMAs under the V4L2 framework.

At first I simply reused the example code from the FMC-HDMI-CAM + PYTHON-1300-C Vivado HLS Reference Design. In that example the Python1300 S2MM channel is initialized with DMABUF memory, the Image Processing MM2S channel with MMAP, and the ImgProc S2MM channel with DMABUF (though I suppose in my situation it could be MMAP too).
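
Roughly, the buffer setup for those queues looks like this (a stripped-down illustration of mine, not the exact reference-design code; the device fds and buffer count are placeholders):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request "count" buffers on one video node with the given queue type and
 * memory type (V4L2_MEMORY_MMAP or V4L2_MEMORY_DMABUF). */
static int request_buffers(int fd, enum v4l2_buf_type type,
			   enum v4l2_memory memory, unsigned int count)
{
	struct v4l2_requestbuffers req;

	memset(&req, 0, sizeof(req));
	req.count  = count;
	req.type   = type;
	req.memory = memory;

	return ioctl(fd, VIDIOC_REQBUFS, &req);
}

/* Hypothetical usage for the three queues described above: */
/*   request_buffers(imgproc_mm2s_fd, V4L2_BUF_TYPE_VIDEO_OUTPUT,  V4L2_MEMORY_MMAP,   3); */
/*   request_buffers(imgproc_s2mm_fd, V4L2_BUF_TYPE_VIDEO_CAPTURE, V4L2_MEMORY_DMABUF, 3); */
/*   request_buffers(python_s2mm_fd,  V4L2_BUF_TYPE_VIDEO_CAPTURE, V4L2_MEMORY_DMABUF, 3); */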

According to the V4L2 API, to share buffers between the Python1300 S2MM DMABUF queue and the Image Processing MM2S MMAP queue we should use the VIDIOC_EXPBUF ioctl (https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/vidioc-expbuf.html). In the API's example we get a file descriptor from an MMAP buffer and hand it to the DMABUF queue's buffers, which seems fine. But if we look at the example code from Microzed.org, we can see that they assign the descriptor exported from the MM2S MMAP buffers back to the same MM2S MMAP buffers again (the dbuf_fd assignment in the snippet below)! And after that they queue only the ImgProc MM2S and S2MM channels, so I couldn't find any mention of the Python1300 S2MM DMABUF channel at all...
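
For reference, this is how I read the flow that the VIDIOC_EXPBUF documentation describes (a minimal sketch of mine, not code from the reference design; I'm assuming exporter_fd is the MMAP output queue of the ImgProc MM2S device, importer_fd is the DMABUF capture queue of the Python1300 S2MM device, and buffer index i is valid on both):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int share_buffer(int exporter_fd, int importer_fd, unsigned int i)
{
	struct v4l2_exportbuffer eb;
	struct v4l2_buffer buf;
	int ret;

	/* 1. Export buffer i of the MMAP (output) queue as a DMABUF fd. */
	memset(&eb, 0, sizeof(eb));
	eb.type  = V4L2_BUF_TYPE_VIDEO_OUTPUT;
	eb.index = i;
	ret = ioctl(exporter_fd, VIDIOC_EXPBUF, &eb);
	if (ret < 0)
		return ret;

	/* 2. Queue that fd on the DMABUF (capture) queue of the other device,
	 *    so both channels end up working on the same physical memory. */
	memset(&buf, 0, sizeof(buf));
	buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	buf.index  = i;
	buf.memory = V4L2_MEMORY_DMABUF;
	buf.m.fd   = eb.fd;
	return ioctl(importer_fd, VIDIOC_QBUF, &buf);
}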

Snippet of the code:

#define BUFFER_CNT 3

int i;
for(i=0;i<BUFFER_CNT;i++)
{
	info->s2m_stream_handle.video_in.vid_buf[i].index =i ;
	info->s2m_stream_handle.video_post_process_in.vid_buf[i].index=i;
	info->s2m_stream_handle.video_post_process_out.vid_buf[i].index =i;

	/* export buffer for sharing buffer between two v4l2 devices*/
	memset(&eb, 0, sizeof(eb));
	eb.type = info->s2m_stream_handle.video_post_process_in.buf_type;
	eb.index = i;
	ret = ioctl(info->s2m_stream_handle.video_post_process_in.fd, VIDIOC_EXPBUF, &eb);
	ASSERT(ret< 0, "VIDIOC_EXPBUF failed: %s\n", ERRSTR);

	info->s2m_stream_handle.video_post_process_in.vid_buf[i].dbuf_fd =eb.fd;	

	/*Queue buffer for video_post_process_in pipeline*/
	v4l2_queue_buffer(&info->s2m_stream_handle.video_in,						
			& (info->s2m_stream_handle.video_post_process_in.vid_buf[i]));

	/* Queue buffer for video_post_process_out pipeline */
	v4l2_queue_buffer(&info->s2m_stream_handle.video_post_process_out,			
			& (info->s2m_stream_handle.video_post_process_out.vid_buf[i]));
}

void v4l2_queue_buffer(struct v4l2_dev *dev, const struct buffer *buffer)
{
	struct v4l2_buffer buf;
	int ret;

	memset(&buf, 0, sizeof buf);
	buf.type = dev->buf_type;
	buf.index = buffer->index;
	buf.memory = dev->mem_type;
	if (dev->mem_type == V4L2_MEMORY_DMABUF) {
		buf.m.fd = buffer->dbuf_fd;
	}
	ret = ioctl(dev->fd, VIDIOC_QBUF, &buf);
	if (ret < 0)
		ASSERT(ret, "VIDIOC_QBUF(index = %d) failed: %s\n", buffer->index, ERRSTR);
}

  • So how does this approach work? I can't match it with the API's description... But somehow it works, just not at all stages!

Here are some results of debugging my system, with the "strange" application code based on the Microzed example and the "fake" device tree mentioned above.

medias_v5.png

debug.png

As you can see in the capture, there is data streaming from RAM to the ImgProc module (M_AXIS_MM2S), the ImgProc module works fine and streams data into the ImgProc S2MM channel (S_AXIS_S2MM), but there is no data on the path back to RAM (M_AXI_S2MM).
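
One quick way to check whether the ImgProc VDMA S2MM channel was ever started would be to dump its control/status registers directly (a throwaway debug sketch, not part of the design; 0x43010000 is the ImgProc VDMA base from the device tree above, and 0x30/0x34 are the standard AXI VDMA S2MM_DMACR/S2MM_DMASR offsets):

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>

int main(void)
{
	const off_t vdma_base = 0x43010000;	/* ImgProc VDMA (axi_vdma_0) */
	volatile uint32_t *regs;
	int fd = open("/dev/mem", O_RDONLY | O_SYNC);

	if (fd < 0)
		return 1;
	regs = mmap(NULL, 0x1000, PROT_READ, MAP_SHARED, fd, vdma_base);
	if (regs == MAP_FAILED)
		return 1;

	/* DMACR bit 0 = RS (run/stop), DMASR bit 0 = Halted; error flags in
	 * DMASR point at slave/decode/internal errors on the write side. */
	printf("S2MM_DMACR = 0x%08x\n", regs[0x30 / 4]);
	printf("S2MM_DMASR = 0x%08x\n", regs[0x34 / 4]);

	munmap((void *)regs, 0x1000);
	close(fd);
	return 0;
}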

I really hope that someone has run into this problem before....

Thanks!

10 Replies
Observer lrocher

Re: Petalinux VDMA s2mm mm2s + Python1300

Hello,

When I try your solution with the VDMA and signal processing I have a problem:

it won't execute ret = ioctl(dev->fd, VIDIOC_REQBUFS, &rqbufs);

and I also have a problem when I run v4l2-compliance:

Buffer ioctls:
test VIDIOC_REQBUFS/CREATE_BUFS/QUERYBUF: OK
fail: ../../../v4l-utils-1.12.3/utils/v4l2-compliance/v4l2-test-buffers.cpp(571): q.has_expbuf(node)
test VIDIOC_EXPBUF: FAIL

Participant shrtique

Re: Petalinux VDMA s2mm mm2s + Python1300

@lrocher

Hi!

I'm off work now. Could you please wait till Monday 01.14? I will check our code structure and consult with my teammate.

And could you provide a little more information about your problem? I want to understand which part you are having trouble with: is it the S2MM or the MM2S VDMA channel? Your device tree and your V4L2 stream initialization code would also be useful.

Btw, I haven't solved the main problem; I've just put it aside for a while. Now I use only the S2MM channel of the VDMA in my projects, and all the image processing is done "on the fly" without buffering the whole image: I accumulate just enough image data for the "filtering window" and then process everything in real time. A rough sketch of this structure follows the diagram below.

Something like this: image 10x10 pixels, LINE_BUFFER SIZE = IMG_WIDTH - KERNEL_SIZE = 10 - 5 = 5:

//    DATA         KERNEL_BUFFER 5x5      LINE_BUFFER
// 47, 46, 45 --> [ 44, 43, 42, 41, 40 ] --> [ 39, 38, 37, 36, 35 ] --> *
//                * --> [ 34, 33, 32, 31, 30 ] --> [ 29, 28, 27, 26, 25 ] --> *
//                * --> [ 24, 23, 22, 21, 20 ] --> [ 19, 18, 17, 16, 15 ] --> *
//                * --> [ 14, 13, 12, 11, 10 ] --> [ 09, 08, 07, 06, 05 ] --> *
//                * --> [ 04, 03, 02, 01, 00 ] 
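
If it helps, the structure is roughly this in C (just my illustration of the shift chain in the diagram, not our actual IP code; boundary handling at the image edges is omitted):

#include <stdint.h>
#include <string.h>

#define IMG_WIDTH   10
#define KERNEL_SIZE 5
/* One long shift chain: KERNEL_SIZE window rows of KERNEL_SIZE pixels with
 * (KERNEL_SIZE - 1) line-buffer segments of (IMG_WIDTH - KERNEL_SIZE) pixels
 * between them, exactly as drawn above. */
#define CHAIN_LEN ((KERNEL_SIZE - 1) * IMG_WIDTH + KERNEL_SIZE)

static uint8_t chain[CHAIN_LEN];

/* Push one incoming pixel, shift the whole chain by one, and expose the
 * current KERNEL_SIZE x KERNEL_SIZE window.  Returns 1 once the chain has
 * filled up and the window is valid. */
static int push_pixel(uint8_t pixel, uint8_t window[KERNEL_SIZE][KERNEL_SIZE])
{
	static unsigned int count;
	int r, c;

	memmove(&chain[1], &chain[0], CHAIN_LEN - 1);
	chain[0] = pixel;

	for (r = 0; r < KERNEL_SIZE; r++)
		for (c = 0; c < KERNEL_SIZE; c++)
			window[r][c] = chain[r * IMG_WIDTH + c];

	if (count < CHAIN_LEN)
		count++;
	return count == CHAIN_LEN;
}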

Explorer joseer

Re: Petalinux VDMA s2mm mm2s + Python1300

Hi @shrtique ,

I'm trying to do something similar but only implementing the "image processing " block of your diagram having on my case a couple of processing IPs : "Sensor Demosaic" +  "VPSS-CSC" where instead the python camera the data source would be a memory buffer. 

I'm facing same problem as @lrocher  and I'm having an error when I run the buffer request ioctl (VIDIOC_REQBUFS), I post the problem with a quick testing code  here .

I'm not sure what I'm missing.... How do you initialize the v4l2 driver? could you please share the code where you initialize the v4l2 pipelines?

Thanks

Participant shrtique

Re: Petalinux VDMA s2mm mm2s + Python1300

Hi, all.

To help you, all I can provide is a snippet of our V4L2 initialization code. The main thing is that it only works for us with PetaLinux 2015.4.

Recently we tried to move to 2018.2, and it doesn't work. I described this problem here (last message): https://forums.xilinx.com/t5/Video/Incorrectly-generated-DT-for-Test-Pattern-Generator/m-p/981878?attachment-id=70587#M25724

The main problem is that after Linux boots I should have a video device (the VDMA), sub-devices (any IP module that is involved in the pipeline and has to be configured when the stream starts), and a media device (a sort of wrapper that ties the video device and sub-devices into the same pipeline). But in 2018.2 I don't get this media device, and I still haven't figured out why.
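
A quick user-space check for that media node looks roughly like this (a minimal sketch, just to see whether /dev/media0 was created at all and which driver registered it; the device path is the usual default, not something specific to our design):

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	struct media_device_info info;
	int fd = open("/dev/media0", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/media0");	/* no media device was registered */
		return 1;
	}
	memset(&info, 0, sizeof(info));
	if (ioctl(fd, MEDIA_IOC_DEVICE_INFO, &info) == 0)
		printf("driver: %s, model: %s\n", info.driver, info.model);
	close(fd);
	return 0;
}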

@joseer I looked briefly through your topic, and it seems to me that you also have the video device and sub-devices, but no media device. The thing is, the video device and sub-devices have to be put into the device tree, but creating the media entity is the drivers' responsibility. We spent two weeks going through the V4L2 drivers for 2018.2 without any success. All the official examples for 2018.2 are based on Zynq UltraScale+, so they have their own drivers.

Explorer joseer

Re: Petalinux VDMA s2mm mm2s + Python1300

Hi @shrtique ,

Many thanks for your help and quick answer. I'll check the code and try to find out where the problem is...

I have the media device (media0) created, and I can print it (media-ctl -p). I've got all the video sub-devices added to the device tree... the video0 and video1 devices are also created...

Participant shrtique

Re: Petalinux VDMA s2mm mm2s + Python1300

@joseer

Sorry, my fault, I missed the media-entity log. So it seems that everything is probed successfully in your design.

Btw, which chip are you using: Zynq-7000 or UltraScale?

Explorer joseer

Re: Petalinux VDMA s2mm mm2s + Python1300

Hi @shrtique ,

Thanks for your answer, 

Sorry, I didn't mention the chip I'm targeting: it is a Zynq UltraScale+ device.

I still don't understand why I get EINVAL returned after requesting buffers (VIDIOC_REQBUFS), or why I get the wrong capabilities when I query VIDIOC_QUERYCAP, all from user space (XSDK)... whereas using the yavta plugin (the same code I'm showing in my topic) all of this works....

Participant shrtique

Re: Petalinux VDMA s2mm mm2s + Python1300

@joseer Ok, I see... It's really strange, but still not a rare situation when you are working with PetaLinux + Zynq...

The only thing I can recommend from my own experience is to try different versions of Vivado and PetaLinux.

Another option is driver debugging. That's a huge amount of work, because the drivers sometimes change dramatically between PetaLinux versions. To do this, my colleague and I searched for text (the error messages from the console) in the driver files, marked some places in the code with printk, and then rebuilt the kernel. With the new information we searched for particular function calls, and step by step unraveled the V4L2 stack.

Maybe with such printk messages it would be possible to find out what the difference in settings is between your user-space sequence and the yavta plugin.
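
The kind of trace we dropped into suspect driver paths looked roughly like this (just an illustration; the macro name and message text are mine, not the real call sites):

#include <linux/kernel.h>

/* Print which function/line was reached and the return code at that point. */
#define XVIP_TRACE(ret) \
	printk(KERN_INFO "xvip dbg: %s:%d ret=%d\n", __func__, __LINE__, (ret))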

With this approach we managed to make some fixes for a few problems in 2015.4. But with that problem in 2018.2 we still failed and went back to 2015.4.

Updated: FYI, I attached the drivers that we investigated. The spi folder and the drivers connected with the Python1300 won't be of interest to you, but the other files might save you a little time. Note, though, that we worked with the Zynq-7000, and they could use a different driver for the DMA :(

Explorer joseer

Re: Petalinux VDMA s2mm mm2s + Python1300

Hi @shrtique , thanks for that, I'll have a look at the drivers...

Yes, we tried it with different Vivado versions, but always >= 2018.2 and with the same results. That is why we think we're doing something wrong..... I might test it with much older versions and see if the problem is related to the driver version; I'm trying to avoid the driver-debugging route at the moment (but I can see it coming anyway)...

Regarding your issue, I had similar problems due to the "compatible" string in the device tree; have you tried different versions? I don't know about the chroma resample IP, but for instance the TPG got the xlnx,v-tpg-8.0 version... On the other hand, have you tried removing the vpipe-names = "python1300"; property from the video_cap node?

Participant shrtique

Re: Petalinux VDMA s2mm mm2s + Python1300

Yes, I tried different compatibles. At first I worked with a pipeline that contained only my IP cores and, accordingly, their drivers. When that didn't work I decided to create a pipeline with some standard cores and their drivers. The result was the same.

Unfortunately I can't remember whether I tried working without vpipe-names, and right now I don't have a quick way to check it :)

If you try older versions, please make a note here or in your thread, even if the results are bad :)
