09-09-2020 07:12 PM
I am doing a FB RD - FB WR as mem2mem device (https://xilinx-wiki.atlassian.net/wiki/spaces/A/pages/80707705/Mem+2+Mem+without+any+Sub-device+device)
It works if FB RD and FB WR use the same xlnx,vid-formats. However, if the FB WR format differs from the FB RD format, it does not work (I would put a bridge IP in between to do the color space conversion).
Is it even possible to achieve this with mem2mem? And how do I control the FB WR format?
The command below returns no error, but has no effect:
v4l2-ctl --device /dev/video0 --set-fmt-video=width=1280,height=720,pixelformat=BGR
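One thing worth checking: for an m2m device, v4l2-ctl distinguishes the output (read) queue from the capture (write) queue, and the pixelformat has to be a FourCC the driver actually reports (plain `BGR` may silently not match anything; `BGR3` is the FourCC for 24-bit BGR). A sketch of how I would probe and set this — device node and formats are assumptions, verify against your hardware:

```shell
# List the formats the m2m device accepts on each queue
v4l2-ctl --device /dev/video0 --list-formats-out   # output (FB RD) side
v4l2-ctl --device /dev/video0 --list-formats       # capture (FB WR) side

# Set the two queues independently
v4l2-ctl --device /dev/video0 \
  --set-fmt-video-out=width=1280,height=720,pixelformat=NV12
v4l2-ctl --device /dev/video0 \
  --set-fmt-video=width=1280,height=720,pixelformat=BGR3

# Confirm what the driver actually latched
v4l2-ctl --device /dev/video0 --get-fmt-video-out --get-fmt-video
```

If the get commands echo back something other than what you set, the driver adjusted the request, which usually means the format isn't supported on that queue.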
09-10-2020 07:19 AM
How are you actually creating and testing your pipeline? Are you using gstreamer or writing your own application to directly use the v4l2 API?
The frame buffer DMAs are auxiliary devices of a V4L2 pipeline. They aren't V4L2 devices themselves, but rather are usable by other V4L2 devices (like Xilinx's V4L2 drivers). The driver for the DMAs exports special functions to control the formats of the engine; if you look at the various Xilinx V4L2 or DRM drivers, you will see that they call these functions directly to change the formats.

So, if you are using a standard Xilinx device, you can simply use the V4L2 userspace API to change the format. However, if you are doing something special or custom, you might need to write your own kernel driver that creates a logical device, so that you can do this from userspace via V4L2 or some other method (the DMA engine itself has no direct userspace interface).
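In practice, the quickest way to see how the frmbuf DMAs are wired into a V4L2 pipeline (and which entity owns which format) is to dump the media controller topology. A sketch, assuming the standard v4l-utils tools and that your design registers a media device at /dev/media0:

```shell
# Show all entities and links; the frmbuf read/write engines
# appear attached to the video nodes they service, not as
# standalone devices
media-ctl -p -d /dev/media0

# v4l2-ctl can also report which driver backs a given node
v4l2-ctl --device /dev/video0 --info
```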
09-10-2020 09:30 AM
I want to use gstreamer, and I intend to do the same thing the v4l2videoconvert plugin does with mem2mem. The only difference is that, instead of writing the frame back in the same format, I want to save it in a new format. Do you have any suggestions for reference samples I could follow?
Thanks so much in advance.
09-10-2020 11:00 AM
@peakpeak could you put up a small text diagram of your complete pipeline, to help me understand the flow you're trying to achieve? Not necessarily a gstreamer one, just the actual flow of data between components:
video-source -> component1 -> component2 -> sink
09-10-2020 06:46 PM
It's like below
gst-launch-1.0 filesrc location=/media/card/video.mp4 ! qtdemux ! h264parse ! omxh264dec internal-entropy-buffers=3 ! queue ! v4l2video0convert output-io-mode=5 capture-io-mode=4 disable-passthrough=1 import-buffer-alignment=true ! videoscale ! video/x-raw, width=1920, height=1080, format=BGR ! queue ! fakevideosink
where v4l2video0convert does the CSC (input: NV12, output: BGR) and videoscale does the resize (input: BGR, output: BGR).
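On the gstreamer side, one thing to try is forcing the m2m element's capture-side format with a capsfilter directly after it, so the hardware CSC (rather than videoscale's software path) performs the NV12-to-BGR conversion. A hedged sketch of that fragment, with the io-mode/alignment options dropped for clarity (add them back as needed); whether the underlying device actually advertises BGR on its capture queue is the part to verify first:

```shell
gst-launch-1.0 filesrc location=/media/card/video.mp4 ! qtdemux ! h264parse ! \
  omxh264dec internal-entropy-buffers=3 ! queue ! \
  v4l2video0convert disable-passthrough=1 ! \
  'video/x-raw, format=BGR' ! \
  videoscale ! 'video/x-raw, width=1920, height=1080, format=BGR' ! \
  queue ! fakevideosink
```

If negotiation fails at the capsfilter, `GST_DEBUG=3` in the environment will usually show which element refused the BGR caps.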