08-29-2019 03:16 PM - edited 08-29-2019 03:16 PM
I currently have a working project that uses GStreamer for video encoding with the VCU. It takes video frames from a USB camera, encodes them, and saves them to a file.
The camera interface is handled on the PS side and the frames are pushed into the GStreamer pipeline.
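For reference, the encode path I have working looks roughly like the gst-launch-1.0 sketch below. It is only illustrative: the element and device names (omxh264enc for the VCU encoder, /dev/video0, the caps) are assumptions for my particular setup, and in the real application the frames come from an appsrc fed by the USB camera code on the PS rather than from v4l2src.

```shell
# Sketch of the current working path: raw frames -> VCU H.264 encode -> file.
# In the real application an appsrc replaces v4l2src; element/device names
# and caps are assumptions for this setup.
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 \
  ! video/x-raw,format=NV12,width=1920,height=1080,framerate=30/1 \
  ! omxh264enc \
  ! h264parse \
  ! qtmux \
  ! filesink location=capture.mp4
```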
Now I need to do some frame processing on the PL side before pushing the frames into the pipeline. Is there a GStreamer element that allows this?
What would be the best and most efficient way to do it?
Any ideas/help would be appreciated.
Thanks in advance
09-04-2019 03:30 AM
It depends on what you are trying to do. If your processing is only scaling using the Video Processing Subsystem (VPSS), then yes, you will be able to use GStreamer, because the VPSS has a V4L2 driver that can be called from GStreamer.
But if you are using your own custom IP, it will only work if you have the proper drivers for it.
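To make that concrete: when a V4L2 mem2mem driver is loaded, GStreamer's v4l2 plugin usually auto-generates a transform element named after the device node, which you can drop straight into a pipeline. A sketch of checking for and using such an element (the name v4l2video0convert and the caps are assumptions that depend on the board and device tree):

```shell
# List the V4L2 elements GStreamer discovered on the board
# (the generated names vary per device tree).
gst-inspect-1.0 | grep v4l2

# Use the mem2mem device (e.g. a VPSS scaler) as an in-pipeline
# transform; v4l2video0convert is an assumed auto-generated name.
gst-launch-1.0 videotestsrc num-buffers=100 \
  ! video/x-raw,format=NV12,width=1280,height=720 \
  ! v4l2video0convert \
  ! video/x-raw,width=1920,height=1080 \
  ! fakesink
```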
09-04-2019 06:31 AM - edited 09-05-2019 12:31 AM
Hi @florentw ,
Thanks for your answer,
What I'm trying to do is convert raw Bayer frames to RGB using the Sensor Demosaic IP, then convert them to NV12, and finally feed them to the VCU for encoding.
The data (frames) is received through USB and buffered to memory in the PS domain. I'm able to push the frames into a GStreamer pipeline using the appsrc element, encode them and save them to a file, but obviously the resulting video's color format is wrong, so I need to do a Bayer interpolation and conversion to get the correct NV12 format supported by the VCU.
I could use the PS and the videoconvert element to process and prepare the frames, but that is quite CPU-intensive, so I need to use the PL in order to keep the frame rate as high as possible.
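For comparison, the CPU-only path I'm referring to would look something like the sketch below, using the software bayer2rgb demosaic element from gst-plugins-bad plus videoconvert. The Bayer pattern, resolution and encoder element are assumptions for my setup, and videotestsrc stands in for my appsrc:

```shell
# Software-only Bayer -> RGB -> NV12 -> VCU path (CPU intensive,
# shown only as the baseline I want to avoid). In the real
# application an appsrc with video/x-bayer caps replaces videotestsrc.
gst-launch-1.0 -e videotestsrc num-buffers=100 \
  ! video/x-bayer,format=rggb,width=1920,height=1080,framerate=30/1 \
  ! bayer2rgb \
  ! videoconvert \
  ! video/x-raw,format=NV12 \
  ! omxh264enc \
  ! h264parse ! qtmux ! filesink location=out.mp4
```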
So in summary, what I'm trying to do is one of the following (the Bayer demosaic & NV12 conversion step is the one I still need to implement):
Option 1: USB frames --> memory (PS) --> push into GStreamer pipeline (PS) --> Bayer demosaic & NV12 conversion (PL) --> back into GStreamer pipeline (PS)
Option 2: USB frames --> memory (PS) --> Bayer demosaic & NV12 conversion (PL) --> memory (PS) --> push into GStreamer pipeline (PS)
I saw the V4L2 mem2mem framework as a potential solution, but unfortunately it doesn't look like it supports Y8/GRAY8 or a similar format needed to send the raw data to the PL...
Do you know what would be the best way or ways to do this?
09-11-2019 03:55 AM
I am not sure if it would work, but what if you read the data as YUYV8? You would get the data as if it were 2 pixels per clock (just treat the chroma sample as if it were another Y), and then you could use an AXI4-Stream Data Width Converter to move back to one pixel per clock.
This is just an idea; I have no idea whether it will integrate well with the framework.
Let me know if you find a solution (or if you try this and it works), I would be interested.
09-11-2019 04:36 AM
Hi @florentw ,
Thanks for the reply and suggestion.
In summary, I see two potential solutions/options. Please feel free to add more alternatives (if there are any) or to correct these ones:
- Frame conversion within the GStreamer pipeline: set up the mem2mem driver framework for a pass-through data path, but insert an AXI4-Stream Data Width Converter IP along with the Sensor Demosaic IP, so the PL would look like:
frame-read IP (YUYV8) --> AXI4-Stream Data Width Converter --> Sensor Demosaic IP (RGB) --> frame-write IP
- Frame conversion before pushing into the GStreamer pipeline: set up a video pipeline (the same as above) and use the V4L2 driver to send the raw frames memory-to-memory (PS -> PL -> PS).
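If the YUYV8 trick from the previous reply pans out for the first option, the GStreamer side might look like the sketch below, where the raw 8-bit data is declared as YUY2 at half the real width so the V4L2 framework accepts it. Every name here is an assumption: the rawvideoparse caps stand in for my appsrc, v4l2video0convert is whatever element name GStreamer auto-generates for the PL mem2mem device, and omxh264enc is the assumed VCU encoder element:

```shell
# Read raw 8-bit frames as if they were YUY2 at half the width
# (2 bytes per "pixel"), push them through the PL mem2mem device,
# then encode the resulting NV12 frames with the VCU.
# filesrc + rawvideoparse stand in for the appsrc in the real app.
gst-launch-1.0 -e filesrc location=frames.raw \
  ! rawvideoparse format=yuy2 width=960 height=1080 framerate=30/1 \
  ! v4l2video0convert \
  ! video/x-raw,format=NV12,width=1920,height=1080 \
  ! omxh264enc \
  ! h264parse ! qtmux ! filesink location=out.mp4
```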
The first one seems the easiest, but I'm not sure whether it will work. I'll try to post the results...