06-11-2020 12:37 AM
I'm working on a video processing project involving frame rate conversion, based on an Artix or Kintex device (it depends, actually). The video is streaming and the output must have relatively low latency. The source fps can vary dynamically over a wide range, let's say something like 30 to 120 Hz, while the output expects 60 fps. As I have implemented it, the conversion is based on the VDMA's capability to repeat/drop frames. And yes, it is working, but of course when the input fps differs from 30/60/120 (as the output always produces 60 fps) there are the expected effects like choppy video, etc. So it is not a good solution.
I'm looking for another way to compensate for the difference in fps. What is the right way to handle this situation and produce smooth video output? So far I haven't found an IP or XAPP describing an fps conversion process involving motion compensation or anything similar to improve the quality of the streaming video. Can you give me some directions here on how to proceed? Some "practical theory" would be a good help too.
06-11-2020 11:21 PM - edited 06-11-2020 11:22 PM
I don't have experience with Xilinx IPs for video.
I have used a triple frame buffer for frame rate conversion from 1080p24 to 1080p60 to prevent video display errors.
With a double frame buffer, video could be displayed, but with moving pictures the frame could split (tear) across a vertical area.
For frame rate conversion from 1080p30 to 1080p60, I think you could use a double frame buffer.
Try to use the triple frame buffering concept for your case.
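To make the triple-buffer idea concrete, here is a minimal sketch in Python (purely illustrative; this is not the Xilinx VDMA API). The point is that the writer and reader each own one buffer and exchange through a third "latest completed frame" slot, so neither side ever blocks and the reader never sees a half-written frame:

```python
# Minimal sketch of triple-buffer handoff (illustrative, not the VDMA API).
# Writer fills one buffer, reader displays another; the third holds the
# most recently completed frame, so neither side blocks and no tearing occurs.

class TripleBuffer:
    def __init__(self):
        self.write_idx = 0      # buffer the source is currently filling
        self.ready_idx = 1      # most recently completed frame
        self.read_idx = 2       # buffer the sink is currently displaying
        self.fresh = False      # True if ready_idx holds an unconsumed frame

    def writer_done(self):
        """Source finished a frame: swap the write and ready slots."""
        self.write_idx, self.ready_idx = self.ready_idx, self.write_idx
        self.fresh = True

    def reader_acquire(self):
        """Sink wants the next frame: take the ready slot if it is fresh,
        otherwise re-display the current one (a frame repeat)."""
        if self.fresh:
            self.read_idx, self.ready_idx = self.ready_idx, self.read_idx
            self.fresh = False
        return self.read_idx

tb = TripleBuffer()
tb.writer_done()              # source completes a frame in buffer 0
first = tb.reader_acquire()   # sink picks it up
again = tb.reader_acquire()   # no new frame arrived yet -> same buffer repeated
```

When the source is slower than the sink, `reader_acquire` naturally repeats the last frame; when it is faster, completed frames in the ready slot get overwritten (dropped), which is exactly the repeat/drop behavior the VDMA gives you.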
06-11-2020 11:31 PM
@k621219 , thank you for your answer. As you have read in my question, I'm already doing fps conversion using VDMA, and it must use at least the triple buffer concept to work, so that part is done. And it is working: frame rate conversion with repeated or dropped frames, and there are no artifacts like the ones you describe, because the buffer switching happens at the right moment. But the stream with moving objects is not smooth, which is understandable, so I need to improve it somehow. Any other suggestions?
06-11-2020 11:52 PM - edited 06-11-2020 11:55 PM
30 fps ---> 60 fps, frame repetition: I think plain frame repetition will look different from an original 60 fps video (if you have the original).
In our experience, plain frame repetition looked much like a 30 fps video display.
I think you could use a frame made by adding (averaging) two frames instead of a plain repeated frame.
But I am not sure that it will be good for your project.
Before you do it, try using the ffmpeg software to convert video files from 30 fps to 60 fps.
Compare the difference between plain frame repetition and the ffmpeg result.
The ffmpeg software is free and very useful for video projects.
120 fps ---> 60 fps, frame skipping: I think the result is the same as an original 60 fps video.
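The "two frame adding" idea above is just a pixel-wise average of the frames before and after the inserted position. A minimal sketch, assuming frames are flat lists of 8-bit grayscale pixels (real hardware would do this per color component):

```python
# Sketch of the "two frame adding" idea: build the inserted frame as the
# pixel-wise average of its neighbors instead of plainly repeating one.
# Frames here are flat grayscale pixel lists, purely for illustration.

def average_frame(frame_a, frame_b):
    """Pixel-wise mean of two equally sized frames."""
    return [(a + b) // 2 for a, b in zip(frame_a, frame_b)]

f0 = [0, 100, 200]
f1 = [50, 100, 0]
mid = average_frame(f0, f1)   # -> [25, 100, 100]
```

For the ffmpeg comparison, note that the plain `fps=60` filter only repeats/drops frames, while the `minterpolate` filter (e.g. `-vf minterpolate=fps=60`) does motion-compensated interpolation, so it is a useful reference for what the "smooth" result can look like.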
06-12-2020 12:06 AM
I am not exactly sure how you could do motion adaptation with low latency. I would say that you need at least one frame of extra latency, because you need to have one frame in advance to be able to recreate an intermediate frame if needed.
So my thought is that you fill some buffers (a bit like a big FIFO) and, depending on the speed of reception and consumption, you recreate frames to try to compensate.
Maybe there are some AI networks which can help without knowing what the future frame will be...
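The FIFO idea above amounts to timestamp matching: for each 60 Hz output tick, pick the latest buffered input frame whose presentation time has passed. A small sketch of that selection (illustrative only, assuming constant input rate for simplicity) also shows why non-multiple rates look choppy:

```python
# Sketch of timestamp-based frame selection: each output tick shows the
# latest input frame available at that time. This naturally repeats frames
# when the source is slower than the sink and drops them when it is faster.

def select_frames(in_fps, out_fps, n_out):
    """Return, for each output frame index, the input frame index shown."""
    shown = []
    for k in range(n_out):
        t = k / out_fps                # presentation time of output frame k
        shown.append(int(t * in_fps))  # latest input frame completed by t
    return shown

# 30 -> 60: every input frame shown exactly twice (a regular repeat)
print(select_frames(30, 60, 6))   # [0, 0, 1, 1, 2, 2]
# 44 -> 60: an irregular repeat cadence, which is what looks "choppy"
print(select_frames(44, 60, 6))
```

For 30 to 60 the cadence is perfectly regular; for 44 to 60 some frames are shown once and others twice in an uneven pattern, which is the judder the original poster describes.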
06-12-2020 12:08 AM
@k621219 Yes, you are right, but what if I have an input fps of, for example, 44 Hz? Or 102 Hz? The output is always 60 Hz. So it is normal to have some frames dropped or repeated, but the end result is choppy video, like freezing or jumping of the moving objects. I'm looking for some way to smooth the video in such a situation. What do you mean by "use two frame adding image"? Maybe creating a new frame from two frames (or more) to fill the gaps in the output stream?
06-12-2020 01:26 AM
@florentw Thank you. Maybe I didn't explain my requirements well: "relatively low latency" means something like up to a 3-5 frame delay for "smoothing" the video. There are additional delays in the processing, so I can spend a maximum of 5 frames of delay here before it becomes critical. The important thing is to preserve the objects' edges and shapes as close to reality as possible, so a simple frame addition probably will not be the best.
06-12-2020 08:15 PM - edited 06-12-2020 08:30 PM
This is an example: linear interpolation.
At the pixel and line level, linear interpolation is commonly used.
At the frame level, though, I have not used anything except a two-frame average.
If you could do linear interpolation for your frame conversion, I think the resulting video will be better.
But I think it will not be simple technically.
So I recommend using the ffmpeg software for frame rate conversion first.
I think ffmpeg will use plain repeating or skipping for frame rate conversion by default.
Compare the difference between your result and the ffmpeg result.
You might find that your logic or the Xilinx IP has some bugs.
I did a google search for "frame rate conversion algorithm".
I think that you could find a solution.
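Frame-level linear interpolation generalizes the two-frame average: an output frame that falls a fraction `alpha` of the way between input frames A and B becomes the blend (1 - alpha) * A + alpha * B. A minimal sketch, again using flat grayscale pixel lists purely for illustration:

```python
# Frame-level linear interpolation: the output frame is a weighted blend
# of its two neighbors, weighted by its temporal position between them.
# alpha = 0 reproduces frame_a, alpha = 1 reproduces frame_b,
# and alpha = 0.5 is the two-frame average mentioned above.

def lerp_frame(frame_a, frame_b, alpha):
    """Blend two equally sized frames by temporal position alpha in [0, 1]."""
    return [round((1.0 - alpha) * a + alpha * b)
            for a, b in zip(frame_a, frame_b)]

a = [0, 200]
b = [100, 0]
quarter = lerp_frame(a, b, 0.25)   # -> [25, 150]
```

A caveat worth noting: linear blending preserves overall brightness but produces ghosting on moving edges (two semi-transparent copies of the object), which is exactly why the edge-preservation requirement stated earlier points toward motion-compensated interpolation rather than plain blending.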
06-14-2020 03:02 PM
Do you really need motion compensation?
From what you described, I suspect you only need to implement a frame control function to avoid frame tearing and flicker.
If so, you should consider how to control the frame buffers, for example using double or triple buffering.
06-15-2020 06:26 AM
@k621219 It is not possible to publish any video related to the project, as it is commercial. So, I'm sorry, but no.
@watari Actually, I'm not really sure what I need yet, since I'm still researching the options. In fact you are right: most probably I will need to manage the buffering to keep the video source, processing, and sink synchronized. That is also a problematic task, because the nature of the processing requires buffering some frames and also has limitations, like a capability of at most 100 fps.
I was hoping to keep the source at a variable frame rate (30-120 fps), resynchronize it to a fixed processing rate, and keep the output fixed as well. That is the main bottleneck, actually: the sink has a limited capability of 60 fps, but the source rate may vary. This would be the perfect solution, but the difference in fps produces problematic video when objects are moving.
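For the resynchronization described above, each fixed-rate output tick lands at some fractional position between two input frames, and that fraction is exactly the blend weight an interpolator would need. A small sketch of computing those positions (illustrative, assuming a constant input rate):

```python
# For a variable-rate source resynchronized to a fixed-rate sink, each
# output tick falls between two input frames. The integer part says which
# input frame precedes it; the fractional part is the interpolation weight.

def output_phases(in_fps, out_fps, n_out):
    """Return (previous_input_index, alpha) for each output frame."""
    phases = []
    for k in range(n_out):
        pos = k * in_fps / out_fps   # position on the input frame timeline
        idx = int(pos)
        phases.append((idx, pos - idx))
    return phases

# 44 Hz source into a 60 Hz sink: the alpha values drift from tick to tick
for idx, alpha in output_phases(44, 60, 4):
    print(idx, round(alpha, 3))
```

A repeat/drop scheme ignores `alpha` and always shows frame `idx`; an interpolating scheme would blend frames `idx` and `idx + 1` by `alpha`, smoothing the irregular cadence at the cost of ghosting or (with motion compensation) more logic.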
06-15-2020 02:49 PM
I'm sure you only need to consider motion compensation when converting the frame rate upward, because then there are objects which are moving and/or static within the generated frames.
When converting the frame rate downward, you only need to manage the buffering to keep the video source, processing, and sink synchronized with multiple frame buffers, and to avoid frame tearing and flicker.
It's an easy solution if you don't need to handle a variable frame rate at the sink.
I suggest working out the frame buffer management geometrically, with a triangle on screen or on paper.
It's an easy way to understand the difference in pixel clock between source and sink at the vertical frame rate.
06-26-2020 01:47 AM
Probably I will play with the buffers or the synchronization between source, processing, and sink. For now I will leave the topic open, as there is no viable solution yet, but I will post an update if needed.