12-13-2018 06:38 PM
Good Evening Folks,
Attached are a script and an output file showing my system's failure to pass 3G and 12G video through using NV16 (4:2:2).
You will see the commented-out sections where NV12 (4:2:0) "works".
What am I doing wrong?
Any guidance is appreciated. I have done quite a few iterations to identify what works, and I am at a point where I have eroded into the scalp where my hair used to be.
Run with one of the following two commands:
./xilinx-question.002.bash 3G
./xilinx-question.002.bash 12G
I am running on a ZCU-106, rev C, under 2018.2 until my download completes; then I will update.
12-17-2018 10:46 AM
Sorry for the frustration, but NV16 is not supported in the 2018.1 or 2018.2 versions of the ZCU106 VCU TRD, only NV12 is supported.
If you look at the hardware design, you will see that the LSBs are removed and the data feeding the Frame Buffer is only 8 bits (NV12).
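(For clarity: NV12 and NV16 are both 8-bit semi-planar formats and differ only in the chroma plane. A quick sketch of the per-frame sizes, using an illustrative 1080p frame rather than anything taken from the TRD:)

```shell
#!/bin/bash
# Per-frame buffer sizes for a hypothetical 1920x1080 frame, showing the
# difference between the two semi-planar formats (8-bit samples in both):
W=1920; H=1080
NV12_BYTES=$(( W*H + W*H/2 ))   # luma plane + half-height CbCr plane (4:2:0)
NV16_BYTES=$(( W*H + W*H ))     # luma plane + full-height CbCr plane (4:2:2)
echo "NV12: ${NV12_BYTES} bytes/frame"
echo "NV16: ${NV16_BYTES} bytes/frame"
```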
12-17-2018 05:44 PM
Good Afternoon @chrisar,
I just tried this with the 2018.3 pre-built vcu_sdirxtx. No joy.
Attached are the previous output and current output.
Can you give me any advice?
Scott M. Peimann
12-21-2018 10:56 AM
Good Morning @chrisar,
I appreciate your help on this topic. I must apologise to you, as my previous response was useless, and frankly intellectually void. Please tell me when I waste your time with something like that.
My previous post was vague about what I wanted to communicate. What I should have written is that we are not able to invoke NV16 (4:2:2, 8-bit) pass-through using the 2018.3 pre-built vcu_sdirxtx.
I understand that 10-bit video is supposed to be fully functional on the ZCU-106 hardware under 2018.3; however, the Linux drivers, Linux utilities, and vcu_gst_app have not been brought up to a tested level of 10-bit functionality.
The problem/goal which we are discussing here is to demonstrate simple, correct behaviour of the system:
Under this test, I would expect the data stream transmitted from the FPGA to differ from the input stream only in the two least-significant bits.
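To be concrete, here is a sketch of what I mean by "differing by two bits" (my own illustration, not code from the attached script): pushing a 10-bit sample through an 8-bit path should zero the two LSBs, so the round-trip error is at most 3 codes.

```shell
#!/bin/bash
# Illustration: a 10-bit SDI sample through an 8-bit frame buffer loses its
# two least-significant bits on the way through.
IN=679                         # example 10-bit sample (0x2A7)
OUT=$(( (IN >> 2) << 2 ))      # keep the 8 significant bits, re-align
DIFF=$(( IN - OUT ))           # value of the two lost bits: 0..3
echo "in=${IN} out=${OUT} diff=${DIFF}"
```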
When slightly modified, the bash file xilinx-question.002.bash will run NV12 pass-through of both 3G and 12G video. Using an external bit-stream analyser to inspect the input and output streams, I see colour transforms of single-colour input, which I have documented for one of our optics people to confirm. If he finds these transforms problematic, that will become a new topic.
My working NV12 (4:2:0) pass-through is attached to this post as xilinx-question.002.NV12.bash, along with a copy of the execution output. The script requires one argument, 3G or 12G, to specify the SDI input. You will likely have to configure the autostart.sh script so that modetest is compatible. See my post here, wherein I ask about modetest.
Invocation of SDI pass-through using NV12 on FPGA:
./xilinx-question.002.NV12.bash 3G
./xilinx-question.002.NV12.bash 12G
So, how should I modify the script to give NV16 pass-through?
The two locations marked "DO WE CHANGE HERE FOR NV12 versus NV16?" are where I believe the script should be modified. Unfortunately, neither of the commented-out changes works, whether applied singly or together.
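For reference, the kind of change I am attempting at those two spots can be sketched as follows. This is a simplified stand-in, not the attached script: the device path, resolution, and variable name are placeholders, and I am assuming the format ends up in ordinary GStreamer caps.

```shell
#!/bin/bash
# Hypothetical sketch of the "DO WE CHANGE HERE" spots: select the pixel
# format once and build the caps string the pipeline would use from it.
FMT=${1:-NV12}   # NV12 (4:2:0) works; NV16 (4:2:2) is what I am after
CAPS="video/x-raw, format=${FMT}, width=1920, height=1080, framerate=60/1"
# The real script launches a pipeline here; I only print it for illustration.
echo "gst-launch-1.0 v4l2src device=/dev/video0 ! \"${CAPS}\" ! kmssink"
```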
Apparently I am doing something incorrectly, but I am unable to figure out what it might be.
@chrisar, again, I apologise for my previous post on this topic; your time is valuable to me, and wasting it is unacceptable.
Hopefully this post is complete enough that you have the "right" information to help me with this problem. If you need more information, I will get it for you.