714 Views
Registered: ‎06-07-2019

Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

Hi everyone, 

I am using the VCU on the ZCU104 board. I have loaded the images contained in the BSP for the ZCU104 (https://www.xilinx.com/member/forms/download/xef.html?filename=xilinx-zcu104-v2019.1-final.bsp) built with PetaLinux 2019.1. I installed the software and sent the binaries following the instructions in the two documents below:

PG252 - H.264/H.265 Video Codec Unit v1.2

UG1144 - PetaLinux Tools Reference Guide

I have tried several examples and they all worked. I have compressed 8-bit and 10-bit images and videos, and that worked as well.

 

Now I have a 12-bit monochrome image (luma samples only) that I want to compress using the VCU Control Software Encoder, so I converted it into an 8-bit image and into a 10-bit image (using cross-multiplication, i.e. the rule of three). Obviously, with such a simple transformation the quality of the image is slightly reduced, but that is not the point of this thread.
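
For clarity, here is a minimal Python sketch of the kind of conversion I mean (a hypothetical helper operating on a plain list of pixel values, just to illustrate the rescaling):

# Minimal sketch of the 12-bit -> 8/10-bit rescaling ("rule of three").
# Works on a plain list of integer pixel values (hypothetical helper,
# only meant to illustrate the conversion described above).
def rescale(values, in_bits, out_bits):
    in_max = (1 << in_bits) - 1      # 4095 for 12 bits
    out_max = (1 << out_bits) - 1    # 255 for 8 bits, 1023 for 10 bits
    # rule of three: out / out_max = v / in_max
    return [round(v * out_max / in_max) for v in values]

# e.g. pixels_8  = rescale(pixels_12, 12, 8)
#      pixels_10 = rescale(pixels_12, 12, 10)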

I compressed the 8-bit image with this command:

ctrlsw_encoder -i image_8.bin -o image_8.hevc --ip-bitdepth 8 --input-width 3648 --input-height 3648 --chroma-mode CHROMA_MONO --input-format Y800 --ratectrl-mode CONST_QP --sliceQP 0

And the 10-bit image with this one:

ctrlsw_encoder -i image_10.bin -o image_10.hevc --ip-bitdepth 10 --input-width 3648 --input-height 3648 --chroma-mode CHROMA_MONO --input-format Y010 --ratectrl-mode CONST_QP --sliceQP 0

Here are the sizes of the two compressed files:

image_8.hevc = 4.7 Mbytes

image_10.hevc = 2.1 Mbytes

When I decompress them and display the two images side by side, I can see that, visually, the 10-bit image is way worse than the 8-bit one. Many details have disappeared from the 10-bit image.

Why does the compressed image coded over 10 bits have a smaller size than the 8-bit one? In the commands above, I have specifically asked for the same QP (Quantization Parameter), but is it really the same for an 8-bit and a 10-bit image/video?

Do you have any idea about this?

Thank you for your time

 

Tags (2)
0 Kudos
10 Replies
Xilinx Employee
666 Views
Registered: ‎08-01-2007

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

@alexandre99999 This is an interesting observation. The fact that the 10-bit compressed output size is smaller than the 8-bit compressed output doesn't make sense. It seems like there is a problem with how the input data is being treated.

Can you provide the original 12-bit image along with a description of the tools and command lines you used to convert from 12-bit to 10-bit?

We'd like to be able to reproduce your results.

Chris
Video Design Hub | Embedded SW Support

---------------------------------------------------------------------------
Don’t forget to Reply, Kudo, and Accept as Solution.
---------------------------------------------------------------------------
Scholar watari
636 Views
Registered: ‎06-16-2013

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

Hi @alexandre99999 

 

I'm not sure, but I suspect it is a packing issue with the luma signal.

Could you double-check how the luma signal is packed in the 8-bit, 10-bit, and 12-bit binary files?

 

Best regards,

619 Views
Registered: ‎06-07-2019

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

Hi!

Thank you for your answer. I have attached:

  • The (Python) script I use to convert the original image file (lena_1016x256_12bits.txt) from 12 bits to 8/10 bits,
  • The (Perl) script to convert the 8-bit txt file to the Y800 format (the input file contains monochrome 8-bit video samples in big endian),
  • The (Perl) script to convert the 10-bit txt file to the Y010 format (the input file contains monochrome 10-bit video samples, each stored in a 16-bit word; for more information on Y010, see this post: https://forums.xilinx.com/t5/Video/Why-does-the-VCU-Control-Software-Encoder-does-not-output-the/td-p/987267),
  • The 12-bit image, the converted 10-bit image and the converted 8-bit image (unfortunately, I cannot attach the 3648x3648 image for confidentiality reasons, but the problem is the same),
  • The compressed 8-bit file and the compressed 10-bit file.

The Perl and Python scripts are commented, but you can ask me for more explanation if needed.

The image looks like this:

lena_1016x256_12bits.jpg

 

Here are the commands I use to convert each file: 

  1. resize.py lena_1016x256_12bits.txt lena_1016x256_08bits.txt 12 8
  2. resize.py lena_1016x256_12bits.txt lena_1016x256_10bits.txt 12 10
  3. txt2y800.pl lena_1016x256_08bits.txt
  4. txt2y010.pl lena_1016x256_10bits.txt
  5. ctrlsw_encoder -i lena_1016x256_08bits.txt.bin -o lena_1016x256_08bits.txt.bin.hevc --ip-bitdepth 8 --input-width 1016 --input-height 256 --chroma-mode CHROMA_MONO --input-format Y800 --ratectrl-mode CONST_QP --sliceQP 0
  6. ctrlsw_encoder -i lena_1016x256_10bits.txt.bin -o lena_1016x256_10bits.txt.bin.hevc --ip-bitdepth 10 --input-width 1016 --input-height 256 --chroma-mode CHROMA_MONO --input-format Y010 --ratectrl-mode CONST_QP --sliceQP 0

Here are the sizes of several files:

lena_1016x256_08bits.txt.bin 254.0K
lena_1016x256_08bits.txt.bin.hevc 165.9K
lena_1016x256_10bits.txt.bin 508.0K
lena_1016x256_10bits.txt.bin.hevc 86.4K

lena_1016x256_10bits.txt.bin is about twice the size of lena_1016x256_08bits.txt.bin, which is expected because, for 8 bits, each pixel value is stored on 8 bits, whereas for 10 bits, each pixel value is stored on 16 bits (twice as much).
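
To make the packing fully explicit, here is a minimal Python sketch of the layout I am describing (the little-endian byte order of the Y010 16-bit words is an assumption on my side, and of course this layout may be exactly where the problem is):

# Minimal sketch of the packing described above.
# Y800: one byte per luma sample.
# Y010: one 16-bit word per luma sample, 10-bit value in the low bits
#       (little-endian byte order is my assumption, not a documented fact).
import struct

def pack_y800(values, out_path):
    with open(out_path, "wb") as f:
        f.write(bytes(v & 0xFF for v in values))

def pack_y010(values, out_path):
    with open(out_path, "wb") as f:
        for v in values:
            f.write(struct.pack("<H", v & 0x3FF))

# values come from the rescaled text files, one integer per line, e.g.:
# values = [int(line) for line in open("lena_1016x256_10bits.txt") if line.strip()]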

Thank you for helping!

 

0 Kudos
617 Views
Registered: ‎06-07-2019

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

Attached here are the remaining files.

0 Kudos
572 Views
Registered: ‎06-07-2019

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

Hi @watari

I do not believe it is a packing issue with the luma signal, because I can decompress both images (the 8-bit one and the 10-bit one) with the VCU and display them (i.e. see the actual pictures that were compressed). If it were a packing issue, I think the VCU would still have decompressed the files, but the binary output would have made no sense and no useful data could have been extracted from it. That is not the case here:

So, I decompressed the two images and transformed them back into text files (the Perl scripts are attached). 

In the attachments, I have also included:

  • The decompressed 8-bit file from the VCU (lena_1016x256_08bits.txt.bin.hevc.bin)
  • The decompressed 10-bit file from the VCU (lena_1016x256_10bits.txt.bin.hevc.bin)
  • The txt file containing the decompressed 8-bit luma pixels (lena_1016x256_08bits.txt.bin.hevc.bin.txt)
  • The txt file containing the decompressed 10-bit luma pixels (lena_1016x256_10bits.txt.bin.hevc.bin.txt)

Here are the commands to use them:

  1. To decompress the 8-bit file: 
    ctrlsw_decoder -i lena_1016x256_08bits.txt.bin.hevc -o lena_1016x256_08bits.txt.bin.hevc.bin
  2. To decompress the 10-bit file: 
    ctrlsw_decoder -i lena_1016x256_10bits.txt.bin.hevc -o lena_1016x256_10bits.txt.bin.hevc.bin
  3. To convert the 8-bit binary in Y800 format to a text file: 
    ./y8002txt.pl lena_1016x256_08bits.txt.bin.hevc.bin
  4. To convert the 10-bit binary in Y010 format to a text file (a Python sketch of this unpacking follows after this list): 
    ./y0102txt.pl lena_1016x256_10bits.txt.bin.hevc.bin
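
For completeness, here is a minimal Python sketch of the unpacking done in steps 3 and 4 (same 16-bit word assumption as for the packing earlier in this thread):

# Minimal sketch of unpacking the decoded raw files back to pixel values.
# Y800: one byte per sample.
# Y010: 16-bit little-endian words, sample value in the low 10 bits
#       (same assumption as for the packing).
import struct

def unpack_y800(in_path):
    with open(in_path, "rb") as f:
        return list(f.read())

def unpack_y010(in_path):
    with open(in_path, "rb") as f:
        data = f.read()
    return [struct.unpack_from("<H", data, i)[0] & 0x3FF
            for i in range(0, len(data), 2)]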

 

 

0 Kudos
518 Views
Registered: ‎06-07-2019

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

Hi, if the problem is the way the input data is being treated, can you give me more details about the Y010 format, or an example?

0 Kudos
Xilinx Employee
409 Views
Registered: ‎08-01-2007

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

It is likely that the problem is the formatting.  Take a look at the following forum post where another user provides useful information about the Y010 format.

https://forums.xilinx.com/t5/Video/Why-does-the-VCU-Control-Software-Encoder-does-not-output-the/m-p/988970/highlight/true#M26076

 

Chris
Video Design Hub | Embedded SW Support

---------------------------------------------------------------------------
Don’t forget to Reply, Kudo, and Accept as Solution.
---------------------------------------------------------------------------
0 Kudos
384 Views
Registered: ‎06-07-2019

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

Hi! 

Thanks for your answer! The funny thing is that I wrote the post and the conclusion in the link you gave. With the format I gave there as an answer, I have been able to compress and then decompress some images and to view the decompressed images. However, these were big images with a lot of details, and I could see that a lot of those details were gone. On the other hand, the 8-bit format did not show such behavior. I used metrics such as PSNR to confirm this. 

It gave me graphs like these: 

xilinx2.PNG, xilinx1.PNG
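
For reference, the PSNR comparison can be sketched like this in Python (assuming two equal-length lists of pixel values and the corresponding peak value, 255 for 8-bit and 1023 for 10-bit):

# Minimal PSNR sketch used to compare original vs decoded pixel values.
import math

def psnr(original, decoded, peak):
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# e.g. psnr(pixels_in_8,  pixels_out_8,  255)
#      psnr(pixels_in_10, pixels_out_10, 1023)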

So, I had concluded:

  • that the format I found and used was not the real Y010, and that I was mistaken at some point,
  • or that I misused the VCU Control Software (even though I use almost the same command),
  • or that the VCU behaves differently in 8-bit and 10-bit (not the same quantization step for the same quantization index, for example).

As you and another user stated that it might be a formatting problem, I am hoping that you could give me more details about the Y010 format (beyond what is in the PG252 document) or an example. On another note, I have dug in and it seems that another format exists for 10-bit monochrome images/video: XV10. But this format is not documented.

I look forward to hearing from you soon.

0 Kudos
370 Views
Registered: ‎06-07-2019

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

I am sorry for the various English mistakes I made in my previous post, I will be more careful next time.

"I could saw that a lot of those details had been gone"

"I wrongly used the vcu control software"

Cordially

 

0 Kudos
Xilinx Employee
312 Views
Registered: ‎08-01-2007

Re: Lower quality with 10bits depth pixels than 8bits with VCU

Jump to solution

@alexandre99999 Thanks for pointing out that I just pointed you back to your own post.
I looked into this further, found some additional information, and have replied to your original post with some clarification of the 10-bit Y-Only formats.

You can find my reply here:
https://forums.xilinx.com/t5/Video/Why-does-the-VCU-Control-Software-Encoder-does-not-output-the/m-p/1017636/highlight/true#M27423

Chris
Video Design Hub | Embedded SW Support

---------------------------------------------------------------------------
Don’t forget to Reply, Kudo, and Accept as Solution.
---------------------------------------------------------------------------