11-04-2019 07:47 PM
I am using the example design for the LDPC IP with the following parameters:
Algorithm: Normalized Min-Sum
I am simulating this core against its bit-accurate MATLAB model (provided with the core).
I am fixing the configuration between both runs; however, I am getting mismatched results between the MATLAB reference model and the IP RTL simulation. Any ideas how to match their results?
11-04-2019 11:22 PM
The only thing I am concerned about is the input scaling; it is not that clear in the product guide.
It says that the input should be scaled to the range -7.75 to +7.75, even though each LLR is represented in 8-bit (6,2) fixed point, which spans -31.75 to +31.75.
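To make the scaling step concrete, here is a minimal sketch of clamping and quantizing floating-point LLRs before driving the core. The +/-7.75 clip and the 0.25 step (2 fractional bits) are taken from this thread; the function name and the exact rounding mode are assumptions, so check PG280 for the behaviour your core actually implements.

```python
import numpy as np

def quantize_llr(llr, clip=7.75, frac_bits=2):
    # Saturate to the documented input range (the MEX model reportedly
    # does this internally), then snap to the fixed-point grid.
    step = 2.0 ** -frac_bits            # 0.25 for 2 fractional bits
    llr = np.clip(llr, -clip, clip)
    return np.round(llr / step) * step

# Example: raw channel LLRs before they are fed to the IP / MEX model
raw = np.array([-12.3, -0.1, 0.26, 9.9])
print(quantize_llr(raw))                # [-7.75 -0.    0.25  7.75]
```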
11-06-2019 10:09 AM
Hi @momran ,
Can you please specify how the LLR is calculated? The LLR should be calculated as mentioned in PG281: LLR = log( Pr(bit = 1) / Pr(bit = 0) ).
The LLR calculation in the MATLAB function is generally the inverse of what is mentioned in PG281: LLR = log( Pr(bit = 0) / Pr(bit = 1) ).
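Since the two conventions differ only by the sign of the log ratio, converting MATLAB-convention LLRs to the PG281 convention is just a negation. A minimal sketch (variable names are illustrative):

```python
import numpy as np

# LLRs in the MATLAB convention: log(Pr(bit=0) / Pr(bit=1))
llr_matlab = np.array([3.2, -1.5, 0.0, 7.75])

# PG281 convention per this thread: log(Pr(bit=1) / Pr(bit=0)).
# The two differ only by sign, so a negation converts between them.
llr_ip = -llr_matlab
```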
The MEX model has an inbuilt saturation function to limit the LLRs to the supported range, so the user need not be concerned about it. Also, sc_idx = 12 (a scaling factor of 0.75) is a good starting point for decoder operation.
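For reference, the 12 -> 0.75 pairing quoted above suggests the index maps linearly to a factor of sc_idx/16; this mapping is an inference from this thread rather than a confirmed spec, so verify it against the scaling-factor table in PG280:

```python
def sc_idx_to_factor(sc_idx):
    # Hypothetical linear mapping inferred from 12 -> 0.75 (12/16);
    # confirm against the scaling-factor table in PG280.
    return sc_idx / 16.0

print(sc_idx_to_factor(12))   # 0.75
```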
11-10-2019 11:24 PM
Hi @momran ,
What I meant to say was: if the LLR function from the MATLAB library is used, then that LLR will generally be inverted relative to the LLR calculation required by PG281.
The LLR calculation should be log( P(1) / P(0) ) for both the IP and the MEX model. How is the LLR computed in both of these models in your design?
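As a worked example of the P(1)/P(0) convention, here is one common way to form channel LLRs for BPSK over AWGN. The bit-to-symbol mapping (bit 0 -> +1, bit 1 -> -1) and the noise level are assumptions for illustration; with that mapping the LLR reduces to -2*y/sigma^2, and flipping either the mapping or the convention simply flips the sign.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 0.8                          # AWGN noise std dev (illustrative)

bits = rng.integers(0, 2, size=8)
x = 1.0 - 2.0 * bits                 # assumed mapping: bit 0 -> +1, bit 1 -> -1
y = x + rng.normal(0.0, sigma, size=bits.size)

# LLR in the log(P(bit=1)/P(bit=0)) convention discussed above.
llr = -2.0 * y / sigma**2
```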
11-13-2019 03:27 AM
I am not using the LLR function from the MATLAB library.
I am using the MEX model that comes with the Xilinx LDPC IP, and nowhere in PG280 does it say that the LLRs are calculated differently, hence they should be the same.
Also, the model is bit accurate, so it should match the hardware output exactly, given that the configuration is the same.
My question is about the normalization: if I choose a scaling factor of 12, what effect should it have on the input LLRs?
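For context on where that factor acts: in the textbook normalized min-sum algorithm the normalization scales the check-to-variable messages, not the channel LLRs themselves. Below is a minimal sketch of one check-node update under that textbook formulation, using 0.75 for a presumed sc_idx = 12 (see the earlier reply); this is not the IP's exact implementation.

```python
import numpy as np

def check_node_update(v2c, alpha=0.75):
    """Textbook normalized min-sum check-node update.
    v2c: variable-to-check messages arriving at one check node.
    alpha: normalization factor (0.75 assuming sc_idx = 12).
    Returns the check-to-variable messages."""
    v2c = np.asarray(v2c, dtype=float)
    sign = np.sign(v2c)
    sign[sign == 0] = 1.0
    total_sign = np.prod(sign)
    mag = np.abs(v2c)
    # Per edge: product of the *other* signs, min of the *other* magnitudes.
    idx = np.argsort(mag)
    min1, min2 = mag[idx[0]], mag[idx[1]]
    other_min = np.where(np.arange(v2c.size) == idx[0], min2, min1)
    return alpha * total_sign * sign * other_min

print(check_node_update([1.5, -0.25, 3.0]))   # [-0.1875  1.125  -0.1875]
```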