
momran (Observer), 11-04-2019 07:47 PM
LDPC IP simulation and MATLAB mismatch

I am using the example design for the LDPC IP with the following parameters:

Standard: DOCSIS

Operation: Decode

Algorithm: Normalized Min-Sum

I am simulating this core against its bit-accurate MATLAB model (provided with the core).

I am keeping the configuration identical between both runs, yet I am getting mismatched results between the MATLAB reference model and the IP RTL simulation. Any ideas on how to match their results?

8 Replies

nathanx (Moderator), 11-04-2019 10:07 PM

The C model should match the IP core simulation. Check whether the parameters match and whether the input data matches.

momran (Observer), 11-04-2019 11:22 PM

It says that the input should be scaled to the range -7.75 to 7.75, even though each LLR is represented in 8 bits (6,2), which spans -31.75 to 31.75.
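A minimal sketch of the quantization being discussed, assuming the (6,2) format is a signed 8-bit fixed-point word with 2 fractional bits and that +/-7.75 is the recommended input range from the product guide (both figures taken from this thread, not verified against the IP):

```python
def quantize_llr(llr, int_bits=6, frac_bits=2, clamp=7.75):
    """Clamp a float LLR to the assumed recommended input range,
    then quantize to signed fixed-point (int_bits, frac_bits)."""
    step = 2.0 ** -frac_bits                         # 0.25 for (6,2)
    # clamp to the documented input range first
    llr = max(-clamp, min(clamp, llr))
    # round to the nearest representable step
    code = round(llr / step)
    # saturate to the symmetric range mentioned in the thread (+/-31.75)
    max_code = 2 ** (int_bits + frac_bits - 1) - 1   # 127 -> 31.75
    code = max(-max_code, min(max_code, code))
    return code * step
```

With these assumptions the 8-bit word can represent values up to 31.75, but any input is clamped to 7.75 before quantization, which is one plausible source of a mismatch if only one of the two models applies the clamp.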

vkanchan (Xilinx Employee), 11-06-2019 10:09 AM

Hi @momran,

Can you please specify how the LLR is calculated? The LLR should be calculated as described in PG281: LLR = log(P(1)/P(0)).

The LLR calculation in the MATLAB function is generally the inverse of what PG281 specifies: LLR = log(P(0)/P(1)).

The MEX model has a built-in saturation function to limit the LLR to the supported range, so the user need not be concerned about it. Also, sc_idx = 12 (0.75) is a good starting point for the decoder operation.
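The two sign conventions above differ only by a negation. A small sketch, assuming BPSK over AWGN with the common mapping bit 0 -> +1 and bit 1 -> -1 (the mapping is an assumption for illustration, not taken from PG281):

```python
def llr_bit1_over_bit0(y, sigma2=1.0):
    """LLR in the convention attributed to PG281 in this thread:
    log(P(bit=1)/P(bit=0)). For BPSK (0 -> +1, 1 -> -1) over AWGN
    this is -2*y/sigma2."""
    return -2.0 * y / sigma2

def llr_bit0_over_bit1(y, sigma2=1.0):
    """LLR in the convention attributed to the MATLAB toolbox here:
    log(P(bit=0)/P(bit=1)), i.e. simply the negative."""
    return 2.0 * y / sigma2
```

If the reference model and the RTL testbench each assume a different one of these conventions, every input LLR enters one of them with the wrong sign, which would explain a systematic mismatch.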

momran (Observer), 11-10-2019 11:01 PM

I have fixed the normalization factor for both.

I will check the LLR calculation in MATLAB.

momran (Observer), 11-10-2019 11:07 PM

Also, can you specify where in PG280 it says that, in the C or MATLAB model, the LLR is calculated as P(0)/P(1)?

vkanchan (Xilinx Employee), 11-10-2019 11:24 PM

Hi @momran,

What I meant was that if the LLR function from the MATLAB library is used, that LLR will generally be inverted relative to the LLR calculation required by PG281.

The LLR should be calculated as log(P(1)/P(0)) for both the IP and the MEX model. How is the LLR computed in these two models in your design?

momran (Observer), 11-13-2019 03:27 AM

I am not using the LLR function from the MATLAB library.

I am using the MEX model that comes with the Xilinx LDPC IP, and nowhere in PG280 does it say that the LLRs are calculated differently, so they should be the same.

Also, the model is bit accurate, so it should match the hardware output exactly, given that the configuration is the same.

My question about the normalization: if I choose a scaling factor of 12, what effect does it have on the input LLRs?
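On the normalization question: in the normalized min-sum algorithm the scaling factor multiplies the check-node messages inside the decoder, not the input LLRs. A minimal sketch of one check-node update, assuming sc_idx = 12 maps to alpha = 0.75 as stated earlier in the thread (the index-to-value mapping is taken from that reply, not from the product guide):

```python
def nms_check_update(llrs, alpha=0.75):
    """One normalized min-sum check-node update.
    For each edge i, the outgoing message is
    alpha * (product of signs of the other inputs) * min |llr_j|, j != i."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)   # min-sum magnitude
        out.append(alpha * sign * mag)      # normalization applied here
    return out
```

Under this reading, changing the scaling factor should not alter how the input LLRs are scaled; it only damps the check-to-variable messages during iterations.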

brendan.gee (Visitor), 12-05-2019 11:49 AM

Have you found a solution? I am also getting a mismatch in simulations, on the encoder side of things.