07-20-2017 04:37 PM
I am passing bytes from the PS to the PL into a BRAM FIFO. I am then sending the data one bit at a time to the convolutional encoder core, per its requirements. The encoder is configured for rate 1/2, constraint length 7 [171,133]. It passes its two LSBs to a glue logic core that splits them onto a 16-bit output bus, where bit_1 (from the conv. enc. core) is mapped to bit 8 (for the Viterbi core) and bit_0 (from the conv. enc. core) is mapped to bit 0 (for the Viterbi core).
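For reference, a software model can be useful for sanity-checking the hardware output bit-for-bit. The sketch below is a minimal bit-level model of a rate-1/2, K=7 [171,133] encoder plus the bus mapping described above; the shift-register bit ordering and which parity maps to bit_1 vs. bit_0 are assumptions that would need to be matched against the core's documentation.

```python
# Toy model of a rate-1/2, constraint-length-7 convolutional encoder
# with the standard [171, 133] (octal) generator polynomials.
G0, G1 = 0o171, 0o133  # generator polynomials, octal

def conv_encode(bits):
    """Encode a bit sequence; yields one (p0, p1) parity pair per input bit."""
    state = 0  # six delay elements, assumed zero at start
    out = []
    for b in bits:
        reg = (b << 6) | state              # newest bit in the MSB (an assumption)
        p0 = bin(reg & G0).count("1") & 1   # parity over the G0 taps
        p1 = bin(reg & G1).count("1") & 1   # parity over the G1 taps
        out.append((p0, p1))
        state = reg >> 1                    # advance the shift register
    return out

def to_bus_word(bit_1, bit_0):
    """Glue-logic mapping: bit_1 -> bus bit 8, bit_0 -> bus bit 0."""
    return (bit_1 << 8) | bit_0
```

An all-zeros input produces all-zero parity pairs, and a leading 1 produces (1, 1) with both of these generators, which makes a quick cross-check against the hardware straightforward.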
The Viterbi core is the standard configuration: no puncturing, hard coding, with the same generator codes. The output data is then used to reconstruct bytes, which are sent to another BRAM FIFO that the PS reads from.
Now, no matter how much data I pass to the TX FIFO, when I read back from the RX FIFO I am always 26 bytes short. Everything up until those final bytes is correct, though; the cores are successfully encoding/decoding the data.
I added a register to the intermediate glue logic core to count the amount of data that passes through it, and it is working as expected. For example, I pass 100 bytes' worth of data to the convolutional encoder core and see 800 transactions pass through my glue logic to the Viterbi core. So I don't know why it is dropping 26 bytes' worth of data... Any ideas?
07-24-2017 09:52 AM
It's been a while since I worked with a convolutional encoder or a Viterbi decoder, but doesn't the encoder spread the information from a single bit period over the length of the encoder? So if your encoder is of length n, the information in one byte is contained in 8+n bits. The encoding should be contiguous between bytes, so 50 bytes should require (50*8 + n) bits. The Viterbi decoder may also require some history, as it bases its output on the current and previous n-1 bits to determine the most likely output. Long story short, you might need to clock some extra bytes through to get all of your information out.
07-24-2017 10:38 AM
From what I can tell, the convolutional encoder takes in a single bit at a time and then outputs the two parity bits associated with it. So 50 bytes will take (50*8) single-bit transactions, and each transaction has a latency of 4 cycles with my current setup. Since these cores all use AXI-Stream, I am simply providing data as fast as the cores request it and then clearing the Viterbi core's internal state, once all of my data has passed, by asserting the reset line on the core.
I was under the impression that simply providing the parity bits directly to the Viterbi core, in order, was all that needed to be done at the top level; the IP core documentation doesn't specify doing anything more. Are you suggesting that I need to push some redundant information through the core in order to recover all the data? Again, all of the initial data is correctly decoded; it is only the tail of my message that is being dropped.
07-27-2017 02:52 PM
I've discovered a pretty sloppy solution and would like feedback on what exactly is happening here and how to fix it properly.
I am writing blocks of 255 symbol bytes to the convolutional encoder as described previously. In addition, at the end of each block I am transmitting an extra 25 bytes of all zeros. If I don't transmit these tail values, then the Viterbi decoder doesn't flush out all of my valid symbols.
Let S = 255 symbol bytes (information) and T = 25 tail bytes of zeros (no information). The operational steps are:
1. Initialize the core by writing T to it.
2. On every following transaction, write S+T.
3. Before reading my data, flush out T; the remaining 255 symbols are my valid data.
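The steps above can be sketched end-to-end with the same toy fixed-latency model (treating the decoder's latency as T bytes is an assumption chosen to match the observed behavior, not a property taken from the core's documentation):

```python
from collections import deque

S, T = 255, 25  # information bytes per block, tail bytes of zeros

def make_decoder(depth):
    """Toy decoder: a fixed-latency byte pipeline of the given depth."""
    pipe = deque([0] * depth)
    def step(b):
        pipe.append(b)
        return pipe.popleft()
    return step

dec = make_decoder(T)
info = list(range(1, S + 1))                # one block of "information" bytes

for _ in range(T):                          # step 1: initialize by writing T
    dec(0)

outputs = [dec(b) for b in info + [0] * T]  # step 2: write S + T
valid = outputs[T:]                         # step 3: flush out the first T reads
assert valid == info                        # the remaining 255 bytes are the data
```

In this model the T tail bytes exist only to push the last T information bytes out of the pipeline, which is consistent with the symptom that omitting them drops the tail of the message.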
This core needs to be pipelined between other cores, so in order to implement this solution in logic I need to wrap glue logic around the inputs and outputs of the Viterbi core to follow these steps. I'd really rather not add all that extra latency and logic to the fabric. This also CANNOT be the intended use of the core, so what am I missing -- what is the proper usage? And yes, I feel bad about having to post this hacky solution.