01-10-2018 08:28 AM
We need to implement an LVDS serial interface for a quad ADC device on a Kintex Ultrascale device.
The ADC delivers the serial clock, a frame clock, and 16-bit data for each of the 4 channels, split across two wires (8 bits per wire).
We are driving the ADC at 65 MHz, which results in a 260 MHz DDR data clock.
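As a quick sanity check of the numbers (assuming one sample per 65 MHz clock and the two-wire split described above):

```python
f_sample = 65e6                      # ADC sample rate (Hz)
bits_per_sample = 16
wires_per_channel = 2

# Bits per second each wire must carry
bitrate_per_wire = f_sample * bits_per_sample / wires_per_channel  # 520 Mb/s

# DDR: one bit on each clock edge, so the bit clock is half the bit rate
ddr_clock = bitrate_per_wire / 2     # -> 260 MHz
```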
The clock enters the FPGA on a QBC pin (T2U N6/7). The ADC interface is spread across Byte 2 and Byte 3 of the I/O bank.
The deserialization is done using bitslices generated by the High Speed SelectIO Wizard (see graphics and receiver_wiz.vhd).
After deserialization, a bitslip is performed. For the simulation (see receiver_testbech.vhd) I simply use a shift by 1 bit to align the data correctly. Bitslip is done for each wire separately; the data is then combined into a 16-bit word.
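The recombination step itself is trivial; as a sketch (hypothetical helper name, not the actual receiver_wiz.vhd code):

```python
def combine_wires(msb_byte: int, lsb_byte: int) -> int:
    """Merge the two aligned 8-bit wire outputs into one 16-bit sample.

    Assumes each wire's byte has already been bitslip-aligned, so the
    MSB wire carries bits 15..8 and the LSB wire carries bits 7..0.
    """
    return ((msb_byte & 0xFF) << 8) | (lsb_byte & 0xFF)
```

For example, `combine_wires(0x12, 0x34)` yields `0x1234`.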
The simulation first generates a pattern that my bitslip logic needs for alignment. After the bitslip estimation is done, the simulation generates a counter counting from 0 to 65000.
We observe strange effects on the parallel data. Every 256 samples we get 2 faulty samples (see waveform images at the marker position), after which the counter continues correctly for the next 256 samples. The error occurs only on the MSB wire; the other wire seems fine.
We cannot figure out what we are doing wrong, so any help is highly appreciated!
Thanks for your help,
01-10-2018 11:20 PM
Thanks for the quick response.
We are using a Kintex UltraScale (xcku035-fbva900-2-e) with Vivado 2017.4.
01-15-2018 10:25 PM
01-18-2018 03:48 AM
Thank you very much for your effort!
In the meantime we have solved the issue.
For the bitslip we used a 3-element shift register but filled it from the wrong side. Because our test pattern was a counter, it appeared to work, apart from the two errors every 256 samples. Using "random" numbers would have revealed the problem immediately.
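For anyone hitting the same symptom, the failure mode can be reproduced in a small software model. The sketch below (Python, hypothetical names, a heavy simplification of the real bitslice logic) serializes a byte stream MSB-first, frames it with a bit-phase offset as a deserializer would, and realigns it with a 3-element shift register of frames. Filling the shift register from the wrong side makes the selected window mix bits from the wrong neighbouring frame. On the upper-byte wire of a counter, that value only changes every 256 samples, so the bug shows up as exactly two faulty samples at each wrap, while a random pattern would fail almost everywhere:

```python
def to_bits(byte):
    """8-bit value -> list of bits, MSB first."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def from_bits(bits):
    """List of bits, MSB first -> integer."""
    v = 0
    for b in bits:
        v = (v << 1) | b
    return v

def raw_frames(byte_stream, phase):
    """Serialize MSB-first, prepend `phase` idle bits to model the unknown
    bit alignment, then cut the stream into 8-bit deserializer frames."""
    bits = [0] * phase
    for b in byte_stream:
        bits += to_bits(b)
    return [bits[i:i + 8] for i in range(0, len(bits) - 7, 8)]

def align(frames, slip, wrong_side=False):
    """Bitslip via a 3-element shift register of frames.

    correct fill: oldest frame drops out at sr[0], new frame enters at sr[2]
    wrong fill:   new frame pushed in at sr[0] (the bug described above)
    """
    sr = [[0] * 8, [0] * 8, [0] * 8]
    out = []
    for f in frames:
        sr = [f, sr[0], sr[1]] if wrong_side else [sr[1], sr[2], f]
        window = sr[0] + sr[1] + sr[2]          # 24 bits, sr[0] first
        out.append(from_bits(window[8 + slip:16 + slip]))
    return out

# Demo: upper-byte wire of a 16-bit counter, phase offset of 3 bits
samples = list(range(4 * 256))
msb = [s >> 8 for s in samples]                 # changes every 256 samples
fr = raw_frames(msb, phase=3)

good = align(fr, slip=3)                        # matches msb (1-frame latency)
bad = align(fr, slip=3, wrong_side=True)

errors = [k for k in range(1, len(bad)) if bad[k] != msb[k - 1]]
# -> errors only at the 256-sample wraps: [257, 258, 513, 514, 769, 770]
```

Note that a bitslip of 0 masks the bug entirely (the window degenerates to a single frame regardless of fill direction), which would explain why only one of the two wires misbehaved.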
05-19-2018 09:52 AM
We have a similar requirement: data reception from a TI quad ADC, 16-bit (peak data rate: 900 Mbps/lane).
Device: ZU+ MPSoC
I have a few questions before I start the implementation.
To start with, I have:
1. An LVDS reference design for a 7 series FPGA using SERDES (planning to change the I/O macros for UltraScale)
2. The High Speed SelectIO Wizard
Can you please suggest which would be better in terms of:
1. Ease of implementation
2. Resource consumption
Also, could you explain the concepts of per-bit skew and bitslip?
Thanks a ton