03-17-2015 03:45 AM
I have a board with a Kintex-7 and AD9637 (data sheet attached), and need to read the ADC data into the FPGA.
The AD9637 uses DDR and one differential LVDS data lane per ADC.
For 75Msps, this results in a line rate of 900Mbit/second. According to DS182, even the HR pins of a -1 speed grade are able to deal with this data rate.
But UG471 implies that the ISERDESE2 element of 7-series FPGAs is not able to deal with 12-bit data. I say "implies" because UG471 says nothing about 12 bit; it just leaves a gap between the 10-bit and 14-bit widths.
Then I read XAPP524, and it sounds as if it is possible to read data from a 12bit ADC.
Table 1 in XAPP524 lists
- 12 bit resolution
- 80MHz sample rate
- 1-wire interface
as a valid combination (Comments = OK).
Is it possible to read data from a 12bit ADC with 1-wire interface with a Kintex-7?
I am looking forward to your answers.
03-19-2015 12:07 AM
03-19-2015 01:55 AM
I tried the XAPP524 example design with Vivado 2014.4.
Simulation looks good, but in practice it fails.
Is it possible to read data from a 12bit 75Msps ADC into a Kintex-7 using a 1-wire interface?
Yes or no?
03-23-2015 04:03 AM
anybody out there who is able to answer my question?
Is there anybody who has succeeded in reading data into a Kintex-7 from a serial 12 bit ADC at 50Msps?
03-27-2015 01:45 AM
It should be possible to achieve the max data rates when connecting to an external serial ADC.
What problems do you see?
Does it work at lower sample rate?
03-27-2015 03:22 AM
It is interesting that all answers related to this topic, including yours, contain a "should". It seems as if Xilinx never tested it. Or perhaps Xilinx has tested it, knows that it does not work, and wants to keep this a secret.
The system runs at fixed frequencies of 50MHz or 75MHz. No chance to try a lower sample rate.
The problem is that the inverter for the data clock (to sample the data on the falling edge) seems to disappear in the real implementation. In the simulation, everything is ok. But in reality, the data of the rising and falling edges is always the same.
Is it possible to read data from a 12bit 75Msps ADC into a Kintex-7 using a 1-wire DDR interface?
YES or NO?
03-31-2015 12:12 AM
that's what Xilinx support is famous for: you ask a simple "YES" or "NO" question, and the only answer you get is a "should".
No real answers, just miracle hints like the prophecy of Nostradamus.
That's Xilinx support.
I recognized a difference between XAPP524 and my implementation:
XAPP524 used ISE, and I use Vivado.
Is it possible to read data from a 12bit 75Msps ADC into a Kintex-7 using a 1-wire DDR interface?
YES or NO?
03-31-2015 11:22 AM - edited 03-31-2015 11:24 AM
I can answer, and the answer is YES.
Whether it works on your desk depends on your mileage and the effort you are ready to put in (measured in man-months).
We are running a design in a Xilinx FPGA where there are 40 LVDS lines coming in at 1.25Gbit/s, and 56 lines going out, also at 1.25Gbit/s. Dual-channel ADC-DAC with a sample rate of 2.4GSPS. It works. Getting multiple parallel serial streams working is harder than deserialization on one lane only.
If you have a single lane at 900MBit/s, you sure can get it working.
So the answer is YES.
But I have been doing electronics since 1979, so estimating how long it would take for me would not be fair...
Well, what is the problem? Do you see bad data, or what is wrong?
If you do not describe the problem you have, no one can help you.
04-02-2015 07:42 AM
thanks for your reply.
It's good to know that in principle reading data from a 12-Bit ADC (1-wire DDR) into a Kintex-7 will work.
On our board, we have 4 AD9637 ADCs connected to a XC7K70T-1FBG676C.
Each AD9637 contains 8 ADCs, and provides an LVDS data-clock, an LVDS frame-clock and eight LVDS data-lines.
Both the data-clock and the frame-clock are connected to xRCC pins.
The data-lines are all located in the same bank / clock region as the corresponding clocks.
The frame-clock frequency can be selected as 50MHz or 75MHz.
If we configure the ADCs to output only 10 bit data, we can use the ISERDES in DDR Networking mode, and everything works fine at 50MHz and 75MHz frame-clock frequency.
But for our application we need the full 12-bit resolution of the ADCs.
As the ISERDES does not support 12-bit, we implemented the input-structure described in XAPP524.
In XAPP524, two ISERDES in 6-bit configuration are used to read the 12-bit data. The second ISERDES is clocked with an inverted clock to read the data on the falling edge.
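As a rough sketch of that structure (this is my own reading of the description, not the actual XAPP524 source; all signal names are placeholders), two 6-bit SDR ISERDESE2 share the same data input, with the second one driven by the locally inverted bit clock so that it captures the falling-edge bits:

```verilog
// Sketch only: two 6-bit SDR ISERDESE2 capturing rising and falling
// edges of the same serial lane. Names are invented for illustration.
wire        serial_d;            // LVDS data lane after IBUFDS
wire        clk_bit, clk_div;    // bit clock and divided (word) clock
wire [5:0]  q_rise, q_fall;      // rising- and falling-edge bits

ISERDESE2 #(
    .DATA_RATE("SDR"), .DATA_WIDTH(6), .INTERFACE_TYPE("NETWORKING"),
    .IOBDELAY("NONE"), .NUM_CE(1), .SERDES_MODE("MASTER")
) iserdes_rise (
    .D(serial_d), .CLK(clk_bit), .CLKB(~clk_bit), .CLKDIV(clk_div),
    .CE1(1'b1), .RST(rst), .BITSLIP(1'b0),
    .Q1(q_rise[0]), .Q2(q_rise[1]), .Q3(q_rise[2]),
    .Q4(q_rise[3]), .Q5(q_rise[4]), .Q6(q_rise[5])
);

ISERDESE2 #(
    .DATA_RATE("SDR"), .DATA_WIDTH(6), .INTERFACE_TYPE("NETWORKING"),
    .IOBDELAY("NONE"), .NUM_CE(1), .SERDES_MODE("MASTER")
) iserdes_fall (
    .D(serial_d), .CLK(~clk_bit), .CLKB(clk_bit), .CLKDIV(~clk_div),  // local inversion
    .CE1(1'b1), .RST(rst), .BITSLIP(1'b0),
    .Q1(q_fall[0]), .Q2(q_fall[1]), .Q3(q_fall[2]),
    .Q4(q_fall[3]), .Q5(q_fall[4]), .Q6(q_fall[5])
);
```

Note that the local clock inversion on the second instance is exactly the part the rest of this thread is about: it shows up in simulation but apparently does not always survive implementation.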
During simulation, everything is ok and we see correct data. But when we load the "real" FPGA, we see corrupted data.
In reality, the data from the falling edge always equals the data from its preceding rising edge:
ADC-Data [11:0] = 1110 0001 0000
FPGA-Data[11:0] = 1111 0000 0000
We verified the ADC-data with a scope.
It seems as if the clock for the second ISERDES is not inverted in the real configuration data. We used the FPGA-Editor to verify the implemented design, and found that the clock for the second ISERDES is inverted there.
Any help from your side is appreciated.
Perhaps you can provide some details or answers to the following questions:
- why does the ISERDES not work in 12-bit configuration?
- are there any known pitfalls when using two 6-bit SERDES to read 12-bit DDR data into the FPGA?
- is XAPP524 also working with Vivado 2014.4?
05-25-2015 08:14 PM
Finally, I found someone who has seen my issue. Unfortunately, I'm seeing the same issue as you on an Artix-7 200T chip. I'm trying to interface with an AD9653, which has four 16-bit ADCs. It has an LVDS data clock, an LVDS frame clock, and two LVDS data lines for each ADC. I configured the ISERDES 8 wide, in DDR using NETWORKING mode. I've been using the different output modes on the chip to verify I've been reading the data correctly, and it appears the data on the falling edge of the data clock is the same as the previous rising edge.
I've been racking my brain all weekend trying to figure out what I did wrong, when I found XAPP524. It appears that they completely disregard DDR mode (which doesn't work) and instead put two ISERDES in SDR mode: one for the rising edge and one for the falling edge. I'm guessing this is their workaround? Two ISERDES at half the rate?
I'm going to try using two ISERDES blocks in SDR mode and see if this fixes the issue. Presumably, since the ISERDES works in SDR mode, this workaround will work. Of course, I won't know unless I try it. I hope there are enough ISERDES in the bank to support this; I don't know if there is an ISERDES block per pin or per LVDS pair. Thankfully, the bank I used is pretty bare on pin usage, so I think my application may be fine.
OP, have you been able to make any headway on your end? It appears the XAPP524 didn't work? Were you able to fix it another way?
06-03-2015 03:52 AM
Yes, you need to use two ISERDES in SDR mode. Don't ask why Xilinx is not able to implement the ISERDES in a way that also supports 12- and 16-bit DDR. You won't get an answer.
We have not succeeded in sampling our 12-bit ADC data yet. But hope dies last. :-)
We also tried Vivado 2015.1, but that did not help.
One outstanding test is to implement it with ISE14.7
Have you made any progress?
06-03-2015 11:13 AM
Using 2 SERDES in SDR mode connected to IBUF_OUT_DIFF just gives you the data twice; that is the reason for that structure in XAPP524.
ISERDES in DDR mode of course works; we have not seen problems.
06-03-2015 05:47 PM
I was able to solve my issue. I had never used an ISERDES block before, so it took me a while to figure out what the bitslip operation was doing. Unfortunately, my bit slippage was a factor of three bits off from the divided clock. In DDR mode, this would have been an easy fix by applying three high pulses to the bitslip input. However, since the ISERDES were in SDR mode, I had to hack together a fix to align the bits correctly.
Thankfully, I was running the ADC at half the highest qualified rate: the AD9653 at 62.5 MHz instead of 125 MHz. This made the data clock 500 MHz and the frame clock 62.5 MHz, so I didn't run into any issues with running the ISERDES at too high a clock rate. I don't know if the ISERDES would have worked running at 1 GHz in SDR mode.
trenz-al, the purpose of this post was to call out that the ISERDESE2 block doesn't work in DDR mode. Sure, it works in simulation, but in real life it doesn't latch the data correctly. The data bit latched on the falling edge of the clock is a duplicate of the data bit latched on the previous rising clock edge. Niels and I experienced the exact same issue. Thankfully, there was a workaround for me. Hopefully you were able to work around it, Niels.
Hope this is fixed in the Ultrascale chips...
06-05-2015 01:11 AM
We have many designs, both serial ADC and parallel ADC.
In one case, with an e2v ADC, there are a total of 40 LVDS lines at a data rate of 1Gbit/s per lane, DDR mode with a 500MHz clock.
We have used serial ADCs with clock rates above 500MHz as well.
We do not currently have a design that uses an 800MHz clock rate.
06-05-2015 01:42 PM
So in your case, you only have a data width of 2? Sure, that might work.
My case is the following: I have a data rate of 500 Mbit/s with a data clock at 250 MHz and a frame clock at 62.5 MHz. I configure the ISERDESE2 with the following configuration:
DATA_RATE => "DDR", -- Data-rate ("SDR" or "DDR")
DATA_WIDTH => 8, -- Parallel data width selection (2-8)
IOBDELAY => "NONE", -- Using IDELAY module
INTERFACE_TYPE => "NETWORKING", -- "NETWORKING", "NETWORKING_PIPELINED" or "RETIMED"
SERDES_MODE => "MASTER" -- "NONE", "MASTER" or "SLAVE"
Using the above configuration, the data on the falling edge is incorrect. The only thing I can think of that I did wrong is that I didn't hook up the data and frame clocks to dedicated clock pins. Instead of using a BUFIO for the data clock and a BUFR for the frame clock, I have a BUFG for both. The documentation says this is still a valid clocking method.
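For reference, the BUFIO/BUFR clocking that UG471 recommends for NETWORKING-mode capture looks roughly like the sketch below (names are placeholders, not from any posted design; the divide ratio assumes a 1:8 DDR configuration):

```verilog
// Sketch only: region-local clocking for ISERDESE2 capture.
wire dco;        // ADC data clock (DCO) after IBUFDS, on a clock-capable pin
wire clk_fast;   // bit clock, drives ISERDESE2 CLK via BUFIO
wire clk_div;    // divided clock, drives ISERDESE2 CLKDIV and fabric logic

IBUFDS ibuf_dco (.I(dco_p), .IB(dco_n), .O(dco));

// BUFIO drives only the I/O logic in its clock region
BUFIO bufio_i (.I(dco), .O(clk_fast));

// BUFR divides DCO down to the parallel-word rate, e.g. /4 for 1:8 DDR
BUFR #(.BUFR_DIVIDE("4")) bufr_i
    (.I(dco), .O(clk_div), .CE(1'b1), .CLR(1'b0));
```

Whether BUFG-only clocking reproduces the falling-edge duplication seen here is exactly the open question in this post.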
06-08-2015 02:04 AM
Good news: We got it!
Our use case is the following: we have octal ADCs with one (differential) data lane per ADC, plus one common frame-clock and one common data-clock signal for all 8 data lanes.
The ADC runs at 50MHz. This means a 50MHz frame-clock and (due to the 12-bit resolution) a 600MBit/s data rate with a 300MHz data-clock frequency.
But we always saw the same data on the rising and falling data-clock edges. To be more precise: the falling-edge data was always the rising-edge data.
According to my colleague, who is our VHDL expert, the trap we fell into was a failure in the Xilinx documentation.
By the way: to me it seems as if the XAPP524 documentation and source code do not match.
After some more testing I will come back and tell you the secret.
But it will take a few days.
06-25-2015 05:09 AM
Sorry for the delay, but here is the next part of the story:
It seems as if you have to do the following trick:
When you use two ISERDES in SDR mode to build a "12-bit ISERDES", you have to set
INTERFACE_TYPE = "NETWORKING", as required by UG471, Table 3-3.
In the "NETWORKING Interface Type" section of "ISERDESE2 Clocking Methods", you find the following sentence:
"This also prohibits using DYNCLKINVSEL and DYNCLKDIVINVSEL."
If we follow this instruction, we always get the rising-edge value for the falling edge too.
But when we set the attributes DYNCLKINVSEL = TRUE and DYNCLKDIVINVSEL = TRUE, doing something that Xilinx prohibits, we receive correct data.
Isn't that crazy?
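For anyone trying to reproduce this, a minimal sketch of the falling-edge instance with dynamic clock inversion enabled might look like the following. Note this assumes the ISERDESE2 primitive's actual attribute and port names (DYN_CLK_INV_EN / DYN_CLKDIV_INV_EN, selected via the DYNCLKSEL / DYNCLKDIVSEL ports); the signal names are invented here:

```verilog
// Sketch only: SDR 6-bit ISERDESE2 with dynamic clock inversion enabled,
// the "prohibited" setting described above. Names are placeholders.
ISERDESE2 #(
    .DATA_RATE         ("SDR"),
    .DATA_WIDTH        (6),
    .INTERFACE_TYPE    ("NETWORKING"),
    .DYN_CLK_INV_EN    ("TRUE"),   // enable dynamic CLK/CLKB inversion
    .DYN_CLKDIV_INV_EN ("TRUE"),   // enable dynamic CLKDIV inversion
    .IOBDELAY          ("NONE"),
    .NUM_CE            (1),
    .SERDES_MODE       ("MASTER")
) iserdes_fall (
    .D            (data_in),
    .CLK          (clk_fast),
    .CLKB         (~clk_fast),
    .CLKDIV       (clk_div),
    .CLKDIVP      (1'b0),
    .DYNCLKSEL    (1'b1),          // select the inverted fast clock
    .DYNCLKDIVSEL (1'b1),          // select the inverted divided clock
    .CE1          (1'b1),
    .RST          (reset),
    .BITSLIP      (1'b0),
    .Q1(q[0]), .Q2(q[1]), .Q3(q[2]), .Q4(q[3]), .Q5(q[4]), .Q6(q[5])
);
```

Treat this purely as an illustration of the workaround being described, not as a Xilinx-sanctioned configuration; UG471 explicitly prohibits it for NETWORKING mode.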
I am looking forward to your answers.
06-29-2015 05:15 AM
Did you try to use a locally inverted CLK?
It would be absorbed into the ISERDES :) and there would be no need to mess with the DYN stuff at all.
Xilinx tries to make it EASY for you... forgetting to tell some details.
06-29-2015 08:28 AM
That's what I am talking about.
We used the inverted clock (DYNCLKINVSEL and DYNCLKDIVINVSEL = FALSE) and saw the inverted clock in the simulation.
But in practice, the inversion was not done.
When we just set DYNCLKINVSEL and DYNCLKDIVINVSEL = TRUE (nothing else is changed), the clock inversion is done.
I am looking forward to your answer.
06-30-2015 01:27 AM
Then it must be a tool-version-related BUG, I guess.
XAPP524 uses ISERDES in SDR mode with local clk inversion.
If what you say is true, and it only works in simulation, then it means that XAPP524 has never been verified on silicon and cannot work on real silicon at all.
06-30-2015 01:35 AM
There is one difference between our implementation and XAPP524:
XAPP524 uses ISE, and we use Vivado (2014.4 and 2015.1).
but as you told me three weeks ago:
> xapp524 is outdated and out of sync.
> but it can serve as some starting point...
09-22-2015 11:59 AM
I'm just at the beginning stages of implementing XAPP524 and have loaded the design into ModelSim. The configurations are set for 2-wire, 16-bit mode, byte-wise, MSB first. I changed the input vector file to just send A5A5, but what comes out of DatOut is C3C3. Does this make sense? I would expect that after all the ISERDES blocks and the bit-swapping logic, I would get A5A5.
Stuck and confused. This is my first attempt at implementing a SERDES interface, and I'm nervous since the supplied simulation is giving results I don't understand.
12-03-2015 02:28 AM
Let's see if Niels reacts to this ping.
You say you use a 12-bit LVDS ADC in 1-wire mode.
How did you handle the instantiation of doublenibbledetect?
When I set the width to 12 bits and the wires to 1, I got the wrong size of data buses to that module.
I have posted a question to this forum regarding this issue:
Apparently one more person has seen this problem.
It would be very interesting to know how much your final result diverges from the original XAPP524.
12-03-2015 03:15 AM
We implemented a 12-bit LVDS ADC in 1-wire mode, but it works only up to approx. 60Msps.
Intensive research showed that above 60Msps, the output timing of the ADC changes slightly. A "feature" not documented in the ADC's data sheet (sad but true).
If you want to work with a 12-bit LVDS ADC in 1-wire mode, you have to do the following trick: first, operate the ISERDES in 1:6 DDR mode, and then combine the two 6-bit values into your 12-bit sample.
If you want to use any automatically generated cores from Xilinx, forget it. 12 bit is simply not supported by them.
If you want to make it on your own, forget about XAPP524. It was developed and tested with ISE, and the restrictions of Vivado make this approach useless.
At the moment, we use the 1:6 DDR ISERDES and combine the two 6-bit values into the 12-bit sample. If you want to run the ADC at high sample rates that result in data line rates above 600Mbit/s, you have to do the following in addition:
- clock distribution delay compensation
- frame-clock delay calibration
- dynamic data-line delay calibration to always center the sampling point into the middle of the data-valid-window.
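The combining step itself can be sketched like this (all names are invented here; the word order and the alignment of the high/low halves depend on how you bitslip against the frame clock):

```verilog
// Sketch only: pair up consecutive 6-bit words from a 1:6 DDR ISERDESE2
// into one 12-bit sample. clk_div2x runs at twice the frame rate, i.e.
// two 6-bit words per 12-bit sample. Names are placeholders.
reg  [5:0]  word_prev;
reg         word_toggle = 1'b0;   // high/low half marker, aligned via frame clock
reg  [11:0] sample;
reg         sample_valid;

always @(posedge clk_div2x) begin
    word_prev    <= iserdes_q;     // current 6-bit ISERDES output
    word_toggle  <= ~word_toggle;
    sample_valid <= word_toggle;
    if (word_toggle)
        sample <= {word_prev, iserdes_q};  // MSB half first (assumption)
end
```

The frame clock is what resolves the high/low ambiguity of word_toggle; without that alignment you get samples shifted by six bits.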
12-03-2015 05:00 PM - edited 12-03-2015 05:08 PM
XAPP524 is a dog's breakfast; you're better off not bothering with it, IMO. If you can possibly run in 2-lane mode, and can match your trace lengths to within about 2 cm, you should have no problems running at 60 MS/s with nothing more than the default SelectIO wizard-generated core and a simple bitslip FSM to align the frame clock.
One thing I have found, though, is that with certain parts -- specifically the LTC2173 (4x 80 MS/s, 16-bit serialization in 2 lane mode) -- the relationship between the frame and data clock changes when you reset the part through its SPI port as the data sheet recommends. That means that the bitslip state machine has to be reset, and probably the deserializer's IO_reset input as well to be safe. Here's what I ended up with:
//
// Bitslip state machine to synchronize with LTC2173 FCO signal
//
localparam LBS_RESET   = 3'd0;
localparam LBS_INIT    = 3'd1;
localparam LBS_STARTUP = 3'd2;
localparam LBS_POLL    = 3'd3;
localparam LBS_SLIP    = 3'd4;

reg [2:0] LBS_state = LBS_INIT;
reg [7:0] LBS_wait = 0;
reg [3:0] LBS_count = 4'b0000;
reg [0:0] LBS_reset_request = 0;
reg [0:0] LBS_reset_req0 = 0;
reg [0:0] LBS_reset_req1 = 0;
reg [0:0] LBS_reset_req2 = 0;
reg [0:0] LBS_reset_pend = 0;

always @(posedge LTC2173_CLK)   // 80 MHz (i.e., 320 MHz DCO divided by 4)
begin
    LBS_wait <= LBS_wait - 1;

    LBS_reset_req2 <= LBS_reset_req1;
    LBS_reset_req1 <= LBS_reset_req0;
    LBS_reset_req0 <= LBS_reset_request;

    if ((LBS_reset_req2 == 1'b0) && (LBS_reset_req1 == 1'b1)) begin
        LBS_reset_pend <= 1'b1;
    end

    case (LBS_state)
        LBS_RESET: begin
            LTC2173_reset  <= 1'b1;
            LBS_reset_pend <= 0;
            LBS_wait       <= 0;
            LBS_state      <= LBS_INIT;
        end

        LBS_INIT: begin
            if (LBS_wait == 1) begin
                LTC2173_reset <= 1'b0;   // IO_reset input to ISERDESE2
                LBS_state     <= LBS_STARTUP;
            end
        end

        LBS_STARTUP: begin
            if (LBS_wait == 1) begin
                LTC2173_bslip <= 1'b0;
                LBS_count     <= 4'b0000;
                LBS_state     <= LBS_POLL;
            end
        end

        LBS_POLL: begin
            if (LBS_reset_pend) begin
                LBS_state <= LBS_RESET;
            end
            else begin
                if (LTC_FCO != 8'hF0) begin
                    LTC2173_bslip <= 1'b1;
                    LBS_count     <= 4'b0000;
                    LBS_state     <= LBS_SLIP;
                end
            end
        end

        LBS_SLIP: begin
            LTC2173_bslip <= 1'b0;
            LBS_count     <= LBS_count + 4'b0001;
            if (LBS_count == 4'b1111) begin
                LBS_state <= LBS_POLL;
            end
        end

        default: begin
            LBS_state <= LBS_RESET;
        end
    endcase
end
After resetting the chip, or in any other situation where the clock phase relationships might change, the command handler needs to assert LBS_reset_request temporarily. Seems trivial and obvious enough, but I was getting some pretty confusing results until I realized it was necessary. It is all too easy to blame the serdes and its associated clock/delay logic when the real problem is elsewhere.