Visitor
1,581 Views
Registered: 06-08-2018

Does the v7-690t-2ffg1927 support 80 GTHs?

I recently started a project that uses 40 GTHs, and the 690T supports 80 GTHs. Below are my parameter settings.

RX: line rate 6.25 Gbps;

coding: 8B/10B;

internal data width: 32 bits;

PLL selection: Quad PLL;

GTH count: 32 GTHs.

 

TX: line rate 5.0 Gbps;

coding: 8B/10B;

internal data width: 32 bits;

PLL selection: Quad PLL;

GTH count: 40 GTHs.

But when I instantiated the GTH IP cores and implemented the project, the BUFG resources did not fit the device, and MAP failed with the error: too many comps of type "BUFG" found to fit this device.

The design summary says the project uses 46 BUFGs, but the device only has 32 BUFGs.

I checked gt_rxusrclk_source.v and found that the RX IP core uses 33 BUFGs and the TX IP core uses 11 BUFGs.

So I have a question: the 7 series documentation says the 690T has 80 GTHs. How do I use the GTH IP core correctly so that the BUFG resources fit my project?

I am very eager to find a way to solve this problem, and it is important for me to understand why it happens.

Thank you very much.

7 Replies
Guide
1,567 Views
Registered: 01-23-2009

There is no simple answer to this question...

 

Each individual IP uses some clocking resources - thus when you simply instantiate them, the clocking resources add up, and quickly exceed the capabilities of the device.

 

Some IPs allow for "common resources" to be "shared" - multiple instances of the IP can share some of the resources that can be done in common. These are often clocking resources; i.e.  when there is no need to have a different clock resource for each instance of the IP.

 

But ultimately, it is up to you to "design" the clocking strategies that enable the combinations you need - some are possible, others are not. For example, if your GTHs each individually use the recovered clock for the received data (RXUSRCLK), then there is one such clock domain (hence one BUFG) for each GTH - this quickly exhausts your number of BUFGs. However, if they are using the PLL clock (and doing "clock correction"), then you can use one BUFG for the received data for all the GTHs that are using the same clock frequency.
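As a rough sketch of that shared-BUFG case (my own illustration; the module and net names are made up, not from the IP wrapper):

// One BUFG for the received data of all GTHs on the same QPLL clock.
// Legal only because "clock correction" absorbs each channel's PPM offset.
module shared_rxusrclk (
    input  wire gt0_rxoutclk,    // RXOUTCLK of any one channel, QPLL-derived
    output wire rxusrclk_shared  // drives RXUSRCLK/RXUSRCLK2 of every such GTH
);
    BUFG rxusrclk_bufg_i (
        .I (gt0_rxoutclk),
        .O (rxusrclk_shared)
    );
endmodule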

 

Similarly for the TXUSRCLK. If each one is running at a different frequency from a different REFCLK, then they need a separate TXUSRCLK (or maybe not, depending on "clock correction"). If all GTHs are using the same REFCLK and need the same frequency, then they can use the same TXUSRCLK (derived from any one of them and distributed to all of them).

 

Furthermore, (I think) it is possible to clock the RXUSRCLK/TXUSRCLK from BUFH buffers (assuming you manage your clock regions properly).
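A sketch of that BUFH variant (assuming the GTHs it serves are placed in the same clock region; the net names are illustrative):

// One BUFH per clock region instead of a global BUFG.
BUFH usrclk_bufh_region0_i (
    .I (gt_txoutclk_region0),   // TXOUTCLK of a GTH placed in region 0
    .O (usrclk_region0)         // feeds the user clocks of the GTHs in region 0
);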

 

So there are solutions to using fewer clocking resources when using larger numbers of GTHs. If you are doing a lot of channel bonding, then you can definitely use all available GTHs without running out of clock resources. But you need to do the planning yourself. In general, just dropping down multiple instances of the same or different IPs will not get you near full utilization.

 

Avrum

Xilinx Employee
1,558 Views
Registered: 08-07-2007

hi @ningfen0916

 

You have to do something to save BUFGs.

 

For example, if the 40 transmitters in the same column share the same clock source oscillator, you can use a single BUFG to drive all the TXUSRCLK2/TXUSRCLKs.

 

If the remote partner TXs are all clocked by the same source oscillator (synchronous), all the receivers in the same column can share a single BUFG to drive the RXUSRCLK/RXUSRCLK2. The tool doesn't know this, so by default it assumes all receivers are asynchronous to each other and instantiates a BUFG for each receiver, which consumes a lot of BUFG resources.
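For illustration, both sharings might look like this (a sketch only, assuming one oscillator per side and synchronous remote TXs; net names are mine):

// TX side: all transmitters on the same oscillator share one user clock.
BUFG txusrclk_bufg_i (
    .I (gt0_txoutclk),
    .O (txusrclk_shared)   // drives TXUSRCLK/TXUSRCLK2 of all 40 transmitters
);

// RX side: valid only when the remote TXs are synchronous to each other.
BUFG rxusrclk_bufg_i (
    .I (gt0_rxoutclk),
    .O (rxusrclk_shared)   // drives RXUSRCLK/RXUSRCLK2 of the receivers in the column
);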

 

Thanks,

Boris

Visitor
1,518 Views
Registered: 06-08-2018

Hi Avrum,

 

You wrote: "However, if they are using the PLL clock (and doing 'clock correction'), then you can use one BUFG for the received data for all the GTHs that are using the same clock frequency."

 

I think your reply means that:

 

At the TX GTH IP interface, one oscillator comes into the FPGA, the GTH REFCLK selects the Quad PLL, and "clock correction" is enabled. The TX line rate is 6.25 Gbps, and 4 GTHs come out.

 

At the RX GTH IP interface, another oscillator comes into the FPGA and is configured the same as the TX IP core.

Thus the RX can use one BUFG to drive the 4 RXUSRCLKs.

 

i.e.

BUFG rxoutclk_bufg0_i (
    .I (gt0_rxoutclk_i),  // RXOUTCLK of channel 0
    .O (gt0_rxusrclk_i)   // shared user clock for all 4 channels
);


gt0_rxusrclk_i can then replace the previous per-channel clocks gt0_rxusrclk_i ~ gt3_rxusrclk_i, relying on "clock correction" to absorb the differences.
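If so, the replacement could be wired like this (my own sketch, assuming the three per-channel BUFGs are removed first so these nets have no other driver):

// Tie the other channels' user-clock nets to the one shared BUFG output.
assign gt1_rxusrclk_i = gt0_rxusrclk_i;
assign gt2_rxusrclk_i = gt0_rxusrclk_i;
assign gt3_rxusrclk_i = gt0_rxusrclk_i;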

 

So is it necessary to do "channel bonding" when I do "clock correction"?

 

If this works, it saves many BUFGs.

 

Looking forward to your reply, thank you.

 

[Attachment: BUFG.png]
Visitor
1,517 Views
Registered: 06-08-2018

Hi Boris,

 

You wrote: "If the remote partner TXs are all clocked by the same source oscillator (synchronous), all the receivers in the same column can share a single BUFG to drive the RXUSRCLK/RXUSRCLK2." What exactly do you mean?

 

I think your idea is: on the TX side, one oscillator drives the 4 GTHs in one Quad (Quad PLL selection), and the RX side does the same as the TX. Thus I can use one BUFG to drive the 4 RXUSRCLK/RXUSRCLK2 in one Quad, just like the TX IP core does.

 

If my understanding is right, is it necessary to do "channel bonding" when I do "clock correction"?

 

And do you think my idea is right? Looking forward to your reply. Thank you, Boris.

[Attachment: BUFG.png]
Visitor
1,428 Views
Registered: 06-08-2018

Hi,

My reply is below; I hope you can see it and give me your advice. Sorry to trouble you. Thank you.

Xilinx Employee
1,355 Views
Registered: 08-07-2007

hi @ningfen0916

 

When you use clock correction, you don't have to use channel bonding.

 

Clock correction is used to resolve the clock frequency offset between the recovered clock and RXUSRCLK/2.

 

Channel bonding is used to resolve the lane-to-lane skew.

 

In your case, if you can make sure the TXUSRCLK/2s are operating at exactly the same frequency, then you can share a common BUFG between them. The same applies to the RXUSRCLK/2s, and if the RXUSRCLK/2s are operating at the same frequency as the TXUSRCLK/2s (no PPM offset), they can share the BUFG with the TXUSRCLK/2s.

 

For example, if you have 4 SGMII IPs with clock correction inside the GT, you can share a single BUFG for the 4 TXUSRCLK/2s and the 4 RXUSRCLK/2s.
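A sketch of that SGMII case (assuming all four channels run from the same REFCLK; net names are illustrative, not from the SGMII core):

// One BUFG serves both directions of all four channels, because clock
// correction removes the PPM offset on each receive side.
BUFG usrclk2_bufg_i (
    .I (gt0_txoutclk),     // TXOUTCLK of any one channel
    .O (usrclk2_shared)    // drives the 4 TXUSRCLK2s and the 4 RXUSRCLK2s
);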

 

Thanks,

Boris

 

 
