
10G PCS/PMA Planahead constraints


Hi all,

 

This is essentially the same problem as my thread here, but it's as much an IP question as a timing one, so I'll try to be more specific from the IP core perspective in this thread.

 

I'm using Planahead 14.6 on Windows (64-bit). My design contains a 10Gig MAC, PCS/PMA core and transceivers, giving me a 10Gig Ethernet link.

 

My PCS/PMA core was generated from the v2.6 wizard. The design currently has a timing score of 0 and TRCE shows no problems with my constraints, but the clock interaction report does show some failing paths (side question: why is my timing score 0 when the min/max clock interaction report has failing paths?).

 

Using only the constraints found in the example design UCF files for the MAC and PCS/PMA cores, the link barely runs. I started converting the extra set of SDC constraints from the Vivado section of the PCS/PMA guide, which has certainly improved things: the link is more consistent but is still sending erroneous packets.

 

Does anyone have a set of constraints they have gotten to work for this core in Planahead?

 

Here are the original elastic buffer related constraints I found in the IP example design files:

 

NET "*elastic_buffer_i?can_insert_wra" TIG; # I have kept this as is, no sdc equivalent

NET "*elastic_buffer_i*rd_truegray<?>" MAXDELAY = 6.0 ns; # removed, see SDC example below..
NET "*wr_gray*<?>" MAXDELAY = 6.0 ns; # removed..
NET "*rd_lastgray*<?>" MAXDELAY = 6.0 ns; # removed..

 

Here is an example of a constraint I *think* I've successfully translated into my design from the Vivado SDC equivalent:

 

SDC: 

set_max_delay -from [get_cells * -hierarchical -filter {NAME =~ *ten_gig_eth_pcs_pma_block* && NAME =~ *rd_truegray_reg*}] -to [get_cells -hierarchical -filter {NAME =~ *ten_gig_eth_pcs_pma_block* && NAME =~ *rag_writesync0_reg*}] -datapath_only 5.800

 

My UCF equivalent:

 

INST "*ten_gig_eth_pcs_pma_inst*rd_truegray_?" TNM = FFS "elastic_buff_maxdelay1_frm";
INST "*ten_gig_eth_pcs_pma_inst*rag_writesync0_?" TNM = FFS "elastic_buff_maxdelay1_to";

 

TIMESPEC TS_elastic_buff_maxdelay1 = FROM "elastic_buff_maxdelay1_frm" TO "elastic_buff_maxdelay1_to" 5.800 ns DATAPATHONLY;
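
To sanity-check that the TNM patterns above actually pick up the intended registers, I can run a quick query in the Planahead Tcl console (same style as the get_pins query further down; the wildcards here are my own guesses at the hierarchy, not taken from the core's files):

foreach i [get_cells *ten_gig_eth_pcs_pma_inst*rd_truegray* -hierarchical] {puts "$i"}
foreach i [get_cells *ten_gig_eth_pcs_pma_inst*rag_writesync0* -hierarchical] {puts "$i"}

If either loop prints nothing, the corresponding TNM group is presumably empty and the TIMESPEC won't be constraining anything useful.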

 


 

The following is an example of an SDC constraint that I can see in the Vivado section and can find in Planahead using the Tcl console, but can't seem to express in the UCF:

 

SDC:

 

set_max_delay -from [get_cells -hierarchical -filter {NAME =~*ten_gig_eth_pcs_pma_block* && REF_NAME =~ RAMD32}] -to [get_pins -of_objects [get_cells -hierarchical -filter {NAME =~ *dp_ram_i*fd_i*}] -filter {NAME =~ *D}] -datapath_only 2.400

 

TCL Command and results on the implemented netlist:

 

foreach i [get_pins *ten_gig_eth_pcs_pma_inst*dp_ram_i*fd_i*D -hierarchical] {puts "$i"}

 

my_design_top/Ten_gig_pcs_pma/U0/G_IS_GTX.ten_gig_eth_pcs_pma_inst/ten_gig_eth_pcs_pma_inst/txratefifo_i/asynch_fifo_i/dp_ram_i/GLOOP[39].fd_i/D


my_design_top/Ten_gig_pcs_pma/U0/G_IS_GTX.ten_gig_eth_pcs_pma_inst/ten_gig_eth_pcs_pma_inst/rx_elastic_buffer_i/rx_elastic_buffer_i/asynch_fifo_i/dp_ram_i/GLOOP[18].fd_i/D


my_design_top/Ten_gig_pcs_pma/U0/G_IS_GTX.ten_gig_eth_pcs_pma_inst/ten_gig_eth_pcs_pma_inst/rx_elastic_buffer_i/rx_elastic_buffer_i/asynch_fifo_i/dp_ram_i/GLOOP[23].fd_i/D


my_design_top/Ten_gig_pcs_pma/U0/G_IS_GTX.ten_gig_eth_pcs_pma_inst/ten_gig_eth_pcs_pma_inst/txratefifo_i/asynch_fifo_i/dp_ram_i/GLOOP[8].fd_i/D


my_design_top/Ten_gig_pcs_pma/U0/G_IS_GTX.ten_gig_eth_pcs_pma_inst/ten_gig_eth_pcs_pma_inst/rx_elastic_buffer_i/rx_elastic_buffer_i/asynch_fifo_i/dp_ram_i/GLOOP[46].fd_i/D


my_design_top/Ten_gig_pcs_pma/U0/G_IS_GTX.ten_gig_eth_pcs_pma_inst/ten_gig_eth_pcs_pma_inst/rx_elastic_buffer_i/rx_elastic_buffer_i/asynch_fifo_i/dp_ram_i/GLOOP[51].fd_i/D


my_design_top/Ten_gig_pcs_pma/U0/G_IS_GTX.ten_gig_eth_pcs_pma_inst/ten_gig_eth_pcs_pma_inst/txratefifo_i/asynch_fifo_i/dp_ram_i/GLOOP[18].fd_i/D


my_design_top/Ten_gig_pcs_pma/U0/G_IS_GTX.ten_gig_eth_pcs_pma_inst/ten_gig_eth_pcs_pma_inst/txratefifo_i/asynch_fifo_i/dp_ram_i/GLOOP[23].fd_i/D

 

...etc.
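
As a quick sanity check on how many endpoint pins that pattern actually hits, the same query can simply be counted (plain Tcl, nothing core-specific):

llength [get_pins *ten_gig_eth_pcs_pma_inst*dp_ram_i*fd_i*D -hierarchical]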

 

My UCF interpretation that doesn't work:

 

 

PIN "*ten_gig_eth_pcs_pma_inst*dp_ram_i*fd_i*D" TNM = "elastic_buff_maxdelay4_to";

 

 

TIMESPEC TS_elastic_buff_maxdelay4 = FROM RAMS TO "elastic_buff_maxdelay4_to" 2.400 ns DATAPATHONLY;
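
Another form I'm tempted to try (just a sketch, not verified against this core): group the destination flip-flops with an INST-based TNM instead of attaching the TNM to the D pins, and keep the predefined RAMS group as the source. Constraining to the flip-flop group should cover the same RAM-to-register paths:

INST "*ten_gig_eth_pcs_pma_inst*dp_ram_i*fd_i*" TNM = FFS "elastic_buff_maxdelay4_to_ffs";
TIMESPEC TS_elastic_buff_maxdelay4_ffs = FROM RAMS TO "elastic_buff_maxdelay4_to_ffs" 2.400 ns DATAPATHONLY;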

 

Am I completely barking up the wrong tree? The fact that the network link has become more stable with each added constraint seems like a sign that I'm going in the right direction. ANY help would be much appreciated, thanks.

 

2 Replies
Re: 10G PCS/PMA Planahead constraints


Having been working on this, it seems the cores are so timing critical that any slight timing error elsewhere in the design can cause issues. Once a successful implementation has been found, the approach is to partition the design and use pblocks to preserve that successful build (a sketch of the pblock constraints is below).
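
For reference, a pblock in UCF terms is just an AREA_GROUP; the idea is something along these lines (the instance pattern and RANGE are placeholders for whatever region the successful run actually used):

INST "*ten_gig_eth_pcs_pma_inst*" AREA_GROUP = "pblock_fibre_if";
AREA_GROUP "pblock_fibre_if" RANGE = SLICE_X0Y0:SLICE_X49Y99;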

 

I seem to have partitioned the fibre interface in my design, but now Planahead gives me an ACCESS_VIOLATION error and crashes when I try to open my implemented design (64-bit Windows, 16GB RAM!)...

Re: 10G PCS/PMA Planahead constraints (Accepted Solution)


Going to go ahead and plant a solution here:

 

The 10G PCS/PMA core is timing critical. As such, create a wrapper for your 10Gig interface block and take care of any unused ports, constant drivers, etc. on the partition boundary.

 

Implement partitions and re-use successful fibre interface runs.

 

Here is a start:

 

http://www.xilinx.com/support/documentation/sw_manuals/xilinx13_3/Hierarchical_Design_Methodology_Guide.pdf
