Spartan 3A GCLK to Input Pin Latches Timing Constraint

Visitor

I have an external clock with a 10 ns period connected to one of the FPGA GCLK inputs. The rising edge of this clock coincides with the data transition of an external data bus that is also connected to the FPGA. How do I define a timing constraint that delays latching of the data bus input buffers long enough for the inputs to stabilize? I realize I could change the logic to clock on the negative edge, but I want to do this relative to the rising edge. I tried the following, but it made no difference:

 

OFFSET IN = 3 ns BEFORE "Gpmc_Clk" HIGH;

 

It seems there should be an easy way to add an internal async delay path (e.g. 3 gates at 1 ns per gate) between the output of the GCLK input pin buffer and the internal clock used for the data bus input pin latches.

 

Below are excerpts from the PAR and TWR.

(attachment: Constraints.gif)

Scholar

Synchronous design,

 

There is no precise delay element in the Spartan-3A. Instead, use the DCM to delay the clock (fixed phase shift). That is the correct and proper way to control the timing and guarantee the results.

 

What you describe is asynchronous design. You could add delay by hand, or through RTL using a wrapper (with KEEP and SAVE directives, as the tools will otherwise remove that logic), but the DCM is the supported approach.
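Something along these lines (a sketch only: it assumes UNISIM's DCM_SP primitive, the 10 ns clock from your question, and placeholder entity/signal names; remember a DCM needs a continuous input clock to lock):

library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity gpmc_clk_shift is
  port (
    gpmc_clk_pad : in  std_logic;   -- external GCLK pin
    clk_shifted  : out std_logic;   -- phase-shifted internal clock
    dcm_locked   : out std_logic
  );
end entity gpmc_clk_shift;

architecture rtl of gpmc_clk_shift is
  signal clk_in, clk0, clk0_bufg : std_logic;
begin
  -- bring the external clock onto the global clock network
  ibufg_i : IBUFG port map (I => gpmc_clk_pad, O => clk_in);

  -- DCM with a fixed phase shift; PHASE_SHIFT is in units of 1/256
  -- of the clock period, so 77 gives roughly 3 ns at a 10 ns period
  dcm_i : DCM_SP
    generic map (
      CLKIN_PERIOD       => 10.0,
      CLKOUT_PHASE_SHIFT => "FIXED",
      PHASE_SHIFT        => 77
    )
    port map (
      CLKIN    => clk_in,
      CLKFB    => clk0_bufg,   -- feedback closes the phase-alignment loop
      RST      => '0',
      DSSEN    => '0',
      PSEN     => '0',
      PSCLK    => '0',
      PSINCDEC => '0',
      CLK0     => clk0,
      LOCKED   => dcm_locked
    );

  bufg_i : BUFG port map (I => clk0, O => clk0_bufg);
  clk_shifted <= clk0_bufg;
end architecture rtl;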

Austin Lesea
Principal Engineer
Xilinx San Jose
Teacher

On top of what Austin has written, you should really understand what the OFFSET constraint is for.

 

You cannot use that constraint to force the tools to change the physical interface to meet your requirements. The OFFSET constraint should be used to reflect the reality of your physical interface. You do not say where the data that is synchronous to your input clock comes from, but that interface must be defined somewhere (or you will have to figure it out through experimentation).

 

For example, if the external data is valid 3 ns before the rising edge of the clock and retains its validity for 5 ns, you should use the OFFSET constraint thusly:

 

NET "example<*>" OFFSET = IN 3 ns VALID 5 ns BEFORE "clock" RISING;

 

Then the tools will work to place the capturing FFs (I assume they are FFs, even though you wrote latches) in the correct place to meet this constraint in terms of setup and hold. If they cannot meet it, your timing will fail and you will either have to re-check the reality of your interface or employ clock phase shifting as Austin suggested.

 

In summary, confirm the ACTUAL physical interface requirements and THEN use the OFFSET constraint with exactly those numbers. OFFSET does not, will not and cannot introduce delay elements to help you meet timing.

 

----------
"That which we must learn to do, we learn by doing." - Aristotle
Visitor

Thanks for the clarification.

 

So for an async design, how would we define a timing constraint to check that the Gpmc_Oen '0' shown in Example #1 below is clocked into the FF at Edge B and never at Edge A? Gpmc_Clk is an external clock input connected to one of the FPGA GCLK pins. The external Gpmc_Oen input is synchronous with Gpmc_Clk. What timing constraint would also cover Example #2?

(attachment: GPMC Timing.png)

Teacher

You shouldn't really try to do asynchronous design in an FPGA - that was Austin's point. Is the incoming clock used "as is", or do you pass it through some clock conditioning circuit (e.g. PLL, DCM, MMCM)? However, using the information you have provided:

 

In example #1, with a clock period of 10 ns, the time gap between the valid edge of Gpmc_Oen and the capturing edge of the clock is simply one period minus the given delay: 10 ns - 2.7 ns = 7.3 ns. Therefore:

 

NET "Gpmc_Oen" OFFSET IN 7.3 ns BEFORE "Gpmc_Clk" RISING;

 

Note that for the hold-time calculation you should also provide a VALID value (for how long is the Gpmc_Oen signal valid?).
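For example, if Gpmc_Oen were valid from 7.3 ns before until 2.7 ns after the capturing edge (illustrative numbers), the VALID window would be 10 ns:

NET "Gpmc_Oen" OFFSET = IN 7.3 ns VALID 10 ns BEFORE "Gpmc_Clk" RISING;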

 

In example #2 you will have to assume the worst case, that is, the shortest time between the valid signal and the capturing edge of the clock. In this case you state it is 3 ns, therefore:

 

NET "Gpmc_Oen" OFFSET IN 3 ns BEFORE "Gpmc_Clk" RISING;

 

If the tools struggle with the timing, you could also try to ensure that the capturing flip-flop is placed inside the IOB, as this provides the shortest path between the pin and any FF in the device.
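If you go that route, the placement can be requested explicitly in the UCF, for example (the instance name is a placeholder for your capture register):

# placeholder instance name; ask the tools to pack the capture FF into the IOB
INST "gpmc_oen_capture_reg" IOB = TRUE;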

----------
"That which we must learn to do, we learn by doing." - Aristotle
Visitor

We're using the clock "as is"; it's not processed through a DCM or PLL. Unfortunately, Gpmc_Clk is not continuous. It only bursts during transfers, so I don't think a PLL can lock fast enough, which is why we're doing this async.

 

The circuit is now running and it meets all timing constraints. There is no corruption.

 

Here's a final question. How close to Edge A can Gpmc_Oen go low and still be guaranteed NOT to clock '0' into the FF at Edge A? In other words, how much time must pass after Edge A before changes in Gpmc_Oen are ignored by the FF on that edge? Would this be covered by the timing constraint you provided? There must be deterministic behavior of the FPGA for a particular pair of input pins (one used as GCLK, the other as data) related to this value. Would this be checked by the VALID constraint you mentioned? I read somewhere that omitting VALID essentially means zero hold, but I'm not sure a zero-hold check covers the sequence we want to avoid.

Teacher

You are probably correct that a clock management tile couldn't lock fast enough on a non-continuous clock to suit your application (there are other methods to get around this but they are not relevant if your application is now working).

 

To answer your query about the capture clock edges, the short answer is I have no idea. Now you are looking at extreme details of the physical aspect of the FPGA construction, from delay times through the pin to the relevant FF, whether the FF is located in the IOB or not, where the clock tree routing is in relation to the FF, etc.

 

I'd be sure that all this information is contained in the datasheet, and your timing report could give you a pretty good idea too, if you are willing to sift through the numbers. FPGA Editor is probably the tool to use to trace the absolute path length and delay of any given signal, but I'm not sure I see the point, really, if you pass timing and your system is fully functional. Academic, as they say.

 

If your output enable is captured one clock earlier than anticipated, what difference does it make? I imagine the RTL uses the value of the signal on the clock edge to output your data, e.g.:

 

process(clock)
begin
  if rising_edge(clock) then
    if (oen = '0') then
      dataout <= new_data;
    else
      dataout <= old_data;  -- this else branch could be left out as it is implied anyway
    end if;
  end if;
end process;

 

and the receiving end is controlling the assertion and negation of the oen signal, capturing the data your FPGA presents when it is ready. Is your system so dependent on exact clock edges that this will matter?

 

Anyway, good that you have solved your initial problem.

 

----------
"That which we must learn to do, we learn by doing." - Aristotle
Visitor

The reason it must capture on Edge B and not Edge A is that it is the first event of a multi-cycle (20+ clock) burst transfer. Gpmc_Oen is just one of 19 signals (16 data, 3 control) involved in this source-synchronous (Gpmc_Clk) interface. If the FPGA detects Gpmc_Oen early, we skip a step in the remaining sequence or create an inconsistency with the other signals or the host. We must ensure the FPGA clocks all 19 signals on the same edge, starting at Edge B. All FFs must only clock data that is present between 3 ns before and up to 2.7 ns after the edge, and never outside this range. The timing constraint I entered in the UCF is slightly tighter than this, and even though the design runs, I'm not sure it checks what I want.
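For reference, here is a sketch of the kind of group constraint I have in mind (the data bus net and group names are illustrative; 3 ns setup plus 2.7 ns hold gives a 5.7 ns valid window):

# illustrative net/group names; collect the 19 source-synchronous inputs into one group
NET "Gpmc_Dat<*>" TNM = "GPMC_INPUTS";
NET "Gpmc_Oen" TNM = "GPMC_INPUTS";
NET "Gpmc_Wen" TNM = "GPMC_INPUTS";
# data must be stable from 3 ns before to 2.7 ns after the rising edge
TIMEGRP "GPMC_INPUTS" OFFSET = IN 3 ns VALID 5.7 ns BEFORE "Gpmc_Clk" RISING;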

 

Attached is the ISE project. The code is very basic: it simply echoes back what it was sent. If someone could look at the reports and explain whether the conditions I described are guaranteed, it would be greatly appreciated.

 

This code also contains a DCM. This is not used for the GPMC interface.

Teacher

Right, OK. You had mentioned the data was in burst format - sorry that I forgot.

 

In the timing report (either the .twr or the .twx file, depending on your presentation preference), near the end you can see the "Datasheet report". This contains some useful numbers for your external interfaces. Read it (together with UG612 if you have to) and understand what these numbers mean.

 

Of special interest, in this case, is the table relating to the OFFSET = IN constraint. There you can see the setup/hold times for each signal. Note this line here:

 

Gpmc_Oen | 2.494(R)| 0.048(R)| 0.006| 2.452| -1.223|

 

My understanding of this is that the best the tools can do is 2.494 ns of setup for the oen signal (given that you constrained it to 2.5 ns). That number, then, is the minimum time oen can go low before the capturing clock edge, which in terms of a clock cycle is very far away from the previous clock edge. I'd be pretty confident that you will never capture oen on the "previous" clock edge.

 

What does the datasheet for the other end of the interface state? Is there ever a physical chance that oen will appear before the previous clock edge? Frankly (and I understand your thoroughness) I think you will be absolutely fine with the performance you have.

----------
"That which we must learn to do, we learn by doing." - Aristotle
Visitor

The TWR info you mentioned is very helpful. I noticed it shows Gpmc_Oen is skewed relative to the other signals (e.g. its hold to clock edge is shifted by 1 ns).

 

(attachment: TWR Setup_Hold.png)

I swapped pin locations with Gpmc_Wen, but Gpmc_Oen still had the skew. The pinout report shows that IOB Delay for Gpmc_Oen was set to "BOTH" and Gpmc_Wen was set to "IFD".

(attachment: IOB Delay.png)

 

I did a document search but can't find what this means or how it's edited. It implies a delay of unknown duration is being added in the IOB. Is this related to the IFD_DELAY_VALUE described in the Spartan-3A data sheet? Is the timing report showing the effect of this delay?

 

Another possibility is that the IOB Delay is being added when ISE implements the design, in order to meet the user-defined timing constraints. Any clarification on these questions would be greatly appreciated.

Teacher

If you swapped the pins for the read and write enables, wouldn't this affect your interface behaviour? I assume you did this as a test.

 

I further assume that you have not set any delay values in your constraints file and are, probably rightly, wondering where the difference comes from. It could well be related to the IOB Delay indicated in the pinout report. If you are not setting this value in your UCF, then I can only assume that something in your strategy is affecting it, or that the tools are interpreting the other constraints in a way that results in this setting.

 

Before I dive into it I should say that, beyond the quest for knowledge, if your interface is working I wouldn't go chasing ghosts too much. You'll likely confuse yourself (if that hasn't happened already!) or end up making some unnecessary change that ultimately breaks your working design.

 

Anyway, UG625 (Constraints Guide) has information pertaining to IOB delay. Of most relevance is:

 

Constraint Values

• NONE
  Sets the delay to OFF for both the IBUF and IFD paths.
  – The following statement sets the delay to OFF for the IBUF and IFD paths:
    INST "xyzzy" IOBDELAY=NONE;
  – For Spartan®-3 devices, the default is not set to NONE. This allows the device to achieve a zero hold time.

• BOTH
  Sets the delay to ON for both the IBUF and IFD paths.

• IBUF
  – Sets the delay to OFF for any register inside the I/O component.
  – Sets the delay to ON for the registers outside of the component if the input buffer drives a register D pin outside of the I/O component.

• IFD
  – Sets the delay to ON for any register inside the I/O component.
  – Sets the delay to OFF for the registers outside the component if a register occupies the input side of the I/O component, regardless of whether the register has the IOB=TRUE constraint.

 

 

You could try setting the value to NONE for both oen and wen and see if that makes a difference. However, as the two signals seem to have I/O registers, I would have thought that BOTH and IFD have the same effect (as I interpret the constraints guide).
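A sketch of what that could look like, following the INST form from the guide (the instance names would have to match your design):

# disable the input delay for both enable signals
INST "Gpmc_Oen" IOBDELAY = NONE;
INST "Gpmc_Wen" IOBDELAY = NONE;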

 

You could also experiment with the IFD_DELAY_VALUE itself and see what sort of effect that has. This (very old) thread has some details on that -> here. The constraints guide also has further info. I'd check your strategy to see whether this setting is at AUTO, in which case the tools are doing what they do best on their own.

----------
"That which we must learn to do, we learn by doing." - Aristotle
Visitor (accepted solution)

I did the pin swap as a test just to see if the IOB Delay setting followed the pin or the signal (it followed the signal). Also, no delays are specified in the constraints file.

 

You've been very helpful.

 

What prompted this extended series of follow-up questions was Austin Lesea's earlier statement regarding asynchronous designs: that a DCM is the only way to guarantee a design meets timing. Unfortunately, there are many real-world interfaces that are considered synchronous even though the "clock" isn't continuous. According to Austin, the FPGA treats this as async, even though the chip has several GCLK input pins along with FFs in the IOBs! Those GCLK pins appear to support arbitrary periods down to 0 Hz.

 

I love it when designs actually work, but it also makes me nervous when the term "async" is applied to applications such as mine. For a newbie, this is very hard to understand. The tools can obviously determine all timing paths from my input signals and the source-supplied GCLK. They also seem smart enough to tweak the IOB Delay setting.

 

To take it a step further, shouldn't the tools also be able to use IOB Delay as a valid way to correct for input skew? Input skew is sometimes forced on a designer by the routing of PCB traces. Being able to define such timing constraints would be an elegant way for the tools to use IOB Delay to correct input skew relative to a GCLK.
