Visitor
Registered: 05-05-2011

BUFPLL warning at >400 MHz clk freq

Hello-

 

Another FPGA newbie here.  I'm using an SP605 board, and when I use the Clocking Wizard to multiply the on-board 200 MHz clock to frequencies >400 MHz, I get the warning that this multiplied clock must drive a BUFPLL.  However, if I just ignore the warning and run the hardware with a 600 MHz clk, everything works OK (correct output is relayed via serial port...and monitoring some signals with an o-scope gives correct results as well). 

 

I'm sure there are dangers in doing this, despite the initially promising results.  What are they?  Any workarounds?

 

More specifically, I know the global clk routing network for Spartan 6 is only spec'd to 400 MHz, and the higher clock speeds are for IO resources via the BUFPLL.  Does this mean my synthesized 600 MHz clock isn't being routed via the dedicated global clock network? 

 

Thanks for any help!

6 Replies
Scholar
Registered: 02-27-2008

Re: BUFPLL warning at >400 MHz clk freq

c,

 

As with any silicon device, it probably will work at higher speeds, lower voltages, and higher temperatures than specified:  that is nothing new at all.  We design in margin in order to meet all requirements at the worst case process corner, voltage, and temperature.  If you are not at the worst case process (because the chip you have is at a typical process corner), or the operating temperature is less than the worst case, or the supply voltage is right at the specification, you may expect the device to perform better than specified.

 

The specifications tell you where we tested and characterized the device, and what Xilinx guarantees ("Recommended Operating Specifications").

 

Going faster, etc., is something you may choose to do (for fun or profit) but it is entirely up to you to solve any problems (expect no support or help from Xilinx).

 

As long as you remain within (or right at) the "Absolute Maximum" specifications, no damage to the device will result (that is still guaranteed by the data sheet, and something we stand behind).

 

Austin Lesea
Principal Engineer
Xilinx San Jose
Visitor
Registered: 05-05-2011

Re: BUFPLL warning at >400 MHz clk freq

Thanks for your quick response.  Your answer makes sense.

 

Still, one specific question.  If one does 'accept the risk' and uses a clock frequency on a Spartan 6 device >400 MHz for the FPGA fabric (not IO), will it use the global clk network if connected to FPGA fabric logic?  Is it still trying to drive a BUFG, or is it like you selected 'no buffer'?

 

I tried to use 'resource estimation' in the Clocking Wizard to figure this out, but I'm still unsure.  I've attached two pictures to show my confusion.  If you initially request a 400 MHz clk driving a BUFG, the resource estimator shows 2 BUFGs.  If you then change '400' to '600', the resource estimator shows 1 BUFG, even though the 'drives' column has a grayed-out 'no buffer' (bottom pic).  If you instead initially request a 400 MHz clk driving 'no buffer', the resource estimator shows no BUFGs, including when you later change 400 to 600 MHz (top pic).  So the estimated resources for a 600 MHz signal with a grayed-out 'no buffer' in the 'drives' column differ depending on what you had typed in before requesting 600 MHz.

 

Is this just some artifact of the GUI?  Or will the buffer/global routing of the 600 MHz signal actually be different in the two cases?

 

Thanks!

clkwizard.jpg
Instructor
Registered: 07-21-2009

overclocking your FPGA

Still, one specific question.  If one does 'accept the risk' and uses a clock frequency on a Spartan 6 device >400 MHz for the FPGA fabric (not IO), will it use the global clk network if connected to FPGA fabric logic?  Is it still trying to drive a BUFG, or is it like you selected 'no buffer'?

I'm 99% sure XST will infer a BUFG for driving the global clock distribution network.  You can verify this in your design -- the BUFG usage will show up in the post-synthesis design summaries.

 

Worst case is that you manually instantiate a BUFG primitive -- a simple fix -- but I doubt you will need to resort to that.
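
For reference, the manual fix would look something like the sketch below (signal names are hypothetical, not taken from your design): an explicit BUFG primitive between the clocking core's unbuffered output and the clock net that feeds your fabric logic.

  // Minimal sketch of a manual BUFG instantiation (hypothetical names):
  // put the unbuffered clock from the clocking core onto the global
  // clock network before it drives fabric flip-flops.
  wire clk_600_unbuf;   // unbuffered output of the clocking core
  wire clk_600;         // buffered clock used by the fabric logic

  BUFG clk_fabric_bufg
   (.O   (clk_600),
    .I   (clk_600_unbuf));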

 

The datasheet specs cover all production devices over the full (commercial) die temperature range and voltage range.  You cannot control the manufacturing process variations, but you can control voltage and operating die temperature.  You will extend the performance range of your FPGA by operating in the optimal 'corner' of the voltage and temperature ranges -- the high end of the supply voltage and the low end of die temperature.  This is completely analogous to 'overclocking' CPUs and memory devices; the same factors and considerations are at work.

 

-- Bob Elkind

SIGNATURE:
README for newbies is here: http://forums.xilinx.com/t5/New-Users-Forum/README-first-Help-for-new-users/td-p/219369

Summary:
1. Read the manual or user guide. Have you read the manual? Can you find the manual?
2. Search the forums (and search the web) for similar topics.
3. Do not post the same question on multiple forums.
4. Do not post a new topic or question on someone else's thread, start a new thread!
5. Students: Copying code is not the same as learning to design.
6 "It does not work" is not a question which can be answered. Provide useful details (with webpage, datasheet links, please).
7. You are not charged extra fees for comments in your code.
8. I am not paid for forum posts. If I write a good post, then I have been good for nothing.
Visitor
Registered: 05-05-2011

Re: overclocking your FPGA

As a followup, I checked the synthesis report before and after manually instantiating a BUFG primitive, and it did indeed show BUFG utilization increase by 1 with the manual instantiation.  I haven't exhaustively checked how this affects hardware performance, but it certainly hasn't made performance worse, so I guess I'll leave it in there.

 

Thanks for the advice Bob.

 

Instructor
Registered: 07-21-2009

surprise! XST defies expectations

Corran,

 

Thanks for the followup post.  As you might have guessed, I am quite surprised by your results (XST did NOT automatically infer a BUFG for your fabric clock, defying my expectation).

 

I wonder if there is a deterministic reason why XST did not infer a BUFG for your clock.  If you kept your Clocking Wizard-generated "core" for the clock multiplication, perhaps that had some sway over XST.

 

My understanding of the Spartan-6 PLL, which I largely attribute to Jonathan Heslip, is that the PLL's performance or behaviour is not affected by any frequency specification attribute applied or attached to the PLL instantiation.  Any frequency or clock period attribute is purely for the benefit of logic simulation.
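
To make that concrete, the kind of frequency attribute in question is the CLKIN_PERIOD parameter on the PLL primitive.  A bare PLL_BASE instantiation, sketched here with invented names and with values picked for a hypothetical 200 MHz to 600 MHz case (this is not the Wizard's actual output), looks roughly like:

  // Rough sketch only (invented names/values).  The multiply/divide
  // parameters set the output frequency; CLKIN_PERIOD is the
  // frequency-related attribute discussed above.
  wire clkfb, clk_600_unbuf;

  PLL_BASE #(
    .CLKIN_PERIOD   (5.000),  // describes the 200 MHz input clock
    .DIVCLK_DIVIDE  (1),
    .CLKFBOUT_MULT  (3),      // VCO = 200 MHz * 3 = 600 MHz
    .CLKOUT0_DIVIDE (1)       // CLKOUT0 = VCO / 1 = 600 MHz
  ) pll_600_sketch
   (.CLKIN    (clk_200_in),
    .CLKFBIN  (clkfb),
    .RST      (1'b0),
    .CLKOUT0  (clk_600_unbuf),
    .CLKFBOUT (clkfb),
    .LOCKED   ());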

 

Based on your reported results, I wonder if frequency/period attributes are also used by XST to alter synthesis.  It would be 'interesting' to see whether XST's results differ when clock frequency-related attributes are stripped from (or drastically altered within) the Wizard-generated PLL instantiation.

 

At the very least, you have shaken a previously held 'understanding' of how XST behaves, and I'm less likely to take XST 'for granted' in similar circumstances.

 

Good luck with your design.

 

- Bob Elkind

Observer
Registered: 03-17-2011

Re: surprise! XST defies expectations

 

I can confirm the behaviour as described by corran42.  I also noticed that as soon as you exceed the limit, the Clocking Wizard will not use an output buffer.  That limit is 400 MHz for speed grade -3, and 375 MHz for speed grade -2.

In and of itself that's not a problem, since the wizard is nice enough to grey out the buffer and give a fat red warning that you should be using a BUFPLL.
As for why XST doesn't infer a BUFG: that is not so strange.  The CORE Generator .xco file has, for example, these settings:

CSET clkout1_drives=BUFG
CSET clkout2_drives=No_buffer

In this case clkout1 is a 200 MHz clock and clkout2 a 400 MHz clock, on a speed grade -2 Spartan-6.  So for the first clock it will instantiate a BUFG, and for the 400 MHz one it will refuse to add a buffer; bring your own, so to speak.
When checking the generated HDL, it contains something like this:

  // Output buffering
  //-----------------------------------

  BUFG clkout1_buf
   (.O   (clk_out_200_0),
    .I   (clkout0));

  assign clk_out_400_0_nobuf = clkout1;

So a BUFG primitive for the 200 MHz clock, and no buffer for the 400 MHz. You can add a BUFG in your own module... Of course the Component Switching Limit Check fails for this BUFG, but we knew that, right?
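
Putting it together, the "bring your own" buffering might look like the sketch below (the wrapper module and port names are made up for illustration; an actual Wizard-generated core will use different ones):

  // Sketch only: instantiate the generated clocking core and add the
  // missing buffer for the unbuffered 400 MHz output yourself.
  wire clk_out_200_0;
  wire clk_out_400_0_nobuf;
  wire clk_400;

  clk_wiz_wrapper clocking_inst        // hypothetical wrapper/port names
   (.CLK_IN1  (sys_clk_200),
    .CLK_OUT1 (clk_out_200_0),         // buffered inside the core by a BUFG
    .CLK_OUT2 (clk_out_400_0_nobuf));  // handed out with no buffer

  // User-added BUFG: this is the one that fails the
  // Component Switching Limit Check, as noted above.
  BUFG clk_400_bufg
   (.O   (clk_400),
    .I   (clk_out_400_0_nobuf));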

Related to this: if I can make sure the voltage is "not too low" and the temperature stays below, say, 50 degrees C, is it reasonable to attempt 400 MHz on a speed grade -2 Spartan-6 for a small region?  A small region meaning, let's say, at most 10% of the die, with only a single level of logic directly followed by a fabric flip-flop.

This is not for production, purely prototyping.  However, if someone in the know can say that this is futile, that would be good to know.  My guess is that it will most likely work just fine, but I have no direct experience with running a Spartan-6 outside of its frequency spec (by ~10% in this case).

Thanks!