anding
Adventurer
6,078 Views
Registered: 04-04-2010

Simulation of the Divider Core

Jump to solution

The datasheet for the Divider Core explains that it can divide in 1 clock cycle, yet in simulation it seems to take 15 or 16 clock cycles to complete the division!  Please see the attached image showing how I set up the divider core.  I then run the testbench below and obtain the simulation result attached in the next post.

 

Why is this happening?  Is there a better way to simulate the Divider?

 

Many thanks indeed....

 

[Code]

LIBRARY ieee;
USE ieee.std_logic_1164.ALL;

ENTITY Testbench_Divider IS
END Testbench_Divider;

ARCHITECTURE behavior OF Testbench_Divider IS

  component Divider
    port (
      clk        : IN  std_logic;
      rfd        : OUT std_logic;
      dividend   : IN  std_logic_VECTOR(1 downto 0);
      divisor    : IN  std_logic_VECTOR(15 downto 0);
      quotient   : OUT std_logic_VECTOR(1 downto 0);
      fractional : OUT std_logic_VECTOR(15 downto 0));
  end component;

  signal clk        : std_logic;
  signal rfd        : std_logic;
  signal dividend   : std_logic_VECTOR(1 downto 0);
  signal divisor    : std_logic_VECTOR(15 downto 0);
  signal quotient   : std_logic_VECTOR(1 downto 0);
  signal fractional : std_logic_VECTOR(15 downto 0);

  constant clk_period : time := 10 ns;

BEGIN

  uut : Divider
    port map (
      clk        => clk,
      rfd        => rfd,
      dividend   => dividend,
      divisor    => divisor,
      quotient   => quotient,
      fractional => fractional);

  clk_process : process
  begin
    clk <= '0';
    wait for clk_period/2;
    clk <= '1';
    wait for clk_period/2;
  end process;

  tb : PROCESS
  BEGIN
    dividend <= "01";
    divisor  <= X"0001";
    wait;
  END PROCESS tb;

END;

[/code]

Divider.png
1 Solution

Accepted Solutions
gszakacs
Instructor
7,365 Views
Registered: 08-14-2007

I think you need to re-read the core datasheet.  "Clocks per division" is NOT the latency through the divider.  It indicates the number of clock cycles before you can start a new division.  So if this is 1, you can change your inputs on each clock cycle, and the output will provide the results on successive clock cycles AFTER the latency, which shows up as 22 clock cycles in your screenshot.

Second, you need to take the reset timing into account when you generate your testbench logic.  For most simulations there is an implicit 100 ns "GSR" pulse that holds any instantiated primitives or other Xilinx library elements in their INIT state.  So you can't really count the clock cycles of latency unless the stimulus changes some time after 100 ns from the start of simulation.  For fully behavioral simulations (and this will depend on your CoreGen project settings when you build the core) you may not see the effect of GSR, but it's generally a good idea to add this initial delay to the testbench so you don't get surprises when you run into instantiated primitives, or try to do a post-translate simulation.
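As one possible sketch (not the only way to do it), the stimulus process could hold the inputs steady until safely past the 100 ns GSR window and then present a new operand pair on successive rising clock edges, which also makes the 1-cycle throughput visible once the initial latency has elapsed:

```vhdl
-- Sketch only: stimulus that waits out the ~100 ns GSR pulse before
-- driving the divider, then changes the operands on each rising edge.
-- Operand values here are arbitrary examples.
tb : PROCESS
BEGIN
  dividend <= (others => '0');
  divisor  <= X"0001";
  wait for 120 ns;               -- comfortably past the 100 ns GSR window
  wait until rising_edge(clk);
  dividend <= "01";
  divisor  <= X"0001";
  wait until rising_edge(clk);   -- new inputs on the very next cycle,
  dividend <= "10";              -- since "clocks per division" = 1
  divisor  <= X"0002";
  wait;
END PROCESS tb;
```

With this, the first quotient appears only after the core's pipeline latency, but subsequent results then emerge one per clock.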

 

-- Gabor



4 Replies
anding
Adventurer
6,077 Views
Registered: 04-04-2010

Here is the simulation result:

simulation.png
anding
Adventurer
6,075 Views
Registered: 04-04-2010

ISE and ISim 12.3 nt64 running on Windows 7 64-bit


anding
Adventurer
6,038 Views
Registered: 04-04-2010

Hi Gabor,

 

Thank you very much indeed.  I learned that "throughput" and "latency" are different things.  It wasn't obvious until I really thought about it, but it is all there in the datasheet as you say.  Thank you also for the tip about the 100 ns GSR pulse; leaving that out could be another source of unexpected problems!
