Regular Visitor
aliasnikhil
Posts: 22
Registered: ‎06-06-2012

Drive FPGA inputs using testbench OR Chipscope

Hi All,

 

I have a design created using System Generator. I need to provide test vectors to it, either from my testbench or using ChipScope.

 

In Simulink, I used to provide my test vectors from the MATLAB workspace. When I generate HDL code, System Generator lets me create a testbench with test vectors taken from the MATLAB workspace, so I can simulate it.
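
For illustration, here is a minimal MATLAB sketch of how such workspace test vectors are typically shaped for a Simulink "From Workspace" block feeding a Gateway In; the variable names, the sample period, and the stimulus are placeholders, not taken from the thread:

% Test vectors prepared in the MATLAB workspace for a "From Workspace" block.
% Names, the 10 ns sample period, and the sine stimulus are placeholders.
Ts   = 10e-9;                         % Simulink sample period (placeholder)
N    = 1024;                          % number of test vectors
t    = (0:N-1)' * Ts;                 % time column
data = round(127 * sin(2*pi*1e6*t));  % example integer-valued stimulus
stimulus = [t, data];                 % [time, value] matrix for "From Workspace"
% In the model, set the From Workspace block's Data field to 'stimulus' and
% let the Gateway In block quantize it to the design's fixed-point type.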

 

Now, I need to program my FPGA board using a JTAG USB cable and apply test vectors from MATLAB or from the testbench that System Generator created. In Simulink, I instantiated the ChipScope block to monitor the outputs, but I also need to apply test vectors as inputs. How do I do that?

 

Thanks!

-Nik

Expert Contributor
eilert
Posts: 2,426
Registered: ‎08-14-2007

Re: Drive FPGA inputs using testbench OR Chipscope

Hi Nik,

What you are looking for is called HW co-simulation.

There's a special chapter in the sysgen documentation about this topic.

 

Some boards are already supported as HW co-sim targets for SysGen; others need to be specified by the user.

Here's a thread leading to extra help for the second case:

http://forums.xilinx.com/t5/Xilinx-Boards-and-Kits/ML605-Led-Testing-by-Hardware-Co-Simulation/m-p/186906#M5125

 

Have a nice simulation

  Eilert

 

Regular Visitor
aliasnikhil
Posts: 22
Registered: ‎06-06-2012

Re: Drive FPGA inputs using testbench OR Chipscope

Hi Eilert,

 

Thanks for the reply! However, I have the following concerns:

 

1) Running a JTAG co-sim would be slow, since I want the design to run in excess of 100 MHz and, if I am not wrong, JTAG does not support such high data transfer speeds.

 

2) However, the Ethernet port does support speeds up to 1 Gbps, and the ML605 board does support Ethernet point-to-point HW co-sim. So I could run my design at high speed and also give test vectors to it from Simulink. Also, I read somewhere that we cannot use HW co-sim and ChipScope together because they both use JTAG. But if I am using Ethernet for HW co-sim, could I also use ChipScope over JTAG?

 

3) But we can only select certain preset clock speeds like 200 MHz, 66.6 MHz, etc. from the System Generator token. What if I wanted to run it even faster, for example at 300 MHz? How do I set such a custom clock speed?

 

4) Also, while running the co-simulation, I would expect Simulink and the FPGA to run together at the same clock speed, let's say 200 MHz. So can I just enter this as the 'Simulink clock period' in the System Generator token and expect them to run together? I need this because the Simulink/MATLAB workspace provides the input test vectors and they need to update on every clock cycle. Is this possible?

 

The link you sent isn't needed right now, but I will surely need it in the future; thanks for saving me some time there!

 

It would be great to get some help with the above doubts so I can carry on with my simulations!

 

Thanks!

-Nik

Expert Contributor
eilert
Posts: 2,426
Registered: ‎08-14-2007

Re: Drive FPGA inputs using testbench OR Chipscope

Hi Nik,

if you want your design to run at full speed and maybe even drive physical I/Os, you can use the "free running" mode of the HW co-sim feature. But this brings some limitations too. As you figured out, the synchronization between the PC (Simulink) and the FPGA is the problem, and the chosen interface might create a bottleneck. You might use some BRAMs for buffering some I/O data, so that in the end you have a similar situation as with ChipScope, only that you don't have to leave the SysGen environment and everything might be a little more convenient.

Using Ethernet seems a good idea at first glance, but there's a pitfall. Even if you have a board that is a supported HW co-sim target over an Ethernet connection, you probably don't have the full tri-mode Ethernet MAC license, so this interface is limited to 7 hours of runtime by the eval license conditions. Furthermore, the Ethernet connection doesn't work with the "free running" mode.

It's sad that the things we can imagine are not yet implemented in a useful way in SysGen. There are other solutions that might come closer to your intention, but you can expect them to be very expensive, and whether they offer MATLAB/Simulink integration is yet another question. Most of these systems come from the ASIC prototyping and verification scene. (Search for "RocketDrive" to get an idea, even if this specific product and company are not available any more.)

About your 4th question: is there any interface capable of providing 200 MSPS continuously with an as yet unknown word width? Even a 10 Gb serial interface without handshaking overhead would be limited to 50 bits per sample, and you hardly find any such interface on a PC. Maybe some N-lane PCIe interface is performant enough, but then you still need to implement the PCIe interface in your FPGA first, and how do you tell MATLAB to use it...? The ML605 board and FPGA might be well equipped for such a scenario, yet the software isn't.

I don't know what you are designing, but if you are just verifying your design, it might be OK to let everything run slower and use the powerful software environment. Otherwise you have to write a driver for your OS and an application that does the I/O, but that's a hard thing to do, since you have to achieve data rates that exceed those of graphics cards.

There might be another approach if you work on finite amounts of data (e.g. for image processing). In that case you can make your design access the DDR memory of your board. Then you use the JTAG or Ethernet interface to load some data into the DDR RAM and set a start trigger for your design. Since no I/O is necessary until the data has been processed, your algorithm should run at full speed and write the result back to the DDR RAM. After Simulink receives a Done signal, it can read out the result for verification. This approach also needs some skill and working time, but can be done with the existing environment.

Things that are simple when done locally (that means inside the FPGA) become a pain when you want to get in touch with the world outside the FPGA.

Have a nice simulation

  Eilert
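
For reference, a quick back-of-the-envelope check of the bandwidth argument above, in plain MATLAB; the 200 MSPS and 10 Gb/s figures are the ones from the post, the rest is illustrative:

% How many bits per sample can a link sustain if samples must be delivered
% continuously at 200 MSPS? (No handshaking or protocol overhead assumed.)
link_bps        = 10e9;                       % 10 Gb/s serial link
sample_rate     = 200e6;                      % 200 MSPS
bits_per_sample = link_bps / sample_rate      % = 50 bits per sample
gige_bits       = 1e9 / sample_rate           % gigabit Ethernet: only 5 bits per sample
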
Regular Visitor
aliasnikhil
Posts: 22
Registered: ‎06-06-2012

Re: Drive FPGA inputs using testbench OR Chipscope

Hi Eilert,

 

Thanks for all the info! Yes, even by intuition it seemed clear that the interface wouldn't be fast enough.

 

I just want to verify the design. So, as you and many others have repeatedly said, I can run it at a slower speed. But here is another question: if a design runs at a slower speed, that doesn't mean it would also run at higher speeds (due to setup or hold time violations). Correct me if I am wrong.

 

My simulation wouldn't take more than 7 hours, so that doesn't bother me. And PCIe crossed my mind, but it's complicated enough for me to just ignore it.

 

I do not need I/O capability anytime soon; I just want to verify it at full speed. BUT I need to supply data from software. So I have two approaches now: use BRAMs to store data (just like ChipScope does) or use the DDR memory.

 

1) BRAM: ChipScope helps to output data, and I have tried that in the SysGen environment. How can I use it to send data as well? VIO, or something else? I don't see VIO in SysGen.

 

2) DDR RAM: How can I do this? I don't know of any way to access this memory.

 

Also, I need HW co-sim only because I have the input data in my MATLAB environment. I do not actually need to co-simulate, literally. ChipScope could have worked fine as well if I had known how to send input data from software to the FPGA. Run-time transfer wouldn't have been possible in that case, but I could at least have buffered data in BRAM as in point 1 above.

 

Any pointers/links to the above methods would be really helpful, Eilert. I really need them.

 

 

Thanks a lot!

-Nik

 

 

P.S.: Just FYI, I am creating a digital calibration scheme. I create a LUT from the input data and then use that LUT to calibrate the data that comes in later. Pretty simple! But speed is a concern. We have data coming in from a 1 GHz source, and FPGAs usually don't work that fast. So I store that data in software like MATLAB and then carry on from there.

Expert Contributor
eilert
Posts: 2,426
Registered: ‎08-14-2007

Re: Drive FPGA inputs using testbench OR Chipscope

Hi Nik,

thanks for the info about the design background. It helps a lot in understanding the problems you are facing.

 

It seems like some points are unclear about how HW co-sim works and how speed is affected.

Actually, with SysGen you should always create a fully synchronous design.

That's a basic requirement for HW co-sim to work at all.

SysGen adds some extra hardware to your design which handles the I/O and (very important) the synchronization between your design and the MATLAB/Simulink environment.

So how's that done?

Well, basically by adding a global clock enable to your design to keep it on hold while MATLAB/Simulink is busy with other stuff, slow as it is compared to the FPGA.

 

Aside from physical limitations, what difference does it make for an algorithm implemented in a fully synchronous design whether it runs at 1 Hz or 100 GHz? Actually none at all, as long as the inputs are provided at the matching clock cycles.

Just think of it as running a simulation with clock cycles numbered from 1 to N. There is no time unit involved. That's actually why event-based software simulation of digital circuits works too. You add in the timescale later, and it makes no difference to the behavior of your circuit; only your interpretation of it changes.

 

Now back into the physical world.

After synthesis your circuit has a maximum possible operating frequency, which means any clock frequency below that limit is OK.

Setup/hold violations, which can cause metastability effects, then only happen if your external inputs are not aligned with the clock. So for a full design with a PCB and external circuitry connected to the FPGA, you have to make sure that input signals don't change at active clock edges. This can become difficult for high-speed designs.

 

So for the pure purpose of simulating your design without physical I/Os, HW co-sim is as good as a software simulation of the netlist. Actually, it only gives a gain in simulation speed for very large designs.

 

With ChipScope you are limited to a fairly small number of test vectors (or a repetitive sequence), and this also limits the number of events that can be recorded, because both applications require BRAMs.

ChipScope will be useful when you check the behavior after implementing your design and running it with real input data from outside the FPGA.

 

For your approach of using a LUT for "calibrating" input data, there's something I don't understand yet.

A LUT is like a ROM, so your data must be able to address some LUT content to create some different output (the calibrated data?). 

Now you are saying that your data comes in at 1 GSPS. So, knowing that an FPGA can't work at these data rates straight away, how are you planning to implement your calibration LUT in the end?

I just wonder about the hassle of trying to do tests at some 100 MSPS when your final data rate is in the GSPS region.

 

Have a nice simulation

  Eilert

 

 

Regular Visitor
aliasnikhil
Posts: 22
Registered: ‎06-06-2012

Re: Drive FPGA inputs using testbench OR Chipscope

Hi Eilert,

 

I am trying to calibrate an ADC that can run at any sampling rate; we are more concerned with higher rates like 1 GHz. Yes, you are right: why go through the hassle when I know it wouldn't run at such high speeds? Well, if they made FPGAs that ran at 1 GHz, I would use them right away. There are two parts to my design. The first part creates the LUT (implemented as a RAM, not a ROM) by operating on the input data for some time. After that is done, I use the LUT to "calibrate back" the data that comes in for the rest of the time.
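
To make the two-phase idea concrete, here is a minimal MATLAB sketch of a LUT-based calibration of this kind; the 8-bit code width, the ramp reference, and the averaging rule are placeholder assumptions, not the actual algorithm from the thread:

% Phase 1: build the calibration LUT while a known reference is applied.
% Phase 2: use the LUT to "calibrate back" the data that comes in afterwards.
nbits    = 8;
codes    = 2^nbits;                                   % LUT depth (placeholder)
ideal    = linspace(0, codes-1, 2^14)';               % known reference ramp
measured = min(max(round(ideal + randn(size(ideal))), 0), codes-1);  % ADC codes with errors

% Phase 1: for every measured code, remember the average ideal value seen.
lut = (0:codes-1)';                      % default entry: pass the code through
for k = 0:codes-1
    sel = (measured == k);
    if any(sel)
        lut(k+1) = round(mean(ideal(sel)));           % correction entry for code k
    end
end

% Phase 2: calibrate later samples by addressing the LUT with the raw code.
raw_later  = randi([0 codes-1], 1000, 1);             % stand-in for later ADC data
calibrated = lut(raw_later + 1);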

 

Now, since I know the FPGA doesn't work that fast, it wouldn't make sense to make the ADC run slower; that defeats the purpose of our research. Instead, we save the output data of the fast ADC to a file (at 1 GSPS) and do post-processing (create the LUT and use it for look-up). I can do this in MATLAB in a matter of seconds, but we wanted to do it on real hardware. Hence the hassle of implementing it on an FPGA.

 

My design isn't that big either: not a great deal of adders, and no multipliers at all. I am not trying to save simulation time, just trying to find a way to provide test vectors to my FPGA from the MATLAB workspace. After I do this, I intend to use physical I/O pins (I haven't thought that through yet!) to do this without any interaction from Simulink, probably via a logic analyzer.

 

Okay, so ChipScope cannot give a lot of non-repetitive input vectors to my FPGA (non-repetitive being the key word), right? So running my FPGA at the oscillator clock while also having a continuous stream of input data that can keep up is a challenge.

 

I have three options at hand to overcome this input test vector problem:

 

1) I did some reading and figured out that "shared memories" might be of help to me. How about that? But I might have to interrupt my design many times in between for the data to get buffered in the memory. I read about this frame-based HW co-sim here: XAPP1031

 

Unfortunately, if I may say so, my design works at the speed of the input data, meaning it doesn't need any extra latency to process it; it does real-time processing.

 

2) Use the external DDR memory, but this seems complicated and I might have to leave the Simulink environment to do it. To make matters worse, I may not understand the SysGen-generated code, and editing it to accommodate the DDR interface could be a pain. DDR2_Tutorial

 

3) Could I program a ROM in SysGen itself to store the test vectors at compile time and then use it to drive my design inputs? (A rough sketch of what I mean is below.)
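
As an illustration of option 3, here is a short sketch of how the workspace side could look; it assumes the Xilinx ROM block's Depth and Initial value vector parameters can reference workspace variables, and the variable names and stimulus are placeholders:

rom_depth    = 1024;
test_vectors = round(127 * sin(2*pi*(0:rom_depth-1)'/64));   % example stimulus values
% In the model (not shown here), the idea would be:
%   ROM block:     Depth = rom_depth, Initial value vector = test_vectors
%   Counter block: free-running, counting 0 .. rom_depth-1, driving the ROM address
%   ROM output:    wired to the design input in place of a Gateway In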

 

What do you think? Which one suits this scenario best?

 

Thanks!

-Nik

 

 

P.S.: If I were to use physical I/Os, how many can I access on the ML605 board that could be driven from an external source?