03-19-2013 07:39 PM
I have a design created using System Generator. I need to provide test vectors to it from either my testbench or using Chipscope.
In Simulink, I used to supply my test vectors from the MATLAB workspace. When I generate HDL code, System Generator lets me create a testbench with test vectors taken from the MATLAB workspace, so I can simulate it.
Now, I need to program my FPGA board over a JTAG USB cable and apply test vectors from MATLAB or from the testbench that System Generator created. In Simulink, I instantiated the ChipScope block to monitor the outputs, but I also need to apply test vectors at the inputs. How do I do that?
03-20-2013 01:22 AM
What you are looking for is called HW co-simulation.
There's a dedicated chapter in the SysGen documentation on this topic.
Some boards are already supported as HW co-sim targets for SysGen; others need to be specified by the user.
Here's a thread leading to extra help for the second case:
Have a nice simulation
03-20-2013 10:05 AM
Thanks for the reply! However, I have the following concerns:
1) Running a JTAG co-sim would be slow, since I want the design to run in excess of 100 MHz and, if I am not wrong, JTAG does not support such high data-transfer speeds.
2) However, the Ethernet port does support speeds up to 1 Gbps, and the ML605 board does support Ethernet point-to-point HW co-sim. So now I can run my design at high speeds and also feed it test vectors from Simulink. Also, I read somewhere that we cannot use HW co-sim and ChipScope together because they both use JTAG. But if I am using Ethernet for HW co-sim, could I also use ChipScope over JTAG?
3) But we can only select some preset clock frequencies, like 200 MHz or 66.6 MHz, from the System Generator token. What if I want to run even faster, for example at 300 MHz? How do I set a custom clock frequency?
4) Also, while running the co-simulation, I would expect Simulink and the FPGA to run together at the same clock speed, let's say 200 MHz. So, can I just set this as the 'Simulink clock period' in the System Generator token and expect them to run in lockstep? I need this because the Simulink/MATLAB workspace is providing the input test vectors, and they need to update on every clock cycle. Is this possible?
The link you sent isn't needed right now, but I will surely need it in the future, so thanks for saving me some time there!
It would be great to get help with the doubts above so I can carry on with my simulations!
03-21-2013 06:08 AM
03-21-2013 10:40 AM
Thanks for all the info! Yes, intuition alone already suggested the interface wouldn't be fast enough.
I just want to verify the design. So, as you and many others have repeatedly said, I can run it at a slower speed. But here is another question: just because a design runs at a slower speed doesn't mean it will also run at higher speeds (due to setup or hold time violations). Correct me if I am wrong.
My simulation wouldn't take more than 7 hours, so that doesn't bother me. And PCI crossed my mind, but it's complicated enough for me to just ignore it.
I don't need I/O capability anytime soon; I just want to verify the design at full speed. BUT I need to supply data from software. So I have two approaches now: use BRAMs to store the data (just like ChipScope does) or use DDR memory.
1) BRAM: ChipScope helps to output data, and I have tried that in the SysGen environment. How can I use it to send data as well? VIO? Or anything else? I don't see VIO in SysGen.
2) DDR RAM: How can I do this? I don't know of any way to access this memory.
Also, I need HW co-sim only because I have the input data in my MATLAB environment. I don't, literally, need to co-simulate. ChipScope could have worked fine as well if I had known how to send input data from software to the FPGA. Run-time transfer wouldn't have been possible in that case, but at least I could buffer into BRAM as in point 1 above.
Any pointers/ links to the above methods would be really helpful, Eilert. I really need them.
Thanks a lot!
P.S: Just FYI, I am creating a digital calibration scheme. So, I create a LUT from the input data and then use that LUT to calibrate the data that comes in later. Pretty simple! But speed is a concern. We had data coming in from a 1 GHz source, and FPGAs usually don't work that fast. So I store that data in software like MATLAB and then carry on from there.
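The two-phase scheme described above (build a LUT from the input data, then use it to calibrate later data) can be sketched in a few lines. This is an illustrative Python sketch only; the function names, the 4/16-code range, and the averaging correction rule are assumptions, not the poster's actual design:

```python
def build_lut(training, reference, n_codes=16):
    """Phase one: for each raw ADC code seen in the training data,
    store the average of the matching reference values."""
    sums = [0.0] * n_codes
    counts = [0] * n_codes
    for raw, ref in zip(training, reference):
        sums[raw] += ref
        counts[raw] += 1
    # Codes never seen in training fall back to the identity mapping.
    return [sums[c] / counts[c] if counts[c] else float(c)
            for c in range(n_codes)]

def calibrate(samples, lut):
    """Phase two: a pure table look-up per incoming sample."""
    return [lut[s] for s in samples]
```

Once the LUT is built, phase two is only a memory read per sample, which is why the speed of the look-up memory, not the arithmetic, sets the pace.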
03-22-2013 12:59 AM
Thanks for the info about the design background. It helps a lot in understanding the problems you are facing.
It seems some points are unclear about how HW co-sim works and how speed is affected.
Actually, with SysGen you should always create a fully synchronous design.
That's a basic requirement for HW co-sim to work at all.
SysGen adds some extra hardware to your design that handles the I/O and (very importantly) the synchronisation between your design and the MATLAB/Simulink environment.
So how's that done?
Well, basically by adding a global clock enable to your design to keep it on hold while MATLAB/Simulink is busy with other stuff, slow as it is compared to the FPGA.
Aside from physical limitations, what difference does it make to an algorithm implemented in a fully synchronous design whether it runs at 1 Hz or 100 GHz? Actually none at all, as long as the inputs are provided at the matching clock cycles.
Just think of running a simulation with clock cycles numbered from 1 to N. There is no time unit involved. That is actually why event-based software simulation of digital circuits works, too: you add the timescale later, and it makes no difference to the behaviour of your circuit; only your interpretation of it changes.
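The clock-enable idea above can be sketched as a tiny cycle-based model (illustrative Python only; the two-stage pipeline and all names are made up, not SysGen internals). The registers advance and the next sample is consumed only on cycles where the global enable is high, so inserting stall cycles changes nothing about the results:

```python
def run(samples, enables):
    """Feed `samples` through a two-stage register pipeline.
    The pipeline advances, and the next sample is consumed,
    only on cycles where the global clock enable is high."""
    it = iter(samples)
    stage1 = stage2 = 0
    captured = []
    for ce in enables:
        if ce:
            stage1, stage2 = next(it), stage1
            captured.append(stage2)  # result visible on enabled cycles
    return captured
```

Running the same sample stream with and without stall cycles interleaved produces the identical output sequence; only the wall-clock time differs, which is exactly why HW co-sim can hold the FPGA while Simulink catches up.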
Now back into the physical world.
After synthesis your circuit has a maximum possible operating frequency, which means any clock frequency below that limit is OK.
Setup/hold violations, which can cause metastability effects, then only happen if your external inputs are not aligned with the clock. So for a full design, with a PCB and external circuitry connected to the FPGA, you have to make sure that input signals don't change at active clock edges. This can become difficult for high-speed designs.
So for the pure purpose of simulating your design without physical I/Os, HW co-sim is as good as a software simulation of the netlist. It only gives a gain in simulation speed for very large designs.
With ChipScope you are limited to a very low number of test vectors (or a repetitive sequence), and this also limits the number of events that can be recorded, because both applications require BRAMs.
ChipScope will be useful when you check the behaviour after implementing your design and running it with real input data from outside the FPGA.
For your approach of using a LUT for "calibrating" input data, there's something I don't understand yet.
A LUT is like a ROM, so your data must address some LUT content to produce a different output (the calibrated data?).
Now you are saying that your data comes in at 1 GSPS. Knowing that an FPGA can't work at these data rates straight away, how are you planning to implement your calibration LUT in the end?
I just wonder about the hassle of doing tests at some 100 MSPS when your final data rate is in the GSPS region.
Have a nice simulation
03-22-2013 01:49 AM
I am trying to calibrate an ADC that can run at any sampling rate; we are mostly concerned with higher rates like 1 GHz. Yes, you are right: why go through the hassle when I know it won't run at such high speeds? Well, if they made FPGAs that ran at 1 GHz, I would use them right away. There are two parts to my design. The first part creates the LUT (implemented as a RAM, not a ROM) by operating on the input data for some time. After that is done, I use the LUT to "calibrate back" the data that comes in for the rest of the time.
Now, since I know the FPGA doesn't work that fast, it wouldn't make sense to make the ADC run slower; that defeats the purpose of our research. Instead, we save the output data of the fast ADC to a file (at 1 GSPS) and do post-processing (create the LUT and use it for look-up). I can do this in MATLAB in a matter of seconds, but we wanted to do it on real hardware. Hence the hassle of implementing it on an FPGA.
My design isn't that big either. Not a great deal of adders and no multipliers at all. I am not trying to save simulation time but just trying to find a way to provide test vectors to my FPGA from the MATLAB workspace. After I do this, I intend to have physical I/O pins (haven't thought it through right now!) that would do this without any interaction from Simulink, probably via a logic analyzer.
Okay, so ChipScope cannot supply a large number of non-repetitive input vectors to my FPGA (non-repetitive being the key word), right? So running my FPGA at the oscillator clock while keeping up a continuous stream of input data is a challenge.
I have 3 options at my hand to overcome this input test vector problem:
1) I did some reading and figured out that "shared memories" might be what helps me. How about that? But I might have to interrupt my design many times in between for the data to be buffered in the memory. I read about this frame-based HW co-sim here: XAPP1031
Unfortunately, if I may say so, my design works at the speed of the input data: it doesn't need any latent time to process it; it does real-time processing.
2) Use the external DDR memory, but this seems complicated, and I might have to leave the Simulink environment to do it. To make matters worse, I may not understand the SysGen-generated code, and editing it to accommodate the DDR interface could be a pain. DDR2_Tutorial
3) Could I program a ROM in SysGen itself to store the test vectors at compile time and then use it to drive my design's inputs?
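For option 3, one common route is to bake the workspace vectors into a memory initialization file at build time. As a hedged sketch (the function name and the 8-bit word width are assumptions), this renders a sample list in the Xilinx .coe format, which block-memory ROM initialization accepts:

```python
def coe_text(vectors, radix=16):
    """Render integer sample words as a Xilinx .coe ROM
    initialization file (here truncated to 8-bit hex words)."""
    words = ["%02X" % (v & 0xFF) for v in vectors]
    return ("memory_initialization_radix=%d;\n" % radix
            + "memory_initialization_vector=\n"
            + ",\n".join(words) + ";\n")
```

A counter then sweeps the ROM addresses to replay the vectors at full clock rate, with no link to the host needed during the run.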
What do you think? Which one suits this scenario best?
P.S: If I were to use physical I/Os, how many pins on the ML605 board can I access so they can be driven from an external source?