03-09-2018 08:55 AM
I've run into a small FPGA team who are using cocotb. It uses Python for testbenches instead of (System)Verilog. I can't find much on it except a dated blog or two from the developers promoting it.
If anyone has info or experience with it please share.
03-09-2018 03:27 PM
Not heard of it - but it looks similar to MyHDL and VUnit (although MyHDL is sold as an entire development option, with auto-generation of HDL from Python).
These have been around a few years and haven't really seen much traction, outside a few engineers and the odd company/team. If it works for them, great. More testing, and better testing, is always a good thing.
I think the main problem is the fact that they are open source. They may be amazing, but the level of support is just not there like you get for VHDL and UVM. I'm sure some Python devs will shout about how easy and amazing it is, and how we're all stupid for not using it. But everyone seems fine without it?
They are up against the fact that it adds a layer of abstraction, and most FPGA engineers don't trust extra abstraction. And without proper vendors pushing it, it really won't get very far. Then as the popularity wanes, people will be confused about how to use these old tools that no-one else seems to use any more.
I'll wait until something like this is pushed and supported by training providers and tool vendors.
PS. This is my opinion. I have seen respected colleagues use these tools to great effect, but nothing has yet convinced me I need these in my life.
03-10-2018 01:13 PM
Cocotb can neither be compared to MyHDL nor to VUnit.
MyHDL is a hardware description language built on top of Python.
VUnit is a unit testing framework implementing an xUnit framework for VHDL (and Verilog and SystemVerilog). It uses Python to orchestrate simulators.
Cocotb is a co-simulation framework that uses a native binary library, written in C and C++, to attach a Python interpreter to a simulator's API. This is either VPI (Verilog Procedural Interface) for GHDL, VHPI (VHDL Procedural Interface) for Riviera-PRO / Active-HDL, or FLI (Foreign Language Interface) for ModelSim/QuestaSim. These libraries allow deep interaction with the simulation kernel to create and control events and signal values.
The Vivado Simulator does not support such an interface and thus cannot be used with cocotb. The newest xSim will add a co-simulation interface, but it does not adhere to the language standards (VHPI, VPI) or the de facto industry standard (FLI) for simulator interfacing.
I personally liked the idea of cocotb, but I also consider the framework dead. Several attempts by me and others to preserve its momentum have been rejected. The developers got new jobs and no longer work on cocotb, nor have they assigned new developers to the project. See this discussion: https://github.com/potentialventures/cocotb/issues/513
The PoC-Library has experimented with Cocotb and implemented several advanced testbenches in Python. See this advanced cache testbench: https://github.com/VLSI-EDA/PoC/blob/master/tb/cache/cache_par_cocotb.py?ts=2
The free VHDL simulator GHDL is currently adding native Python support. This adds a native Python API, which offers new possibilities in cross-language verification. Let's see what the future brings...
03-11-2018 12:04 AM
Cocotb can neither be compared to MyHDL nor to VUnit.
Maybe not directly, because they have different targets, but they were all designed with the main goal of improving testing, with the massive array of Python libraries available for generating test vectors being cited by all of them.
Don't get me wrong, I am totally in favor of anything that improves testing. And VUnit is something that has come up on my radar, and I will be looking into it more. But what you're up against is an FPGA industry with a certain apathy towards testing.
For every big (and many small) company "doing it right", there are many more engineers in smaller companies where designs are tested by waveform and on the chip. Anyone coming to a company like this and trying to introduce something like cocotb or VUnit will likely get penalised for "not getting things to market quick enough". They are trapped in a never-ending cycle of rushed designs, patchy interfaces and a general feeling of "I don't really know how this works". Then they sell their product and move on to the next one, where it all happens again. Many engineers would like to do better testing, but they often feel pressured to get the firmware on the chip, and don't have the time/skills to investigate how to do it better.
Looking through what I would consider the two larger training providers (Doulos and FirstEDA), there is no mention of VUnit, cocotb or similar. Their main courses are VHDL, SV and UVM. Without training being available, it will be hard to get the message across. I think most usage will be via word of mouth, where someone has done the legwork, got it all set up, and set up training internally. Engineers move on and take this knowledge with them, but without training and industry support from the big three (Mentor, Aldec, Cadence) to keep up with an industry-standard methodology, I feel there is a danger of bit rot over time as the teams using it inevitably move on.
On a cynical note, I suspect the support is lagging because vendors/trainers do not see a money-making opportunity in it. Look at HLS and OpenCL as an example. C-to-gates has been around for a long time, but things like Handel-C, Catapult-C and SystemC were not supported natively by Xilinx/Altera; they were developed by third-party companies and therefore stayed pretty niche. But now that Xilinx and Altera/Intel have seen the huge potential of pulling software developers into their market with OpenCL and HLS, they are providing big support for them. Xilinx and Altera LOVE inefficient but easy-to-code designs - it sells bigger chips, and more chips.
03-11-2018 01:27 AM
Doulos is starting to offer its first webinars on UVVM, and has for some time been offering webinars on OSVVM, a similar framework.
The companies behind UVVM and OSVVM offer training; it's their business model to deliver training for the open-source frameworks they provide. Both are also VHDL verification frameworks. UVVM has been on the market for two years; OSVVM for more than ten. While OSVVM is already integrated in all enterprise simulators, UVVM is now being integrated.
03-11-2018 03:22 AM
Yes, I have seen those webinars, and I am actively using OSVVM (I have known about OSVVM since the initial posts about it on comp.lang.vhdl).
I much prefer OSVVM as it is more of a toolbox, while UVVM is an entire framework (and I also think: if you wanted UVM, why not just use UVM?). Something like OSVVM is much easier for a casual designer to use, as they can pick and choose bits to use in their own tests and build it up over time. With something like UVVM they have to put their DUT into a UVVM testbench, which is a lot of initial work that many cannot afford or are just put off by, and it is likely a test methodology many have not seen or used before. (And the cynic in me thinks it's all just a ruse to offer training in something.)
I think something as heavyweight as UVVM can suffer more from bit rot than OSVVM.
03-12-2018 10:55 AM
Your responses and views are very interesting and appreciated. BTW, I never found much on cocotb except a few red flags indicating it has lost whatever traction it may have had.
Regardless, one immediate downside is 'skill-set portability' (if you will). Any FPGA designer who would normally leverage years of HDL-based testbench experience is basically reduced to a newbie when resorting to another language/method such as cocotb or similar. There has to be a significant upside for it to make sense, but I'm not seeing it, which is the reason for this post.
In general, when assessing new tools and methods, I think it's important to take into account the size and complexity of the designs/FPGAs involved. Excuse the analogy, but like boats, FPGA designs can range from small, simple runabouts to enormously large, complex cruise ships. Picture an 8-passenger ski boat docked alongside an 8500-passenger Royal Caribbean cruiser. (For what it's worth, the analogy isn't too far off if using passenger capacity and logic cell density; the largest is about 150,000% larger than the smallest.)
With this perspective in mind, one might draw different conclusions when assessing new tools and methods...
For smaller, simpler designs, I suspect most would agree it makes more practical sense to simply develop testbenches in the same hardware description language used for the RTL. Regarding abstraction, VHDL and SystemVerilog offer plenty without having to switch to a non-hardware language like Python.
For large, complex designs, especially team-based ones such as FPGA emulation of ASICs (with generational roadmaps offering lots of reuse opportunity), I suspect most would agree investing in something like UVM may make sense.
Vendors, of course, who have an obvious agenda, will lean toward their own solutions, which may or may not really make sense. Many articles and benchmarks promoting great advantages are written or influenced by vendors, so follow the money.
The question in my mind is where on the size/complexity spectrum it begins to make sense to develop testbenches in anything other than a hardware description language, and where on the spectrum it makes sense to adopt something complex like UVM.
For me personally, most of my designs have been medium-sized (which isn't a helpful metric, especially considering the boat analogy). I've always done just fine writing testbenches, even complex ones with reuse in mind, using VHDL and SystemVerilog. I never perceived this to be a limitation or a problem needing a solution.
04-10-2018 03:35 PM
After just reading through the comments, I thought I'd throw in another perspective -- mostly from the ASIC development side and less so from the FPGA prototyping/emulation arena.
We are actually just starting to transition to cocotb instead of SV/UVM for testing (we'll see how it goes). There are a few motivators for doing this:
* We are trying to verify a mixed-signal design where dependencies on the analog portion of the design often require binding a virtual interface deep into the (often changing) hierarchy to test what we want. Also, as new tests are written, they often require additional signals from the hierarchy, which then means updating the virtual interface and the bindings. The list gets long. Keeping both the virtual interface and the bindings up to date is a bit of a pain, and cocotb basically removes the need for the virtual interfaces, so there is almost no setup/maintenance required.
* Our lab test infrastructure is based on Python, so switching to cocotb/Python would allow us to reuse tests/code between the lab and verification environments. Only the low-level interface functions need to be swapped out.
* Data manipulation is easier in Python. For example, in a mixed-signal simulation, when we are looking at signal sets that are not strictly digital, there are more tools at our disposal to determine whether an analog signal (that we can capture) has the proper characteristics than if we were operating entirely in SV.
* Logging results. Once we have our results, there are many more options for saving them in whatever format and location we'd like, since we're in Python. This allows a more natural transition to CI and other automated methodologies.
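For concreteness, the data-manipulation point can be sketched in plain Python. Everything here is hypothetical (the waveform, the `check_settling` helper and its thresholds are invented for illustration, and no cocotb API is involved): the idea is simply that a check on a captured analog node, such as "does it settle near the target level", is a few lines of Python.

```python
import math

def check_settling(samples, target, tol=0.02, tail_fraction=0.25):
    """Pass if the last `tail_fraction` of the capture stays within
    `tol` (relative) of `target`, a simple settling check."""
    n_tail = max(1, int(len(samples) * tail_fraction))
    return all(abs(s - target) <= tol * abs(target) for s in samples[-n_tail:])

# Hypothetical captured waveform: exponential settling toward 1.8 V
wave = [1.8 * (1 - math.exp(-t / 5.0)) for t in range(50)]

assert check_settling(wave, target=1.8)           # settled capture passes
assert not check_settling(wave[:10], target=1.8)  # truncated capture fails
```

In a real flow the `samples` list would come from the simulator (or from lab capture equipment), and richer libraries could be layered on top for spectral or statistical checks.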
In the few trials we've run so far, I'd say it was far easier to set up the test infrastructure (from scratch) to the point where engineers can start writing tests; it was almost trivial. I'd say writing assertions in cocotb is a bit more cumbersome, or rather less compact, than in SV unless you create some kind of convenience class.
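Such a convenience class need not be large. A hypothetical sketch (the class name and API are invented here, not part of cocotb) that collects named mismatches with context and reports them all at the end of a test, instead of aborting on the first one:

```python
class SimChecker:
    """Hypothetical convenience class: collects named check failures
    with context instead of raising on the first mismatch."""

    def __init__(self):
        self.failures = []

    def check_equal(self, name, actual, expected):
        # Record a descriptive message rather than asserting immediately.
        if actual != expected:
            self.failures.append(f"{name}: expected {expected!r}, got {actual!r}")

    def report(self):
        # Raise one summarised error at the end of the test, if needed.
        if self.failures:
            raise AssertionError("; ".join(self.failures))

chk = SimChecker()
chk.check_equal("rd_data", actual=0xAB, expected=0xAB)  # passes silently
chk.check_equal("status", actual=0x1, expected=0x0)     # recorded
assert len(chk.failures) == 1
```

Inside a cocotb test, `chk.report()` would be called once at the end so a single run surfaces every mismatch at once.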
Our situation is probably not typical, so take my comments with a grain of salt. So far we haven't been limited by cocotb compared to what we had been doing. Perhaps we'll run into some other roadblocks as we use it more, but so far, it looks promising. I for one hope its development continues.
06-14-2018 12:03 PM
My two bits as a cocotb user:
#2 prevents me from using a license-based simulator.
GHDL/iverilog are fast compared to xsim, but do not support SV and UVM. UVVM does not work on iverilog. Also, there is hardly any documentation on UVVM.
Of all the toolkits I have explored, cocotb seems most suitable given the two constraints listed above.
The toolkit is usable as-is today; there are a few nice-to-have features missing, but no showstoppers.
Regarding the long-term viability of the project:
My experience with the framework is as follows:
11-14-2018 04:21 PM - edited 11-14-2018 04:21 PM
It should definitely be noted that cocotb has a new lease of life. The list of maintainers is growing.
@lionfred hits the nail on the head with his points regarding the reasons for adopting cocotb. I'm particularly fond of being able to use the same Python scripts against simulations as on the bench by simply swapping out the low-level transport functions.
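That reuse pattern can be sketched in plain Python. Everything here is hypothetical (class names, methods and the register map are invented for illustration), but it shows the idea: the test logic talks to an abstract transport, and only the bottom layer is swapped between the simulation environment and the lab bench.

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Hypothetical low-level interface; only implementations of this
    layer differ between the simulation and lab environments."""

    @abstractmethod
    def write_reg(self, addr, value): ...

    @abstractmethod
    def read_reg(self, addr): ...

class FakeSimTransport(Transport):
    """Stand-in for a simulator-backed bus master (a cocotb version
    would drive DUT signals; a lab version would talk to JTAG/serial)."""

    def __init__(self):
        self.regs = {}

    def write_reg(self, addr, value):
        self.regs[addr] = value

    def read_reg(self, addr):
        return self.regs.get(addr, 0)

def run_loopback_test(bus: Transport):
    """Environment-agnostic test: runs unchanged against sim or hardware."""
    bus.write_reg(0x10, 0xDEAD)
    assert bus.read_reg(0x10) == 0xDEAD

run_loopback_test(FakeSimTransport())
```

The same `run_loopback_test` body would be handed a lab-backed `Transport` implementation on the bench, which is the reuse @lionfred describes.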
A pity xsim doesn't support a standard interface like VPI. Their users' loss, I suppose.
11-18-2018 05:07 PM
@lasplund: Large companies are using VUnit for their FPGA development of high-end products shipping in millions of units.
Like which large company uses it? Any names?
>> ASIC teams use VUnit for their RTL to reduce the number of bugs handed over to the verification teams where they become much more costly to debug.
Which ASIC teams? From which company? Any names, please.
>>Tool vendors support VUnit by providing tool provider licenses
Which vendors support it? Names, please.
11-18-2018 06:41 PM
I'm not allowed to name the big companies that use VUnit, but I can assure you I'm not Lars, and the company I know of is a top-ten customer of Xilinx.
Vendors supporting VUnit: e.g. Sigasi
VUnit is shipped with OSVVM, which in turn is supported by ModelSim, QuestaSim, Active-HDL, Riviera-PRO, ...
To conclude, only Xilinx does not support any of the emerging open-source verification libraries. Some of them have gained almost 20% market share according to the latest Wilson Research report on ASIC/FPGA designs.