Contributor
11,318 Views
Registered: ‎02-27-2016

Simple revision control flow?

I have been using Vivado since version 2013.x and it seems we STILL do not have a GOOD revision control flow for Vivado.  The newest effort as documented in UG1198 doesn't seem any better.

 

  1. IP core containers are unusable in revision control; they reduce the large number of text files that must be revision controlled, but they replace them with a binary file that mainstream revision control systems have difficulty handling efficiently and that makes tracking changes almost impossible.

  2. UG892 (and the quick take video at http://www.xilinx.com/video/hardware/vivado-design-suite-revision-control.html) recommends revision controlling project XPR files yet these XPR files are modified when opened if they have been checked out to a different path than the one they were committed from (common for multi-user projects).

  3. UG1198 recommends essentially copying BD and IP sources to a working directory instead of directly using the revision controlled versions.  How do I move changes from my working version to the revision controlled version then?  Copy the entire working directory back to the revision controlled directory? ... if so, then we defeat the purpose of revision control, which is to monitor, track, and record REAL changes to files.  Copying from my working directory to my revision controlled directory will result in MANY files being touched and viewed as changed even though they really haven't changed.

  4. UG1198 goes over the "recommended" way to move from a fresh checkout to the "work" directory (even if I don't agree with the process) but doesn't go over how to move the other way; that is, how to capture changes made to the BD, to the top-level project, etc., back into revision control.

  5. UG1198 relies heavily on creation scripts and makefiles to generate projects and other outputs.  If these scripts and makefiles were generated by Vivado as part of the normal flow, that would be one thing, but the fact that they need to be hand-written and hand-maintained so often is problematic.  As an example, the 'setup' target in 'Makefile' lists SOME sources as dependencies of the setup process.  To do this I am essentially maintaining two different source file lists for the same project: the sources that I add and maintain via the top-level project, which are listed in the project-creation Tcl script, and the ones in the Makefile.  Duplicating this level of information is tedious, burdensome, and easily leads to errors when re-creating a project or on-boarding a new engineer.
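One way to cut that duplication is to keep a single list file that both sides read. A rough sketch, not from UG1198; the file names (sources.f, create_project.tcl) are my own placeholders:

```makefile
# sources.f holds one source path per line -- the ONLY list maintained.
SOURCES := $(shell cat sources.f)

# 'setup' now depends on the real source files without a second hand-kept list.
setup: create_project.tcl sources.f $(SOURCES)
	vivado -mode batch -source create_project.tcl
```

On the Tcl side the same file can feed add_files, e.g. `foreach f [split [string trim [read [open sources.f]]] "\n"] { add_files $f }`, so adding a source means editing one line in one place.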

My initial issues with revision control and Vivado, especially with IP and IPI, were the excessive number of duplicate files.  Honestly, I can deal with that, but the inability to EASILY check out a project to different machines and different paths, and open those projects without changing revision-controlled files, is at this point unacceptable.  Yes, I understand the answer "well you can script it this way, and that way, to make it work," but such an answer is a cop-out.  If I am spending as much, or more, time managing and scripting the revision control flow as I do my build flow, something is very wrong.  As it stands, I am almost better off simply saving tar-balls of my project (this is pretty much equivalent to copying the working directory to the revision directory).

 

Really, it feels like Xilinx, instead of acknowledging there is a problem with revision control integration, is attempting to band-aid the situation and get their users to accept a flawed solution.  It seems with each release of Vivado there are tweaks to 'improve' revision control support that only serve to complicate the situation without really addressing the core problems.  Take IP containers for instance; yes, they reduce the number of revision-controlled files, but they don't solve the issue of files being touched or updated simply by opening a project or IP without actually making any functional or configuration change.

 

Some of my issues may be misunderstandings of the current documentation regarding revision control recommendations, but if that is the case then I argue the issue is inconsistent, unclear, and incomplete documentation.

 

The fact that revision control KEEPS coming up means something is wrong.  I urge you, Xilinx, to open up communication with the community to better understand the real issues with revision control and to understand what it is your customers really need.

DornerWorks
https://goo.gl/8wtknW
18 Replies
Explorer
11,305 Views
Registered: ‎03-31-2016

I agree that Xilinx has some serious issues with multi-user revision-controlled projects.

 

Doing this with a custom part in MIG is nearly impossible due to this: http://www.xilinx.com/support/answers/66678.html.

 

I think the core containers are not perfect, but they do work for Xilinx (except for custom-part MIG) or third-party IP.  Yes, you do have to mark them as binary, but you should not care about the specific differences; you don't have control at that level.

 

The file contains all the settings and the source (except for the MIG custom part config; you really dropped the ball on that, Xilinx), so it SHOULD offer a nice way to build the exact same IP regardless of tool version.  The only thing I would wish for is the ability to easily extract the settings/version for diffing.

 

We make relatively small use of the IP, so we go the managed IP project route with core containers: we check in the managed IP project, the xcix files, and the entire ip_user_files directory (so we don't have to regenerate simulation models on each checkout).

 

Contributor
11,302 Views
Registered: ‎02-27-2016

Maybe I was too harsh on core-containers. I can see where, if the IP really never changes, they aren't really an issue. I think my problem with them is that they (for me) have a narrow use case and only solve a "nuisance" issue, rather than a serious "usability" and "functionality" issue.

Feature and option improvements are great... if the "basics" are solid, and I don't think the "basics" of revision control are solid.  I'd rather see the "basics" fixed first, then add features, options, etc.
Explorer
11,171 Views
Registered: ‎04-22-2015

No, Corrin, I don't think you were too harsh on core containers.  In fact I think your OP hit every nail square on the head.

 

I am using a number of off-the-shelf Xilinx peripherals and yes you don't really need source management on those per se.  However I also am developing a significant number of my own IP blocks, since that is the only path available to get custom RTL into an IPI top-level design.  Managing custom IP in Vivado is a horrorshow every step of the way.  The revision control mess is only the beginning.  The debug/test cycle for a custom IP block is nasty, I'm continually fighting IPI - sometimes it will happily let me upgrade it after a bugfix, sometimes it throws a padlock on it and I have to rebuild the whole mess from the start.  Xilinx is going encryption-happy with their IP, which complicates use of 3rd party tools among other things.

 

Back to the point, it really looks to me like one of two situations: Either the developers at Xilinx have no understanding whatsoever of how revision control is used, or the Vivado tool infrastructure was developed with so little discipline that it has rapidly grown out of any hope of control, and implementing something like sane version control at this point is simply not possible (seriously, my current project has maybe 40-50 files written by me; the as-built directory tree is well north of 5,000 files.  What are they all for?).  I very much believe the latter, that Vivado is simply out of control.  I'm convinced ${DEITY} ${PRONOUN}self doesn't understand how it all works.  It seems to me Vivado was designed as an infrastructure easy to add shiny new features to, but with no concern whatsoever for practical design management.

 

 

Advisor
11,099 Views
Registered: ‎10-10-2014

It's a disaster indeed... the only archiving that works is a zip file of your entire folder structure (and the project -> archive does work too, sort of).  Version control seems to have been forgotten from the start.  A missed opportunity when they switched from ISE to Vivado.

 

UG1198 is a start in the right direction, but falls well short of a true real-life situation where you work on multiple projects, re-using the same IP in different versions, and so on.  And it's a project/job of its own to keep your design under version control, on top of just developing your design.

 

Xilinx should start separating sources & settings from temporary files that can be re-generated; right now we're on our own to find out which files can be regenerated and which cannot.  That shouldn't be too hard ...
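To make that concrete, the separation I've ended up guessing at looks like this .gitignore; the patterns are my own best guess at what Vivado can regenerate, not anything Xilinx documents:

```
# Regenerable Vivado outputs (best-guess list, verify against your own project)
*.jou
*.log
*.str
.Xil/
*.cache/
*.runs/
*.sim/
*.hw/
*.ip_user_files/
```

Everything else (RTL, XDC, scripts, and .xci or .xcix as you prefer) gets checked in.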

 

 

Explorer
11,068 Views
Registered: ‎07-14-2014

So it's not just me that has issues with revision control and Vivado then? Phew...

 

The only way I've found to deal with it is to completely ignore Vivado's folder structure and generate my own.  It's taken about 2 years, but I've got things to the point where they are almost sane, though it does rely on heavily modified scripts to re-build projects.

 

Since we're sharing, 2 of my biggest issues are: a) the fact that the Vivado-generated Tcl files seem to use absolute paths (I'm looking at you, write_bd_tcl), which makes it really difficult to clone designs/repositories into different folder structures and means that every time I re-use the above Tcl command I have to manually edit the resulting Tcl file to use a relative path.

b) Archiving the project completely destroys any carefully crafted directory structure you may have, rendering any scripts you had that traversed your directory structure useless.
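For issue a), the manual edit can at least be scripted. A minimal sketch of the idea; the variable name origin_dir, the path, and the file name are placeholders, so check what your Vivado version actually emits:

```shell
# Stand-in for an exported script with an absolute path baked in:
printf 'set origin_dir "/home/simon/proj/fpga"\n' > recreate_bd.tcl

# Rewrite it to resolve relative to the script's own location instead:
sed -i 's|^set origin_dir .*|set origin_dir [file dirname [info script]]|' recreate_bd.tcl

cat recreate_bd.tcl   # -> set origin_dir [file dirname [info script]]
```

The same one-liner can run as a post-export step so cloned checkouts rebuild without hand edits.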

 

Oh, and one more thing: they always seem to include the version number of the generated IP in the file/directory name, which means you end up with a load of old files in the history that are no longer used, plus new files/folders for every IP update, which causes our repository to balloon in size after a while.  Surely this information could be kept in the file header, leaving the file/folder names the same?

 

Sorry, rant over (at least for now). Just had to get that off my chest

 

Just my 2p worth

 

Simon

Contributor
11,008 Views
Registered: ‎02-27-2016

I gave up on the Vivado directory structure a long time ago; I only ever use remote source files for the exact reason(s) stated above.

 

I have tried to move more and more of my projects to Tcl scripts, but the reality is that to make use of Vivado, especially when dealing with Zynq and MPSoC, you have to use IPI; and once you do that, it is pretty difficult to move to a non-project-based flow, and you are left with 'write_project_tcl' and 'write_bd_tcl'.  I don't mind that, except that, as pointed out, all too often their output needs to be hand edited to work afterwards.

 

This also then goes against numerous statements elsewhere about revision controlling the *.bd and *.xpr files.

Scholar
10,992 Views
Registered: ‎09-16-2009

Corrin,

 

We're in the same place.  We exclusively use non-project Tcl flows.  We just check RTL, scripts, and XDC into revision control.

Xilinx makes this (too) difficult but manageable.  XCI files are just checked in for reference/logging, NOT for synthesis or simulation.  No BD files, DCP, OOC, XPR, PRJ, or other silly made-up acronyms get checked in to revision control.
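For readers who haven't tried it, the shape of such a non-project run script is roughly this; the top name, part number, and paths are placeholders, not our actual build:

```tcl
# Non-project flow: only RTL, XDC, and this script live in revision control.
read_verilog [glob -nocomplain src/*.v]
read_xdc constrs/top.xdc
synth_design -top top -part xc7z045ffg900-2
opt_design
place_design
route_design
report_timing_summary -file reports/timing.rpt
write_bitstream -force out/top.bit
```

Nothing here touches a project file, so nothing gets "touched" just by running it.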

 

However as you point out, for MPSoC (not so much ZYNQ for us, but MPSoC), you gotta use the stupid IPI in order to generate the .hdf file for handoff to your SW team.

 

Now, the hdf file is just a zip file, just like a dcp (sigh; why the obfuscation, Xilinx? It's a zip file; call it a zip file).  Within the hdf (zip) file, it's basically just a few Tcl or C files that contain the register writes from the MPSoC config to set up peripherals/etc.  I've considered just unzipping this file, checking the derived objects themselves into revision control, and forgetting the IPI here too.  The register writes are fairly well commented, and the documentation's pretty good.  This has drawbacks, however, as the whole MPSoC is so new that a lot is fluid right now.
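Since the .hdf really is just a ZIP, peeking inside doesn't even need Xilinx tools; any zip-capable tool works. A small sketch (the file name in the comment is a placeholder):

```python
import zipfile

def list_handoff(path):
    """List the member names inside an .hdf hand-off archive (a ZIP file)."""
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()

# e.g. list_handoff("design_wrapper.hdf") shows the ps*_init .tcl/.c files
# mentioned above (member names vary by device and tool version).
```

From there you can extract just the pieces you want under revision control instead of checking in the opaque archive.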

 

So I'm struggling too with what to do about the .bd files or xpr with the MPSOC (or whatever else).   This whole thing really grates our team. 

 

We don't use the IPI to do ANYTHING with our PL side logic.  (i.e. we don't include bitstreams in the hdf).  We just treat the MPSoC (PS8) as a hard core (just like the old PPC405/440).  It's just a few AXI buses, and some other (easily handled) interfacing.  Very straightforward to handle in Verilog/VHDL.  Why Xilinx thinks schematic capture (IPI) or designing in TCL (write_bd.tcl) is a good idea is beyond many of us.  You need to express design connectivity and parameter settings.  Why not use the more expressive, industry standard, more powerful, and better supported design languages already in use for 25 years?

 

Where Xilinx just completely misses the boat - not even discussing at all in UG1198 is merging, and multiple users simultaneously working on the same IP.  Branching and Merging are required.  None of UG1198 fits with (nor discusses) branches and merges.  (Many times the "other" user is myself - working on a separate problem)

 

Xilinx is way behind the times here, trying to put a band-aid on a gushing wound.  They've tried to back fit revision control on a very poor foundation, and are finding it very difficult. It's very frustrating as we're actually happy with the technology, happy with the design IP, happy with documentation, happy with back end tools. Design IP organization, and tying everything together?  Hopelessly lost. 

 

Regards,

 

Mark

 

Contributor
10,894 Views
Registered: ‎02-27-2016

@markcurry

 

I completely agree with your feelings regarding UG1198.

 

As for IPI, my experience with it with regard to revision control has been terrible, yet I actually really want to use it.  We tried something similar to what you suggest before, and it worked well for us for small designs, but once we started integrating larger numbers of IP we really started to see the benefit of working with IPI, and once we started packaging even per-project IP it got even better.

 

I think both project and non-project flows have their place, and I want to see Xilinx support both revision control and IP integration equally well in both.  Right now that isn't the case: you really need non-project flows for revision control, but a project flow for day-to-day work with IPI for Zynq and MPSoC.

Scholar
10,887 Views
Registered: ‎09-16-2009


@corrin.meyer wrote:

 

As for IPI, my experience with it with regards to revision control has been terrible; I actually really want to use it.  We tried something similar to what you suggest before and it worked well for us for small designs, but once we started integrating larger numbers of IP we really started to see the benefit of working with IPI, and then once we started packaging even per-project IP it got even better.

 

 


I just can't fathom why there's such a desire for schematic capture.  We buried that as a form of digital logic design 25 years ago.  There wasn't any clamor to bring it back.  Xilinx just keeps shoving it down our throats as the only way of using much of their IP.  Sure it makes a neat "gee whiz" presentation that an A.E. can show us for toy designs.  But as a method of designing at today's level of integration?  Boggles the mind.

 

--Mark

 

 

 

 

Advisor
9,903 Views
Registered: ‎10-10-2014

@markcurry I understand your remark and see the advantage of pure text-based coding, but I do see the following advantages in IPI:

 

* for large/complex Zynq-based designs, you can very quickly, with a few clicks, modify connections (e.g. add a 2nd AXI interface), modify a FIFO's depth, and so on, with no hassle over all the port mappings in HDL (especially in VHDL)

* when IPs are upgraded, the same applies: no hassle with keeping port mappings & generic mappings up-to-date

* the schematic top level view does give a clear overview of the entire design

* I do like to use custom IPs, in which you code in HDL your own way, and then connect them in a Zynq system.

 

However, what I'm missing is something the other big competitor had 20 years ago: an easy mixture of both.  You could just put a block on a schematic, go inside the block, and enter plain HDL code.  That was something like the best of both worlds.  If you want to do this now in IPI, you have to build an entire custom IP, package it, ...

 

But I'm not a full time FPGA designer, so you might be more right than me with my limited experience :-)

Contributor
9,771 Views
Registered: ‎02-27-2016

I think that for some people the idea of IPI works well, and for others it is less ideal.  But I would like to steer this conversation back towards revision control.  I would ask that anyone who wants to see revision control improve provide specific issues here (which may include IPI).

Advisor
9,769 Views
Registered: ‎10-10-2014

@corrin.meyer sorry, I let myself go with my frustrations, but that's not constructive :-) back to the topic ...

 

I have one 'solution', and I did post it here in the past. In a nutshell :

 

'write_project_tcl' and 'write_bd_tcl' can be used to recreate an entire IPI design, provided that source files, and optionally custom IP, can be found at the correct locations.  In the post I describe how I 'merge' both to have a single script that recreates my design, but it's not ideal; I would expect Vivado to give me one command that creates that script instead of requiring manual merging.
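The merged result is essentially a wrapper along these lines; the file names are placeholders, and the exported scripts usually still need the hand edits discussed above before this works:

```tcl
# recreate_all.tcl: one entry point that rebuilds the project and the BD.
set origin_dir [file dirname [info script]]
source [file join $origin_dir create_project.tcl] ;# output of write_project_tcl
source [file join $origin_dir system_bd.tcl]      ;# output of write_bd_tcl
```

A single checked-in entry point like this is what I'd want Vivado to emit directly.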

 

Also, Jeffrey.Johnson's comments on that post are valuable info / an alternative way of doing this, but it requires more work.  I would like to focus on developing my application instead of Tcl scripts to recreate it.

 

 

Contributor
9,472 Views
Registered: ‎02-27-2016

I think this is the crux of it.  Yes, there are ways to get version control to work, but they require a lot of additional Tcl scripting and jumping through hoops to make it work.  Just because there is A way to get version control to work, doesn't mean it should be THE way to get it to work.

 

  1. There is a lot of reinventing of the wheel going on.  Yes, we have some resources from Xilinx that document their recommended way of doing things, but they cover so few of the tasks associated with revision control that a lot is still left up to the reader to figure out.

  2. Because there is so much reinventing of the wheel going on, there are MANY MANY different ways to get things done.  This is not necessarily a bad thing, but if every engineer does it differently, and every project does it differently, it leads to a loss of efficiency.

  3. The fact that there is SO much documentation regarding how to do revision control, and that it still can't cover all aspects of it, indicates that something is not quite right.  It seems as if there are different processes for revision controlling a project, which is different from revision controlling a block diagram, which is different from revision controlling a configured IP, which is different from revision controlling custom IP, which is different from...
Adventurer
9,099 Views
Registered: ‎08-31-2009

This is the most comprehensive discussion that I've seen so far, regarding the need for robust revision control integration within Vivado.

 

Similar to everyone else's experience, I have baling-wired together a solution that works for me.  And as everyone else reports, it's complicated, fragile, and requires me to remember subtle nuances.

 

The magnitude of the problem is this. I easily spend more than 50% of my time fighting with Vivado itself, rather than in creative development efforts. I work a lot with IP Packager and see problems like: paths getting changed from relative to absolute, disappearing source files, IP sources not getting updated in the main project, IP Packager hanging on open/close, and outright crashes of Vivado. It's maddening.

 

I can't prove it, but I think a lot of these issues are due to the fact that I keep source code outside the Vivado project directory, in a place where I can revision control it.  I suspect that the Vivado developers don't experience this heartburn because they don't follow this use case.

 

Charlie

 

Scholar
9,093 Views
Registered: ‎09-16-2009

 

I'd just be happy with revision control "friendly" rather than integration.  In fact I'd prefer it this way. 

 

There are too many variables in how folks use version control and configuration management.  We don't need another solution; we just need a flow that's friendly to the (many) tools already available.  No reason to reinvent the wheel.  I think reinventing the wheel is part of the problem in the first place...

 

Regards,

 

Mark

 

 

Contributor
9,075 Views
Registered: ‎02-27-2016

@markcurry

 

I wholeheartedly agree.  I don't need new GUI options or additional Tcl scripts to integrate with revision control systems.  That will only work if I follow a specific process exactly, and likely only with specific revision control systems.  Instead, I advocate a "back to basics" approach with respect to revision control.

 

Teacher
8,145 Views
Registered: ‎07-09-2009

I also spend an inordinate amount of time sorting out Vivado....


Explorer
7,858 Views
Registered: ‎04-22-2015

To me the biggest complicating factors are:

 

1. Identifying the files or elements that are truly necessary to rebuild a design from sources.  If I'm instantiating a Xilinx IP core, I don't need an .xci and a complete set of copies of all the source files.  Two lines of Tcl are sufficient: one to invoke the raw IP block and one to set the attributes that configure it.
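For example, those two lines look roughly like this; the core, module name, and CONFIG values are illustrative only:

```tcl
# Invoke the raw IP block, then configure it -- the entire checked-in record.
create_ip -name fifo_generator -vendor xilinx.com -library ip -module_name char_fifo
set_property -dict [list CONFIG.Input_Data_Width {8} CONFIG.Input_Depth {2048}] [get_ips char_fifo]
```

Two diffable lines per core, instead of an .xci plus a tree of copied sources.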

 

2. Documentation necessary to run the process steps.  This is especially true for IPI packaging and SDK integration.  The ipx:: and hsi:: resources will do the trick but are completely undocumented.  For ipx:: you can at least walk through the GUI and watch the log to see what's going on; that gets the basics.  Documentation on how to use propagated parameters in custom IP modules is sorely needed; it appears on the forums that a handful of users have gotten it to work but the posts are thin on details.  For hsi:: everything is buried under dozens of layers of Java.  There is a manual that alleges to document the hardware/software integration commands, but it amounts to no more than walking through one example with no documentation at all on the commands themselves.

 

3. A way to readily remove *all* generated files, leaving only the essential sources.  Right now I need to take great care to mirror all design modifications into my scripts; if I miss something, it's gone.
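When the ignore list really does cover everything generated, git itself can do that cleanup. A self-contained sketch in a throwaway repo; the ignore patterns are illustrative, not a complete Vivado list:

```shell
set -e
d=$(mktemp -d); cd "$d"
git init -q
printf '*.jou\n*.log\n.Xil/\n' > .gitignore   # assumed regenerable patterns
touch top.v vivado.jou vivado.log             # one real source, two generated files
git add .gitignore top.v
git clean -Xdf                                # removes ONLY the ignored files
ls                                            # -> top.v
```

`git clean -Xdn` first does a dry run, so you can check nothing essential is about to vanish.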

 

I agree, this seems to be the most comprehensive thread about the actual desired use and features to support real version control (and the closely associated issue, repeatability of a build).  It would be really neat if someone from Xilinx would poke their nose in for a look someday.

 

ken