andrei.tunea
Observer

Remote hosts - Good practices

Dear community,

 

I have an older PC lying around that I want to use as a remote host for synthesis and implementation, since it's faster than my notebook. So I grabbed UG904, went through all the steps described there, and managed to get some things working: my notebook runs Ubuntu 16.04 LTS and Vivado 2017.4. On the remote host I installed Ubuntu 16.04 LTS, but no Vivado. I mount the Vivado installation folder from my notebook (/opt/Xilinx) to the same path on the remote host, and the same for the project folder: the project files are stored on my notebook, and the remote host mounts that folder under the same path. So if my project files are under /project on the notebook, that folder is mounted on the remote host under /project.

 

The mounting is done manually with sshfs: I log into the remote host, mount the folders, and log out. When the jobs are done, I unmount them manually.
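Roughly, the manual steps on the remote host look like this (a sketch; "notebook" stands in for my laptop's hostname):

# mount the Vivado installation and the project folder from the notebook
sshfs notebook:/opt/Xilinx /opt/Xilinx
sshfs notebook:/project /project

# ... run the jobs ...

# unmount when the jobs are done
fusermount -u /project
fusermount -u /opt/Xilinx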

 

To create a Vivado environment on the remote host, I source /opt/Xilinx/.../settings64.sh.
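With the default install layout for 2017.4, that comes down to something like this (run_jobs.tcl is just a placeholder for the actual job script):

source /opt/Xilinx/Vivado/2017.4/settings64.sh
vivado -mode batch -source run_jobs.tcl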

 

SSH is used with agent forwarding, so no password is requested when the notebook sends jobs to the remote host or when the remote host mounts the folders from the notebook.
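On the notebook side, the forwarding can be made the default with an entry in ~/.ssh/config (the hostname "remotehost" is a placeholder):

Host remotehost
    ForwardAgent yes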

 

The method described so far works, but it's definitely not user friendly. I would like to automate the mounting/unmounting of the folders and the creation of the Vivado environment. The way I envision my setup:

* remote host and notebook are on the same physical network

* server runs permanently

* notebook runs "on demand". The user working at the notebook sends jobs to the remote host; the remote host mounts all the folders it needs (Vivado install, project files), runs the jobs, and unmounts (a sketch of this follows the list).
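As a rough, untested sketch of what I have in mind ("notebook", "remotehost", /project, run_jobs.tcl, and the script name are all placeholders), the remote-host side could be a single wrapper script:

#!/bin/bash
# run_remote.sh - hypothetical wrapper, executed on the remote host

# mount the Vivado installation and the project folder from the notebook
sshfs notebook:/opt/Xilinx /opt/Xilinx
sshfs notebook:/project /project

# unmount again when the script exits, even if a job fails
trap 'fusermount -u /project; fusermount -u /opt/Xilinx' EXIT

# set up the Vivado environment and run the jobs
source /opt/Xilinx/Vivado/2017.4/settings64.sh
vivado -mode batch -source /project/run_jobs.tcl

The notebook would then only need to run something like "ssh -A remotehost ./run_remote.sh".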

 

My first question: Is this the way that Xilinx intended remote hosts to be used?

Second: If yes, how would one mount and unmount automatically?

Third: If not, how do others use remote hosts?

 

With best regards,

Andrei

4 Replies
anatoli
Moderator

Hello @andrei.tunea,

 

“Is this the way Xilinx intended remote hosts to be used?”  No.

First of all, Xilinx is not in charge of automatically mounting and unmounting drives, so we can't help much with this. If you want to figure out a way to do that (via some scripting perhaps, etc.), that is up to you. Perhaps other Forum users with more knowledge in this area may be able to help/advise on this matter? However, Xilinx hasn't done any testing like that.

 

Secondly, much of the benefit of using the faster remote machine is going to be negated by keeping Vivado and the project files on the notebook: it will work, but it will be slowed down by the network speed.

 

 

Kind Regards,
Anatoli Curran,
Xilinx Technical Support
franzforstmayr
Contributor

Hello @anatoli,

So what's a good practice?

Should I install Vivado only on my build server, or on both the build server and the local machines?

At the moment I've installed Vivado both on the server and on my machine. Does the Vivado installation on my local machine have to be visible from my build host?

 

Regards, Franz

kkoorndyk
Contributor

I'm not terribly familiar with Vivado's support of LSF, but it is a topic I have some familiarity with from my previous employer many moons ago.  We had set up a cluster using primary interface machines running Gentoo with a large array of diskless nodes running openMosix (maybe?  memory is a bit fuzzy) from CD.  We had a separate machine with all of the tools installed, which the primary interface nodes NFS-mounted permanently.  The whole cluster was on its own gigabit LAN, and the interface nodes had a second NIC to connect to the corporate network.

 

Users would SSH to one of the interface nodes and kick off builds as normal.  There was no need to dispatch builds or simulations using special commands, because the interface nodes were also capable of running the builds/simulations themselves.  The load-balancing software ran "under the hood" and would migrate jobs to the diskless nodes to balance the load.

 

In our case, we had a dedicated development cluster, so it wasn't a big deal to have the tools drive permanently mounted.

 

In your case, it sounds like you want to be able to dynamically mount/unmount the tools drive.  Is that just because you want to be able to share your laptop's drive and have the tower run the tools from there when the laptop is present on the network?

 

Perhaps you could add a script that mounts the drive on login by dropping it in /etc/profile.d.
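Something along these lines, perhaps (untested; the hostname and mount point are assumptions):

# /etc/profile.d/mount_tools.sh - hypothetical login hook
# mount the tools share from the laptop if it isn't mounted yet
if ! mountpoint -q /opt/Xilinx; then
    sshfs notebook:/opt/Xilinx /opt/Xilinx
fi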

 

We really haven't seen the need for a cluster anymore unless we're designing a large ASIC and running regression simulations.  Instead, it's just easier to SSH to our development servers and run from the command line.
andrei.tunea
Observer

Hi,

 

In the meantime I have given up on the idea of launching jobs on remote hosts. I set up a PC with Ubuntu and all the Vivado tools, and my colleagues and I connect to it via SSH/remote desktop and do our design work there.

 

With best regards,

Andrei
