Re: Embarrassingly Parallel Tasks on HPC

From: Erik Petigura
Subject: Re: Embarrassingly Parallel Tasks on HPC
Date: Wed, 29 Aug 2012 15:41:02 -0700

Hi, Matt.

Thanks for your advice!  All I had to do was copy my public key into the 
~/.ssh/authorized_keys file.  The only other change was to disable host key 
checking, since each new connection prompted me with:

RSA key fingerprint is ...
Are you sure you want to continue connecting (yes/no)? 

I found a good tutorial on disabling StrictHostKeyChecking here.
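For reference, the check can also be disabled per host in ~/.ssh/config rather than globally (a sketch; the `c*` host pattern is a placeholder for whatever the compute nodes are actually named):

```
# ~/.ssh/config -- skip host-key prompts for the cluster's internal nodes only.
# "c*" is a placeholder pattern; match it to the real node names.
Host c*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
```

Scoping it to a Host pattern keeps strict checking on for everything outside the cluster.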

Again, thanks for your help! I'm currently running on 64 cores :-).


PS:  I found another tool called load_balance which appears to do the same 
thing.  It seems to be built on MPI and Boost.

On Aug 29, 2012, at 10:40 AM, Matt Oates (Home) <> wrote:

> Hi Erik,
> Welcome to the list.
> On 29 August 2012 18:10, Erik Petigura <> wrote:
>> Now, I'm trying to migrate this code to a larger computer ("carver" at
>> NERSC).  The command no longer works because carver requires a password for
>> each ssh connection to a child node.
> Is there a reason you can't just set up passwordless logins using keys?
> If all the nodes on "carver" share the home directory you log in to,
> you only have to run ssh-copy-id once and then you no longer need to use
> passwords.
> On the computer you want to launch a job from you can do:
> ssh-keygen -t rsa
> This will hopefully create some key files: ~/.ssh/id_rsa and ~/.ssh/
> Then do something like:
> ssh-copy-id -i ~/.ssh/
> From Carver's online help pages it very much looks like they require
> you to use their PBS queue submission tools, if only as a matter of
> billing though!
> Best,
> Matt.
