
[ESPResSo-users] Problem MPI+GPU using LBM


From: Markus Gusenbauer
Subject: [ESPResSo-users] Problem MPI+GPU using LBM
Date: Mon, 01 Jul 2013 11:17:35 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130329 Thunderbird/17.0.5

Hi all,

I've tried to run a simple simulation on 2 CPUs (MPI) together with the GPU LB fluid. I have a channel filled with an lbfluid, and from the left I impose a fixed inflow velocity. Without an lbboundary the simulation runs fine; as soon as I add an lbboundary it crashes:

[mgusenbauerMint13:12827] *** An error occurred in MPI_Bcast
[mgusenbauerMint13:12827] *** on communicator MPI_COMMUNICATOR 3
[mgusenbauerMint13:12827] *** MPI_ERR_TRUNCATE: message truncated
[mgusenbauerMint13:12827] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 12827 on
node mgusenbauerMint13 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
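
For completeness, the job is started with two MPI ranks, roughly like this (the binary and script names here are just placeholders for my local setup):

mpirun -np 2 ./Espresso channel.tcl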


Here is the Tcl script:


# integration settings; the LB tau below equals the MD time step
setmd time_step 0.1
setmd skin 0.2
thermostat off

setmd box_l 40 40 100

# four planar walls bounding the 40x40 cross-section in x and y (flow along z)
lbboundary wall normal 1 0 0 dist 0.5 type 501
lbboundary wall normal -1 0 0 dist -39.5 type 501
lbboundary wall normal 0 1 0 dist 0.5 type 501
lbboundary wall normal 0 -1 0 dist -39.5 type 501


# LB fluid on the GPU, lattice constant 1, tau matching the MD time step
lbfluid gpu grid 1 dens 1.0 visc 1.5 tau 0.1 friction 0.5

# main loop: re-impose the inlet velocity, then integrate one step
set i 0
while { $i < 100 } {
    # progress indicator (carriage return keeps it on one line)
    puts -nonewline "$i / 100 \r"
    flush stdout

    # impose u = (0, 0, 0.1) on every node of the inlet plane (kkk = 0)
    for { set iii 0 } { $iii < 40 } { incr iii } {
        for { set jjj 0 } { $jjj < 40 } { incr jjj } {
            for { set kkk 0 } { $kkk < 1 } { incr kkk } {
                lbnode $iii $jjj $kkk set u 0.0 0.0 0.1
            }
        }
    }

    integrate 1
    incr i
}
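
For the MPI+CPU comparison mentioned below, only the lbfluid line changes (a sketch; everything else stays the same):

lbfluid cpu grid 1 dens 1.0 visc 1.5 tau 0.1 friction 0.5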


The same simulation works fine with MPI+CPU. Any ideas?

Markus