
Re: Parallelization

From: Jean-Noël Grad
Subject: Re: Parallelization
Date: Wed, 29 Dec 2021 19:20:08 +0100
User-agent: Roundcube Webmail/1.3.17

Dear Ahmad Reza,

Thank you for pointing that out. I can't find this information in the user guide either; we need to write it down.

ESPResSo supports Open MPI, MPICH and MPICH+UCX. To run a simulation in parallel on, say, 4 ranks, invoke `mpiexec -n 4 ./pypresso`. It should also be possible to run a Jupyter notebook in parallel, but I can't remember the exact syntax, and if I recall correctly, restarting the Python session disables parallelism. To run GDB in parallel, invoke `mpiexec -np 2 xterm -fa 'Monospace' -fs 13 -e ./pypresso --gdb`.

To control how the system volume is partitioned across MPI ranks, add `system.cell_system.node_grid = [i,j,k]` to the script, with the product i*j*k equal to the number of MPI ranks. This is useful, for example, when particles move preferentially along a specific direction and the chosen cell system only has one cell in that direction. When using FFT-based methods, the additional constraint i >= j >= k is enforced due to a limitation of our FFT algorithm; for example, when running P3M electrostatics with 4 MPI ranks, [4,1,1] and [2,2,1] are the only valid partitions. When using GPU-based methods, there is the additional constraint that the GPU device is only visible on MPI rank 0, so one has to account for the overhead, at every time step, of gathering data from all ranks to rank 0 for transfer to the GPU, followed by a broadcast of the GPU results from rank 0 to all ranks.
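To see which node grids satisfy these constraints, here is a small standalone helper (not part of the ESPResSo API, just an illustration of the i >= j >= k rule for FFT-based methods):

```python
def valid_node_grids(n_ranks):
    """List all [i, j, k] with i*j*k == n_ranks and i >= j >= k,
    the ordering required by FFT-based methods such as P3M."""
    grids = []
    for k in range(1, n_ranks + 1):
        if n_ranks % k:
            continue
        for j in range(k, n_ranks // k + 1):
            if (n_ranks // k) % j:
                continue
            i = n_ranks // (k * j)
            if i >= j:  # j >= k is guaranteed by the loop bounds
                grids.append([i, j, k])
    return sorted(grids, reverse=True)

print(valid_node_grids(4))  # [[4, 1, 1], [2, 2, 1]]
```

One of the returned triples can then be assigned to `system.cell_system.node_grid` in the simulation script.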

Finally, the size of the Particle struct matters a lot in parallel simulations. Using a custom myconfig.hpp file containing only the bare minimum of features that are relevant to the simulation can significantly speed up parallel simulations.
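As a sketch, a minimal myconfig.hpp for a plain Lennard-Jones + P3M simulation could look like the following (feature names must match the ones listed for your ESPResSo version, e.g. in src/config/features.def):

```cpp
// myconfig.hpp -- minimal feature set for this particular simulation
#define LENNARD_JONES    // the only pair interaction used
#define ELECTROSTATICS   // required for P3M

// Leaving out unused features (ROTATION, MASS, DIPOLES, ...) keeps the
// Particle struct small and reduces MPI communication volume.
```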

Best regards,

On 2021-12-29 18:08, Ahmad Reza Motezakker wrote:
Dear EspressoMD users,

Is there any tutorial or any document on EspressoMD parallelization? I
really appreciate your help. Any suggestion or experience will be
really helpful for me.

P.S. I am not familiar with parallelization.

Merry Christmas,

Ahmad Reza
