Re: [Getfem-users] GetFEM and MPI

From: Torquil Macdonald Sørensen
Subject: Re: [Getfem-users] GetFEM and MPI
Date: Fri, 20 Nov 2015 23:09:22 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Icedove/38.3.0

Dear Yves and Konstantinos,

Thanks for your responses. With your answers I was able to get interpolation
working in my MPI-enabled GetFEM program using both methods. For
completeness, I post them here:

* Using high-level interpolation, as I understood the suggestion by Yves:

// Interpolate myfunc; with GETFEM_PARA_LEVEL >= 2 the result V_dist is
// distributed (each rank fills only the dofs of its own mesh region)
std::vector<double> V_dist(mesh_fem.nb_dof());
getfem::interpolation_function(mesh_fem, V_dist, myfunc);
// Re-interpolate through the generic assembly workspace; this collects
// the distributed contributions so that V is identical on every rank
getfem::ga_workspace workspace;
workspace.add_fem_constant("v", mesh_fem, V_dist);
workspace.add_expression("v", mesh_im);
std::vector<double> V(mesh_fem.nb_dof());
getfem::ga_interpolation_Lagrange_fem(workspace, mesh_fem, V);
getfem::vtk_export exp_vtk("data_myfunc_high_level.vtk", true);
exp_vtk.write_point_data(mesh_fem, V, "myfunc_high_level");

* Using interpolation_function, MPI_SUM_VECTOR, and then compensating
for redundant contributions (inspired by get_averaged_sigmas(), as
mentioned by Konstantinos):

// Distributed interpolation of myfunc (each rank fills only its own dofs)
std::vector<double> V_dist(mesh_fem.nb_dof());
getfem::interpolation_function(mesh_fem, V_dist, myfunc);
// Count how many ranks contribute to each dof by interpolating the
// constant function 1 and summing the result over all ranks
std::vector<int> count_dist(mesh_fem.nb_dof());
std::vector<int> count(mesh_fem.nb_dof());
auto unity = [](const getfem::base_node &) { return 1.0; };
getfem::interpolation_function(mesh_fem, count_dist, unity);
getfem::MPI_SUM_VECTOR(count_dist, count);
// Sum the distributed values, then divide by the contribution count to
// compensate for dofs shared by several MPI regions
std::vector<double> V(mesh_fem.nb_dof());
getfem::MPI_SUM_VECTOR(V_dist, V);
for (getfem::size_type dof = 0; dof != mesh_fem.nb_dof(); ++dof)
  V[dof] /= count[dof];
getfem::vtk_export exp_vtk("data_myfunc_noassembly.vtk", true);
exp_vtk.write_point_data(mesh_fem, V, "myfunc_noassembly");

Best regards and thanks,
Torquil Sørensen

On 19/11/15 21:00, Yves Renard wrote:
> Dear Torquil,
> Concerning Q1, it depends on which interpolation you use. The interpolation
> of the high-level generic assembly language performs an MPI_SUM_VECTOR(result)
> and divides the components for dofs which lie on multiple MPI regions, so that
> the result is the same on each rank. Yes, it is recommended to use this
> interpolation instead of the old one, for which the result remains
> distributed. There is no specific function for gathering. You can use
> MPI_SUM_VECTOR(result), but you then also have to sum an integer vector
> marking the nonzero components on each rank, in order to divide the dofs
> computed multiple times, if you want to use the same strategy as the one
> used in the interpolation of the high-level generic assembly language.
> Q2: No, this is not supported by GetFEM. Moreover, I do not see how it would
> be possible in 2D and 3D for the dofs that are shared between several
> regions ...
> Best regards,
> Yves.
> ----- Original Message -----
> From: "Torquil Macdonald Sørensen" <address@hidden>
> To: "getfem-users" <address@hidden>
> Sent: Thursday, 19 November 2015 15:45:12
> Subject: [Getfem-users] GetFEM and MPI
> Hi!
> I'm using an MPI-enabled GetFEM, and I have two questions.
> Partitioning of the mesh seems to work fine on my system.
> Q1: I've been trying to use getfem::interpolation_function. The
> reference documentation says: "with the parallelized version
> (GETFEM_PARA_LEVEL >= 2) the resulting vector V is distributed". So
> after running getfem::interpolation_function, my vector V is different
> on each MPI rank, as expected. Each rank has zeroes in V for the dofs
> that are not in its MPI mesh region. How do I then collect the
> vector components into a vector that is common to all MPI ranks, or at
> least gather them on rank 0? Is there a function for this in GetFEM, or
> must I manually loop through the MPI region and MPI_Send the vector
> components to rank 0 and/or the other MPI ranks? That would be fine, but
> I'm just wondering whether there is a built-in function in GetFEM that I
> haven't noticed.
> Q2: GetFEM doesn't seem to choose a DOF ordering such that a given MPI
> rank is associated with a contiguous range of DOF indices. I
> would like to pass assembled matrices to PETSc, but PETSc requires each
> MPI rank to own a contiguous range of row numbers. Is
> there a simple way to achieve this automatically in GetFEM?
> Best regards and thanks,
> Torquil Sørensen
