Re: [fem-fenics] MPI parallelisation
From: Eugenio Gianniti
Subject: Re: [fem-fenics] MPI parallelisation
Date: Thu, 31 Jul 2014 18:54:14 +0000
> Hi Eugenio,
>
> we mark the subdomains in the mesh.oct files in order to be consistent with
> the mesh representation in the msh pkg. In fact, the (p, e, t) representation
> contains this information, so we keep it in fem-fenics as well. I agree with
> you that it is not widely used, but in the msh_refine function, for example,
> it is necessary in order to give back to Octave a refined mesh with all the
> subdomains available (if they were present in the non-refined mesh).
I noticed that they are also used to apply DirichletBCs. Indeed, I currently
have parallel assembly working, and running a pure Neumann problem yields the
same solution in both serial and parallel execution. On the other hand,
DirichletBCs do not work in parallel due to the missing markers, and DOLFIN
1.4.0 still does not support them, so problems with Dirichlet boundary
conditions cannot be solved in parallel (more precisely, the code runs fine to
the end, but the solution is wrong).
After going through the DOLFIN code, I figured out that dolfin::DirichletBC can
also be instantiated with a MeshFunction identifying subdomains as an argument.
I would then move this information from the Mesh itself to two MeshFunctions,
one for the boundary facets and one for the region identifiers. I wonder where
it is better to store such objects. Should I just add them as members of the
mesh class, or implement a new class to wrap MeshFunction? With the first
approach, probably the only change visible to the user would be a new mesh
argument needed by DirichletBC.
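For reference, the DOLFIN C++ constructor I have in mind is the overload of
dolfin::DirichletBC that takes a MeshFunction over facets plus a marker value.
A minimal sketch of its use follows; note this is not fem-fenics code but a
plain DOLFIN example, "Poisson.h" stands for a hypothetical UFL-generated form
header, and the marker value 1 and the OnBoundary subdomain are assumptions
chosen only for illustration:

```cpp
#include <dolfin.h>
#include "Poisson.h"  // hypothetical generated form header

using namespace dolfin;

// Mark the whole mesh boundary; a real problem would use a
// more selective SubDomain.
class OnBoundary : public SubDomain
{
  bool inside (const Array<double>& x, bool on_boundary) const
  { return on_boundary; }
};

int main ()
{
  UnitSquareMesh mesh (32, 32);
  Poisson::FunctionSpace V (mesh);

  // Facet markers stored in a MeshFunction rather than inside
  // the Mesh itself: dimension is topological dim - 1 (facets),
  // initialised to 0 everywhere.
  MeshFunction<std::size_t> boundary_markers
    (mesh, mesh.topology ().dim () - 1, 0);
  OnBoundary on_boundary;
  on_boundary.mark (boundary_markers, 1);

  // DirichletBC built from the MeshFunction and the marker
  // value 1, instead of from markers carried by the Mesh.
  Constant zero (0.0);
  DirichletBC bc (V, zero, boundary_markers, 1);

  return 0;
}
```

With this overload the facet labels travel with the MeshFunction, which DOLFIN
distributes alongside the mesh, so the boundary conditions should survive the
mesh partitioning in parallel runs.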
Eugenio