From: Bharata B Rao
Subject: Re: [Qemu-devel] [RFC PATCH] Exporting Guest RAM information for NUMA binding
Date: Mon, 21 Nov 2011 20:48:07 +0530
User-agent: Mutt/1.5.21 (2010-09-15)

On Tue, Nov 08, 2011 at 09:33:04AM -0800, Chris Wright wrote:
> * Alexander Graf (address@hidden) wrote:
> > On 29.10.2011, at 20:45, Bharata B Rao wrote:
> > > As guests become NUMA aware, it becomes important for the guests to
> > > have correct NUMA policies when they run on NUMA aware hosts.
> > > Currently limited support for NUMA binding is available via libvirt
> > > where it is possible to apply a NUMA policy to the guest as a whole.
> > > However multinode guests would benefit if guest memory belonging to
> > > different guest nodes is mapped appropriately to different host NUMA
> > > nodes.
> > >
> > > To achieve this we would need QEMU to expose information about guest
> > > RAM ranges (Guest Physical Address - GPA) and their host virtual
> > > address mappings (Host Virtual Address - HVA). Using GPA and HVA, any
> > > external tool like libvirt would be able to divide the guest RAM as per
> > > the guest NUMA node geometry and bind guest memory nodes to
> > > corresponding host memory nodes using HVA. This needs both QEMU (and
> > > libvirt) changes as well as changes in the kernel.
> > 
> > Ok, let's take a step back here. You are basically growing libvirt into a
> > memory resource manager that knows how much memory is available on which
> > nodes and how these nodes would possibly fit into the host's memory layout.
> > 
> > Shouldn't that be the kernel's job? It seems to me that architecturally the 
> > kernel is the place I would want my memory resource controls to be in.
> 
> I think that both Peter and Andrea are looking at this.  Before we commit
> an API to QEMU that has a different semantic than a possible new kernel
> interface (one that perhaps QEMU could use directly to inform the kernel of
> the binding/relationship between a vcpu thread and its memory at VM startup)
> it would be useful to see what these guys are working on...

I looked at Peter's recent work in this area.
(https://lkml.org/lkml/2011/11/17/204)

It introduces two interfaces:

1. ms_tbind() to bind a thread to a memsched(*) group
2. ms_mbind() to bind a memory region to a memsched group

I assume the 2nd interface could be used by QEMU to create
memsched groups for each of guest NUMA node memory regions.

In the past, Anthony has said that NUMA binding should be done from outside
of QEMU (http://www.kerneltrap.org/mailarchive/linux-kvm/2010/8/31/6267041).
Though that was in a different context, maybe we should revisit that position
and see if QEMU still sticks to it. I know it's a bit early, but if needed we
should ask Peter to consider extending ms_mbind() to take a tid parameter too,
instead of working on the current task by default.

(*) memsched: An abstraction representing the coupling of threads with virtual
address ranges. Threads and virtual address ranges of a memsched group are
guaranteed (?) to be located on the same node.

Regards,
Bharata.



