From: Greg Kurz
Subject: [Qemu-devel] [PATCH] mmap-alloc: use same backend for all mappings
Date: Mon, 30 Nov 2015 11:51:57 +0100
User-agent: StGit/0.17.1-dirty

Since commit 8561c9244ddf1122d "exec: allocate PROT_NONE pages on top of RAM",
it is no longer possible to back guest RAM with hugepages on ppc64 hosts:

mmap(NULL, 285212672, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x3fff57000000
mmap(0x3fff57000000, 268435456, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 19, 0) = -1 EBUSY (Device or resource busy)

This is due to a limitation on ppc64 that requires MAP_FIXED mappings to have
the same page size as other mappings already present in the same "slice" of
virtual address space (Cc'ing Ben for details). This is exactly what happens
in the calls above: the first mmap() uses the native host page size (64k) and
the second one uses the huge page size (16M).

To be sure we always have the same page size, let's use the same backend for
both calls to mmap(): this is enough to fix the ppc64 issue.

This has no effect on RAM-based mappings.

Signed-off-by: Greg Kurz <address@hidden>
---

This is a bug fix for QEMU 2.5.
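
For reviewers who want to experiment, below is a minimal standalone sketch of
the reservation pattern this patch switches to: reserve the address range with
the same fd that will back the final mapping, so the kernel places the
reservation in a slice with a matching page size, then install the real
mapping inside it with MAP_FIXED. The hugetlbfs path (/dev/hugepages/guest-ram),
the sizes and the error handling are illustrative assumptions, not part of the
patch, and running it needs preallocated huge pages on the host.

/*
 * Minimal standalone sketch of the reservation pattern this patch switches
 * to.  The hugetlbfs path, the sizes and the error handling are illustrative
 * assumptions only; running it needs preallocated huge pages on the host.
 */
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(void)
{
    size_t align = 16 * 1024 * 1024;   /* huge page size, e.g. 16M on ppc64 */
    size_t size  = 256 * 1024 * 1024;  /* guest RAM size */
    size_t total = size + align;       /* extra room to align inside */

    /* hypothetical hugetlbfs file standing in for QEMU's -mem-path fd */
    int fd = open("/dev/hugepages/guest-ram", O_RDWR | O_CREAT, 0600);
    if (fd < 0 || ftruncate(fd, size) < 0) {
        perror("hugetlbfs backend");
        return 1;
    }

    /*
     * Reserve address space with the *same* backend as the final mapping,
     * so the kernel picks a slice whose page size matches.  Before the
     * patch this call used MAP_ANONYMOUS with fd = -1, got a 64k-page
     * slice, and the 16M MAP_FIXED mapping below failed with EBUSY.
     */
    void *ptr = mmap(NULL, total, PROT_NONE,
                     (fd == -1 ? MAP_ANONYMOUS : 0) | MAP_PRIVATE, fd, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap reserve");
        return 1;
    }

    /* align inside the reservation, then install the real mapping */
    uintptr_t aligned = ((uintptr_t)ptr + align - 1) & ~(uintptr_t)(align - 1);
    void *ram = mmap((void *)aligned, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_FIXED, fd, 0);
    if (ram == MAP_FAILED) {
        perror("mmap fixed");
        return 1;
    }

    printf("guest RAM mapped at %p\n", ram);
    return 0;
}

The (fd == -1 ? MAP_ANONYMOUS : 0) expression mirrors the patched line: when
there is no backing file the old MAP_ANONYMOUS behaviour is kept, which is why
RAM-based mappings are unaffected.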

 util/mmap-alloc.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index c37acbe58ede..0ff221dd94f4 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -21,7 +21,8 @@ void *qemu_ram_mmap(int fd, size_t size, size_t align, bool shared)
      * space, even if size is already aligned.
      */
     size_t total = size + align;
-    void *ptr = mmap(0, total, PROT_NONE, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+    void *ptr = mmap(0, total, PROT_NONE,
+                     (fd == -1 ? MAP_ANONYMOUS : 0) | MAP_PRIVATE, fd, 0);
     size_t offset = QEMU_ALIGN_UP((uintptr_t)ptr, align) - (uintptr_t)ptr;
     void *ptr1;
 



