From: David Hildenbrand
Subject: [PATCH v1 2/9] util/mmap-alloc: factor out calculation of the pagesize for the guard page
Date: Tue, 9 Feb 2021 14:49:32 +0100
Let's factor out calculating the size of the guard page and rename the
variable to make it clearer that this pagesize only applies to the
guard page.
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Cc: Igor Kotrasinski <i.kotrasinsk@partner.samsung.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
util/mmap-alloc.c | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index 890fda6a35..8bdf1f9df8 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -82,6 +82,16 @@ size_t qemu_mempath_getpagesize(const char *mem_path)
return qemu_real_host_page_size;
}
+static inline size_t mmap_guard_pagesize(int fd)
+{
+#if defined(__powerpc64__) && defined(__linux__)
+ /* Mappings in the same segment must share the same page size */
+ return qemu_fd_getpagesize(fd);
+#else
+ return qemu_real_host_page_size;
+#endif
+}
+
void *qemu_ram_mmap(int fd,
size_t size,
size_t align,
@@ -89,12 +99,12 @@ void *qemu_ram_mmap(int fd,
bool shared,
bool is_pmem)
{
+ const size_t guard_pagesize = mmap_guard_pagesize(fd);
int prot;
int flags;
int map_sync_flags = 0;
int guardfd;
size_t offset;
- size_t pagesize;
size_t total;
void *guardptr;
void *ptr;
@@ -115,8 +125,7 @@ void *qemu_ram_mmap(int fd,
* anonymous memory is OK.
*/
flags = MAP_PRIVATE;
- pagesize = qemu_fd_getpagesize(fd);
- if (fd == -1 || pagesize == qemu_real_host_page_size) {
+ if (fd == -1 || guard_pagesize == qemu_real_host_page_size) {
guardfd = -1;
flags |= MAP_ANONYMOUS;
} else {
@@ -125,7 +134,6 @@ void *qemu_ram_mmap(int fd,
}
#else
guardfd = -1;
- pagesize = qemu_real_host_page_size;
flags = MAP_PRIVATE | MAP_ANONYMOUS;
#endif
@@ -137,7 +145,7 @@ void *qemu_ram_mmap(int fd,
assert(is_power_of_2(align));
/* Always align to host page size */
- assert(align >= pagesize);
+ assert(align >= guard_pagesize);
flags = MAP_FIXED;
flags |= fd == -1 ? MAP_ANONYMOUS : 0;
@@ -191,8 +199,8 @@ void *qemu_ram_mmap(int fd,
* a guard page guarding against potential buffer overflows.
*/
total -= offset;
- if (total > size + pagesize) {
- munmap(ptr + size + pagesize, total - size - pagesize);
+ if (total > size + guard_pagesize) {
+ munmap(ptr + size + guard_pagesize, total - size - guard_pagesize);
}
return ptr;
@@ -200,15 +208,8 @@ void *qemu_ram_mmap(int fd,
void qemu_ram_munmap(int fd, void *ptr, size_t size)
{
- size_t pagesize;
-
if (ptr) {
/* Unmap both the RAM block and the guard page */
-#if defined(__powerpc64__) && defined(__linux__)
- pagesize = qemu_fd_getpagesize(fd);
-#else
- pagesize = qemu_real_host_page_size;
-#endif
- munmap(ptr, size + pagesize);
+ munmap(ptr, size + mmap_guard_pagesize(fd));
}
}
--
2.29.2
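
For readers following the diff, here is a minimal standalone sketch of the layout qemu_ram_mmap() ends up with after this patch: reserve the requested size plus one guard page, make only the RAM block usable, and leave the trailing guard page inaccessible. The guard_pagesize() helper and the plain sysconf()/mmap()/mprotect() calls below stand in for QEMU's mmap_guard_pagesize(), qemu_fd_getpagesize() and qemu_real_host_page_size; this is a simplified illustration of the pattern, not the QEMU implementation.

/* Minimal, standalone illustration of the guard-page layout used by
 * qemu_ram_mmap(): reserve size + guard page, then keep the last page
 * inaccessible as the guard.  Simplified sketch, not QEMU code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Stand-in for mmap_guard_pagesize(): without a ppc64 hugepage fd in
 * the picture, the host page size is all we need here. */
static size_t guard_pagesize(void)
{
    return (size_t)sysconf(_SC_PAGESIZE);
}

int main(void)
{
    const size_t guard = guard_pagesize();
    const size_t size = 16 * guard;          /* "RAM block" size */

    /* Reserve RAM block + guard page as inaccessible memory. */
    void *ptr = mmap(NULL, size + guard, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED) {
        perror("mmap reserve");
        return EXIT_FAILURE;
    }

    /* Activate only the first 'size' bytes; the trailing page stays
     * PROT_NONE and guards against buffer overflows. */
    if (mprotect(ptr, size, PROT_READ | PROT_WRITE) != 0) {
        perror("mprotect activate");
        munmap(ptr, size + guard);
        return EXIT_FAILURE;
    }
    memset(ptr, 0, size);                    /* usable RAM */

    /* Unmap both the RAM block and the guard page, mirroring
     * qemu_ram_munmap(fd, ptr, size). */
    munmap(ptr, size + guard);
    return EXIT_SUCCESS;
}

The real qemu_ram_mmap() activates the RAM block by mapping with MAP_FIXED over the reservation (with the caller's fd and flags) rather than via mprotect(), but the size + guard_pagesize accounting is the same as in the hunks above.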
- [PATCH v1 0/9] RAM_NORESERVE, MAP_NORESERVE and hostmem "reserve" property, David Hildenbrand, 2021/02/09
- [PATCH v1 1/9] softmmu/physmem: drop "shared" parameter from ram_block_add(), David Hildenbrand, 2021/02/09
- [PATCH v1 2/9] util/mmap-alloc: factor out calculation of the pagesize for the guard page, David Hildenbrand <=
- [PATCH v1 3/9] util/mmap-alloc: factor out reserving of a memory region to mmap_reserve(), David Hildenbrand, 2021/02/09
- [PATCH v1 4/9] util/mmap-alloc: factor out activating of memory to mmap_activate(), David Hildenbrand, 2021/02/09
- [PATCH v1 5/9] softmmu/memory: pass ram_flags into qemu_ram_alloc_from_fd(), David Hildenbrand, 2021/02/09
- [PATCH v1 6/9] softmmu/memory: pass ram_flags into memory_region_init_ram_shared_nomigrate(), David Hildenbrand, 2021/02/09
- [PATCH v1 8/9] util/mmap-alloc: support RAM_NORESERVE via MAP_NORESERVE, David Hildenbrand, 2021/02/09
- [PATCH v1 7/9] memory: introduce RAM_NORESERVE and wire it up in qemu_ram_mmap(), David Hildenbrand, 2021/02/09
- [PATCH v1 9/9] hostmem: wire up RAM_NORESERVE via "reserve" property, David Hildenbrand, 2021/02/09