qemu-devel

From: haifeng.lin
Subject: [Qemu-devel] [PATCH] fix the memory leak for share hugepage
Date: Fri, 17 Oct 2014 16:27:17 +0800

From: linhaifeng <address@hidden>

A VM started with shared hugepages should close the hugepage file
descriptors when it exits. The hugepage fd may have been sent to
another process, e.g. vhost-user. If QEMU does not close the fd, the
other process cannot free the hugepages without exiting itself, which
is ugly, so QEMU should close all shared fds on exit.

Signed-off-by: linhaifeng <address@hidden>
---
 exec.c | 12 ++++++++++++
 vl.c   |  7 +++++++
 2 files changed, 19 insertions(+)

diff --git a/exec.c b/exec.c
index 759055d..d120b73 100644
--- a/exec.c
+++ b/exec.c
@@ -1535,6 +1535,18 @@ void qemu_ram_remap(ram_addr_t addr, ram_addr_t length)
         }
     }
 }
+
+void qemu_close_all_ram_fd(void)
+{
+    RAMBlock *block;
+
+    qemu_mutex_lock_ramlist();
+    QTAILQ_FOREACH(block, &ram_list.blocks, next) {
+        close(block->fd);
+    }
+    qemu_mutex_unlock_ramlist();
+}
+
 #endif /* !_WIN32 */
 
 int qemu_get_ram_fd(ram_addr_t addr)
diff --git a/vl.c b/vl.c
index aee73e1..0b78f3f 100644
--- a/vl.c
+++ b/vl.c
@@ -1658,6 +1658,7 @@ static int qemu_shutdown_requested(void)
     return r;
 }
 
+extern void qemu_close_all_ram_fd(void);
 static void qemu_kill_report(void)
 {
     if (!qtest_driver() && shutdown_signal != -1) {
@@ -1671,6 +1672,12 @@ static void qemu_kill_report(void)
             fprintf(stderr, " from pid " FMT_pid "\n", shutdown_pid);
         }
         shutdown_signal = -1;
+
+        /* Close all RAM fds on exit. If the RAM is shared with another
+         * process, e.g. vhost-user, it can free the hugepages by closing
+         * its fd after QEMU exits, rather than having to exit to free them.
+         */
+        qemu_close_all_ram_fd();
     }
 }
 
-- 
1.9.0
