From: Shannon Zhao
Subject: [Qemu-devel] [PATCH v2 2/3] hw/ide/pci: Fix memory leak
Date: Tue, 26 May 2015 09:46:06 +0800

From: Shannon Zhao <address@hidden>

valgrind complains about:
==16447== 16 bytes in 2 blocks are definitely lost in loss record 1,304 of 3,310
==16447==    at 0x4C2845D: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==16447==    by 0x2E4FD7: malloc_and_trace (vl.c:2546)
==16447==    by 0x64C770E: g_malloc (in /usr/lib64/libglib-2.0.so.0.3600.3)
==16447==    by 0x36FB47: qemu_extend_irqs (irq.c:55)
==16447==    by 0x36FBD3: qemu_allocate_irqs (irq.c:64)
==16447==    by 0x3B4B44: bmdma_init (pci.c:464)
==16447==    by 0x3B547B: pci_piix_init_ports (piix.c:144)
==16447==    by 0x3B55D2: pci_piix_ide_realize (piix.c:164)
==16447==    by 0x3EAEC6: pci_qdev_realize (pci.c:1790)
==16447==    by 0x36C685: device_set_realized (qdev.c:1058)
==16447==    by 0x47179E: property_set_bool (object.c:1514)
==16447==    by 0x470098: object_property_set (object.c:837)

Signed-off-by: Shannon Zhao <address@hidden>
Signed-off-by: Shannon Zhao <address@hidden>
---
 hw/ide/pci.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/hw/ide/pci.c b/hw/ide/pci.c
index 1b3d1c1..4b5e32d 100644
--- a/hw/ide/pci.c
+++ b/hw/ide/pci.c
@@ -452,8 +452,6 @@ static const struct IDEDMAOps bmdma_ops = {
 
 void bmdma_init(IDEBus *bus, BMDMAState *bm, PCIIDEState *d)
 {
-    qemu_irq *irq;
-
     if (bus->dma == &bm->dma) {
         return;
     }
@@ -461,8 +459,7 @@ void bmdma_init(IDEBus *bus, BMDMAState *bm, PCIIDEState *d)
     bm->dma.ops = &bmdma_ops;
     bus->dma = &bm->dma;
     bm->irq = bus->irq;
-    irq = qemu_allocate_irqs(bmdma_irq, bm, 1);
-    bus->irq = *irq;
+    bus->irq = qemu_allocate_irq(bmdma_irq, bm, 0);
     bm->pci_dev = d;
 }
 
-- 
2.0.4
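
For context, a minimal sketch of why the old code leaked. The declarations are
paraphrased from include/hw/irq.h and the snippet is illustrative only, not
part of the patch:

    qemu_irq *qemu_allocate_irqs(qemu_irq_handler handler, void *opaque, int n);
    qemu_irq qemu_allocate_irq(qemu_irq_handler handler, void *opaque, int n);

    /* Old pattern: qemu_allocate_irqs() allocates a one-element array via
     * g_malloc (see qemu_extend_irqs in the backtrace above); only its first
     * entry is kept and the array itself is never freed. */
    qemu_irq *irq = qemu_allocate_irqs(bmdma_irq, bm, 1);
    bus->irq = *irq;                          /* irq[] is leaked */

    /* New pattern: allocate the single IRQ directly, so there is no wrapper
     * array left to free. */
    bus->irq = qemu_allocate_irq(bmdma_irq, bm, 0);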