From: Philippe Mathieu-Daudé
Subject: [Qemu-devel] [RFC PATCH 16/29] Revert "irq: introduce qemu_irq_proxy()"
Date: Sun, 7 Jan 2018 23:45:45 -0300

This function isn't used anymore.

This reverts commit 22ec3283efba9ba0792790da786d6776d83f2a92.

Signed-off-by: Philippe Mathieu-Daudé <address@hidden>
---
I think circular IRQ dependencies can be avoided with today's QOM devices:
both devices are created and realized before their GPIO lines are wired up,
so a late-bound proxy is no longer needed.
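
For illustration only (not part of this patch, and the "dev-a"/"dev-b"
names and line numbers are made up), cross-wiring two mutually
interrupting devices can look roughly like this board code, with no
proxy indirection involved:

  DeviceState *a = qdev_create(NULL, "dev-a");
  DeviceState *b = qdev_create(NULL, "dev-b");

  qdev_init_nofail(a);
  qdev_init_nofail(b);

  /* Both devices already exist, so each output can be connected
   * straight to the other device's input line. */
  qdev_connect_gpio_out(a, 0, qdev_get_gpio_in(b, 0));
  qdev_connect_gpio_out(b, 0, qdev_get_gpio_in(a, 0));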

 include/hw/irq.h |  5 -----
 hw/core/irq.c    | 14 --------------
 2 files changed, 19 deletions(-)

diff --git a/include/hw/irq.h b/include/hw/irq.h
index 4c4c2eaf9a..ee823177e6 100644
--- a/include/hw/irq.h
+++ b/include/hw/irq.h
@@ -53,11 +53,6 @@ qemu_irq qemu_irq_invert(qemu_irq irq);
 /* Returns a new IRQ which feeds into both the passed IRQs */
 qemu_irq qemu_irq_split(qemu_irq irq1, qemu_irq irq2);
 
-/* Returns a new IRQ set which connects 1:1 to another IRQ set, which
- * may be set later.
- */
-qemu_irq *qemu_irq_proxy(qemu_irq **target, int n);
-
 /* For internal use in qtest.  Similar to qemu_irq_split, but operating
    on an existing vector of qemu_irq.  */
 void qemu_irq_intercept_in(qemu_irq *gpio_in, qemu_irq_handler handler, int n);
diff --git a/hw/core/irq.c b/hw/core/irq.c
index b98d1d69f5..c8e96f122a 100644
--- a/hw/core/irq.c
+++ b/hw/core/irq.c
@@ -121,20 +121,6 @@ qemu_irq qemu_irq_split(qemu_irq irq1, qemu_irq irq2)
     return qemu_allocate_irq(qemu_splitirq, s, 0);
 }
 
-static void proxy_irq_handler(void *opaque, int n, int level)
-{
-    qemu_irq **target = opaque;
-
-    if (*target) {
-        qemu_set_irq((*target)[n], level);
-    }
-}
-
-qemu_irq *qemu_irq_proxy(qemu_irq **target, int n)
-{
-    return qemu_allocate_irqs(proxy_irq_handler, target, n);
-}
-
 void qemu_irq_intercept_in(qemu_irq *gpio_in, qemu_irq_handler handler, int n)
 {
     int i;
-- 
2.15.1



