Re: [Qemu-devel] [PATCH] irq: introduce qemu_irq_proxy()

From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH] irq: introduce qemu_irq_proxy()
Date: Fri, 23 Sep 2011 13:51:01 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20110516 Lightning/1.0b2 Thunderbird/3.1.10

On 09/18/2011 07:58 AM, Avi Kivity wrote:
In some cases we have a circular dependency involving irqs - the irq
controller depends on a bus, which in turn depends on the irq controller.
Add qemu_irq_proxy() which acts as a passthrough, except that the target
irq may be set later on.

Signed-off-by: Avi Kivity <address@hidden>

Applied.  Thanks.


Anthony Liguori


Turns out the circular dependency i8259->isa->pci->i8259 is widespread,
so introduce a general means of fixing it up.  I'll update the patchset to
make use of it everywhere it occurs.

  hw/irq.c |   14 ++++++++++++++
  hw/irq.h |    5 +++++
  2 files changed, 19 insertions(+), 0 deletions(-)

diff --git a/hw/irq.c b/hw/irq.c
index 60eabe8..62f766e 100644
--- a/hw/irq.c
+++ b/hw/irq.c
@@ -90,3 +90,17 @@ qemu_irq qemu_irq_split(qemu_irq irq1, qemu_irq irq2)
     s[1] = irq2;
     return qemu_allocate_irqs(qemu_splitirq, s, 1)[0];
 }
+
+static void proxy_irq_handler(void *opaque, int n, int level)
+{
+    qemu_irq **target = opaque;
+
+    if (*target) {
+        qemu_set_irq((*target)[n], level);
+    }
+}
+
+qemu_irq *qemu_irq_proxy(qemu_irq **target, int n)
+{
+    return qemu_allocate_irqs(proxy_irq_handler, target, n);
+}
diff --git a/hw/irq.h b/hw/irq.h
index 389ed7a..64da2fd 100644
--- a/hw/irq.h
+++ b/hw/irq.h
@@ -33,4 +33,9 @@ qemu_irq qemu_irq_invert(qemu_irq irq);
 /* Returns a new IRQ which feeds into both the passed IRQs */
 qemu_irq qemu_irq_split(qemu_irq irq1, qemu_irq irq2);
 
+/* Returns a new IRQ set which connects 1:1 to another IRQ set, which
+ * may be set later.
+ */
+qemu_irq *qemu_irq_proxy(qemu_irq **target, int n);
+