
Re: [PATCH-1] A new irq device interface

From: Junling Ma
Subject: Re: [PATCH-1] A new irq device interface
Date: Sun, 2 Aug 2020 15:32:01 -0700

Hi Samuel,

Thanks for reviewing.

> Junling Ma, on Sat, 01 Aug 2020 18:59:58 -0700, wrote:
>> PATCH 1 is included in this email: it prepares the stage by
> Please provide separate patches for each of these. Otherwise this is
> all conflated and way more difficult to review.


>> 1. Change devicer_user_intr to be a linux interrupt handler. 
> Right, this makes sense, but did you actually try it? Check the existing
> code twice to realize why it's done that way. Here linux_intr is looking
> through the list of actions for an irq. You are making action->handler()
> call free_irq itself when the handler should be removed. But then the
> loop of linux_intr will have its action pointer undefined since freed by
> free_irq.

Right, I realized that, and moved the free_irq call to intr_thread. Will clean up the
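The use-after-free described above can be reduced to a list-walk pattern. Here is a minimal sketch in plain C (all names are illustrative and modeled loosely on the Linux irqaction chain, not copied from the gnumach source): snapshotting the next pointer before invoking the handler is what lets a handler safely unlink and free its own node.

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical, simplified model of the action chain that a
 * linux_intr()-style loop walks.  Only for illustration. */
struct irqaction {
    int (*handler)(struct irqaction *);
    struct irqaction *next;
};

/* Example handler that frees its own node, as a handler calling
 * free_irq on itself effectively would. */
int free_self(struct irqaction *a)
{
    free(a);
    return 0;
}

/* Safe walk: snapshot `next` before invoking the handler, so the
 * handler may free `action` without leaving the loop's pointer
 * dangling.  Reading action->next *after* the call would be a
 * use-after-free. */
int walk_actions_safely(struct irqaction *head)
{
    int ran = 0;
    struct irqaction *action = head;
    while (action) {
        struct irqaction *next = action->next; /* snapshot first */
        action->handler(action);               /* may free `action` */
        action = next;
        ran++;
    }
    return ran;
}
```

Deferring free_irq to intr_thread, as proposed above, avoids the problem entirely by never freeing from interrupt context.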

> I agree that we should probably remove some of the "user_intr" pieces of
> linux_intr(). But we cannot afford removing the part that checks the
> value returned by the handler, otherwise it'll break.

If we move the cleaning-up-dead-notifications code to intr_thread, then the code 
should not break. (I did try it with a test program.) The handler does not care 
about notifications: it sees that irqtab.handler[id] is not null, then increases 
the interrupt counter, disables the irq, and wakes up the intr thread. The 
thread, if it finds an empty queue, frees the irq.
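The division of labor in that paragraph can be sketched as a toy model in C (all field and function names here are illustrative, not gnumach's): the handler only counts and disables, and the thread is the only place that frees or re-enables.

```c
#include <stdbool.h>

/* Toy model of the proposed split.  `pending`, `enabled`, `queue_len`
 * and `freed` are made-up names for illustration only. */
struct line {
    int  pending;   /* interrupts seen but not yet delivered */
    bool enabled;   /* is the irq line enabled? */
    int  queue_len; /* outstanding user notifications */
    bool freed;     /* set when the thread releases the irq */
};

/* Runs in interrupt context: cheap, never frees anything. */
void handler_top_half(struct line *l)
{
    l->pending++;
    l->enabled = false;  /* stands in for __disable_irq */
    /* ...wake the intr thread here... */
}

/* Runs in the intr thread: the only safe place to free the irq. */
void intr_thread_bottom_half(struct line *l)
{
    if (l->queue_len == 0) {
        l->freed = true;   /* stands in for free_irq */
    } else {
        l->pending = 0;
        l->enabled = true; /* re-enable after delivering */
    }
}
```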

>> 2. Make user_intr_t represent a specific interrupt line, so that 
>> notifications queue on each line.
>> 3. Make struct irqdev store a vector user_intr_t, indexed by interrupt line.
> ? But then we can't have several userland translators using the same
> irq? That'll break.

Each user_intr_t has a queue of notifications. Originally, each notification 
was queued up on the Linux side. In this patch we just move them back to the 
Mach side and sort them by interrupt line (i.e., attach them to the 
user_intr_t). I am not sure why that would break.
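To make the layout concrete, here is a sketch of the per-line arrangement in plain C. This is a guess at the shape being described, not the patch's actual definitions; the sizes, field names, and the ring-buffer queue are all invented for illustration.

```c
#include <stddef.h>

#define NLINES 16  /* illustrative number of interrupt lines */
#define QMAX    8  /* illustrative per-line queue capacity */

/* One user_intr_t per interrupt line, each owning its own
 * notification queue (a small ring buffer here). */
typedef struct {
    int line;
    int queue[QMAX];
    int head, tail, count;
} user_intr_t;

/* struct irqdev stores a vector of user_intr_t, indexed by line. */
struct irqdev {
    user_intr_t lines[NLINES];
};

/* Queue a notification on its line; returns -1 if the queue is full. */
int enqueue_notification(struct irqdev *dev, int line, int note)
{
    user_intr_t *u = &dev->lines[line];
    if (u->count == QMAX)
        return -1;
    u->queue[u->tail] = note;
    u->tail = (u->tail + 1) % QMAX;
    u->count++;
    return 0;
}
```

Multiple userland translators on the same irq would then simply mean multiple notifications queued on the same line's user_intr_t.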

In fact, I was pondering the idea that each irq* device should be opened 
exclusively, with the userland device translator doing the multiplexing, 
because in the future, when we have MSI or MSI-X, interrupts will not be 
shared, and they should be opened exclusively. However, I am afraid that an 
interrupt going through two indirect calls would be too costly and hurt driver 
performance. So this patch still does multiplexing and queues up multiple 

>> 4. The n_unacked counter only increases when an interrupt is successfully 
>> delivered.
> ? It is meant to know how many times __disable_irq was called, and thus
> how many times we should call __enable_irq if the port dies.

__disable_irq uses its own counter. I think n_unacked, as the name suggests, 
counts how many interrupts have not been acked. The original way of counting 
means that the interrupt line would unfortunately be blocked forever if a 
message somehow failed to deliver. This patch increases n_unacked only if a 
notification is actually delivered, so waiting for acks becomes sensible.
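The counting change boils down to a two-function invariant, sketched here in plain C (names invented for illustration; not the patch's code): if n_unacked only grows on successful delivery, then every increment has a userland translator that can ack it, and a failed send can never wedge the line.

```c
#include <stdbool.h>

/* Toy accounting for one interrupt line. */
struct line_acct {
    int n_unacked; /* delivered-but-not-yet-acked notifications */
};

/* Proposed scheme: count only when the notification actually went
 * out.  The original scheme incremented even on a failed send,
 * leaving an ack that could never arrive. */
bool deliver(struct line_acct *l, bool send_succeeded)
{
    if (!send_succeeded)
        return false;  /* nothing delivered, nothing to ack */
    l->n_unacked++;
    return true;
}

/* Userland acks a notification; the line can be re-enabled when
 * n_unacked drops back to zero. */
void ack(struct line_acct *l)
{
    if (l->n_unacked > 0)
        l->n_unacked--;
}
```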

I am of course happy to produce a new version of the patches. But I also feel 
that whether we want to introduce this new scheme still needs more discussion.

