qemu-devel

Re: [Qemu-devel] pciproxy status?


From: Gianni Tedesco
Subject: Re: [Qemu-devel] pciproxy status?
Date: Wed, 23 Mar 2005 03:16:38 +0000

On Wed, 2005-03-23 at 02:53 +0100, Karsten N. Strand wrote:
>On Tue, 2005-03-22 at 12:25 +0000, Gianni Tedesco wrote:
>> On Tue, 2005-03-22 at 11:22 +0100, Karsten N. Strand wrote:
>> >Hi,
>> >First I must say that I'm extremely impressed with the development of 
>> >qemu, and that I think it's currently one of the most technically 
>> >interesting open source projects these days. Then the question..
>> >
>> >What's the status of the pciproxy patch? Is it still a work in 
>> >progress, or has it stalled? What about PCI DMA support?
>> 
>> Unfortunately it's pretty much stalled due to lack of time. It should
>> work OK though provided your PCI device is not sharing IRQs.
>> 
>
>The card I'm trying to debug is luckily the one and only device in my
>computer not sharing an IRQ.
>
>On the curiosity side of it, looking through the sigirq code I was
>unable to understand why it wouldn't support shared IRQs by just adding
>the SHIRQ flag to request_irq(). Doesn't it already provide a unique
>identifier through the procfs file pointer? Sorry if this is a stupid
>question, I'm not very experienced with kernel code.

Well, shared IRQs are just a bit dodgy. When an IRQ arrives we won't
know which device it is for. If we send the signal to qemu first, the
guest OS will walk its ISR list for that IRQ, find the proxied device,
discover it wasn't the source, and ACK the IRQ. But because PCI IRQs are
level-triggered, the line is immediately reasserted and the guest OS
locks up in an interrupt storm. Conversely, if we send the signal only
after checking all the host ISRs, there is a window where, if qemu is
killed, the IRQ storm happens on the host instead. The SIGIRQ patch
needs to make the kernel send the IRQ signal and then wait for qemu to
tell it when to ACK the PIC.

Anyway, that's all academic since I just checked and the qemu patches on
my site aren't even using SIGIRQ yet. So I should correct my previous
statement: it should work provided the card isn't using IRQs (i.e.
you're just accessing registers).

>> PCI DMA is somewhat tricky to implement (not possible in a general way
>> without either patching the guest, or invasive patching of the host).
>> 
>> Maybe by analyzing the logs gathered so far, you could build some
>> device-specific hooks for figuring out when/how to initiate DMA.
>
>Right now it crashes before even producing any output. It might be some
>change in qemu cvs I have overlooked that breaks the pciproxy patch..
>(?) Anyway, I will try to debug it, just want to make myself a bit more
>familiar with the qemu sources first.

Have you tried putting a printf at the beginning of ioread() / iowrite()
et al. in the pciproxy code? The 0.2 version of the patch has the
printf()s removed...

>> I know the gelato team have got some patches that add new syscalls to
>> linux for allocating buffers to userspace suitable for PCI DMA - which
>> is the next thing that's required once you have the address/length of
>> the buffers and the direction of transfer.
>> 
>
>I had never heard about that project before, but superficially looking
>through their wiki this looks like a good approach.
>
>This gets me thinking about the possibility of compiling a lightweight
>kernel with no native IO drivers, and proxying all devices on the PC
>through it (except some device for debug IO), making it a drop-in
>transparent debugger for any computer. That would be extremely cool :)

Heh, well a while back a guy was working on running BIOS code through
qemu (which was booted directly from the disk) and just copying all i/o
transactions to the ISA serial port. With a bit of work, a system like
that could log just about anything :)

-- 
// Gianni Tedesco (gianni at scaramanga dot co dot uk)
lynx --source www.scaramanga.co.uk/scaramanga.asc | gpg --import




