
From: Anthony Liguori
Subject: Re: [Qemu-devel] Cirrus bugs vs endian: how two bugs cancel each other out
Date: Mon, 30 Jul 2012 19:17:56 -0500
User-agent: Notmuch/0.13.2+93~ged93d79 (http://notmuchmail.org) Emacs/23.3.1 (x86_64-pc-linux-gnu)

Benjamin Herrenschmidt <address@hidden> writes:

> On Mon, 2012-07-30 at 16:55 +0300, Avi Kivity wrote:
>> > The trouble is predicting which guests have drivers and which guests
>> > don't.  Having a VGA model that could be enabled universally with good
>> > VBE support for guests without drivers would be a very nice default
>> > model.
>> I agree.  Hopefully it won't be difficult to get the guest to unmap, or
>> maybe we can just unregister the direct RAM mapping in qemu.
> I don't understand that part... why on earth would the fb has to be
> unmapped ?

This is a detail of how Spice/QXL works.  QXL is not a framebuffer
device in the traditional sense.

Spice sends a series of rendering commands.  It is not rendering to a flat
framebuffer but rather to window-like objects.  It maintains a list of
these commands and objects, and even maintains a tree to track how they
overlap.

It remotes these commands over the network and the client does all the
magic that your compositing manager/window manager/X server would
normally do.  This is why it's so complicated: it's doing an awful lot.

Normally, the framebuffer that the guest sees (it must exist, it's VGA
after all) is never updated.  If the guest attempts to read the
framebuffer (it normally doesn't), Spice/QXL will render the entire
queue all at once to produce the framebuffer result.  This is a slow
path because it doesn't happen normally.
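
The deferral described above can be sketched roughly like this (a toy
model with invented names, not QXL's actual data structures):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of deferred rendering: commands accumulate in a queue and
 * are only flattened into the framebuffer when the guest reads it. */

#define FB_SIZE  16
#define MAX_CMDS 32

struct fill_cmd { size_t off, len; unsigned char color; };

struct display {
    unsigned char fb[FB_SIZE];       /* the VGA framebuffer the guest sees */
    struct fill_cmd queue[MAX_CMDS];
    size_t ncmds;
};

/* Fast path: just record the command; the framebuffer is untouched. */
static void display_submit(struct display *d, struct fill_cmd c)
{
    assert(d->ncmds < MAX_CMDS);
    d->queue[d->ncmds++] = c;
}

/* Slow path: a guest read forces the whole queue to be rendered. */
static unsigned char display_read(struct display *d, size_t off)
{
    for (size_t i = 0; i < d->ncmds; i++) {
        struct fill_cmd *c = &d->queue[i];
        memset(d->fb + c->off, c->color, c->len);
    }
    d->ncmds = 0;
    return d->fb[off];
}
```

Until the first `display_read`, the framebuffer contents and the
submitted commands are inconsistent with each other by design; that is
the property being traded away for latency.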

Unmapping is probably the wrong word.  Quiescing is probably a better
term.

>> > We've never made the switch because WinXP doesn't have VESA support
>> > natively.  But we're slowly getting to the point in time where it's
>> > acceptable to require a special command line option for running WinXP
>> > guests such that we could consider changing the default machine type.
>> Yes.
>> > 
>> >>> It's not clear to me why it doesn't enable VBE but presumably if it did,
>> >>> then accelerations could be mapped through VBE.
>> >>
>> >> I believe the idea is that you don't want to map the framebuffer into
>> >> the guest, this allows one-directional communication so you can defer
>> >> rendering to the client and not suffer from the latency.  But I may be
>> >> mixing things up.
>> > 
>> > Hrm, that seems like an odd strategy for legacy VGA.  Spice isn't
>> > remoting every pixel update, right?  I would assume it's using the same
>> > logic as the rest of the VGA cards and doing bulk updates based on the
>> > refresh timer.  In that case, exposing the framebuffer shouldn't matter
>> > at all.
>> I'd assume so too, but we need to make sure the framebuffer is unmapped
>> when in accelerated mode, or at least the guest has no expectations of
>> using it.
> Well, unmapping it is easy but why would you want to do or enforce
> that ? 

Because Spice depends on keeping the framebuffer inconsistent.

>> The drm drivers for the current model are needed anyway; so moving to
>> virtio is extra effort, not an alternative.
>> Note virtio doesn't support mapping framebuffers yet, or the entire vga
>> compatibility stuff, so the pc-oriented card will have to be a mix of
>> virtio and stdvga multiplexed on one pci card (maybe two functions, but
>> I'd rather avoid that).
> Well what I'm hacking as a proof of concept right now is std-vga with a
> BAR for MMIO regs to supplement the legacy VGA IO and an optional virtio
> BAR (which is not BAR 0, see below).
> So to be compatible with the existing std-vga I made the virtio BAR be
> BAR 3 or something like that (trivial patch in virtio-pci to allow that)
> but of course that means a hack in the guest to find it which is
> sub-optimal.

Why does it need to be compatible?  I don't think there's anything in
VESA that mandates the framebuffer be in BAR 0.  In fact, I don't think
VESA even mandates PCI.

VBE is strictly a BIOS interface.  So you could move the framebuffer to
BAR 1 by just modifying SeaBIOS.
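
This works because the guest learns the linear framebuffer address from
the VBE mode info block, not from any fixed BAR: per the VBE 2.0 spec,
the PhysBasePtr field at byte offset 40 of the 256-byte ModeInfoBlock
carries whatever address the BIOS chose.  A minimal sketch of that field
(helper names invented):

```c
#include <stdint.h>
#include <string.h>

/* VBE 2.0 ModeInfoBlock is a 256-byte structure; the linear framebuffer
 * address lives in the 32-bit PhysBasePtr field at byte offset 40.
 * The BIOS fills this in, so the BIOS decides which BAR backs the LFB. */
#define VBE_PHYS_BASE_PTR_OFF 40

/* Guest side: read the LFB address the BIOS reported. */
static uint32_t vbe_lfb_addr(const uint8_t mode_info[256])
{
    uint32_t addr;
    memcpy(&addr, mode_info + VBE_PHYS_BASE_PTR_OFF, sizeof(addr));
    return addr;
}

/* BIOS side: whichever BAR the framebuffer ended up in, just report it. */
static void vbe_set_lfb_addr(uint8_t mode_info[256], uint32_t bar_base)
{
    memcpy(mode_info + VBE_PHYS_BASE_PTR_OFF, &bar_base, sizeof(bar_base));
}
```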

> We had a chat with Rusty and it would be ideal if we could have a PCI
> capability indicating where to find the virtio config space.

Typically, PCI capabilities point to locations within PCI config space.
But the virtio config space is not part of PCI config space; it's in
BAR 0.  So this doesn't make a lot of sense to me.

From a virtio perspective, if there were an API to "map area of memory
from host", virtio-pci could use a transport feature flag to indicate
that such an area existed.  It would be a fully compatible change to the
transport.

From a VGA perspective, as long as we set the class code correctly and
handle legacy VGA access, it should all be fine.  We would need to
implement VBE in terms of virtio commands, but that shouldn't be a
problem.
I think the best approach is to have some basic commands for managing
things like resolution setting, 2d accelerations, etc.  A feature flag
could be used to say "this device speaks Spice too."
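
As a sketch of what that minimal command set plus feature flag might
look like (all names invented, not a proposal for an actual wire
format):

```c
#include <stdint.h>

/* Hypothetical command set for a paravirtual VGA device. */
enum pvga_cmd {
    PVGA_CMD_SET_MODE = 1,   /* resolution / depth change */
    PVGA_CMD_FILL_RECT,      /* simple 2d acceleration */
    PVGA_CMD_COPY_RECT,
};

/* Hypothetical feature bits negotiated at init time. */
#define PVGA_F_2D    (1u << 0)
#define PVGA_F_SPICE (1u << 1)  /* "this device speaks Spice too" */

/* A guest driver would gate command submission on the negotiated bits. */
static int pvga_cmd_supported(uint32_t features, enum pvga_cmd cmd)
{
    switch (cmd) {
    case PVGA_CMD_SET_MODE:
        return 1;                       /* always available */
    case PVGA_CMD_FILL_RECT:
    case PVGA_CMD_COPY_RECT:
        return !!(features & PVGA_F_2D);
    }
    return 0;
}
```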


Anthony Liguori

> However
> this is a bit problematic because either we use the vendor cap which
> means limiting ourselves to RH vendor ID and hijacking the vendor cap
> for it forever, or we get the SIG to allocate a capability for virtual
> IO devices....
> The latter is ideal but I don't have contacts at the SIG. It could be
> done in a way that is usable by many vendors, ie, the cap itself could
> contain a vendor ID indicating the virtualization interface owner along
> with some additional data (in our case virtio version, BAR index, BAR
> offset).
> It does generally make sense to be able to have a device expose a
> more/less standard or existing HW interface (one of the USB HCIs, AHCI,
> VGA stuff, etc...) and also have a virtio channel for paravirt.
> Finally as for s390 & co, well... std-vga is still very PCI'ish, so we'd
> have to do a bit more work if we are to disconnect that.
> Cheers,
> Ben.
