qemu-devel

Re: [Qemu-devel] [RFC] qemu VGA endian swap low level drawing changes


From: Gerd Hoffmann
Subject: Re: [Qemu-devel] [RFC] qemu VGA endian swap low level drawing changes
Date: Tue, 17 Jun 2014 13:57:47 +0200

  Hi,

> > Let pixman handle it?  Well, except that pixman can't handle byteswapped
> > 16bpp formats.  Too bad :(
> 
> Right :) As I said, it's a trainwreck along the whole stack. I think my
> second patch is reasonably non-invasive and shouldn't affect performance
> of the existing path though.

Yep, that looked sane on a quick scan.

> > And as long as the guest uses 32bpp too there is nothing converted
> > anyway, we just set up the pixman image in the correct format, backed
> > by the guest's VGA memory.  So this ...
> 
> pixman byteswaps 32bpp ? Good, I'd rather leave the work to it since it
> has vector accelerations and other similar niceties which would make it
> a better place than qemu.

Indeed, that is the reason why I've made qemu start using pixman ;)

> However we still need to deal with 15/16bpp guest side fb

Yes.

> Provided we also give pixman the right pixel format for "reverse
> endian", which we don't do today and I have yet to investigate :-)

Oh.  Check ui/qemu-pixman.c.  Probably something goes wrong when
converting qemu's PixelFormat into a pixman format.

> The pixman
> stuff in qemu to be honest is news to me; last time I looked it was SDL
> and hand-made vnc. Is the vnc server going through pixman as well ?

Ok, it is supposed to work this way:

The virtual graphics card creates a DisplaySurface, which is a pixman
image under the hood.  There are basically two ways to do that:

  (1) Use qemu_create_displaysurface().  Allocates host memory and
      returns a 32bpp framebuffer in host byte order.  The virtual
      graphics card is supposed to convert the guest's framebuffer into
      that format.
  (2) Use qemu_create_displaysurface_from().  The DisplaySurface (and
      pixman image) is then backed by guest display memory, so the
      pixman format obviously must match the guest's framebuffer format.

The ui (gtk / sdl / vnc / screendump via monitor) is supposed to deal
with whatever it gets.  Typically the ui checks whether it can use the
format directly, and if not it converts using pixman (see gd_switch in
ui/gtk.c for example).  vnc and screendump use pixman too.  sdl instead
feeds SDL_CreateRGBSurfaceFrom with the shifts of the PixelFormat.

HTH,
  Gerd




