
From: Reimar Döffinger
Subject: Re: [Qemu-devel] [PATCH] use correct depth from DisplaySurface in vmware_vga.c
Date: Tue, 25 Aug 2009 16:54:11 +0200
User-agent: Mutt/1.5.20 (2009-06-14)

On Tue, Aug 25, 2009 at 01:45:15AM +0200, andrzej zaborowski wrote:
> 2009/8/24 Reimar Döffinger <address@hidden>:
> > On Sun, Aug 23, 2009 at 07:25:32PM +0200, andrzej zaborowski wrote:
> >> It was discussed at some point earlier that at the time this code runs
> >> SDL is not initialised and the depth returned is an arbitrary value
> >> from default allocator.
> >
> > It is not arbitrary. It matches exactly the DisplaySurface's
> > linesize and data buffer size.
> Only at the moment the function is called.  The value is still
> hardcoded, just elsewhere.  Once display backend initialises this
> value may be invalid.

Ok, I looked at the code again and it seems the issue is that we have to
expect the depth of the DisplayState/surface to change at any time.
I'm not sure that was really intentional, and it can't really work in
general, but vmware_vga can handle it just as well as vga does: simply
avoid storing/caching the depth and anything derived from it.
So attached is a patch that does that.
Obviously it has to assume that the depth is not changed "under its
feet" after the guest OS has initialized a graphics mode, but otherwise
it does not care.
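To make the failure mode concrete, here is a minimal, self-contained sketch
(not the actual QEMU structures; `MockSurface`, `CachedState`, and the
helper names are invented for illustration) contrasting caching the depth
at init time with deriving it from the current surface on every use, which
is what the patch does:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for QEMU's DisplaySurface: only the fields
 * relevant to this discussion. */
typedef struct {
    int depth;      /* bits per pixel; may change when the backend reinitialises */
    int linesize;
    uint8_t *data;
} MockSurface;

/* Caching approach: remember the depth once, at init time. */
typedef struct {
    int cached_depth;
} CachedState;

static void cached_init(CachedState *s, const MockSurface *surf)
{
    /* Goes stale if surf->depth changes later. */
    s->cached_depth = surf->depth;
}

static int cached_bytes_per_pixel(const CachedState *s)
{
    return s->cached_depth / 8;
}

/* Patch approach: always derive from the surface as it is right now. */
static int current_bytes_per_pixel(const MockSurface *surf)
{
    return surf->depth / 8;
}
```

If the display backend later reinitialises the surface at a different
depth, the cached value silently describes a buffer layout that no longer
exists, while the derived value stays consistent with the surface's
linesize and data buffer.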
I also have not tested it on hardware/software where the current code
works, since I don't know in which cases that would be.
And lastly, I suspect it breaks DIRECT_VRAM even more; given that
(AFAICT) it does not compile even now, I'd suggest just removing that
code.
> > As such, I want to add that the revert commit message of "was
> > incorrect." doesn't qualify as useful to me.
> I wasn't intending to push this commit, instead I responded to the
> thread but later noticed I had pushed it.

I can't resist pointing out that that doesn't make the commit message any
better.

> Beside that it's an obvious performance gain.  The API change did not
> magically remove the pixel by pixel conversion of the colour space, it
> just hid it in SDL, under more indirection.

I see the point, but while I don't know SDL internals well enough to
know if it does this, it might be possible to do the conversion in
hardware in some cases.

Signed-off-by: Reimar Döffinger <address@hidden>

Attachment: vmware_depth2.diff
Description: Text document
