Re: [Qemu-devel] [Qemu-ppc] [PATCH 3/9] pci: Make bounds checks on config space accesses actually work


From: David Gibson
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH 3/9] pci: Make bounds checks on config space accesses actually work
Date: Fri, 13 Jan 2012 12:44:12 +1100
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Jan 13, 2012 at 03:23:37AM +0200, Michael S. Tsirkin wrote:
> On Fri, Jan 13, 2012 at 11:26:12AM +1100, David Gibson wrote:
> > On Thu, Jan 12, 2012 at 03:32:32PM +0200, Michael S. Tsirkin wrote:
> > > On Thu, Jan 12, 2012 at 04:46:22PM +1100, David Gibson wrote:
> > > > The pci_host_config_{read,write}_common() functions perform PCI config
> > > > accesses.  They take a limit parameter which they appear to be supposed
> > > > to bounds check against, however the bounds checking logic, such as
> > > > it is, is completely broken.
> > > > 
> > > > Currently, it takes the minimum of the supplied length and the remaining
> > > > space in the region and passes that as the length to the underlying
> > > > config_{read,write} function pointer.  This means that accesses which
> > > > partially overrun the region will be silently truncated - which makes
> > > > little sense.
> > > 
> > > Why does it make little sense? Makes sense to me.
> > 
> > Well, for starters a partial overrun would have to be an unaligned
> > config space access, which is not supported by PCI.  The behaviour if
> > you try is undefined on most bridges and unlikely to be a partial
> > read/write (ignoring the low addr bits would be more likely).
> 
> Yes, bus level cycles have dword granularity in PCI.
> But look e.g. at our express implementation.
> Config is memory-mapped, so we simply map it into the guest as memory.

So?

> If you do a large read, what happens on real hardware?
> I'm not sure, but it looks possible that it will get split into
> multiple dword transactions on the bus.

Totally irrelevant in this context.  The function definition only
allows a maximum of 4 byte transfers (because of the uint32_ts), so
the bus emulation logic would already have to split a long read into
multiple calls to the function.
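
(For reference, the helper under discussion currently looks roughly like
this; paraphrased from memory, so the exact code in hw/pci_host.c may
differ slightly:)

void pci_host_config_write_common(PCIDevice *pci_dev, uint32_t addr,
                                  uint32_t limit, uint32_t val, uint32_t len)
{
    /* Clamps the length instead of rejecting out-of-range accesses */
    pci_dev->config_write(pci_dev, addr, val, MIN(len, limit - addr));
}

uint32_t pci_host_config_read_common(PCIDevice *pci_dev, uint32_t addr,
                                     uint32_t limit, uint32_t len)
{
    return pci_dev->config_read(pci_dev, addr, MIN(len, limit - addr));
}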

> OTOH our implementation passes the read on as-is, so it can see a
> multi-dword access.
> I'll try to play with some real systems to see what they do.
> 
> > Even if a bridge somehow did a partial access, it's still wrong for
> > reads, since the overrunning bits would typically return 0xff not
> > 0x00, and so you'd need to 0xff pad the result.
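
(Just to spell out what actually supporting a partial overrun would take,
a hypothetical sketch, not something I'm proposing:)

/* Hypothetical 0xff padding for a read that partially overruns the
 * region.  Assumes 0 < avail < len <= 4, so the shift is well defined. */
uint32_t avail = limit - addr;     /* bytes still inside the region */
uint32_t val = pci_dev->config_read(pci_dev, addr, avail);
val |= ~0U << (avail * 8);         /* 0xff-fill the overrunning bytes */
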
> 
> Or maybe an error needs to be generated. We really don't know,
> it's up to the caller.
> 
> > There's just no point doing anything other than simply failing partial
> > overruns.
> 
> There's no point in random code churn either.

Removing an insane semantic is not random code churn.
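
To be concrete, "simply failing" an overrun would look something like this
(a sketch only, the actual patch may differ in detail):

void pci_host_config_write_common(PCIDevice *pci_dev, uint32_t addr,
                                  uint32_t limit, uint32_t val, uint32_t len)
{
    assert(len <= 4);

    /* Reject any access that runs past limit, whether it overruns
     * partially or completely, instead of silently truncating it. */
    if (limit <= addr || len > limit - addr) {
        return;
    }

    pci_dev->config_write(pci_dev, addr, val, len);
}

The read side would do the same check and return ~0x0 on failure.
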

> > > >  Accesses which entirely overrun the region will *not* be blocked (an
> > > > exploitable bug), because in that case (limit - addr) will be negative
> > > > and so the unsigned MIN will always return len instead.  Even if signed
> > > > arithmetic was used, the config_{read,write} callback wouldn't know
> > > > what to do with a negative len parameter.
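
(Putting some hypothetical numbers on that, not values from the patch:)

uint32_t addr  = 0x1000;   /* access starts beyond the end ...  */
uint32_t limit = 0x100;    /* ... of a 0x100-byte config space  */
uint32_t len   = 4;

/* limit - addr underflows to 0xfffff100, so MIN(len, limit - addr)
 * evaluates to len and the out-of-range access is not blocked.   */
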
> > > 
> > > The assumption was callers never pass in such values.
> > 
> > So, callers are to treat this function, which takes a limit parameter,
> > as having the semantics of "checks a pointless edge case, but not the
> > obvious type of overrun".  You think that's a sensible semantic for a
> > general helper function?  Seriously?
> 
> I don't mind adding an extra check at this level. But
> the comment would need to be reworded from
> 'fix a bug' to 'pseries wants to pass out of range
> values so let's check this'.

Oh, FFS, you're just making excuses for not making a simple fix to
stupid code.

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson


