Re: [Qemu-ppc] [PATCH 7/8] PPC: booke206: Check for min/max TLB entry size

From: Scott Wood
Subject: Re: [Qemu-ppc] [PATCH 7/8] PPC: booke206: Check for min/max TLB entry size
Date: Mon, 23 Jan 2012 15:41:01 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:6.0.2) Gecko/20110906 Thunderbird/6.0.2

On 01/23/2012 03:29 PM, Alexander Graf wrote:
> On 23.01.2012, at 21:10, Scott Wood <address@hidden> wrote:
>> If TLB0 has TLBnCFG[AVAIL] set, then with this patch you'll be raising
>> an exception rather than setting the size to the minimum.
>> If TLB0 does not have TLBnCFG[AVAIL] set, you'll be letting the user set
>> whatever size they want.
>> In either case, you seem to be letting the user write whatever they want
>> to the TLB array, and only afterward check whether to send an exception.
> Yes, for !AVAIL we simply override the page size on qemu tlb miss iirc.

Ah.  That seems like a hotter path than tlbwe, and you could still
insert an invalid entry into tlb1 (you'd get an exception, but the entry
would be there).

> Is that wrong? Does tlbwe;tlbre result in different tsize values?

e500mc manual (table 6-6, "MMU Assist Register Field Updates") says
tlbre returns a tsize of 1 for tlb0 -- it doesn't store tsize.  The KVM
MMU API also requires that tsize be stored as a valid value, to simplify
the code that operates on the TLB.  The TLB dump code depends on this
(could be fixed of course, but simpler to fix it once in tlbwe).
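To illustrate the point about normalizing at tlbwe time, here is a minimal sketch (not QEMU's actual code; the field layout mirrors the usual MAS1 TSIZE encoding, but the names and the AVAIL plumbing are illustrative):

```c
#include <stdint.h>

/* Illustrative MAS1 TSIZE field layout (4 KiB page = tsize 1). */
#define MAS1_TSIZE_SHIFT 7
#define MAS1_TSIZE_MASK  (0x1fu << MAS1_TSIZE_SHIFT)

/*
 * On tlbwe, force a fixed-size TLB0 entry (TLBnCFG[AVAIL] clear) to the
 * one valid tsize, so that tlbre, the TLB dump code, and the KVM MMU API
 * can all rely on the stored value being valid.
 */
static uint32_t normalize_mas1_tsize(uint32_t mas1, int tlbn, int avail)
{
    if (tlbn == 0 && !avail) {
        mas1 = (mas1 & ~MAS1_TSIZE_MASK) | (1u << MAS1_TSIZE_SHIFT);
    }
    return mas1;
}
```

Doing this once at write time keeps every reader of the TLB array simple, instead of special-casing !AVAIL on each lookup or miss path.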

>>> True. Maybe we should just always reserve a surplus TLB entry and have the 
>>> current code work, basically making it a nop?
>>> Or we could add checks everywhere...
>> I'd have booke206_get_tlbm() check and return NULL, with callers
>> checking for that.  Optimization can come later, if/when it's shown to
>> be a bottleneck.
> It's more about not missing any cases :). But yeah, it's probably best to 
> just change the semantics.

At least a NULL dereference will be more noticeable than an array overrun...
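The NULL-returning semantics suggested above could look roughly like this (a sketch only; the struct layout and names are stand-ins for QEMU's actual per-TLB bookkeeping):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-ins for QEMU's MAS-style TLB entry and MMU state. */
typedef struct {
    uint32_t mas1, mas2, mas7_3;
} ppcmas_tlb_t;

typedef struct {
    ppcmas_tlb_t *tlb[2];   /* backing arrays for TLB0/TLB1 */
    int nbentries[2];       /* entries per TLB, as in TLBnCFG[NENTRY] */
} mmu_state_t;

/*
 * Bounds-check the requested entry and return NULL instead of indexing
 * past the array; callers must check for NULL before dereferencing.
 */
static ppcmas_tlb_t *booke206_get_tlbm_checked(mmu_state_t *env,
                                               int tlbn, int entry)
{
    if (tlbn < 0 || tlbn > 1 || entry < 0 || entry >= env->nbentries[tlbn]) {
        return NULL;
    }
    return &env->tlb[tlbn][entry];
}
```

A missed caller then crashes loudly on the NULL rather than silently corrupting adjacent state via an out-of-bounds write.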
