From: Sebastian Macke
Subject: Re: [Qemu-devel] [PATCH 06/13] target-openrisc: Remove TLB flush from l.rfe instruction
Date: Tue, 29 Oct 2013 16:14:45 -0700
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0) Gecko/20100101 Thunderbird/24.0.1

On 29/10/2013 3:20 PM, Max Filippov wrote:
> On Wed, Oct 30, 2013 at 1:53 AM, Sebastian Macke <address@hidden> wrote:
>> On 29/10/2013 2:01 PM, Max Filippov wrote:
>>> On Tue, Oct 29, 2013 at 11:04 PM, Sebastian Macke <address@hidden>
>>> wrote:
>>>> At the moment there are two TLBs: the OpenRISC TLB, followed
>>>> by QEMU's own TLB.
>>>> At the end of the TLB miss handler a tlb_flush of QEMU's TLB
>>>> is executed, which is exactly what we want to avoid.
>>>> As long as there is no context switch we don't have to flush the TLB.
>>> So this flush was needed in order to clean the QEMU TLB in case
>>> DTLB/ITLB translation was enabled/disabled, right? But since you
>>> already have an mmu index for nommu operation, wouldn't it be easier
>>> to indicate the mmu index correctly for data and code accesses and drop
>>> this flush?

>> The mmu index is already set correctly, and this patch removes the flush.
> I'm not sure: cpu_mmu_index only checks SR_IME, not SR_DME.

And again, correct. I saw this problem too and wanted to fix it later.
But under Linux the IME and DME flags are always changed together, so distinguishing between them doesn't make a difference in practice.
So I forgot about it in the end.

>> 1. Problem
>> The problem arises on a context switch. OpenRISC clears its own small
>> TLB page by page. But this does not mean it flushes those pages in the big QEMU
> I think there shouldn't be any entries in the QEMU TLB other than those in
> the OpenRISC TLB, otherwise it would be possible to access unmapped virtual
> addresses without getting page faults.
Correct. And this was the dilemma I had: how to speed up TLB misses by an order of magnitude without breaking any rules.

Without the two patches it works like this:
user-mode OpenRISC TLB miss -> exception -> QEMU TLB flush -> set the new page in the TLB miss handler -> return via l.rfe -> QEMU TLB flush

In the end we always had only one valid page in the QEMU TLB, which is kind of crazy. (Now I am unsure, because by that logic we would never have a valid page in the QEMU TLB and would run into an endless loop.)

So with the patches QEMU's TLB can indeed hold more entries than the OpenRISC TLB, which is 99% fine if you run it under Linux. However, Linux can also clear individual pages from the TLB, and that of course does not remove those pages from QEMU's TLB. So there is a small chance of an error.

Three options:

1. Remove QEMU's TLB entirely.
2. Make the flushing dependent on the mmu_index to increase speed. But as far as I can see this is not implemented.
3. Invalidate the previous page whenever an OpenRISC TLB entry is overwritten. But then I don't know the correct mmu_index, so this would have to be done for all possible mmu indices.

Option three might be possible. I hadn't thought about it before.

Hopefully OpenRISC will get hardware TLB refill next year. That would solve the problem.


>> TLB.  This is the reason why the l.rfe instruction, which is used to return
>> to user mode, clears the TLB.
>> But according to the specification this is wrong.
>>
>> 2. Problem, which is the case you mentioned.
>> You are right, this is one solution, and it is written in the patch notes as
>> point 1.
>> But this would not solve problem no. 1 that I mentioned in this email.

> Confused? I am :)

Easy: l.rfe is not supposed to clear the TLB. It can, but it shouldn't.
With this patch I remove the flush and solve all of these problems by assuming a
global TLB flush whenever the first entry of the small OpenRISC TLB is
invalidated.



