
From: Alistair Francis
Subject: Re: [PATCH v1 1/1] target/riscv: Don't set write permissions on dirty PTEs
Date: Thu, 12 Mar 2020 15:10:36 -0700

On Wed, Mar 4, 2020 at 9:34 AM Richard Henderson
<address@hidden> wrote:
> On 3/3/20 5:16 PM, Alistair Francis wrote:
> > The RISC-V spec specifies that when a write happens and the D bit is
> > clear the implementation will set the bit in the PTE. It does not
> > describe that the PTE being dirty means that we should provide write
> > access. This patch removes the write access granted to pages when the
> > dirty bit is set.
> The W bit by itself says we should provide write access.
> It is an implementation detail that we *withhold* write access when D is clear
> (etc) so that we can trap, so that we can properly set D in the future.
> The page table walk associated with a read is allowed to cache all of the
> information it finds in the PTE during that walk.  Which includes the D bit.
> If D is set, then we do not need to set it in future, so we do not need to
> trap, so we can immediately honor the W bit.

Ok, I understand what is going on here now. I agree that my patch is wrong.
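To make sure I follow, here is a toy sketch of that logic in C (hypothetical helper and a simplified prot encoding for illustration only, not QEMU's actual translation code):

```c
#include <stdint.h>

/* Leaf PTE bits as defined in the RISC-V privileged spec. */
#define PTE_R (1u << 1)
#define PTE_W (1u << 2)
#define PTE_X (1u << 3)
#define PTE_A (1u << 6)
#define PTE_D (1u << 7)

/* Simplified prot encoding for this sketch. */
#define PROT_READ  1
#define PROT_WRITE 2
#define PROT_EXEC  4

/*
 * Hypothetical sketch of the permissions an implementation may cache
 * in its TLB for a leaf PTE.  W is *withheld* (not denied) while D is
 * clear, so the first store traps and gives us a chance to set D;
 * once D is set there is nothing left to record, so W can be honored
 * directly.
 */
static int tlb_prot_for_pte(uint32_t pte)
{
    int prot = 0;

    if (!(pte & PTE_A)) {
        return 0;               /* first access must trap to set A */
    }
    if (pte & PTE_R) {
        prot |= PROT_READ;
    }
    if (pte & PTE_X) {
        prot |= PROT_EXEC;
    }
    if ((pte & PTE_W) && (pte & PTE_D)) {
        prot |= PROT_WRITE;     /* W honored only once D is set */
    }
    return prot;
}
```

So a clean, writable page (W set, D clear) is cached read-only, the resulting store fault is what lets the walker set D, and my patch was wrong to drop the W-once-dirty half of that.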

> If the guest changes R/W/X within a PTE (e.g. for mprotect), it is obvious
> that a TLB flush for that page must be done.  It is no different if the
> guest changes A/D (e.g. for swapping).
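
Agreed. As a toy model (purely illustrative, nothing like QEMU's actual TLB structures), caching the walk result is exactly what makes the explicit flush mandatory:

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy single-entry TLB that caches the raw PTE seen at walk time. */
static struct {
    bool valid;
    uint32_t pte;
} tlb;

static uint32_t memory_pte;     /* the PTE as it sits in guest memory */

/* Translation: reuse the cached PTE if we have one, else walk. */
static uint32_t translate(void)
{
    if (!tlb.valid) {
        tlb.pte = memory_pte;   /* cache everything, A/D bits included */
        tlb.valid = true;
    }
    return tlb.pte;
}

/* sfence.vma (or hfence for the second stage) in this model. */
static void sfence_vma(void)
{
    tlb.valid = false;
}
```

If the guest edits memory_pte (clearing W or D, say) and skips the fence, translate() keeps returning the stale bits, and that is legal behavior rather than an emulator bug.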


> > Setting write permission on dirty PTEs results in userspace inside a
> > Hypervisor guest (VU) becoming corrupted. This appears to be because it
> > ends up with write permission in the second stage translation in cases
> > where we aren't doing a store.
> You've not really given any more information than last time this patch
> came around.
> I still think this must be a guest (or nested guest) bug related to clearing
> PTE bits and failing to flush the TLB properly.

I think so as well now. I have changed the Linux guest and Hypervisor
to be very aggressive with flushing but still can't get guest user
space working. I'll keep digging and see if I can figure out what's
going on.

> I don't see how it could be a qemu tlb flushing bug.  The only primitive,
> sfence.vma, is quite heavy-handed and explicitly local to the thread.

Yes, both sfence and hfence flush all TLBs, so that doesn't seem to be
the problem.

> It may be a bug in qemu's implementation of second stage paging.  Which is not
> yet upstream, and I haven't gone digging in the archives to find the patch.

It's upstream now. I have double-checked it, though, and I can't see
anything wrong.


> r~
