qemu-devel

Re: [Qemu-devel] [PATCH -V5] target-ppc: Fix page table lookup with kvm


From: Aneesh Kumar K.V
Subject: Re: [Qemu-devel] [PATCH -V5] target-ppc: Fix page table lookup with kvm enabled
Date: Wed, 06 Nov 2013 22:38:22 +0530
User-agent: Notmuch/0.16+99~g10596a5 (http://notmuchmail.org) Emacs/24.3.1 (x86_64-pc-linux-gnu)

Alexander Graf <address@hidden> writes:

> On 11.10.2013, at 09:58, Aneesh Kumar K.V <address@hidden> wrote:
>
>> Alexander Graf <address@hidden> writes:
>> 
>>> On 11.10.2013, at 13:13, Aneesh Kumar K.V <address@hidden> wrote:
>>> 
>>>> From: "Aneesh Kumar K.V" <address@hidden>
>>>> 
>>>> With kvm enabled, we store the hash page table information in the
>>>> hypervisor. Use ioctl to read the htab contents. Without this we get
>>>> the below error when trying to read the guest address:
>>>> 
>>>> (gdb) x/10 do_fork
>>>> 0xc000000000098660 <do_fork>:   Cannot access memory at address 0xc000000000098660
>>>> (gdb)
>>>> 
>>>> Signed-off-by: Aneesh Kumar K.V <address@hidden>
>>>> ---
>>>> Changes from V4:
>>>> * Rewrite to avoid two code paths for doing hash lookups
>> 
>> ....
>> 
>>>> +
>>>> +    i = 0;
>>>> +    hpte = pte_index * HASH_PTE_SIZE_64;
>>>>    if (likely((flags & H_EXACT) == 0)) {
>>>>        pte_index &= ~7ULL;
>>>> -        hpte = pte_index * HASH_PTE_SIZE_64;
>>>> -        for (i = 0; ; ++i) {
>>>> +        token = ppc_hash64_start_access(ppc_env_get_cpu(env), pte_index);
>>>> +        do {
>>> 
>>> Why convert this into a while loop?
>> 
>> I am moving i = 0 outside the loop. Hence I found while () better than
>> for (;; ++i).
>
> Outside of what loop? You're only moving it outside of the if().

I meant moving i = 0 out of the for loop's initialization part.


Here is how it would look with for:

i = 0;
if () {
    for (;; i++) {
        if () {
            break;
        }
    }
} else {

}
hpte += i * HASH_PTE_SIZE_64;



I found do { } while (i++) better than the for loop.


>
>> 
>>> 
>>>>            if (i == 8) {
>>>> +                ppc_hash64_stop_access(token);
>>>>                return H_PTEG_FULL;
>>>>            }
>>>> -            if ((ppc_hash64_load_hpte0(env, hpte) & HPTE64_V_VALID) == 0) {
>>>> +            if ((ppc_hash64_load_hpte0(token, i) & HPTE64_V_VALID) == 0) {
>>>>                break;
>>>>            }
>>>> -            hpte += HASH_PTE_SIZE_64;
>>>> -        }
>>>> +        } while (i++);
>>>> +        ppc_hash64_stop_access(token);
>> 
>> ....

....

>>> You also want to cache kvmppc_uses_htab_fd(cpu) so it doesn't have to
>>> call an ioctl every time we try to read a PTE. I guess the best spot
>>> for it would be to put it into env.
>> 
>> I didn't get that. We already cache that value as:
>> 
>> bool kvmppc_has_cap_htab_fd(void)
>> {
>>    return cap_htab_fd;
>> }
>> 
>> Looking at the kernel capability check, I do find an issue: with both
>> HV and PR loaded as modules, it would always return true. That is not
>> what we want here. Ideally we should have all of these as VM ioctls
>> rather than device ioctls. I had flagged that as a FIXME in the HV/PR
>> patchset.
>
> As you've realized, we only cache the ability to get an htab fd, not
> whether we are actually using one. That needs to be a separate variable.
>

As mentioned earlier, we can't really cache the htab fd, because the
interface doesn't support random reads of the hash page table.


-aneesh