From: HATAYAMA Daisuke
Subject: Re: [Qemu-devel] [RFC][PATCH 03/14 v7] target-i386: implement cpu_get_memory_mapping()
Date: Fri, 02 Mar 2012 11:16:34 +0900

From: Wen Congyang <address@hidden>
Subject: Re: [RFC][PATCH 03/14 v7] target-i386: implement 
cpu_get_memory_mapping()
Date: Thu, 01 Mar 2012 14:21:37 +0800

> At 03/01/2012 02:13 PM, HATAYAMA Daisuke Wrote:
>> From: Wen Congyang <address@hidden>
>> Subject: [RFC][PATCH 03/14 v7] target-i386: implement 
>> cpu_get_memory_mapping()
>> Date: Thu, 01 Mar 2012 10:41:47 +0800
>> 
>>> +int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env)
>>> +{
>>> +    if (env->cr[4] & CR4_PAE_MASK) {
>>> +#ifdef TARGET_X86_64
>>> +        if (env->hflags & HF_LMA_MASK) {
>>> +            target_phys_addr_t pml4e_addr;
>>> +
>>> +            pml4e_addr = (env->cr[3] & ~0xfff) & env->a20_mask;
>>> +            walk_pml4e(list, pml4e_addr, env->a20_mask);
>>> +        } else
>>> +#endif
>>> +        {
>>> +            target_phys_addr_t pdpe_addr;
>>> +
>>> +            pdpe_addr = (env->cr[3] & ~0x1f) & env->a20_mask;
>>> +            walk_pdpe2(list, pdpe_addr, env->a20_mask);
>>> +        }
>>> +    } else {
>>> +        target_phys_addr_t pde_addr;
>>> +        bool pse;
>>> +
>>> +        pde_addr = (env->cr[3] & ~0xfff) & env->a20_mask;
>>> +        pse = !!(env->cr[4] & CR4_PSE_MASK);
>>> +        walk_pde2(list, pde_addr, env->a20_mask, pse);
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>> 
>> Does this assume paging mode? I don't know qemu very well, but the qemu
>> dump command runs externally to the guest machine, so I think the machine
>> could be in a state with paging disabled, where CR4 doesn't refer to a
>> page table as expected.
> 
> CR4? I think you want to say CR3.
> 
> Yes, the guest may be in a state where paging is disabled. I will fix it.
> 
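(For reference, a minimal sketch of the check being discussed, assuming the
CR0_PG_MASK definition from target-i386/cpu.h; returning an error in the
non-paging case is only illustrative, falling back to a raw physical dump
may be better:)

    int cpu_get_memory_mapping(MemoryMappingList *list, CPUState *env)
    {
        /* When CR0.PG is clear, the guest has paging disabled, so CR3
         * does not point at a page table and there is nothing to walk. */
        if (!(env->cr[0] & CR0_PG_MASK)) {
            return -1; /* illustrative; could dump physical memory instead */
        }

        /* ... the PAE/LMA dispatch from the quoted patch goes here ... */
        return 0;
    }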

Hmmm, now I think the dump command needs an option to specify whether to
do paging during dumping. Always doing paging is problematic. Also, the
generated format should be as simple as possible, simpler than the one
the current version generates. My reasons are as follows:

  - The qemu dump command runs outside of the guest machine. If the
    machine is in a state with paging disabled, CR3 doesn't hold a page
    table address, so the qemu dump command cannot do paging.

  - We cannot do paging if the guest machine is in a severe state, for
    example, when the page table data has been corrupted for some
    reason. In general, we should rely on as little guest data as
    possible during dumping.

  - There's also a kdump-specific issue. With kdump there are two
    kernels, the 1st kernel and the 2nd kernel; when a crash happens,
    execution is transferred from the 1st to the 2nd, and the 2nd
    kernel then captures the 1st kernel's memory image. The problem is
    that in a catastrophic situation, kdump can hang even in the 2nd
    kernel. At that point the CPU is using the 2nd kernel's page table,
    so doing paging then leads to loss of the 1st kernel's memory.

  - OTOH, gdb cannot perform paging itself, so for gdb support, qemu
    dump needs a paging mode. The window in which qemu dump can produce
    a dump gdb can read is limited to states where paging is enabled
    and the 1st kernel is running, but I think there's no choice.

    * There's a way of getting the 1st kernel's image as a linear image
      from a dumpfile generated in the 2nd kernel without paging. But
      it relies on kernel-specific information, so I don't think qemu
      should do that.

  - Well, it's possible to generate a dumpfile with both physical and
    linear address access enabled together. That is just what Wen is
    doing now. But I think it's better to do it more simply: that is,
    in non-paging mode, produce the dumpfile in raw format; in paging
    mode, produce it in linear format.

    * For example, the current implementation assigns both a virtual
      and a physical address to a single PT_LOAD entry. But the memory
      areas a single PT_LOAD can map are restricted to those that are
      contiguous both physically and virtually. Due to this, I guess
      the number of program headers could grow enormously in the worst
      case; it might even reach ELF's limit (see the sketch after this
      list).

    * Also, because of this, it becomes necessary to reduce the number
      of program headers as much as possible. qemu dump now tries to
      merge them in PATCH 01, but that looks too complicated to me.
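(On the ELF limit above: e_phnum in the ELF header is a 16-bit field, so
at most 65534 program headers can be described directly; beyond that the
standard PN_XNUM escape stores the real count in the sh_info field of
section header 0. A sketch of reading the count back, using only standard
<elf.h> definitions, nothing here is from the patch:)

    #include <elf.h>
    #include <stddef.h>

    /* Return the real program header count of a 64-bit ELF file.
     * When e_phnum holds PN_XNUM (0xffff), the actual count lives in
     * the sh_info field of section header 0. */
    static size_t elf_real_phnum(const Elf64_Ehdr *ehdr,
                                 const Elf64_Shdr *shdr0)
    {
        if (ehdr->e_phnum == PN_XNUM) {
            return shdr0->sh_info;
        }
        return ehdr->e_phnum;
    }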

How do other people think?

Thanks.
HATAYAMA, Daisuke
