
Re: [Qemu-devel] [PATCH v4] Add option to mlock qemu and guest memory


From: Satoru Moriya
Subject: Re: [Qemu-devel] [PATCH v4] Add option to mlock qemu and guest memory
Date: Tue, 23 Apr 2013 12:47:25 +0900
User-agent: Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20130328 Thunderbird/17.0.5

Hi Vinod,

Thank you for your report.

(2013/04/22 14:16), Chegu Vinod wrote:
> FYI... I tried this change earlier and it did show some performance
> improvements (due to reduced exits).
>
> But, as expected, mlockall() on large guests adds a considerable
> delay to boot time.

Yes, it is expected.
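
For reference, the option in this patch essentially boils down to a
single mlockall() call at startup. A minimal sketch is below (the helper
name is illustrative and error handling is simplified; the real code
goes through QEMU's OS-specific layer):

#include <stdio.h>
#include <sys/mman.h>

/* Illustrative helper, not the actual QEMU code path. */
static int lock_all_guest_memory(void)
{
    /*
     * MCL_CURRENT locks (and therefore populates) everything mapped so
     * far; MCL_FUTURE makes later mappings, including guest RAM, get
     * populated at mmap() time.  Faulting in and zeroing every page up
     * front is where the extra boot time on huge guests goes.
     */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
        perror("mlockall");
        return -1;
    }
    return 0;
}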

> For example, on an 8-socket Westmere box, a 256G guest took an additional
> ~2+ minutes to boot and a 512G guest took an additional ~5+ minutes. This
> is mainly due to the long time spent clearing all the pages.
>
>     77.96%         35728  qemu-system-x86  [kernel.kallsyms]     [k] clear_page_c
>             |
>             --- clear_page_c
>                 hugetlb_no_page
>                 hugetlb_fault
>                 follow_hugetlb_page
>                 __get_user_pages
>                 __mlock_vma_pages_range
>                 __mm_populate
>                 vm_mmap_pgoff
>                 sys_mmap_pgoff
>                 sys_mmap
>                 system_call
>                 __GI___mmap64
>                 qemu_ram_alloc_from_ptr
>                 qemu_ram_alloc
>                 memory_region_init_ram
>                 pc_memory_init
>                 pc_init1
>                 pc_init_pci
>                 main
>                 __libc_start_main
>
> Need to have a faster way to clear pages.

Hmm, clear_page() just calls memset(page, 0, PAGE_SIZE)...
The patch was just merged today. I'll start thinking about the issue above.
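
Incidentally, the population cost should be reproducible outside QEMU
with a plain huge-page-backed mapping. A rough standalone sketch (the
mapping size and huge page availability are assumptions; adjust for
the host):

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <sys/mman.h>

int main(void)
{
    /* 16 GiB for the demo; a 256G/512G guest scales this up accordingly. */
    size_t size = 16UL << 30;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* mlock() forces the same fault-in + clear_page work seen in the profile. */
    if (mlock(p, size) < 0) {
        perror("mlock");
        return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("populate + clear: %.1f s\n",
           (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
    return 0;
}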

Regards,
Satoru