[Qemu-devel] [PATCH 0/3] qemu-kvm: Check the dirty and non-dirty pages by multiple size
Mon, 08 Feb 2010 19:22:10 +0900
In vl.c, the dirty and non-dirty pages are currently checked one by one.
Since we expect most of the memory not to be dirty,
checking the dirty and non-dirty pages multiple pages at a time
should be much faster than checking them one by one.
We expect mainly two kinds of situations.
In one, almost all pages are clean, because only a small number of
pages become dirty between each round of the dirty bitmap check.
In the other, all pages are dirty, because the whole bitmap is marked
dirty at the beginning of migration.
To confirm this expectation, we evaluated the effect of this patch:
we compared the runtime of the original ram_save_remaining() with
that of ram_save_remaining() using the functions introduced by this
patch.
CPU: 4x Intel Xeon Quad Core 2.66GHz
Mem size: 6GB
kvm version: 2.6.31-17-server
qemu version: commit ed880109f74f0a4dd5b7ec09e6a2d9ba4903d9a5
Host OS: Ubuntu 9.10 (kernel 2.6.31)
Guest OS: Debian/GNU Linux lenny (kernel 2.6.26)
Guest Mem size: 512MB
The experimental conditions are as follows:
Cond1: The guest OS periodically dirties 256MB of contiguous pages.
Cond2: The guest OS periodically dirties 256MB of mixed dirty and non-dirty pages.
Cond3: The guest OS reads a 3GB file, which is bigger than guest memory.
Cond4: The guest OS reads/writes a 3GB file, which is bigger than guest memory.
Results:
Cond1: 8~16 times speedup
Cond2: 3~4 times slowdown
Cond3: 8~16 times speedup
Cond4: 2~16 times speedup
# The runtime of ram_save_remaining() varies with the number of
# remaining dirty pages.
From: OHMURA Kei