

From: Alexey Perevalov
Subject: [Qemu-devel] [RFC PATCH 0/2] Calculate downtime for postcopy live migration
Date: Sat, 18 Mar 2017 18:13:21 +0300

Hi David,

I already asked you about downtime calculation for postcopy live migration.
As I recall, you said it was not worth calculating it per vCPU, or maybe I
understood you incorrectly. I decided to try to prove it could be useful.

This patch set is based on commit 272d7dee5951f926fad1911f2f072e5915cdcba0
of the QEMU master branch. It requires the commit
"userfaultfd: provide pid in userfault uffd_msg" from Andrea's git repository.

While testing it, I found the following things strange:
1. The first userfault always occurs due to an access to RAM in
vapic_map_rom_writable, while all vCPUs are sleeping at that time.
2. The latter half of all userfaults was initiated by kworkers, which made me
doubt whether "current" in handle_userfault inside the kernel is the proper
task_struct for the pagefault initiator. All vCPUs were sleeping at that
moment.
3. There is also a discrepancy between the reported vCPU state and the real
vCPU thread state.

This patch set is just to show the idea; if you are OK with it, the non-RFC
version will not include the /proc access and the large number of traces.
Also, I think it is worth keeping postcopy_downtime in MigrationIncomingState
and returning the calculated downtime to the source side, where query-migrate
will be invoked.

Alexey Perevalov (2):
  userfault: add pid into uffd_msg
  migration: calculate downtime on dst side

 include/migration/migration.h     |  11 ++
 linux-headers/linux/userfaultfd.h |   1 +
 migration/migration.c             | 238 +++++++++++++++++++++++++++++++++++++-
 migration/postcopy-ram.c          |  61 +++++++++-
 migration/savevm.c                |   2 +
 migration/trace-events            |  10 +-
 6 files changed, 319 insertions(+), 4 deletions(-)

-- 
1.8.3.1



