Re: [Qemu-devel] [PATCH 15/41] migration: Add dirty_pages_rate to query
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 15/41] migration: Add dirty_pages_rate to query migrate output
Date: Fri, 21 Sep 2012 14:33:44 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120911 Thunderbird/15.0.1
On 21/09/2012 10:47, Juan Quintela wrote:
> It indicates how many pages were dirtied during the last second.
>
> Signed-off-by: Juan Quintela <address@hidden>
> ---
> arch_init.c | 18 ++++++++++++++++++
> hmp.c | 4 ++++
> migration.c | 2 ++
> migration.h | 1 +
> qapi-schema.json | 8 ++++++--
> 5 files changed, 31 insertions(+), 2 deletions(-)
>
> diff --git a/arch_init.c b/arch_init.c
> index 0279d06..d96e888 100644
> --- a/arch_init.c
> +++ b/arch_init.c
> @@ -370,6 +370,14 @@ static void migration_bitmap_sync(void)
> RAMBlock *block;
> ram_addr_t addr;
> uint64_t num_dirty_pages_init = migration_dirty_pages;
> + MigrationState *s = migrate_get_current();
> + static int64_t start_time;
> + static int64_t num_dirty_pages_period;
> + int64_t end_time;
> +
> + if (!start_time) {
> + start_time = qemu_get_clock_ms(rt_clock);
> + }
>
> trace_migration_bitmap_sync_start();
> memory_global_sync_dirty_bitmap(get_system_memory());
> @@ -386,6 +394,16 @@ static void migration_bitmap_sync(void)
> }
> trace_migration_bitmap_sync_end(migration_dirty_pages
> - num_dirty_pages_init);
> + num_dirty_pages_period += migration_dirty_pages - num_dirty_pages_init;
> + end_time = qemu_get_clock_ms(rt_clock);
> +
> +    /* more than 1 second = 1000 milliseconds */
> + if (end_time > start_time + 1000) {
> + s->dirty_pages_rate = num_dirty_pages_period * 1000
> + / (end_time - start_time);
> + start_time = end_time;
> + num_dirty_pages_period = 0;
> + }
> }
OK, this makes use of patch 6 as well. I'd still prefer changing the
save_live interface so that the expected downtime is accumulated across
all save_live functions, but feel free to ignore me.
Paolo
>
> diff --git a/hmp.c b/hmp.c
> index 71c9292..67a529a 100644
> --- a/hmp.c
> +++ b/hmp.c
> @@ -175,6 +175,10 @@ void hmp_info_migrate(Monitor *mon)
> info->ram->normal);
> monitor_printf(mon, "normal bytes: %" PRIu64 " kbytes\n",
> info->ram->normal_bytes >> 10);
> + if (info->ram->dirty_pages_rate) {
> + monitor_printf(mon, "dirty pages rate: %" PRIu64 " pages\n",
> + info->ram->dirty_pages_rate);
> + }
> }
>
> if (info->has_disk) {
> diff --git a/migration.c b/migration.c
> index 62c8fe9..05634d5 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -180,6 +180,8 @@ MigrationInfo *qmp_query_migrate(Error **errp)
> info->ram->duplicate = dup_mig_pages_transferred();
> info->ram->normal = norm_mig_pages_transferred();
> info->ram->normal_bytes = norm_mig_bytes_transferred();
> + info->ram->dirty_pages_rate = s->dirty_pages_rate;
> +
>
> if (blk_mig_active()) {
> info->has_disk = true;
> diff --git a/migration.h b/migration.h
> index 552200c..66d7f68 100644
> --- a/migration.h
> +++ b/migration.h
> @@ -42,6 +42,7 @@ struct MigrationState
> int64_t total_time;
> int64_t downtime;
> int64_t expected_downtime;
> + int64_t dirty_pages_rate;
> bool enabled_capabilities[MIGRATION_CAPABILITY_MAX];
> int64_t xbzrle_cache_size;
> };
> diff --git a/qapi-schema.json b/qapi-schema.json
> index b8a1244..4a9ae52 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -358,13 +358,17 @@
> #
> # @normal : number of normal pages (since 1.2)
> #
> -# @normal-bytes : number of normal bytes sent (since 1.2)
> +# @normal-bytes: number of normal bytes sent (since 1.2)
> +#
> +# @dirty-pages-rate: number of pages dirtied per second by the
> +#        guest (since 1.3)
> #
> # Since: 0.14.0
> ##
> { 'type': 'MigrationStats',
> 'data': {'transferred': 'int', 'remaining': 'int', 'total': 'int' ,
> - 'duplicate': 'int', 'normal': 'int', 'normal-bytes': 'int' } }
> + 'duplicate': 'int', 'normal': 'int', 'normal-bytes': 'int',
> + 'dirty-pages-rate' : 'int' } }
>
> ##
> # @XBZRLECacheStats
>
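
For context, with the schema change above a query-migrate response would carry the new field roughly as follows. This is an illustrative sketch with made-up values, not captured output; only the `dirty-pages-rate` key itself comes from the patch.

```json
{
    "return": {
        "status": "active",
        "ram": {
            "transferred": 123456789,
            "remaining": 987654321,
            "total": 1073741824,
            "duplicate": 1024,
            "normal": 30000,
            "normal-bytes": 122880000,
            "dirty-pages-rate": 2000
        }
    }
}
```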