[PATCH 09/14] migration/multifd: Rename p->num_packets and clean it up
From: peterx
Subject: [PATCH 09/14] migration/multifd: Rename p->num_packets and clean it up
Date: Wed, 31 Jan 2024 18:31:06 +0800
From: Peter Xu <peterx@redhat.com>
This field, no matter whether on src or dest, is only used for debugging
purposes.
They could even be removed already, except that they still more or less
provide some accounting on "how many packets are sent/recved for this
thread". The other, more important field is called packet_num, which is
embedded in the multifd packet headers (MultiFDPacket_t).
So let's keep them for now, but make them much easier to understand, by
doing the below:
- Rename both of them to packets_sent / packets_recved; the old
name (num_packets) is way too confusing when we already have
MultiFDPacket_t.packet_num.
- Avoid worrying about the "initial packet": we know we will send it, and
that's good enough. For the accounting it matters little whether we start
at 0 or at 1.
- Move them to where we send/recv the packets. They're:
- multifd_send_fill_packet() for senders.
- multifd_recv_unfill_packet() for receivers.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/multifd.h | 6 +++---
migration/multifd.c | 13 +++++--------
2 files changed, 8 insertions(+), 11 deletions(-)
diff --git a/migration/multifd.h b/migration/multifd.h
index 08f26ef3fe..2e4ad0dc56 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -124,7 +124,7 @@ typedef struct {
/* size of the next packet that contains pages */
uint32_t next_packet_size;
/* packets sent through this channel */
- uint64_t num_packets;
+ uint64_t packets_sent;
/* non zero pages sent through this channel */
uint64_t total_normal_pages;
/* buffers to send */
@@ -174,8 +174,8 @@ typedef struct {
MultiFDPacket_t *packet;
/* size of the next packet that contains pages */
uint32_t next_packet_size;
- /* packets sent through this channel */
- uint64_t num_packets;
+ /* packets received through this channel */
+ uint64_t packets_recved;
/* ramblock */
RAMBlock *block;
/* ramblock host address */
diff --git a/migration/multifd.c b/migration/multifd.c
index 2d12de01a1..abc2746b6e 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -288,6 +288,8 @@ static void multifd_send_fill_packet(MultiFDSendParams *p)
packet->offset[i] = cpu_to_be64(temp);
}
+
+ p->packets_sent++;
}
static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
@@ -335,6 +337,7 @@ static int multifd_recv_unfill_packet(MultiFDRecvParams *p, Error **errp)
p->next_packet_size = be32_to_cpu(packet->next_packet_size);
p->packet_num = be64_to_cpu(packet->packet_num);
+ p->packets_recved++;
if (p->normal_num == 0) {
return 0;
@@ -683,8 +686,6 @@ static void *multifd_send_thread(void *opaque)
ret = -1;
goto out;
}
- /* initial packet */
- p->num_packets = 1;
while (true) {
qemu_sem_post(&multifd_send_state->channels_ready);
@@ -714,7 +715,6 @@ static void *multifd_send_thread(void *opaque)
}
multifd_send_fill_packet(p);
- p->num_packets++;
p->total_normal_pages += pages->num;
trace_multifd_send(p->id, packet_num, pages->num, p->flags,
p->next_packet_size);
@@ -782,7 +782,7 @@ out:
rcu_unregister_thread();
migration_threads_remove(thread);
- trace_multifd_send_thread_end(p->id, p->num_packets, p->total_normal_pages);
+ trace_multifd_send_thread_end(p->id, p->packets_sent, p->total_normal_pages);
return NULL;
}
@@ -1120,7 +1120,6 @@ static void *multifd_recv_thread(void *opaque)
p->flags &= ~MULTIFD_FLAG_SYNC;
trace_multifd_recv(p->id, p->packet_num, p->normal_num, flags,
p->next_packet_size);
- p->num_packets++;
p->total_normal_pages += p->normal_num;
qemu_mutex_unlock(&p->mutex);
@@ -1146,7 +1145,7 @@ static void *multifd_recv_thread(void *opaque)
qemu_mutex_unlock(&p->mutex);
rcu_unregister_thread();
- trace_multifd_recv_thread_end(p->id, p->num_packets, p->total_normal_pages);
+ trace_multifd_recv_thread_end(p->id, p->packets_recved, p->total_normal_pages);
return NULL;
}
@@ -1248,8 +1247,6 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
}
p->c = ioc;
object_ref(OBJECT(ioc));
- /* initial packet */
- p->num_packets = 1;
p->running = true;
qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,
--
2.43.0