qemu-devel
Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
Date: Wed, 5 Apr 2017 11:34:00 +0100
User-agent: Mutt/1.8.0 (2017-02-23)

* Juan Quintela (address@hidden) wrote:
> "Dr. David Alan Gilbert" <address@hidden> wrote:
> > * Juan Quintela (address@hidden) wrote:
> 
> Hi
> 
> >> @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, 
> >> MigrationState *ms,
> >>   *
> >>   * It should be empty at the end anyway, but in error cases there may
> >>   * be some left.
> >> - *
> >> - * @ms: current migration state
> >>   */
> >> -void flush_page_queue(MigrationState *ms)
> >> +void flush_page_queue(void)
> >
> > I'm not sure this is safe;  it's called from migrate_fd_cleanup right at
> > the end, if you do any finalisation/cleanup of the RAMState in
> > ram_save_complete
> > then when is it safe to run this?
> 
> But, looking into it, I think that we should be able to move this into
> ram_save_cleanup() no?
> 
> We don't need it after that?
> As an added bonus, we can remove the export of it.

As discussed on IRC: the thing I'm cautious about is getting the order
of cleanup right.
If you look at migration_completion you see we call
qemu_savevm_state_complete_postcopy() (which calls ram_save_complete)
before we call await_return_path_close_on_source, which ensures that the
thread that's handling requests from the destination and queuing them
has finished.

It seems right to make sure that thread has finished (and thus nothing
is trying to add anything to that queue) before trying to clean it up.

Dave

> >> @@ -1260,16 +1272,16 @@ int ram_save_queue_pages(MigrationState *ms, const 
> >> char *rbname,
> >>          goto err;
> >>      }
> >>  
> >> -    struct MigrationSrcPageRequest *new_entry =
> >> -        g_malloc0(sizeof(struct MigrationSrcPageRequest));
> >> +    struct RAMSrcPageRequest *new_entry =
> >> +        g_malloc0(sizeof(struct RAMSrcPageRequest));
> >>      new_entry->rb = ramblock;
> >>      new_entry->offset = start;
> >>      new_entry->len = len;
> >>  
> >>      memory_region_ref(ramblock->mr);
> >> -    qemu_mutex_lock(&ms->src_page_req_mutex);
> >> -    QSIMPLEQ_INSERT_TAIL(&ms->src_page_requests, new_entry, next_req);
> >> -    qemu_mutex_unlock(&ms->src_page_req_mutex);
> >> +    qemu_mutex_lock(&rs->src_page_req_mutex);
> >> +    QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
> >> +    qemu_mutex_unlock(&rs->src_page_req_mutex);
> >
> > Hmm ok, where did it get its rs from?
> > Anyway, the thing I needed to convince myself of was that there was
> > any guarantee that
> > RAMState would exist by the time the first request came in, something
> > that we now need
> > to be careful of.
> > I think we're mostly OK; we call qemu_savevm_state_begin() at the top
> > of migration_thread so the ram_save_setup should be done and allocate
> > the RAMState before we get into the main loop and thus before we ever
> > look at the 'start_postcopy' flag and thus before we ever ask the 
> > destination
> > to send us stuff.
> >
> >>      rcu_read_unlock();
> >>  
> >>      return 0;
> >> @@ -1408,7 +1420,7 @@ static int ram_find_and_save_block(RAMState *rs, 
> >> QEMUFile *f, bool last_stage)
> >>  
> >>      do {
> >>          again = true;
> >> -        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
> >> +        found = get_queued_page(rs, &pss, &dirty_ram_abs);
> >>  
> >>          if (!found) {
> >>              /* priority queue empty, so just search for something dirty */
> >> @@ -1968,6 +1980,8 @@ static int ram_state_init(RAMState *rs)
> >>  
> >>      memset(rs, 0, sizeof(*rs));
> >>      qemu_mutex_init(&rs->bitmap_mutex);
> >> +    qemu_mutex_init(&rs->src_page_req_mutex);
> >> +    QSIMPLEQ_INIT(&rs->src_page_requests);
> >
> > Similar question to above; that mutex is going to get reinit'd
> > on a new migration and it shouldn't be without it being destroyed.
> > Maybe make it a once.
> 
> good catch.  I think the easiest way is to just move it into RAMState
> proper, init it here, and destroy it at cleanup?
> 
> Later, Juan.
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


