Re: [PATCH 1/2] migration/rdma: Increase the backlog from 5 to 128


From: Dr. David Alan Gilbert
Subject: Re: [PATCH 1/2] migration/rdma: Increase the backlog from 5 to 128
Date: Wed, 2 Feb 2022 09:20:03 +0000
User-agent: Mutt/2.1.5 (2021-12-30)

* Pankaj Gupta (pankaj.gupta@ionos.com) wrote:
> > > > >  migration/rdma.c | 2 +-
> > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/migration/rdma.c b/migration/rdma.c
> > > > > index c7c7a384875b..2e223170d06d 100644
> > > > > --- a/migration/rdma.c
> > > > > +++ b/migration/rdma.c
> > > > > @@ -4238,7 +4238,7 @@ void rdma_start_incoming_migration(const char 
> > > > > *host_port, Error **errp)
> > > > >
> > > > >      trace_rdma_start_incoming_migration_after_dest_init();
> > > > >
> > > > > -    ret = rdma_listen(rdma->listen_id, 5);
> > > > > +    ret = rdma_listen(rdma->listen_id, 128);
> > > >
> > > > A backlog of 128 seems too much to me. Any rationale for choosing
> > > > this number?
> > > >
> > > 128 is the default value of SOMAXCONN, I can use that if it is preferred.
> >
> > AFAICS backlog is only applicable with RDMA iWARP CM mode. Maybe we
> > can increase it to 128.
> 
> Or maybe we first increase it to 20 or 32 or so, to avoid the memory
> overhead if we are not using that many connections at the same time.

Can you explain why you're requiring more than 1?  Is this with the
multifd patches?

Dave

> > Could you also share any testing data for multiple concurrent live
> > migrations using RDMA, please?
> >
> > Thanks,
> > Pankaj
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
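
For reference, a minimal sketch of the variant Pankaj suggests upthread:
passing SOMAXCONN (from <sys/socket.h>) to rdma_listen() instead of a
hard-coded backlog. The helper name start_listen below is hypothetical,
not QEMU's actual code, which calls rdma_listen() directly from
rdma_start_incoming_migration():

    #include <sys/socket.h>     /* SOMAXCONN */
    #include <rdma/rdma_cma.h>  /* rdma_listen(), struct rdma_cm_id */

    /* Sketch only: listen for incoming RDMA CM connection requests.
     * Per the discussion above, the backlog is only applicable in
     * iWARP CM mode; using SOMAXCONN avoids a magic number either
     * way. */
    static int start_listen(struct rdma_cm_id *listen_id)
    {
        return rdma_listen(listen_id, SOMAXCONN);
    }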



