Re: [PATCH 0/5] Live Migration Acceleration with IAA Compression


From: Daniel P. Berrangé
Subject: Re: [PATCH 0/5] Live Migration Acceleration with IAA Compression
Date: Thu, 19 Oct 2023 16:32:08 +0100
User-agent: Mutt/2.2.9 (2022-11-12)

On Thu, Oct 19, 2023 at 11:23:31AM -0400, Peter Xu wrote:
> On Thu, Oct 19, 2023 at 03:52:14PM +0100, Daniel P. Berrangé wrote:
> > On Thu, Oct 19, 2023 at 01:40:23PM +0200, Juan Quintela wrote:
> > > Yuan Liu <yuan1.liu@intel.com> wrote:
> > > > Hi,
> > > >
> > > > I am writing to submit a code change aimed at enhancing live migration
> > > > acceleration by leveraging the compression capability of the Intel
> > > > In-Memory Analytics Accelerator (IAA).
> > > >
> > > > Enabling compression functionality during the live migration process can
> > > > enhance performance, thereby reducing downtime and network bandwidth
> > > > requirements. However, this improvement comes at the cost of additional
> > > > CPU resources, posing a challenge for cloud service providers in terms
> > > > of resource allocation. To address this challenge, I have focused on
> > > > offloading the compression overhead to the IAA hardware, resulting in
> > > > performance gains.
> > > >
> > > > The implementation of the IAA (de)compression code is based on the Intel
> > > > Query Processing Library (QPL), an open-source software project designed
> > > > for IAA high-level software programming.
> > > >
> > > > Best regards,
> > > > Yuan Liu
> > > 
> > > After reviewing the patches:
> > > 
> > > - why are you doing this on top of the old compression code, which is
> > >   obsolete, deprecated and buggy?
> > > 
> > > - why are you not doing it on top of multifd?
> > > 
> > > You just need to add another compression method on top of multifd.
> > > See how it was done for zstd:
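(For illustration: a multifd compression method is a small ops table registered
at init time.  The sketch below shows what a hypothetical 'qpl' backend modelled
on the existing migration/multifd-zstd.c could look like; all of the qpl_*
callbacks and the MULTIFD_COMPRESSION_QPL enum value are invented for the
example, and the exact MultiFDMethods field names are from memory, so check
them against the current tree.)

  /* Hypothetical multifd backend, modelled on migration/multifd-zstd.c.
   * None of these qpl_* callbacks exist yet; they are placeholders. */
  static MultiFDMethods multifd_qpl_ops = {
      .send_setup   = qpl_send_setup,
      .send_cleanup = qpl_send_cleanup,
      .send_prepare = qpl_send_prepare,
      .recv_setup   = qpl_recv_setup,
      .recv_cleanup = qpl_recv_cleanup,
      .recv_pages   = qpl_recv_pages,
  };

  static void multifd_qpl_register(void)
  {
      /* MULTIFD_COMPRESSION_QPL would be a new value in the QAPI enum */
      multifd_register_ops(MULTIFD_COMPRESSION_QPL, &multifd_qpl_ops);
  }

  migration_init(multifd_qpl_register);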
> > 
> > I'm not sure that is the ideal approach.  IIUC, the IAA/QPL library
> > is not defining a new compression format. Rather, it is providing
> > a hardware accelerator for the 'deflate' format, which can be made
> > compatible with zlib:
> > 
> >   https://intel.github.io/qpl/documentation/dev_guide_docs/c_use_cases/deflate/c_deflate_zlib_gzip.html#zlib-and-gzip-compatibility-reference-link
> > 
> > With multifd we already have a 'zlib' compression format, and so
> > this IAA/QPL logic would effectively just be providing a second
> > implementation of zlib.
> > 
> > Given the use of a standard format, I would expect to be able
> > to use software zlib on the src, mixed with IAA/QPL zlib on
> > the target, or vice versa.
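(To make that interop concrete, here is a rough, untested sketch of compressing
a buffer with QPL and inflating the result with stock software zlib.  It
assumes QPL's one-shot C job API and that QPL emits a raw deflate stream by
default, hence windowBits = -15 on the zlib side; treat the exact flags as
assumptions.)

  #include "qpl/qpl.h"
  #include <zlib.h>
  #include <stdint.h>
  #include <stdlib.h>

  /* Compress src into dst on the IAA device, producing a raw deflate stream */
  static int qpl_deflate(const uint8_t *src, uint32_t src_len,
                         uint8_t *dst, uint32_t *dst_len)
  {
      uint32_t job_size = 0;
      qpl_job *job;

      qpl_get_job_size(qpl_path_hardware, &job_size);
      job = malloc(job_size);
      qpl_init_job(qpl_path_hardware, job);

      job->op            = qpl_op_compress;
      job->level         = qpl_default_level;
      job->next_in_ptr   = (uint8_t *)src;
      job->available_in  = src_len;
      job->next_out_ptr  = dst;
      job->available_out = *dst_len;
      job->flags         = QPL_FLAG_FIRST | QPL_FLAG_LAST |
                           QPL_FLAG_DYNAMIC_HUFFMAN;

      if (qpl_execute_job(job) != QPL_STS_OK) {
          free(job);
          return -1;
      }
      *dst_len = job->total_out;
      qpl_fini_job(job);
      free(job);
      return 0;
  }

  /* Inflate the same stream with plain software zlib on the other side */
  static int zlib_inflate_raw(const uint8_t *src, uint32_t src_len,
                              uint8_t *dst, uint32_t dst_len)
  {
      z_stream zs = { 0 };
      int ret;

      if (inflateInit2(&zs, -15) != Z_OK) {  /* -15: raw deflate, no wrapper */
          return -1;
      }
      zs.next_in   = (Bytef *)src;
      zs.avail_in  = src_len;
      zs.next_out  = dst;
      zs.avail_out = dst_len;
      ret = inflate(&zs, Z_FINISH);
      inflateEnd(&zs);
      return ret == Z_STREAM_END ? 0 : -1;
  }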
> > 
> > IOW, rather than defining a new compression format for this,
> > I think we could look at a new migration parameter for
> > 
> > "compression-accelerator": ["auto", "none", "qpl"]
> > 
> > with 'auto' the default, such that we can automatically enable
> > IAA/QPL when 'zlib' format is requested, if running on a suitable
> > host.
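(Sketching what that could look like on the multifd side; the names below are
entirely hypothetical: multifd_qpl_ops, qpl_hw_available() and the accelerator
enum do not exist, this only illustrates the proposed semantics.  Since the
wire format stays plain deflate either way, src and dst could make the choice
independently.)

  /* Hypothetical: pick the implementation backing the 'zlib' multifd format */
  static MultiFDMethods *multifd_zlib_pick_ops(MigrationState *s)
  {
      switch (s->parameters.compression_accelerator) {
      case COMPRESSION_ACCELERATOR_QPL:
          return &multifd_qpl_ops;            /* force IAA/QPL */
      case COMPRESSION_ACCELERATOR_NONE:
          return &multifd_zlib_ops;           /* force software zlib */
      case COMPRESSION_ACCELERATOR_AUTO:
      default:
          /* use the accelerator only when the host has a usable IAA device */
          return qpl_hw_available() ? &multifd_qpl_ops : &multifd_zlib_ops;
      }
  }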
> 
> I was also curious, when reading, about how the compression format compares
> to the software ones.
> 
> Would there be a use case where one would prefer software compression even
> if a hardware accelerator existed, on either the src or dst side?
> 
> I'm wondering whether we can avoid that extra parameter and always use
> hardware acceleration whenever possible.

Yeah, I did wonder about whether we could avoid a parameter, but then
I think it is good to have an escape hatch in case we find any flaws
in the QPL library's implementation of deflate() that cause interop
problems.

With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



