Re: [Qemu-devel] Question about add AF_ALG backend for virtio-crypto


From: Daniel P. Berrange
Subject: Re: [Qemu-devel] Question about add AF_ALG backend for virtio-crypto
Date: Thu, 9 Feb 2017 11:22:56 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Thu, Feb 09, 2017 at 07:03:57PM +0800, Longpeng (Mike) wrote:
> Hi Daniel,
> 
> On 2017/2/9 18:11, Daniel P. Berrange wrote:
> 
> > On Thu, Feb 09, 2017 at 10:58:55AM +0800, Longpeng (Mike) wrote:
> >> Hi Daniel,
> 
> 
> >>
> >> So...you prefer approach 1 with a driver-table dispatch layer, right?
> >> And this implies that we must either rename some public methods in
> >> cipher-gcrypt.c/cipher-nettle.c, or change them to 'static'.
> > 
> > I'd suggest both - renaming them to have 'gcrypt' or 'nettle' in their
> > name, and also make them static.
> > 
> 
> 
> OK.
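
A minimal sketch of what that driver-table layer might look like, purely for
illustration (the type and field names below are assumptions for the sketch,
not existing QEMU code):

    /* Illustrative only: names are assumptions, not existing QEMU code. */
    #include <stddef.h>
    #include <stdint.h>

    typedef struct Error Error;
    typedef struct QCryptoCipher QCryptoCipher;

    /* Each backend (nettle, gcrypt, afalg) fills in one table with its
     * now-static implementation functions. */
    typedef struct QCryptoCipherDriver {
        int (*cipher_setiv)(QCryptoCipher *cipher,
                            const uint8_t *iv, size_t niv, Error **errp);
        int (*cipher_encrypt)(QCryptoCipher *cipher,
                              const void *in, void *out,
                              size_t len, Error **errp);
        int (*cipher_decrypt)(QCryptoCipher *cipher,
                              const void *in, void *out,
                              size_t len, Error **errp);
        void (*cipher_free)(QCryptoCipher *cipher);
    } QCryptoCipherDriver;

    struct QCryptoCipher {
        int alg;
        int mode;
        const QCryptoCipherDriver *driver; /* chosen at creation time */
        void *opaque;                      /* backend-private state */
    };

    /* The public entry points then become one-line dispatches. */
    int qcrypto_cipher_encrypt(QCryptoCipher *cipher,
                               const void *in, void *out,
                               size_t len, Error **errp)
    {
        return cipher->driver->cipher_encrypt(cipher, in, out, len, errp);
    }

With something along these lines, the gcrypt/nettle functions only need to be
renamed and made static inside their own files, and an AF_ALG backend can plug
in a third table.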
> 
> >> I also have some other ideas:
> >>
> 
> >> 2) *maybe we need a heuristic policy*
> >>
> >> I added some speed tests in test-crypto-cipher/hash and found that for big
> >> packets AF_ALG is much faster than library-impl, while library-impl is
> >> better when the packets are small:
> >>
> >> packet (bytes)     AF_ALG (MB/sec, Intel QAT)      library-impl (MB/sec)
> >> 512                53.68                           127.82
> >> 1024               98.39                           133.21
> >> 2048               167.56                          134.62
> >> 4096               276.21                          135.10
> >> 8192               410.80                          135.82
> >> 16384              545.08                          136.01
> >> 32768              654.49                          136.30
> >> 65536              723.00                          136.29
> >>
> >> If an @alg is supported by both AF_ALG and library-impl, I think we should
> >> decide which one to use dynamically.
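
A rough sketch of such a heuristic, assuming the decision can be made at a
point where the request size is known (the 2048-byte crossover is just read
off the table above, and the helper name is invented for illustration):

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical crossover point, read off the table above; it would
     * really need to be measured (or configured) per host/accelerator. */
    #define QCRYPTO_AFALG_MIN_CHUNK 2048

    static bool qcrypto_prefer_afalg(size_t chunk_len,
                                     bool afalg_supported,
                                     bool lib_supported)
    {
        if (!afalg_supported) {
            return false;
        }
        if (!lib_supported) {
            return true;
        }
        /* Both available: AF_ALG only wins once the per-request syscall
         * overhead is amortised over a large enough chunk. */
        return chunk_len >= QCRYPTO_AFALG_MIN_CHUNK;
    }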
> > 
> > What exactly are you measuring here?
> > 
> > Is this comparing encryption of a fixed total size of data while
> > varying the packet size, i.e. sending 1024 * 512 byte packets against
> > 256 * 2048 byte packets?
> > 
> > Or is it sending a constant number of packets, e.g. 1024 * 512 byte
> > packets against 1024 * 2048 byte packets?
> > 
> 
> 
> The testcase encrypts data for 5 seconds and then calculates how many MB it
> can encrypt per second, as below:
> 
>     g_test_timer_start();
>     do {
>         g_assert(qcrypto_cipher_setiv(cipher,
>                                       iv, niv,
>                                       &err) == 0);
> 
>         g_assert(qcrypto_cipher_encrypt(cipher,
>                                         plaintext,
>                                         ciphertext,
>                                         chunk_size,
>                                         &err) == 0);
>         total += chunk_size;
>     } while (g_test_timer_elapsed() < 5.0);
> 
>     total /= 1024 * 1024; /* to MB */
>     g_print("Testing cbc(aes128): ");
>     g_print("Encrypting in chunks of %ld bytes: ", chunk_size);
>     g_print("done. %.2f MB in %.2f secs: ", total, g_test_timer_last());
>     g_print("%.2f MB/sec\t", total / g_test_timer_last());
> 
> chunk_size = 512/1024/2048/.../65536 bytes.
> 
> Some other projects (e.g. cryptodev-linux, libkcapi) also measure speed this
> way.

I'd be interested to know if there's any difference in the results if
you put the qcrypto_cipher_setiv() call outside of the loop.

Depending on the way the API is used, you might set an IV, and then
write many chunks of data before setting the next IV.

Of course for LUKS, we set a new IV every 512 bytes of data, since IVs
are tied to disk sectors, so we'd be hitting the small chunk size
code path.
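
For comparison, the suggested variant of the timing loop, with the IV set once
before the loop rather than per chunk (same qcrypto calls as in the snippet
above), would look roughly like:

    /* Set the IV once up front, then time only the encrypt calls. */
    g_assert(qcrypto_cipher_setiv(cipher, iv, niv, &err) == 0);

    g_test_timer_start();
    do {
        g_assert(qcrypto_cipher_encrypt(cipher,
                                        plaintext,
                                        ciphertext,
                                        chunk_size,
                                        &err) == 0);
        total += chunk_size;
    } while (g_test_timer_elapsed() < 5.0);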

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|


