
From: Holger Schranz
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH] iscsi: Fix divide-by-zero regression on raw SG devices
Date: Wed, 7 Sep 2016 05:15:46 +0200
User-agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0

Hi Eric,

I will test this patch by tomorrow. Is that OK for you?

Best regards

Holger

On 06.09.2016 at 21:04, Eric Blake wrote:
When qemu uses iscsi devices in sg mode, iscsilun->block_size
is left at 0.  Prior to commits cf081fca and similar, when
block limits were tracked in sectors, this did not matter:
various block limits were just left at 0.  But when we started
scaling by block size, this caused SIGFPE.

One possible solution for SG mode is to just blindly skip ALL
of iscsi_refresh_limits(), since we already short circuit so
many other things in sg mode.  But this patch takes a slightly
more conservative approach, and merely guarantees that scaling
will succeed (for SG devices, the scaling is done to the block
layer default of 512 bytes, since we don't know of any iscsi
devices with a smaller block size), while still using multiples
of the original size.  Resulting limits may still be zero in SG
mode (that is, we only fix block_size used as a denominator, not
all uses).

Reported-by: Holger Schranz <address@hidden>
Signed-off-by: Eric Blake <address@hidden>
CC: address@hidden
---

I would really appreciate Holger testing this patch. We could
also go with the much shorter patch that just does
if (bs->sg) { return; }
at the top of iscsi_refresh_limits(), but I'm not sure if that
would break anything else in the block layer (we had several,
but not all, limits that were provably left alone at 0 for
sg mode).

  block/iscsi.c | 12 ++++++++----
  1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/block/iscsi.c b/block/iscsi.c
index 95ce9e1..ff84cd1 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -1813,6 +1813,10 @@ static void iscsi_refresh_limits(BlockDriverState *bs, Error **errp)

      IscsiLun *iscsilun = bs->opaque;
      uint64_t max_xfer_len = iscsilun->use_16_for_rw ? 0xffffffff : 0xffff;
+    unsigned int block_size = MIN_NON_ZERO(BDRV_SECTOR_SIZE,
+                                           iscsilun->block_size);
+
+    assert(iscsilun->block_size >= BDRV_SECTOR_SIZE || bs->sg);

      bs->bl.request_alignment = iscsilun->block_size;

@@ -1820,12 +1824,12 @@ static void iscsi_refresh_limits(BlockDriverState *bs, Error **errp)
          max_xfer_len = MIN(max_xfer_len, iscsilun->bl.max_xfer_len);
      }

-    if (max_xfer_len * iscsilun->block_size < INT_MAX) {
+    if (max_xfer_len * block_size < INT_MAX) {
          bs->bl.max_transfer = max_xfer_len * iscsilun->block_size;
      }

      if (iscsilun->lbp.lbpu) {
-        if (iscsilun->bl.max_unmap < 0xffffffff / iscsilun->block_size) {
+        if (iscsilun->bl.max_unmap < 0xffffffff / block_size) {
              bs->bl.max_pdiscard =
                  iscsilun->bl.max_unmap * iscsilun->block_size;
          }
@@ -1835,7 +1839,7 @@ static void iscsi_refresh_limits(BlockDriverState *bs, Error **errp)
          bs->bl.pdiscard_alignment = iscsilun->block_size;
      }

-    if (iscsilun->bl.max_ws_len < 0xffffffff / iscsilun->block_size) {
+    if (iscsilun->bl.max_ws_len < 0xffffffff / block_size) {
          bs->bl.max_pwrite_zeroes =
              iscsilun->bl.max_ws_len * iscsilun->block_size;
      }
@@ -1846,7 +1850,7 @@ static void iscsi_refresh_limits(BlockDriverState *bs, Error **errp)
          bs->bl.pwrite_zeroes_alignment = iscsilun->block_size;
      }
      if (iscsilun->bl.opt_xfer_len &&
-        iscsilun->bl.opt_xfer_len < INT_MAX / iscsilun->block_size) {
+        iscsilun->bl.opt_xfer_len < INT_MAX / block_size) {
          bs->bl.opt_transfer = pow2floor(iscsilun->bl.opt_xfer_len *
                                          iscsilun->block_size);
      }

