From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH v3 05/10] block/dirty-bitmap: switch _next_dirty_area and _next_zero to int64_t
Date: Mon, 20 Jan 2020 12:28:25 +0000

20.01.2020 14:59, Max Reitz wrote:
> On 19.12.19 11:03, Vladimir Sementsov-Ogievskiy wrote:
>> We are going to introduce bdrv_dirty_bitmap_next_dirty so that the same
>> variable may be used to store its return value and to be its parameter,
>> so it should be int64_t.
>>
>> Similarly, we are going to refactor hbitmap_next_dirty_area to use
>> hbitmap_next_dirty together with hbitmap_next_zero, therefore we want
>> hbitmap_next_zero parameter type to be int64_t too.
>>
>> So, for convenience update all parameters of *_next_zero and
>> *_next_dirty_area to be int64_t.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> ---
>>   include/block/dirty-bitmap.h |  6 +++---
>>   include/qemu/hbitmap.h       |  7 +++----
>>   block/dirty-bitmap.c         |  6 +++---
>>   nbd/server.c                 |  2 +-
>>   tests/test-hbitmap.c         | 32 ++++++++++++++++----------------
>>   util/hbitmap.c               | 13 ++++++++-----
>>   6 files changed, 34 insertions(+), 32 deletions(-)
> 
> [...]
> 
>> diff --git a/util/hbitmap.c b/util/hbitmap.c
>> index b6d4b99a06..df22f06be6 100644
>> --- a/util/hbitmap.c
>> +++ b/util/hbitmap.c
>> @@ -193,7 +193,7 @@ void hbitmap_iter_init(HBitmapIter *hbi, const HBitmap *hb, uint64_t first)
>>       }
>>   }
>>   
>> -int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count)
>> +int64_t hbitmap_next_zero(const HBitmap *hb, int64_t start, int64_t count)
>>   {
>>       size_t pos = (start >> hb->granularity) >> BITS_PER_LEVEL;
>>       unsigned long *last_lev = hb->levels[HBITMAP_LEVELS - 1];
>> @@ -202,6 +202,8 @@ int64_t hbitmap_next_zero(const HBitmap *hb, uint64_t start, uint64_t count)
>>       uint64_t end_bit, sz;
>>       int64_t res;
>>   
>> +    assert(start >= 0 && count >= 0);
>> +
>>       if (start >= hb->orig_size || count == 0) {
>>           return -1;
>>       }
> As far as I can see, NBD just passes NBDRequest.from (which is a
> uint64_t) to this function (on NBD_CMD_BLOCK_STATUS).  Would this allow
> a malicious client to send a value > INT64_MAX, thus provoking an
> overflow and killing the server with this new assertion?
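
(A minimal standalone sketch of the concern above, with hypothetical names rather
than real QEMU code: an offset above INT64_MAX turns negative when converted to
the new int64_t parameter, so the added assertion would abort the server.)

#include <assert.h>
#include <stdint.h>

static int64_t next_zero_stub(int64_t start, int64_t count)
{
    assert(start >= 0 && count >= 0);   /* the assertion added by the patch */
    return -1;                          /* result is irrelevant here */
}

int main(void)
{
    uint64_t from = UINT64_MAX;     /* offset a malicious client could send */
    int64_t start = (int64_t)from;  /* wraps to -1 on two's-complement targets */

    next_zero_stub(start, 0);       /* assertion failure: process aborts */
    return 0;
}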


in nbd_co_receive_request() we have


     if (request->from > client->exp->size ||
         request->len > client->exp->size - request->from) {


So, we check that from is <= exp->size, and exp->size can't be greater than
INT64_MAX, as it is derived from bdrv_getlength(), which returns int64_t.
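
To make that concrete, a minimal standalone sketch (hypothetical helper, not the
real server code): because exp->size comes from bdrv_getlength(), which returns
int64_t, any "from" that passes the existing check is <= INT64_MAX and is safe
to convert to int64_t later.

#include <stdbool.h>
#include <stdint.h>

/* exp_size <= INT64_MAX because it comes from bdrv_getlength() (int64_t). */
static bool request_in_bounds(uint64_t from, uint32_t len, int64_t exp_size)
{
    if (from > (uint64_t)exp_size || len > (uint64_t)exp_size - from) {
        return false;   /* rejected with "operation past EOF" */
    }
    /* Here from <= exp_size <= INT64_MAX, so (int64_t)from is non-negative
     * and cannot trip the new assertion in hbitmap_next_zero(). */
    return true;
}

int main(void)
{
    /* A request with from > INT64_MAX is rejected before reaching the bitmap. */
    return request_in_bounds(UINT64_MAX, 0, INT64_MAX) ? 1 : 0;
}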



Interesting: should we be more strict in the server?

--- a/nbd/server.c
+++ b/nbd/server.c
@@ -2178,7 +2178,7 @@ static int nbd_co_receive_request(NBDRequestData *req, NBDRequest *request,
          error_setg(errp, "Export is read-only");
          return -EROFS;
      }
-    if (request->from > client->exp->size ||
+    if (request->from >= client->exp->size ||
          request->len > client->exp->size - request->from) {
         error_setg(errp, "operation past EOF; From: %" PRIu64 ", Len: %" PRIu32
                     ", Size: %" PRIu64, request->from, request->len,

Or is it intentional? Looking through NBD spec I found only

    client MUST NOT use a length ... or which, when added to offset, would exceed the export size.

So, formally, the pair offset=<export size>, len=0 is valid...
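
(A tiny standalone sketch of that boundary case, with hypothetical helpers
rather than the real server code: the current '>' check accepts the pair
offset=<export size>, len=0, as the spec wording allows, while the '>='
variant above would reject it.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool accept_current(uint64_t from, uint32_t len, uint64_t size)
{
    return !(from > size || len > size - from);    /* '>' as in nbd/server.c */
}

static bool accept_strict(uint64_t from, uint32_t len, uint64_t size)
{
    return !(from >= size || len > size - from);   /* proposed '>=' variant */
}

int main(void)
{
    uint64_t size = 1024;

    /* offset == export size, len == 0 */
    printf("current: %d, strict: %d\n",
           accept_current(size, 0, size),          /* 1: accepted */
           accept_strict(size, 0, size));          /* 0: rejected */
    return 0;
}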

> 
> On second thought, we have this problem already everywhere in
> nbd_handle_request().  I don’t see it or its caller ever checking
> whether the received values are in bounds, it just passes them to all
> kind of block layer functions that sometimes even just accept plain
> ints.  Well, I suppose all other functions just error out, so it
> probably isn’t an actual problem in practice so far...
> 
> Max
> 


-- 
Best regards,
Vladimir


