From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v5 07/11] curl: make use of CURLDataCache.
Date: Fri, 24 May 2013 11:44:02 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, May 24, 2013 at 11:10:26AM +0800, Fam Zheng wrote:
> On Thu, 05/23 16:23, Stefan Hajnoczi wrote:
> > On Thu, May 23, 2013 at 11:38:05AM +0800, Fam Zheng wrote:
> > > +    CURLDataCache *c = s->cache;
> > >      size_t realsize = size * nmemb;
> > > -    int i;
> > > -
> > > -    DPRINTF("CURL: Just reading %zd bytes\n", realsize);
> > > +    CURLAIOCB *acb;
> > >  
> > > -    if (!s || !s->orig_buf)
> > > +    if (!c || !c->data) {
> > >          goto read_end;
> > > +    }
> > > +    if (c->write_pos >= c->data_len) {
> > > +        goto read_end;
> > > +    }
> > > +    memcpy(c->data + c->write_pos, ptr,
> > > +           MIN(realsize, c->data_len - c->write_pos));
> > > +    c->write_pos += realsize;
> > > +    if (c->write_pos >= c->data_len) {
> > > +        c->write_pos = c->data_len;
> > > +    }
> > >  
> > > -    memcpy(s->orig_buf + s->buf_off, ptr, realsize);
> > > -    s->buf_off += realsize;
> > 
> > Why did you add MIN(realsize, c->data_len - c->write_pos)?  The original
> > code trusts realsize to be within s->orig_buf.
> 
> I don't see any evidence that it's safe here. CURL certainly doesn't know
> how much buffer we have. (man 3 curl_easy_setopt, section
> CURLOPT_WRITEFUNCTION)

The HTTP request included a Range: header so we should know the total
number of bytes we'll receive.

That said, libcurl may not check, so this is defensive programming.  A
malicious server shouldn't be able to overflow our buffer.  A comment or
note in the commit description would be nice to explain semantic changes
like this.
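
For anyone reading along in the archives, the pattern being discussed boils
down to a bounds check inside the CURLOPT_WRITEFUNCTION callback.  A minimal
sketch (the struct and names below are invented for illustration, not the
code from this series):

    #include <curl/curl.h>
    #include <string.h>

    /* Fixed-size receive buffer; never write past the end even if the
     * server sends more than the Range: header asked for. */
    struct fixed_buf {
        char   *data;   /* destination buffer */
        size_t  len;    /* total capacity */
        size_t  pos;    /* bytes written so far */
    };

    static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *opaque)
    {
        struct fixed_buf *b = opaque;
        size_t realsize = size * nmemb;
        size_t room = b->len - b->pos;
        size_t tocopy = realsize < room ? realsize : room;

        memcpy(b->data + b->pos, ptr, tocopy);
        b->pos += tocopy;

        /* Claim the whole chunk even when the excess was dropped. */
        return realsize;
    }

Hooked up with curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb) and
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buf).  Returning less than
realsize would instead make libcurl abort the transfer with
CURLE_WRITE_ERROR, which is another reasonable choice if excess data should
be treated as a hard error rather than silently dropped.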

> > > @@ -600,29 +596,41 @@ static void curl_readv_bh_cb(void *p)
> > >      // No cache found, so let's start a new request
> > >      state = curl_init_state(s);
> > >      if (!state) {
> > > -        acb->common.cb(acb->common.opaque, -EIO);
> > > -        qemu_aio_release(acb);
> > > -        return;
> > > +        goto err_release;
> > >      }
> > >  
> > > -    acb->start = 0;
> > > -    acb->end = (acb->nb_sectors * SECTOR_SIZE);
> > > -
> > > -    state->buf_off = 0;
> > > -    if (state->orig_buf)
> > > -        g_free(state->orig_buf);
> > > -    state->buf_start = start;
> > > -    state->buf_len = acb->end + s->readahead_size;
> > > -    end = MIN(start + state->buf_len, s->len) - 1;
> > > -    state->orig_buf = g_malloc(state->buf_len);
> > > -    state->acb[0] = acb;
> > > -
> > > -    snprintf(state->range, sizeof(state->range) - 1, "%zd-%zd", start, end);
> > > -    DPRINTF("CURL (AIO): Reading %d at %zd (%s)\n",
> > > -            (acb->nb_sectors * SECTOR_SIZE), start, state->range);
> > > -    curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
> > > +    cache = g_malloc0(sizeof(CURLDataCache));
> > > +    cache->base_pos = acb->sector_num * SECTOR_SIZE;
> > > +    cache->data_len = aio_bytes + s->readahead_size;
> > > +    cache->write_pos = 0;
> > > +    cache->data = g_malloc(cache->data_len);
> > >  
> > > +    QLIST_INSERT_HEAD(&s->acbs, acb, next);
> > > +    snprintf(state->range, sizeof(state->range) - 1, "%zd-%zd", cache->base_pos,
> > > +             cache->base_pos + cache->data_len);
> > > +    DPRINTF("Reading range: %s\n", state->range);
> > > +    curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
> > > +    QLIST_INSERT_HEAD(&s->cache, cache, next);
> > > +    state->cache = cache;
> > > +    cache->use_count++;
> > 
> > I don't see where you bump the use_count when a cache lookup is
> > successful.  Maybe I just missed it in the other patches.
> 
> The use count is for when the cache serves as the receiving buffer for
> submitted CURL requests. It's not necessary to bump use_count when a cache
> lookup is successful, since the data is immediately copied to the guest
> and no reference to the cache is held.

You're right.
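
For the archives, the rule then is: use_count only counts in-flight CURL
transfers that use the cache entry as their receive buffer, while a read
served from the cache copies straight into the guest buffer and holds no
reference.  A minimal sketch of that rule (names below are invented
stand-ins, not the exact fields from this series):

    #include <string.h>

    typedef struct Cache {
        char   *data;       /* downloaded bytes */
        size_t  base_pos;   /* guest offset that data[0] corresponds to */
        size_t  write_pos;  /* valid bytes received so far */
        int     use_count;  /* in-flight transfers writing into this buffer */
    } Cache;

    /* Cache hit: copy straight into the guest buffer and finish.  Nothing
     * keeps pointing at the cache afterwards, so use_count is untouched. */
    static int read_from_cache(Cache *c, char *guest_buf, size_t pos,
                               size_t bytes)
    {
        if (pos < c->base_pos || pos + bytes > c->base_pos + c->write_pos) {
            return 0;   /* miss: caller starts a new CURL transfer instead */
        }
        memcpy(guest_buf, c->data + (pos - c->base_pos), bytes);
        return 1;       /* hit served immediately */
    }

    /* Cache miss: the entry becomes the receive buffer of a new transfer
     * and must be pinned for as long as that transfer is in flight. */
    static void cache_ref(Cache *c)   { c->use_count++; }
    static void cache_unref(Cache *c) { c->use_count--; }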

Stefan


