From: M. Mohan Kumar
Subject: [Qemu-devel] Re: [V4 PATCH 3/8] Add client side interfaces for chroot environment
Date: Tue, 8 Feb 2011 21:51:07 +0530
User-agent: KMail/1.13.5 (Linux/2.6.35.10-74.fc14.i686.PAE; KDE/4.5.4; i686; ; )

On Wednesday 02 February 2011 3:24:16 pm Stefan Hajnoczi wrote:
> On Tue, Feb 1, 2011 at 5:26 AM, M. Mohan Kumar <address@hidden> wrote:
> > +/* Receive file descriptor and error status from chroot process */
> > +static int v9fs_receivefd(int sockfd, int *error)
> 
> The return value and int *error overlap in functionality.  Would it be
> possible to have only one mechanism for returning errors?
> 

v9fs_receivefd() returns the fd on success, returns -EIO (and sets *error to
EIO) on a socket error, and returns -1 on other errors.

We need to return -EIO and set *error to EIO to differentiate a socket
read/write failure from a generic EIO error.
 
> *error = 0 is never done so a caller that passes an uninitialized
> local variable gets back junk when the function succeeds.  It would be
> safer to clear it at the start of this function.
I will update the patch.
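
Something like this, with the error cleared up front (a rough sketch of the
convention, not the final patch; recv_fd_info() here is a hypothetical
stand-in for the recvmsg()/SCM_RIGHTS handling):

#include <errno.h>

/* Hypothetical helper: stands in for the recvmsg()/SCM_RIGHTS code in
 * the actual patch.  Fills in the received fd and the chroot process's
 * status; returns -1 on a socket failure. */
static int recv_fd_info(int sockfd, int *fd, int *status);

static int v9fs_receivefd(int sockfd, int *error)
{
    int fd, status;

    *error = 0;                 /* clear so callers never see junk on success */

    if (recv_fd_info(sockfd, &fd, &status) < 0) {
        *error = EIO;           /* socket read/write failure */
        return -EIO;
    }
    if (status) {
        return -1;              /* chroot process reported an error */
    }
    return fd;                  /* success: the received descriptor */
}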
> 
> Inconsistent use of errno constants and -1:
> return -EIO;
> return -1; /* == -EPERM, probably not what you wanted */

Here -1 just denotes a generic failure; it is not meant to be read as -EPERM.

> 
> How about getting rid of int *error and returning the -errno?  If
> if_error is set then return -fd_info.error.

Do you mean return the fd on success and -errno on failure? In that case, how
do we differentiate between an actual EIO and an EIO caused by a socket
read/write failure?
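
To make the ambiguity concrete (hypothetical caller code):

int fd, error;

fd = v9fs_receivefd(sockfd, &error);
if (fd == -EIO) {
    /* Was this a socket failure, or did a system call in the chroot
     * process itself fail with EIO?  A lone -errno cannot tell them
     * apart; the separate *error flag is what disambiguates. */
}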

> 
> > +/*
> > + * V9fsFileObjectRequest is written into the socket by QEMU process.
> > + * Then this request is read by chroot process using read_request
> > + * function
> > + */
> > +static int v9fs_write_request(int sockfd, V9fsFileObjectRequest *request)
> > +{
> > +    int retval, length;
> > +    char *buff, *buffp;
> > +
> > +    length = sizeof(request->data) + request->data.path_len +
> > +                    request->data.oldpath_len;
> > +
> > +    buff = qemu_malloc(length);
> > +    buffp = buff;
> > +    memcpy(buffp, &request->data, sizeof(request->data));
> > +    buffp += sizeof(request->data);
> > +    memcpy(buffp, request->path.path, request->data.path_len);
> > +    buffp += request->data.path_len;
> > +    memcpy(buffp, request->path.old_path, request->data.oldpath_len);
> > +
> > +    retval = qemu_write_full(sockfd, buff, length);
> 
> qemu_free(buff);
> 
> Also, weren't you doing the malloc() + single write() to avoid
> interleaved write()?  Is that still necessary, I thought a mutex was
> introduced?  It's probably worth adding a comment to explain why
> you're doing the malloc + write.

This is done to avoid multiple write system calls. I will update the patch
with a comment explaining that.
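
For reference, a sketch of what the updated function could look like (the
return handling after the write is my assumption, since the hunk above is
cut off there):

static int v9fs_write_request(int sockfd, V9fsFileObjectRequest *request)
{
    int retval, length;
    char *buff, *buffp;

    length = sizeof(request->data) + request->data.path_len +
                    request->data.oldpath_len;

    /*
     * Marshal the fixed-size header and both path strings into a
     * single buffer: one write keeps the request contiguous on the
     * socket and avoids three separate write system calls.
     */
    buff = qemu_malloc(length);
    buffp = buff;
    memcpy(buffp, &request->data, sizeof(request->data));
    buffp += sizeof(request->data);
    memcpy(buffp, request->path.path, request->data.path_len);
    buffp += request->data.path_len;
    memcpy(buffp, request->path.old_path, request->data.oldpath_len);

    retval = qemu_write_full(sockfd, buff, length);
    qemu_free(buff);
    if (retval != length) {
        return -EIO;
    }
    return 0;
}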

----
M. Mohan Kumar


