qemu-devel

Re: [Qemu-devel] [PATCH 3/4] alloca one extra byte sockets


From: Riku Voipio
Subject: Re: [Qemu-devel] [PATCH 3/4] alloca one extra byte sockets
Date: Tue, 15 Jul 2014 16:29:30 +0300
User-agent: Mutt/1.5.21 (2010-09-15)

On Fri, Jul 11, 2014 at 05:18:03PM +0200, Joakim Tjernlund wrote:
> target_to_host_sockaddr() may increase the length by 1 byte
> for AF_UNIX sockets, so allocate 1 extra byte.

Thanks, applied to linux-user tree
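
For context: a target AF_UNIX address may fill sun_path completely,
with no terminating NUL, and the host-side conversion can then append
one, growing the address by a byte. The sketch below illustrates that
case; host_unix_addrlen() is a hypothetical helper written for this
note, not the actual target_to_host_sockaddr() code:

    /* Sketch of why the caller must alloca(addrlen + 1): if a target
     * AF_UNIX address uses every byte of sun_path with no trailing
     * NUL, the converted address needs one byte more than the target
     * length.  Hypothetical helper, not the QEMU implementation. */
    #include <stddef.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    static socklen_t host_unix_addrlen(const struct sockaddr_un *sa,
                                       socklen_t len)
    {
        socklen_t hdr = offsetof(struct sockaddr_un, sun_path);

        /* Non-empty, unterminated path: reserve one extra byte so
         * host-side code can treat sun_path as a C string.  This is
         * the byte that alloca(addrlen + 1) provides. */
        if (len > hdr && sa->sun_path[len - hdr - 1] != '\0') {
            len++;
        }
        return len;
    }

Allocating the extra byte unconditionally at every call site keeps the
fix local and avoids inspecting the address family before each alloca().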

> Signed-off-by: Joakim Tjernlund <address@hidden>
> ---
>  linux-user/syscall.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/linux-user/syscall.c b/linux-user/syscall.c
> index a0e1ccc..8853c4e 100644
> --- a/linux-user/syscall.c
> +++ b/linux-user/syscall.c
> @@ -1978,7 +1978,7 @@ static abi_long do_connect(int sockfd, abi_ulong target_addr,
>          return -TARGET_EINVAL;
>      }
>  
> -    addr = alloca(addrlen);
> +    addr = alloca(addrlen+1);
>  
>      ret = target_to_host_sockaddr(addr, target_addr, addrlen);
>      if (ret)
> @@ -1999,7 +1999,7 @@ static abi_long do_sendrecvmsg_locked(int fd, struct target_msghdr *msgp,
>  
>      if (msgp->msg_name) {
>          msg.msg_namelen = tswap32(msgp->msg_namelen);
> -        msg.msg_name = alloca(msg.msg_namelen);
> +        msg.msg_name = alloca(msg.msg_namelen+1);
>          ret = target_to_host_sockaddr(msg.msg_name, tswapal(msgp->msg_name),
>                                  msg.msg_namelen);
>          if (ret) {
> @@ -2262,7 +2262,7 @@ static abi_long do_sendto(int fd, abi_ulong msg, size_t len, int flags,
>      if (!host_msg)
>          return -TARGET_EFAULT;
>      if (target_addr) {
> -        addr = alloca(addrlen);
> +        addr = alloca(addrlen+1);
>          ret = target_to_host_sockaddr(addr, target_addr, addrlen);
>          if (ret) {
>              unlock_user(host_msg, msg, 0);
> -- 
> 1.8.5.5
> 
