qemu-devel


From: Michael S. Tsirkin
Subject: Re: [PATCH] virtio-net: check the existence of peer before accessing its config
Date: Mon, 27 Jul 2020 06:17:54 -0400

On Mon, Jul 27, 2020 at 05:49:47PM +0800, Jason Wang wrote:
> 
> On 2020/7/27 下午5:41, Michael S. Tsirkin wrote:
> > On Mon, Jul 27, 2020 at 03:43:28PM +0800, Jason Wang wrote:
> > > We try to get the config from the peer unconditionally, which may
> > > lead to a NULL pointer dereference. Add a check before trying to
> > > access the config.
> > > 
> > > Fixes: 108a64818e69b ("vhost-vdpa: introduce vhost-vdpa backend")
> > > Cc: Cindy Lu <lulu@redhat.com>
> > > Tested-by: Cornelia Huck <cohuck@redhat.com>
> > > Signed-off-by: Jason Wang <jasowang@redhat.com>
> > I am a bit lost here. Isn't this invoked
> > when the guest attempts to read the config?
> > With no peer, what do we return to the guest?
> 
> 
> With no peer, it just works as in the past (reads from QEMU's own
> emulated config space). With a vDPA device as its peer, it tries to
> read the config from vhost-vDPA.
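
A condensed sketch of the two paths may make this concrete. Everything
below is a stand-in (toy types and a fake_vdpa_get_config() helper, not
QEMU's real definitions); only the nc->peer guard mirrors the patch:

    /* toy_get_config.c: "emulated config space by default, device
     * config only when a vhost-vDPA peer exists" -- illustrative only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    enum { NET_CLIENT_DRIVER_VHOST_VDPA = 1 };

    typedef struct NetClientInfo { int type; } NetClientInfo;
    typedef struct NetClientState {
        struct NetClientState *peer;   /* NULL when no backend is attached */
        NetClientInfo *info;
    } NetClientState;

    /* Stand-in for vhost_net_get_config(): pretend the vDPA device
     * reports its own MAC. Returns -1 on failure, as in QEMU. */
    static int fake_vdpa_get_config(uint8_t *cfg, size_t len)
    {
        const uint8_t mac[6] = { 0x52, 0x54, 0x00, 0xaa, 0xbb, 0xcc };
        memcpy(cfg, mac, len < 6 ? len : 6);
        return 0;
    }

    static void get_config(NetClientState *nc, uint8_t *config, size_t len)
    {
        /* 1) Always fill in the emulated config space first. */
        memset(config, 0, len);

        /* 2) Overwrite with the device's own config only when a
         * vhost-vDPA peer exists. The nc->peer check is what the patch
         * adds; without it, a device with no peer dereferences NULL. */
        if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
            uint8_t devcfg[6];
            if (fake_vdpa_get_config(devcfg, sizeof(devcfg)) != -1) {
                memcpy(config, devcfg, len < 6 ? len : 6);
            }
        }
    }

    int main(void)
    {
        uint8_t cfg[6];

        NetClientState no_peer = { .peer = NULL, .info = NULL };
        get_config(&no_peer, cfg, sizeof(cfg));    /* safe: emulated path */

        NetClientInfo vdpa_info = { .type = NET_CLIENT_DRIVER_VHOST_VDPA };
        NetClientState vdpa_peer = { .peer = NULL, .info = &vdpa_info };
        NetClientState nc = { .peer = &vdpa_peer, .info = NULL };
        get_config(&nc, cfg, sizeof(cfg));         /* device path */
        printf("mac from device: %02x:%02x:...\n", cfg[0], cfg[1]);
        return 0;
    }

With the guard removed, the first call would crash exactly as the
commit message describes.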

Are there scenarios where the guest would sometimes get one and
sometimes the other? E.g. does it happen on disconnect?
If yes, that might become a problem ...

> 
> > A code comment might be helpful here.
> 
> 
> Does something like the above help?
> 
> Thanks
> 
> 
> > 
> > > ---
> > >  hw/net/virtio-net.c | 22 +++++++++++-----------
> > >  1 file changed, 11 insertions(+), 11 deletions(-)
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 4895af1cbe..935b9ef5c7 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -125,6 +125,7 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
> > >  {
> > >      VirtIONet *n = VIRTIO_NET(vdev);
> > >      struct virtio_net_config netcfg;
> > > +    NetClientState *nc = qemu_get_queue(n->nic);
> > >      int ret = 0;
> > >
> > >      memset(&netcfg, 0 , sizeof(struct virtio_net_config));
> > > @@ -142,13 +143,12 @@ static void virtio_net_get_config(VirtIODevice *vdev, uint8_t *config)
> > >                   VIRTIO_NET_RSS_SUPPORTED_HASHES);
> > >      memcpy(config, &netcfg, n->config_size);
> > >
> > > -    NetClientState *nc = qemu_get_queue(n->nic);
> > > -    if (nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > > +    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > >          ret = vhost_net_get_config(get_vhost_net(nc->peer), (uint8_t *)&netcfg,
> > > -                             n->config_size);
> > > -    if (ret != -1) {
> > > -        memcpy(config, &netcfg, n->config_size);
> > > -    }
> > > +                                   n->config_size);
> > > +        if (ret != -1) {
> > > +            memcpy(config, &netcfg, n->config_size);
> > > +        }
> > >      }
> > >  }
> > >
> > > @@ -156,6 +156,7 @@ static void virtio_net_set_config(VirtIODevice *vdev, const uint8_t *config)
> > >  {
> > >      VirtIONet *n = VIRTIO_NET(vdev);
> > >      struct virtio_net_config netcfg = {};
> > > +    NetClientState *nc = qemu_get_queue(n->nic);
> > >
> > >      memcpy(&netcfg, config, n->config_size);
> > >
> > > @@ -166,11 +167,10 @@ static void virtio_net_set_config(VirtIODevice *vdev, const uint8_t *config)
> > >          qemu_format_nic_info_str(qemu_get_queue(n->nic), n->mac);
> > >      }
> > >
> > > -    NetClientState *nc = qemu_get_queue(n->nic);
> > > -    if (nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > > -        vhost_net_set_config(get_vhost_net(nc->peer), (uint8_t *)&netcfg,
> > > -                               0, n->config_size,
> > > -                        VHOST_SET_CONFIG_TYPE_MASTER);
> > > +    if (nc->peer && nc->peer->info->type == NET_CLIENT_DRIVER_VHOST_VDPA) {
> > > +        vhost_net_set_config(get_vhost_net(nc->peer),
> > > +                             (uint8_t *)&netcfg, 0, n->config_size,
> > > +                             VHOST_SET_CONFIG_TYPE_MASTER);
> > >      }
> > >  }
> > > --
> > > 2.20.1



