Re: [PULL v2 39/58] Transmit vhost-user memory regions individually
From: Peter Maydell
Subject: Re: [PULL v2 39/58] Transmit vhost-user memory regions individually
Date: Fri, 19 Jun 2020 14:02:01 +0100
On Fri, 12 Jun 2020 at 15:52, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> From: Raphael Norwitz <raphael.norwitz@nutanix.com>
>
> With this change, when the VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS
> protocol feature has been negotiated, Qemu no longer sends the backend
> all the memory regions in a single message. Rather, when the memory
> tables are set or updated, a series of VHOST_USER_ADD_MEM_REG and
> VHOST_USER_REM_MEM_REG messages are sent to transmit the regions to map
> and/or unmap, instead of sending all the regions in one fixed-size
> VHOST_USER_SET_MEM_TABLE message.
Hi; Coverity reports some issues with this change, which are
basically the same as the issue I noted in my other email.
> +static int send_remove_regions(struct vhost_dev *dev,
> + struct scrub_regions *remove_reg,
> + int nr_rem_reg, VhostUserMsg *msg,
> + bool reply_supported)
> +{
> + struct vhost_user *u = dev->opaque;
> + struct vhost_memory_region *shadow_reg;
> + int i, fd, shadow_reg_idx, ret;
> + ram_addr_t offset;
> + VhostUserMemoryRegion region_buffer;
Here region_buffer is uninitialized...
> +
> + /*
> + * The regions in remove_reg appear in the same order they do in the
> + * shadow table. Therefore we can minimize memory copies by iterating
> + * through remove_reg backwards.
> + */
> + for (i = nr_rem_reg - 1; i >= 0; i--) {
> + shadow_reg = remove_reg[i].region;
> + shadow_reg_idx = remove_reg[i].reg_idx;
> +
> + vhost_user_get_mr_data(shadow_reg->userspace_addr, &offset, &fd);
> +
> + if (fd > 0) {
> + msg->hdr.request = VHOST_USER_REM_MEM_REG;
> +            vhost_user_fill_msg_region(&region_buffer, shadow_reg);
...we pass it to vhost_user_fill_msg_region(), but that function
only initializes 3 out of 4 of the struct's fields...
> + msg->payload.mem_reg.region = region_buffer;
...so here we copy the uninitialized region_buffer.mmap_offset.
(CID 1429803)
I think in this case we are genuinely going to use uninitialized
data, unlike the other two places.
> +static int send_add_regions(struct vhost_dev *dev,
> + struct scrub_regions *add_reg, int nr_add_reg,
> + VhostUserMsg *msg, uint64_t *shadow_pcb,
> + bool reply_supported, bool track_ramblocks)
> +{
> + struct vhost_user *u = dev->opaque;
> + int i, fd, ret, reg_idx, reg_fd_idx;
> + struct vhost_memory_region *reg;
> + MemoryRegion *mr;
> + ram_addr_t offset;
> + VhostUserMsg msg_reply;
> + VhostUserMemoryRegion region_buffer;
Similarly, here region_buffer is uninitialized...
> +
> + for (i = 0; i < nr_add_reg; i++) {
> + reg = add_reg[i].region;
> + reg_idx = add_reg[i].reg_idx;
> + reg_fd_idx = add_reg[i].fd_idx;
> +
> + mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd);
> +
> + if (fd > 0) {
> + if (track_ramblocks) {
> + trace_vhost_user_set_mem_table_withfd(reg_fd_idx, mr->name,
> + reg->memory_size,
> + reg->guest_phys_addr,
> + reg->userspace_addr,
> + offset);
> + u->region_rb_offset[reg_idx] = offset;
> + u->region_rb[reg_idx] = mr->ram_block;
> + }
> + msg->hdr.request = VHOST_USER_ADD_MEM_REG;
> +            vhost_user_fill_msg_region(&region_buffer, reg);
> + msg->payload.mem_reg.region = region_buffer;
...so here we're copying across uninitialized data, which makes
Coverity unhappy (CID 1429802)...
> + msg->payload.mem_reg.region.mmap_offset = offset;
...even if in this case we end up filling the value in afterwards.
As noted in my other email, I think the best fix for this is to
have vhost_user_fill_msg_region() take an extra mmap_offset
argument to fill in the mmap_offset itself. In this callsite in
send_add_regions() we would pass in 'offset' and delete the manual
assignment to .mmap_offset. I'm not sure about the call in
send_remove_regions() but I guess if the intention is that the
payload field is not relevant then passing in '0' would be right.
thanks
-- PMM