
From: T. Huth
Subject: [Qemu-devel] [Bug 1333688] Re: vhost-user: VHOST_USER_SET_MEM_TABLE doesn't contain all regions
Date: Tue, 28 Jun 2016 14:44:36 -0000

** Changed in: qemu
       Status: Fix Committed => Fix Released

You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.

  vhost-user: VHOST_USER_SET_MEM_TABLE doesn't contain all regions

Status in QEMU:
  Fix Released

Bug description:

  The vhost-user implementation doesn't provide information about all memory
  regions, so in some cases the client is unable to map buffer memory because
  it is missing the memory region information for a specific address.

  The same thing is implemented properly for vhost-net. The gdb outputs below
  show the memory region information provided to vhost-net and to vhost-user.

  ==== memory regions information provided to vhost-net  ====

  (gdb) p/x hdev->mem->regions[0]
  $21 = {
    guest_phys_addr = 0x100000000,
    memory_size = 0xc0000000,
    userspace_addr = 0x2aab6ac00000,
    flags_padding = 0x0
  }
  (gdb) p/x hdev->mem->regions[1]
  $22 = {
    guest_phys_addr = 0xfffc0000,
    memory_size = 0x40000,
    userspace_addr = 0x7ffff4a00000,
    flags_padding = 0x0
  }
  (gdb) p/x hdev->mem->regions[2]
  $23 = {
    guest_phys_addr = 0x0,
    memory_size = 0xa0000,
    userspace_addr = 0x2aaaaac00000,
    flags_padding = 0x0
  }
  (gdb) p/x hdev->mem->regions[3]
  $24 = {
    guest_phys_addr = 0xc0000,
    memory_size = 0xbff40000,
    userspace_addr = 0x2aaaaacc0000,
    flags_padding = 0x0
  }

  ==== memory regions information provided to vhost-user  ====

  (gdb) p/x msg.memory.nregions
  $28 = 0x1
  (gdb) p/x msg.memory.regions[0]
  $29 = {
    guest_phys_addr = 0x0,
    memory_size = 0x180000000,
    userspace_addr = 0x2aaaaac00000
  }
