Re: [Qemu-devel] QEMU-guestOS latencies.


From: Nir Levy
Subject: Re: [Qemu-devel] QEMU-guestOS latencies.
Date: Thu, 28 Jul 2016 16:37:16 +0000

After changing the local int timeout inside the function qemuMonitorOpenUnix,
in libvirt's ./src/qemu/qemu_monitor.c, to INT_MAX,
I am now able to debug kvm and malloc without libvirt giving up on the monitor socket.

I would love to hear some tips on tracing latencies.

regards,
Nir.

From: Nir Levy
Sent: Thursday, July 28, 2016 12:25 PM
To: 'address@hidden' <address@hidden>
Cc: Yan Fridland <address@hidden>
Subject: QEMU-guestOS latencies.

Hi all,

First, thanks for your time and attention in reading this.

I wish to share with you some of my goals.
My main goal is to trace latencies across the qemu-kvm interface (in order to see if they ...).
A secondary goal is to figure out how qemu threads are spawned.
In addition, I wish to understand RAM allocation and avoid host swaps; a possible knob for the latter is sketched below.
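For the swap-avoidance part, one candidate knob (an assumption on my side, not something I have verified here) is libvirt's <memoryBacking> element in the domain XML, which as far as I know is translated into qemu's -realtime mlock=on:

  <domain type='kvm'>
    ...
    <memoryBacking>
      <locked/>   <!-- lock the guest's pages in host RAM so they cannot be swapped out -->
    </memoryBacking>
    ...
  </domain>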

So far I have mainly debugged libvirtd and qemu,
and I do not always succeed in avoiding this error:
virsh -k0 start KPO
error: Failed to start domain KPO
error: monitor socket did not show up: No such file or directory

It happens when debugging qemu. Although I have used -k0, is there any other way to overcome this?

My observations so far, from attaching to the qemu process spawned by libvirtd, are that
qemu threads fall into several categories:
- a block device controller (via qcow2_open - the main IO thread)
- a thread for each VCPU
- a trace thread that launches at each report
- IO worker threads (QEMU_AIO_READ, _WRITE, _IOCTL, _FLUSH, etc.), which are spawned
regularly and whose main purpose I have so far failed to pin down.
  These threads are spawned at a high rate once the guest application is
running (traffic is mainly through DPDK); a rough gdb recipe for catching their creation follows below.
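My working assumption (not confirmed) is that these workers come from qemu's on-demand thread pool (util/thread-pool.c in the 2.6 sources), which spawns a worker per pending request up to a limit and lets idle workers exit, which would explain the high spawn rate under load. A rough way to see where they are created is to break on thread creation in gdb (assuming gdb on this host knows the clone syscall by name):

  (gdb) catch syscall clone
  (gdb) commands
  > bt          # show who is spawning the new thread
  > continue
  > end
  (gdb) continue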
qemu_anon_ram_alloc summary:
  4G   - pc.ram
  256K - pc.bios
  128K - pc.rom
  256K - virtio-net-pci.rom
  2M   - /rom@etc/acpi/tables
  4K   - /rom@etc/table-loader
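For cross-checking, the same regions can be listed from the qemu monitor, e.g. via libvirt's passthrough command:

  virsh qemu-monitor-command KPO --hmp 'info mtree'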

I used simpletrace to capture events (I have not yet inserted my own), following Stefan Hajnoczi's instructions, and studied the output a bit.
There are time offsets ranging from
object_dynamic_cast_assert 1574595.371 pid=15930 type=qio-channel-file target=qio-channel-file file=qemu-char.c line=0x509 func=pty_chr_update_read_handler_locked
to
object_dynamic_cast_assert -1.710 pid=15930 type=Haswell-noTSX-x86_64-cpu target=x86_64-cpu file=/home/nirl/qemu_instrumenting_build/qemu-2.6.0/target-i386/kvm.c line=0xac2 func=kvm_arch_post_run
which is very strange (a negative offset, in particular).

In addition, those offsets, as far as I can tell, are only relative to the previous trace record.
Is there a simple way to adjust the log to show uptime in nanoseconds instead of
offsets? (A sketch of what I mean follows.)
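For example, something along these lines (an untested sketch against scripts/simpletrace.py from the qemu source tree; it assumes, as the default Formatter there suggests, that rec[1] is the raw nanosecond timestamp and rec[2] the pid):

  #!/usr/bin/env python
  # Usage: ./abs-timestamps.py trace-events trace-file
  import simpletrace

  class AbsoluteTimestamps(simpletrace.Analyzer):
      def catchall(self, event, rec):
          # Print the record's raw ns timestamp instead of the
          # delta from the previous record.
          print('%d %s pid=%d' % (rec[1], event.name, rec[2]))

  simpletrace.run(AbsoluteTimestamps())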
What would you recommend for tracing guest-OS-related latencies?


Regards and many thanks.
Nir Levy
SW Engineer

Web: www.asocstech.com

