
[Qemu-devel] Planning the instrumentation


From: Sami Kiminki
Subject: [Qemu-devel] Planning the instrumentation
Date: Fri, 17 Jul 2009 13:14:09 +0300

Hi,

Right now it looks like we will be getting some funding for
QEMU-related instrumentation work, and the work would start at some
point after the summer holidays. Of course, things are never certain
until they happen. Anyway, I thought I'd share some of our preliminary
plans in the hope of constructive feedback, since it is in our interest
to get some of this work into QEMU mainline.

We are thinking of a modular framework: on one side are trace sources,
and on the other are analysis modules, which consume the traces
generated by the sources. Analysis modules would be enabled from the
command line and possibly on the fly from the monitor. Naturally, a
trace source would be enabled only while at least one active analysis
module requests its input.

Some possible trace sources:

1) Memory access traces:
   - data access
   - instruction fetches

2) Interrupt/exception related tracing
   - interrupt requests
   - CPU state changes (enter system/user mode, enter/exit interrupt
     handler, ARM: enter/exit thumb mode, etc)
   - CPU interrupt mask changes

3) Execution-related tracing
   - instruction execution
   - block execution
   - block translation
   - translation block cache invalidation events

4) Branch tracing
   - taken/non-taken branches with hints (e.g. Itanium-style branches)
   - non-conditional jumps

5) Conditional instruction execution (CMOV in x86, ARM conditionals,
   ...)
   - Executed vs. canceled instructions

6) System/misc event tracing
   - power on/off, pause/resume
   - QEMU internal events

I assume that when not enabled, instrumentation code would have
negligible performance impact.

These should enable:
- cache analysis
- TLB analysis
- various execution-related analyses, such as instruction counting
- branch prediction -related analyses
- various performance counters
- various triggers to latch other analysis modules, e.g., to capture
  memory access patterns of interrupt/exception handlers, etc
- dumping traces to disk for later analysis

Some analysis modules could even be chainable. I'd assume that at least
hierarchical cache analysis could be implemented like this.

Of course, we'd like the source code to be as target-nonspecific as
possible. I think this is easier on the analysis side, provided there
is sufficient flexibility in module configuration.

We have also given some thought to whether some of the analysis modules
could be shared-object plug-ins. However, this is probably easier said
than done, as many data structures inside QEMU are target-specific. In
any case, writing custom modules should be easy.

OK, if we get the funding, we'd probably start with trace sources 1, 2,
3 and possibly 6. These should be reasonably target-nonspecific, with
reasonably small impact on the existing code base, I hope. Consider
simplified memory access tracing:

tracing.c:

void trace_memory_load8(target_ulong address, target_ulong data)
{
  // call enabled modules
}


tcg/tcg-op.h:

static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
{
    tcg_gen_qemu_ldst_op(INDEX_op_qemu_ld8u, ret, addr, mem_index);
+
+    if (trace_data_access_enabled) {
+        TCGArg args[2];
+        TCGv tracer_ret;
+
+        args[0] = GET_TCGV_I64(addr);
+        args[1] = GET_TCGV_I64(ret);
+        tracer_ret = tcg_temp_new();
+        tcg_gen_helperN(trace_memory_load8, 0, 7, tracer_ret, 2, args);
+        tcg_temp_free(tracer_ret);
+    }
}

and so on for 16, 32, 64-bit versions, signed versions and stores.

Of the analysis modules, writing the appropriate disk dumpers would be
the first priority.

Comments and suggestions are welcome.

Regards,
Sami Kiminki
Embedded Software Group / Helsinki University of Technology





