Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi


From: Markus Armbruster
Subject: Re: [Qemu-devel] QEMU patch to allow VM introspection via libvmi
Date: Mon, 19 Oct 2015 09:52:49 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)

Valerio Aimale <address@hidden> writes:

> On 10/16/15 2:15 AM, Markus Armbruster wrote:
>> address@hidden writes:
>>
>>> All-
>>>
>>> I've produced a patch for the current QEMU HEAD, for libvmi to
>>> introspect QEMU/KVM VMs.
>>>
>>> Libvmi has patches for the old qemu-kvm fork, inside its source tree:
>>> https://github.com/libvmi/libvmi/tree/master/tools/qemu-kvm-patch
>>>
>>> This patch adds an hmp and a qmp command, "pmemaccess". When the
>>> command is invoked with a string argument (a filename), it will open
>>> a UNIX socket and spawn a listening thread.
>>>
>>> The client writes binary commands to the socket, in the form of a C
>>> structure:
>>>
>>> struct request {
>>>       uint8_t type;   // 0 quit, 1 read, 2 write, ... rest reserved
>>>       uint64_t address;   // address to read from OR write to
>>>       uint64_t length;    // number of bytes to read OR write
>>> };
>>>
>>> The client receives as a response either (length+1) bytes, if it is a
>>> read operation, or 1 byte if it is a write operation.
>>>
>>> The last byte of a read operation response indicates success (1
>>> success, 0 failure). The single byte returned for a write operation
>>> indicates the same (1 success, 0 failure).
>> So, if you ask to read 1 MiB, and it fails, you get back 1 MiB of
>> garbage followed by the "it failed" byte?
> Markus, that appears to be the case. However, I did not write the
> communication protocol between libvmi and QEMU. I'm assuming that the
> person who wrote the protocol did not want to bother with
> overcomplicating things.
>
> https://github.com/libvmi/libvmi/blob/master/libvmi/driver/kvm/kvm.c
>
> I'm thinking he assumed reads would be small in size and that the price
> of reading garbage was less than the price of writing a more complicated
> protocol. I can see his point; confronted with the same problem, I
> might have done the same.

All right, the interface is designed for *small* memory blocks then.

Makes me wonder why he needs a separate binary protocol on a separate
socket.  Small blocks could be done just fine in QMP.
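
For reference, the wire protocol described above boils down to something
like this on the client side (a rough, untested sketch: the struct
layout/padding, socket handling and error handling are assumptions that
would have to match the QEMU-side patch; the real client is libvmi's
kvm.c):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    struct request {
        uint8_t type;       /* 0 quit, 1 read, 2 write, ... rest reserved */
        uint64_t address;   /* address to read from OR write to */
        uint64_t length;    /* number of bytes to read OR write */
    };

    /* Read 'len' bytes of guest physical memory at 'addr' via the
     * pmemaccess socket at 'path'.  Returns 0 on success, -1 on failure. */
    static int pmem_read(const char *path, uint64_t addr, void *buf,
                         uint64_t len)
    {
        struct sockaddr_un sa = { .sun_family = AF_UNIX };
        struct request req = { .type = 1, .address = addr, .length = len };
        uint8_t *reply = NULL;
        size_t got = 0;
        int fd, ret = -1;

        fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            goto out;
        }

        /* Send the request verbatim; assumes both sides agree on the
         * in-memory layout of struct request. */
        if (write(fd, &req, sizeof(req)) != sizeof(req)) {
            goto out;
        }

        /* A read reply is 'len' data bytes plus one trailing status byte
         * (1 = success, 0 = failure). */
        reply = malloc(len + 1);
        if (!reply) {
            goto out;
        }
        while (got < len + 1) {
            ssize_t n = read(fd, reply + got, len + 1 - got);
            if (n <= 0) {
                goto out;
            }
            got += n;
        }
        if (reply[len] == 1) {
            memcpy(buf, reply, len);
            ret = 0;
        }
    out:
        free(reply);
        close(fd);
        return ret;
    }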

>>> The socket API was written by the libvmi author and it works with the
>>> current libvmi version. The libvmi client-side implementation is at:
>>>
>>> https://github.com/libvmi/libvmi/blob/master/libvmi/driver/kvm/kvm.c
>>>
>>> As many use KVM VMs for introspection, malware and security analysis,
>>> it might be worth thinking about making pmemaccess a permanent
>>> hmp/qmp command, as opposed to having to produce a patch for each QEMU
>>> point release.
>> Related existing commands: memsave, pmemsave, dump-guest-memory.
>>
>> Can you explain why these won't do for your use case?
> For people who do security analysis there are two use cases: static
> and dynamic analysis. With memsave, pmemsave and dump-guest-memory one
> can do static analysis, i.e. snapshot a VM and see what was
> happening at that point in time.
> Dynamic analysis requires being able to 'introspect' a VM while it's running.
>
> If you take a snapshot of two people exchanging a glass of water, and
> you happen to take it at the very moment both persons have their hands
> on the glass, it's hard to tell who passed the glass to whom. If you
> have a movie of the same scene, it's obvious who's the giver and who's
> the receiver. Same use case.

I understand the need for introspecting a running guest.  What exactly
makes the existing commands unsuitable for that?

> More to the point, there's a host of C and Python frameworks to
> dynamically analyze VMs: Volatility, Rekall, DRAKVUF, etc. They all
> build on top of libvmi. I did not want to reinvent the wheel.

Fair enough.

Front page http://libvmi.com/ claims "Works with Xen, KVM, Qemu, and Raw
memory files."  What exactly is missing for KVM?

> Mind you, 99.9% of people who do dynamic VM analysis use Xen. They
> contend that Xen has better introspection support. In my case, I did
> not want to bother with dedicating a full server to being a Xen domain
> 0. I just wanted to do a quick test by standing up a QEMU/KVM VM on
> an otherwise-purposed server.

I'm not at all against better introspection support in QEMU.  I'm just
trying to understand the problem you're trying to solve with your
patches.

>>> Also, the pmemsave command's QAPI definition should be changed to be
>>> usable with 64-bit VMs
>>>
>>> in qapi-schema.json
>>>
>>> from
>>>
>>> ---
>>> { 'command': 'pmemsave',
>>>    'data': {'val': 'int', 'size': 'int', 'filename': 'str'} }
>>> ---
>>>
>>> to
>>>
>>> ---
>>> { 'command': 'pmemsave',
>>>    'data': {'val': 'int64', 'size': 'int64', 'filename': 'str'} }
>>> ---
>> In the QAPI schema, 'int' is actually an alias for 'int64'.  Yes, that's
>> confusing.
> I think it's confusing for the HMP parser too. If you have a VM with
> 8 GB of RAM and want to snapshot the whole of physical memory via HMP
> over telnet, this is what happens:
>
> $ telnet localhost 1234
> Trying 127.0.0.1...
> Connected to localhost.
> Escape character is '^]'.
> QEMU 2.4.0.1 monitor - type 'help' for more information
> (qemu) help pmemsave
> pmemsave addr size file -- save to disk physical memory dump starting
> at 'addr' of size 'size'
> (qemu) pmemsave 0 8589934591 "/tmp/memorydump"
> 'pmemsave' has failed: integer is for 32-bit values
> Try "help pmemsave" for more information
> (qemu) quit

Your change to pmemsave's definition in qapi-schema.json is effectively a
no-op.

Your example shows *HMP* command pmemsave.  The definition of an HMP
command is *independent* of the QMP command.  The implementation *uses*
the QMP command.

QMP pmemsave is defined in qapi-schema.json as

    { 'command': 'pmemsave',
      'data': {'val': 'int', 'size': 'int', 'filename': 'str'} }

Its implementation is in cpus.c:

    void qmp_pmemsave(int64_t addr, int64_t size, const char *filename,
                      Error **errp)

Note the int64_t size.

HMP pmemsave is defined in hmp-commands.hx as

    {
        .name       = "pmemsave",
        .args_type  = "val:l,size:i,filename:s",
        .params     = "addr size file",
        .help       = "save to disk physical memory dump starting at 'addr' of size 'size'",
        .mhandler.cmd = hmp_pmemsave,
    },

Its implementation is in hmp.c:

    void hmp_pmemsave(Monitor *mon, const QDict *qdict)
    {
        uint32_t size = qdict_get_int(qdict, "size");
        const char *filename = qdict_get_str(qdict, "filename");
        uint64_t addr = qdict_get_int(qdict, "val");
        Error *err = NULL;

        qmp_pmemsave(addr, size, filename, &err);
        hmp_handle_error(mon, &err);
    }

Note uint32_t size.

Arguably, the QMP size argument should use 'size' (an alias for
'uint64'), and the HMP args_type should use 'size:o'.
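
Concretely, that would look roughly like this (untested sketch):

    { 'command': 'pmemsave',
      'data': {'val': 'int', 'size': 'size', 'filename': 'str'} }

in qapi-schema.json, and

        .args_type  = "val:l,size:o,filename:s",

in hmp-commands.hx, with hmp_pmemsave() then using a 64-bit local for
size so the value is no longer truncated to 32 bits.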

> With the changes I suggested, the command succeeds
>
> $ telnet localhost 1234
> Trying 127.0.0.1...
> Connected to localhost.
> Escape character is '^]'.
> QEMU 2.4.0.1 monitor - type 'help' for more information
> (qemu) help pmemsave
> pmemsave addr size file -- save to disk physical memory dump starting
> at 'addr' of size 'size'
> (qemu) pmemsave 0 8589934591 "/tmp/memorydump"
> (qemu) quit
>
> However, I just noticed that the dump is only about 4 GB in size, so
> there might be more changes needed to snapshot all physical memory of
> a 64-bit VM. I did not investigate any further.
>
> ls -l /tmp/memorydump
> -rw-rw-r-- 1 libvirt-qemu kvm 4294967295 Oct 16 08:04 /tmp/memorydump
>
>>> hmp-commands.hx and qmp-commands.hx should be edited accordingly. I
>>> did not make the above pmemsave changes part of my patch.
>>>
>>> Let me know if you have any questions,
>>>
>>> Valerio


