Re: [Qemu-devel] [patch 2/3] QEMU-C-F: Introducing qemu userspace tool qemu-core-filter.


From: Mahesh Jagannath Salgaonkar
Subject: Re: [Qemu-devel] [patch 2/3] QEMU-C-F: Introducing qemu userspace tool qemu-core-filter.
Date: Fri, 25 Jun 2010 18:08:15 +0530
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100430 Fedora/3.0.4-2.fc12 Thunderbird/3.0.4

On 06/22/2010 06:32 PM, Anthony Liguori wrote:
Hrm, the way you've sent this patch makes Thunderbird unhappy.  It
appears the whole thing is treated as an attachment. In the future, I'd
suggest avoiding the Content-Disposition tag.

Sure. I will take care of this in the future.

On 06/21/2010 11:01 PM, Mahesh Salgaonkar wrote:
Qemu userspace tool to filter out guest OS memory from a qemu core file.
Use the '--enable-core-filter' option while running the ./configure script
to build the qemu-core-filter tool. This is a post-processing tool that
works offline on qemu coredumps. It helps to reduce the size of a qemu
core file (generated by a qemu crash) by removing guest OS memory from
the original core file.

Currently it is only supported for Linux on x86 and x86_64.
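
(Note for readers of the archive: the sketch below only illustrates the kind of
ELF-level filtering the patch description talks about. It is not the actual
qemu-core-filter code; it assumes an ELF64 core, and the 64 MiB "guest RAM"
threshold is a made-up cut-off used purely for the example.)

/*
 * Hypothetical sketch: walk the program headers of an ELF64 core file and
 * list the PT_LOAD segments, flagging the large ones that would typically
 * correspond to guest RAM mappings.  A real filter would rewrite the core
 * and identify guest RAM from qemu's allocation layout, not by size.
 */
#include <elf.h>
#include <stdio.h>
#include <string.h>

#define GUEST_RAM_THRESHOLD (64 * 1024 * 1024)  /* assumed cut-off: 64 MiB */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <qemu-core-file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    Elf64_Ehdr ehdr;
    if (fread(&ehdr, sizeof(ehdr), 1, f) != 1 ||
        memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
        ehdr.e_type != ET_CORE) {
        fprintf(stderr, "%s: not an ELF core file\n", argv[1]);
        fclose(f);
        return 1;
    }

    /* Iterate over the program headers that describe the memory dump. */
    for (int i = 0; i < ehdr.e_phnum; i++) {
        Elf64_Phdr phdr;
        if (fseek(f, (long)(ehdr.e_phoff + (Elf64_Off)i * ehdr.e_phentsize),
                  SEEK_SET) != 0 ||
            fread(&phdr, sizeof(phdr), 1, f) != 1) {
            fprintf(stderr, "failed to read program header %d\n", i);
            break;
        }
        if (phdr.p_type != PT_LOAD) {
            continue;
        }
        /* Large anonymous mappings are the candidates for guest RAM. */
        printf("segment %2d: vaddr 0x%llx filesz %llu bytes%s\n",
               i,
               (unsigned long long)phdr.p_vaddr,
               (unsigned long long)phdr.p_filesz,
               phdr.p_filesz >= GUEST_RAM_THRESHOLD ?
                   "  <- candidate guest RAM, could be stripped" : "");
    }

    fclose(f);
    return 0;
}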

There are a few problems with a tool like this. The first is that it
depends on very specific internals of qemu (namely, the way we allocate
ram). If we applied this, we would get subtle breakages if we made even
the slightest changes to qemu.

This is precisely why we would like to get this tool integrated into the QEMU sources: whenever something changes in qemu, the tool can be updated accordingly.

IMHO, the value is also questionable. There is quite a bit of sensitive
data left in the core file after removing guest memory. Any DMA buffer
may contain very sensitive data (for instance, if you crash during a
read of /etc/shadow). Even the CPU registers can contain sensitive data.

I think the only really viable approach to this problem is to take a
white list approach instead of a black list approach. That means
extracting useful information that we're reasonably confident preserves
privacy. That would be information like a back trace, the crash reason,
etc. Tools like apport and ABRT already do exactly this and they also
present an interface to the user to validate the data before sending it.
They also provide a way to collect other information (like host dmesg).
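
(Illustrative sketch of the white-list idea: rather than shipping the whole
core, run gdb in batch mode against it and keep only the crash reason and the
backtrace. The command line built here is an assumption for the example; it is
not how apport or ABRT are actually implemented.)

/*
 * Extract a "white list" of information (crash reason plus backtrace) from a
 * qemu core file via gdb's batch mode, instead of sharing the complete core.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <qemu-binary> <core-file>\n", argv[0]);
        return 1;
    }

    char cmd[1024];
    /* gdb prints "Program terminated with signal ..." (the crash reason)
     * when it loads the core, then the requested backtraces, and exits. */
    snprintf(cmd, sizeof(cmd),
             "gdb --batch -ex 'thread apply all bt' '%s' '%s'",
             argv[1], argv[2]);

    FILE *p = popen(cmd, "r");
    if (!p) {
        perror("popen");
        return 1;
    }

    /* Copy the extracted output to stdout so the user can review it
     * before deciding to send it anywhere. */
    char line[4096];
    while (fgets(line, sizeof(line), p)) {
        fputs(line, stdout);
    }

    return pclose(p) == -1 ? 1 : 0;
}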

I understand your point, but this tool can be of interest to people who send out large coredump files to service centers for initial analysis. It would help them reduce the size of the core file before sending it. What do you think?

Regards,

Anthony Liguori

Regards,
-Mahesh.


