Re: [Qemu-devel] [PATCH] Support running QEMU on Valgrind


From: Stefan Weil
Subject: Re: [Qemu-devel] [PATCH] Support running QEMU on Valgrind
Date: Mon, 31 Oct 2011 19:51:25 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.23) Gecko/20110921 Thunderbird/3.1.15

On 31.10.2011 19:22, Daniel P. Berrange wrote:
On Sun, Oct 30, 2011 at 01:07:26PM +0100, Stefan Weil wrote:
Valgrind is a tool which can automatically detect many kinds of bugs.

Running QEMU on Valgrind on x86_64 hosts was not possible because
Valgrind aborts when memalign is called with an alignment larger than
1 MiB. QEMU normally uses 2 MiB on Linux x86_64.

Now the alignment is reduced to the page size when QEMU is running on
Valgrind.

valgrind.h is a copy from Valgrind svn trunk r12226 with trailing
whitespace stripped but otherwise unmodified, so it still raises lots
of errors when checked with scripts/checkpatch.pl.

It is included here to avoid a dependency on Valgrind.
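
For illustration, a minimal sketch of how the macro-based check could look,
using RUNNING_ON_VALGRIND from the bundled valgrind.h; the helper name
memalign_for_valgrind is made up for this example, the actual patch adjusts
QEMU's existing allocation wrappers:

    #include <stdlib.h>
    #include <unistd.h>
    #include "valgrind.h"   /* bundled copy, provides RUNNING_ON_VALGRIND */

    /* Illustrative helper only: drop the 2 MiB alignment to the page size
     * when the process runs under Valgrind, which rejects memalign calls
     * with alignments larger than 1 MiB. */
    static void *memalign_for_valgrind(size_t size)
    {
        size_t align = 2 * 1024 * 1024;   /* QEMU's default on Linux x86_64 */
        void *ptr = NULL;

        if (RUNNING_ON_VALGRIND) {
            align = getpagesize();        /* page-sized alignment is accepted */
        }

        if (posix_memalign(&ptr, align, size) != 0) {
            return NULL;
        }
        return ptr;
    }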

In libvirt we do the following fun hack to avoid a build dep on valgrind:

    const char *ld = getenv("LD_PRELOAD");
    if (ld && strstr(ld, "vgpreload")) {
        fprintf(stderr, "Running under valgrind, disabling driver\n");
        return 0;
    }

Regards,
Daniel

Thanks, Daniel.

That works, although it is not the official way and it would fail
if vgpreload were renamed.

It is also much slower than the official macro, so the test would
have to be done once and the result saved in a static variable.
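
A rough sketch of what I mean, reusing Daniel's LD_PRELOAD/vgpreload
heuristic; the function name running_on_valgrind is only illustrative:

    #include <stdlib.h>
    #include <string.h>

    /* Illustrative only: do the (comparatively slow) environment check
     * once and cache the result, instead of calling getenv()/strstr()
     * on every allocation. */
    static int running_on_valgrind(void)
    {
        static int cached = -1;           /* -1: not checked yet */

        if (cached < 0) {
            const char *ld = getenv("LD_PRELOAD");
            cached = (ld && strstr(ld, "vgpreload")) ? 1 : 0;
        }
        return cached;
    }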

As it solves the current problem with QEMU on Valgrind,
this solution would be better than none, so if more
people agree, it could be done like this.

From other mails, I expect that the 2 MiB alignment will
be used in more scenarios (any host and operating system
which supports KVM). As far as I know, Valgrind also runs
on ARM, PPC, S390, BSD, ..., and the latest valgrind.h
is designed to support all those scenarios. I have no
idea whether the vgpreload hack works everywhere as well.

Regards,
Stefan



