From: Fernando Casas Schössow
Subject: Re: [Qemu-devel] [Qemu-block] Guest unresponsive after Virtqueue size exceeded error
Date: Mon, 25 Feb 2019 15:41:55 +0000

Ok, the new package is deployed.

I stopped and started the test guest so it will use the new QEMU binary.

Will monitor and report back.

On lun, feb 25, 2019 at 2:32 PM, Fernando Casas Schössow <address@hidden> wrote:
Thanks Natanael. Is the new package ready?

I will update as soon as the package is available, try to repro and report back.

Thanks everyone for looking into this!

On lun, feb 25, 2019 at 2:25 PM, Natanael Copa <address@hidden> wrote:
On Mon, 25 Feb 2019 13:06:16 +0000, Peter Maydell <address@hidden> wrote:

> On Mon, 25 Feb 2019 at 12:22, Natanael Copa <address@hidden> wrote:
> > On Mon, 25 Feb 2019 10:34:23 +0000, Peter Maydell <address@hidden> wrote:
> > > The short-term fix is to fix your toolchain/compilation environment
> > > options so that it isn't trying to override the definition of memcpy().
> >
> > The easiest workaround is to simply disable FORTIFY_SOURCE, but that
> > will weaken the security of all the covered string functions (strcpy,
> > memmove, etc.), so I don't want to do that.
> >
> > Is it only lduw_he_p that needs to be atomic, or are the other functions
> > in include/qemu/bswap.h that use memcpy also required to be atomic?
>
> Hard to say, since we haven't done the "audit all the callers" step that
> Stefan mentioned. If you're going to replace memcpy with __builtin_memcpy,
> then the safest thing is to do it for all those uses (this will also give
> you much better generated code for performance purposes).

I figured that, and that is exactly what I did.

Fernando: can you please test the binary from qemu-system-x86_64-3.1.0-r3 from Alpine edge? I will backport the fix if you can confirm it fixes the problem.

Thanks!

-nc

PS. These issues are pretty hard to track down, so big thanks to everyone who helped find the exact issue here. You have done great work!
