From: Richard W.M. Jones
Subject: Re: [PULL 11/24] tcg/optimize: Use tcg_constant_internal with constant folding
Date: Thu, 4 Feb 2021 09:29:16 +0000
User-agent: Mutt/1.5.21 (2010-09-15)

> > commit 8f17a975e60b773d7c366a81c0d9bbe304f30859
> > Author: Richard Henderson <richard.henderson@linaro.org>
> > Date:   Mon Mar 30 19:52:02 2020 -0700
> > 
> >     tcg/optimize: Adjust TempOptInfo allocation
> > 
> > The image boots just fine on s390x/TCG as well.
> 
> Let me try this in a minute on my original test machine.

I got the wrong end of the stick as David pointed out in the other email.

However I did test things again this morning (all on s390 host), and
current head (1ed9228f63ea4b) fails same as before ("mount" command
fails).

Also I downloaded:

  https://dl.fedoraproject.org/pub/fedora-secondary/releases/33/Cloud/s390x/images/Fedora-Cloud-Base-33-1.2.s390x.qcow2

and booted it on 1ed9228f63ea4b using this command:

  $ ~/d/qemu/build/s390x-softmmu/qemu-system-s390x -machine accel=tcg -m 2048 \
      -drive file=Fedora-Cloud-Base-33-1.2.s390x.qcow2,format=qcow2,if=virtio \
      -serial stdio

Lots of core dumps inside the guest, same as David saw.

I then reset qemu back to 8f17a975e60b773d ("tcg/optimize: Adjust
TempOptInfo allocation"), rebuilt qemu, tested the same command and
cloud image, and that booted up much happier with no failures or core
dumps.
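
For reference, the re-test sequence was roughly the following (a sketch; the
exact configure flags and build command aren't shown above, so this assumes an
already-configured build tree under ~/d/qemu/build):

  $ cd ~/d/qemu
  $ git checkout 8f17a975e60b773d   # "tcg/optimize: Adjust TempOptInfo allocation"
  $ ninja -C build                  # rebuild qemu in the existing build tree
  $ ./build/s390x-softmmu/qemu-system-s390x -machine accel=tcg -m 2048 \
      -drive file=Fedora-Cloud-Base-33-1.2.s390x.qcow2,format=qcow2,if=virtio \
      -serial stdio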

Isn't it kind of weird that this would only affect an s390 host?  I
don't understand why the host would make a difference if we're doing
TCG.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-df lists disk usage of guests without needing to install any
software inside the virtual machine.  Supports Linux and Windows.
http://people.redhat.com/~rjones/virt-df/



