From: Rob Landley
Subject: Re: [Qemu-devel] Powerpc regressions?
Date: Thu, 9 Jul 2009 22:55:22 -0500
User-agent: KMail/1.11.2 (Linux/2.6.28-13-generic; KDE/4.2.2; x86_64; ; )

On Thursday 09 July 2009 08:46:48 Lennart Sorensen wrote:
> On Thu, Jul 09, 2009 at 06:49:47AM -0500, Rob Landley wrote:
> > I don't think 0.9.x had a g3beige, or at least I didn't get it to work. 
> > I booted a patched prep kernel under that (with a custom boot rom feeding
> > in a custom device tree).  I went to a vanilla unpatched kernel as soon
> > as I was able.
> Hmm, well I think 0.9.1 worked OK when I used the debian lenny kernel
> (never worked with 2.6.18 from etch though).  Of course I might be
> remembering a development snapshot from between 0.9.1 and 0.10.0.  I do
> remember the drive order changing at some point and having to change hda
> to hdc to boot my images, and now with 0.10.50 (git checkout) having to
> change it back again.

The 0.9.x series used OpenHackWare, not OpenBIOS.

Among its many limitations was a refusal to let -kernel boot a kernel 
unless you fed in a _partitioned_ hda so it could set the "boot partition" 
field.  (So if your -hda was an ext2 image straight out of genext2fs, and 
thus had no partition table, every other qemu target could boot it but 
powerpc couldn't, due to OpenHackWare limitations.)
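A quick way to see the distinction (a sketch, not OpenHackWare's actual check): a partitioned image carries the MBR boot signature 0x55 0xAA at byte offset 510, while a flat ext2 image from genext2fs doesn't.

```shell
# has_mbr: exit status 0 if a raw disk image has the MBR boot signature
# (bytes 0x55 0xAA at offset 510), i.e. looks like a partitioned -hda.
# A partition-table-less ext2 image fails this test.
has_mbr() {
    sig=$(dd if="$1" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' \n')
    [ "$sig" = "55aa" ]
}
```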

> > Device layout varying randomly between different qemu versions is kind of
> > annoying, yes.  Especially since the linux kernel needs init=/dev/blah to
> > boot directly from a filesystem image, so the kernel command line needed
> > to boot now varies between qemu versions.
> Bad enough that some linux kernel versions changed the order of devices
> at times.

The Linux SCSI layer does have a regrettable tendency to throw all devices of 
all transport types asynchronously into the same bucket and give it a stir, so 
your USB devices get mixed in with your SATA devices and the order they show 
up in depends on module load order and a couple of fun race conditions.

According to Ted Ts'o, this is to force the "Linux on the desktop" people to 
confront and hopefully solve the device enumeration problems of IBM mainframes 
with thousands of devices.  Apparently, if desktop linux users feel the pain 
of mainframe users, they'll buy macintoshes and this will solve the problem.

However, since you generally have to rebuild to trigger random changes in scsi 
enumeration behavior, and you can control it by only HAVING one type of 
transport in the system you're emulating, it doesn't break existing binary 
images.  Moving the hardware addresses around under the covers does.
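For systems that can afford the userspace, the usual dodge is to stop naming devices by probe order at all and key mounts on filesystem UUIDs instead; a hedged sketch of what that looks like in /etc/fstab (the UUIDs below are invented for illustration):

```
# /etc/fstab keyed on filesystem UUIDs rather than /dev/sd* or /dev/hd*
# names, so mounts survive SCSI/USB/SATA probe-order races
# (UUIDs made up for this example):
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /      ext2  defaults  0 1
UUID=7f2a1c04-5d3b-4c6e-9e2d-91c0ffee0001  /home  ext2  defaults  0 2
```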

> > Also, if the argument wasn't called "-hda", or if the last version that
> > actually worked hadn't associated -hda with /dev/hda, the change wouldn't
> > be so obviously silly.
> Yeah I am not sure who is changing it.

Most recently it changed when they moved from a hardwired built-in device 
tree to querying openbios for this information.  Apparently, the device tree 
openbios provides is not the same device tree that qemu was synthesizing when 
it was hardwired in.

That's what I get from a very cursory read of the patch that changed it, 
anyway.  I can't really say I have a deep understanding of any of this, it's 
really not my area.

> > But that one's easy enough to work around.  The "panic shortly after init
> > runs" isn't.
> No that would be much worse.

It is, yeah.

> > Still using -kernel.  The tarball I pointed at includes the boot shell
> > script, which calls qemu with:
> >
> > qemu-system-ppc -M g3beige -nographic -no-reboot -kernel "zImage-powerpc"
> >   -hdc "image-powerpc.ext2" -append "root=/dev/hda rw
> > init=/usr/sbin/init.sh panic=1 PATH=/usr/bin console=ttyS0 $KERNEL_EXTRA"
> >
> > (Feeding in the ext2 image file as -hdc is the workaround for qemu being
> > unable to keep hda and hdc straight on powerpc anymore.  On debian, I
> > expect it boots to an initramfs that fires up HAL to look at all
> > partitions on all devices and would happily mount a USB flash key as hda
> > if it had the right UUID.  Which somebody's going to exploit one of these
> > days, but oh well.)
> Well if I was using UUIDs then yes it would probably not mind, but I am
> not at the moment.  Still need to know which /dev/hd* to install the
> boot loader to of course.

Using UUIDs requires using initramfs, in which case having a root partition is 
sort of optional.  It also requires feeding the virtual machine a bit more 
than the default 128 megs of ram.
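To make the initramfs point concrete, a sketch mirroring the invocation quoted above (the UUID and initrd filename are invented for illustration):

```shell
# root= by filesystem UUID: the kernel proper only understands
# root=/dev/..., so an initramfs (with blkid or similar in it) has to
# resolve the UUID and hand off -- hence UUIDs requiring initramfs.
qemu-system-ppc -M g3beige -nographic -no-reboot \
  -kernel "zImage-powerpc" -initrd "initrd.gz" \
  -hdc "image-powerpc.ext2" \
  -append "root=UUID=3e6be9de-8139-11d1-9106-a43f08d823a6 rw console=ttyS0"
```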

Mainly, what I'm trying to do is get an arm system, a mips system, an x86-64 
system, an sh4 system, and so on that all work in the same way.  Building the 
same kernel and same root filesystem from the same source, using the same build 
scripts, just for different targets.

Considering that a mips system _can't_ have more than 256 megs of physical 
memory (due to the top 3 bits of their addresses being used for some kind of 
strange mode flags and half the rest kept for I/O memory mappings), I need an 
actual block device that can mount an ext2 root filesystem image to fit a 
native toolchain and enough space to build anything.  Initramfs just doesn't 
give me enough space on that target.
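The arithmetic behind that ceiling, as a sketch (segment layout per MIPS32 convention; exact board memory maps vary):

```shell
# The top 3 bits of a 32-bit MIPS virtual address select a segment, so
# each kernel segment (kseg0/kseg1) is 2^32 / 8 = 512 MB, and both
# window the same low 512 MB of physical address space.  With roughly
# half of that window decoded as I/O space on boards like qemu's Malta,
# about 256 MB is left for RAM.
seg=$(( (1 << 32) >> 3 ))
ram=$(( seg / 2 ))
echo "segment: $(( seg >> 20 )) MB, usable RAM: $(( ram >> 20 )) MB"
# prints: segment: 512 MB, usable RAM: 256 MB
```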

I note that lots of other targets have size limitations in how big a kernel 
image they'll load and decompress before falling off the end of one of their 
memory mappings.  An initramfs with 40 megs of files in it wasn't really the 
intent of initramfs, and is not that well tested.

So yeah, root=/dev/hda isn't actually obsolete.  Some of us find it very 
useful.

Latency is more important than throughput. It's that simple. - Linus Torvalds
