
Re: [Qemu-devel] qemu-darwin-user

From: Ian Rogers
Subject: Re: [Qemu-devel] qemu-darwin-user
Date: Fri, 27 Aug 2004 14:49:16 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040114

The fundamental problem is that you don't know all the messages. With Linux and BSD the number of system calls is limited and the interfaces are relatively small (barring exceptions like socketcall, but even those are bounded and don't vary across minor kernel versions). With messages, you would need to see all the MIG files in operation to work out what messages are going to be sent around, and you can probably never know all the messages ahead of time. If you assume that all you will see is IOKit and the kernel, then you're restricting what you can emulate. You can partly solve this by running two versions of the servers (big- and little-endian), but then you need to rewrite the kernel to distinguish them and steer messages. A more open solution would be to automatically generate conversion code from the MIG definitions, but I don't think some companies would be happy to provide those.

Sorry to sound critical; I do think this is good work. From a design perspective, I think pointers in messages are a fundamentally bad idea. I'm hoping to extend this idea further in Java operating system design.


Jocelyn Mayer wrote:

On Fri, 2004-08-27 at 14:25, Ian Rogers wrote:

I think there is a fundamental limit you will reach with this work. The reason is that Mach messages can contain pointers to data structures which the kernel fills in. If the pointers are in the wrong endianness, the kernel will write to the wrong place in the application. You can write code to transform pointers for all the messages you can find documentation on, but some systems will be entirely closed (for example, Microsoft's messages). Of course you could emulate both the server and the application, but I think you would still need a lot of kernel jiggery-pokery. I believe this is the same problem that stops Mac OS X from having a 64-bit memory space: you basically need different messages for every kind of pointer you can have. Apple estimated it would take six months to write support for all those messages, but revised that up to two years, IIRC. 64-bit OS X applications currently send 32-bit messages, and pointers to data structures must consequently fall within the first 4 GB. Let me know if I'm wrong.

Seems to me there is no special issue with this point.
If the structure pointed to is to be filled by the kernel, then the problem
is exactly the same as what we do for Linux or BSD syscalls.
If the message is to be filled by another application, there is nothing
to do, as the memory is in the emulated endianness.
If there are a lot of different structures used by the Darwin kernel, the
way to make things simpler may be to run IOKit emulated, so that none of
the structures coming from there would have to be translated.
Apart from IOKit, I don't know which part may be really difficult
(as all user-land parts are to be run natively / translated by QEMU but
not emulated).
