
Re: The problems for the rootless subhurd


From: olafBuddenhagen
Subject: Re: The problems for the rootless subhurd
Date: Fri, 24 Apr 2009 01:13:00 +0200
User-agent: Mutt/1.5.18 (2008-05-17)

Hi,

On Tue, Apr 21, 2009 at 12:28:13AM +0100, Da Zheng wrote:

> First, I don't know how to handle vm_wire() and thread_wire(). Since
> boot doesn't have the real privileged host port, there seems to be no
> way to handle them. Currently, I don't implement them and just let
> them return EOPNOTSUPP. As far as I can see, vm_wire() is only used
> by init and the Mach default pager, and thread_wire() is only used by
> the Mach default pager. Since the second Hurd doesn't have its own
> Mach default pager, init is the only program that might cause a
> problem.

Well, if it doesn't panic immediately, I guess it's fine...

If EOPNOTSUPP causes trouble, it might also be possible simply to
return success while doing nothing.

I do not see what a subhurd would really need wired memory for. Wired
memory is necessary only in the paging path itself, and in certain code
dealing directly with hardware devices -- and a subhurd shouldn't have
either...
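
If faking success turns out to be necessary, the handler could be as
trivial as the sketch below (a minimal illustration only -- the
S_vm_wire name and exact signature are assumptions about how boot's
MIG stubs are set up, not the actual code):

    /* Hypothetical: fake success for vm_wire in the proxy, since
       there is no real privileged host port to forward it to.  */
    kern_return_t
    S_vm_wire (mach_port_t host_priv, mach_port_t task,
               vm_offset_t start, vm_size_t size, vm_prot_t access)
    {
      /* We can't actually wire anything; pretend it worked.  The
         callers only want wiring for paging-path robustness, which
         a subhurd doesn't exercise.  */
      return KERN_SUCCESS;
    }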

> In order to track all tasks in the subhurd, boot works as a proxy for
> all RPCs on the task port,
[...]
> However, it seems to be the source of the most serious bug in my
> modified boot.
>
> BUG: After I added the proxy for all RPCs to 'boot', I found that the
> subhurd sometimes fails to boot. For example, it sometimes stops
> booting after the system displays "GNU 0.3 (hurd) (console)", and it
> sometimes boots successfully and displays "login>" but stops working
> after I try to log in. Sometimes it even prints an error message
> like:

>    getty[47]: /bin/login: No such file or directory
>    Use `login USER' to login, or `help' for more information.

> Of course, sometimes the subhurd boots and I can log in successfully.

Sounds like some kind of race condition... But I don't know where.

You could try tracing all RPCs made to the proxy (using some logging
mechanism in the proxy itself, or perhaps rpctrace), and comparing the
results of various runs...
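
If you go the logging route, even just dumping the message id of each
RPC the proxy forwards would let you diff a hanging run against a
successful one. A rough sketch (trace_demuxer and task_demuxer are
hypothetical names -- I'm only guessing at how the proxy's demuxer is
structured):

    #include <stdio.h>
    #include <mach.h>

    /* Hypothetical wrapper around the proxy's real demuxer: log each
       incoming RPC before handing it on, so runs can be compared.  */
    static boolean_t
    trace_demuxer (mach_msg_header_t *inp, mach_msg_header_t *outp)
    {
      fprintf (stderr, "proxy: msg id %d on port %lu\n",
               inp->msgh_id, (unsigned long) inp->msgh_local_port);
      return task_demuxer (inp, outp);  /* the existing demuxer */
    }

With rpctrace, simply diff-ing the logs of two runs should make any
ordering differences stand out.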

-antrik-



