Re: malloc() patches round 3

From: Igor Khavkine
Subject: Re: malloc() patches round 3
Date: Wed, 22 Aug 2001 19:56:54 -0400
User-agent: Mutt/1.3.20i

On Wed, Aug 22, 2001 at 04:42:31PM -0700, Thomas Bushnell, BSG wrote:
> Igor Khavkine <i_khavki@alcor.concordia.ca> writes:
> > Since this is a support library, I would prefer that it did not by itself
> > terminate the running process under any circumstances and instead propagated
> > all errors to the programs that use it; however, there are always exceptions.
> > So should the fix be a simple assert() statement, or should I look into
> > libpager in more detail to be able to propagate these errors?
> I would prefer assert().

I'm going to ponder this for a bit; I guess I'll have to learn more
about libpager along the way.
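For concreteness, the two approaches being weighed could look roughly like
this (xmalloc and try_dup_string are hypothetical helpers for illustration,
not actual libpager code):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* The assert() approach: terminate the process outright if
   allocation fails, as Thomas suggests.  */
static void *
xmalloc (size_t size)
{
  void *p = malloc (size);
  assert (p != NULL);
  return p;
}

/* The error-propagation approach: hand the failure back to the
   caller as an error code instead of terminating.  */
static int
try_dup_string (const char *s, char **out)
{
  char *copy = malloc (strlen (s) + 1);
  if (copy == NULL)
    return ENOMEM;
  strcpy (copy, s);
  *out = copy;
  return 0;
}
```

The second form is what propagating errors through libpager's callers would
amount to at each allocation site; the first is a one-line change per site.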

> I have no objection to being more diligent about checking malloc
> returns, but I have *NEVER* seen a Unixoid system perform anything
> like sensibly in the event of virtual memory exhaustion.  The Mach
> kernel itself just crashes directly.  And that's *better* than
> diligently returning errors and trying to poke along, because it
> actually *fixes* the problem: the system comes up again, and works.
> Every system I've seen, when memory exhaustion happens, simply never
> functions right again until rebooted.

That's a chicken-and-egg problem. Which came first: Unixoid systems that
didn't behave sanely when memory was exhausted, or developers who wrote
their own Unixoid systems that behaved similarly?

In my perfect world, OSes don't crash unless there is a hardware failure
or an internal inconsistency, and resource exhaustion is neither.  If
you want the system to reboot in that situation, all that is needed is
some sort of daemon that uses a fixed amount of resources and reboots
the system if the error is propagated to it.
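Such a daemon could be sketched roughly as follows. This is only an
illustration of the idea, not Hurd code: the pool size, the handle_oom
entry point, and the reboot_requested flag are all made up here, and a
real daemon would invoke the actual reboot mechanism rather than set a
flag:

```c
#include <stdio.h>

/* The daemon grabs a fixed pool of memory at startup, so it can keep
   working even once malloc() is failing system-wide.  */
#define POOL_SIZE 4096
static char pool[POOL_SIZE];

static int reboot_requested = 0;

/* Called when a memory-exhaustion error is propagated to the daemon.
   Everything here must use only the preallocated pool, since dynamic
   allocation can no longer be trusted.  */
static void
handle_oom (void)
{
  snprintf (pool, POOL_SIZE, "memory exhausted; requesting reboot");
  fprintf (stderr, "%s\n", pool);
  reboot_requested = 1;  /* a real daemon would trigger an actual reboot here */
}
```

The point is that the reboot-on-exhaustion policy lives in one small,
replaceable program with bounded resource needs, instead of being wired
into the kernel's crash behavior.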

We have an opportunity to create something like this. And just because
Mach crashes when it's out of memory doesn't mean that's the right thing
to do. We can change that as well.
