
Re: Getting Started with Hurd-L4


From: Neal H. Walfield
Subject: Re: Getting Started with Hurd-L4
Date: Mon, 25 Oct 2004 21:30:12 +0100
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.2 (i386-debian-linux-gnu) MULE/5.0 (SAKAKI)

> I've been trying to figure out how these container things work.  For a
> basic operation, I think the idea is that the client process (what do
> you call these things? 

Tasks?  (They are clients of physmem.)

> I basically mean a running program/server/
> module) would ask physmem for a new container of the appropriate
> size;

Containers are filled by a task.  (However, there is an optimization
to create a container and commit a number of pages to it
simultaneously.)
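
In code it might look something like this (all of these names are
invented for the sketch; they are not the real physmem interface):

    #include <stddef.h>

    typedef int error_t;
    typedef struct container *container_t;

    /* Hypothetical stubs: the two-step version...  */
    extern error_t container_create (container_t *c);
    extern error_t container_commit (container_t c, size_t npages);
    /* ...and the optimization: create and commit in one round trip.  */
    extern error_t container_create_committed (container_t *c,
                                               size_t npages);

    error_t
    get_buffer (container_t *c, size_t npages)
    {
      /* One IPC to physmem instead of two.  */
      return container_create_committed (c, npages);
    }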

> it would then give this container to the server module, who would put
> the data into it, or get the data out;

The client does not give the container to the server, as it would
then not be able to get it back.  The client gives the server access
to the container, including the right to uninterruptibly lock it for
a limited period of time.  (This way the server can be sure that the
client will neither remove the container while the server is filling
it nor corrupt it before the server takes a copy into its cache, and
the client knows that it will eventually get its resources back.)
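
Schematically, the protocol has this shape (again, the names and
signatures here are made up to illustrate it, not taken from the
actual interface):

    typedef struct container *container_t;
    typedef int task_t;

    /* Hypothetical stubs.  */
    extern void container_share (container_t c, task_t server,
                                 int timeout_usec);
    extern void container_lock (container_t c);
    extern void container_unlock (container_t c);

    /* Client: grant FS access to C, including the right to hold an
       uninterruptible lock on it for at most a millisecond.  */
    static void
    client_read (container_t c, task_t fs)
    {
      container_share (c, fs, 1000);
      /* ... send the read request to FS and wait for the reply ... */
    }

    /* Server: while the lock is held, the client cannot pull the
       memory out from under us; once we unlock (or the timeout
       expires), the client can reclaim it.  */
    static void
    server_fill (container_t c)
    {
      container_lock (c);
      /* ... copy the data from the cache into C ... */
      container_unlock (c);
    }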

> the server would then give the
> container back to the client; the client would then probably have to
> dump the container that was used for the transaction (I'm assuming
> that there will be some memory pressure here).

Containers don't need to be dumped.  They are designed to be low
overhead and could be reused (but needn't be).
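
So a client doing a lot of I/O can just keep one container around.
Continuing with the hypothetical names from the sketches above:

    extern error_t read_block (task_t fs, int block, container_t c);
    extern void container_destroy (container_t c);

    static void
    read_file (task_t fs, int nblocks)
    {
      container_t c;
      if (container_create_committed (&c, 16))
        return;
      for (int i = 0; i < nblocks; i++)
        read_block (fs, i, c);   /* The same container every time.  */
      container_destroy (c);
    }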

> Assuming I've got all that right, there will be quite a few trips
> through the kernel involved.  For a basic file system operation
> there's going to be, at very least, three processes involved in
> getting a block to disk - the client, file-system and device driver.

And physmem, of course.
 
> If these are all going to have to have an extra journey into physmem
> to move the data around we're looking at least 10 context switches
> between processes before we've even done anything interesting.

If you go to disk, it doesn't matter how slow the IPC path is; the
disk dominates.  The fast path is the one where the data is already
in core.
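
For concreteness, the in-core path for a read looks roughly like this
(the shape of it, anyway, not a precise accounting):

    client  -> fs        request, granting access to the container
    fs      -> physmem   fill the container from the cache
    physmem -> fs        done
    fs      -> client    reply

Four IPCs, and the device driver never runs at all.  The ten-plus
context switches only show up when you miss, and then the disk dwarfs
them anyway.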

>  The reason for limiting
> the operation to a single CPU is that the operation is fundamentally
> serial in nature, so moving work across to other CPUs is just going
> to give them lots of unnecessary work to do.

Unless a thread is bound to a specific CPU, it is unlikely you will
get any cross-CPU IPCs.  You may change CPUs if you are preempted (or
block), but that is different.

>  If things actually end
> up blocking, however, like if we start having to wait for a block
> to come back from the disk, then we can start taking over work from
> other processors.  

Once you go to disk, the overhead from executing code is negligible.
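
Back of the envelope: a disk access is on the order of 10 ms, i.e.
10,000 us, while an L4 IPC is on the order of 1 us.  Even twenty
extra IPCs add about 20 us, or 0.2% of the disk access.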

I gave a presentation at Waterlooo two years ago about the virtual
memory subsystem; you can find it here [1].  There is no text to go
along with it, so you will have to ask questions, but the diagrams
should help.

Neal

[1] http://web.walfield.org/pub/people/neal/papers/better-best-effort-20021026/



