l4-hurd

DMA vs. Persistence


From: Jonathan S. Shapiro
Subject: DMA vs. Persistence
Date: Thu, 13 Oct 2005 14:45:33 -0400

Neal's question unfortunately stepped into a complication: in a
persistent system, disk I/O is handled specially, because the disk has a
relationship to the system that is unlike any other device.

In the following discussion, I'm going to talk about a simplified
picture of EROS object I/O. What I am ignoring is the additional
complications introduced by the checkpoint area. You can read about
those here if you care:

  http://www.eros-os.org/papers/storedesign2002.pdf


When an application allocates a page from a space bank, it gets back a
capability. This capability contains an OID. When the application
invokes or uses the page capability, the kernel gets asked to resolve
the object (in this case, the page).

If the object is in memory, it will appear on a hash table. We look it
up and we are done.

** In old, monolithic EROS:

In monolithic EROS, the disk drivers are compiled into the kernel.

The kernel maintains a table with entries of the form:

  ( start OID, end OID, drive id, start LBA )

Each of these entries describes a sequential region on the hard disk.
The obvious relative offset adjustment is performed to compute the
starting LBA, an in-memory page frame in the page cache is selected and
marked "I/O pinned", and a disk I/O is initiated. This is basically just
like a file block cache, and DMA works in the usual way.

Once the sector-level I/O is complete, the frame is added to the hash
table and the process that needed the object is restarted. This time the
hash lookup succeeds and we are done.

So DMA in this case isn't really complicated.


** In new, microkernel EROS:

In the current version of EROS (and CapROS -- an EROS derivative),
persistence is not implemented. I had just finished moving to a
user-mode driver model, and I hadn't yet reconnected the persistence
logic, but here is how it was intended to work.

When the kernel needs an object, it upcalls a message to the object
server. The object server runs in non-persistent memory, and **it is
part of the TCB**.

The object server uses a highly sensitive capability to request a page
cache page frame for I/O. The kernel clears a page cache frame and
returns a page frame capability to the object server. The page frame is
marked "I/O pinned". The information disclosed by the kernel includes
the physical address of the frame.

This physical address can then be used by the user-level driver for DMA.


The details of this protocol probably need revision. In particular,
there is a potential livelock in the simplified version above if it is
not implemented very carefully.

The important things to know here are that:

  1. The object server runs with fixed resources.
  2. The object server runs from preloaded, non-persistent pages
     and nodes.
  3. Preloaded pages and nodes remain resident until
     explicitly destroyed.

These statements are true for drivers as well.

shap




