l4-hurd

Re: Memory allocation/sharing when DMA operations used..


From: Markus Kode Kaarn
Subject: Re: Memory allocation/sharing when DMA operations used..
Date: Fri, 1 Apr 2005 14:39:51 -0600
User-agent: KMail/1.7.2

On Friday 01 April 2005 03:22, you wrote:
>
> Markus Kode Kaarn wrote:
> > Hey everybody,
> >
> > For the last few days I have been thinking about how device drivers
> > should handle DMA requests, and whether access to DMA channels should
> > be considered a privilege of PLMs only, or (preferably) of a
> > centralized DMA driver through which all DMA requests would be
> > handled. First, I think a DD that receives a DMA request from a user
> > for the first time should allocate/map memory for itself by requesting
> > it from physmem, and then share this region of memory with the task
> > that requested the DMA (as far as I can tell, these will be device
> > drivers). Both read-only and writable memory would be shared; which
> > one is given depends on whether the task wants to receive (read) data
> > or send (write) it.
> > Here I think it is a good decision that only a DMA driver should
> > allocate and share memory regions with the tasks requesting the
> > operations, because allowing the user to supply a buffer is not very
> > clever: the user can die or give its pages back to physmem, and at the
> > time of the receive operation on the DMA channel the memory region
> > could already be in use by some other task.
> >
> > This probably applies not only to DMA operations; any system component
> > that provides services to many tasks could adopt this approach.
> >
> > At the moment I don't know much about memory allocation/sharing in
> > hurd-l4, so I can't be more specific on this or supply code.
> >
> > Comments please.
>
> Hi,
>
> I'm not very familiar with DMA, but I remember that only certain pages
> can be used for DMA.  Or was that only for ISA?  Anyway, if that is not
> the case, things become a lot simpler.
>
> The L4-Hurd design is built around the idea that costs are paid by the
> user (which is a program in this context).  This is true in many
> places, in particular for memory.  For example, if a client wants to
> read data from a filesystem, it supplies a page to the filesystem,
> which then fills it with data.
>
> You suggest leaving that design behind for DMA.  If there are special
> "DMA pages", that doesn't sound strange: otherwise a DoS attack would
> be possible by allocating many DMA pages.  Of course there could be a
> quota system, which might be enough to prevent this.  If all pages can
> be used for DMA, I don't see any reason not to use the normal approach
> of the client providing the memory for the operation.
>
> You say the user can die on you, leaving a big mess.  This is not a
> problem, because the container is shared by the driver and the user.
> If the user dies, the container is not deallocated until the driver
> lets go of it, which it shouldn't do before the DMA operation is
> finished (a rough sketch of this follows right below the quote).  It is
> not unusual for device drivers to defer freeing a dead client's
> resources until the device has finished its operation.  This is not a
> big problem, and I don't think it should be a reason to give up a good
> design decision.
>
> If I misunderstood you, please rephrase.
>
> Thanks,
> Bas
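
To make the container idea above concrete, here is a rough sketch of how
I picture it.  None of the names below (container_alloc, container_ref,
container_unref, dma_start, and so on) are real hurd-l4 or physmem
interfaces; they are made up purely to illustrate the reference-counting
idea, with plain malloc/free standing in for physmem:

#include <stdlib.h>

/* Purely illustrative -- not the real hurd-l4 container interface.
   A "container" here stands for the memory a client hands to a driver
   for a DMA operation.  The driver takes its own reference, so the
   pages stay allocated until the driver drops that reference, even if
   the client dies in the meantime.  */

struct container
{
  void *pages;   /* the memory backing this container */
  size_t size;   /* size of the region in bytes */
  int refs;      /* how many tasks still hold a reference */
};

/* Client side: allocate a container (in reality this would be a
   request to physmem, not malloc).  */
static struct container *
container_alloc (size_t size)
{
  struct container *c = malloc (sizeof *c);
  if (c == NULL)
    return NULL;
  c->pages = malloc (size);
  c->size = size;
  c->refs = 1;                 /* the client holds the first reference */
  return c;
}

static void
container_ref (struct container *c)
{
  c->refs++;
}

static void
container_unref (struct container *c)
{
  if (--c->refs == 0)
    {
      /* Only now does the memory really go back (to physmem).  */
      free (c->pages);
      free (c);
    }
}

/* Driver side: start a DMA transfer using the client's container.
   The driver takes its own reference for the duration of the
   operation.  */
static void
dma_start (struct container *c)
{
  container_ref (c);
  /* ... program the device to transfer into/out of c->pages ... */
}

/* Called when the device signals completion.  */
static void
dma_finished (struct container *c)
{
  /* ... notify the client, if it is still alive ... */
  container_unref (c);         /* the driver lets go of the container */
}

/* Called when the client task dies: this drops only the client's
   reference, so an in-flight DMA operation keeps the memory alive.  */
static void
client_died (struct container *c)
{
  container_unref (c);
}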

No, everything you said sounds fine to me. Now I'm curious: if physmem
asks a task to give back some memory pages, and the task returns pages
that are shared with a number of other tasks, will physmem accept these
shared pages, and if so, how will it go about disassociating the other
tasks from them? I can imagine physmem refusing to accept such pages
and punishing the task for this behaviour (returning pages that are not
clean), say by terminating it. The other possibility is that physmem
tries to force the deallocation of these shared pages from the address
spaces of all the tasks that share them. The second way seems like a
huge performance loss to me.

So, in short, my question is: how will physmem act if, in response to
its give-pages-back signal to a task, it receives a shared page?
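
To make the question concrete, here is a purely illustrative sketch of
the two behaviours I can imagine.  Again, all of the names
(physmem_take_back, task_punish, page_unmap_from_all_sharers, ...) are
invented; I don't know what the real physmem design does, so this only
spells out the two options:

/* Purely illustrative -- invented names, not real physmem code.
   What should physmem do when it asks a task to give pages back and
   the task hands it a page that is still shared with other tasks?  */

enum reclaim_policy
{
  REJECT_AND_PUNISH,   /* refuse shared pages; punish the returning task */
  FORCE_UNSHARE        /* unmap the page from every task that shares it */
};

struct page
{
  int share_count;     /* how many tasks have this page mapped */
  /* ... */
};

struct task;

/* Hypothetical helpers physmem would have internally.  */
extern void task_punish (struct task *t);                  /* e.g. terminate */
extern void page_unmap_from_all_sharers (struct page *p);  /* expensive! */
extern void page_free (struct page *p);

static void
physmem_take_back (struct task *t, struct page *p,
                   enum reclaim_policy policy)
{
  if (p->share_count <= 1)
    {
      /* The easy case: the page is not shared, just reclaim it.  */
      page_free (p);
      return;
    }

  switch (policy)
    {
    case REJECT_AND_PUNISH:
      /* Option 1: returning a still-shared page counts as bad
         behaviour; physmem refuses the page and punishes the task,
         for example by terminating it.  */
      task_punish (t);
      break;

    case FORCE_UNSHARE:
      /* Option 2: physmem forcibly unmaps the page from every task
         that shares it and then reclaims it.  This works, but
         touching every sharer looks like a big performance cost.  */
      page_unmap_from_all_sharers (p);
      page_free (p);
      break;
    }
}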

Thank you.




