Re: [PATCH v2 hurd] pci: Add RPCs for taking and freeing io ports by BAR

From: Damien Zammit
Subject: Re: [PATCH v2 hurd] pci: Add RPCs for taking and freeing io ports by BAR
Date: Fri, 21 Jul 2023 01:50:22 +0000

Hi Joan,

On 21/7/23 06:38, Joan Lledó wrote:
> I think your design is not compatible with nested arbiters.

Actually it is; I have also written a patch for libpciaccess.
Basically, we need to reduce the number of io ports we request down
to the minimum required to access a device, so that gnumach can
lock access to any overlapping range of io ports.

In this way, for example, the x86_enable_io() call can instead request
just the range 0xcf8 - 0xcff, and it will need nothing more than that to
access the pci io space.
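For reference, that range covers the two registers of PCI configuration mechanism #1: CONFIG_ADDRESS at 0xcf8 and CONFIG_DATA at 0xcfc. A minimal sketch of the address encoding (the macro and function names here are illustrative, not from any existing patch):

```c
#include <assert.h>
#include <stdint.h>

/* PCI configuration mechanism #1: a dword written to CONFIG_ADDRESS
 * (port 0xcf8) selects bus/device/function/register, and the data is
 * then read or written through CONFIG_DATA (port 0xcfc).  Everything
 * an arbiter needs fits inside 0xcf8 - 0xcff. */
#define PCI_CFG_ADDR 0xcf8
#define PCI_CFG_DATA 0xcfc

static uint32_t
pci_config_address (uint8_t bus, uint8_t dev, uint8_t func, uint8_t reg)
{
  return 0x80000000u                       /* enable bit */
         | ((uint32_t) bus << 16)
         | ((uint32_t) (dev & 0x1f) << 11)
         | ((uint32_t) (func & 0x07) << 8)
         | (reg & 0xfc);                   /* dword-aligned offset */
}
```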

Gnumach currently locks access to PCI_CFG1 whenever that range is taken,
but it could instead store a bitmap of all io ports and lock each one
individually whenever it is taken by a process. Access to io ports would
then be granted "first in, best dressed".
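The bitmap scheme could be sketched like this (a standalone model of the idea, not gnumach code; all names are hypothetical):

```c
#include <stdint.h>

/* One bit per io port (0x0000 - 0xffff).  A range is granted only if
 * every port in it is currently free: "first in, best dressed". */
#define IO_PORTS 0x10000

static uint8_t io_port_map[IO_PORTS / 8];

static int
port_taken (uint16_t p)
{
  return (io_port_map[p >> 3] >> (p & 7)) & 1;
}

static void
set_port (uint16_t p, int on)
{
  if (on)
    io_port_map[p >> 3] |= 1u << (p & 7);
  else
    io_port_map[p >> 3] &= ~(1u << (p & 7));
}

/* Returns 0 on success, -1 if any port in [from, to] is already taken. */
static int
io_range_take (uint16_t from, uint16_t to)
{
  for (uint32_t p = from; p <= to; p++)
    if (port_taken (p))
      return -1;
  for (uint32_t p = from; p <= to; p++)
    set_port (p, 1);
  return 0;
}

static void
io_range_free (uint16_t from, uint16_t to)
{
  for (uint32_t p = from; p <= to; p++)
    set_port (p, 0);
}
```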

The idea is that i386_io_perm_create is called first via the arbiter,
and the ioperm mach port is returned to the caller IF the range is
untaken; the caller task can then call i386_io_perm_modify on that port
to enable and disable the io ports at will. Any task that requests a
range of io ports will have that set locked in the kernel, as long as
the ports were free at the time of the i386_io_perm_create call.
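From the caller's side, the protocol might look like the sketch below. i386_io_perm_create and i386_io_perm_modify are the real GNU Mach RPC names; here they are replaced by illustrative stubs modelling a single locked range, so the control flow is self-contained:

```c
/* Stand-ins for Mach types; on the Hurd these come from <mach.h>
 * and the mach_i386 interface. */
typedef unsigned int mach_port_t;
typedef unsigned int kern_return_t;
typedef unsigned short io_port_t;
#define KERN_SUCCESS 0
#define KERN_PROTECTION_FAILURE 2
#define MACH_PORT_NULL 0

/* Illustrative stub of the kernel side: one taken range at most. */
static io_port_t taken_from, taken_to;
static int taken;

static kern_return_t
i386_io_perm_create (mach_port_t master, io_port_t from, io_port_t to,
                     mach_port_t *io_perm)
{
  (void) master;
  if (taken && from <= taken_to && to >= taken_from)
    return KERN_PROTECTION_FAILURE;  /* overlaps a locked range */
  taken = 1;
  taken_from = from;
  taken_to = to;
  *io_perm = 42;                     /* pretend ioperm port name */
  return KERN_SUCCESS;
}

static kern_return_t
i386_io_perm_modify (mach_port_t task, mach_port_t io_perm, int enable)
{
  (void) task; (void) io_perm; (void) enable;
  return KERN_SUCCESS;               /* toggles the task's io bitmap */
}

static int
caller_flow (void)
{
  mach_port_t perm = MACH_PORT_NULL;
  /* 1. Ask for just the PCI config ports. */
  if (i386_io_perm_create (0, 0xcf8, 0xcff, &perm) != KERN_SUCCESS)
    return -1;
  /* 2. Enable them, do the accesses, then disable. */
  i386_io_perm_modify (0, perm, 1);
  /* ... inl/outl on 0xcf8 - 0xcff ... */
  i386_io_perm_modify (0, perm, 0);
  return 0;
}
```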

When the port is deallocated, we probably need to make the no-senders
notification unset the io port range in gnumach. But that is for the
future, when we are no longer using ioperm() calls and all io port
access goes via the pci-arbiter.

The struct pci_io_handle in libpciaccess could simply gain a member,
conditional on __GNU__ being defined, to hold the ioperm mach port for
each io request.
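Something along these lines; the real struct pci_io_handle lives in libpciaccess's private headers and carries more platform-specific members, so treat the layout and the ioperm_port member name as illustrative:

```c
#include <stdint.h>

typedef uint64_t pciaddr_t;

#ifdef __GNU__
typedef unsigned int mach_port_t;  /* really from <mach.h> */
#endif

/* Illustrative layout only; not the full libpciaccess definition. */
struct pci_io_handle
{
  pciaddr_t base;                  /* start of the io range */
  pciaddr_t size;                  /* length of the range */
#ifdef __GNU__
  mach_port_t ioperm_port;         /* ioperm port returned by the arbiter
                                      for this io request (hypothetical
                                      member name) */
#endif
};
```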

Hope that makes sense.

