
Re: PCI arbiter memory mapping

From: Joan Lledó
Subject: Re: PCI arbiter memory mapping
Date: Tue, 17 Aug 2021 20:46:56 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.12.0


I'm sorry I can't follow your discussion, I only know about the small part of the kernel I worked on.

On 16/8/21 at 23:07, Sergey Bugaev wrote:

> I don't think I understand enough about the situation. It would help
> if you or Joan were to kindly give me some more context :)

Basically, libpciaccess gets a memory object at [1], and later I need it in the arbiter at [2] to create a proxy over it.

To do that, the current code stores it in a structure I created called pci_user_data [3], and then the arbiter reads it back from that structure [4].
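The mechanism above can be sketched roughly as follows. The structure name comes from Joan's branch, but its exact fields and the helper functions are assumptions for illustration; `user_data` is the `intptr_t` hook that `struct pci_device` provides for clients:

```c
/* Sketch of the current approach under discussion: libpciaccess stashes
   the pager in a per-device structure hung off pci_device's user_data,
   and the arbiter later reads it back.  Field and helper names are
   hypothetical; error handling is trimmed.  */
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <mach.h>
#include <pciaccess.h>

struct pci_user_data
{
  memory_object_t memory_object;  /* pager from device_map()/io_map() */
};

/* libpciaccess side: remember the pager after mapping a region.  */
static int
store_pager (struct pci_device *dev, memory_object_t pager)
{
  struct pci_user_data *data = calloc (1, sizeof *data);
  if (!data)
    return ENOMEM;
  data->memory_object = pager;
  dev->user_data = (intptr_t) data;
  return 0;
}

/* Arbiter side: recover the pager to build a proxy over it.  */
static memory_object_t
fetch_pager (struct pci_device *dev)
{
  struct pci_user_data *data = (struct pci_user_data *) dev->user_data;
  return data ? data->memory_object : MACH_PORT_NULL;
}
```

The thread's goal is precisely to avoid this side channel and obtain the pager at [4] by some other means.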

> What's the issue you're trying to solve?

We are looking for another way to get the pager at [4], so we can get rid of that structure.

> As I understand it, there's the PCI arbiter, which is a translator
> that arbitrates access to PCI, which is a hardware bus that various
> devices can be connected to.

Yes, and the arbiter can play two roles: root arbiter, which uses the x86 module in libpciaccess; and nested arbiter, which uses the hurd module in libpciaccess.

> The hardware devices connected via PCI are available (to the PCI arbiter)
> as Mach devices

Actually, the devices are available to the arbiter as libpciaccess devices.

> it's possible to use device_map () and then vm_map () to access the
> device memory.

Yes, the root arbiter uses device_map() on "mem" to get the memory object; nested arbiters use io_map() on the region files exposed by the root arbiter to get theirs.

Both then pass the memory object to vm_map() to map the range.
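For the root-arbiter path, the two-step mapping can be sketched like this. It is only a sketch assuming a Mach host with the "mem" device and the standard device_map()/vm_map() RPCs; the function name and error handling are invented for illustration:

```c
/* Sketch: how a root arbiter can obtain a pager for device memory and
   map it.  Runs only on GNU Mach; error paths leak ports for brevity.  */
#include <mach.h>
#include <device/device.h>

kern_return_t
map_pci_region (mach_port_t device_master,
                vm_offset_t phys_offset, vm_size_t size,
                vm_address_t *addr, memory_object_t *pager)
{
  device_t memdev;
  kern_return_t err;

  /* Open the "mem" device, which covers physical memory.  */
  err = device_open (device_master, D_READ | D_WRITE, "mem", &memdev);
  if (err)
    return err;

  /* device_map() hands back a memory object (pager) for the device.  */
  err = device_map (memdev, VM_PROT_READ | VM_PROT_WRITE,
                    phys_offset, size, pager, 0);
  if (err)
    return err;

  /* vm_map() maps that memory object into our own address space.  */
  *addr = 0;
  return vm_map (mach_task_self (), addr, size, 0,
                 1 /* anywhere */, *pager, 0, 0 /* don't copy */,
                 VM_PROT_READ | VM_PROT_WRITE,
                 VM_PROT_READ | VM_PROT_WRITE,
                 VM_INHERIT_NONE);
}
```

A nested arbiter would follow the same shape, except the pager comes from io_map() on a region file served by the root arbiter instead of from device_map().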

> Then there's libpciaccess, whose Hurd backend uses the
> files exported by the PCI arbiter to get access to the PCI,

Only nested arbiters; as I said, the root arbiter uses the x86 backend.

> Naturally its user can request read-only or
> read-write mapping, but the PCI arbiter may decide to only return a
> read-only memory object (a proxy to the device pager), in which case
> libpciaccess should deallocate the port and return EPERM, or the PCI
> arbiter may return the real device pager.

Yes, but that's not really relevant to our problem; I was talking about a bug I found.
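For context, the check that bug calls for might look like the fragment below. It is a hypothetical sketch: io_map() on the Hurd returns separate read and write memory objects, and `PCI_DEV_MAP_FLAG_WRITABLE` is the libpciaccess mapping flag; the surrounding variable names are assumptions:

```c
/* Sketch of the fix for the bug Joan mentions: if the caller asked for a
   writable mapping but the arbiter only granted a read-only object, the
   client should give the port back and fail with EPERM rather than map
   it read-only behind the caller's back.  */
#include <errno.h>
#include <mach.h>
#include <hurd.h>

static int
check_writable (io_t region_file, unsigned map_flags,
                mach_port_t *robj, mach_port_t *wobj)
{
  kern_return_t err = io_map (region_file, robj, wobj);
  if (err)
    return err;

  if ((map_flags & PCI_DEV_MAP_FLAG_WRITABLE)
      && *wobj == MACH_PORT_NULL)
    {
      /* Write access requested but only a read-only object granted.  */
      mach_port_deallocate (mach_task_self (), *robj);
      return EPERM;
    }
  return 0;
}
```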

[1] https://gitlab.freedesktop.org/jlledom/libpciaccess/-/blob/hurd-device-map/src/x86_pci.c#L275
[2] http://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/pci-arbiter/netfs_impl.c?h=jlledom-pci-memory-map#n613
[3] https://gitlab.freedesktop.org/jlledom/libpciaccess/-/blob/hurd-device-map/src/x86_pci.c#L287
[4] http://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/pci-arbiter/netfs_impl.c?h=jlledom-pci-memory-map#n605
