
From: Igor Mammedov
Subject: Re: [Qemu-devel] [RFC] Virt machine memory map
Date: Mon, 20 Jul 2015 15:30:06 +0200

On Mon, 20 Jul 2015 13:23:45 +0200
Alexander Graf <address@hidden> wrote:

> On 07/20/15 11:41, Peter Maydell wrote:
> > On 20 July 2015 at 09:55, Pavel Fedin <address@hidden> wrote:
> >>   Hello!
> >>
> >>   In our project we are working on very fast paravirtualized network I/O
> >> drivers based on ivshmem. We successfully got ivshmem working on ARM,
> >> though only with one hack.
> >> Currently we have:
> >> --- cut ---
> >>      [VIRT_PCIE_MMIO] =          { 0x10000000, 0x2eff0000 },
> >>      [VIRT_PCIE_PIO] =           { 0x3eff0000, 0x00010000 },
> >>      [VIRT_PCIE_ECAM] =          { 0x3f000000, 0x01000000 },
> >>      [VIRT_MEM] =                { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
> >> --- cut ---
> >>   And the MMIO region is not big enough for us, because we want a 1GB
> >> mapping for a PCI device. To make it work, we modify the map as follows:
> >> --- cut ---
> >>      [VIRT_PCIE_MMIO] =          { 0x10000000, 0x7eff0000 },
> >>      [VIRT_PCIE_PIO] =           { 0x8eff0000, 0x00010000 },
> >>      [VIRT_PCIE_ECAM] =          { 0x8f000000, 0x01000000 },
> >>      [VIRT_MEM] =                { 0x90000000, 30ULL * 1024 * 1024 * 1024 },
> >> --- cut ---
> >>   The question is: how could we upstream this? I believe modifying the
> >> 32-bit virt memory map this way is not good. Would it be OK to have a
> >> different memory map for 64-bit virt?
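
As a self-contained sketch of what those entries describe (uint64_t stands in
for QEMU's hwaddr so the snippet compiles on its own; the index names follow
the tables above, the rest is illustrative):

--- cut ---
#include <inttypes.h>
#include <stdio.h>

/* Each memmap entry in hw/arm/virt.c is a { base, size } pair describing one
 * region of the guest-physical address space. */
typedef struct MemMapEntry {
    uint64_t base;
    uint64_t size;
} MemMapEntry;

enum { VIRT_PCIE_MMIO, VIRT_PCIE_PIO, VIRT_PCIE_ECAM, VIRT_MEM };

static const MemMapEntry memmap[] = {
    [VIRT_PCIE_MMIO] = { 0x10000000, 0x2eff0000 },
    [VIRT_PCIE_PIO]  = { 0x3eff0000, 0x00010000 },
    [VIRT_PCIE_ECAM] = { 0x3f000000, 0x01000000 },
    [VIRT_MEM]       = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
};

int main(void)
{
    /* The 32-bit PCIe MMIO window is just under 752MB, so a 1GB BAR cannot
     * fit without either moving RAM up (as in the second table) or adding a
     * second window elsewhere. */
    printf("PCIe MMIO window: %" PRIu64 " MB\n",
           memmap[VIRT_PCIE_MMIO].size >> 20);
    return 0;
}
--- cut ---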
> > I think the theory we discussed at the time we put in the PCIe
> > device was that if we wanted this we'd add support for the other
> > PCIe memory window (which would then live somewhere above 4GB).
> > Alex, can you remember what the idea was?
> 
> Yes, pretty much. It would put an upper bound on the amount of RAM
> we're able to support, but at least we would be able to support big
> MMIO regions like the one ivshmem needs.
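
A rough sketch of what such a second window could look like in the virt
memmap (the index name, base and size here are purely illustrative, not an
agreed-on layout):

--- cut ---
    /* Hypothetical 64-bit PCIe MMIO window, placed at a fixed address well
     * above any plausible amount of RAM so the existing 32-bit part of the
     * map stays untouched. */
    [VIRT_PCIE_MMIO_HIGH] = { 0x8000000000ULL, 0x8000000000ULL }, /* 512GB @ 512GB */
--- cut ---

The guest would then see it as an additional 64-bit entry in the PCI host
bridge's "ranges" property, next to the existing 32-bit window.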
> 
> I'm not really sure where to put it though. Depending on your kernel
> config, Linux supports somewhere between 39 and 48 or so bits of physical
> address space. And I'd rather not crawl into the PCI hole rat hole that
> we have on x86 ;).
> 
> We could of course also put it just above RAM - but then our device tree 
> becomes really dynamic and heavily dependent on -m.
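
Concretely, that dependence would look something like this at machine-init
time (hypothetical names, just to illustrate why the window's address stops
being a constant once it sits above RAM):

--- cut ---
    /* Hypothetical dynamic placement: the high MMIO window starts wherever
     * RAM ends (rounded up to 1GB), so it moves with -m and the device tree
     * node describing it has to be regenerated from ram_size on every boot. */
    hwaddr high_mmio_base = QEMU_ALIGN_UP(memmap[VIRT_MEM].base +
                                          machine->ram_size, 1ULL << 30);
--- cut ---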
On x86 we've made everything that is not mapped to RAM/MMIO fall through to
the PCI address space, see pc_pci_as_mapping_init().

So we don't have explicitly mapped PCI regions there anymore, but we are
still thinking in terms of the PCI hole/PCI ranges when it comes to the ACPI
PCI bus description, where one needs to specify the ranges available to the
bus in its _CRS.
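
Roughly, that fall-through works by mapping the whole PCI address space
underneath everything else at a lower priority, along these lines (a sketch
of the idea with an illustrative function name, not a verbatim copy of pc.c):

--- cut ---
#include "exec/memory.h"

/* Add the PCI address space as a lower-priority subregion covering the whole
 * system address space: any guest-physical address not claimed by RAM or an
 * explicitly mapped MMIO region falls through to PCI. */
static void pci_as_fallthrough_init(MemoryRegion *system_memory,
                                    MemoryRegion *pci_address_space)
{
    /* Priority -1 so RAM and other regions (priority 0 and up) win. */
    memory_region_add_subregion_overlap(system_memory, 0x0,
                                        pci_address_space, -1);
}
--- cut ---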

> 
> >
> > But to be honest I think we weren't expecting anybody to need
> > 1GB of PCI MMIO space unless it was a video card...
> 
> Ivshmem was actually the most likely target I could think of that would
> require big MMIO regions ;).
> 
> 
> Alex
> 
> 



