Re: [Qemu-devel] [PATCH v2 2/3] hw/iommu: enable iommu with -device


From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH v2 2/3] hw/iommu: enable iommu with -device
Date: Mon, 13 Jun 2016 21:04:43 +0800
User-agent: Mutt/1.5.24 (2015-08-30)

On Mon, Jun 13, 2016 at 01:20:11PM +0300, Marcel Apfelbaum wrote:
> On 06/12/2016 07:27 AM, Peter Xu wrote:
> >On Thu, Jun 02, 2016 at 11:15:54PM +0300, Marcel Apfelbaum wrote:
> >
> >[...]
> >
> >>  static void vtd_realize(DeviceState *dev, Error **errp)
> >>  {
> >>+    PCIBus *bus = PC_MACHINE(qdev_get_machine())->bus;
> >>      IntelIOMMUState *s = INTEL_IOMMU_DEVICE(dev);
> >>
> >>      VTD_DPRINTF(GENERAL, "");
> >>@@ -2029,6 +2043,9 @@ static void vtd_realize(DeviceState *dev, Error **errp)
> >>      s->vtd_as_by_busptr = g_hash_table_new_full(vtd_uint64_hash, vtd_uint64_equal,
> >>                                                g_free, g_free);
> >>      vtd_init(s);
> >>+    sysbus_mmio_map(SYS_BUS_DEVICE(s), 0, Q35_HOST_BRIDGE_IOMMU_ADDR);
> >>+    bus->iommu_fn = vtd_host_dma_iommu;
> >>+    bus->iommu_opaque = dev;
> >
> >Here, shall we still use pci_setup_iommu() to keep the two fields
> >private for pci framework?
> >
> 
> I've already spotted it and took care of it, thanks :) !

Cool. :)
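
Just for reference, a rough sketch of what I meant -- only against the
quoted hunk, assuming the pci_setup_iommu(bus, fn, opaque) prototype in
the PCI core; the code in your reworked series may of course look
different:

static void vtd_realize(DeviceState *dev, Error **errp)
{
    PCIBus *bus = PC_MACHINE(qdev_get_machine())->bus;
    IntelIOMMUState *s = INTEL_IOMMU_DEVICE(dev);

    /* ... hash table setup and vtd_init() as in your patch ... */
    vtd_init(s);
    sysbus_mmio_map(SYS_BUS_DEVICE(s), 0, Q35_HOST_BRIDGE_IOMMU_ADDR);

    /*
     * Register the translation callback through the PCI layer instead
     * of writing bus->iommu_fn/iommu_opaque directly, so those fields
     * stay private to hw/pci.
     */
    pci_setup_iommu(bus, vtd_host_dma_iommu, dev);
}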

Btw, have we removed the MachineState.iommu variable as well?

> 
> >Btw, I am rebasing the Intel IR work onto this patchset, but encountered
> >issues (guest hangs, or erroneous interrupts) when the guest specifies more
> >than 1 vcpu (everything is cool as long as vcpu=1). Maybe something went
> >wrong during the rebase; still investigating. Please shout
> >if you have any clue.
> >
> 
> I am running with 2 vcpus and I didn't see any problem; I'll let you
> know if I can reproduce it.

My fault during the rebase. It's very easy to lose lines of code during
a rebase, especially when a function is moved from one place to
another and I had changes in that function... It's all good
now. Thanks!

-- peterx


