qemu-devel

Re: [Qemu-devel] [PATCH v2] spapr_iommu: Enable multiple TCE requests


From: Alexander Graf
Subject: Re: [Qemu-devel] [PATCH v2] spapr_iommu: Enable multiple TCE requests
Date: Wed, 21 May 2014 23:54:46 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:24.0) Gecko/20100101 Thunderbird/24.5.0


On 21.05.14 18:03, Alexey Kardashevskiy wrote:
On 05/22/2014 01:23 AM, Alexey Kardashevskiy wrote:
Currently only a single TCE entry per request is supported (H_PUT_TCE).
However, the PAPR+ specification allows multiple-entry requests such as
H_PUT_TCE_INDIRECT and H_STUFF_TCE. By requiring fewer transitions to the
host kernel via ioctls, support for these calls can accelerate IOMMU operations.

This implements H_STUFF_TCE and H_PUT_TCE_INDIRECT.

This advertises the "multi-tce" capability to the guest if the host kernel
supports it (KVM_CAP_SPAPR_MULTITCE) or if the guest is running in TCG mode.

Signed-off-by: Alexey Kardashevskiy <address@hidden>
---
Changes:
v2:
* multi-tce enabled explicitly for TCG, it was implicit before
* kvmppc_spapr_use_multitce() does not handle TCG anymore

v1:
* removed checks for liobn as the check is already performed in
spapr_tce_find_by_liobn
* added hcall-multi-tce if the host kernel supports the capability
---
  hw/ppc/spapr.c       |  3 ++
  hw/ppc/spapr_iommu.c | 78 ++++++++++++++++++++++++++++++++++++++++++++++++++++
  target-ppc/kvm.c     |  7 +++++
  target-ppc/kvm_ppc.h |  6 ++++
  trace-events         |  2 ++
  5 files changed, 96 insertions(+)
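
To make the performance argument concrete, here is a hypothetical
guest-side view of the two paths (modeled on Linux's pseries
plpar_hcall_norets() wrappers; the variables and the tce_list page
handling are illustrative fragments, not part of this patch):

    /* Without multi-tce: one hypercall, i.e. one guest exit and
     * potentially one host ioctl transition, per 4K IOMMU page. */
    for (i = 0; i < npages; i++) {
        plpar_hcall_norets(H_PUT_TCE, liobn,
                           ioba + i * 4096, tces[i]);
    }

    /* With multi-tce: up to 512 TCEs handed over at once as a list of
     * 64-bit entries held in a single guest-physical page. */
    plpar_hcall_norets(H_PUT_TCE_INDIRECT, liobn, ioba,
                       __pa(tce_list_page), npages);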

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 3b28211..697fba6 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -498,6 +498,9 @@ static void *spapr_create_fdt_skel(hwaddr initrd_base,
      /* RTAS */
      _FDT((fdt_begin_node(fdt, "rtas")));
+    if (!kvm_enabled() || kvmppc_spapr_use_multitce()) {
+        add_str(hypertas, "hcall-multi-tce");
+    }
      _FDT((fdt_property(fdt, "ibm,hypertas-functions", hypertas->str,
                         hypertas->len)));
      g_string_free(hypertas, TRUE);
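
The quoted spapr_iommu.c hunk below shows h_put_tce_indirect() but is cut
off before h_stuff_tce(). For orientation, the H_STUFF_TCE side reduces to
writing one TCE value into npages consecutive entries; a minimal sketch,
reusing the existing put_tce_emu() helper (the exact bounds checks in the
actual patch may differ):

    static target_ulong h_stuff_tce(PowerPCCPU *cpu,
                                    sPAPREnvironment *spapr,
                                    target_ulong opcode,
                                    target_ulong *args)
    {
        int i;
        target_ulong liobn = args[0];
        target_ulong ioba = args[1];
        target_ulong tce_value = args[2];
        target_ulong npages = args[3];
        target_ulong ret = H_PARAMETER;
        sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);

        if (!tcet) {
            return H_PARAMETER;
        }

        ioba &= ~SPAPR_TCE_PAGE_MASK;

        /* Store the same TCE value into npages consecutive entries. */
        for (i = 0; i < npages; ++i, ioba += SPAPR_TCE_PAGE_SIZE) {
            ret = put_tce_emu(tcet, ioba, tce_value);
            if (ret) {
                break;
            }
        }

        return ret;
    }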
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index 72493d8..ab5037c 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -224,6 +224,82 @@ static target_ulong put_tce_emu(sPAPRTCETable *tcet, target_ulong ioba,
      return H_SUCCESS;
  }
+static target_ulong h_put_tce_indirect(PowerPCCPU *cpu,
+                                       sPAPREnvironment *spapr,
+                                       target_ulong opcode, target_ulong *args)
+{
+    int i;
+    target_ulong liobn = args[0];
+    target_ulong ioba = args[1];
+    target_ulong ioba1 = ioba;
+    target_ulong tce_list = args[2];
+    target_ulong npages = args[3];
+    target_ulong ret = H_PARAMETER;
+    sPAPRTCETable *tcet = spapr_tce_find_by_liobn(liobn);
+    CPUState *cs = CPU(cpu);
+
+    if (!tcet) {
+        return H_PARAMETER;
+    }
+
+    if (npages > 512) {
+        return H_PARAMETER;
+    }
+
+    ioba &= ~SPAPR_TCE_PAGE_MASK;
+    tce_list &= ~SPAPR_TCE_PAGE_MASK;
+
+    for (i = 0; i < npages; ++i, ioba += SPAPR_TCE_PAGE_SIZE) {
+        target_ulong tce = ldq_phys(cs->as, tce_list +
+                                    i * sizeof(target_ulong));
Sorry, it is too late here and I forgot to comment :)

I cannot use rtas_ld straight away as it is 32-bit and here I need 64-bit.
Anyway, this is a hypercall and it is called from guest virtual mode, so I
do not think the RTAS rule about masking the top address bits applies here.
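
For comparison, rtas_ld() at the time looked roughly like this (simplified
from hw/ppc/spapr.h of that era): it loads a 32-bit big-endian cell and
applies the RTAS real-mode masking of the top address bits, neither of
which fits a 64-bit TCE list entry:

    /* Roughly what rtas_ld() does (simplified): a 32-bit BE load with
     * the RTAS real-mode top-bit masking applied to the address. */
    static inline uint32_t rtas_ld(target_ulong phys, int n)
    {
        return ldl_be_phys(&address_space_memory,
                           ppc64_phys_to_real(phys + 4 * n));
    }

    /* A TCE list entry is a full 64-bit value at a guest-physical
     * address, hence the direct 64-bit load in the patch: */
    target_ulong tce = ldq_phys(cs->as, tce_list + i * sizeof(target_ulong));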

The PAPR+ spec says about it:

uint64 TCE, /* The logical address of a page (4 K long on a 4 K boundary)
of TCE contents to be stored in the TCE table (contains the logical
address of the storage page to be mapped) */

I believe "logical address" in this context means a "guest physical" address.
Does it help? It does not help me :)

Yeah, I think it's safe to assume that "logical address" basically is our guest physical. So using ldq_phys is correct.
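
For the record, that reading matches how the value is consumed: only the
page-aligned part of the 64-bit TCE is the mapping target, while the low
bits carry the access permissions (the bit assignments below follow the
TCE_PCI_READ/TCE_PCI_WRITE convention from Linux and are shown here purely
for illustration):

    /* Illustrative decode of one 64-bit TCE entry: */
    uint64_t gpa  = tce & ~SPAPR_TCE_PAGE_MASK; /* guest-physical page */
    bool read_ok  = tce & 0x1;                  /* device read allowed  */
    bool write_ok = tce & 0x2;                  /* device write allowed */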


Alex



