From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 05/13] iommu: Make sglists and dma_bdrv helpers use new universal DMA helpers
Date: Wed, 20 Jun 2012 16:21:14 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120329 Thunderbird/11.0.1

On 06/19/2012 01:39 AM, Benjamin Herrenschmidt wrote:
From: David Gibson <address@hidden>

dma-helpers.c contains a number of helper functions for doing
scatter/gather DMA, and various block device related DMA.  Currently,
these directly access guest memory using cpu_physical_memory_*(),
assuming no IOMMU translation.

This patch updates this code to use the new universal DMA helper
functions.  qemu_sglist_init() now takes a DMAContext * to describe
the DMA address space in which the scatter/gather will take place.

We minimally update the callers of qemu_sglist_init() to pass NULL
(i.e. no translation, same as current behaviour).  Some of those
callers should pass something else in some cases to allow proper IOMMU
translation in future, but that will be fixed in later patches.

Cc: Kevin Wolf <address@hidden>
Cc: Michael S. Tsirkin <address@hidden>
Cc: Paolo Bonzini <address@hidden>

Signed-off-by: David Gibson <address@hidden>
Signed-off-by: Benjamin Herrenschmidt <address@hidden>
  dma-helpers.c  |   24 ++++++++++++------------
  dma.h          |    3 ++-
  hw/ide/ahci.c  |    3 ++-
  hw/ide/macio.c |    4 ++--
  hw/pci.h       |    2 +-
  5 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/dma-helpers.c b/dma-helpers.c
index 7971a89..2dc4691 100644
--- a/dma-helpers.c
+++ b/dma-helpers.c
@@ -10,12 +10,13 @@
  #include "dma.h"
  #include "trace.h"

-void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint)
+void qemu_sglist_init(QEMUSGList *qsg, int alloc_hint, DMAContext *dma)
  {
      qsg->sg = g_malloc(alloc_hint * sizeof(ScatterGatherEntry));
      qsg->nsg = 0;
      qsg->nalloc = alloc_hint;
      qsg->size = 0;
+    qsg->dma = dma;
  }

  void qemu_sglist_add(QEMUSGList *qsg, dma_addr_t base, dma_addr_t len)
@@ -74,10 +75,9 @@ static void dma_bdrv_unmap(DMAAIOCB *dbs)
  {
      int i;

      for (i = 0; i < dbs->iov.niov; ++i) {
-        cpu_physical_memory_unmap(dbs->iov.iov[i].iov_base,
-                                  dbs->iov.iov[i].iov_len,
-                                  dbs->dir != DMA_DIRECTION_TO_DEVICE,
-                                  dbs->iov.iov[i].iov_len);
+        dma_memory_unmap(dbs->sg->dma, dbs->iov.iov[i].iov_base,
+                         dbs->iov.iov[i].iov_len, dbs->dir,
+                         dbs->iov.iov[i].iov_len);
@@ -106,7 +106,7 @@ static void dma_complete(DMAAIOCB *dbs, int ret)
  static void dma_bdrv_cb(void *opaque, int ret)
  {
      DMAAIOCB *dbs = (DMAAIOCB *)opaque;
-    target_phys_addr_t cur_addr, cur_len;
+    dma_addr_t cur_addr, cur_len;
      void *mem;

      trace_dma_bdrv_cb(dbs, ret);
@@ -123,8 +123,7 @@ static void dma_bdrv_cb(void *opaque, int ret)
      while (dbs->sg_cur_index < dbs->sg->nsg) {
          cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
          cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
-        mem = cpu_physical_memory_map(cur_addr, &cur_len,
-                                      dbs->dir != DMA_DIRECTION_TO_DEVICE);
+        mem = dma_memory_map(dbs->sg->dma, cur_addr, &cur_len, dbs->dir);
          if (!mem)
              break;
          qemu_iovec_add(&dbs->iov, mem, cur_len);
@@ -209,7 +208,8 @@ BlockDriverAIOCB *dma_bdrv_write(BlockDriverState *bs,

-static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg, bool to_dev)
+static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg,
+                           DMADirection dir)
  {
      uint64_t resid;
      int sg_cur_index;
@@ -220,7 +220,7 @@ static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg, bool to_dev
      while (len > 0) {
          ScatterGatherEntry entry = sg->sg[sg_cur_index++];
          int32_t xfer = MIN(len, entry.len);
-        cpu_physical_memory_rw(entry.base, ptr, xfer, !to_dev);
+        dma_memory_rw(sg->dma, entry.base, ptr, xfer, dir);

Again, dma_memory_rw() returns an error, but you ignore it here.

At the very least, on error you should scrub the passed-in buffer to avoid leaking data to the guest.

You can imagine a malicious guest programming the IOMMU with invalid mappings and then doing DMA operations in order to read memory from the host QEMU process.


Anthony Liguori
