
Re: [Qemu-devel] [PATCH] ide_dma_cancel will result in partial DMA transfer (resend #4)


From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] ide_dma_cancel will result in partial DMA transfer (resend #4)
Date: Fri, 30 Jul 2010 10:02:43 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.11) Gecko/20100720 Fedora/3.0.6-1.fc12 Thunderbird/3.0.6

On 27.07.2010 21:04, Andrea Arcangeli wrote:
> Subject: avoid canceling ide dma
> 
> From: Andrea Arcangeli <address@hidden>
> 
> The reason for not actually canceling the I/O is that with
> virtualization and lots of VMs running, a guest fs may mistake an
> overload of the host for an IDE timeout. So rather than canceling the
> I/O, it's safer to wait for I/O completion and simulate that the I/O
> completed just before the cancellation was requested by the guest.
> This way, if ntfs or an app writes data without checking the -EIO
> retval and assumes the write succeeded, it's less likely to run into
> trouble. Similar issues apply to reads.
> 
> Furthermore, because the DMA operation is split into many synchronous
> aio_read/write calls when there's more than one entry in the SG table,
> without this patch the DMA would be cancelled in the middle, and we
> have no idea whether that can even happen on real hardware. Overall
> this seems a great risk for zero gain.
> 
> This approach is surely safer than the previous code, given that we
> can't expect all guest fs code out there to check for errors and
> replay the DMA if it completed only partially, and given that a
> timeout would never materialize on a real hard disk unless there are
> defective blocks (and defective blocks are practically only an issue
> for reads, never for writes, on any recent hardware, since writing to
> a block is the way to fix it) or the hard disk breaks as a whole.
> 
> Signed-off-by: Izik Eidus <address@hidden>
> Signed-off-by: Andrea Arcangeli <address@hidden>

Thanks, applied to the block branch.

Kevin
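
For context, here is a minimal sketch of what "wait instead of cancel" can
look like in the busmaster command register handler. It is modeled on
QEMU's bmdma_cmd_writeb() in hw/ide/pci.c of that era and assumes
BMDMAState, BM_CMD_START, BM_STATUS_DMAING and qemu_aio_flush() as they
existed in QEMU's headers at the time; it is illustrative, not the literal
diff that was applied.

/*
 * Sketch only: modeled on QEMU's hw/ide/pci.c of that era, not the
 * literal patch.  BMDMAState, BM_CMD_START, BM_STATUS_DMAING and
 * qemu_aio_flush() are taken as given from QEMU's headers.
 */
static void bmdma_cmd_writeb(void *opaque, uint32_t addr, uint32_t val)
{
    BMDMAState *bm = opaque;

    if (!(val & BM_CMD_START)) {
        /*
         * The guest cleared the start bit, i.e. it asked us to cancel
         * the busmaster DMA.  Canceling a scatter/gather transfer in
         * the middle would leave a partial transfer on the storage,
         * so drain the outstanding aio instead: qemu_aio_flush()
         * blocks until every pending completion has run.  From the
         * guest's point of view the I/O simply finished just before
         * the cancellation took effect.
         */
        if (bm->aiocb) {
            qemu_aio_flush();
        }
        bm->status &= ~BM_STATUS_DMAING;
        bm->cmd = val & 0x09;
    } else {
        if (!(bm->status & BM_STATUS_DMAING)) {
            bm->status |= BM_STATUS_DMAING;
            /* kick off the DMA transfer for the selected drive */
            if (bm->dma_cb) {
                bm->dma_cb(bm, 0);
            }
        }
        bm->cmd = val & 0x09;
    }
}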
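
The second paragraph of the commit message is easiest to see with a toy
model: one guest DMA command fans out into a chain of per-SG-entry host
reads, so honoring a cancel mid-chain leaves the earlier entries already
transferred into guest memory. Everything below (dma_xfer, aio_read,
piece_done) is made up for illustration; it is not QEMU code.

#include <stdio.h>

/* Toy model: one DMA command -> one aio request per SG entry. */
struct sg_entry { char buf[512]; long long sector; };

struct dma_xfer {
    struct sg_entry sg[4];   /* guest-built scatter/gather table */
    int nsg, cur;            /* total entries / entry in flight  */
    int cancelled;           /* set when the guest cancels       */
};

/* stand-in for an async read: completes immediately and calls cb */
static void aio_read(long long sector, char *buf,
                     void (*cb)(struct dma_xfer *), struct dma_xfer *x)
{
    snprintf(buf, 512, "data of sector %lld", sector);
    cb(x);
}

static void piece_done(struct dma_xfer *x)
{
    if (++x->cur == x->nsg) {
        puts("transfer complete, raise IRQ");
        return;
    }
    /*
     * A cancel that lands here only stops the *next* piece: entries
     * 0..cur-1 already reached guest memory, i.e. the guest sees a
     * partial transfer -- the situation the patch avoids by waiting
     * for completion instead of honoring the cancel.
     */
    if (x->cancelled) {
        printf("cancelled after %d of %d entries: partial DMA\n",
               x->cur, x->nsg);
        return;
    }
    aio_read(x->sg[x->cur].sector, x->sg[x->cur].buf, piece_done, x);
}

int main(void)
{
    struct dma_xfer x = { .nsg = 4, .cur = 0 };
    for (int i = 0; i < 4; i++)
        x.sg[i].sector = 100 + i;
    x.cancelled = 1;                     /* guest cancels mid-chain */
    aio_read(x.sg[0].sector, x.sg[0].buf, piece_done, &x);
    return 0;
}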


