From: Ori Mamluk
Subject: [Qemu-devel] [RFC] Replication agent requirements (was [RFC PATCH] replication agent module)
Date: Wed, 08 Feb 2012 15:00:19 +0200
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:9.0) Gecko/20111222 Thunderbird/9.0.1

Hi,
Following previous mails from Kevin and Dor, I'd like to specify the high-level requirements of a replication agent as I see them.

1. Report each write to a protected volume to the rephub, at an IO transaction granularity (a rough sketch of this report path follows the list).
    * The reporting is not synchronous, i.e. the write completion is not delayed until the rephub has received the report.
    * The IOs have to be the raw guest IOs - i.e. not converted to any sparse format or passed through any other filter that alters the size/offset.
2. Report to the rephub both failures to report an IO (e.g. socket disconnect or send timeout) and failed IOs (bad status from storage).
    * It is enough to disconnect the socket - that can be considered a 'failure report'.
3. Enable the rephub to read arbitrary regions in the protected volume.
    * Assume that the rephub can identify IOs which were dropped by the replication system, and needs to re-read the data of these IOs.
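
To make requirements 1 and 2 a bit more concrete, below is a minimal C sketch of what the per-write report path could look like. The message layout and the repagent_* names are assumptions made up for this mail, not an existing Qemu or rephub interface.

    /* Hypothetical per-write report: one message per raw guest write,
     * with the data payload following the header on the wire. */
    #include <stdint.h>
    #include <unistd.h>

    struct rep_io_report {
        uint64_t volume_id;   /* protected volume identifier */
        uint64_t offset;      /* raw guest offset, in bytes */
        uint32_t length;      /* raw guest write length, in bytes */
    };

    /* Called after the write has been submitted to storage; guest write
     * completion is never delayed waiting for the rephub (requirement 1).
     * On any send failure we simply close the socket, which by itself
     * serves as the failure report (requirement 2). */
    static void repagent_report_write(int sock,
                                      const struct rep_io_report *hdr,
                                      const void *data)
    {
        if (write(sock, hdr, sizeof(*hdr)) != (ssize_t)sizeof(*hdr) ||
            write(sock, data, hdr->length) != (ssize_t)hdr->length) {
            close(sock);      /* disconnect == 'failure report' */
        }
    }

A real implementation would presumably queue the message and use a non-blocking socket so that a slow rephub cannot stall the guest's IO path, but that is an implementation detail.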

We'd like to treat the following requirement as a second stage - not to implement it in the first version:

4. Synchronously report IO write metadata (offset, size) to an external API.
    * Synchronous meaning that the metadata is reported (blocking) before the IO is processed by storage (see the sketch after this item).
    * The goal is to maintain a dirty bitmap outside of the Qemu process.
    * The tracking needs to be more persistent than the Qemu process. A good example is to expose an additional process API (yet another NBD??) that will hold the bitmap either in host RAM or by writing it persistently to storage.
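
As a rough illustration of this stage-two item, the synchronous tracking call could look like the sketch below. Again, the message format and the repagent_track_sync name are assumptions for illustration only, and the external tracker process that holds the bitmap is left abstract.

    /* Hypothetical synchronous metadata report (requirement 4).
     * Only (offset, size) is sent - no data payload. */
    #include <errno.h>
    #include <stdint.h>
    #include <unistd.h>

    struct rep_track_msg {
        uint64_t volume_id;
        uint64_t offset;
        uint32_t size;
    };

    /* Blocks until the external tracker acknowledges; only then is the
     * guest write submitted to storage.  This way the out-of-process
     * dirty bitmap can never miss a write, even if the Qemu process
     * dies right after the IO. */
    static int repagent_track_sync(int track_sock,
                                   const struct rep_track_msg *msg)
    {
        uint8_t ack;

        if (write(track_sock, msg, sizeof(*msg)) != (ssize_t)sizeof(*msg)) {
            return -errno;
        }
        if (read(track_sock, &ack, 1) != 1) {  /* wait for the ack */
            return -errno;
        }
        return 0;
    }

Whether that external tracker keeps the bitmap purely in host RAM or also flushes it to storage is then up to that process, which is exactly why the tracking can outlive the Qemu process.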

The emphasis on reporting single IO transactions is because high-end (near-synchronous) replication requires access to every IO shortly after it is written to storage.

Thanks,
Ori


