bug-parted

Re: Moving partition to an overlapping position


From: K.G.
Subject: Re: Moving partition to an overlapping position
Date: Wed, 20 Jul 2005 19:17:43 +0200

On Wed, 20 Jul 2005 18:25:22 +0200 (MEST), Szakacsits Szabolcs <address@hidden> wrote:
> Please note, that unless power goes down or the box crashes, flushes don't
> matter for consistency but they matter a lot for how the kernel's I/O
> scheduler can optimize reads and writes.

I agree 100%. But remember that "power goes down or the box crashes" is the
main concern behind what gets designated a "dangerous" operation. Software can
have hard-to-debug problems and, in the general case, can't be proven bug-free.

During the last year I've seen two cases of Parti**** Mag** completely trashing
NTFS file systems on laptops, and I've had a power outage at home because of a
big fire near the main electric line of the area where I live. The first two
cases were a crash and a battery problem, and I believe PM doesn't operate in
ordered mode. In the latter case, I discovered that my ReiserFS did a good job.

Abrupt program interruption is a likely cause of data loss, and trying to
handle it is the way to decrease that probability.

So I come to a proposal for Parted. Since everything is ultimately a question
of probability, maybe there could be an "I am a courageous man, I want to take
extra risks to go faster, so let me do generic partition moves" option?

If the speed gain is significant, this might in some cases even decrease the
risk associated with the power-outage type of failure, since going faster means
a smaller probability of a blackout during the shorter operation.
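To put a number on that intuition (a back-of-the-envelope model; the constant
failure rate \lambda is of course an idealization):

  P(\text{blackout during an operation of length } T)
    = 1 - e^{-\lambda T} \approx \lambda T \qquad (\lambda T \ll 1)

so for rare failures, halving the duration roughly halves the exposure.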

(Indeed, the craziest idea I just had is to compute the optimal sync rate that
minimizes the probability of data loss, given a set of parameters modelling the
software environment, but that is probably more a PhD subject than something
implementable in spare time :-)
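And since I can't resist sketching it anyway: the question is essentially the
classic "optimal checkpoint interval" problem, for which Young's approximation
tau ~ sqrt(2*delta/lambda) is well known. Here is a toy model in Python (all
the numbers and the cost model are invented for illustration; nothing here is
measured on real hardware or taken from Parted):

import math

# Toy model of the "optimal sync rate" as a checkpointing problem.
# W seconds of pure copying; a sync every tau seconds of work costs
# delta seconds; failures arrive at a constant rate lam; after a
# failure we redo the work done since the last sync.
W = 600.0      # total copy time with no syncs and no failures, seconds
delta = 0.1    # cost of one sync, seconds
lam = 1e-4     # failure rate, per second

def expected_total_time(tau):
    """Expected wall-clock time when syncing every tau seconds of work.

    Standard result for exponential failures with restart from the
    last sync point: a segment needing tau + delta seconds of work
    takes (e^(lam*(tau+delta)) - 1) / lam seconds in expectation.
    """
    segments = W / tau
    return segments * (math.exp(lam * (tau + delta)) - 1) / lam

# Brute-force the optimum and compare with Young's approximation.
taus = [0.1 * i for i in range(1, 10001)]   # 0.1 s .. 1000 s
best = min(taus, key=expected_total_time)
young = math.sqrt(2 * delta / lam)
print("brute force:  tau ~ %.1f s, E[T] ~ %.1f s"
      % (best, expected_total_time(best)))
print("Young (1974): tau ~ %.1f s" % young)

With these made-up numbers both approaches give tau around 45 s, i.e. the model
does yield a concrete "optimal sync rate" once you plug in a failure rate and a
sync cost; the hard part is estimating those parameters, which is where the PhD
would go.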

Cheers,
Guillaume



