gluster-devel

Re: [Gluster-devel] Question on Geo-Replication


From: Vijaykumar Koppad
Subject: Re: [Gluster-devel] Question on Geo-Replication
Date: Mon, 23 Jul 2012 04:05:17 -0400 (EDT)

Hi Pitichai,

    It would help us find the problem if you could provide the log files and 
the configuration of your setup, along with details of the data set you are 
using, i.e. the size and number of files. 

Thanks,
Vijaykumar 

----- Original Message -----
From: "Pitichai Pitimaneeyakul" <address@hidden>
To: "gluster-devel" <address@hidden>
Sent: Saturday, July 21, 2012 1:53:24 PM
Subject: [Gluster-devel] Question on Geo-Replication

Hi there, 


I ran into a problem where geo-replication synchronization does not complete. 
The Gluster documentation (quoted below) says that a full sync can be forced by 
erasing the index and restarting geo-replication. However, when I went to 
Tuning Volume Options, there is no mention of erasing the index. 
Is it simply a matter of setting geo-replication.index=off and restarting 
geo-replication? (I have sketched the commands I have in mind after the quoted 
section below.) 


====== 


Synchronization is not complete 

Description: GlusterFS Geo-replication did not synchronize the data completely, 
but the geo-replication status still displays OK. 

Solution: You can enforce a full sync of the data by erasing the index and 
restarting GlusterFS Geo-replication. After restarting, GlusterFS 
Geo-replication begins synchronizing all the data; that is, all files are 
compared by checksum, which can be a lengthy, resource-intensive operation, 
especially on large data sets (however, actual data loss will not occur). If 
the error situation persists, contact Gluster Support. 

For more information about erasing the index, see Tuning Volume Options. 

====== 
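
For reference, here is the command sequence I am assuming the document means. 
The volume name "mastervol" and the slave "slavehost::slavevol" are just 
placeholders for my setup, and I am guessing that the "erase the index" step 
corresponds to the geo-replication.indexing option listed under Tuning Volume 
Options -- please correct me if this is wrong: 

    # stop the geo-replication session first
    gluster volume geo-replication mastervol slavehost::slavevol stop

    # reset the index so the next run does a full compare
    # (assuming this is the "erase the index" step; the option under
    #  Tuning Volume Options appears to be geo-replication.indexing)
    gluster volume set mastervol geo-replication.indexing off

    # restart geo-replication; it should then re-checksum all files
    gluster volume geo-replication mastervol slavehost::slavevol start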

Thank you and Best Regards, 
Pitichai 

_______________________________________________
Gluster-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/gluster-devel


