Posted to user@cassandra.apache.org by Stefan Reek <st...@unitedgames.com> on 2012/03/06 10:13:06 UTC

Old data coming alive after adding node

Hi,

We were running a 3-node cluster of cassandra 0.6.13 with RF=3.
After we added a fourth node, keeping RF=3, some old data appeared in 
the database.
As far as I understand this can only happen if nodetool repair wasn't 
run for more than GCGraceSeconds.
Our GCGraceSeconds is set to the default of 10 days (864000 seconds).
We have a scheduled cronjob to run repair once a week on every node, 
each on a different day.
I'm sure that none of the nodes ever skipped running a repair.
We don't run compact on the nodes explicitly, as I understand that 
running repair will trigger a major compaction. I'm not entirely sure 
whether it does, but in any case the tombstones will be removed by a 
minor compaction. So I expected that the reappearing data, which is a 
couple of months old in some cases, would be long gone by the time we 
added the node.
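
For illustration, a weekly per-node schedule of the kind described keeps 
the repair interval (7 days = 604800 s) inside the 10-day grace window 
(864000 s). Hypothetical crontab entries, staggered so each node repairs 
on a different day:

    # node1's crontab (paths, times, and flags are illustrative;
    # nodetool's host flag syntax varies between versions)
    0 2 * * 1  /opt/cassandra/bin/nodetool -host localhost repair
    # node2 runs the same job on Tuesday (0 2 * * 2), node3 on
    # Wednesday, node4 on Thursday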

Can anyone think of any reason why the old data reappeared?

Stefan

Re: Old data coming alive after adding node

Posted by Stefan Reek <st...@unitedgames.com>.
After the old data came back we were able to delete it again, and it 
is stable now.
We are in the process of upgrading to 1.0, but as you said that's a 
painful process.
I just hope 0.6 will keep running till we're done with the upgrade.
Anyway, thanks for the help.

Cheers,

Stefan



Re: Old data coming alive after adding node

Posted by aaron morton <aa...@thelastpickle.com>.
> All our writes/deletes are done with CL.QUORUM.
> Our reads are done with CL.ONE, although the reads that confirmed the old data were done with CL.QUORUM.
mmmm
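
To spell out the concern: a read and a write only overlap on at least one replica when R + W > RF. A quick worked check for this cluster:

    RF = 3
    W  = QUORUM = floor(RF / 2) + 1 = 2
    R  = ONE    = 1
    R + W = 3, which is not > RF = 3

So a read at ONE can legitimately miss a recent delete; the fact that CL.QUORUM reads also returned the old data is what makes this case genuinely suspicious.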

> According to https://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/CHANGES.txt, 0.6.6 has the same patch
> for CASSANDRA-1074 as 0.7, so I assumed that minor compactions in 0.6.6 and up also purge tombstones.
My bad. As you were. 

After the repair, did the un-deleted data remain un-deleted? Are you back to a stable situation?

Without a lot more detail I am at a bit of a loss. 

I know it's painful, but migrating to 1.0 *really* will make your life so much easier and faster. At some point you may hit a bug or a problem in 0.6 and the solution may be to upgrade, quickly.

Cheers

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com



Re: Old data coming alive after adding node

Posted by Stefan Reek <st...@unitedgames.com>.
Hi Aaron,

Thanks for the quick reply.
All our writes/deletes are done with CL.QUORUM.
Our reads are done with CL.ONE, although the reads that confirmed the 
old data were done with CL.QUORUM.
According to 
https://svn.apache.org/viewvc/cassandra/branches/cassandra-0.6/CHANGES.txt, 
0.6.6 has the same patch for CASSANDRA-1074 as 0.7, so I assumed that 
minor compactions in 0.6.6 and up also purge tombstones.
The only suspicious thing I noticed was that after adding the fourth 
node, repairs became extremely slow and heavy. Running one degraded the 
performance of the whole cluster, and the new node even went OOM while 
running it.

Cheers,

Stefan



Re: Old data coming alive after adding node

Posted by aaron morton <aa...@thelastpickle.com>.
> After we added a fourth node, keeping RF=3, some old data appeared in the database.
What CL are you working at? (Should not matter too much with repair working, just asking.)


> We don't run compact on the nodes explicitly, as I understand that running repair will trigger a
> major compaction. I'm not entirely sure whether it does, but in any case the tombstones will be removed by a minor
> compaction.
In 0.6.x, tombstones were only purged during a major / manual compaction. Purging during minor compactions came in with 0.7:
https://github.com/apache/cassandra/blob/trunk/CHANGES.txt#L1467
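
Either way, the rule a compaction applies is the same: a tombstone can only be dropped once GCGraceSeconds have elapsed since the deletion. A minimal sketch of that check (illustrative Java, not Cassandra's actual code):

    public class TombstoneGc {
        static final int GC_GRACE_SECONDS = 864000; // the 10-day default

        // A tombstone whose local deletion time (in seconds) is older
        // than now - GC_GRACE_SECONDS is eligible for purging. For a
        // minor compaction (CASSANDRA-1074) there is an extra safety
        // condition: the row must not also exist in sstables outside
        // the set being compacted.
        static boolean purgeable(int localDeletionTime, int nowInSec) {
            return localDeletionTime < nowInSec - GC_GRACE_SECONDS;
        }

        public static void main(String[] args) {
            int now = (int) (System.currentTimeMillis() / 1000L);
            System.out.println(purgeable(now - 11 * 86400, now)); // true
            System.out.println(purgeable(now - 7 * 86400, now));  // false
        }
    }

If one replica misses the delete and the other replicas purge the tombstone before a repair hands it over, the surviving copy of the old data wins, which is exactly the resurrection described above.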

> Can anyone think of any reason why the old data reappeared?
It sounds like you are doing things correctly. The complicating factor is that 0.6 is so very old. 


If I wanted to poke around some more I would conduct reads at CL ONE against individual nodes and see if they return the "deleted" data or not. This would help me understand whether the tombstone is still out there. 
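
Something like the following would do it (a sketch against the 0.6-era Thrift API; the host, keyspace, column family, key, and column names are all hypothetical, and note that a read at ONE is not necessarily served by the node you connect to):

    import org.apache.cassandra.thrift.*;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;

    public class CheckDeleted {
        public static void main(String[] args) throws Exception {
            String[] hosts = {"node1", "node2", "node3", "node4"};
            ColumnPath path = new ColumnPath("MyColumnFamily");
            path.setColumn("mycolumn".getBytes("UTF-8"));
            for (String host : hosts) {
                TSocket socket = new TSocket(host, 9160);
                socket.open();
                Cassandra.Client client =
                        new Cassandra.Client(new TBinaryProtocol(socket));
                try {
                    // In 0.6 the keyspace is passed with each call.
                    ColumnOrSuperColumn c = client.get(
                            "MyKeyspace", "mykey", path,
                            ConsistencyLevel.ONE);
                    System.out.println(host + ": returned data, timestamp "
                            + c.getColumn().getTimestamp());
                } catch (NotFoundException e) {
                    System.out.println(host + ": not found (delete visible)");
                }
                socket.close();
            }
        }
    }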

I would also poke around in the logs to make sure repair was running as expected and completing. If you find anything suspicious, post examples. 
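
For example (the log location is the common packaged default and may differ; repair in this era is driven by AntiEntropyService, so its class name is a reasonable thing to search for):

    grep -E 'AntiEntropyService|repair' /var/log/cassandra/system.log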

Finally, I would ensure CL QUORUM was being used. 

Hope that helps.


-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
