Posted to user@cassandra.apache.org by Jeff House <ho...@flowroute.com> on 2013/07/04 00:58:09 UTC
Ranged Tombstones causing timeouts, not removed during compaction.
How to remove?
We are on 1.2.5 with a 4-node cluster (RF 3) and have a CQL3 wide-row
table. Each row has about 2000 columns. While running some test data
through it, it started throwing rpc_timeout errors when returning a couple
of specific rows (with consistency ONE).
After hunting through sstable2json results and looking at the source for
it, it looks like these are range tombstones. I see there's a bug filed
(and a patch) for this, but is there a way to clear out the tombstones? I
have 'nodetool cleanup'ed, 'nodetool repair'ed, and 'nodetool scrub'bed the
table, but they just seem to linger, as does the problem reading the rows
in question.
Is there a way I can clear this data out and move forward?
Thanks,
-Jeff
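For readers hitting the same thing: range tombstones can be spotted directly in sstable2json output. A minimal sketch, assuming a hypothetical keyspace/table, data path, and hex-encoded row key (all placeholders):

```shell
# Dump one suspect row key from an sstable (path and hex-encoded key are
# hypothetical; point these at your own data directory and key).
sstable2json /var/lib/cassandra/data/myks/mycf/myks-mycf-ic-1-Data.db -k 6b657931
```

In 1.2-era output, range tombstones show up as column entries carrying a "t" marker rather than a plain value, so scanning the JSON for "t" entries on the slow row keys is a quick way to confirm the diagnosis.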
Re: Ranged Tombstones causing timeouts, not removed during compaction. How to remove?
Posted by Jeff House <ho...@flowroute.com>.
Thanks Jeremiah, those are great suggestions.
Unfortunately, I have done a full repair and compaction on that CF, but the
ranged tombstones remain.
-Jeff
On Wed, Jul 3, 2013 at 7:54 PM, Jeremiah D Jordan <jeremiah.jordan@gmail.com> wrote:
> To force removal of a tombstone:
>
> 1. Stop doing deletes on the CF, or switch to performing all deletes at
> consistency level ALL.
> 2. Run a full repair of the cluster for that CF.
> 3. Change gc_grace_seconds to something small, like 5 seconds, for that CF.
> Then either:
> 4. Find all sstables which contain that row key, using
> sstablekeys/sstable2json.
> 5. Use JMX to force those sstables to compact with each other.
> Or:
> 4. Do a major compaction on that CF.
>
> -Jeremiah
>
> On Jul 3, 2013, at 5:58 PM, Jeff House <ho...@flowroute.com> wrote:
>
> >
> > We are on 1.2.5 with a 4-node cluster (RF 3) and have a CQL3 wide-row
> table. Each row has about 2000 columns. While running some test data
> through it, it started throwing rpc_timeout errors when returning a couple
> of specific rows (with consistency ONE).
> >
> > After hunting through sstable2json results and looking at the source for
> it, it looks like these are range tombstones. I see there's a bug filed
> (and a patch) for this, but is there a way to clear out the tombstones? I
> have 'nodetool cleanup'ed, 'nodetool repair'ed, and 'nodetool scrub'bed the
> table, but they just seem to linger, as does the problem reading the rows
> in question.
> >
> > Is there a way I can clear this data out and move forward?
> >
> > Thanks,
> >
> > -Jeff
>
>
Re: Ranged Tombstones causing timeouts, not removed during compaction. How to remove?
Posted by Jeremiah D Jordan <je...@gmail.com>.
To force removal of a tombstone:
1. Stop doing deletes on the CF, or switch to performing all deletes at
consistency level ALL.
2. Run a full repair of the cluster for that CF.
3. Change gc_grace_seconds to something small, like 5 seconds, for that CF.
Then either:
4. Find all sstables which contain that row key, using sstablekeys/sstable2json.
5. Use JMX to force those sstables to compact with each other.
Or:
4. Do a major compaction on that CF.
-Jeremiah
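A rough translation of the steps above into commands, assuming a hypothetical keyspace myks and CF mycf (the sstable file names, jmxterm jar path, and restored gc_grace value are placeholders; the MBean operation is the 1.2-era forceUserDefinedCompaction on CompactionManager):

```shell
# 2. Full repair of that CF across the cluster
nodetool repair myks mycf

# 3. Temporarily shrink gc_grace_seconds so the tombstones become purgeable
echo "ALTER TABLE myks.mycf WITH gc_grace_seconds = 5;" | cqlsh

# 4a/5. Compact just the sstables holding the key, via the
#       CompactionManager MBean (shown here through jmxterm)
java -jar jmxterm.jar -l localhost:7199 <<'EOF'
run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction myks myks-mycf-ic-1-Data.db,myks-mycf-ic-2-Data.db
EOF

# 4b. ...or simply major-compact the whole CF
nodetool compact myks mycf

# Finally, restore gc_grace_seconds to its previous value (default 864000)
echo "ALTER TABLE myks.mycf WITH gc_grace_seconds = 864000;" | cqlsh
```

Note that the compaction has to happen on every replica that holds the row, and gc_grace_seconds should be restored afterwards, or late-arriving writes can resurrect deleted data.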
On Jul 3, 2013, at 5:58 PM, Jeff House <ho...@flowroute.com> wrote:
>
> We are on 1.2.5 with a 4-node cluster (RF 3) and have a CQL3 wide-row table. Each row has about 2000 columns. While running some test data through it, it started throwing rpc_timeout errors when returning a couple of specific rows (with consistency ONE).
>
> After hunting through sstable2json results and looking at the source for it, it looks like these are range tombstones. I see there's a bug filed (and a patch) for this, but is there a way to clear out the tombstones? I have 'nodetool cleanup'ed, 'nodetool repair'ed, and 'nodetool scrub'bed the table, but they just seem to linger, as does the problem reading the rows in question.
>
> Is there a way I can clear this data out and move forward?
>
> Thanks,
>
> -Jeff