Posted to user@cassandra.apache.org by Roland Etzenhammer <r....@t-online.de> on 2015/01/08 09:16:34 UTC
incremental repairs
Hi,
I am currently trying to migrate my test cluster to incremental repairs.
These are the steps I'm doing on every node:
- touch marker
- nodetool disableautocompaction
- nodetool repair
- cassandra stop
- find all *Data*.db files older than the marker
- invoke sstablerepairedset on those
- cassandra start
This is essentially what
http://www.datastax.com/dev/blog/anticompaction-in-cassandra-2-1 says.
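For reference, the steps above can be sketched as one shell function. This is only a sketch of the process - the data directory, marker path, service commands, and the sstablerepairedset flags are assumptions that need checking against your own installation:

```shell
# Per-node migration sketch; run on each node in turn.
migrate_node() {
  DATA_DIR="${DATA_DIR:-/var/lib/cassandra/data}"   # placeholder path
  MARKER="${MARKER:-/tmp/migration-marker}"

  touch "$MARKER"                      # timestamp taken before the repair
  nodetool disableautocompaction
  nodetool repair                      # full repair while autocompaction is off
  service cassandra stop
  # sstables not newer than the marker were covered by the repair above
  find "$DATA_DIR" -name '*Data*.db' ! -newer "$MARKER" \
    -exec sstablerepairedset --really-set --is-repaired {} \;
  service cassandra start
}
```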
After all nodes are migrated this way, I think I need to run my regular
repairs more often, and they should be faster afterwards. But do I need
to run "nodetool repair" or is "nodetool repair -pr" sufficient?
And do I need to reenable autocompaction? Or do I need to compact myself?
Thanks for any input,
Roland
Re: incremental repairs
Posted by Robert Coli <rc...@eventbrite.com>.
On Thu, Jan 8, 2015 at 12:28 AM, Marcus Eriksson <kr...@gmail.com> wrote:
> But, if you are running 2.1 in production, I would recommend that you wait
> until 2.1.3 is out, https://issues.apache.org/jira/browse/CASSANDRA-8316
> fixes a bunch of issues with incremental repairs
>
There are other serious issues with 2.1.2, I +1 recommend no one run it in
production. :D
=Rob
Re: incremental repairs
Posted by Roland Etzenhammer <r....@t-online.de>.
Hi Marcus,
thanks a lot for those pointers. Now further testing can begin - and
I'll wait for 2.1.3. Right now repair times in production are really
painful; maybe they will get better. At least I hope so :-)
Re: incremental repairs
Posted by Marcus Eriksson <kr...@gmail.com>.
Yes, you should reenable autocompaction
/Marcus
On Thu, Jan 8, 2015 at 10:33 AM, Roland Etzenhammer <
r.etzenhammer@t-online.de> wrote:
> Hi Marcus,
>
> thanks for that quick reply. I did also look at:
>
> http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_repair_nodes_c.html
>
> which describes the same process. It's for 2.1.x, so I see that 2.1.2+ is
> not covered there. I did upgrade my test cluster to 2.1.2, and with your
> hint I took a look at sstablemetadata on a non-"migrated" node - there are
> indeed "Repaired at" entries on some sstables already. So if I got this
> right, in 2.1.2+ there is nothing to do to switch to incremental repairs
> (apart from running the repairs themselves).
>
> But one thing I see during testing is that there are many sstables, with
> small size:
>
> - in total there are 5521 sstables on one node
> - 115 sstables are bigger than 1MB
> - 4949 sstables are smaller than 10kB
>
> I don't know where they came from - I found one piece of information where
> this happened when Cassandra was low on heap, which happened to me while
> running tests (the suggested solution is to trigger compaction via JMX).
>
> Question for me: I did disable autocompaction on some nodes of our test
> cluster as the blog and docs said. Should/can I reenable autocompaction
> again with incremental repairs?
>
> Cheers,
> Roland
>
Re: incremental repairs
Posted by Roland Etzenhammer <r....@t-online.de>.
Hi Marcus,
thanks for that quick reply. I did also look at:
http://www.datastax.com/documentation/cassandra/2.1/cassandra/operations/ops_repair_nodes_c.html
which describes the same process. It's for 2.1.x, so I see that 2.1.2+
is not covered there. I did upgrade my test cluster to 2.1.2, and with
your hint I took a look at sstablemetadata on a non-"migrated" node -
there are indeed "Repaired at" entries on some sstables already. So if I
got this right, in 2.1.2+ there is nothing to do to switch to
incremental repairs (apart from running the repairs themselves).
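The "Repaired at" check can be scripted as a small filter. A minimal sketch - the exact output line ("Repaired at: 0" for never-repaired sstables) is an assumption about the 2.1-era sstablemetadata format:

```shell
# Classify one sstablemetadata dump read from stdin:
# a repaired-at of 0 is taken to mean "never incrementally repaired".
repaired_status() {
  awk '/^Repaired at/ { print ($3 == 0 ? "unrepaired" : "repaired"); exit }'
}

# assumed usage: sstablemetadata mykeyspace-mytable-ka-1-Data.db | repaired_status
```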
But one thing I see during testing is that there are many sstables, with
small size:
- in total there are 5521 sstables on one node
- 115 sstables are bigger than 1MB
- 4949 sstables are smaller than 10kB
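A breakdown like the one above can be computed with plain shell; a sketch, with the node's data directory passed as an argument (the thresholds mirror the counts I quoted):

```shell
# Count sstables in a data directory, bucketed by on-disk size.
count_sstables() {
  dir="$1"
  total=0; big=0; tiny=0
  for f in "$dir"/*Data*.db; do
    [ -e "$f" ] || continue            # skip if the glob matched nothing
    total=$((total + 1))
    bytes=$(wc -c < "$f")
    [ "$bytes" -gt 1048576 ] && big=$((big + 1))    # > 1 MB
    [ "$bytes" -lt 10240 ] && tiny=$((tiny + 1))    # < 10 kB
  done
  echo "total=$total >1MB=$big <10kB=$tiny"
}
```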
I don't know where they came from - I found one piece of information
where this happened when Cassandra was low on heap, which happened to me
while running tests (the suggested solution is to trigger compaction via
JMX).
Question for me: I did disable autocompaction on some nodes of our test
cluster as the blog and docs said. Should/can I reenable autocompaction
again with incremental repairs?
Cheers,
Roland
Re: incremental repairs
Posted by Marcus Eriksson <kr...@gmail.com>.
If you are on 2.1.2+ (or using STCS) you don't need those steps (we should
probably update the blog post).
Now we keep separate levelings for the repaired/unrepaired data, and move
the sstables over after the first incremental repair.
But, if you are running 2.1 in production, I would recommend that you wait
until 2.1.3 is out, https://issues.apache.org/jira/browse/CASSANDRA-8316
fixes a bunch of issues with incremental repairs
-pr is sufficient; the same rules apply as before - if you run -pr you need
to run it on every node.
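In other words, a -pr pass only covers the whole ring once every node has run it. A dry-run sketch - the host names are placeholders, and the loop is just illustration, not a Cassandra tool:

```shell
# Print the -pr repair command for each node in the ring.
run_pr_repairs() {
  for host in "$@"; do
    echo "nodetool -h $host repair -pr"   # echoed as a dry run; drop echo to execute
  done
}

# assumed usage: run_pr_repairs cass1 cass2 cass3
```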
/Marcus
On Thu, Jan 8, 2015 at 9:16 AM, Roland Etzenhammer <
r.etzenhammer@t-online.de> wrote:
> Hi,
>
> I am currently trying to migrate my test cluster to incremental repairs.
> These are the steps I'm doing on every node:
>
> - touch marker
> - nodetool disableautocompaction
> - nodetool repair
> - cassandra stop
> - find all *Data*.db files older than the marker
> - invoke sstablerepairedset on those
> - cassandra start
>
> This is essentially what
> http://www.datastax.com/dev/blog/anticompaction-in-cassandra-2-1 says.
> After all nodes are migrated this way, I think I need to run my regular
> repairs more often, and they should be faster afterwards. But do I need to
> run "nodetool repair" or is "nodetool repair -pr" sufficient?
>
> And do I need to reenable autocompaction? Or do I need to compact myself?
>
> Thanks for any input,
> Roland
>