Posted to user@cassandra.apache.org by Michael Vaknine <mi...@citypath.com> on 2011/11/21 14:53:33 UTC

Upgrade Cassandra Cluster to 1.0.3

Hi,
Any help will be appreciated.

I am upgrading Cassandra from 1.0.0 to 1.0.3 and got this error:
ERROR [CompactionExecutor:3] 2011-11-21 11:10:59,075 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[CompactionExecutor:3,1,main]
java.lang.StackOverflowError
    at com.google.common.base.Objects.equal(Objects.java:51)
    at org.apache.cassandra.utils.Pair.equals(Pair.java:48)
    at java.util.concurrent.ConcurrentHashMap$Segment.get(ConcurrentHashMap.java:338)
    at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:769)
    at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.get(ConcurrentLinkedHashMap.java:740)
    at org.apache.cassandra.cache.ConcurrentLinkedHashCache.get(ConcurrentLinkedHashCache.java:81)
    at org.apache.cassandra.cache.InstrumentingCache.get(InstrumentingCache.java:68)
    at org.apache.cassandra.io.sstable.SSTableReader.getCachedPosition(SSTableReader.java:598)
    at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:621)
    at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:786)
    at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:61)

I tried to increase the JVM thread stack size (-Xss) to 640M and then got this error:

ERROR [Thread-28] 2011-11-21 12:52:40,808 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[Thread-28,5,main]
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Error: Maximum lock count exceeded
    at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:131)
    at org.apache.cassandra.streaming.StreamInSession.closeIfFinished(StreamInSession.java:151)
    at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:102)
    at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:184)
    at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:81)
Caused by: java.util.concurrent.ExecutionException: java.lang.Error: Maximum lock count exceeded
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:122)
    ... 4 more
Caused by: java.lang.Error: Maximum lock count exceeded

None of these errors happened on 1.0.0.
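
For context on the stack-size knob: the per-thread stack size is normally set in conf/cassandra-env.sh through JVM_OPTS, and the stock scripts of this era use at most a few hundred kilobytes per thread, so 640M is far more than any thread can put to use. A minimal sketch of the relevant line (the exact value is illustrative and varies by release):

    # conf/cassandra-env.sh
    # Per-thread stack size; the stock 1.0.x scripts ship a value of a few hundred KB at most.
    # This applies to every thread, so keep it small; raising it only postpones a runaway recursion.
    JVM_OPTS="$JVM_OPTS -Xss256k"

A modest bump over the default is the most anyone usually needs; a StackOverflowError this deep in the key-cache path points at a code bug (see the CASSANDRA-3491 pointer later in the thread) rather than an undersized stack.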


Thanks
Michael


Upgrade Cassandra to 1.0.0

Posted by Michael Vaknine <mi...@citypath.com>.
I am upgrading Cassandra from 0.7.8 to 1.0.0 and got this error:

ERROR [SSTableBatchOpen:2] 2011-11-22 09:48:00,000 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[SSTableBatchOpen:2,5,main]
java.lang.AssertionError
    at org.apache.cassandra.io.sstable.SSTable.<init>(SSTable.java:99)
    at org.apache.cassandra.io.sstable.SSTableReader.<init>(SSTableReader.java:261)
    at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:139)
    at org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:196)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:619)
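
(For anyone hitting this on the 0.7 -> 1.0 jump: the usual upgrade drill of that era was to flush and snapshot each node on the old binaries before swapping them out. A rough sketch, not a diagnosis of the AssertionError above; hostnames and install layout are illustrative:

    # run on each node against the old 0.7.8 binaries, before stopping it
    nodetool -h localhost snapshot   # keep an on-disk rollback point
    nodetool -h localhost drain      # flush memtables and quiesce the commitlog
    # then stop Cassandra, install the 1.0.x binaries, merge cassandra.yaml, and restart

Note that 0.7.x used JMX port 8080 by default, so nodetool may need -p 8080 depending on your configuration.)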

 
Thanks
Michael


Re: Upgrade Cassandra Cluster to 1.0.3

Posted by Jonathan Ellis <jb...@gmail.com>.
That should do the trick.

2011/11/23 Michael Vaknine <mi...@citypath.com>:
> Hi Jonathan,
>
> You are right, I had one node on 1.0.2 for some reason, so I did the upgrade again.
> I now have a 4-node cluster upgraded to 1.0.3, but I get the following error
> on 2 nodes of the cluster:
>
> ERROR [HintedHandoff:3] 2011-11-23 06:39:31,250 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[HintedHandoff:3,1,main]
> java.lang.AssertionError
>        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:301)
>        at org.apache.cassandra.db.HintedHandOffManager.access$100(HintedHandOffManager.java:81)
>        at org.apache.cassandra.db.HintedHandOffManager$2.runMayThrow(HintedHandOffManager.java:353)
>        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:619)
>
> If I ran repair on all the servers in the cluster, can I simply
> truncate the column family HintsColumnFamily?
> Will I lose any data?
>
> Thanks
> Michael
>
> -----Original Message-----
> From: Jonathan Ellis [mailto:jbellis@gmail.com]
> Sent: Monday, November 21, 2011 8:40 PM
> To: user@cassandra.apache.org
> Cc: cassandra-user@incubator.apache.org
> Subject: Re: Upgrade Cassandra Cluster to 1.0.3
>
> Sounds to me like
> https://issues.apache.org/jira/browse/CASSANDRA-3491, which was
> present in 1.0.2 and fixed in 1.0.3.  It sounds like you're running
> the wrong version by mistake.
>
> On Mon, Nov 21, 2011 at 7:53 AM, Michael Vaknine <mi...@citypath.com>
> wrote:
>> Hi,
>> Any help will be appreciated.
>>
>> I am upgrading Cassandra from 1.0.0 to 1.0.3 and got this error:
>> ERROR [CompactionExecutor:3] 2011-11-21 11:10:59,075 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[CompactionExecutor:3,1,main]
>> java.lang.StackOverflowError
>>     at com.google.common.base.Objects.equal(Objects.java:51)
>>     at org.apache.cassandra.utils.Pair.equals(Pair.java:48)
>>     at java.util.concurrent.ConcurrentHashMap$Segment.get(ConcurrentHashMap.java:338)
>>     at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:769)
>>     at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.get(ConcurrentLinkedHashMap.java:740)
>>     at org.apache.cassandra.cache.ConcurrentLinkedHashCache.get(ConcurrentLinkedHashCache.java:81)
>>     at org.apache.cassandra.cache.InstrumentingCache.get(InstrumentingCache.java:68)
>>     at org.apache.cassandra.io.sstable.SSTableReader.getCachedPosition(SSTableReader.java:598)
>>     at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:621)
>>     at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:786)
>>     at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:61)
>>
>> I tried to increase the JVM thread stack size (-Xss) to 640M and then got this error:
>>
>> ERROR [Thread-28] 2011-11-21 12:52:40,808 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[Thread-28,5,main]
>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Error: Maximum lock count exceeded
>>     at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:131)
>>     at org.apache.cassandra.streaming.StreamInSession.closeIfFinished(StreamInSession.java:151)
>>     at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:102)
>>     at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:184)
>>     at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:81)
>> Caused by: java.util.concurrent.ExecutionException: java.lang.Error: Maximum lock count exceeded
>>     at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>>     at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>     at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:122)
>>     ... 4 more
>> Caused by: java.lang.Error: Maximum lock count exceeded
>>
>> None of these errors happened on 1.0.0.
>>
>>
>> Thanks
>> Michael
>>
>>
>
>
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com

RE: Upgrade Cassandra Cluster to 1.0.3

Posted by Michael Vaknine <mi...@citypath.com>.
Hi Jonathan,

You are right, I had one node on 1.0.2 for some reason, so I did the upgrade again.
I now have a 4-node cluster upgraded to 1.0.3, but I get the following error
on 2 nodes of the cluster:

ERROR [HintedHandoff:3] 2011-11-23 06:39:31,250 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[HintedHandoff:3,1,main]
java.lang.AssertionError
        at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:301)
        at org.apache.cassandra.db.HintedHandOffManager.access$100(HintedHandOffManager.java:81)
        at org.apache.cassandra.db.HintedHandOffManager$2.runMayThrow(HintedHandOffManager.java:353)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)

If I ran repair on all the servers in the cluster, can I simply
truncate the column family HintsColumnFamily?
Will I lose any data?
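
In case it helps the archives: the client API generally refuses writes to the system keyspace, so the blunt way to drop stored hints in this era was at the file level with the node stopped. A rough sketch, with the default data directory shown purely as an illustration (check data_file_directories in cassandra.yaml):

    # on each affected node
    nodetool -h localhost drain      # flush memtables so no hints linger in the commitlog
    # stop Cassandra, then:
    cd /var/lib/cassandra/data/system
    ls HintsColumnFamily-*           # the sstables holding undelivered hints
    rm -i HintsColumnFamily-*        # remove only the hints sstables
    # restart the node; it simply has no hints left to deliver

Hints are only a redelivery shortcut for writes that a replica missed while it was unreachable; once repair has completed cleanly on every node, the replicas are already consistent, so dropping the hints does not lose data.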

Thanks
Michael

-----Original Message-----
From: Jonathan Ellis [mailto:jbellis@gmail.com] 
Sent: Monday, November 21, 2011 8:40 PM
To: user@cassandra.apache.org
Cc: cassandra-user@incubator.apache.org
Subject: Re: Upgrade Cassandra Cluster to 1.0.3

Sounds to me like
https://issues.apache.org/jira/browse/CASSANDRA-3491, which was
present in 1.0.2 and fixed in 1.0.3.  It sounds like you're running
the wrong version by mistake.

On Mon, Nov 21, 2011 at 7:53 AM, Michael Vaknine <mi...@citypath.com>
wrote:
> Hi,
> Any help will be appreciated.
>
> I am upgrading Cassandra from 1.0.0 to 1.0.3 and got this error:
> ERROR [CompactionExecutor:3] 2011-11-21 11:10:59,075 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[CompactionExecutor:3,1,main]
> java.lang.StackOverflowError
>     at com.google.common.base.Objects.equal(Objects.java:51)
>     at org.apache.cassandra.utils.Pair.equals(Pair.java:48)
>     at java.util.concurrent.ConcurrentHashMap$Segment.get(ConcurrentHashMap.java:338)
>     at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:769)
>     at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.get(ConcurrentLinkedHashMap.java:740)
>     at org.apache.cassandra.cache.ConcurrentLinkedHashCache.get(ConcurrentLinkedHashCache.java:81)
>     at org.apache.cassandra.cache.InstrumentingCache.get(InstrumentingCache.java:68)
>     at org.apache.cassandra.io.sstable.SSTableReader.getCachedPosition(SSTableReader.java:598)
>     at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:621)
>     at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:786)
>     at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:61)
>
> I tried to increase the JVM thread stack size (-Xss) to 640M and then got this error:
>
> ERROR [Thread-28] 2011-11-21 12:52:40,808 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Error: Maximum lock count exceeded
>     at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:131)
>     at org.apache.cassandra.streaming.StreamInSession.closeIfFinished(StreamInSession.java:151)
>     at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:102)
>     at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:184)
>     at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:81)
> Caused by: java.util.concurrent.ExecutionException: java.lang.Error: Maximum lock count exceeded
>     at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>     at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:122)
>     ... 4 more
> Caused by: java.lang.Error: Maximum lock count exceeded
>
> None of these errors happened on 1.0.0.
>
>
> Thanks
> Michael
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com


Re: Upgrade Cassandra Cluster to 1.0.3

Posted by Jonathan Ellis <jb...@gmail.com>.
Sounds to me like
https://issues.apache.org/jira/browse/CASSANDRA-3491, which was
present in 1.0.2 and fixed in 1.0.3.  It sounds like you're running
the wrong version by mistake.
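
One quick way to see which build each node is actually running is to check the installed jar on every host; a throwaway sketch, assuming SSH access, with the hostnames and install path purely illustrative:

    # print the Cassandra jar version present on every node
    for h in cass1 cass2 cass3 cass4; do
        printf '%s: ' "$h"
        ssh "$h" "ls /usr/share/cassandra/apache-cassandra-*.jar"
    done

Any node still showing a 1.0.2 jar (or still running an old process) could keep reproducing the CASSANDRA-3491 stack trace above.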

On Mon, Nov 21, 2011 at 7:53 AM, Michael Vaknine <mi...@citypath.com> wrote:
> Hi,
> Any help will be appreciated.
>
> I am upgrading Cassandra from 1.0.0 to 1.0.3 and got this error:
> ERROR [CompactionExecutor:3] 2011-11-21 11:10:59,075 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[CompactionExecutor:3,1,main]
> java.lang.StackOverflowError
>     at com.google.common.base.Objects.equal(Objects.java:51)
>     at org.apache.cassandra.utils.Pair.equals(Pair.java:48)
>     at java.util.concurrent.ConcurrentHashMap$Segment.get(ConcurrentHashMap.java:338)
>     at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:769)
>     at com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.get(ConcurrentLinkedHashMap.java:740)
>     at org.apache.cassandra.cache.ConcurrentLinkedHashCache.get(ConcurrentLinkedHashCache.java:81)
>     at org.apache.cassandra.cache.InstrumentingCache.get(InstrumentingCache.java:68)
>     at org.apache.cassandra.io.sstable.SSTableReader.getCachedPosition(SSTableReader.java:598)
>     at org.apache.cassandra.io.sstable.SSTableReader.getPosition(SSTableReader.java:621)
>     at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:786)
>     at org.apache.cassandra.db.columniterator.SSTableNamesIterator.<init>(SSTableNamesIterator.java:61)
>
> I tried to increase the JVM thread stack size (-Xss) to 640M and then got this error:
>
> ERROR [Thread-28] 2011-11-21 12:52:40,808 AbstractCassandraDaemon.java (line 133) Fatal exception in thread Thread[Thread-28,5,main]
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.Error: Maximum lock count exceeded
>     at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:131)
>     at org.apache.cassandra.streaming.StreamInSession.closeIfFinished(StreamInSession.java:151)
>     at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:102)
>     at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:184)
>     at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:81)
> Caused by: java.util.concurrent.ExecutionException: java.lang.Error: Maximum lock count exceeded
>     at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>     at org.apache.cassandra.db.index.SecondaryIndexManager.maybeBuildSecondaryIndexes(SecondaryIndexManager.java:122)
>     ... 4 more
> Caused by: java.lang.Error: Maximum lock count exceeded
>
> None of these errors happened on 1.0.0.
>
>
> Thanks
> Michael
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com