Posted to commits@cassandra.apache.org by "Michael Kjellman (JIRA)" <ji...@apache.org> on 2013/01/03 21:36:13 UTC
[jira] [Issue Comment Deleted] (CASSANDRA-5102) upgrading from 1.1.7 to 1.2.0 caused upgraded nodes to only know about other 1.2.0 nodes
[ https://issues.apache.org/jira/browse/CASSANDRA-5102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Michael Kjellman updated CASSANDRA-5102:
----------------------------------------
Comment: was deleted
(was: logs from another upgraded node:
{code}
INFO 14:44:32,505 Node /10.8.25.123 state jump to normal
INFO 14:44:32,679 Completed flushing /data/cassandra/system/local/system-local-ia-26-Data.db (129 bytes) for commitlog position ReplayPosition(segmentId=1357166543508, position=348466)
INFO 14:44:32,707 Enqueuing flush of Memtable-local@1222484065(70/70 serialized/live bytes, 2 ops)
INFO 14:44:32,708 Writing Memtable-local@1222484065(70/70 serialized/live bytes, 2 ops)
INFO 14:44:32,730 CFS(Keyspace='OpsCenter', ColumnFamily='rollups60') liveRatio is 34.490243902439026 (just-counted was 34.490243902439026). calculation took 12ms for 406 columns
INFO 14:44:32,738 CFS(Keyspace='OpsCenter', ColumnFamily='rollups300') liveRatio is 37.59350163627863 (just-counted was 36.72463768115942). calculation took 2ms for 43 columns
INFO 14:44:32,760 CFS(Keyspace='OpsCenter', ColumnFamily='pdps') liveRatio is 2.1512722202858137 (just-counted was 2.1512722202858137). calculation took 1ms for 70 columns
INFO 14:44:32,773 CFS(Keyspace='OpsCenter', ColumnFamily='pdps') liveRatio is 2.162702452568255 (just-counted was 2.162702452568255). calculation took 1ms for 133 columns
INFO 14:44:32,892 Completed flushing /data/cassandra/system/local/system-local-ia-27-Data.db (129 bytes) for commitlog position ReplayPosition(segmentId=1357166543508, position=348660)
INFO 14:44:32,910 Startup completed! Now serving reads.
INFO 14:44:32,914 Node /10.8.25.101 has restarted, now UP
INFO 14:44:32,914 InetAddress /10.8.25.101 is now UP
INFO 14:44:32,926 Node /10.8.25.101 state jump to normal
WARN 14:44:32,928 Skipping default superuser setup: some nodes are not ready
INFO 14:44:32,929 Enqueuing flush of Memtable-peers@988939977(198/198 serialized/live bytes, 13 ops)
INFO 14:44:32,929 Writing Memtable-peers@988939977(198/198 serialized/live bytes, 13 ops)
INFO 14:44:33,001 Not starting native transport as requested. Use JMX (StorageService->startNativeTransport()) to start it
INFO 14:44:33,003 Binding thrift service to /0.0.0.0:9160
INFO 14:44:33,039 Using TFramedTransport with a max frame size of 15728640 bytes.
INFO 14:44:33,049 Using synchronous/threadpool thrift server on 0.0.0.0 : 9160
INFO 14:44:33,049 Listening for thrift clients...
INFO 14:44:33,168 Compacted to [/data2/cassandra/system/peers/system-peers-ia-15-Data.db,]. 2,262 to 1,475 (~65% of original) bytes for 11 keys at 0.001595MB/s. Time: 882ms.
{code})
> upgrading from 1.1.7 to 1.2.0 caused upgraded nodes to only know about other 1.2.0 nodes
> ----------------------------------------------------------------------------------------
>
> Key: CASSANDRA-5102
> URL: https://issues.apache.org/jira/browse/CASSANDRA-5102
> Project: Cassandra
> Issue Type: Bug
> Affects Versions: 1.2.0
> Reporter: Michael Kjellman
> Assignee: Brandon Williams
> Priority: Blocker
>
> I upgraded the same way I have since 0.86, and things didn't go very smoothly.
> I ran nodetool drain on my 1.1.7 node and changed my puppet config to use the new merged config. When the node came back up (without any errors in the log), nodetool ring showed only the node itself. I upgraded another node, and sure enough, nodetool ring then showed two nodes.
> I tried resetting the local schema. The upgraded node happily grabbed the schema again, but only 1.2 nodes remained visible in the ring to any upgraded node.
> "Interesting" Log Lines:
> INFO 14:43:41,997 Using saved token [42535295865117307932921825928971026436]
> ....
> WARN 23:04:03,361 No host ID found, created 5cef7f51-688d-46c3-9fe4-6c82bde4bb98 (Note: This should happen exactly once per node).
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira