Posted to commits@cassandra.apache.org by "Cathy Daw (JIRA)" <ji...@apache.org> on 2012/08/14 08:52:38 UTC
[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data
[ https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433944#comment-13433944 ]
Cathy Daw commented on CASSANDRA-4538:
--------------------------------------
I tried many permutations and could not reproduce.
Can you verify whether this is consistently reproducible for you?
Here are my repro tests:
{code}
// Test Setup
* Modify: InsertThread.java to change host IP address
* Run: mvn install
* Start: cassandra 1.1.4
// Test Run
* Test Setup: create / modify KS and CF below
* Run test: mvn exec:java -Dexec.mainClass="com.test.CreateTestData"
// *** cassandra-cli ***
create keyspace ST with
placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
and strategy_options = {replication_factor:1};
use ST;
// Test #1: SizeTieredCompactionStrategy
create column family company;
// Test #2: SizeTieredCompactionStrategy and 1mb sstables
drop column family company;
create column family company with
compaction_strategy=SizeTieredCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 1};
// Test #3: SizeTieredCompactionStrategy and 100mb sstables
drop column family company;
create column family company with
compaction_strategy=SizeTieredCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 100};
// Test #4: LeveledCompactionStrategy and 10mb sstables
drop column family company;
create column family company with
compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 10};
// Test #5: LeveledCompactionStrategy and 1mb sstables
drop column family company;
create column family company with
compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 1};
// Test #6: LeveledCompactionStrategy and 100mb sstables
drop column family company;
create column family company with
compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 100};
// ADDITIONAL TESTS VIA JAVA STRESS
[default@ST] drop keyspace Keyspace1;
./cassandra-stress --operation=INSERT --num-keys=100000 --num-different-keys=20000 --columns=2 --threads=2 --compression=SnappyCompressor --compaction-strategy=LeveledCompactionStrategy --column-size=20000
./cassandra-stress --operation=READ --num-keys=100000 --num-different-keys=20000 --columns=2 --threads=2 --compression=SnappyCompressor --compaction-strategy=LeveledCompactionStrategy --column-size=20000
// Destructive test: check nodetool -h localhost compactionstats and run the following while there are pending compactions
./cassandra-stress --operation=INSERT --num-keys=1000 --num-different-keys=100 --columns=2 --threads=2 --compression=SnappyCompressor --compaction-strategy=LeveledCompactionStrategy --column-size=20000
// Tried with SizeTieredCompactionStrategy
[default@ST] drop keyspace Keyspace1;
./cassandra-stress --operation=INSERT --num-keys=60000 --num-different-keys=20000 --columns=2 --compression=SnappyCompressor --compaction-strategy=SizeTieredCompactionStrategy --column-size=20000
./cassandra-stress --operation=READ --num-keys=60000 --num-different-keys=20000 --columns=2 --compression=SnappyCompressor --compaction-strategy=SizeTieredCompactionStrategy --column-size=20000
// Destructive test: check nodetool -h localhost compactionstats, kill the c* server while compactions are in progress, then restart
{code}
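For a sense of scale, the stress flags above imply a fairly large on-disk workload, which is what keeps compaction busy during the destructive tests. A back-of-the-envelope sketch in Python (illustrative only; the constants mirror the stress flags, and nothing here talks to Cassandra):

```python
import random

# Constants taken from the cassandra-stress flags used above
COLUMN_SIZE = 20000      # --column-size=20000 (bytes per column value)
COLUMNS_PER_KEY = 2      # --columns=2
DISTINCT_KEYS = 20000    # --num-different-keys=20000

def make_column(rng: random.Random) -> bytes:
    """One binary column value of the same size the stress tool writes."""
    return rng.getrandbits(8 * COLUMN_SIZE).to_bytes(COLUMN_SIZE, "big")

def workload_bytes() -> int:
    """Unique data volume: distinct keys x columns per key x column size."""
    return DISTINCT_KEYS * COLUMNS_PER_KEY * COLUMN_SIZE

rng = random.Random(42)
print(len(make_column(rng)), workload_bytes())  # 20000 800000000
```

So each INSERT run rewrites roughly 800 MB of unique binary data (before compression), and with --num-keys well above --num-different-keys most rows are overwritten several times, which is exactly the overwrite pattern compaction has to merge.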
> Strange CorruptedBlockException when massive insert binary data
> ---------------------------------------------------------------
>
> Key: CASSANDRA-4538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4538
> Project: Cassandra
> Issue Type: Bug
> Affects Versions: 1.1.3
> Environment: Debian squeeze 32-bit
> Reporter: Tommy Cheng
> Priority: Critical
> Labels: CorruptedBlockException, binary, insert
> Attachments: cassandra-stresstest.zip
>
>
> After inserting ~ 10000 records, here is the error log
> INFO 10:53:33,543 Compacted to [/var/lib/cassandra/data/ST/company/ST-company.company_acct_no_idx-he-13-Data.db,]. 407,681 to 409,133 (~100% of original) bytes for 9,250 keys at 0.715926MB/s. Time: 545ms.
> ERROR 10:53:35,445 Exception in thread Thread[CompactionExecutor:3,1,main]
> java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption detected, chunk at 7530128 of length 19575.
> at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
> at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:99)
> at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
> at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:83)
> at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:68)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:118)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:101)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:173)
> at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption detected, chunk at 7530128 of length 19575.
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:98)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:77)
> at org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:302)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:397)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
> at org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
> at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
> at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
> at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:119)
> at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
> at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:144)
> at org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:234)
> at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:112)
> ... 20 more
> Here is the startup of cassandra
> root@cassandra-desktop:~# cassandra -f
> xss = -ea -javaagent:/usr/share/cassandra/lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms496M -Xmx496M -Xmn124M -XX:+HeapDumpOnOutOfMemoryError -Xss128k
> INFO 10:56:37,113 Logging initialized
> INFO 10:56:37,122 JVM vendor/version: Java HotSpot(TM) Client VM/1.6.0_26
> INFO 10:56:37,123 Heap size: 507117568/507117568
> INFO 10:56:37,123 Classpath: /etc/cassandra:/usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/avro-1.4.0-fixes.jar:/usr/share/cassandra/lib/avro-1.4.0-sources-fixes.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang-2.4.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/guava-r08.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jline-0.9.94.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.7.0.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/metrics-core-2.0.3.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.6.1.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.6.1.jar:/usr/share/cassandra/lib/snakeyaml-1.6.jar:/usr/share/cassandra/lib/snappy-java-1.0.4.1.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/apache-cassandra-1.1.3.jar:/usr/share/cassandra/apache-cassandra-thrift-1.1.3.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar
> INFO 10:56:37,126 JNA not found. Native methods will be disabled.
> INFO 10:56:37,143 Loading settings from file:/etc/cassandra/cassandra.yaml
> Attached is the test case
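The "corruption detected, chunk at ... of length ..." error in the stack trace above is raised when CompressedRandomAccessReader's per-chunk checksum check fails before decompression. A minimal Python sketch of that idea (illustrative only, not Cassandra's actual on-disk format; the chunk size and the checksum placement here are assumptions):

```python
import zlib

CHUNK_LENGTH = 65536  # assumed chunk size; cassandra.yaml's default chunk_length_kb is 64

def write_chunk(data: bytes) -> bytes:
    """Compress one chunk and append a CRC32 over the compressed bytes."""
    compressed = zlib.compress(data)
    crc = zlib.crc32(compressed) & 0xFFFFFFFF
    return compressed + crc.to_bytes(4, "big")

def read_chunk(blob: bytes) -> bytes:
    """Verify the trailing CRC32 before decompressing; raise on mismatch."""
    compressed, stored = blob[:-4], int.from_bytes(blob[-4:], "big")
    if zlib.crc32(compressed) & 0xFFFFFFFF != stored:
        raise IOError("corruption detected, chunk of length %d" % len(compressed))
    return zlib.decompress(compressed)

chunk = bytes(range(256)) * 256          # 64 KiB of sample data
blob = write_chunk(chunk)
assert read_chunk(blob) == chunk         # an intact chunk round-trips cleanly
```

The point of the sketch: any single flipped byte in a stored chunk fails the checksum on the next read, so compaction (which reads every chunk it merges) is typically where latent disk or write-path corruption first surfaces, as in the log above.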
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira