Posted to commits@cassandra.apache.org by "Hau Phan (JIRA)" <ji...@apache.org> on 2016/07/15 18:49:20 UTC

[jira] [Created] (CASSANDRA-12215) Read failure in cqlsh

Hau Phan created CASSANDRA-12215:
------------------------------------

             Summary: Read failure in cqlsh 
                 Key: CASSANDRA-12215
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12215
             Project: Cassandra
          Issue Type: Bug
          Components: Compaction
         Environment: Cassandra 3.0.8, cqlsh 5.0.1
            Reporter: Hau Phan


Running Cassandra 3.0.8 on a single standalone node with cqlsh 5.0.1; the keyspace uses RF = 1 with SimpleStrategy replication.
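
For reference, the keyspace was created along these lines (keyspace and table names below are placeholders, not the actual ones):

-- placeholder schema; only the replication settings match the real keyspace
CREATE KEYSPACE ks WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE ks.t (id int PRIMARY KEY, val text);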

Running 'select * from <table>' returns this error:

ReadFailure: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

Cassandra system.log prints this:
ERROR [CompactionExecutor:5] 2016-07-15 13:42:13,219 CassandraDaemon.java:201 - Exception in thread Thread[CompactionExecutor:5,1,main]
java.lang.NullPointerException: null
	at org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:58) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:263) ~[apache-cassandra-3.0.8.jar:3.0.8]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_65]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_65]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_65]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_65]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65]

Running sstabledump -d shows a few rows with a column value of "<tombstone>", which suggests compaction is not purging these tombstones correctly.
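
The sstabledump invocation was along these lines (the data directory and sstable file name are placeholders; substitute the table's actual Data.db file):

# sstabledump -d /var/lib/cassandra/data/<keyspace>/<table>-<table_id>/<sstable>-Data.db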

# nodetool compactionstats 
pending tasks: 1

Attempting to run a manual compaction fails:
# nodetool compact <keyspace> <table>
error: null
-- StackTrace --
java.lang.NullPointerException
	at org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:58)
	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64)
	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24)
	at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:96)
	at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226)
	at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:177)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:78)
	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60)
	at org.apache.cassandra.db.compaction.CompactionManager$8.runMayThrow(CompactionManager.java:606)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Since the table is pretty small, I can work around this with COPY TO, TRUNCATE, and COPY FROM, and afterwards the table is fine. My concern is that if compaction keeps failing to remove these rows and the table grows very large in a production environment, that copy/truncate/copy workaround will no longer be an option.
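
For completeness, the workaround is roughly this cqlsh sequence (keyspace, table, and file names are placeholders):

-- placeholder names and path; adjust to the real keyspace/table
COPY ks.t TO '/tmp/t_backup.csv';
TRUNCATE ks.t;
COPY ks.t FROM '/tmp/t_backup.csv';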

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)