Posted to commits@cassandra.apache.org by "Fraizier (JIRA)" <ji...@apache.org> on 2018/02/14 14:16:00 UTC

[jira] [Created] (CASSANDRA-14235) ReadFailure Error -- Large Unbound Query

Fraizier created CASSANDRA-14235:
------------------------------------

             Summary: ReadFailure Error -- Large Unbound Query 
                 Key: CASSANDRA-14235
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-14235
             Project: Cassandra
          Issue Type: Bug
          Components: CQL
         Environment: A single local Cassandra node, installed from a tarball rather than as a service. All settings are at their defaults.

I'm running CentOS 7 (release 7.4.1708).
            Reporter: Fraizier
             Fix For: 3.11.1


I receive a ReadFailure error when executing a SELECT query with the Cassandra python-driver.

I have a keyspace called "Documents" and a table with two columns, name and object; name is of type text and object is of type blob. The blobs are pickled Python class instances. The table is described as follows:

 
{code:sql}
CREATE TABLE "Documents".table (
    name text PRIMARY KEY,
    object blob
) WITH bloom_filter_fp_chance = 0.01
    AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
    AND comment = ''
    AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
    AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    AND crc_check_chance = 1.0
    AND dclocal_read_repair_chance = 0.1
    AND default_time_to_live = 0
    AND gc_grace_seconds = 864000
    AND max_index_interval = 2048
    AND memtable_flush_period_in_ms = 0
    AND min_index_interval = 128
    AND read_repair_chance = 0.0
    AND speculative_retry = '99PERCENTILE';
{code}
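For context, the rows were written along these lines; a minimal sketch assuming the DataStax python-driver and a local node, with the Document class and the row name as hypothetical placeholders:

{code:python}
import pickle

from cassandra.cluster import Cluster

class Document:
    """Hypothetical stand-in for the real class whose instances get pickled."""
    def __init__(self, text):
        self.text = text

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('"Documents"')  # keyspace name is case-sensitive, hence the quotes

# pickle.dumps returns bytes, which the driver stores in the blob column
session.execute(
    'INSERT INTO table (name, object) VALUES (%s, %s)',
    ('document-001', pickle.dumps(Document('example'))),
)
cluster.shutdown()
{code}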
 

There are 3509 rows in this table and each object is approximately 25 KB, so I estimate roughly 90 MB in the object column (3509 × 25 KB ≈ 88 MB). I'm attempting to run a simple line of Python driver code:
{code:python}
rows = session.execute("SELECT name, object FROM table")
{code}
and Cassandra's log shows the following:
{code:java}
WARN  [ReadStage-4] 2018-02-13 14:53:12,319 AbstractLocalAwareExecutorService.java:167 - Uncaught exception on thread Thread[ReadStage-4,10,main]: {}
java.lang.RuntimeException: java.lang.RuntimeException
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2598) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_151]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:162) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:134) [apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:109) [apache-cassandra-3.11.1.jar:3.11.1]
    at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]
Caused by: java.lang.RuntimeException: null
    at org.apache.cassandra.io.util.DataOutputBuffer.validateReallocation(DataOutputBuffer.java:134) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.util.DataOutputBuffer.calculateNewSize(DataOutputBuffer.java:152) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.util.DataOutputBuffer.reallocate(DataOutputBuffer.java:159) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.util.DataOutputBuffer.doFlush(DataOutputBuffer.java:119) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:132) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.write(BufferedDataOutputStreamPlus.java:151) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.utils.ByteBufferUtil.writeWithVIntLength(ByteBufferUtil.java:296) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.marshal.AbstractType.writeValue(AbstractType.java:413) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.Cell$Serializer.serialize(Cell.java:210) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.lambda$serializeRowBody$0(UnfilteredSerializer.java:248) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.utils.btree.BTree.applyForwards(BTree.java:1242) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.utils.btree.BTree.apply(BTree.java:1197) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.BTreeRow.apply(BTreeRow.java:172) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.serializeRowBody(UnfilteredSerializer.java:236) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:205) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:137) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.UnfilteredSerializer.serialize(UnfilteredSerializer.java:125) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:137) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:92) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.rows.UnfilteredRowIteratorSerializer.serialize(UnfilteredRowIteratorSerializer.java:79) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.partitions.UnfilteredPartitionIterators$Serializer.serialize(UnfilteredPartitionIterators.java:308) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.build(ReadResponse.java:167) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:160) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.ReadResponse$LocalDataResponse.<init>(ReadResponse.java:156) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.ReadResponse.createDataResponse(ReadResponse.java:76) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.db.ReadCommand.createResponse(ReadCommand.java:346) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:1886) ~[apache-cassandra-3.11.1.jar:3.11.1]
    at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2594) ~[apache-cassandra-3.11.1.jar:3.11.1]
... 5 common frames omitted
{code}
The query works in cqlsh, and I can also execute it from Python against otherwise identical tables; the only difference is that those tables have fewer rows, and therefore less data.

I have tried raising the timeout values in cassandra.yaml, to no avail, which makes sense because this does not look like a timeout issue. I have also raised tombstone_failure_threshold, again to no avail.
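For reference, these are the cassandra.yaml settings I mean; the values shown here are the stock 3.11 defaults, not the raised values I experimented with:

{code}
# cassandra.yaml excerpt -- 3.11 defaults
read_request_timeout_in_ms: 5000         # single-partition reads
range_request_timeout_in_ms: 10000       # range scans, e.g. an unbound SELECT
tombstone_failure_threshold: 100000      # abort reads that scan more tombstones than this
{code}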

How can I execute a query over a dataset this large without hitting this error? Is it possible to set something like a batch size? Any guidance at this point would be helpful.
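The closest thing I have found to a batch size is the driver's fetch_size, which pages the result set; a minimal sketch assuming the DataStax python-driver (it already pages at 5000 rows by default, so paging alone may not be the whole story):

{code:python}
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('"Documents"')  # keyspace name is case-sensitive, hence the quotes

# fetch_size caps how many rows come back per page; iterating the result
# transparently fetches the next page from the server as needed.
query = SimpleStatement('SELECT name, object FROM table', fetch_size=100)
for row in session.execute(query):
    print(row.name, len(row.object))

cluster.shutdown()
{code}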

 


