Posted to user@cassandra.apache.org by Neha Trivedi <ne...@gmail.com> on 2015/04/20 07:38:10 UTC

COPY command to export a table to CSV file

Hello all,

We are getting an OutOfMemoryError on one of the nodes, and the node goes
down when we run the COPY export command to pull all the data from a table.


Regards
Neha




ERROR [ReadStage:532074] 2015-04-09 01:04:00,603 CassandraDaemon.java (line
199) Exception in thread Thread[ReadStage:532074,5,main]
java.lang.OutOfMemoryError: Java heap space
        at
org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
        at
org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
        at
org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
        at
org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
        at
org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
        at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
        at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
        at
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
        at
org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
        at
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at
org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
        at
org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
        at
org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
        at
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at
org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
        at
org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
        at
org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
        at
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
        at
org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
        at
org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
        at
org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
        at
org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
        at
org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
        at
org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
        at
org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
        at
com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
        at
com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
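A note on sizing (this is my reading of the 2.0-era cassandra-env.sh defaults, so treat the exact formula as an assumption): when MAX_HEAP_SIZE is unset, the startup script derives the heap from system RAM, which caps how much a read-stage thread can materialize before an export like this hits OOM. A Python sketch of that calculation:

```python
def default_heap_mb(system_ram_mb):
    # Approximates cassandra-env.sh's calculate_heap_sizes():
    # max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB))
    half = min(system_ram_mb // 2, 1024)
    quarter = min(system_ram_mb // 4, 8192)
    return max(half, quarter)

# An 8 GB server therefore gets roughly a 2 GB heap by default.
print(default_heap_mb(8192))
```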

Re: COPY command to export a table to CSV file

Posted by Neha Trivedi <ne...@gmail.com>.
Values in /etc/security/limits.d/cassandra.conf:

# Provided by the cassandra package
cassandra  -  memlock  unlimited
cassandra  -  nofile   100000
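One caveat worth checking (a generic Linux detail, not something raised in this thread): values in limits.d only apply to processes started after the change, so a long-running daemon may still hold older limits. A small sketch that parses the kernel's per-process view from /proc/&lt;pid&gt;/limits:

```python
import os
from pathlib import Path

def read_limit(limits_text, name):
    # Parse one row of /proc/<pid>/limits into (soft, hard) strings,
    # e.g. ("100000", "100000") or ("unlimited", "unlimited").
    for line in limits_text.splitlines():
        if line.startswith(name):
            soft, hard = line[len(name):].split()[:2]
            return soft, hard
    raise KeyError(name)

if os.path.exists("/proc/self/limits"):  # Linux only
    text = Path("/proc/self/limits").read_text()
    print(read_limit(text, "Max open files"))
```

Running it against the Cassandra PID (instead of `self`) shows whether the daemon actually inherited nofile 100000.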


On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk <co...@gmail.com> wrote:

> Hi,
>
> Thanks for the info,
>
> Are the nproc, nofile, and memlock settings in
> /etc/security/limits.d/cassandra.conf set to optimum values?
>
> What is the consistency level ?
>
> Best Regards,
> Kiran.M.K.
>
>
> On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi <ne...@gmail.com>
> wrote:
>
>> hi,
>>
>> What is the count of records in the column-family ?
>>       We have about 38,000 rows in the column-family that we are
>> trying to export.
>> What  is the Cassandra Version ?
>>      We are using Cassandra 2.0.11
>>
>> MAX_HEAP_SIZE and HEAP_NEWSIZE are at the defaults.
>> The server has 8 GB of RAM.
>>
>> regards
>> Neha
>>
>> On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk <co...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Check the MAX_HEAP_SIZE setting in the cassandra-env.sh environment
>>> file.
>>>
>>> Also check HEAP_NEWSIZE.
>>>
>>> What is the consistency level you are using?
>>>
>>> Best Regards,
>>> Kiran.M.K.
>>>
>>> On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com>
>>> wrote:
>>>
>>>> Seems like this is related to Java heap memory.
>>>>
>>>> What is the count of records in the column-family ?
>>>>
>>>> What  is the Cassandra Version ?
>>>>
>>>> Best Regards,
>>>> Kiran.M.K.
>>>>
>>>> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <ne...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> We are getting the OutOfMemoryError on one of the Node and the Node is
>>>>> down, when we run the export command to get all the data from a table.
>>>>>
>>>>>
>>>>> Regards
>>>>> Neha
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards,
>>>> Kiran.M.K.
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Kiran.M.K.
>>>
>>
>>
>
>
> --
> Best Regards,
> Kiran.M.K.
>

Re: COPY command to export a table to CSV file

Posted by Neha Trivedi <ne...@gmail.com>.
Thanks Sebastian, I will try it out.
But I am also curious about why the COPY command is failing with an
OutOfMemoryError.
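My guess at the mechanism (speculation on my part, not confirmed in this thread): cqlsh's COPY TO funnels the whole read through one coordinator, and a partition with very large column values can exhaust the heap in the read stage. A workaround people sometimes use is to page through the table with a small fetch size and stream rows to CSV. The CSV half is sketched below; the commented-out driver half uses the DataStax Python driver with a placeholder address, keyspace, and table:

```python
import csv
import io

def rows_to_csv(rows, header):
    # Stream row tuples into CSV text; with a real driver result set,
    # rows are consumed page by page rather than held in memory at once.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    for row in rows:
        writer.writerow(row)
    return buf.getvalue()

# With a live cluster (placeholder names; requires cassandra-driver):
#   from cassandra.cluster import Cluster
#   from cassandra.query import SimpleStatement
#   session = Cluster(["127.0.0.1"]).connect("my_keyspace")
#   stmt = SimpleStatement("SELECT id, payload FROM my_table", fetch_size=500)
#   with open("out.csv", "w", newline="") as f:
#       f.write(rows_to_csv(session.execute(stmt), ["id", "payload"]))

print(rows_to_csv([(1, "a"), (2, "b")], ["id", "val"]))
```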

regards
Neha

On Tue, Apr 21, 2015 at 4:35 AM, Sebastian Estevez <
sebastian.estevez@datastax.com> wrote:

> Blobs are ByteBuffers; it calls getBytes().toString():
>
>
> https://github.com/brianmhess/cassandra-loader/blob/master/src/main/java/com/datastax/loader/parser/ByteBufferParser.java#L35
>
> All the best,
>
>
> Sebastián Estévez
>
> Solutions Architect | 954 905 8615 | sebastian.estevez@datastax.com
>
> DataStax is the fastest, most scalable distributed database technology,
> delivering Apache Cassandra to the world’s most innovative enterprises.
> DataStax is built to be agile, always-on, and predictably scalable to any
> size. With more than 500 customers in 45 countries, DataStax is the
> database technology and transactional backbone of choice for the world’s
> most innovative companies such as Netflix, Adobe, Intuit, and eBay.
>
> On Mon, Apr 20, 2015 at 5:47 PM, Serega Sheypak <se...@gmail.com>
> wrote:
>
>> hi, what happens if unloader meets blob field?
>>
>> 2015-04-20 23:43 GMT+02:00 Sebastian Estevez <
>> sebastian.estevez@datastax.com>:
>>
>>> Try Brian's cassandra-unloader
>>> <https://github.com/brianmhess/cassandra-loader#cassandra-unloader>
>>>
>>> All the best,
>>>
>>>
>>> On Mon, Apr 20, 2015 at 12:31 PM, Neha Trivedi <ne...@gmail.com>
>>> wrote:
>>>
>>>> Does the nproc,nofile,memlock settings in
>>>> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>>>> it's all default.
>>>>
>>>> What is the consistency level ?
>>>> CL = QUORUM
>>>>
>>>> Is there any other way to export a table to CSV?
>>>>
>>>> regards
>>>> Neha
>>>>
>>>> On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk <co...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for the info,
>>>>>
>>>>> Does the nproc,nofile,memlock settings in
>>>>> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>>>>>
>>>>> What is the consistency level ?
>>>>>
>>>>> Best Regardds,
>>>>> Kiran.M.K.
>>>>>
>>>>>
>>>>> On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi <nehajtrivedi@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> hi,
>>>>>>
>>>>>> What is the count of records in the column-family ?
>>>>>>       We have about 38,000 Rows in the column-family for which we are
>>>>>> trying to export
>>>>>> What  is the Cassandra Version ?
>>>>>>      We are using Cassandra 2.0.11
>>>>>>
>>>>>> MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
>>>>>> The Server is 8 GB.
>>>>>>
>>>>>> regards
>>>>>> Neha
>>>>>>
>>>>>> On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk <co...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> check  the MAX_HEAP_SIZE configuration in cassandra-env.sh
>>>>>>> environment file
>>>>>>>
>>>>>>> Also HEAP_NEWSIZE ?
>>>>>>>
>>>>>>> What is the Consistency Level you are using ?
>>>>>>>
>>>>>>> Best REgards,
>>>>>>> Kiran.M.K.
>>>>>>>
>>>>>>> On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Seems like the is related to JAVA HEAP Memory.
>>>>>>>>
>>>>>>>> What is the count of records in the column-family ?
>>>>>>>>
>>>>>>>> What  is the Cassandra Version ?
>>>>>>>>
>>>>>>>> Best Regards,
>>>>>>>> Kiran.M.K.
>>>>>>>>
>>>>>>>> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <
>>>>>>>> nehajtrivedi@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hello all,
>>>>>>>>>
>>>>>>>>> We are getting the OutOfMemoryError on one of the Node and the
>>>>>>>>> Node is down, when we run the export command to get all the data from a
>>>>>>>>> table.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Regards
>>>>>>>>> Neha
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Best Regards,
>>>>>>>> Kiran.M.K.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best Regards,
>>>>>>> Kiran.M.K.
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best Regards,
>>>>> Kiran.M.K.
>>>>>
>>>>
>>>>
>>>
>>
>

Re: COPY command to export a table to CSV file

Posted by Sebastian Estevez <se...@datastax.com>.
Blobs are ByteBuffers; it calls getBytes().toString():

https://github.com/brianmhess/cassandra-loader/blob/master/src/main/java/com/datastax/loader/parser/ByteBufferParser.java#L35
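So a blob arrives client-side as a ByteBuffer whose bytes get stringified; the exact output format is whatever the linked parser emits. As a rough analogue (hex encoding is my choice here, not necessarily what cassandra-loader produces), rendering a binary column for CSV could look like:

```python
def blob_to_csv_field(blob):
    # Hex-encode binary data so it survives a text CSV round trip.
    return "0x" + blob.hex()

print(blob_to_csv_field(b"\xca\xfe\xba\xbe"))
```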

All the best,



On Mon, Apr 20, 2015 at 5:47 PM, Serega Sheypak <se...@gmail.com>
wrote:

> hi, what happens if unloader meets blob field?
>
> 2015-04-20 23:43 GMT+02:00 Sebastian Estevez <
> sebastian.estevez@datastax.com>:
>
>> Try Brian's cassandra-unloader
>> <https://github.com/brianmhess/cassandra-loader#cassandra-unloader>
>>
>> All the best,
>>
>>
>>
>> On Mon, Apr 20, 2015 at 12:31 PM, Neha Trivedi <ne...@gmail.com>
>> wrote:
>>
>>> Does the nproc,nofile,memlock settings in
>>> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>>> it's all default.
>>>
>>> What is the consistency level ?
>>> CL = QUORUM
>>>
>>> Is there any other way to export a table to CSV?
>>>
>>> regards
>>> Neha
>>>
>>> On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk <co...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Thanks for the info,
>>>>
>>>> Does the nproc,nofile,memlock settings in
>>>> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>>>>
>>>> What is the consistency level ?
>>>>
>>>> Best Regardds,
>>>> Kiran.M.K.
>>>>
>>>>
>>>> On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi <ne...@gmail.com>
>>>> wrote:
>>>>
>>>>> hi,
>>>>>
>>>>> What is the count of records in the column-family ?
>>>>>       We have about 38,000 Rows in the column-family for which we are
>>>>> trying to export
>>>>> What  is the Cassandra Version ?
>>>>>      We are using Cassandra 2.0.11
>>>>>
>>>>> MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
>>>>> The Server is 8 GB.
>>>>>
>>>>> regards
>>>>> Neha
>>>>>
>>>>> On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk <co...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> check  the MAX_HEAP_SIZE configuration in cassandra-env.sh
>>>>>> environment file
>>>>>>
>>>>>> Also HEAP_NEWSIZE ?
>>>>>>
>>>>>> What is the Consistency Level you are using ?
>>>>>>
>>>>>> Best REgards,
>>>>>> Kiran.M.K.
>>>>>>
>>>>>> On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Seems like the is related to JAVA HEAP Memory.
>>>>>>>
>>>>>>> What is the count of records in the column-family ?
>>>>>>>
>>>>>>> What  is the Cassandra Version ?
>>>>>>>
>>>>>>> Best Regards,
>>>>>>> Kiran.M.K.
>>>>>>>
>>>>>>> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <
>>>>>>> nehajtrivedi@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hello all,
>>>>>>>>
>>>>>>>> We are getting the OutOfMemoryError on one of the Node and the Node
>>>>>>>> is down, when we run the export command to get all the data from a table.
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Neha
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best Regards,
>>>>>>> Kiran.M.K.
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards,
>>>>>> Kiran.M.K.
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards,
>>>> Kiran.M.K.
>>>>
>>>
>>>
>>
>

Re: COPY command to export a table to CSV file

Posted by Serega Sheypak <se...@gmail.com>.
Hi, what happens if the unloader meets a blob field?

2015-04-20 23:43 GMT+02:00 Sebastian Estevez <sebastian.estevez@datastax.com
>:

> Try Brian's cassandra-unloader
> <https://github.com/brianmhess/cassandra-loader#cassandra-unloader>
>
> All the best,
>
>
>
> On Mon, Apr 20, 2015 at 12:31 PM, Neha Trivedi <ne...@gmail.com>
> wrote:
>
>> Does the nproc,nofile,memlock settings in
>> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>> it's all default.
>>
>> What is the consistency level ?
>> CL = QUORUM
>>
>> Is there any other way to export a table to CSV?
>>
>> regards
>> Neha
>>
>> On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk <co...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> Thanks for the info,
>>>
>>> Does the nproc,nofile,memlock settings in
>>> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>>>
>>> What is the consistency level ?
>>>
>>> Best Regardds,
>>> Kiran.M.K.
>>>
>>>
>>> On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi <ne...@gmail.com>
>>> wrote:
>>>
>>>> hi,
>>>>
>>>> What is the count of records in the column-family ?
>>>>       We have about 38,000 Rows in the column-family for which we are
>>>> trying to export
>>>> What  is the Cassandra Version ?
>>>>      We are using Cassandra 2.0.11
>>>>
>>>> MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
>>>> The Server is 8 GB.
>>>>
>>>> regards
>>>> Neha
>>>>
>>>> On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk <co...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> check  the MAX_HEAP_SIZE configuration in cassandra-env.sh
>>>>> environment file
>>>>>
>>>>> Also HEAP_NEWSIZE ?
>>>>>
>>>>> What is the Consistency Level you are using ?
>>>>>
>>>>> Best REgards,
>>>>> Kiran.M.K.
>>>>>
>>>>> On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Seems like the is related to JAVA HEAP Memory.
>>>>>>
>>>>>> What is the count of records in the column-family ?
>>>>>>
>>>>>> What  is the Cassandra Version ?
>>>>>>
>>>>>> Best Regards,
>>>>>> Kiran.M.K.
>>>>>>
>>>>>> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <
>>>>>> nehajtrivedi@gmail.com> wrote:
>>>>>>
>>>>>>> Hello all,
>>>>>>>
>>>>>>> We are getting the OutOfMemoryError on one of the Node and the Node
>>>>>>> is down, when we run the export command to get all the data from a table.
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> Neha
>>>>>>>         at
>>>>>>> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
>>>>>>>         at
>>>>>>> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>>>>>>>         at
>>>>>>> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best Regards,
>>>>>> Kiran.M.K.
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best Regards,
>>>>> Kiran.M.K.
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Kiran.M.K.
>>>
>>
>>
>

Re: COPY command to export a table to CSV file

Posted by Sebastian Estevez <se...@datastax.com>.
Try Brian's cassandra-unloader
<https://github.com/brianmhess/cassandra-loader#cassandra-unloader>

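A minimal invocation looks roughly like the following; the flags are
recalled from the project README rather than taken from this thread, and
the host, keyspace, table, and column names are placeholders -- check
`cassandra-unloader --help` for the authoritative options:

```
cassandra-unloader -host 10.0.0.1 \
    -schema "my_keyspace.my_table(id, name, value)" \
    -f my_table.csv
```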
All the best,



Sebastián Estévez

Solutions Architect | 954 905 8615 | sebastian.estevez@datastax.com



On Mon, Apr 20, 2015 at 12:31 PM, Neha Trivedi <ne...@gmail.com>
wrote:

> Does the nproc,nofile,memlock settings in
> /etc/security/limits.d/cassandra.conf are set to optimum value ?
> it's all default.
>
> What is the consistency level ?
> CL = QUORUM
>
> Is there any other way to export a table to CSV?
>
> regards
> Neha
>
> On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk <co...@gmail.com>
> wrote:
>
>> Hi,
>>
>> Thanks for the info,
>>
>> Does the nproc,nofile,memlock settings in
>> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>>
>> What is the consistency level ?
>>
>> Best Regardds,
>> Kiran.M.K.
>>
>>
>> On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi <ne...@gmail.com>
>> wrote:
>>
>>> hi,
>>>
>>> What is the count of records in the column-family ?
>>>       We have about 38,000 Rows in the column-family for which we are
>>> trying to export
>>> What  is the Cassandra Version ?
>>>      We are using Cassandra 2.0.11
>>>
>>> MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
>>> The Server is 8 GB.
>>>
>>> regards
>>> Neha
>>>
>>> On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk <co...@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> check  the MAX_HEAP_SIZE configuration in cassandra-env.sh environment
>>>> file
>>>>
>>>> Also HEAP_NEWSIZE ?
>>>>
>>>> What is the Consistency Level you are using ?
>>>>
>>>> Best REgards,
>>>> Kiran.M.K.
>>>>
>>>> On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com>
>>>> wrote:
>>>>
>>>>> Seems like the is related to JAVA HEAP Memory.
>>>>>
>>>>> What is the count of records in the column-family ?
>>>>>
>>>>> What  is the Cassandra Version ?
>>>>>
>>>>> Best Regards,
>>>>> Kiran.M.K.
>>>>>
>>>>> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <nehajtrivedi@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> Hello all,
>>>>>>
>>>>>> We are getting the OutOfMemoryError on one of the Node and the Node
>>>>>> is down, when we run the export command to get all the data from a table.
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Neha
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best Regards,
>>>>> Kiran.M.K.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards,
>>>> Kiran.M.K.
>>>>
>>>
>>>
>>
>>
>> --
>> Best Regards,
>> Kiran.M.K.
>>
>
>

Re: COPY command to export a table to CSV file

Posted by Neha Trivedi <ne...@gmail.com>.
Are the nproc, nofile, and memlock settings in
/etc/security/limits.d/cassandra.conf set to the recommended values?
They are all at the defaults.

What is the consistency level?
CL = QUORUM

Is there any other way to export a table to CSV?

regards
Neha

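One alternative to COPY is to page through the table with a client
driver and stream each page straight to disk, so neither the coordinator
nor the client ever materialises the full result set. A minimal sketch
(the driver calls in the comment are an assumption, not part of this
thread, and the keyspace, table, and column names are placeholders):

```python
import csv

def rows_to_csv(rows, header, path):
    """Stream rows to CSV one at a time, so memory stays bounded no
    matter how many rows the table holds."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        count = 0
        for row in rows:
            writer.writerow(row)
            count += 1
    return count

# Fed from the DataStax Python driver, `rows` would be a paged result
# set, e.g. (assumed usage, names are placeholders):
#   from cassandra.cluster import Cluster
#   from cassandra.query import SimpleStatement
#   session = Cluster(["127.0.0.1"]).connect("my_keyspace")
#   stmt = SimpleStatement("SELECT id, value FROM my_table",
#                          fetch_size=1000)
#   rows_to_csv(session.execute(stmt), ["id", "value"], "my_table.csv")
```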
On Mon, Apr 20, 2015 at 12:21 PM, Kiran mk <co...@gmail.com> wrote:

> Hi,
>
> Thanks for the info,
>
> Does the nproc,nofile,memlock settings in
> /etc/security/limits.d/cassandra.conf are set to optimum value ?
>
> What is the consistency level ?
>
> Best Regardds,
> Kiran.M.K.
>
>
> On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi <ne...@gmail.com>
> wrote:
>
>> hi,
>>
>> What is the count of records in the column-family ?
>>       We have about 38,000 Rows in the column-family for which we are
>> trying to export
>> What  is the Cassandra Version ?
>>      We are using Cassandra 2.0.11
>>
>> MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
>> The Server is 8 GB.
>>
>> regards
>> Neha
>>
>> On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk <co...@gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> check  the MAX_HEAP_SIZE configuration in cassandra-env.sh environment
>>> file
>>>
>>> Also HEAP_NEWSIZE ?
>>>
>>> What is the Consistency Level you are using ?
>>>
>>> Best REgards,
>>> Kiran.M.K.
>>>
>>> On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com>
>>> wrote:
>>>
>>>> Seems like the is related to JAVA HEAP Memory.
>>>>
>>>> What is the count of records in the column-family ?
>>>>
>>>> What  is the Cassandra Version ?
>>>>
>>>> Best Regards,
>>>> Kiran.M.K.
>>>>
>>>> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <ne...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> We are getting the OutOfMemoryError on one of the Node and the Node is
>>>>> down, when we run the export command to get all the data from a table.
>>>>>
>>>>>
>>>>> Regards
>>>>> Neha
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best Regards,
>>>> Kiran.M.K.
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Kiran.M.K.
>>>
>>
>>
>
>
> --
> Best Regards,
> Kiran.M.K.
>

Re: COPY command to export a table to CSV file

Posted by Kiran mk <co...@gmail.com>.
Hi,

Thanks for the info,

Are the nproc, nofile, and memlock settings in
/etc/security/limits.d/cassandra.conf set to the recommended values?

What is the consistency level?

Best Regards,
Kiran.M.K.

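For reference, the values the DataStax install documentation recommends
for that file look like the following (2.0-era recommendations; treat
them as a starting point rather than tuned values):

```
cassandra - memlock unlimited
cassandra - nofile 100000
cassandra - nproc 32768
cassandra - as unlimited
```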

On Mon, Apr 20, 2015 at 11:55 AM, Neha Trivedi <ne...@gmail.com>
wrote:

> hi,
>
> What is the count of records in the column-family ?
>       We have about 38,000 Rows in the column-family for which we are
> trying to export
> What  is the Cassandra Version ?
>      We are using Cassandra 2.0.11
>
> MAX_HEAP_SIZE and HEAP_NEWSIZE is the default .
> The Server is 8 GB.
>
> regards
> Neha
>
> On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk <co...@gmail.com>
> wrote:
>
>> Hi,
>>
>> check  the MAX_HEAP_SIZE configuration in cassandra-env.sh environment
>> file
>>
>> Also HEAP_NEWSIZE ?
>>
>> What is the Consistency Level you are using ?
>>
>> Best REgards,
>> Kiran.M.K.
>>
>> On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com>
>> wrote:
>>
>>> Seems like the is related to JAVA HEAP Memory.
>>>
>>> What is the count of records in the column-family ?
>>>
>>> What  is the Cassandra Version ?
>>>
>>> Best Regards,
>>> Kiran.M.K.
>>>
>>> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <ne...@gmail.com>
>>> wrote:
>>>
>>>> Hello all,
>>>>
>>>> We are getting the OutOfMemoryError on one of the Node and the Node is
>>>> down, when we run the export command to get all the data from a table.
>>>>
>>>>
>>>> Regards
>>>> Neha
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Kiran.M.K.
>>>
>>
>>
>>
>> --
>> Best Regards,
>> Kiran.M.K.
>>
>
>


-- 
Best Regards,
Kiran.M.K.

Re: COPY command to export a table to CSV file

Posted by Neha Trivedi <ne...@gmail.com>.
hi,

What is the count of records in the column family?
      We have about 38,000 rows in the column family that we are
trying to export.
What is the Cassandra version?
     We are using Cassandra 2.0.11.

MAX_HEAP_SIZE and HEAP_NEWSIZE are at their defaults.
The server has 8 GB of RAM.

regards
Neha

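For context, the 2.0-era cassandra-env.sh derives the default heap from
system RAM, which puts an 8 GB box at roughly a 2 GB heap. The sketch
below is a Python paraphrase of that shell heuristic, not the script
itself:

```python
def default_max_heap_mb(system_memory_mb):
    """Paraphrase of the cassandra-env.sh heuristic:
    max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB))."""
    half = min(system_memory_mb // 2, 1024)
    quarter = min(system_memory_mb // 4, 8192)
    return max(half, quarter)

def default_heap_newsize_mb(max_heap_mb, cpu_cores):
    """HEAP_NEWSIZE defaults to min(100 MB per core, 1/4 of the heap)."""
    return min(100 * cpu_cores, max_heap_mb // 4)

# An 8 GB server therefore runs with about a 2 GB heap by default:
print(default_max_heap_mb(8192))        # 2048
print(default_heap_newsize_mb(2048, 4)) # 400
```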
On Mon, Apr 20, 2015 at 11:39 AM, Kiran mk <co...@gmail.com> wrote:

> Hi,
>
> check  the MAX_HEAP_SIZE configuration in cassandra-env.sh environment
> file
>
> Also HEAP_NEWSIZE ?
>
> What is the Consistency Level you are using ?
>
> Best REgards,
> Kiran.M.K.
>
> On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com>
> wrote:
>
>> Seems like the is related to JAVA HEAP Memory.
>>
>> What is the count of records in the column-family ?
>>
>> What  is the Cassandra Version ?
>>
>> Best Regards,
>> Kiran.M.K.
>>
>> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <ne...@gmail.com>
>> wrote:
>>
>>> Hello all,
>>>
>>> We are getting the OutOfMemoryError on one of the Node and the Node is
>>> down, when we run the export command to get all the data from a table.
>>>
>>>
>>> Regards
>>> Neha
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>> --
>> Best Regards,
>> Kiran.M.K.
>>
>
>
>
> --
> Best Regards,
> Kiran.M.K.
>

Re: COPY command to export a table to CSV file

Posted by Kiran mk <co...@gmail.com>.
Hi,

Check the MAX_HEAP_SIZE setting in the cassandra-env.sh environment file.

What about HEAP_NEWSIZE?

What is the consistency level you are using?

Best Regards,
Kiran.M.K.

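In cassandra-env.sh the two variables look like this when set explicitly
(the 4G/800M figures below are illustrative assumptions, not a
recommendation from this thread; when left commented out, the stock
script computes both automatically):

```
# Uncomment and set BOTH together -- setting only one of them is an error.
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="800M"
```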
On Mon, Apr 20, 2015 at 11:13 AM, Kiran mk <co...@gmail.com> wrote:

> Seems like the is related to JAVA HEAP Memory.
>
> What is the count of records in the column-family ?
>
> What  is the Cassandra Version ?
>
> Best Regards,
> Kiran.M.K.
>
> On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <ne...@gmail.com>
> wrote:
>
>> Hello all,
>>
>> We are getting the OutOfMemoryError on one of the Node and the Node is
>> down, when we run the export command to get all the data from a table.
>>
>>
>> Regards
>> Neha
>>
>>
>>
>>
>>
>>
>
>
>
> --
> Best Regards,
> Kiran.M.K.
>



-- 
Best Regards,
Kiran.M.K.

Re: COPY command to export a table to CSV file

Posted by Kiran mk <co...@gmail.com>.
Seems like this is related to Java heap memory.

What is the count of records in the column family?

What is the Cassandra version?

Best Regards,
Kiran.M.K.

On Mon, Apr 20, 2015 at 11:08 AM, Neha Trivedi <ne...@gmail.com>
wrote:

> Hello all,
>
> We are getting the OutOfMemoryError on one of the Node and the Node is
> down, when we run the export command to get all the data from a table.
>
>
> Regards
> Neha
>
>
>
>
> ERROR [ReadStage:532074] 2015-04-09 01:04:00,603 CassandraDaemon.java
> (line 199) Exception in thread Thread[ReadStage:532074,5,main]
> java.lang.OutOfMemoryError: Java heap space
>         at
> org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:347)
>         at
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:392)
>         at
> org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:355)
>         at
> org.apache.cassandra.db.ColumnSerializer.deserializeColumnBody(ColumnSerializer.java:124)
>         at
> org.apache.cassandra.db.OnDiskAtom$Serializer.deserializeFromSSTable(OnDiskAtom.java:85)
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:75)
>         at org.apache.cassandra.db.Column$1.computeNext(Column.java:64)
>         at
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:88)
>         at
> org.apache.cassandra.db.columniterator.SimpleSliceReader.computeNext(SimpleSliceReader.java:37)
>         at
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at
> org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:82)
>         at
> org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:82)
>         at
> org.apache.cassandra.db.columniterator.LazyColumnIterator.computeNext(LazyColumnIterator.java:59)
>         at
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at
> org.apache.cassandra.db.filter.QueryFilter$2.getNext(QueryFilter.java:157)
>         at
> org.apache.cassandra.db.filter.QueryFilter$2.hasNext(QueryFilter.java:140)
>         at
> org.apache.cassandra.utils.MergeIterator$OneToOne.computeNext(MergeIterator.java:200)
>         at
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>         at
> org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:185)
>         at
> org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:122)
>         at
> org.apache.cassandra.db.filter.QueryFilter.collateOnDiskAtom(QueryFilter.java:80)
>         at
> org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:101)
>         at
> org.apache.cassandra.db.RowIteratorFactory$2.getReduced(RowIteratorFactory.java:75)
>         at
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:115)
>         at
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:98)
>         at
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:143)
>         at
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:138)
>
>



-- 
Best Regards,
Kiran.M.K.