Posted to user@cassandra.apache.org by Michael Kjellman <mk...@barracuda.com> on 2012/09/20 04:32:39 UTC

Re:

A few questions: what version of 1.1 are you running? What version of Hadoop?

What is your job config? What is the buffer size you've chosen? How much data are you dealing with?

On Sep 19, 2012, at 7:23 PM, "Manu Zhang" <ow...@gmail.com> wrote:

> I've been bulk loading data into Cassandra and seen the following exception:
> 
> ERROR 10:10:31,032 Exception in thread Thread[CompactionExecutor:5,1,main]
> java.lang.RuntimeException: Last written key DecoratedKey(-442063125946754, 313130303136373a31) >= current key DecoratedKey(-465541023623745, 313036393331333a33) writing into /home/manuzhang/cassandra/data/tpch/lineitem/tpch-lineitem-tmp-ia-56-Data.db
> 	at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:131)
> 	at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:152)
> 	at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:169)
> 	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
> 	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
> 	at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:69)
> 	at org.apache.cassandra.db.compaction.CompactionManager$1.run(CompactionManager.java:152)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> 	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
> 	at java.lang.Thread.run(Thread.java:722)
> 
> The Cassandra instance that's running and the one I'm loading data into are the same.
> 
> What's the cause?

'Like' us on Facebook for exclusive content and other resources on all Barracuda Networks solutions.
Visit http://barracudanetworks.com/facebook



Re:

Posted by Manu Zhang <ow...@gmail.com>.
I had Murmur3Partitioner for both of them; otherwise the bulk loader would have
complained, since I put them under the same project. I recently saw some
negative-token issues with Murmur3Partitioner on JIRA, so I moved back to
RandomPartitioner.

Thanks for your concern
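
For anyone who hits this later: the partitioner is configured in cassandra.yaml, and the copy the bulk loader reads must name the same class as the running cluster's. An illustrative snippet (not my actual file):

```yaml
# cassandra.yaml (illustrative) -- the bulk loader's copy and the
# running cluster's copy must agree on this class:
partitioner: org.apache.cassandra.dht.RandomPartitioner
# Mixing this with org.apache.cassandra.dht.Murmur3Partitioner on the
# other side produces out-of-order DecoratedKeys at compaction time.
```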

On Tue, Sep 25, 2012 at 12:49 PM, Vijay <vi...@gmail.com> wrote:

> Hi Manu,
>
> Glad that you have the issue resolved.
>
> If I understand the issue correctly:
> your Cassandra installation had RandomPartitioner, but the bulk loader
> configuration (cassandra.yaml) had Murmur3Partitioner?
> And fixing the cassandra.yaml for the bulk loader resolved the issue?
>
> If not, then we might have a bug, and your feedback might help the community.
>
> Regards,
> </VJ>
>
> [rest of quoted thread snipped]

Re:

Posted by Vijay <vi...@gmail.com>.
Hi Manu,

Glad that you have the issue resolved.

If I understand the issue correctly:
your Cassandra installation had RandomPartitioner, but the bulk loader
configuration (cassandra.yaml) had Murmur3Partitioner?
And fixing the cassandra.yaml for the bulk loader resolved the issue?

If not, then we might have a bug, and your feedback might help the community.

Regards,
</VJ>



On Wed, Sep 19, 2012 at 10:41 PM, Manu Zhang <ow...@gmail.com> wrote:

> The problem seems to have gone away after changing Murmur3Partitioner back
> to RandomPartitioner.
>
> [rest of quoted thread snipped]

Re:

Posted by Manu Zhang <ow...@gmail.com>.
The problem seems to have gone away after changing Murmur3Partitioner back
to RandomPartitioner.
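
To see why a partitioner mismatch surfaces as this particular exception, here is an illustrative sketch (not Cassandra's actual code) of the ordering check that SSTableWriter.beforeAppend performs: rows must arrive in strictly increasing token order, and tokens computed under one partitioner are not sorted under another.

```java
import java.math.BigInteger;

// Illustrative sketch of the invariant behind the "Last written key >=
// current key" error: an SSTable writer only accepts keys whose tokens
// strictly increase. Tokens generated by a different partitioner than
// the one sorting the data violate this and trip the check.
public class KeyOrderCheck {
    private BigInteger lastWrittenToken = null;

    public void beforeAppend(BigInteger currentToken) {
        if (lastWrittenToken != null && lastWrittenToken.compareTo(currentToken) >= 0)
            throw new RuntimeException("Last written key " + lastWrittenToken
                    + " >= current key " + currentToken);
        lastWrittenToken = currentToken;
    }

    public static void main(String[] args) {
        KeyOrderCheck writer = new KeyOrderCheck();
        // Tokens from the error message, appended in sorted order: fine.
        writer.beforeAppend(new BigInteger("-465541023623745"));
        writer.beforeAppend(new BigInteger("-442063125946754"));
        try {
            // Appending a smaller token again, as happens when the loader's
            // partitioner disagrees with the cluster's, is rejected.
            writer.beforeAppend(new BigInteger("-465541023623745"));
        } catch (RuntimeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```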

On Thu, Sep 20, 2012 at 11:14 AM, Manu Zhang <ow...@gmail.com> wrote:

> Yeah, BulkLoader. You did help me elaborate on my question. Thanks!
>
> [rest of quoted thread snipped]

Re:

Posted by Manu Zhang <ow...@gmail.com>.
Yeah, BulkLoader. You did help me elaborate on my question. Thanks!

On Thu, Sep 20, 2012 at 10:58 AM, Michael Kjellman
<mk...@barracuda.com> wrote:

> I assumed you were talking about BulkLoader. I haven't played with trunk
> yet so I'm afraid I won't be much help here...
>
> [rest of quoted thread snipped]

Re:

Posted by Michael Kjellman <mk...@barracuda.com>.
I assumed you were talking about BulkLoader. I haven't played with trunk yet so I'm afraid I won't be much help here...

On Sep 19, 2012, at 7:56 PM, "Manu Zhang" <ow...@gmail.com> wrote:

cassandra-trunk (so it's 1.2); no Hadoop. I'm using the bulk load example here: http://www.datastax.com/dev/blog/bulk-loading#comment-127019; the buffer size is 64 MB, as in the example, and I'm dealing with about 1 GB of data. What do you mean by job config?

[rest of quoted thread snipped]



Re:

Posted by Manu Zhang <ow...@gmail.com>.
cassandra-trunk (so it's 1.2); no Hadoop. I'm using the bulk load example here:
http://www.datastax.com/dev/blog/bulk-loading#comment-127019; the buffer size
is 64 MB, as in the example, and I'm dealing with about 1 GB of data. What do
you mean by job config?
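
For context, the writer setup in that blog example looks roughly like this (an untested sketch of the 1.1-era API; the directory, keyspace, and column family names are illustrative). The partitioner argument is where a mismatch with the cluster's cassandra.yaml can creep in:

```java
// Untested sketch mirroring the DataStax bulk-loading example's shape;
// the partitioner passed here must match the target cluster's cassandra.yaml.
IPartitioner partitioner = new RandomPartitioner();  // not Murmur3Partitioner
SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
        new File("/tmp/tpch/lineitem"),              // output directory (illustrative)
        partitioner,
        "tpch", "lineitem",                          // keyspace, column family
        AsciiType.instance, null,                    // key comparator, no subcomparator
        64);                                         // buffer size in MB, as in the example
```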

On Thu, Sep 20, 2012 at 10:32 AM, Michael Kjellman
<mk...@barracuda.com> wrote:

> A few questions: what version of 1.1 are you running? What version of
> Hadoop?
>
> What is your job config? What is the buffer size you've chosen? How much
> data are you dealing with?
>
> [original message and stack trace snipped]