Posted to user@cassandra.apache.org by Eric Stevens <mi...@gmail.com> on 2016/10/03 15:25:24 UTC

Re: Way to write to dc1 but keep data only in dc2

It sounds like you're trying to avoid the latency of waiting for a write
confirmation to a remote data center?

App ==> DC1 ==high-latency==> DC2

If you need the write to be confirmed before you consider the write
successful in your application (definitely recommended unless you're ok
with losing data and the app having no idea), you're not going to solve the
fundamental physics problem of having to wait for a round-trip between
_something_ and DC2.  DC1 can't acknowledge the write until it's in the
memtables and commitlog of a node that owns that data, so under the hood
it's doing basically the same thing your app would have to do.  In fact,
putting DC1 in the middle just introduces a (possibly trivial but
definitely not zero) amount of additional latency over:

App ==high-latency==> DC2

The only exception would be if you had an expectation that latency between
DC1 and DC2 would be lower than latency between App and DC2, which I admit
is not impossible.
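Eric's argument reduces to simple arithmetic: proxying through DC1 only wins when the App-to-DC1 hop plus the DC1-to-DC2 hop is shorter than going to DC2 directly. A quick sketch (the millisecond figures are invented for illustration, not measurements):

```python
# One-way latencies in ms; purely illustrative numbers.
app_to_dc1 = 2    # app and DC1 are close
app_to_dc2 = 80   # app to the remote DC
dc1_to_dc2 = 70   # inter-DC link

# The write must be acknowledged by DC2 either way, so compare round trips.
via_dc1 = 2 * (app_to_dc1 + dc1_to_dc2)  # App -> DC1 -> DC2 and back
direct = 2 * app_to_dc2                  # App -> DC2 and back

print(f"via DC1: {via_dc1} ms, direct: {direct} ms")
# Proxying only helps when app_to_dc1 + dc1_to_dc2 < app_to_dc2,
# i.e. exactly the exception described above.
```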

On Fri, Sep 30, 2016 at 1:49 PM Dorian Hoxha <do...@gmail.com> wrote:

> Thanks Edward. Looks like what I really wanted (to use some kind of
> quorum write, for example) isn't possible.
>
> Note that the queue is ordered, but I just need the writes to eventually
> happen, with more consistency than ANY (2 nodes or more).
>
> On Fri, Sep 30, 2016 at 12:25 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>> You can do something like this, though your use of terminology like
>> "queue" really does not apply.
>>
>> You can setup your keyspace with replication in only one data center.
>>
>> CREATE KEYSPACE NTSkeyspace WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'dc2' : 3 };
>>
>> This will make the NTSkeyspace live only in one data center. You can
>> always write to any Cassandra node, since they will transparently proxy the
>> writes to the proper place. You can configure your client to ONLY bind to
>> specific hosts or data centers (e.g., only the DC1 hosts).
>>
>> You can use a write consistency level like ANY. If you use a consistency
>> level like ONE, it will cause the write to block anyway, waiting for
>> completion on the other datacenter.
>>
>> Since you mentioned the words "like a queue", I would suggest an
>> alternative: writing the data to a distributed commit log like Kafka.
>> At that point you can decouple the write systems, either through
>> producer/consumer or through a tool like Kafka's MirrorMaker.
>>
>>
>> On Thu, Sep 29, 2016 at 5:24 PM, Dorian Hoxha <do...@gmail.com>
>> wrote:
>>
>>> I have dc1 and dc2.
>>> I want to keep a keyspace only on dc2.
>>> But I only have my app on dc1.
>>> And I want to write to dc1 (lower latency), which will not keep data
>>> locally but just push it to dc2,
>>> while reading will only work from dc2.
>>> Since my app is mostly writes, it ~will be faster without having to
>>> deploy the app to dc2 or write directly to dc2 with higher latency.
>>>
>>> dc1 would act like a queue or something and just push data + delete
>>> locally.
>>>
>>> Does this make sense?
>>>
>>> Thank You
>>>
>>
>>
>

Re: Way to write to dc1 but keep data only in dc2

Posted by INDRANIL BASU <in...@yahoo.com>.
@Dorian, yes I did that by mistake. I rectified it by starting a new thread.

Thanks and regards,
-- Indranil Basu

      From: Dorian Hoxha <do...@gmail.com>
 To: user@cassandra.apache.org; INDRANIL BASU <in...@yahoo.com> 
 Sent: Monday, 3 October 2016 11:07 PM
 Subject: Re: Way to write to dc1 but keep data only in dc2
   
@INDRANIL
Please go find your own thread and don't hijack mine.


Re: Way to write to dc1 but keep data only in dc2

Posted by Yabin Meng <ya...@gmail.com>.
Dorian, I don't think Cassandra is able to achieve what you want natively.
In short, what you want to achieve is conditional data replication.

Yabin



On Mon, Oct 3, 2016 at 1:37 PM, Dorian Hoxha <do...@gmail.com> wrote:

> @INDRANIL
> Please go find your own thread and don't hijack mine.
>

Re: Way to write to dc1 but keep data only in dc2

Posted by Dorian Hoxha <do...@gmail.com>.
@INDRANIL
Please go find your own thread and don't hijack mine.


Re: Tombstoned error and then OOM

Posted by kurt Greaves <ku...@instaclustr.com>.
You'll still need to query all the data even if it's secondary indexed.
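As a toy illustration of why (a simplified model, not Cassandra internals): an equality query on a low-cardinality index has to walk every index entry for that value, including entries whose rows have since been deleted, which is exactly the pattern behind the "Read 0 live and 1923 tombstoned cells" warning from this thread.

```python
# Toy model: index entries for status = 0, where True means the row is
# still live and False means it was deleted (a tombstone remains).
entries = [False] * 1923   # every matching row has been deleted

# The scan must touch every entry to discover which rows are live.
live = [e for e in entries if e]
scanned = len(entries)
print(f"Read {len(live)} live and {scanned - len(live)} tombstoned cells")
```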

On 4 October 2016 at 17:13, INDRANIL BASU <in...@yahoo.com> wrote:

> The query has a where clause on a column which is a secondary index in the
> column family.
> E.g.:
> select * from test_schema.test_cf where status = 0;
> Here status is an integer column which is indexed.
>
> -- IB
>


-- 
Kurt Greaves
kurt@instaclustr.com
www.instaclustr.com

Re: Tombstoned error and then OOM

Posted by INDRANIL BASU <in...@yahoo.com>.
The query has a where clause on a column which is a secondary index in the
column family. E.g.:
select * from test_schema.test_cf where status = 0;
Here status is an integer column which is indexed.

-- IB

      From: kurt Greaves <ku...@instaclustr.com>
 To: user@cassandra.apache.org; INDRANIL BASU <in...@yahoo.com> 
 Sent: Tuesday, 4 October 2016 10:38 PM
 Subject: Re: Tombstoned error and then OOM
   
This sounds like you're running a query that consumes a lot of memory. Are you by chance querying a very large partition or not bounding your query?

I'd also recommend upgrading to 2.1.15, 2.1.0 is very old and has quite a few bugs.


Re: Tombstoned error and then OOM

Posted by kurt Greaves <ku...@instaclustr.com>.
This sounds like you're running a query that consumes a lot of memory. Are
you by chance querying a very large partition or not bounding your query?

I'd also recommend upgrading to 2.1.15, 2.1.0 is very old and has quite a
few bugs.
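One way to bound the query from this thread (`select * from test_schema.test_cf where status = 0`) is a LIMIT, which caps how much a single request materialises; the tombstone scanning itself still happens, so the real fix is the data model or clearing the tombstones:

```cql
-- Hypothetical bounded version of the original query
SELECT * FROM test_schema.test_cf WHERE status = 0 LIMIT 500;
```

Most drivers of that era also page results automatically with a configurable fetch size, which bounds each round trip in the same way.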



-- 
Kurt Greaves
kurt@instaclustr.com
www.instaclustr.com

Tombstoned error and then OOM

Posted by INDRANIL BASU <in...@yahoo.com>.
Hello All,



I am getting the below error repeatedly in the system log of C* 2.1.0

WARN  [SharedPool-Worker-64] 2016-09-27 00:43:35,835 SliceQueryFilter.java:236 - Read 0 live and 1923 tombstoned cells in test_schema.test_cf.test_cf_col1_idx (see tombstone_warn_threshold). 5000 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
After that NullPointer Exception and finally OOM
ERROR [CompactionExecutor:6287] 2016-09-29 22:09:13,546 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:6287,1,main]
java.lang.NullPointerException: null
        at org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.7.0_80]
        at java.lang.Thread.run(Unknown Source) [na:1.7.0_80]
ERROR [CompactionExecutor:9712] 2016-10-01 10:09:13,871 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:9712,1,main]
java.lang.NullPointerException: null
ERROR [CompactionExecutor:10070] 2016-10-01 14:09:14,154 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:10070,1,main]
java.lang.NullPointerException: null
ERROR [CompactionExecutor:10413] 2016-10-01 18:09:14,265 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:10413,1,main]
java.lang.NullPointerException: null
ERROR [MemtableFlushWriter:2396] 2016-10-01 20:28:27,425 CassandraDaemon.java:166 - Exception in thread Thread[MemtableFlushWriter:2396,5,main]
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method) ~[na:1.7.0_80]
        at java.lang.Thread.start(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor.addWorker(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[na:1.7.0_80]
        at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_80]
-- IB
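The tombstone_warn_threshold referenced in the WARN line above is set in cassandra.yaml; on the 2.1 line the relevant knobs and their defaults are:

```yaml
# cassandra.yaml (2.1.x defaults)
tombstone_warn_threshold: 1000       # log a warning past this many tombstones in one slice
tombstone_failure_threshold: 100000  # abort the read beyond this many
```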




   

Re: Way to write to dc1 but keep data only in dc2

Posted by INDRANIL BASU <in...@yahoo.com>.
Hello All,

I am getting the below error repeatedly in the system log of C* 2.1.0

WARN  [SharedPool-Worker-64] 2016-09-27 00:43:35,835 SliceQueryFilter.java:236 - Read 0 live and 1923 tombstoned cells in test_schema.test_cf.test_cf_col1_idx (see tombstone_warn_threshold). 5000 columns was requested, slices=[-], delInfo={deletedAt=-9223372036854775808, localDeletion=2147483647}
After that NullPointer Exception and finally OOM
ERROR [CompactionExecutor:6287] 2016-09-29 22:09:13,546 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:6287,1,main]
java.lang.NullPointerException: null
        at org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:475) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.service.CacheService$KeyCacheSerializer.serialize(CacheService.java:463) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.cache.AutoSavingCache$Writer.saveCache(AutoSavingCache.java:225) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at org.apache.cassandra.db.compaction.CompactionManager$11.run(CompactionManager.java:1061) ~[apache-cassandra-2.1.0.jar:2.1.0]
        at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.FutureTask.run(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.7.0_80]
        at java.lang.Thread.run(Unknown Source) [na:1.7.0_80]
ERROR [CompactionExecutor:9712] 2016-10-01 10:09:13,871 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:9712,1,main]
java.lang.NullPointerException: null
ERROR [CompactionExecutor:10070] 2016-10-01 14:09:14,154 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:10070,1,main]
java.lang.NullPointerException: null
ERROR [CompactionExecutor:10413] 2016-10-01 18:09:14,265 CassandraDaemon.java:166 - Exception in thread Thread[CompactionExecutor:10413,1,main]
java.lang.NullPointerException: null
ERROR [MemtableFlushWriter:2396] 2016-10-01 20:28:27,425 CassandraDaemon.java:166 - Exception in thread Thread[MemtableFlushWriter:2396,5,main]
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method) ~[na:1.7.0_80]
        at java.lang.Thread.start(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor.addWorker(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor.processWorkerExit(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[na:1.7.0_80]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[na:1.7.0_80]
        at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_80]
-- IB



Re: Way to write to dc1 but keep data only in dc2

Posted by Dorian Hoxha <do...@gmail.com>.
Thanks for the explanation Eric.

I would think of it as something like:
The keyspace would be on dc1 + dc2, with the option that no long-term data
is kept in dc1. So you write to dc1 (to the right nodes); they write to the
commit-log/memtable, and once they push the inter-dc replication, dc1
deletes the local data, while dc2 doesn't push data back to dc1 for
replication.
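What's described here amounts to an application-level forwarding queue, which Cassandra doesn't provide natively (per Yabin's reply). A toy sketch of the flow, with all names hypothetical:

```python
from collections import deque

class ForwardingDC:
    """Toy dc1: acknowledge writes locally, then forward to dc2 and delete."""

    def __init__(self, remote):
        self.pending = deque()  # stand-in for dc1's commitlog/memtable
        self.remote = remote    # stand-in for dc2's storage

    def write(self, key, value):
        # Acknowledged as soon as it is durable locally (low latency).
        self.pending.append((key, value))

    def replicate(self):
        # Push everything to dc2, then drop the local copies.
        while self.pending:
            key, value = self.pending.popleft()
            self.remote[key] = value

dc2 = {}
dc1 = ForwardingDC(dc2)
dc1.write("k1", "v1")
dc1.replicate()
print(dc2, len(dc1.pending))  # the data now lives only in dc2
```

The catch Eric raises still applies: until replicate() completes, the only copy lives in dc1, so an acknowledged write can be lost if dc1 dies first.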
