Posted to user@ignite.apache.org by xmw45688 <xw...@procurant.com> on 2020/09/15 23:59:08 UTC

Re: Getting NullPointerException during commit into Cassandra, after reconnecting to Ignite server

I got this exception with Ignite 2.8.0 when the node is restarted and data
is inserted into the cache and the Cassandra store. Please note:
1) My setup has Native Persistence enabled, with just one server node and no
client node.
2) There is no issue on the very first start (i.e. when the native storage has
not been created yet).
3) Ignite can be restarted without errors if I delete all data from the Native
Storage folder.

So basically, the Ignite server is not able to update/insert data once the
storage directory already exists. Any help is appreciated.
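
For reference, the setup is roughly like the minimal Java sketch below. It is
illustrative rather than my exact code: the key/value types and the
"persistence-settings.xml" file are placeholders, while the instance name,
data region sizes, cache name and Cassandra contact point match the log
further down.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory;
import org.apache.ignite.cache.store.cassandra.datasource.DataSource;
import org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.springframework.core.io.ClassPathResource;

public class ServerWithCassandraStoreSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration().setIgniteInstanceName("MyCluster");

        // Native persistence on the default data region (10 MiB / 100 MiB, as in the startup log).
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration()
            .setName("Default_Region")
            .setInitialSize(10L * 1024 * 1024)
            .setMaxSize(100L * 1024 * 1024)
            .setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        // Cassandra cache store: local Cassandra node, persistence settings
        // loaded from a placeholder XML file on the classpath.
        DataSource cassandraDataSource = new DataSource();
        cassandraDataSource.setContactPoints("127.0.0.1");

        CassandraCacheStoreFactory<Long, Object> storeFactory = new CassandraCacheStoreFactory<>();
        storeFactory.setDataSource(cassandraDataSource);
        storeFactory.setPersistenceSettings(
            new KeyValuePersistenceSettings(new ClassPathResource("persistence-settings.xml")));

        // One of the transactional, write-through caches (DimStore, DimProduct, Fact, ...).
        CacheConfiguration<Long, Object> dimStoreCfg = new CacheConfiguration<Long, Object>("DimStore")
            .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
            .setReadThrough(true)
            .setWriteThrough(true)
            .setCacheStoreFactory(storeFactory);
        cfg.setCacheConfiguration(dimStoreCfg);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // redundant once baseline auto-activation kicks in
    }
}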

[16:32:47] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0, super=AbstractFailureHandler
[ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[16:32:48] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[16:32:48] Security status [authentication=off, tls/ssl=off]
[16:32:50] Both Ignite native persistence and CacheStore are configured for
cache 'FactLine'. This configuration does not guarantee strict consistency
between CacheStore and Ignite data storage upon restarts. Consult
documentation for more details.
[16:32:50] Both Ignite native persistence and CacheStore are configured for
cache 'InvoiceLine'. This configuration does not guarantee strict
consistency between CacheStore and Ignite data storage upon restarts.
Consult documentation for more details.
[16:32:50] Both Ignite native persistence and CacheStore are configured for
cache 'DimProduct'. This configuration does not guarantee strict consistency
between CacheStore and Ignite data storage upon restarts. Consult
documentation for more details.
[16:32:50] Both Ignite native persistence and CacheStore are configured for
cache 'Fact'. This configuration does not guarantee strict consistency
between CacheStore and Ignite data storage upon restarts. Consult
documentation for more details.
[16:32:50] Both Ignite native persistence and CacheStore are configured for
cache 'DimStore'. This configuration does not guarantee strict consistency
between CacheStore and Ignite data storage upon restarts. Consult
documentation for more details.
[16:33:09] Performance suggestions for grid 'MyCluster' (fix if possible)
[16:33:09] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[16:33:09]   ^-- Enable ATOMIC mode if not using transactions (set
'atomicityMode' to ATOMIC)
[16:33:09]   ^-- Enable write-behind to persistent store (set
'writeBehindEnabled' to true)
[16:33:09]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[16:33:09]   ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]' to
JVM options)
[16:33:09]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[16:33:09]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[16:33:09] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[16:33:09] 
[16:33:09] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[16:33:09] Data Regions Configured:
[16:33:09]   ^-- Default_Region [initSize=10.0 MiB, maxSize=100.0 MiB,
persistence=true, lazyMemoryAllocation=true]
[16:33:09] 
[16:33:09] Ignite node started OK (id=53454c70, instance name=MyCluster)
[16:33:09] Topology snapshot [ver=1, locNode=53454c70, servers=1, clients=0,
state=INACTIVE, CPUs=8, offheap=0.1GB, heap=7.1GB]
[16:33:09]   ^-- Baseline [id=0, size=1, online=1, offline=0]
[16:33:09]   ^-- All baseline nodes are online, will start auto-activation

>>> ******* Start...


>>> ******* start populateDimStore...

16:33:09.755 [main] INFO com.datastax.driver.core.GuavaCompatibility -
Detected Guava >= 19 in the classpath, using modern compatibility layer
16:33:09.757 [main] DEBUG com.datastax.driver.core.SystemProperties -
com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value
1
16:33:09.759 [main] DEBUG com.datastax.driver.core.SystemProperties -
com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default
value 60
16:33:09.771 [main] INFO com.datastax.driver.core.SystemProperties -
com.datastax.driver.USE_NATIVE_CLOCK is defined, using value false
16:33:09.771 [main] INFO com.datastax.driver.core.ClockFactory - Using
java.lang.System clock to generate timestamps.
16:33:09.774 [main] DEBUG com.datastax.driver.core.SystemProperties -
com.datastax.driver.NON_BLOCKING_EXECUTOR_SIZE is undefined, using default
value 8
16:33:09.818 [main] DEBUG com.datastax.driver.core.Cluster - Starting new
cluster with contact points [127.0.0.1:9042]
16:33:09.830 [main] DEBUG
io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the
default logging framework
16:33:09.840 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap -
-Dio.netty.threadLocalMap.stringBuilder.initialSize: 1024
16:33:09.840 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap -
-Dio.netty.threadLocalMap.stringBuilder.maxSize: 4096
16:33:09.853 [main] DEBUG io.netty.util.internal.PlatformDependent -
Platform: Windows
16:33:09.855 [main] DEBUG io.netty.util.internal.PlatformDependent0 -
-Dio.netty.noUnsafe: false
16:33:09.855 [main] DEBUG io.netty.util.internal.PlatformDependent0 - Java
version: 8
16:33:09.855 [main] DEBUG io.netty.util.internal.PlatformDependent0 -
sun.misc.Unsafe.theUnsafe: available
16:33:09.856 [main] DEBUG io.netty.util.internal.PlatformDependent0 -
sun.misc.Unsafe.copyMemory: available
16:33:09.856 [main] DEBUG io.netty.util.internal.PlatformDependent0 -
java.nio.Buffer.address: available
16:33:09.856 [main] DEBUG io.netty.util.internal.PlatformDependent0 - direct
buffer constructor: available
16:33:09.857 [main] DEBUG io.netty.util.internal.PlatformDependent0 -
java.nio.Bits.unaligned: available, true
16:33:09.857 [main] DEBUG io.netty.util.internal.PlatformDependent0 -
jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable prior
to Java9
16:33:09.857 [main] DEBUG io.netty.util.internal.PlatformDependent0 -
java.nio.DirectByteBuffer.<init>(long, int): available
16:33:09.857 [main] DEBUG io.netty.util.internal.PlatformDependent -
sun.misc.Unsafe: available
16:33:09.857 [main] DEBUG io.netty.util.internal.PlatformDependent -
-Dio.netty.tmpdir: C:\Users\XINMIN~1\AppData\Local\Temp (java.io.tmpdir)
16:33:09.857 [main] DEBUG io.netty.util.internal.PlatformDependent -
-Dio.netty.bitMode: 64 (sun.arch.data.model)
16:33:09.858 [main] DEBUG io.netty.util.internal.PlatformDependent -
-Dio.netty.maxDirectMemory: 7598505984 bytes
16:33:09.858 [main] DEBUG io.netty.util.internal.PlatformDependent -
-Dio.netty.uninitializedArrayAllocationThreshold: -1
16:33:09.859 [main] DEBUG io.netty.util.internal.CleanerJava6 -
java.nio.ByteBuffer.cleaner(): available
16:33:09.859 [main] DEBUG io.netty.util.internal.PlatformDependent -
-Dio.netty.noPreferDirect: false
16:33:09.861 [main] DEBUG com.datastax.driver.core.SystemProperties -
com.datastax.driver.FORCE_NIO is undefined, using default value false
16:33:09.862 [main] INFO com.datastax.driver.core.NettyUtil - Did not find
Netty's native epoll transport in the classpath, defaulting to NIO.
16:33:09.865 [main] DEBUG io.netty.channel.MultithreadEventLoopGroup -
-Dio.netty.eventLoopThreads: 16
16:33:09.872 [main] DEBUG io.netty.channel.nio.NioEventLoop -
-Dio.netty.noKeySetOptimization: false
16:33:09.873 [main] DEBUG io.netty.channel.nio.NioEventLoop -
-Dio.netty.selectorAutoRebuildThreshold: 512
16:33:09.879 [main] DEBUG io.netty.util.internal.PlatformDependent -
org.jctools-core.MpscChunkedArrayQueue: available
16:33:09.895 [main] DEBUG io.netty.util.ResourceLeakDetector -
-Dio.netty.leakDetection.level: simple
16:33:09.895 [main] DEBUG io.netty.util.ResourceLeakDetector -
-Dio.netty.leakDetection.targetRecords: 4
16:33:09.898 [main] DEBUG io.netty.util.ResourceLeakDetectorFactory - Loaded
default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@95eb320
16:33:09.902 [main] DEBUG com.datastax.driver.core.SystemProperties -
com.datastax.driver.EXTENDED_PEER_CHECK is undefined, using default value
true
16:33:09.955 [main] DEBUG com.datastax.driver.core.Host.STATES -
[127.0.0.1:9042] preparing to open 1 new connections, total = 1
16:33:09.958 [main] DEBUG com.datastax.driver.core.SystemProperties -
com.datastax.driver.DISABLE_COALESCING is undefined, using default value
false
16:33:09.982 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.numHeapArenas: 16
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.numDirectArenas: 16
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.pageSize: 8192
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.maxOrder: 11
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.chunkSize: 16777216
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.smallCacheSize: 256
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.normalCacheSize: 64
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.maxCachedBufferCapacity: 32768
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.cacheTrimInterval: 8192
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.cacheTrimIntervalMillis: 0
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.useCacheForAllThreads: true
16:33:09.983 [main] DEBUG io.netty.buffer.PooledByteBufAllocator -
-Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
16:33:10.000 [main] DEBUG io.netty.channel.DefaultChannelId -
-Dio.netty.processId: 16172 (auto-detected)
16:33:10.002 [main] DEBUG io.netty.util.NetUtil -
-Djava.net.preferIPv4Stack: false
16:33:10.002 [main] DEBUG io.netty.util.NetUtil -
-Djava.net.preferIPv6Addresses: false
16:33:10.225 [main] DEBUG io.netty.util.NetUtil - Loopback interface: lo
(Software Loopback Interface 1, 127.0.0.1)
16:33:10.225 [main] DEBUG io.netty.util.NetUtil - Failed to get SOMAXCONN
from sysctl and file \proc\sys\net\core\somaxconn. Default: 200
16:33:10.457 [main] DEBUG io.netty.channel.DefaultChannelId -
-Dio.netty.machineId: 00:ff:1f:ff:fe:aa:da:94 (auto-detected)
16:33:10.473 [main] DEBUG io.netty.buffer.ByteBufUtil -
-Dio.netty.allocator.type: pooled
16:33:10.473 [main] DEBUG io.netty.buffer.ByteBufUtil -
-Dio.netty.threadLocalDirectBufferSize: 0
16:33:10.473 [main] DEBUG io.netty.buffer.ByteBufUtil -
-Dio.netty.maxThreadLocalCharBufferSize: 16384
16:33:10.497 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] Connection established, initializing transport
16:33:10.521 [cluster1-nio-worker-0] DEBUG io.netty.util.Recycler -
-Dio.netty.recycler.maxCapacityPerThread: 4096
16:33:10.521 [cluster1-nio-worker-0] DEBUG io.netty.util.Recycler -
-Dio.netty.recycler.maxSharedCapacityFactor: 2
16:33:10.521 [cluster1-nio-worker-0] DEBUG io.netty.util.Recycler -
-Dio.netty.recycler.linkCapacity: 16
16:33:10.521 [cluster1-nio-worker-0] DEBUG io.netty.util.Recycler -
-Dio.netty.recycler.ratio: 8
16:33:10.521 [cluster1-nio-worker-0] DEBUG io.netty.util.Recycler -
-Dio.netty.recycler.delayedQueue.ratio: 8
16:33:10.527 [cluster1-nio-worker-0] DEBUG io.netty.buffer.AbstractByteBuf -
-Dio.netty.buffer.checkAccessible: true
16:33:10.527 [cluster1-nio-worker-0] DEBUG io.netty.buffer.AbstractByteBuf -
-Dio.netty.buffer.checkBounds: true
16:33:10.527 [cluster1-nio-worker-0] DEBUG
io.netty.util.ResourceLeakDetectorFactory - Loaded default
ResourceLeakDetector: io.netty.util.ResourceLeakDetector@847df0e
16:33:10.542 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.SystemProperties -
com.datastax.driver.NATIVE_TRANSPORT_MAX_FRAME_SIZE_IN_MB is undefined,
using default value 256
16:33:10.543 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Host.STATES - [127.0.0.1:9042]
Connection[127.0.0.1:9042-1, inFlight=0, closed=false] Transport
initialized, connection ready
16:33:10.544 [main] DEBUG com.datastax.driver.core.ControlConnection -
[Control connection] Refreshing node list and token map
16:33:10.597 [main] DEBUG com.datastax.driver.core.ControlConnection -
[Control connection] Refreshing schema
16:33:10.806 [main] DEBUG com.datastax.driver.core.Host.STATES - [Control
connection] established to 127.0.0.1:9042
16:33:10.807 [main] INFO
com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using
data-center name 'datacenter1' for DCAwareRoundRobinPolicy (if this is
incorrect, please provide the correct datacenter name with
DCAwareRoundRobinPolicy constructor)
16:33:10.808 [main] INFO com.datastax.driver.core.Cluster - New Cassandra
host 127.0.0.1:9042 added
16:33:10.811 [main] DEBUG com.datastax.driver.core.SystemProperties -
com.datastax.driver.CHECK_IO_DEADLOCKS is undefined, using default value
true
16:33:10.814 [main] DEBUG com.datastax.driver.core.Host.STATES -
[127.0.0.1:9042] preparing to open 1 new connections, total = 2
16:33:10.820 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] Connection established, initializing transport
16:33:10.831 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Host.STATES - [127.0.0.1:9042]
Connection[127.0.0.1:9042-2, inFlight=0, closed=false] Transport
initialized, connection ready
16:33:10.833 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.HostConnectionPool - Created connection pool to
host 127.0.0.1:9042 (1 connections needed, 1 successfully opened)
16:33:10.833 [cluster1-nio-worker-1] DEBUG com.datastax.driver.core.Session
- Added connection pool for 127.0.0.1:9042
[16:33:10,855][SEVERE][main][CassandraCacheStore] Failed to apply 1
mutations performed withing Ignite transaction into Cassandra
class org.apache.ignite.IgniteException: Failed to apply 1 mutations
performed withing Ignite transaction into Cassandra
	at
org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:516)
	at
org.apache.ignite.cache.store.cassandra.CassandraCacheStore.sessionEnd(CassandraCacheStore.java:172)
	at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.sessionEnd(GridCacheStoreManagerAdapter.java:800)
	at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.sessionEnd(IgniteTxAdapter.java:1410)
	at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.batchStoreCommit(IgniteTxAdapter.java:1591)
	at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:592)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.localFinish(GridNearTxLocal.java:3850)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:440)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:390)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$25.apply(GridNearTxLocal.java:4129)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$25.apply(GridNearTxLocal.java:4118)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:4118)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.optimisticPutFuture(GridNearTxLocal.java:3012)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:738)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:493)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2596)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2594)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4332)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2594)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2575)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2552)
	at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1302)
	at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:856)
	at
com.procurant.test.partition.FactPartitionNativePersistenceCassandraToCacheTester.populateStore(FactPartitionNativePersistenceCassandraToCacheTester.java:120)
	at
com.procurant.test.partition.FactPartitionNativePersistenceCassandraToCacheTester.main(FactPartitionNativePersistenceCassandraToCacheTester.java:91)
Caused by: java.lang.NullPointerException
	at
org.apache.ignite.cache.store.cassandra.persistence.PojoField.getValueFromObject(PojoField.java:167)
	at
org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.bindValues(PersistenceController.java:448)
	at
org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.bindKeyValue(PersistenceController.java:203)
	at
org.apache.ignite.cache.store.cassandra.session.transaction.WriteMutation.bindStatement(WriteMutation.java:58)
	at
org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:479)
	... 26 more
Exception in thread "main" javax.cache.CacheException: class
org.apache.ignite.transactions.TransactionRollbackException: Transaction has
been rolled back: 4771d149471-00000000-0c9e-2cf6-0000-000000000001
	at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1317)
	at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:2069)
	at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1305)
	at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:856)
	at
com.procurant.test.partition.FactPartitionNativePersistenceCassandraToCacheTester.populateStore(FactPartitionNativePersistenceCassandraToCacheTester.java:120)
	at
com.procurant.test.partition.FactPartitionNativePersistenceCassandraToCacheTester.main(FactPartitionNativePersistenceCassandraToCacheTester.java:91)
Caused by: class
org.apache.ignite.transactions.TransactionRollbackException: Transaction has
been rolled back: 4771d149471-00000000-0c9e-2cf6-0000-000000000001
	at
org.apache.ignite.internal.util.IgniteUtils$11.apply(IgniteUtils.java:943)
	at
org.apache.ignite.internal.util.IgniteUtils$11.apply(IgniteUtils.java:941)
	... 6 more
Caused by: class
org.apache.ignite.internal.transactions.IgniteTxRollbackCheckedException:
Transaction has been rolled back:
4771d149471-00000000-0c9e-2cf6-0000-000000000001
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4351)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2594)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2575)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2552)
	at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1302)
	... 3 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to apply 1
mutations performed withing Ignite transaction into Cassandra
	at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7507)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.java:260)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:172)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$18.applyx(GridNearTxLocal.java:3017)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$18.applyx(GridNearTxLocal.java:3013)
	at
org.apache.ignite.internal.util.lang.IgniteClosureX.apply(IgniteClosureX.java:38)
	at
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
	at
org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:70)
	at
org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:30)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture.<init>(GridFutureAdapter.java:588)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.chain(GridFutureAdapter.java:361)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.optimisticPutFuture(GridNearTxLocal.java:3012)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:738)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:493)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2596)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2594)
	at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4332)
	... 7 more
Caused by: class org.apache.ignite.IgniteException: Failed to apply 1
mutations performed withing Ignite transaction into Cassandra
	at
org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:532)
	at
org.apache.ignite.cache.store.cassandra.CassandraCacheStore.sessionEnd(CassandraCacheStore.java:172)
	at
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.sessionEnd(GridCacheStoreManagerAdapter.java:800)
	at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.sessionEnd(IgniteTxAdapter.java:1410)
	at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.batchStoreCommit(IgniteTxAdapter.java:1591)
	at
org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:592)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.localFinish(GridNearTxLocal.java:3850)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:440)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:390)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$25.apply(GridNearTxLocal.java:4129)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$25.apply(GridNearTxLocal.java:4118)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
	at
org.apache.ignite.internal.util.future.GridFutureAdapter.listen(GridFutureAdapter.java:354)
	at
org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.commitNearTxLocalAsync(GridNearTxLocal.java:4118)
	... 13 more
Caused by: class org.apache.ignite.IgniteException: Failed to apply 1
mutations performed withing Ignite transaction into Cassandra
	at
org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:516)
	... 26 more
Caused by: java.lang.NullPointerException
	at
org.apache.ignite.cache.store.cassandra.persistence.PojoField.getValueFromObject(PojoField.java:167)
	at
org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.bindValues(PersistenceController.java:448)
	at
org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.bindKeyValue(PersistenceController.java:203)
	at
org.apache.ignite.cache.store.cassandra.session.transaction.WriteMutation.bindStatement(WriteMutation.java:58)
	at
org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:479)
	... 26 more
16:33:40.658 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:33:40.660 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] heartbeat query succeeded
16:33:40.860 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:33:40.861 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] heartbeat query succeeded
16:34:10.677 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:34:10.678 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] heartbeat query succeeded
16:34:10.864 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:34:10.865 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] heartbeat query succeeded
16:34:40.679 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:34:40.680 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] heartbeat query succeeded
16:34:40.881 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:34:40.882 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] heartbeat query succeeded
16:35:10.697 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:35:10.699 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] heartbeat query succeeded
16:35:10.883 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:35:10.884 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] heartbeat query succeeded
16:35:40.713 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:35:40.714 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] heartbeat query succeeded
16:35:40.885 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:35:40.886 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] heartbeat query succeeded
16:36:10.727 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:36:10.729 [cluster1-nio-worker-0] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-1,
inFlight=0, closed=false] heartbeat query succeeded
16:36:10.899 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] was inactive for 30 seconds, sending heartbeat
16:36:10.900 [cluster1-nio-worker-1] DEBUG
com.datastax.driver.core.Connection - Connection[127.0.0.1:9042-2,
inFlight=0, closed=false] heartbeat query succeeded




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Getting NullPointerException during commit into Cassandra, after reconnecting to Ignite server

Posted by Ilya Kasnacheev <il...@gmail.com>.
Hello!

As far as my understanding goes, the workaround is as follows (see the sketch
after this list):

- Declare all Cassandra-backed caches on the client nodes along with their
cache store.
- Set clientReconnectDisabled to true.
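
In Java terms that is roughly the sketch below. It is illustrative only: I am
assuming the clientReconnectDisabled flag on TcpDiscoverySpi, a placeholder
persistence-settings.xml and placeholder key/value types; adapt it to your
actual cache store factory and POJO classes.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory;
import org.apache.ignite.cache.store.cassandra.datasource.DataSource;
import org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.springframework.core.io.ClassPathResource;

public class CassandraBackedClientSketch {
    public static void main(String[] args) {
        // Client node with reconnect disabled, so a dropped client fails
        // instead of silently reconnecting (related to IGNITE-8788).
        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
        discoverySpi.setClientReconnectDisabled(true);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setClientMode(true)
            .setDiscoverySpi(discoverySpi);

        // Declare the Cassandra-backed cache together with its cache store on the client, too.
        DataSource cassandraDataSource = new DataSource();
        cassandraDataSource.setContactPoints("127.0.0.1");

        CassandraCacheStoreFactory<Long, Object> storeFactory = new CassandraCacheStoreFactory<>();
        storeFactory.setDataSource(cassandraDataSource);
        storeFactory.setPersistenceSettings(
            new KeyValuePersistenceSettings(new ClassPathResource("persistence-settings.xml")));

        CacheConfiguration<Long, Object> dimStoreCfg = new CacheConfiguration<Long, Object>("DimStore")
            .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
            .setReadThrough(true)
            .setWriteThrough(true)
            .setCacheStoreFactory(storeFactory);
        cfg.setCacheConfiguration(dimStoreCfg);

        // Puts done from this client now go through the locally declared cache store.
        Ignite client = Ignition.start(cfg);
        System.out.println("Client node started: " + client.cluster().localNode().isClient());
    }
}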

You can also track https://issues.apache.org/jira/browse/IGNITE-8788

Regards,
-- 
Ilya Kasnacheev

