Posted to user@ignite.apache.org by Hyma <hy...@gmail.com> on 2017/11/17 14:31:17 UTC

IgniteInterruptedException: Node is stopping

Hi,

While loading the Ignite cache, we saw the Spark job go into a hung
state at this step. One of the executor tasks has been running for
hours; below are the logs from the executor that hit the failure.
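For reference, the load follows roughly the pattern sketched below. This
is a minimal sketch, not our actual code: the grid/cache names, key and
value types, and batch size are placeholders inferred from the stack
trace further down, which shows putAll being called from inside
RDD.mapPartitions in IgniteDataLoader.loadCompanyData.

    import org.apache.ignite.{IgniteIllegalStateException, Ignition}
    import org.apache.ignite.configuration.IgniteConfiguration
    import org.apache.spark.rdd.RDD
    import scala.collection.JavaConverters._

    def loadCompanyData(rdd: RDD[(String, String)], gridName: String): Unit =
      rdd.foreachPartition { part =>
        // Attach to the embedded server node if an earlier task on this
        // executor already started it; otherwise start one.
        val ignite =
          try Ignition.ignite(gridName)
          catch {
            case _: IgniteIllegalStateException =>
              Ignition.start(new IgniteConfiguration().setGridName(gridName))
          }
        val cache = ignite.getOrCreateCache[String, String]("companyCache")
        // Bulk-load the partition in batches via putAll.
        part.grouped(1000).foreach(batch => cache.putAll(batch.toMap.asJava))
      }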

Stdout log
Launch class org.apache.spark.executor.CoarseGrainedExecutorBackend by
calling
co.cask.cdap.app.runtime.spark.distributed.SparkContainerLauncher.launch
13:12:58.115 [main] INFO  c.c.c.l.a.LogAppenderInitializer - Initializing
log appender KafkaLogAppender
13:12:58.679 [authorization-enforcement-service] INFO 
c.c.c.s.a.AbstractAuthorizationService - Started authorization enforcement
service...
13:12:59.391 [main] INFO  c.c.c.c.g.LocationRuntimeModule - HDFS namespace
is /project/ecpprodcdap
13:12:59.438 [main] INFO  c.c.c.a.r.s.d.SparkContainerLauncher - Launch main
class
org.apache.spark.executor.CoarseGrainedExecutorBackend.main([--driver-url,
spark://CoarseGrainedScheduler@10.214.4.161:33947, --executor-id, 29,
--hostname, c893ach.ecom.bigdata.int.thomsonreuters.com, --cores, 5,
--app-id, application_1506331241975_7951, --user-class-path,
file:/data/7/yarn/nm/usercache/bigdata-app-ecplegalanalytics-svc/appcache/application_1506331241975_7951/container_e28_1506331241975_7951_01_000030/__app__.jar])
13:12:59.501 [main] WARN  c.c.c.i.a.Classes - Cannot patch method
obtainTokenForHiveMetastore in
org.apache.spark.deploy.yarn.YarnSparkHadoopUtil due to non-void return
type: (Lorg/apache/hadoop/conf/Configuration;)Lscala/Option;
13:12:59.501 [main] WARN  c.c.c.i.a.Classes - Cannot patch method
obtainTokenForHBase in org.apache.spark.deploy.yarn.YarnSparkHadoopUtil due
to non-void return type:
(Lorg/apache/hadoop/conf/Configuration;)Lscala/Option;
13:13:26.130 [Executor task launch worker-0] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:26 INFO
dataloader.IgniteDataLoader: Starting the Ignite node on - 10.214.4.161
13:13:26.134 [Executor task launch worker-3] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:26 INFO
dataloader.IgniteDataLoader: Starting the Ignite node on - 10.214.4.161
13:13:26.134 [Executor task launch worker-2] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:26 INFO
dataloader.IgniteDataLoader: Starting the Ignite node on - 10.214.4.161
13:13:26.135 [Executor task launch worker-1] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:26 INFO
dataloader.IgniteDataLoader: Starting the Ignite node on - 10.214.4.161
13:13:26.135 [Executor task launch worker-4] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:26 INFO
dataloader.IgniteDataLoader: Starting the Ignite node on - 10.214.4.161
13:13:26.281 [Executor task launch worker-0] ERROR  - Failed to resolve
default logging config file: config/java.util.logging.properties
13:13:26.283 [Executor task launch worker-0] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - Console logging handler is not
configured.
[13:13:26]    __________  ________________ 
[13:13:26]   /  _/ ___/ |/ /  _/_  __/ __/ 
[13:13:26]  _/ // (7 7    // /  / / / _/   
[13:13:26] /___/\___/_/|_/___/ /_/ /___/  
[13:13:26] 
[13:13:26] ver. 1.8.0#20161205-sha1:9ca40dbe
[13:13:26] 2016 Copyright(C) Apache Software Foundation
[13:13:26] 
[13:13:26] Ignite documentation: http://ignite.apache.org
[13:13:26] 
[13:13:26] Quiet mode.
[13:13:26]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[13:13:26] 
[13:13:26] OS: Linux 3.10.0-514.16.1.el7.x86_64 amd64
[13:13:26] VM information: Java(TM) SE Runtime Environment 1.8.0_121-b13
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.121-b13
[13:13:26] Configured plugins:
[13:13:26]   ^-- None
[13:13:26] 
[13:13:26] Security status [authentication=off, tls/ssl=off]
[13:13:27] Topology snapshot [ver=3, servers=3, clients=0, CPUs=48,
heap=96.0GB]
[13:13:27] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[13:13:27] 
[13:13:27] Ignite node started OK (id=e98b003d,
grid=WCAGridapplication_1506331241975_7951)
[13:13:27] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48,
heap=66.0GB]
13:13:27.660 [Executor task launch worker-0] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:27 INFO
dataloader.IgniteDataLoader: Started the Ignite node on - 10.214.4.161
13:13:27.660 [Executor task launch worker-2] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:27 INFO
dataloader.IgniteDataLoader: Started the Ignite node on - 10.214.4.161
13:13:27.661 [Executor task launch worker-3] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:27 INFO
dataloader.IgniteDataLoader: Started the Ignite node on - 10.214.4.161
13:13:27.661 [Executor task launch worker-1] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:27 INFO
dataloader.IgniteDataLoader: Started the Ignite node on - 10.214.4.161
13:13:27.661 [Executor task launch worker-4] WARN 
o.a.s.e.CoarseGrainedExecutorBackend - 17/11/16 13:13:27 INFO
dataloader.IgniteDataLoader: Started the Ignite node on - 10.214.4.161
[... ~47 further "Starting the Ignite node" / "Started the Ignite node"
message pairs from Executor task launch workers 0-4, logged between
13:13:27.674 and 13:13:27.901 and identical apart from timestamps,
snipped ...]
[13:14:02] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96,
heap=130.0GB]
[13:14:02] Topology snapshot [ver=5, servers=5, clients=0, CPUs=144,
heap=160.0GB]
[13:14:03] Topology snapshot [ver=6, servers=6, clients=0, CPUs=192,
heap=190.0GB]
[13:14:03] Topology snapshot [ver=7, servers=7, clients=0, CPUs=240,
heap=220.0GB]
[13:14:03] Topology snapshot [ver=8, servers=8, clients=0, CPUs=288,
heap=250.0GB]
[13:14:03] Topology snapshot [ver=9, servers=9, clients=0, CPUs=336,
heap=280.0GB]
[13:14:04] Topology snapshot [ver=10, servers=10, clients=0, CPUs=384,
heap=310.0GB]
[13:14:04] Topology snapshot [ver=11, servers=11, clients=0, CPUs=432,
heap=340.0GB]
[13:14:04] Topology snapshot [ver=12, servers=12, clients=0, CPUs=480,
heap=370.0GB]
[13:14:04] Topology snapshot [ver=13, servers=13, clients=0, CPUs=528,
heap=400.0GB]
[13:14:04] Topology snapshot [ver=14, servers=14, clients=0, CPUs=576,
heap=430.0GB]
[13:14:04] Topology snapshot [ver=15, servers=15, clients=0, CPUs=624,
heap=460.0GB]
[13:14:05] Topology snapshot [ver=16, servers=16, clients=0, CPUs=672,
heap=490.0GB]
[13:14:05] Topology snapshot [ver=17, servers=17, clients=0, CPUs=720,
heap=520.0GB]
[13:14:06] Topology snapshot [ver=18, servers=18, clients=0, CPUs=720,
heap=550.0GB]
[13:14:06] Topology snapshot [ver=19, servers=19, clients=0, CPUs=768,
heap=580.0GB]
[13:14:06] Topology snapshot [ver=20, servers=20, clients=0, CPUs=816,
heap=610.0GB]
[13:14:06] Topology snapshot [ver=21, servers=21, clients=0, CPUs=864,
heap=640.0GB]
[13:14:07] Topology snapshot [ver=22, servers=22, clients=0, CPUs=864,
heap=670.0GB]
[13:14:07] Topology snapshot [ver=23, servers=23, clients=0, CPUs=912,
heap=700.0GB]
[13:14:07] Topology snapshot [ver=24, servers=24, clients=0, CPUs=960,
heap=730.0GB]
[13:14:07] Topology snapshot [ver=25, servers=25, clients=0, CPUs=1008,
heap=760.0GB]
[13:14:08] Topology snapshot [ver=26, servers=26, clients=0, CPUs=1056,
heap=790.0GB]
[13:14:08] Topology snapshot [ver=27, servers=27, clients=0, CPUs=1104,
heap=820.0GB]
[13:14:08] Topology snapshot [ver=28, servers=28, clients=0, CPUs=1104,
heap=850.0GB]
[13:14:08] Topology snapshot [ver=29, servers=29, clients=0, CPUs=1152,
heap=880.0GB]
[13:14:09] Topology snapshot [ver=30, servers=30, clients=0, CPUs=1200,
heap=910.0GB]
[13:14:09] Topology snapshot [ver=31, servers=31, clients=0, CPUs=1248,
heap=940.0GB]
[13:14:09] Topology snapshot [ver=32, servers=32, clients=0, CPUs=1248,
heap=970.0GB]
[13:14:09] Topology snapshot [ver=33, servers=33, clients=0, CPUs=1296,
heap=1000.0GB]
[13:14:10] Topology snapshot [ver=34, servers=34, clients=0, CPUs=1344,
heap=1000.0GB]
[13:14:10] Topology snapshot [ver=35, servers=35, clients=0, CPUs=1344,
heap=1100.0GB]
[13:14:10] Topology snapshot [ver=36, servers=36, clients=0, CPUs=1392,
heap=1100.0GB]
[13:14:11] Topology snapshot [ver=37, servers=37, clients=0, CPUs=1440,
heap=1100.0GB]
[13:14:11] Topology snapshot [ver=38, servers=38, clients=0, CPUs=1440,
heap=1100.0GB]
[13:14:12] Topology snapshot [ver=39, servers=39, clients=0, CPUs=1440,
heap=1200.0GB]
[13:14:12] Topology snapshot [ver=40, servers=40, clients=0, CPUs=1488,
heap=1200.0GB]
[13:14:12] Topology snapshot [ver=41, servers=41, clients=0, CPUs=1536,
heap=1200.0GB]
[13:14:16] Topology snapshot [ver=42, servers=42, clients=0, CPUs=1536,
heap=1300.0GB]
[13:14:16] Topology snapshot [ver=43, servers=43, clients=0, CPUs=1584,
heap=1300.0GB]
[13:14:16] Topology snapshot [ver=44, servers=44, clients=0, CPUs=1632,
heap=1300.0GB]
[13:14:16] Topology snapshot [ver=45, servers=45, clients=0, CPUs=1632,
heap=1400.0GB]
[13:14:17] Topology snapshot [ver=46, servers=46, clients=0, CPUs=1680,
heap=1400.0GB]
[13:14:17] Topology snapshot [ver=47, servers=47, clients=0, CPUs=1680,
heap=1400.0GB]
[13:14:17] Topology snapshot [ver=48, servers=48, clients=0, CPUs=1680,
heap=1400.0GB]
[13:14:17] Topology snapshot [ver=49, servers=49, clients=0, CPUs=1680,
heap=1500.0GB]
[13:14:18] Topology snapshot [ver=50, servers=50, clients=0, CPUs=1728,
heap=1500.0GB]
[13:14:18] Topology snapshot [ver=51, servers=51, clients=0, CPUs=1728,
heap=1500.0GB]
[13:14:18] Topology snapshot [ver=52, servers=52, clients=0, CPUs=1776,
heap=1600.0GB]
[13:14:19] Topology snapshot [ver=53, servers=53, clients=0, CPUs=1776,
heap=1600.0GB]
[13:14:19] Topology snapshot [ver=54, servers=54, clients=0, CPUs=1824,
heap=1600.0GB]
[13:14:19] Topology snapshot [ver=55, servers=55, clients=0, CPUs=1872,
heap=1700.0GB]
[13:14:19] Topology snapshot [ver=56, servers=56, clients=0, CPUs=1872,
heap=1700.0GB]
[13:14:20] Topology snapshot [ver=57, servers=57, clients=0, CPUs=1920,
heap=1700.0GB]
[13:14:20] Topology snapshot [ver=58, servers=58, clients=0, CPUs=1968,
heap=1700.0GB]
[13:14:20] Topology snapshot [ver=59, servers=59, clients=0, CPUs=1968,
heap=1800.0GB]
[13:14:21] Topology snapshot [ver=60, servers=60, clients=0, CPUs=1968,
heap=1800.0GB]
[13:14:21] Topology snapshot [ver=61, servers=61, clients=0, CPUs=2016,
heap=1800.0GB]
[13:14:21] Topology snapshot [ver=62, servers=62, clients=0, CPUs=2064,
heap=1900.0GB]
[13:14:21] Topology snapshot [ver=63, servers=63, clients=0, CPUs=2064,
heap=1900.0GB]
13:16:03.292 [driver-heartbeater] WARN  o.a.s.r.n.NettyRpcEndpointRef -
Error sending message [message =
Heartbeat(29,[Lscala.Tuple2;@57dd5d40,BlockManagerId(29,
c893ach.ecom.bigdata.int.thomsonreuters.com, 35301))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10
seconds]. This timeout is controlled by spark.executor.heartbeatInterval
	at
org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:491)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:520)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:520)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:520)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1818)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:520)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_121]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[na:1.8.0_121]
	at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[na:1.8.0_121]
	at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[na:1.8.0_121]
	at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_121]
	at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: java.util.concurrent.TimeoutException: Futures timed out after
[10 seconds]
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at scala.concurrent.Await$.result(package.scala:107)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	... 14 common frames omitted
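Side note on the heartbeat errors: the RpcTimeoutException above says the
timeout is controlled by spark.executor.heartbeatInterval. Raising it only
quiets these warnings while the executor is blocked; it does not fix the
underlying hang. Illustrative values only (spark.network.timeout must stay
larger than the heartbeat interval):

    import org.apache.spark.SparkConf

    // Illustrative values only.
    val conf = new SparkConf()
      .set("spark.executor.heartbeatInterval", "60s")
      .set("spark.network.timeout", "600s")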
13:16:05.145 [shuffle-client-0] WARN  o.a.s.n.c.TransportResponseHandler -
Ignoring response for RPC 6032743701250525400 from
c893ach.ecom.bigdata.int.thomsonreuters.com/10.214.4.161:33947 (81 bytes)
since it is not outstanding
13:20:31.373 [zk-client-EventThread] WARN  o.a.t.d.ZKDiscoveryService - ZK
Session expired:
c519xzf.ecom.bigdata.int.thomsonreuters.com:2181,c570ntw.ecom.bigdata.int.thomsonreuters.com:2181,c482btu.ecom.bigdata.int.thomsonreuters.com:2181/ecpprodcdap/discoverable
[13:20:55] Topology snapshot [ver=64, servers=62, clients=0, CPUs=2064,
heap=1900.0GB]
[13:21:07] Topology snapshot [ver=65, servers=61, clients=0, CPUs=2064,
heap=1800.0GB]
13:21:11.914 [Executor task launch worker-3] ERROR o.a.s.e.Executor -
Exception in task 18.0 in stage 4.0 (TID 308)
javax.cache.CacheException: class
org.apache.ignite.IgniteInterruptedException: Node is stopping:
WCAGridapplication_1506331241975_7951
	at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1440)
~[program.expanded.jar/:na]
	at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.cacheException(IgniteCacheProxy.java:2183)
~[program.expanded.jar/:na]
	at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.putAll(IgniteCacheProxy.java:1430)
~[program.expanded.jar/:na]
	at
un.api.dataloader.IgniteDataLoader$$anonfun$loadCompanyData$13.apply(IgniteDataLoader.scala:207)
~[program.expanded.jar/:na]
	at
un.api.dataloader.IgniteDataLoader$$anonfun$loadCompanyData$13.apply(IgniteDataLoader.scala:205)
~[program.expanded.jar/:na]
	at
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_121]
	at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: org.apache.ignite.IgniteInterruptedException: Node is stopping:
WCAGridapplication_1506331241975_7951
	at
org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:766)
~[program.expanded.jar/:na]
	at
org.apache.ignite.internal.util.IgniteUtils$3.apply(IgniteUtils.java:764)
~[program.expanded.jar/:na]
	... 16 common frames omitted
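The "Node is stopping" above means the putAll landed on a local node that
was already shutting down (the topology snapshots right after this show
the grid shrinking from 62 servers down to 1). One hedged, defensive
option, assuming a putAll loop like the one in the trace, is to detect the
stopped node and rethrow so Spark retries the task on a healthy executor
instead of hanging; safePutAll is a hypothetical helper, and gridName
would be the "WCAGrid..." name from the log:

    import javax.cache.CacheException
    import org.apache.ignite.{IgniteCache, IgniteState, Ignition}

    def safePutAll(cache: IgniteCache[String, String],
                   batch: java.util.Map[String, String],
                   gridName: String): Unit =
      try cache.putAll(batch)
      catch {
        case e: CacheException if Ignition.state(gridName) != IgniteState.STARTED =>
          // The local node left the topology; fail fast so Spark
          // reschedules the task instead of the job hanging.
          throw new RuntimeException(s"Ignite node '$gridName' is stopping", e)
      }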
[13:21:12] Topology snapshot [ver=66, servers=60, clients=0, CPUs=2064,
heap=1800.0GB]
[13:21:14] Topology snapshot [ver=67, servers=59, clients=0, CPUs=2016,
heap=1800.0GB]
[13:21:14] Topology snapshot [ver=68, servers=58, clients=0, CPUs=2016,
heap=1700.0GB]
[13:21:14] Topology snapshot [ver=69, servers=57, clients=0, CPUs=1968,
heap=1700.0GB]
[13:21:14] Topology snapshot [ver=70, servers=56, clients=0, CPUs=1968,
heap=1700.0GB]
[13:21:14] Topology snapshot [ver=71, servers=55, clients=0, CPUs=1968,
heap=1700.0GB]
[13:21:17] Topology snapshot [ver=72, servers=54, clients=0, CPUs=1968,
heap=1600.0GB]
[13:21:17] Topology snapshot [ver=73, servers=53, clients=0, CPUs=1968,
heap=1600.0GB]
[13:21:17] Topology snapshot [ver=74, servers=52, clients=0, CPUs=1968,
heap=1600.0GB]
[13:21:17] Topology snapshot [ver=75, servers=51, clients=0, CPUs=1968,
heap=1500.0GB]
[13:21:17] Topology snapshot [ver=76, servers=50, clients=0, CPUs=1968,
heap=1500.0GB]
[13:21:17] Topology snapshot [ver=77, servers=49, clients=0, CPUs=1968,
heap=1500.0GB]
[13:21:18] Topology snapshot [ver=78, servers=48, clients=0, CPUs=1968,
heap=1400.0GB]
[13:21:19] Topology snapshot [ver=79, servers=47, clients=0, CPUs=1920,
heap=1400.0GB]
[13:21:19] Topology snapshot [ver=80, servers=46, clients=0, CPUs=1872,
heap=1400.0GB]
[13:21:19] Topology snapshot [ver=81, servers=45, clients=0, CPUs=1824,
heap=1400.0GB]
[13:21:19] Topology snapshot [ver=82, servers=44, clients=0, CPUs=1776,
heap=1300.0GB]
[13:21:19] Topology snapshot [ver=83, servers=43, clients=0, CPUs=1776,
heap=1300.0GB]
[13:21:19] Topology snapshot [ver=84, servers=42, clients=0, CPUs=1728,
heap=1300.0GB]
[13:21:19] Topology snapshot [ver=85, servers=41, clients=0, CPUs=1680,
heap=1200.0GB]
[13:21:19] Topology snapshot [ver=86, servers=40, clients=0, CPUs=1632,
heap=1200.0GB]
[13:21:19] Topology snapshot [ver=87, servers=39, clients=0, CPUs=1632,
heap=1200.0GB]
[13:21:19] Topology snapshot [ver=88, servers=38, clients=0, CPUs=1584,
heap=1100.0GB]
[13:21:19] Topology snapshot [ver=89, servers=37, clients=0, CPUs=1536,
heap=1100.0GB]
[13:21:19] Topology snapshot [ver=90, servers=36, clients=0, CPUs=1488,
heap=1100.0GB]
[13:21:19] Topology snapshot [ver=91, servers=35, clients=0, CPUs=1440,
heap=1100.0GB]
[13:21:20] Topology snapshot [ver=92, servers=34, clients=0, CPUs=1440,
heap=1000.0GB]
[13:21:20] Topology snapshot [ver=93, servers=33, clients=0, CPUs=1440,
heap=990.0GB]
[13:21:20] Topology snapshot [ver=94, servers=32, clients=0, CPUs=1392,
heap=960.0GB]
[13:21:21] Topology snapshot [ver=95, servers=31, clients=0, CPUs=1344,
heap=930.0GB]
[13:21:22] Topology snapshot [ver=96, servers=30, clients=0, CPUs=1344,
heap=900.0GB]
[13:21:22] Topology snapshot [ver=97, servers=29, clients=0, CPUs=1296,
heap=870.0GB]
[13:21:22] Topology snapshot [ver=98, servers=28, clients=0, CPUs=1248,
heap=840.0GB]
[13:21:22] Topology snapshot [ver=99, servers=27, clients=0, CPUs=1200,
heap=810.0GB]
[13:21:22] Topology snapshot [ver=100, servers=26, clients=0, CPUs=1152,
heap=780.0GB]
[13:21:22] Topology snapshot [ver=101, servers=25, clients=0, CPUs=1104,
heap=750.0GB]
[13:21:22] Topology snapshot [ver=102, servers=24, clients=0, CPUs=1056,
heap=720.0GB]
[13:21:23] Topology snapshot [ver=103, servers=23, clients=0, CPUs=1008,
heap=690.0GB]
[13:21:23] Topology snapshot [ver=104, servers=22, clients=0, CPUs=960,
heap=660.0GB]
[13:21:23] Topology snapshot [ver=105, servers=21, clients=0, CPUs=912,
heap=630.0GB]
[13:21:23] Topology snapshot [ver=106, servers=20, clients=0, CPUs=864,
heap=600.0GB]
[13:21:23] Topology snapshot [ver=107, servers=19, clients=0, CPUs=816,
heap=570.0GB]
[13:21:23] Topology snapshot [ver=108, servers=18, clients=0, CPUs=768,
heap=540.0GB]
[13:21:23] Topology snapshot [ver=109, servers=17, clients=0, CPUs=720,
heap=510.0GB]
[13:21:23] Topology snapshot [ver=110, servers=16, clients=0, CPUs=672,
heap=480.0GB]
[13:21:23] Topology snapshot [ver=111, servers=15, clients=0, CPUs=624,
heap=450.0GB]
[13:21:23] Topology snapshot [ver=112, servers=14, clients=0, CPUs=576,
heap=420.0GB]
[13:21:23] Topology snapshot [ver=113, servers=13, clients=0, CPUs=528,
heap=390.0GB]
[13:21:25] Topology snapshot [ver=114, servers=12, clients=0, CPUs=480,
heap=360.0GB]
[13:21:25] Topology snapshot [ver=115, servers=11, clients=0, CPUs=432,
heap=330.0GB]
[13:21:25] Topology snapshot [ver=116, servers=10, clients=0, CPUs=432,
heap=300.0GB]
[13:21:25] Topology snapshot [ver=117, servers=9, clients=0, CPUs=384,
heap=270.0GB]
[13:21:25] Topology snapshot [ver=118, servers=8, clients=0, CPUs=336,
heap=240.0GB]
[13:21:25] Topology snapshot [ver=119, servers=7, clients=0, CPUs=288,
heap=210.0GB]
[13:21:25] Topology snapshot [ver=120, servers=6, clients=0, CPUs=288,
heap=180.0GB]
[13:21:25] Topology snapshot [ver=121, servers=5, clients=0, CPUs=240,
heap=150.0GB]
[13:21:25] Topology snapshot [ver=122, servers=4, clients=0, CPUs=192,
heap=120.0GB]
[13:21:25] Topology snapshot [ver=123, servers=3, clients=0, CPUs=144,
heap=90.0GB]
[13:21:25] Topology snapshot [ver=124, servers=2, clients=0, CPUs=96,
heap=60.0GB]
[13:21:26] Topology snapshot [ver=125, servers=1, clients=0, CPUs=48,
heap=30.0GB]
[13:21:36] Ignite node stopped OK
[name=WCAGridapplication_1506331241975_7951, uptime=00:08:09:192]
[13:21:45]    __________  ________________ 
[13:21:45]   /  _/ ___/ |/ /  _/_  __/ __/ 
[13:21:45]  _/ // (7 7    // /  / / / _/   
[13:21:45] /___/\___/_/|_/___/ /_/ /___/  
[13:21:45] 
[13:21:45] ver. 1.8.0#20161205-sha1:9ca40dbe
[13:21:45] 2016 Copyright(C) Apache Software Foundation
[13:21:45] 
[13:21:45] Ignite documentation: http://ignite.apache.org
[13:21:45] 
[13:21:45] Quiet mode.
[13:21:45]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[13:21:45] 
[13:21:45] OS: Linux 3.10.0-514.16.1.el7.x86_64 amd64
[13:21:45] VM information: Java(TM) SE Runtime Environment 1.8.0_121-b13
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.121-b13
[13:21:46] Configured plugins:
[13:21:46]   ^-- None
[13:21:46] 
[13:21:47] Security status [authentication=off, tls/ssl=off]
13:30:13.022 [driver-heartbeater] WARN  o.a.s.r.n.NettyRpcEndpointRef -
Error sending message [message =
Heartbeat(29,[Lscala.Tuple2;@151ddb70,BlockManagerId(29,
c893ach.ecom.bigdata.int.thomsonreuters.com, 35301))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10
seconds]. This timeout is controlled by spark.executor.heartbeatInterval
[... stack trace identical to the 13:16:03.292 RpcTimeoutException above,
snipped ...]
13:30:16.071 [shuffle-client-0] WARN  o.a.s.n.c.TransportResponseHandler -
Ignoring response for RPC 8904876257727937012 from
c893ach.ecom.bigdata.int.thomsonreuters.com/10.214.4.161:33947 (81 bytes)
since it is not outstanding
13:43:03.022 [driver-heartbeater] WARN  o.a.s.r.n.NettyRpcEndpointRef -
Error sending message [message =
Heartbeat(29,[Lscala.Tuple2;@5c8cc3b4,BlockManagerId(29,
c893ach.ecom.bigdata.int.thomsonreuters.com, 35301))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10
seconds]. This timeout is controlled by spark.executor.heartbeatInterval
[... stack trace identical to the 13:16:03.292 RpcTimeoutException above,
snipped ...]
13:43:08.960 [shuffle-client-0] WARN  o.a.s.n.c.TransportResponseHandler -
Ignoring response for RPC 8645866707163739584 from
c893ach.ecom.bigdata.int.thomsonreuters.com/10.214.4.161:33947 (81 bytes)
since it is not outstanding
14:13:03.023 [driver-heartbeater] WARN  o.a.s.r.n.NettyRpcEndpointRef -
Error sending message [message =
Heartbeat(29,[Lscala.Tuple2;@7d0e3c98,BlockManagerId(29,
c893ach.ecom.bigdata.int.thomsonreuters.com, 35301))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10
seconds]. This timeout is controlled by spark.executor.heartbeatInterval
[... stack trace identical to the 13:16:03.292 RpcTimeoutException above,
snipped ...]
14:13:05.100 [shuffle-client-0] WARN  o.a.s.n.c.TransportResponseHandler -
Ignoring response for RPC 6998673354740181981 from
c893ach.ecom.bigdata.int.thomsonreuters.com/10.214.4.161:33947 (81 bytes)
since it is not outstanding
14:46:56.835 [zk-client-EventThread] WARN  o.a.t.d.ZKDiscoveryService - ZK
Session expired:
c519xzf.ecom.bigdata.int.thomsonreuters.com:2181,c570ntw.ecom.bigdata.int.thomsonreuters.com:2181,c482btu.ecom.bigdata.int.thomsonreuters.com:2181/ecpprodcdap/discoverable
[14:47:20] Topology snapshot [ver=66, servers=62, clients=0, CPUs=2064,
heap=1900.0GB]
[14:47:27] Topology snapshot [ver=67, servers=61, clients=0, CPUs=2064,
heap=1800.0GB]
[14:47:27] Topology snapshot [ver=68, servers=60, clients=0, CPUs=2064,
heap=1800.0GB]
[14:47:27] Topology snapshot [ver=69, servers=59, clients=0, CPUs=2016,
heap=1800.0GB]
[14:47:27] Topology snapshot [ver=70, servers=58, clients=0, CPUs=2016,
heap=1700.0GB]
[14:47:27] Topology snapshot [ver=71, servers=57, clients=0, CPUs=1968,
heap=1700.0GB]
[14:47:27] Topology snapshot [ver=72, servers=56, clients=0, CPUs=1968,
heap=1700.0GB]
[14:47:27] Topology snapshot [ver=73, servers=55, clients=0, CPUs=1968,
heap=1700.0GB]
[14:47:27] Topology snapshot [ver=74, servers=54, clients=0, CPUs=1968,
heap=1600.0GB]
[14:47:27] Topology snapshot [ver=75, servers=53, clients=0, CPUs=1968,
heap=1600.0GB]
[14:47:27] Topology snapshot [ver=76, servers=52, clients=0, CPUs=1968,
heap=1600.0GB]
[14:47:27] Topology snapshot [ver=77, servers=51, clients=0, CPUs=1968,
heap=1500.0GB]
[14:47:27] Topology snapshot [ver=78, servers=50, clients=0, CPUs=1968,
heap=1500.0GB]
[14:47:27] Topology snapshot [ver=79, servers=49, clients=0, CPUs=1968,
heap=1500.0GB]
[14:47:27] Topology snapshot [ver=80, servers=48, clients=0, CPUs=1968,
heap=1400.0GB]
[14:47:27] Topology snapshot [ver=81, servers=47, clients=0, CPUs=1920,
heap=1400.0GB]
[14:47:27] Topology snapshot [ver=82, servers=46, clients=0, CPUs=1872,
heap=1400.0GB]
[14:47:27] Topology snapshot [ver=83, servers=45, clients=0, CPUs=1824,
heap=1400.0GB]
[14:47:27] Topology snapshot [ver=84, servers=44, clients=0, CPUs=1776,
heap=1300.0GB]
[14:47:27] Topology snapshot [ver=85, servers=43, clients=0, CPUs=1776,
heap=1300.0GB]
[14:47:27] Topology snapshot [ver=86, servers=42, clients=0, CPUs=1728,
heap=1300.0GB]
[14:47:27] Topology snapshot [ver=87, servers=41, clients=0, CPUs=1680,
heap=1200.0GB]
[14:47:27] Topology snapshot [ver=88, servers=40, clients=0, CPUs=1632,
heap=1200.0GB]
[14:47:27] Topology snapshot [ver=89, servers=39, clients=0, CPUs=1632,
heap=1200.0GB]
[14:47:27] Topology snapshot [ver=90, servers=38, clients=0, CPUs=1584,
heap=1100.0GB]
[14:47:27] Topology snapshot [ver=91, servers=37, clients=0, CPUs=1536,
heap=1100.0GB]
[14:47:27] Topology snapshot [ver=92, servers=36, clients=0, CPUs=1488,
heap=1100.0GB]
[14:47:27] Topology snapshot [ver=93, servers=35, clients=0, CPUs=1440,
heap=1100.0GB]
[14:47:27] Topology snapshot [ver=94, servers=34, clients=0, CPUs=1440,
heap=1000.0GB]
[14:47:27] Topology snapshot [ver=95, servers=33, clients=0, CPUs=1440,
heap=990.0GB]
[14:47:27] Topology snapshot [ver=96, servers=32, clients=0, CPUs=1392,
heap=960.0GB]
[14:47:27] Topology snapshot [ver=97, servers=31, clients=0, CPUs=1344,
heap=930.0GB]
[14:47:27] Topology snapshot [ver=98, servers=30, clients=0, CPUs=1344,
heap=900.0GB]
[14:47:27] Topology snapshot [ver=99, servers=29, clients=0, CPUs=1296,
heap=870.0GB]
[14:47:27] Topology snapshot [ver=100, servers=28, clients=0, CPUs=1248,
heap=840.0GB]
[14:47:27] Topology snapshot [ver=101, servers=27, clients=0, CPUs=1200,
heap=810.0GB]
[14:47:27] Topology snapshot [ver=102, servers=26, clients=0, CPUs=1152,
heap=780.0GB]
[14:47:27] Topology snapshot [ver=103, servers=25, clients=0, CPUs=1104,
heap=750.0GB]
[14:47:27] Topology snapshot [ver=104, servers=24, clients=0, CPUs=1056,
heap=720.0GB]
[14:47:27] Topology snapshot [ver=105, servers=23, clients=0, CPUs=1008,
heap=690.0GB]
[14:47:27] Topology snapshot [ver=106, servers=22, clients=0, CPUs=960,
heap=660.0GB]
[14:47:27] Topology snapshot [ver=107, servers=21, clients=0, CPUs=912,
heap=630.0GB]
[14:47:27] Topology snapshot [ver=108, servers=20, clients=0, CPUs=864,
heap=600.0GB]
[14:47:27] Topology snapshot [ver=109, servers=19, clients=0, CPUs=816,
heap=570.0GB]
[14:47:27] Topology snapshot [ver=110, servers=18, clients=0, CPUs=768,
heap=540.0GB]
[14:47:27] Topology snapshot [ver=111, servers=17, clients=0, CPUs=720,
heap=510.0GB]
[14:47:27] Topology snapshot [ver=112, servers=16, clients=0, CPUs=672,
heap=480.0GB]
[14:47:27] Topology snapshot [ver=113, servers=15, clients=0, CPUs=624,
heap=450.0GB]
[14:47:27] Topology snapshot [ver=114, servers=14, clients=0, CPUs=576,
heap=420.0GB]
[14:47:27] Topology snapshot [ver=115, servers=13, clients=0, CPUs=528,
heap=390.0GB]
[14:47:27] Topology snapshot [ver=116, servers=12, clients=0, CPUs=480,
heap=360.0GB]
[14:47:27] Topology snapshot [ver=117, servers=11, clients=0, CPUs=432,
heap=330.0GB]
[14:47:27] Topology snapshot [ver=118, servers=10, clients=0, CPUs=432,
heap=300.0GB]
[14:47:27] Topology snapshot [ver=119, servers=9, clients=0, CPUs=384,
heap=270.0GB]
[14:47:27] Topology snapshot [ver=120, servers=8, clients=0, CPUs=336,
heap=240.0GB]
[14:47:27] Topology snapshot [ver=121, servers=7, clients=0, CPUs=288,
heap=210.0GB]
[14:47:27] Topology snapshot [ver=122, servers=6, clients=0, CPUs=288,
heap=180.0GB]
[14:47:28] Topology snapshot [ver=123, servers=5, clients=0, CPUs=240,
heap=150.0GB]
[14:47:28] Topology snapshot [ver=124, servers=4, clients=0, CPUs=192,
heap=120.0GB]
[14:47:28] Topology snapshot [ver=125, servers=3, clients=0, CPUs=144,
heap=90.0GB]
[14:47:28] Topology snapshot [ver=126, servers=2, clients=0, CPUs=96,
heap=60.0GB]
[14:47:28] Topology snapshot [ver=127, servers=1, clients=0, CPUs=48,
heap=30.0GB]
14:50:23.042 [zk-client-EventThread] WARN  o.a.t.d.ZKDiscoveryService - ZK
Session expired:
c519xzf.ecom.bigdata.int.thomsonreuters.com:2181,c570ntw.ecom.bigdata.int.thomsonreuters.com:2181,c482btu.ecom.bigdata.int.thomsonreuters.com:2181/ecpprodcdap/discoverable
14:57:51.026 [zk-client-EventThread] WARN  o.a.t.d.ZKDiscoveryService - ZK
Session expired:
c519xzf.ecom.bigdata.int.thomsonreuters.com:2181,c570ntw.ecom.bigdata.int.thomsonreuters.com:2181,c482btu.ecom.bigdata.int.thomsonreuters.com:2181/ecpprodcdap/discoverable
16:43:03.022 [driver-heartbeater] WARN  o.a.s.r.n.NettyRpcEndpointRef -
Error sending message [message =
Heartbeat(29,[Lscala.Tuple2;@221555a,BlockManagerId(29,
c893ach.ecom.bigdata.int.thomsonreuters.com, 35301))] in 1 attempts
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10
seconds]. This timeout is controlled by spark.executor.heartbeatInterval
	at
org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:76)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.executor.Executor.org$apache$spark$executor$Executor$$reportHeartBeat(Executor.scala:491)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply$mcV$sp(Executor.scala:520)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:520)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
org.apache.spark.executor.Executor$$anon$1$$anonfun$run$1.apply(Executor.scala:520)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1818)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.executor.Executor$$anon$1.run(Executor.scala:520)
[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
[na:1.8.0_121]
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
[na:1.8.0_121]
	at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
[na:1.8.0_121]
	at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
[na:1.8.0_121]
	at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[na:1.8.0_121]
	at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
Caused by: java.util.concurrent.TimeoutException: Futures timed out after
[10 seconds]
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at
scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at scala.concurrent.Await$.result(package.scala:107)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
~[spark-assembly-1.6.0-cdh5.10.2-hadoop2.6.0-cdh5.10.2.jar:na]
	... 14 common frames omitted
16:43:03.445 [shuffle-client-0] WARN  o.a.s.n.c.TransportResponseHandler -
Ignoring response for RPC 8335836454112163975 from
c893ach.ecom.bigdata.int.thomsonreuters.com/10.214.4.161:33947 (81 bytes)
since it is not outstanding
18:11:17.912 [zk-client-EventThread] WARN  o.a.t.d.ZKDiscoveryService - ZK
Session expired:
c519xzf.ecom.bigdata.int.thomsonreuters.com:2181,c570ntw.ecom.bigdata.int.thomsonreuters.com:2181,c482btu.ecom.bigdata.int.thomsonreuters.com:2181/ecpprodcdap/discoverable
18:12:02.467 [zk-client-EventThread] WARN  o.a.t.d.ZKDiscoveryService - ZK
Session expired:
c519xzf.ecom.bigdata.int.thomsonreuters.com:2181,c570ntw.ecom.bigdata.int.thomsonreuters.com:2181,c482btu.ecom.bigdata.int.thomsonreuters.com:2181/ecpprodcdap/discoverable

Stderr log
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/data/9/yarn/nm/usercache/bigdata-app-ecplegalanalytics-svc/filecache/1480/cdap-spark.jar/lib/ch.qos.logback.logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/hadoop/cloudera/parcels/CDH-5.10.2-1.cdh5.10.2.p0.5/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type
[ch.qos.logback.classic.util.ContextSelectorStaticBinder]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:file:/data/9/yarn/nm/usercache/bigdata-app-ecplegalanalytics-svc/filecache/1480/cdap-spark.jar/lib/ch.qos.logback.logback-classic-1.0.9.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in
[jar:file:/hadoop/cloudera/parcels/CDH-5.10.2-1.cdh5.10.2.p0.5/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
explanation.
SLF4J: Actual binding is of type
[ch.qos.logback.classic.util.ContextSelectorStaticBinder]
17/11/16 13:12:58 INFO utils.VerifiableProperties: Verifying properties
17/11/16 13:12:58 INFO utils.VerifiableProperties: Property
key.serializer.class is overridden to kafka.serializer.StringEncoder
17/11/16 13:12:58 WARN utils.VerifiableProperties: Property
log.publish.num.partitions is not valid
17/11/16 13:12:58 INFO utils.VerifiableProperties: Property
metadata.broker.list is overridden to
kafka-cpsprodeage.int.thomsonreuters.com:9092
17/11/16 13:12:58 INFO utils.VerifiableProperties: Property
partitioner.class is overridden to
co.cask.cdap.logging.appender.kafka.StringPartitioner
17/11/16 13:12:58 INFO utils.VerifiableProperties: Property producer.type is
overridden to sync
17/11/16 13:12:58 INFO utils.VerifiableProperties: Property
queue.buffering.max.ms is overridden to 1000
17/11/16 13:12:58 INFO utils.VerifiableProperties: Property
request.required.acks is overridden to 1
17/11/16 13:12:58 INFO utils.VerifiableProperties: Property serializer.class
is overridden to kafka.serializer.DefaultEncoder
17/11/16 13:13:26 INFO dataloader.IgniteDataLoader: Starting the Ignite node
on - 10.214.4.161
17/11/16 13:13:26 INFO dataloader.IgniteDataLoader: Starting the Ignite node
on - 10.214.4.161
17/11/16 13:13:26 INFO dataloader.IgniteDataLoader: Starting the Ignite node
on - 10.214.4.161
17/11/16 13:13:26 INFO dataloader.IgniteDataLoader: Starting the Ignite node
on - 10.214.4.161
17/11/16 13:13:26 INFO dataloader.IgniteDataLoader: Starting the Ignite node
on - 10.214.4.161
17/11/16 13:13:26 INFO client.ClientUtils$: Fetching metadata from broker
id:0,host:kafka-cpsprodeage.int.thomsonreuters.com,port:9092 with
correlation id 0 for 1 topic(s) Set(ecpprodcdap.logs.user-v2)
17/11/16 13:13:26 INFO producer.SyncProducer: Connected to
kafka-cpsprodeage.int.thomsonreuters.com:9092 for producing
17/11/16 13:13:26 INFO producer.SyncProducer: Disconnecting from
kafka-cpsprodeage.int.thomsonreuters.com:9092
17/11/16 13:13:26 INFO producer.SyncProducer: Connected to
c307senkecppr.int.thomsonreuters.com:9092 for producing
Console logging handler is not configured.
17/11/16 13:13:27 INFO dataloader.IgniteDataLoader: Started the Ignite node
on - 10.214.4.161
17/11/16 13:13:27 INFO dataloader.IgniteDataLoader: Started the Ignite node
on - 10.214.4.161
17/11/16 13:13:27 INFO dataloader.IgniteDataLoader: Started the Ignite node
on - 10.214.4.161
17/11/16 13:13:27 INFO dataloader.IgniteDataLoader: Started the Ignite node
on - 10.214.4.161
17/11/16 13:13:27 INFO dataloader.IgniteDataLoader: Started the Ignite node
on - 10.214.4.161
17/11/16 13:13:27 INFO dataloader.IgniteDataLoader: Starting the Ignite node
on - 10.214.4.161
17/11/16 13:13:27 INFO dataloader.IgniteDataLoader: Started the Ignite node
on - 10.214.4.161
[... the same "Starting the Ignite node" / "Started the Ignite node"
messages for 10.214.4.161 repeat about 90 more times, all at 13:13:27 ...]
17/11/16 13:30:13 INFO client.ClientUtils$: Fetching metadata from broker
id:0,host:kafka-cpsprodeage.int.thomsonreuters.com,port:9092 with
correlation id 178 for 1 topic(s) Set(ecpprodcdap.logs.user-v2)
17/11/16 13:30:13 INFO producer.SyncProducer: Connected to
kafka-cpsprodeage.int.thomsonreuters.com:9092 for producing
17/11/16 13:30:13 INFO producer.SyncProducer: Disconnecting from
kafka-cpsprodeage.int.thomsonreuters.com:9092
17/11/16 13:30:13 INFO producer.SyncProducer: Disconnecting from
c593drtkecppr.int.thomsonreuters.com:9092
17/11/16 13:30:13 INFO producer.SyncProducer: Disconnecting from
c481bmckecppr.int.thomsonreuters.com:9092
17/11/16 13:30:13 INFO producer.SyncProducer: Disconnecting from
c307senkecppr.int.thomsonreuters.com:9092
17/11/16 13:30:13 INFO producer.SyncProducer: Connected to
c307senkecppr.int.thomsonreuters.com:9092 for producing

I see an RpcTimeoutException before the Ignite node is stopped. Does this
mean the executor is being lost from the Spark cluster, which in turn causes
the Ignite node to be dropped from the Ignite cluster? Do you have any other
findings from the log?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: IgniteInterruptedException: Node is stopping

Posted by Denis Mekhanikov <dm...@gmail.com>.
Hi Hyma!

It looks like you encountered a classic deadlock. It happens because you
put values into the cache in arbitrary order. This line causes the problem:
*companyDao.nameCache.putAll(kvs)*

When multiple threads try to acquire the same locks in different orders,
the operations end up waiting for each other. To avoid this, sort the data
by key before calling *putAll* on it, for example by using a TreeMap, which
keeps its keys in sorted order. I'm not sure how to do it in Scala, sorry.
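
A minimal sketch of one way to do that in Scala, assuming the kvs map and
nameCache from the snippet quoted below, and keys with a natural ordering
such as the String wcaId (the sortedPutAll helper name is illustrative, not
part of the original code):

    import java.util.{TreeMap => JTreeMap}
    import org.apache.ignite.IgniteCache

    // Copying an arbitrary-order map into a TreeMap makes putAll visit the
    // keys in their natural (sorted) order, so every thread acquires the
    // per-key locks in the same order and the deadlock described above
    // cannot occur.
    def sortedPutAll[K <: Comparable[K], V](cache: IgniteCache[K, V],
                                            kvs: java.util.Map[K, V]): Unit = {
      val sorted = new JTreeMap[K, V](kvs)
      cache.putAll(sorted)
    }

    // Usage: sortedPutAll(companyDao.nameCache, kvs)
    // instead of companyDao.nameCache.putAll(kvs)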

Let me know if it helps.

Denis

Thu, 23 Nov 2017 at 21:14, Hyma <hy...@gmail.com>:

> Below is the corresponding code for the Ignite step that was hung.
>
> logInfo("Populating the canonical name Cache on Ignite Nodes")
>     val time = System.currentTimeMillis()
>     companyVORDD.mapPartitions(x => {
>       val kvs = x.map(comp => (comp.wcaId, comp)).toMap[String, CompanyVO].asJava
>       companyDao.nameCache.putAll(kvs)
>       x
>     }).count()
>
> And for your information, most of the time we don't see any issues with
> this; the hung state I mentioned above happens only sometimes.
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

Re: IgniteInterruptedException: Node is stopping

Posted by Hyma <hy...@gmail.com>.
Below is the corresponding code for the Ignite step that was hung.

logInfo("Populating the canonical name Cache on Ignite Nodes")
    val time = System.currentTimeMillis()
    companyVORDD.mapPartitions(x => {
      val kvs = x.map(comp => (comp.wcaId, comp)).toMap[String, CompanyVO].asJava
      companyDao.nameCache.putAll(kvs)
      x
    }).count()

And for your information, most of the time we don't see any issues with
this; the hung state I mentioned above happens only sometimes.

--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: IgniteInterruptedException: Node is stopping

Posted by Michael Cherkasov <mi...@gmail.com>.
Hi Hyma,

Could you please show the code snippet where it hangs?

Thanks,
Mike.

2017-11-22 12:48 GMT+03:00 Hyma <hy...@gmail.com>:

> Thanks Mikhail.
>
> I suspect I should increase the Spark heartbeat/network timeouts. But my
> question here is: if an executor is lost, the corresponding Ignite node
> also drops out of the cluster. In that case, Ignite takes care of
> rebalancing across the other active nodes, right? My Spark job was not
> killed; it keeps running until I terminate it. Instead, the job gets hung
> at the Ignite cache load step for hours.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>

Re: IgniteInterruptedException: Node is stopping

Posted by Hyma <hy...@gmail.com>.
Thanks Mikhail.

I suspect I should increase the Spark heartbeat/network timeouts. But my
question here is: if an executor is lost, the corresponding Ignite node also
drops out of the cluster. In that case, Ignite takes care of rebalancing
across the other active nodes, right? My Spark job was not killed; it keeps
running until I terminate it. Instead, the job gets hung at the Ignite
cache load step for hours.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: IgniteInterruptedException: Node is stopping

Posted by Mikhail <mi...@gmail.com>.
Hi Hyma,

It looks like your job takes too much time; you hit some timeout and Spark
killed your jobs. I don't see any other errors or warnings in your logs, so
it's very likely that you need to increase some timeout in Spark.
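
For example, a minimal sketch of raising the two timeouts implicated in the
stack trace earlier in this thread (the values are illustrative, not
recommendations; spark.network.timeout should stay larger than
spark.executor.heartbeatInterval):

    import org.apache.spark.SparkConf

    // The heartbeat timeout defaults to 10s, which matches the
    // RpcTimeoutException above; the network timeout covers other RPCs.
    val conf = new SparkConf()
      .set("spark.executor.heartbeatInterval", "60s")
      .set("spark.network.timeout", "600s")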

thanks,
Mike.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/