Posted to dev@samza.apache.org by Yi Pan <ni...@gmail.com> on 2017/01/13 23:44:00 UTC
Re: How to gracefully stop samza job
Hi, Qi,
Sorry for the late reply. I am curious about your comment that the close and stop
methods are not called. When a user initiates a kill request, the graceful
shutdown sequence is triggered by the shutdown hook added in
SamzaContainer. The shutdown sequence in the code is the following:
{code}
info("Shutting down.")
shutdownConsumers
shutdownTask
shutdownStores
shutdownDiskSpaceMonitor
shutdownHostStatisticsMonitor
shutdownProducers
shutdownLocalityManager
shutdownOffsetManager
shutdownMetrics
shutdownSecurityManger
info("Shutdown complete.")
{code}
in which MessageChooser.stop() is invoked in shutdownConsumers, and
SystemProducer.close() is invoked in shutdownProducers.
Could you explain why you are not able to shut down your Samza job gracefully?
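To make the ordering above concrete, here is a minimal, illustrative Java sketch. The class and method names below are hypothetical stand-ins, not the actual SamzaContainer implementation (which is Scala); it only shows the pattern: the ordered shutdown steps are registered as a JVM shutdown hook, which runs on a normal kill (SIGTERM) but is bypassed by kill -9 (SIGKILL).

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an ordered shutdown sequence; the step names
// mirror the SamzaContainer code quoted above, but this class is
// illustrative only, not Samza's implementation.
public class ShutdownSequenceSketch {
    static final List<String> shutdownLog = new ArrayList<>();

    static void step(String name) {
        // In the real container each step stops a subsystem; here we
        // just record the order in which the steps run.
        shutdownLog.add(name);
    }

    public static void runShutdown() {
        step("shutdownConsumers");  // MessageChooser.stop() happens here
        step("shutdownTask");
        step("shutdownStores");
        step("shutdownProducers");  // SystemProducer.close() happens here
        step("shutdownOffsetManager");
    }

    public static void main(String[] args) {
        // Registering the sequence as a shutdown hook is what lets a
        // graceful kill (SIGTERM) trigger it; SIGKILL bypasses hooks,
        // so close()/stop() would never run in that case.
        Runtime.getRuntime()
               .addShutdownHook(new Thread(ShutdownSequenceSketch::runShutdown));
        System.out.println("container running");
    }
}
```

This also explains the symptom in the original question: if the job is terminated with SIGKILL rather than a regular kill, the JVM never runs its shutdown hooks, so none of the steps above execute.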
Thanks!
-Yi
On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <sh...@eefung.com> wrote:
> Hi Guys,
>
> How can I stop a running samza job gracefully, other than killing it?
>
> When a samza job is killed, the close and stop methods in
> BaseMessageChooser and SystemProducer are not called, and the container
> log is removed automatically. How can I resolve this?
>
> Thanks.
>
> ————————
> ShuQi
Re: How to gracefully stop samza job
Posted by 舒琦 <sh...@eefung.com>.
LogType:directory.info
LogLength:24674
Log Contents:
ls -l:
total 24
-rw-r--r-- 1 yarn hadoop 110 Jan 16 15:09 container_tokens
-rwx------ 1 yarn hadoop 672 Jan 16 15:09 default_container_executor_session.sh
-rwx------ 1 yarn hadoop 726 Jan 16 15:09 default_container_executor.sh
-rwx------ 1 yarn hadoop 3234 Jan 16 15:09 launch_container.sh
lrwxrwxrwx 1 yarn hadoop 139 Jan 16 15:09 __package -> /data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/filecache/10/canal-persistent-hstore-1.0-SNAPSHOT-dist.tar.gz
drwx--x--- 2 yarn hadoop 4096 Jan 16 15:09 tmp
find -L . -maxdepth 5 -ls:
178258385 4 drwx--x--- 3 yarn hadoop 4096 Jan 16 15:09 .
178258392 4 -rw-r--r-- 1 yarn hadoop 16 Jan 16 15:09 ./.default_container_executor_session.sh.crc
178258393 4 -rwx------ 1 yarn hadoop 726 Jan 16 15:09 ./default_container_executor.sh
178258390 4 -rw-r--r-- 1 yarn hadoop 36 Jan 16 15:09 ./.launch_container.sh.crc
178258166 4 drwx------ 6 yarn hadoop 4096 Jan 16 15:09 ./__package
178258371 4 drwx------ 2 yarn hadoop 4096 Jan 16 15:09 ./__package/config
178258372 4 -r-x------ 1 yarn hadoop 1982 Jan 16 15:08 ./__package/config/canal-status-persistent-hstore.properties
178258382 4 drwxr-xr-x 4 yarn hadoop 4096 Jan 16 15:09 ./__package/tmp
178258383 4 drwxr-xr-x 2 yarn hadoop 4096 Jan 16 15:09 ./__package/tmp/scalate-15151179506416753-workdir
178258384 4 drwxr-xr-x 2 yarn hadoop 4096 Jan 16 15:09 ./__package/tmp/scalate-7573075638914433300-workdir
178258167 4 drwx------ 2 yarn hadoop 4096 Jan 16 15:09 ./__package/bin
178258169 4 -r-x------ 1 yarn hadoop 1029 Sep 27 04:46 ./__package/bin/validate-yarn-job.sh
178258172 4 -r-x------ 1 yarn hadoop 1034 Sep 27 04:46 ./__package/bin/list-yarn-job.sh
178258171 4 -r-x------ 1 yarn hadoop 1041 Sep 27 04:46 ./__package/bin/stat-yarn-job.sh
178258173 4 -r-x------ 1 yarn hadoop 1205 Sep 27 04:46 ./__package/bin/run-jc.sh
178258176 4 -r-x------ 1 yarn hadoop 1585 Sep 27 04:46 ./__package/bin/kill-yarn-job-by-name.sh
178258181 4 -r-x------ 1 yarn hadoop 1015 Sep 27 04:46 ./__package/bin/checkpoint-tool.sh
178258170 4 -r-x------ 1 yarn hadoop 1003 Sep 27 04:46 ./__package/bin/run-job.sh
178258177 4 -r-x------ 1 yarn hadoop 1019 Sep 27 04:46 ./__package/bin/read-rocksdb-tool.sh
178258168 4 -r-x------ 1 yarn hadoop 1032 Sep 27 04:46 ./__package/bin/run-coordinator-stream-writer.sh
178258179 4 -r-x------ 1 yarn hadoop 1024 Sep 27 04:46 ./__package/bin/run-config-manager.sh
178258183 4 -r-x------ 1 yarn hadoop 1340 Sep 27 04:46 ./__package/bin/log4j-console.xml
178258180 4 -r-x------ 1 yarn hadoop 1150 Sep 27 04:46 ./__package/bin/kill-all.sh
178258182 4 -r-x------ 1 yarn hadoop 1014 Sep 27 04:46 ./__package/bin/state-storage-tool.sh
178258175 4 -r-x------ 1 yarn hadoop 1429 Sep 27 04:46 ./__package/bin/run-container.sh
178258174 4 -r-x------ 1 yarn hadoop 1039 Sep 27 04:46 ./__package/bin/kill-yarn-job.sh
178258178 8 -r-x------ 1 yarn hadoop 4865 Sep 27 04:46 ./__package/bin/run-class.sh
178258184 12 drwx------ 2 yarn hadoop 12288 Jan 16 15:09 ./__package/lib
178258268 2856 -r-x------ 1 yarn hadoop 2924029 Nov 23 20:08 ./__package/lib/commons-1.0.3.jar
178258303 4 -r-x------ 1 yarn hadoop 2497 Nov 23 20:08 ./__package/lib/javax.inject-1.jar
178258225 292 -r-x------ 1 yarn hadoop 298829 Nov 23 20:08 ./__package/lib/commons-configuration-1.6.jar
178258324 1444 -r-x------ 1 yarn hadoop 1475955 Nov 23 20:08 ./__package/lib/htrace-core-3.1.0-incubating.jar
178258196 2040 -r-x------ 1 yarn hadoop 2086587 Nov 23 20:08 ./__package/lib/antfact-avro-1.0.1.jar
178258212 148 -r-x------ 1 yarn hadoop 147952 Nov 23 20:08 ./__package/lib/jersey-json-1.9.jar
178258364 3900 -r-x------ 1 yarn hadoop 3991269 Nov 23 20:08 ./__package/lib/kafka_2.10-0.8.2.1.jar
178258336 196 -r-x------ 1 yarn hadoop 200387 Nov 23 20:08 ./__package/lib/javax.servlet-3.0.0.v201112011016.jar
178258328 64 -r-x------ 1 yarn hadoop 64154 Nov 23 20:08 ./__package/lib/samza-api-0.11.0.jar
178258216 24 -r-x------ 1 yarn hadoop 23346 Nov 23 20:08 ./__package/lib/stax-api-1.0-2.jar
178258334 88 -r-x------ 1 yarn hadoop 89854 Nov 23 20:08 ./__package/lib/jetty-security-8.1.8.v20121106.jar
178258300 1992 -r-x------ 1 yarn hadoop 2039143 Nov 23 20:08 ./__package/lib/hadoop-yarn-api-2.7.3.jar
178258333 96 -r-x------ 1 yarn hadoop 97228 Nov 23 20:08 ./__package/lib/jetty-servlet-8.1.8.v20121106.jar
178258202 40 -r-x------ 1 yarn hadoop 40863 Nov 23 20:08 ./__package/lib/hadoop-annotations-2.7.3.jar
178258266 84 -r-x------ 1 yarn hadoop 82123 Nov 23 20:08 ./__package/lib/metrics-core-2.2.0.jar
178258222 528 -r-x------ 1 yarn hadoop 539735 Nov 23 20:08 ./__package/lib/jets3t-0.9.0.jar
178258302 696 -r-x------ 1 yarn hadoop 710492 Nov 23 20:08 ./__package/lib/guice-3.0.jar
178258329 1256 -r-x------ 1 yarn hadoop 1282850 Nov 23 20:08 ./__package/lib/samza-core_2.10-0.11.0.jar
178258200 1564 -r-x------ 1 yarn hadoop 1599627 Dec 23 11:53 ./__package/lib/commons-math3-3.1.1.jar
178258307 128 -r-x------ 1 yarn hadoop 130458 Nov 23 20:08 ./__package/lib/jersey-client-1.9.jar
178258367 1740 -r-x------ 1 yarn hadoop 1779991 Nov 23 20:08 ./__package/lib/netty-all-4.0.23.Final.jar
178258229 524 -r-x------ 1 yarn hadoop 533455 Nov 23 20:08 ./__package/lib/protobuf-java-2.5.0.jar
178258260 2384 -r-x------ 1 yarn hadoop 2438880 Nov 23 20:08 ./__package/lib/kafka_2.8.0-0.8.0.jar
178258201 3400 -r-x------ 1 yarn hadoop 3479293 Nov 23 20:08 ./__package/lib/hadoop-common-2.7.3.jar
178258243 36 -r-x------ 1 yarn hadoop 33031 Nov 23 20:08 ./__package/lib/jsr305-3.0.0.jar
178258217 64 -r-x------ 1 yarn hadoop 62983 Nov 23 20:08 ./__package/lib/activation-1.1.jar
178258337 24 -r-x------ 1 yarn hadoop 21138 Nov 23 20:08 ./__package/lib/jetty-continuation-8.1.8.v20121106.jar
178258257 424 -r-x------ 1 yarn hadoop 433368 Nov 23 20:08 ./__package/lib/httpclient-4.2.5.jar
178258339 104 -r-x------ 1 yarn hadoop 103293 Nov 23 20:08 ./__package/lib/jetty-io-8.1.8.v20121106.jar
178258370 4 -r-x------ 1 yarn hadoop 748 Sep 23 10:11 ./__package/lib/log4j.xml
178258338 96 -r-x------ 1 yarn hadoop 94481 Nov 23 20:08 ./__package/lib/jetty-http-8.1.8.v20121106.jar
178258353 184 -r-x------ 1 yarn hadoop 185676 Nov 23 20:08 ./__package/lib/config-1.0.0.jar
178258242 268 -r-x------ 1 yarn hadoop 270342 Nov 23 20:08 ./__package/lib/curator-recipes-2.7.1.jar
178258247 60 -r-x------ 1 yarn hadoop 58160 Nov 23 20:08 ./__package/lib/commons-codec-1.4.jar
178258330 108 -r-x------ 1 yarn hadoop 110031 Nov 23 20:08 ./__package/lib/jetty-webapp-8.1.8.v20121106.jar
178258251 104 -r-x------ 1 yarn hadoop 105112 Nov 23 20:08 ./__package/lib/servlet-api-2.5.jar
178258361 484 -r-x------ 1 yarn hadoop 491831 Nov 23 20:08 ./__package/lib/samza-kafka_2.10-0.11.0.jar
178258187 300 -r-x------ 1 yarn hadoop 303139 Dec 23 11:53 ./__package/lib/avro-1.7.4.jar
178258362 320 -r-x------ 1 yarn hadoop 324010 Nov 23 20:08 ./__package/lib/kafka-clients-0.8.2.1.jar
178258235 80 -r-x------ 1 yarn hadoop 79912 Nov 23 20:08 ./__package/lib/api-util-1.0.0-M20.jar
178258236 776 -r-x------ 1 yarn hadoop 792964 Dec 14 09:34 ./__package/lib/zookeeper-3.4.6.jar
178258347 8 -r-x------ 1 yarn hadoop 6440 Nov 23 20:08 ./__package/lib/grizzled-slf4j_2.10-1.0.1.jar
178258237 88 -r-x------ 1 yarn hadoop 87325 Nov 23 20:08 ./__package/lib/jline-0.9.94.jar
178258314 8 -r-x------ 1 yarn hadoop 8114 Nov 23 20:08 ./__package/lib/grizzly-rcm-2.1.2.jar
178258312 44 -r-x------ 1 yarn hadoop 42212 Nov 23 20:08 ./__package/lib/management-api-3.0.0-b012.jar
178258363 164 -r-x------ 1 yarn hadoop 165505 Nov 23 20:08 ./__package/lib/lz4-1.2.0.jar
178258195 44 -r-x------ 1 yarn hadoop 44598 Nov 23 20:07 ./__package/lib/commons-logging-api-1.1.jar
178258359 36 -r-x------ 1 yarn hadoop 34658 Nov 23 20:08 ./__package/lib/samza-kv-rocksdb_2.10-0.11.0.jar
178258234 20 -r-x------ 1 yarn hadoop 16560 Nov 23 20:08 ./__package/lib/api-asn1-api-1.0.0-M20.jar
178258218 20 -r-x------ 1 yarn hadoop 18336 Nov 23 20:08 ./__package/lib/jackson-jaxrs-1.9.13.jar
178258258 28 -r-x------ 1 yarn hadoop 26477 Nov 23 20:07 ./__package/lib/httpmime-4.2.3.jar
178258284 4808 -r-x------ 1 yarn hadoop 4920443 Nov 23 20:08 ./__package/lib/accumulo-core-1.7.0.jar
178258296 4 -r-x------ 1 yarn hadoop 2559 Nov 23 20:08 ./__package/lib/hadoop-client-2.2.0.jar
178258326 312 -r-x------ 1 yarn hadoop 315805 Nov 23 20:08 ./__package/lib/commons-lang3-3.1.jar
178258204 16 -r-x------ 1 yarn hadoop 15010 Nov 23 20:08 ./__package/lib/xmlenc-0.52.jar
178258188 228 -r-x------ 1 yarn hadoop 232248 Nov 23 20:08 ./__package/lib/jackson-core-asl-1.9.13.jar
178258305 28 -r-x------ 1 yarn hadoop 28100 Nov 23 20:08 ./__package/lib/jersey-test-framework-core-1.9.jar
178258206 184 -r-x------ 1 yarn hadoop 185140 Nov 23 20:08 ./__package/lib/commons-io-2.4.jar
178258357 3132 -r-x------ 1 yarn hadoop 3203471 Nov 23 20:08 ./__package/lib/scala-reflect-2.10.4.jar
178258275 2304 -r-x------ 1 yarn hadoop 2356393 Nov 23 20:08 ./__package/lib/lucene-core-5.3.1.jar
178258341 4 -r-x------ 1 yarn hadoop 3745 Nov 23 20:08 ./__package/lib/samza-shell-0.11.0-dist.tgz
178258226 144 -r-x------ 1 yarn hadoop 143602 Nov 23 20:08 ./__package/lib/commons-digester-1.8.jar
178258198 24 -r-x------ 1 yarn hadoop 20998 Nov 23 20:07 ./__package/lib/ahocorasick-0.3.0.jar
178258287 480 -r-x------ 1 yarn hadoop 489884 Nov 23 20:08 ./__package/lib/log4j-1.2.17.jar
178258248 280 -r-x------ 1 yarn hadoop 284220 Nov 23 20:08 ./__package/lib/commons-lang-2.6.jar
178258214 872 -r-x------ 1 yarn hadoop 890168 Nov 23 20:08 ./__package/lib/jaxb-impl-2.2.3-1.jar
178258239 184 -r-x------ 1 yarn hadoop 186273 Nov 23 20:08 ./__package/lib/curator-framework-2.7.1.jar
178258259 16992 -r-x------ 1 yarn hadoop 17397217 Nov 23 20:08 ./__package/lib/broker-kafka-1.0.jar
178258311 24 -r-x------ 1 yarn hadoop 21817 Nov 23 20:08 ./__package/lib/gmbal-api-only-3.0.0-b023.jar
178258185 8 -r-x------ 1 yarn hadoop 7799 Dec 23 11:52 ./__package/lib/canal-common-1.0-SNAPSHOT.jar
178258227 188 -r-x------ 1 yarn hadoop 188671 Nov 23 20:08 ./__package/lib/commons-beanutils-1.7.0.jar
178258219 28 -r-x------ 1 yarn hadoop 27084 Nov 23 20:08 ./__package/lib/jackson-xc-1.9.13.jar
178258223 224 -r-x------ 1 yarn hadoop 227275 Nov 23 20:08 ./__package/lib/httpcore-4.2.4.jar
178258272 29656 -r-x------ 1 yarn hadoop 30365862 Nov 23 20:08 ./__package/lib/secbase-osgi-1.2.2.jar
178258281 852 -r-x------ 1 yarn hadoop 869674 Nov 23 20:08 ./__package/lib/spring-core-3.2.4.RELEASE.jar
178258315 332 -r-x------ 1 yarn hadoop 336904 Nov 23 20:08 ./__package/lib/grizzly-http-servlet-2.1.2.jar
178258252 1204 -r-x------ 1 yarn hadoop 1229125 Nov 23 20:08 ./__package/lib/xercesImpl-2.9.1.jar
178258203 44 -r-x------ 1 yarn hadoop 41123 Nov 23 20:08 ./__package/lib/commons-cli-1.2.jar
178258331 40 -r-x------ 1 yarn hadoop 39115 Nov 23 20:08 ./__package/lib/jetty-xml-8.1.8.v20121106.jar
178258285 60 -r-x------ 1 yarn hadoop 60527 Nov 23 20:08 ./__package/lib/jcommander-1.32.jar
178258270 232 -r-x------ 1 yarn hadoop 233971 Nov 23 20:08 ./__package/lib/jackson-core-lgpl-1.9.7.jar
178258286 2256 -r-x------ 1 yarn hadoop 2308517 Nov 23 20:08 ./__package/lib/guava-19.0.jar
178258299 1640 -r-x------ 1 yarn hadoop 1678642 Nov 23 20:08 ./__package/lib/hadoop-yarn-common-2.7.3.jar
178258306 84 -r-x------ 1 yarn hadoop 85353 Nov 23 20:08 ./__package/lib/javax.servlet-api-3.0.1.jar
178258320 172 -r-x------ 1 yarn hadoop 175554 Nov 23 20:08 ./__package/lib/hadoop-yarn-server-common-2.2.0.jar
178258283 240 -r-x------ 1 yarn hadoop 242236 Nov 23 20:08 ./__package/lib/spring-tx-3.2.4.RELEASE.jar
178258279 8 -r-x------ 1 yarn hadoop 4467 Nov 23 20:08 ./__package/lib/aopalliance-1.0.jar
178258231 92 -r-x------ 1 yarn hadoop 94150 Nov 23 20:08 ./__package/lib/hadoop-auth-2.7.3.jar
178258264 64 -r-x------ 1 yarn hadoop 64009 Nov 23 20:08 ./__package/lib/zkclient-0.3.jar
178258205 300 -r-x------ 1 yarn hadoop 305001 Nov 23 20:08 ./__package/lib/commons-httpclient-3.1.jar
178258271 768 -r-x------ 1 yarn hadoop 785722 Nov 23 20:08 ./__package/lib/jackson-mapper-lgpl-1.9.7.jar
178258301 12 -r-x------ 1 yarn hadoop 9752 Nov 23 20:08 ./__package/lib/slf4j-log4j12-1.6.2.jar
178258343 564 -r-x------ 1 yarn hadoop 573912 Nov 23 20:08 ./__package/lib/joda-time-2.2.jar
178258316 84 -r-x------ 1 yarn hadoop 83945 Nov 23 20:08 ./__package/lib/javax.servlet-3.1.jar
178258289 64 -r-x------ 1 yarn hadoop 63053 Nov 23 20:08 ./__package/lib/accumulo-start-1.7.0.jar
178258327 296 -r-x------ 1 yarn hadoop 300845 Nov 23 20:08 ./__package/lib/jsoup-1.8.1.jar
178258250 24 -r-x------ 1 yarn hadoop 24239 Nov 23 20:08 ./__package/lib/commons-daemon-1.0.13.jar
178258350 120 -r-x------ 1 yarn hadoop 119180 Nov 23 20:08 ./__package/lib/mime-util-2.1.3.jar
178258267 84 -r-x------ 1 yarn hadoop 85449 Nov 23 20:08 ./__package/lib/metrics-core-3.0.1.jar
178258245 96 -r-x------ 1 yarn hadoop 94672 Nov 23 20:08 ./__package/lib/xz-1.0.jar
178258277 844 -r-x------ 1 yarn hadoop 863688 Nov 23 20:08 ./__package/lib/spring-context-3.2.4.RELEASE.jar
178258349 216 -r-x------ 1 yarn hadoop 220813 Nov 23 20:08 ./__package/lib/juniversalchardet-1.0.3.jar
178258233 44 -r-x------ 1 yarn hadoop 44925 Nov 23 20:08 ./__package/lib/apacheds-i18n-2.0.0-M15.jar
178258274 1528 -r-x------ 1 yarn hadoop 1562970 Nov 23 20:08 ./__package/lib/lucene-analyzers-common-5.3.1.jar
178258352 1844 -r-x------ 1 yarn hadoop 1884354 Nov 23 20:08 ./__package/lib/akka-actor_2.10-2.1.2.jar
178258246 8124 -r-x------ 1 yarn hadoop 8316190 Nov 23 20:08 ./__package/lib/hadoop-hdfs-2.7.3.jar
178258186 244 -r-x------ 1 yarn hadoop 248884 Nov 23 20:08 ./__package/lib/weibo-common-1.3.4.jar
178258342 588 -r-x------ 1 yarn hadoop 601136 Nov 23 20:08 ./__package/lib/samza-yarn_2.10-0.11.0.jar
178258249 64 -r-x------ 1 yarn hadoop 62050 Nov 23 20:08 ./__package/lib/commons-logging-1.1.3.jar
178258356 292 -r-x------ 1 yarn hadoop 295250 Nov 23 20:08 ./__package/lib/scalate-util_2.10-1.6.1.jar
178258207 268 -r-x------ 1 yarn hadoop 273370 Nov 23 20:08 ./__package/lib/commons-net-3.1.jar
178258332 276 -r-x------ 1 yarn hadoop 280529 Nov 23 20:08 ./__package/lib/jetty-util-8.1.8.v20121106.jar
178258208 576 -r-x------ 1 yarn hadoop 588337 Nov 23 20:08 ./__package/lib/commons-collections-3.2.2.jar
178258290 408 -r-x------ 1 yarn hadoop 415578 Nov 23 20:08 ./__package/lib/commons-vfs2-2.0.jar
178258262 14108 -r-x------ 1 yarn hadoop 14445780 Nov 23 20:08 ./__package/lib/scala-compiler-2.10.4.jar
178258317 16 -r-x------ 1 yarn hadoop 14786 Nov 23 20:08 ./__package/lib/jersey-guice-1.9.jar
178258294 40 -r-x------ 1 yarn hadoop 40066 Nov 23 20:08 ./__package/lib/maven-scm-provider-svn-commons-1.4.jar
178258351 40 -r-x------ 1 yarn hadoop 38460 Nov 23 20:08 ./__package/lib/joda-convert-1.2.jar
178258278 328 -r-x------ 1 yarn hadoop 335455 Nov 23 20:08 ./__package/lib/spring-aop-3.2.4.RELEASE.jar
178258193 60 -r-x------ 1 yarn hadoop 61379 Nov 23 20:07 ./__package/lib/org.osgi.core-1.2.0.jar
178258304 16 -r-x------ 1 yarn hadoop 12976 Nov 23 20:08 ./__package/lib/jersey-test-framework-grizzly2-1.9.jar
178258273 448 -r-x------ 1 yarn hadoop 455041 Nov 23 20:08 ./__package/lib/logback-core-1.1.3.jar
178258209 528 -r-x------ 1 yarn hadoop 539912 Nov 23 20:08 ./__package/lib/jetty-6.1.26.jar
178258211 448 -r-x------ 1 yarn hadoop 458739 Nov 23 20:08 ./__package/lib/jersey-core-1.9.jar
178258224 20 -r-x------ 1 yarn hadoop 18490 Nov 23 20:08 ./__package/lib/java-xmlbuilder-0.4.jar
178258346 24 -r-x------ 1 yarn hadoop 23193 Nov 23 20:08 ./__package/lib/scalatra-common_2.10-2.2.1.jar
178258348 212 -r-x------ 1 yarn hadoop 216541 Nov 23 20:08 ./__package/lib/rl_2.10-0.4.4.jar
178258319 1424 -r-x------ 1 yarn hadoop 1455001 Nov 23 20:08 ./__package/lib/hadoop-mapreduce-client-core-2.2.0.jar
178258368 1024 -r-x------ 1 yarn hadoop 1045744 Nov 23 20:08 ./__package/lib/leveldbjni-all-1.8.jar
178258238 1172 -r-x------ 1 yarn hadoop 1199572 Nov 23 20:08 ./__package/lib/netty-3.6.2.Final.jar
178258318 164 -r-x------ 1 yarn hadoop 165867 Nov 23 20:08 ./__package/lib/hadoop-yarn-client-2.7.3.jar
178258194 152 -r-x------ 1 yarn hadoop 155295 Nov 23 20:07 ./__package/lib/org.osgi.compendium-1.2.0.jar
178258265 8 -r-x------ 1 yarn hadoop 4229 Nov 23 20:08 ./__package/lib/metrics-annotation-2.2.0.jar
178258313 196 -r-x------ 1 yarn hadoop 198255 Nov 23 20:08 ./__package/lib/grizzly-http-server-2.1.2.jar
178258292 248 -r-x------ 1 yarn hadoop 250546 Nov 23 20:08 ./__package/lib/plexus-utils-1.5.6.jar
178258354 48 -r-x------ 1 yarn hadoop 47030 Nov 23 20:08 ./__package/lib/scalatra-scalate_2.10-2.2.1.jar
178258344 64 -r-x------ 1 yarn hadoop 65012 Nov 23 20:08 ./__package/lib/guice-servlet-3.0.jar
178258291 96 -r-x------ 1 yarn hadoop 94421 Nov 23 20:08 ./__package/lib/maven-scm-api-1.4.jar
178258192 28 -r-x------ 1 yarn hadoop 25689 Nov 23 20:08 ./__package/lib/slf4j-api-1.6.2.jar
178258335 332 -r-x------ 1 yarn hadoop 338985 Nov 23 20:08 ./__package/lib/jetty-server-8.1.8.v20121106.jar
178258189 764 -r-x------ 1 yarn hadoop 780664 Nov 23 20:08 ./__package/lib/jackson-mapper-asl-1.9.13.jar
178258191 580 -r-x------ 1 yarn hadoop 592319 Dec 23 11:52 ./__package/lib/snappy-java-1.1.1.6.jar
178258254 1308 -r-x------ 1 yarn hadoop 1338531 Nov 23 20:08 ./__package/lib/jk-analyzer-1.4.5.jar
178258244 236 -r-x------ 1 yarn hadoop 241367 Nov 23 20:08 ./__package/lib/commons-compress-1.4.1.jar
178258197 2572 -r-x------ 1 yarn hadoop 2630679 Nov 23 20:08 ./__package/lib/hstore-common-1.0.1.jar
178258308 20 -r-x------ 1 yarn hadoop 17542 Nov 23 20:08 ./__package/lib/jersey-grizzly2-1.9.jar
178258366 132 -r-x------ 1 yarn hadoop 132202 Nov 23 20:08 ./__package/lib/irclib-1.10.jar
178258240 68 -r-x------ 1 yarn hadoop 69500 Nov 23 20:08 ./__package/lib/curator-client-2.7.1.jar
178258321 24 -r-x------ 1 yarn hadoop 21537 Nov 23 20:08 ./__package/lib/hadoop-mapreduce-client-shuffle-2.2.0.jar
178258221 44 -r-x------ 1 yarn hadoop 43033 Nov 23 20:08 ./__package/lib/asm-3.1.jar
178258256 48 -r-x------ 1 yarn hadoop 45944 Nov 23 20:07 ./__package/lib/json-20090211.jar
178258190 32 -r-x------ 1 yarn hadoop 29555 Nov 23 20:07 ./__package/lib/paranamer-2.3.jar
178258255 52 -r-x------ 1 yarn hadoop 49572 Nov 23 20:07 ./__package/lib/commons-dbutils-1.4.jar
178258213 68 -r-x------ 1 yarn hadoop 67758 Nov 23 20:08 ./__package/lib/jettison-1.1.jar
178258298 644 -r-x------ 1 yarn hadoop 656365 Nov 23 20:08 ./__package/lib/hadoop-mapreduce-client-common-2.2.0.jar
178258369 16 -r-x------ 1 yarn hadoop 13962 Jan 16 15:08 ./__package/lib/canal-persistent-hstore-1.0-SNAPSHOT.jar
178258253 192 -r-x------ 1 yarn hadoop 194354 Nov 23 20:08 ./__package/lib/xml-apis-1.3.04.jar
178258310 676 -r-x------ 1 yarn hadoop 690573 Nov 23 20:08 ./__package/lib/grizzly-framework-2.1.2.jar
178258269 2496 -r-x------ 1 yarn hadoop 2553049 Nov 23 20:08 ./__package/lib/camel-core-2.12.0.jar
178258360 4124 -r-x------ 1 yarn hadoop 4218933 Nov 23 20:08 ./__package/lib/rocksdbjni-3.13.1.jar
178258228 204 -r-x------ 1 yarn hadoop 206035 Nov 23 20:08 ./__package/lib/commons-beanutils-core-1.8.0.jar
178258261 6960 -r-x------ 1 yarn hadoop 7126372 Nov 23 20:08 ./__package/lib/scala-library-2.10.4.jar
178258288 108 -r-x------ 1 yarn hadoop 110331 Nov 23 20:08 ./__package/lib/accumulo-fate-1.7.0.jar
178258355 1936 -r-x------ 1 yarn hadoop 1979885 Nov 23 20:08 ./__package/lib/scalate-core_2.10-1.6.1.jar
178258345 1212 -r-x------ 1 yarn hadoop 1237017 Nov 23 20:08 ./__package/lib/scalatra_2.10-2.2.1.jar
178258293 72 -r-x------ 1 yarn hadoop 69858 Nov 23 20:08 ./__package/lib/maven-scm-provider-svnexe-1.4.jar
178258297 472 -r-x------ 1 yarn hadoop 482042 Nov 23 20:08 ./__package/lib/hadoop-mapreduce-client-app-2.2.0.jar
178258241 184 -r-x------ 1 yarn hadoop 185746 Nov 23 20:08 ./__package/lib/jsch-0.1.42.jar
178258358 88 -r-x------ 1 yarn hadoop 88353 Nov 23 20:08 ./__package/lib/samza-kv_2.10-0.11.0.jar
178258323 816 -r-x------ 1 yarn hadoop 832410 Nov 23 20:08 ./__package/lib/commons-math-2.1.jar
178258325 212 -r-x------ 1 yarn hadoop 217053 Nov 23 20:08 ./__package/lib/libthrift-0.9.1.jar
178258322 36 -r-x------ 1 yarn hadoop 35216 Nov 23 20:08 ./__package/lib/hadoop-mapreduce-client-jobclient-2.2.0.jar
178258263 52 -r-x------ 1 yarn hadoop 53244 Nov 23 20:08 ./__package/lib/jopt-simple-3.2.jar
178258295 28 -r-x------ 1 yarn hadoop 25429 Nov 23 20:08 ./__package/lib/regexp-1.3.jar
178258232 676 -r-x------ 1 yarn hadoop 691479 Nov 23 20:08 ./__package/lib/apacheds-kerberos-codec-2.0.0-M15.jar
178258210 176 -r-x------ 1 yarn hadoop 177131 Nov 23 20:08 ./__package/lib/jetty-util-6.1.26.jar
178258309 248 -r-x------ 1 yarn hadoop 253086 Nov 23 20:08 ./__package/lib/grizzly-http-2.1.2.jar
178258282 196 -r-x------ 1 yarn hadoop 196807 Nov 23 20:08 ./__package/lib/spring-expression-3.2.4.RELEASE.jar
178258276 236 -r-x------ 1 yarn hadoop 240768 Nov 23 20:08 ./__package/lib/camel-spring-2.12.0.jar
178258365 112 -r-x------ 1 yarn hadoop 111908 Nov 23 20:08 ./__package/lib/metrics-core-3.1.0.jar
178258340 16 -r-x------ 1 yarn hadoop 15833 Nov 23 20:08 ./__package/lib/samza-log4j-0.11.0.jar
178258230 188 -r-x------ 1 yarn hadoop 190432 Nov 23 20:08 ./__package/lib/gson-2.2.4.jar
178258280 596 -r-x------ 1 yarn hadoop 607755 Nov 23 20:08 ./__package/lib/spring-beans-3.2.4.RELEASE.jar
178258220 700 -r-x------ 1 yarn hadoop 713089 Nov 23 20:08 ./__package/lib/jersey-server-1.9.jar
178258215 104 -r-x------ 1 yarn hadoop 105134 Nov 23 20:08 ./__package/lib/jaxb-api-2.2.2.jar
178258199 580 -r-x------ 1 yarn hadoop 590996 Nov 23 20:07 ./__package/lib/mongo-java-driver-2.12.3.jar
178258386 4 drwx--x--- 2 yarn hadoop 4096 Jan 16 15:09 ./tmp
178258394 4 -rw-r--r-- 1 yarn hadoop 16 Jan 16 15:09 ./.default_container_executor.sh.crc
178258387 4 -rw-r--r-- 1 yarn hadoop 110 Jan 16 15:09 ./container_tokens
178258389 4 -rwx------ 1 yarn hadoop 3234 Jan 16 15:09 ./launch_container.sh
178258391 4 -rwx------ 1 yarn hadoop 672 Jan 16 15:09 ./default_container_executor_session.sh
178258388 4 -rw-r--r-- 1 yarn hadoop 12 Jan 16 15:09 ./.container_tokens.crc
broken symlinks(find -L . -maxdepth 5 -type l -ls):
LogType:gc.log.0.current
LogLength:19467
Log Contents:
Java HotSpot(TM) 64-Bit Server VM (25.77-b03) for linux-amd64 JRE (1.8.0_77-b03), built on Mar 20 2016 22:00:46 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 32851128k(21348024k free), swap 0k(0k free)
CommandLine flags: -XX:GCLogFileSize=10241024 -XX:InitialHeapSize=525618048 -XX:MaxHeapSize=3435134976 -XX:NumberOfGCLogFiles=10 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseGCLogFileRotation -XX:+UseParallelGC
2017-01-16T15:09:42.920+0800: 0.814: [GC (System.gc()) 95534K->9562K(493056K), 0.0055023 secs]
2017-01-16T15:09:42.925+0800: 0.820: [Full GC (System.gc()) 9562K->9063K(493056K), 0.0135018 secs]
2017-01-16T15:09:44.478+0800: 2.372: [GC (Allocation Failure) 138087K->17647K(493056K), 0.0053230 secs]
2017-01-16T15:09:44.735+0800: 2.629: [GC (Metadata GC Threshold) 38262K->10823K(493056K), 0.0024434 secs]
2017-01-16T15:09:44.737+0800: 2.632: [Full GC (Metadata GC Threshold) 10823K->4715K(367616K), 0.0206558 secs]
2017-01-16T15:09:46.291+0800: 4.186: [GC (Allocation Failure) 132923K->23902K(367616K), 0.0076182 secs]
2017-01-16T15:09:48.045+0800: 5.940: [GC (Allocation Failure) 152627K->39450K(367616K), 0.0110388 secs]
2017-01-16T15:09:48.428+0800: 6.323: [GC (Allocation Failure) 168474K->46956K(392192K), 0.0162798 secs]
2017-01-16T15:09:49.273+0800: 7.167: [GC (Allocation Failure) 200556K->49752K(411648K), 0.0122730 secs]
2017-01-16T15:09:49.672+0800: 7.566: [GC (Allocation Failure) 203352K->57946K(495104K), 0.0107909 secs]
2017-01-16T15:09:50.151+0800: 8.046: [GC (Allocation Failure) 295002K->54610K(498688K), 0.0091583 secs]
2017-01-16T15:09:50.571+0800: 8.465: [GC (Allocation Failure) 291666K->56044K(560128K), 0.0073873 secs]
2017-01-16T15:09:51.132+0800: 9.026: [GC (Allocation Failure) 378727K->59045K(620544K), 0.0079350 secs]
2017-01-16T15:09:51.742+0800: 9.636: [GC (Allocation Failure) 422053K->68598K(697344K), 0.0113878 secs]
2017-01-16T15:09:52.474+0800: 10.368: [GC (Allocation Failure) 509942K->88758K(696320K), 0.0118573 secs]
2017-01-16T15:09:53.266+0800: 11.161: [GC (Allocation Failure) 530102K->104706K(858112K), 0.0241047 secs]
2017-01-16T15:09:54.374+0800: 12.269: [GC (Allocation Failure) 706306K->121522K(856576K), 0.0098662 secs]
2017-01-16T15:09:55.527+0800: 13.421: [GC (Allocation Failure) 723122K->127142K(1043456K), 0.0070736 secs]
2017-01-16T15:09:58.624+0800: 16.518: [GC (Allocation Failure) 931494K->136885K(1058816K), 0.0082473 secs]
2017-01-16T15:10:00.801+0800: 18.696: [GC (Allocation Failure) 941237K->156345K(1212416K), 0.0163990 secs]
2017-01-16T15:10:02.701+0800: 20.595: [GC (Allocation Failure) 1115833K->152906K(1286656K), 0.0177250 secs]
2017-01-16T15:10:06.354+0800: 24.248: [GC (Allocation Failure) 1190218K->130916K(1293312K), 0.0037100 secs]
2017-01-16T15:10:09.162+0800: 27.057: [GC (Allocation Failure) 1164132K->143615K(1273344K), 0.0059926 secs]
2017-01-16T15:10:11.221+0800: 29.116: [GC (Allocation Failure) 1176831K->138864K(1293312K), 0.0041602 secs]
2017-01-16T15:10:16.274+0800: 34.168: [GC (Allocation Failure) 1173616K->128361K(1291776K), 0.0026016 secs]
2017-01-16T15:10:19.293+0800: 37.187: [GC (Allocation Failure) 1163113K->135221K(1224192K), 0.0045995 secs]
2017-01-16T15:10:21.291+0800: 39.186: [GC (Allocation Failure) 1127477K->131242K(1179648K), 0.0034263 secs]
2017-01-16T15:10:24.182+0800: 42.076: [GC (Allocation Failure) 1083050K->132129K(1142272K), 0.0040361 secs]
2017-01-16T15:10:26.210+0800: 44.104: [GC (Allocation Failure) 1045537K->154051K(1126912K), 0.0087234 secs]
2017-01-16T15:10:27.684+0800: 45.578: [GC (Allocation Failure) 1030595K->157334K(1090560K), 0.0087936 secs]
2017-01-16T15:10:29.264+0800: 47.159: [GC (Allocation Failure) 999062K->144796K(1042432K), 0.0057635 secs]
2017-01-16T15:10:32.092+0800: 49.987: [GC (Allocation Failure) 953244K->134970K(1000448K), 0.0035740 secs]
2017-01-16T15:10:35.154+0800: 53.048: [GC (Allocation Failure) 911674K->168831K(1004544K), 0.0113475 secs]
2017-01-16T15:10:36.569+0800: 54.463: [GC (Allocation Failure) 915327K->153332K(960000K), 0.0062272 secs]
2017-01-16T15:10:37.861+0800: 55.755: [GC (Allocation Failure) 871156K->142163K(921088K), 0.0047419 secs]
2017-01-16T15:10:41.870+0800: 59.764: [GC (Allocation Failure) 832339K->139760K(892928K), 0.0045331 secs]
2017-01-16T15:10:43.149+0800: 61.043: [GC (Allocation Failure) 803824K->140567K(868352K), 0.0048852 secs]
2017-01-16T15:10:44.405+0800: 62.300: [GC (Allocation Failure) 779543K->148396K(852480K), 0.0061491 secs]
2017-01-16T15:10:45.357+0800: 63.252: [GC (Allocation Failure) 763820K->171762K(852480K), 0.0118768 secs]
2017-01-16T15:10:46.448+0800: 64.342: [GC (Allocation Failure) 764658K->161113K(812032K), 0.0089167 secs]
2017-01-16T15:10:47.409+0800: 65.304: [GC (Allocation Failure) 732505K->172200K(876544K), 0.0083530 secs]
2017-01-16T15:10:48.963+0800: 66.857: [GC (Allocation Failure) 797352K->185334K(945664K), 0.0117701 secs]
2017-01-16T15:10:50.276+0800: 68.171: [GC (Allocation Failure) 870390K->163292K(897024K), 0.0060670 secs]
2017-01-16T15:10:52.756+0800: 70.650: [GC (Allocation Failure) 821543K->193696K(902656K), 0.0120097 secs]
2017-01-16T15:10:54.922+0800: 72.817: [GC (Allocation Failure) 828064K->195426K(880640K), 0.0121904 secs]
2017-01-16T15:10:56.546+0800: 74.441: [GC (Allocation Failure) 806242K->187447K(849920K), 0.0103799 secs]
2017-01-16T15:10:57.556+0800: 75.451: [GC (Allocation Failure) 775735K->173169K(814080K), 0.0076372 secs]
2017-01-16T15:10:58.519+0800: 76.414: [GC (Allocation Failure) 739953K->178018K(798720K), 0.0087239 secs]
2017-01-16T15:11:01.505+0800: 79.399: [GC (Allocation Failure) 724322K->184896K(785408K), 0.0116132 secs]
2017-01-16T15:11:02.557+0800: 80.451: [GC (Allocation Failure) 711744K->169351K(767488K), 0.0068837 secs]
2017-01-16T15:11:03.500+0800: 81.394: [GC (Allocation Failure) 677767K->158678K(722432K), 0.0054865 secs]
2017-01-16T15:11:04.415+0800: 82.309: [GC (Allocation Failure) 649174K->159633K(734720K), 0.0050732 secs]
2017-01-16T15:11:07.140+0800: 85.035: [GC (Allocation Failure) 633233K->152172K(682496K), 0.0039760 secs]
2017-01-16T15:11:07.923+0800: 85.817: [GC (Allocation Failure) 609900K->161710K(701952K), 0.0061949 secs]
2017-01-16T15:11:08.842+0800: 86.737: [GC (Allocation Failure) 604078K->182719K(682496K), 0.0101428 secs]
2017-01-16T15:11:09.793+0800: 87.688: [GC (Allocation Failure) 610239K->193868K(700928K), 0.0112609 secs]
2017-01-16T15:11:10.506+0800: 88.400: [GC (Allocation Failure) 616780K->185699K(705536K), 0.0103993 secs]
2017-01-16T15:11:11.203+0800: 89.097: [GC (Allocation Failure) 608611K->185638K(747008K), 0.0130029 secs]
2017-01-16T15:11:12.029+0800: 89.923: [GC (Allocation Failure) 649510K->185756K(747008K), 0.0099941 secs]
[... ~110 further similar GC (Allocation Failure) entries from 15:11:12 through 15:13:30 elided; every collection completed in well under 35 ms, with post-GC heap occupancy steady at roughly 160-250 MB ...]
2017-01-16T15:13:31.287+0800: 229.182: [GC (Allocation Failure) 484560K->196312K(580608K), 0.0057926 secs]
2017-01-16T15:13:31.886+0800: 229.780: [GC (Allocation Failure) 518872K->196186K(582144K), 0.0028623 secs]
2017-01-16T15:13:33.184+0800: 231.078: [GC (Allocation Failure) 518746K->194818K(609280K), 0.0029994 secs]
LogType:launch_container.sh
LogLength:3234
Log Contents:
#!/bin/bash
export LOCAL_DIRS="/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032"
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/usr/hdp/current/hadoop-client/conf"}
export NM_HTTP_PORT="8042"
export SAMZA_COORDINATOR_URL="http://rflow61:43658/"
export JAVA_HOME=${JAVA_HOME:-"/usr/java/latest"}
export LOG_DIRS="/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002"
export NM_AUX_SERVICE_mapreduce_shuffle="AAA0+gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA="
export JAVA_OPTS="-Xmx3276m"
export NM_PORT="45454"
export USER="ant"
export HADOOP_YARN_HOME=${HADOOP_YARN_HOME:-"/usr/hdp/current/hadoop-yarn-nodemanager"}
export NM_HOST="rflow61"
export HADOOP_TOKEN_FILE_LOCATION="/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/container_tokens"
export SAMZA_CONTAINER_ID="0"
export NM_AUX_SERVICE_spark_shuffle=""
export LOCAL_USER_DIRS="/data/hadoop/yarn/local/usercache/ant/"
export LOGNAME="ant"
export JVM_PID="$$"
export PWD="/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002"
export HOME="/home/"
export NM_AUX_SERVICE_spark2_shuffle=""
export CONTAINER_ID="container_e28_1482299868039_0032_01_000002"
export MALLOC_ARENA_MAX="4"
ln -sf "/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/filecache/10/canal-persistent-hstore-1.0-SNAPSHOT-dist.tar.gz" "__package"
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
exit $hadoop_shell_errorcode
fi
# Creating copy of launch script
cp "launch_container.sh" "/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/launch_container.sh"
chmod 640 "/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/launch_container.sh"
# Determining directory contents
echo "ls -l:" 1>"/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/directory.info"
ls -l 1>>"/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/directory.info"
echo "find -L . -maxdepth 5 -ls:" 1>>"/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/directory.info"
find -L . -maxdepth 5 -ls 1>>"/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/directory.info"
echo "broken symlinks(find -L . -maxdepth 5 -type l -ls):" 1>>"/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/directory.info"
find -L . -maxdepth 5 -type l -ls 1>>"/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/directory.info"
exec /bin/bash -c "export SAMZA_LOG_DIR=/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002 && ln -sfn /data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002 logs && exec ./__package//bin/run-container.sh 1>logs/stdout 2>logs/stderr"
hadoop_shell_errorcode=$?
if [ $hadoop_shell_errorcode -ne 0 ]
then
exit $hadoop_shell_errorcode
fi
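Note that the launch script above ends by `exec`-ing run-container.sh, so the container JVM replaces the wrapper shell and receives YARN's SIGTERM directly when the application is killed. A plain SIGTERM lets the JVM run registered shutdown hooks (which is where SamzaContainer invokes its shutdown sequence, including MessageChooser.stop() and SystemProducer.close()), whereas SIGKILL (kill -9) skips them. A minimal illustrative sketch of the mechanism — not Samza's actual code, class and method names here are hypothetical:

```java
// Sketch: why SIGTERM allows graceful shutdown but SIGKILL does not.
// The JVM runs shutdown hooks on normal exit and on SIGTERM; SIGKILL
// terminates the process immediately, so no hook ever runs.
public class GracefulShutdownSketch {
    static volatile boolean cleanedUp = false;

    // Stands in for SamzaContainer's shutdown sequence
    // (shutdownConsumers -> MessageChooser.stop(),
    //  shutdownProducers -> SystemProducer.close(), etc.).
    static void shutdownSequence() {
        cleanedUp = true;
    }

    public static void main(String[] args) {
        // Registered once at container startup; runs on SIGTERM or normal exit.
        Runtime.getRuntime()
               .addShutdownHook(new Thread(GracefulShutdownSketch::shutdownSequence));
        // ... the container run loop would execute here ...
    }
}
```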
LogType:samza-container-0.log
LogLength:73068
Log Contents:
2017-01-16 15:09:42.592 [main] SamzaContainer$ [INFO] Got container ID: 0
2017-01-16 15:09:42.593 [main] SamzaContainer$ [INFO] Got coordinator URL: http://rflow61:43658/
2017-01-16 15:09:42.595 [main] SamzaContainer$ [INFO] Fetching configuration from: http://rflow61:43658/
2017-01-16 15:09:42.857 [main] JmxServer [INFO] According to Util.getLocalHost.getHostName we are rflow61
2017-01-16 15:09:43.003 [main] JmxServer [INFO] Started JmxServer registry port=46790 server port=45942 url=service:jmx:rmi://localhost:45942/jndi/rmi://localhost:46790/jmxrmi
2017-01-16 15:09:43.003 [main] STARTUP_LOGGER [INFO] Started JmxServer registry port=46790 server port=45942 url=service:jmx:rmi://localhost:45942/jndi/rmi://localhost:46790/jmxrmi
2017-01-16 15:09:43.004 [main] JmxServer [INFO] If you are tunneling, you might want to try JmxServer registry port=46790 server port=45942 url=service:jmx:rmi://rflow61:45942/jndi/rmi://rflow61:46790/jmxrmi
2017-01-16 15:09:43.004 [main] STARTUP_LOGGER [INFO] If you are tunneling, you might want to try JmxServer registry port=46790 server port=45942 url=service:jmx:rmi://rflow61:45942/jndi/rmi://rflow61:46790/jmxrmi
2017-01-16 15:09:43.005 [main] SamzaContainer$ [INFO] Setting up Samza container: samza-container-0
2017-01-16 15:09:43.005 [main] SamzaContainer$ [INFO] Samza container PID: 14401@rflow61
2017-01-16 15:09:43.005 [main] STARTUP_LOGGER [INFO] Samza container PID: 14401@rflow61
2017-01-16 15:09:43.006 [main] SamzaContainer$ [INFO] Using configuration: {systems.hstore.samza.key.serde=string, yarn.container.count=2, systems.kafka.samza.factory=org.apache.samza.system.kafka.KafkaSystemFactory, canal.hstore.data.dir=/eefung/shuqi/samza-test, serializers.registry.metrics.class=org.apache.samza.serializers.MetricsSnapshotSerdeFactory, systems.hstore.samza.msg.serde=status, serializers.registry.string.class=org.apache.samza.serializers.StringSerdeFactory, task.checkpoint.system=kafka, task.commit.ms=1000, task.checkpoint.factory=org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory, cluster-manager.container.memory.mb=4096, systems.kafka.samza.msg.serde=status, metrics.reporters=snapshot, serializers.registry.status.class=com.antfact.datacenter.canal.common.serde.StatusSerdeFactory, job.name=canal-status-persistent-hstore, metrics.reporter.snapshot.class=org.apache.samza.metrics.reporter.MetricsSnapshotReporterFactory, systems.kafka.producer.bootstrap.servers=buka1:9096,buka2:9096,buka3:9096, canal.sph.output.system=hstore, systems.kafka.consumer.zookeeper.connect=zk11:3181,zk12:3181,zk13:3181, systems.kafka.samza.key.serde=string, job.coordinator.system=kafka, canal.hstore.interval.new.file=3600000, task.consumer.batch.size=100, task.inputs=kafka.tweets_distinctContent_test, yarn.package.path=hdfs://rflow/rflow-apps/data/canal-persistent-hstore-1.0-SNAPSHOT-dist.tar.gz, job.factory.class=org.apache.samza.job.yarn.YarnJobFactory, task.class=com.antfact.datacenter.canal.task.persistent.HStoreWriterTask, systems.kafka.streams.samza-metrics.samza.msg.serde=metrics, task.opts=-Xmx3276m, canal.hstore.data.type=1, metrics.reporter.snapshot.stream=kafka.samza-metrics, systems.hstore.samza.factory=com.antfact.datacenter.canal.system.HStoreSystemFactory}
2017-01-16 15:09:43.006 [main] STARTUP_LOGGER [INFO] Using configuration: {systems.hstore.samza.key.serde=string, yarn.container.count=2, systems.kafka.samza.factory=org.apache.samza.system.kafka.KafkaSystemFactory, canal.hstore.data.dir=/eefung/shuqi/samza-test, serializers.registry.metrics.class=org.apache.samza.serializers.MetricsSnapshotSerdeFactory, systems.hstore.samza.msg.serde=status, serializers.registry.string.class=org.apache.samza.serializers.StringSerdeFactory, task.checkpoint.system=kafka, task.commit.ms=1000, task.checkpoint.factory=org.apache.samza.checkpoint.kafka.KafkaCheckpointManagerFactory, cluster-manager.container.memory.mb=4096, systems.kafka.samza.msg.serde=status, metrics.reporters=snapshot, serializers.registry.status.class=com.antfact.datacenter.canal.common.serde.StatusSerdeFactory, job.name=canal-status-persistent-hstore, metrics.reporter.snapshot.class=org.apache.samza.metrics.reporter.MetricsSnapshotReporterFactory, systems.kafka.producer.bootstrap.servers=buka1:9096,buka2:9096,buka3:9096, canal.sph.output.system=hstore, systems.kafka.consumer.zookeeper.connect=zk11:3181,zk12:3181,zk13:3181, systems.kafka.samza.key.serde=string, job.coordinator.system=kafka, canal.hstore.interval.new.file=3600000, task.consumer.batch.size=100, task.inputs=kafka.tweets_distinctContent_test, yarn.package.path=hdfs://rflow/rflow-apps/data/canal-persistent-hstore-1.0-SNAPSHOT-dist.tar.gz, job.factory.class=org.apache.samza.job.yarn.YarnJobFactory, task.class=com.antfact.datacenter.canal.task.persistent.HStoreWriterTask, systems.kafka.streams.samza-metrics.samza.msg.serde=metrics, task.opts=-Xmx3276m, canal.hstore.data.type=1, metrics.reporter.snapshot.stream=kafka.samza-metrics, systems.hstore.samza.factory=com.antfact.datacenter.canal.system.HStoreSystemFactory}
2017-01-16 15:09:43.007 [main] SamzaContainer$ [INFO] Using container model: ContainerModel [containerId=0, tasks={Partition 0=TaskModel [taskName=Partition 0, systemStreamPartitions=[SystemStreamPartition [kafka, tweets_distinctContent_test, 0]], changeLogPartition=Partition [partition=1]], Partition 2=TaskModel [taskName=Partition 2, systemStreamPartitions=[SystemStreamPartition [kafka, tweets_distinctContent_test, 2]], changeLogPartition=Partition [partition=3]], Partition 4=TaskModel [taskName=Partition 4, systemStreamPartitions=[SystemStreamPartition [kafka, tweets_distinctContent_test, 4]], changeLogPartition=Partition [partition=5]], Partition 6=TaskModel [taskName=Partition 6, systemStreamPartitions=[SystemStreamPartition [kafka, tweets_distinctContent_test, 6]], changeLogPartition=Partition [partition=7]]}]
2017-01-16 15:09:43.007 [main] STARTUP_LOGGER [INFO] Using container model: ContainerModel [containerId=0, tasks={Partition 0=TaskModel [taskName=Partition 0, systemStreamPartitions=[SystemStreamPartition [kafka, tweets_distinctContent_test, 0]], changeLogPartition=Partition [partition=1]], Partition 2=TaskModel [taskName=Partition 2, systemStreamPartitions=[SystemStreamPartition [kafka, tweets_distinctContent_test, 2]], changeLogPartition=Partition [partition=3]], Partition 4=TaskModel [taskName=Partition 4, systemStreamPartitions=[SystemStreamPartition [kafka, tweets_distinctContent_test, 4]], changeLogPartition=Partition [partition=5]], Partition 6=TaskModel [taskName=Partition 6, systemStreamPartitions=[SystemStreamPartition [kafka, tweets_distinctContent_test, 6]], changeLogPartition=Partition [partition=7]]}]
2017-01-16 15:09:43.047 [main] SamzaContainer$ [INFO] Got system names: Set(kafka, hstore)
2017-01-16 15:09:43.050 [main] SamzaContainer$ [INFO] Got serde streams: Set(SystemStream [system=kafka, stream=samza-metrics])
2017-01-16 15:09:43.052 [main] SamzaContainer$ [INFO] Got serde names: Set(string, metrics, status)
2017-01-16 15:09:43.078 [main] VerifiableProperties [INFO] Verifying properties
2017-01-16 15:09:43.084 [main] VerifiableProperties [INFO] Property client.id is overridden to samza_admin-canal_status_persistent_hstore-1
2017-01-16 15:09:43.084 [main] VerifiableProperties [INFO] Property group.id is overridden to undefined-samza-consumer-group-61912d6a-1519-490c-ba88-eedcc74a7aaa
2017-01-16 15:09:43.085 [main] VerifiableProperties [INFO] Property zookeeper.connect is overridden to zk11:3181,zk12:3181,zk13:3181
2017-01-16 15:09:43.094 [main] SamzaContainer$ [INFO] Got system factories: Set(kafka, hstore)
2017-01-16 15:09:43.117 [main] VerifiableProperties [INFO] Verifying properties
2017-01-16 15:09:43.117 [main] VerifiableProperties [INFO] Property client.id is overridden to samza_admin-canal_status_persistent_hstore-1
2017-01-16 15:09:43.117 [main] VerifiableProperties [INFO] Property metadata.broker.list is overridden to buka1:9096,buka2:9096,buka3:9096
2017-01-16 15:09:43.117 [main] VerifiableProperties [INFO] Property request.timeout.ms is overridden to 30000
2017-01-16 15:09:43.153 [main] ClientUtils$ [INFO] Fetching metadata from broker id:0,host:buka1,port:9096 with correlation id 0 for 1 topic(s) Set(tweets_distinctContent_test)
2017-01-16 15:09:43.160 [main] SyncProducer [INFO] Connected to buka1:9096 for producing
2017-01-16 15:09:43.179 [main] SyncProducer [INFO] Disconnecting from buka1:9096
2017-01-16 15:09:43.445 [main] SamzaContainer$ [INFO] Got input stream metadata: Map(SystemStream [system=kafka, stream=tweets_distinctContent_test] -> SystemStreamMetadata [streamName=tweets_distinctContent_test, partitionMetadata={Partition [partition=0]=SystemStreamPartitionMetadata [oldestOffset=1296566503, newestOffset=1299898306, upcomingOffset=1299898307], Partition [partition=5]=SystemStreamPartitionMetadata [oldestOffset=1296563006, newestOffset=1299943198, upcomingOffset=1299943199], Partition [partition=1]=SystemStreamPartitionMetadata [oldestOffset=1296530294, newestOffset=1299951243, upcomingOffset=1299951244], Partition [partition=6]=SystemStreamPartitionMetadata [oldestOffset=1294046043, newestOffset=1299909416, upcomingOffset=1299909417], Partition [partition=2]=SystemStreamPartitionMetadata [oldestOffset=1294131809, newestOffset=1299941600, upcomingOffset=1299941601], Partition [partition=7]=SystemStreamPartitionMetadata [oldestOffset=1294074218, newestOffset=1299950307, upcomingOffset=1299950308], Partition [partition=3]=SystemStreamPartitionMetadata [oldestOffset=1294026616, newestOffset=1299910143, upcomingOffset=1299910144], Partition [partition=4]=SystemStreamPartitionMetadata [oldestOffset=1296598194, newestOffset=1299930099, upcomingOffset=1299930100]}])
2017-01-16 15:09:43.447 [main] SamzaContainer$ [INFO] Got stream task class: com.antfact.datacenter.canal.task.persistent.HStoreWriterTask
2017-01-16 15:09:43.449 [main] VerifiableProperties [INFO] Verifying properties
2017-01-16 15:09:43.449 [main] VerifiableProperties [INFO] Property client.id is overridden to samza_consumer-canal_status_persistent_hstore-1
2017-01-16 15:09:43.449 [main] VerifiableProperties [INFO] Property group.id is overridden to undefined-samza-consumer-group-76fb6daf-ad77-4d5d-9ce5-132d27d95b05
2017-01-16 15:09:43.449 [main] VerifiableProperties [INFO] Property zookeeper.connect is overridden to zk11:3181,zk12:3181,zk13:3181
2017-01-16 15:09:43.455 [main] VerifiableProperties [INFO] Verifying properties
2017-01-16 15:09:43.455 [main] VerifiableProperties [INFO] Property client.id is overridden to samza_admin-canal_status_persistent_hstore-1
2017-01-16 15:09:43.455 [main] VerifiableProperties [INFO] Property group.id is overridden to undefined-samza-consumer-group-db878d58-7774-468a-834a-d02be823b3f4
2017-01-16 15:09:43.455 [main] VerifiableProperties [INFO] Property zookeeper.connect is overridden to zk11:3181,zk12:3181,zk13:3181
2017-01-16 15:09:43.461 [main] SamzaContainer$ [INFO] Got system consumers: Set(kafka)
2017-01-16 15:09:43.498 [main] SamzaContainer$ [INFO] Got system producers: Set(kafka, hstore)
2017-01-16 15:09:43.501 [main] SamzaContainer$ [INFO] Got serdes: Set(string, metrics, status)
2017-01-16 15:09:43.510 [main] SamzaContainer$ [INFO] Got change log system streams: Map()
2017-01-16 15:09:43.511 [main] SamzaContainer$ [INFO] Setting up JVM metrics.
2017-01-16 15:09:43.513 [main] SamzaContainer$ [INFO] Setting up message chooser.
2017-01-16 15:09:43.523 [main] DefaultChooser [INFO] Building default chooser with: useBatching=true, useBootstrapping=false, usePriority=false
2017-01-16 15:09:43.524 [main] SamzaContainer$ [INFO] Setting up metrics reporters.
2017-01-16 15:09:43.527 [main] MetricsSnapshotReporterFactory [INFO] Creating new metrics snapshot reporter.
2017-01-16 15:09:43.529 [main] MetricsSnapshotReporterFactory [WARN] Unable to find implementation version in jar's meta info. Defaulting to 0.0.1.
2017-01-16 15:09:43.530 [main] MetricsSnapshotReporterFactory [INFO] Got system stream SystemStream [system=kafka, stream=samza-metrics].
2017-01-16 15:09:43.530 [main] MetricsSnapshotReporterFactory [INFO] Got system factory org.apache.samza.system.kafka.KafkaSystemFactory@45fd9a4d.
2017-01-16 15:09:43.531 [main] MetricsSnapshotReporterFactory [INFO] Got producer org.apache.samza.system.kafka.KafkaSystemProducer@5f0e9815.
2017-01-16 15:09:43.532 [main] MetricsSnapshotReporterFactory [INFO] Got serde org.apache.samza.serializers.MetricsSnapshotSerde@35229f85.
2017-01-16 15:09:43.533 [main] MetricsSnapshotReporterFactory [INFO] Setting polling interval to 60
2017-01-16 15:09:43.535 [main] MetricsSnapshotReporter [INFO] got metrics snapshot reporter properties [job name: canal-status-persistent-hstore, job id: 1, containerName: samza-container-0, version: 0.0.1, samzaVersion: 0.11.0, host: rflow61, pollingInterval 60]
2017-01-16 15:09:43.535 [main] MetricsSnapshotReporter [INFO] Registering MetricsSnapshotReporterFactory with producer.
2017-01-16 15:09:43.536 [main] SamzaContainer$ [INFO] Got metrics reporters: Set(snapshot)
2017-01-16 15:09:43.536 [main] SamzaContainer$ [INFO] Got security manager: null
2017-01-16 15:09:43.538 [main] VerifiableProperties [INFO] Verifying properties
2017-01-16 15:09:43.538 [main] VerifiableProperties [INFO] Property client.id is overridden to samza_admin-canal_status_persistent_hstore-1
2017-01-16 15:09:43.539 [main] VerifiableProperties [INFO] Property group.id is overridden to undefined-samza-consumer-group-dd9498d3-7317-4890-99e2-b793e2ced551
2017-01-16 15:09:43.539 [main] VerifiableProperties [INFO] Property zookeeper.connect is overridden to zk11:3181,zk12:3181,zk13:3181
2017-01-16 15:09:43.545 [main] VerifiableProperties [INFO] Verifying properties
2017-01-16 15:09:43.545 [main] VerifiableProperties [INFO] Property client.id is overridden to samza_checkpoint_manager-canal_status_persistent_hstore-1
2017-01-16 15:09:43.545 [main] VerifiableProperties [INFO] Property group.id is overridden to undefined-samza-consumer-group-1e97136f-499a-480d-8cfa-a4ae909559bc
2017-01-16 15:09:43.546 [main] VerifiableProperties [INFO] Property zookeeper.connect is overridden to zk11:3181,zk12:3181,zk13:3181
2017-01-16 15:09:43.552 [main] KafkaCheckpointManager [INFO] Creating KafkaCheckpointManager with: clientId=samza_checkpoint_manager-canal_status_persistent_hstore-1, checkpointTopic=__samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1, systemName=kafka
2017-01-16 15:09:43.552 [main] SamzaContainer$ [INFO] Got checkpoint manager: KafkaCheckpointManager [systemName=kafka, checkpointTopic=__samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1]
2017-01-16 15:09:43.554 [main] OffsetManager$ [INFO] No default offset for SystemStream [system=kafka, stream=tweets_distinctContent_test] defined. Using upcoming.
2017-01-16 15:09:43.556 [main] SamzaContainer$ [INFO] Got offset manager: org.apache.samza.checkpoint.OffsetManager@5d9b7a8a
2017-01-16 15:09:43.561 [main] SamzaContainer$ [INFO] Got storage engines: Set()
2017-01-16 15:09:43.561 [main] SamzaContainer$ [INFO] Got single thread mode: false
2017-01-16 15:09:43.562 [main] SamzaContainer$ [INFO] Got thread pool size: 0
2017-01-16 15:09:43.563 [main] SamzaContainer$ [INFO] Got default storage engine base directory: /data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/state
2017-01-16 15:09:43.569 [main] SamzaContainer$ [INFO] Got store consumers: Map()
2017-01-16 15:09:43.569 [main] SamzaContainer$ [WARN] No override was provided for logged store base directory. This disables local state re-use on application restart. If you want to enable this feature, set LOGGED_STORE_BASE_DIR as an environment variable in all machines running the Samza container
2017-01-16 15:09:43.569 [main] SamzaContainer$ [INFO] Got base directory for logged data stores: /data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/state
2017-01-16 15:09:43.570 [main] SamzaContainer$ [INFO] Got task stores: Map()
2017-01-16 15:09:43.572 [main] SamzaContainer$ [INFO] Retrieved SystemStreamPartitions Set(SystemStreamPartition [kafka, tweets_distinctContent_test, 0]) for Partition 0
2017-01-16 15:09:43.575 [main] SamzaContainer$ [INFO] Got store consumers: Map()
2017-01-16 15:09:43.575 [main] SamzaContainer$ [WARN] No override was provided for logged store base directory. This disables local state re-use on application restart. If you want to enable this feature, set LOGGED_STORE_BASE_DIR as an environment variable in all machines running the Samza container
2017-01-16 15:09:43.575 [main] SamzaContainer$ [INFO] Got base directory for logged data stores: /data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/state
2017-01-16 15:09:43.575 [main] SamzaContainer$ [INFO] Got task stores: Map()
2017-01-16 15:09:43.575 [main] SamzaContainer$ [INFO] Retrieved SystemStreamPartitions Set(SystemStreamPartition [kafka, tweets_distinctContent_test, 2]) for Partition 2
2017-01-16 15:09:43.575 [main] SamzaContainer$ [INFO] Got store consumers: Map()
2017-01-16 15:09:43.575 [main] SamzaContainer$ [WARN] No override was provided for logged store base directory. This disables local state re-use on application restart. If you want to enable this feature, set LOGGED_STORE_BASE_DIR as an environment variable in all machines running the Samza container
2017-01-16 15:09:43.575 [main] SamzaContainer$ [INFO] Got base directory for logged data stores: /data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/state
2017-01-16 15:09:43.576 [main] SamzaContainer$ [INFO] Got task stores: Map()
2017-01-16 15:09:43.576 [main] SamzaContainer$ [INFO] Retrieved SystemStreamPartitions Set(SystemStreamPartition [kafka, tweets_distinctContent_test, 4]) for Partition 4
2017-01-16 15:09:43.576 [main] SamzaContainer$ [INFO] Got store consumers: Map()
2017-01-16 15:09:43.576 [main] SamzaContainer$ [WARN] No override was provided for logged store base directory. This disables local state re-use on application restart. If you want to enable this feature, set LOGGED_STORE_BASE_DIR as an environment variable in all machines running the Samza container
2017-01-16 15:09:43.576 [main] SamzaContainer$ [INFO] Got base directory for logged data stores: /data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/state
2017-01-16 15:09:43.576 [main] SamzaContainer$ [INFO] Got task stores: Map()
2017-01-16 15:09:43.576 [main] SamzaContainer$ [INFO] Retrieved SystemStreamPartitions Set(SystemStreamPartition [kafka, tweets_distinctContent_test, 6]) for Partition 6
2017-01-16 15:09:43.580 [main] NoThrottlingDiskQuotaPolicy [INFO] Using a no throttling disk quota policy
2017-01-16 15:09:43.582 [main] SamzaContainer$ [INFO] Disk quotas disabled because polling interval is not set (container.disk.poll.interval.ms)
2017-01-16 15:09:43.583 [main] RunLoopFactory [INFO] Got window milliseconds: -1
2017-01-16 15:09:43.583 [main] RunLoopFactory [INFO] Got commit milliseconds: 1000
2017-01-16 15:09:43.583 [main] RunLoopFactory [INFO] Got max messages in flight: 1
2017-01-16 15:09:43.583 [main] RunLoopFactory [INFO] Got callback timeout: -1
2017-01-16 15:09:43.583 [main] RunLoopFactory [INFO] Run loop in asynchronous mode.
2017-01-16 15:09:43.587 [main] SamzaContainer$ [INFO] Samza container setup complete.
2017-01-16 15:09:43.588 [main] SamzaContainer [INFO] Starting container.
2017-01-16 15:09:43.588 [main] SamzaContainer [INFO] Registering task instances with metrics.
2017-01-16 15:09:43.589 [main] MetricsSnapshotReporter [INFO] Registering TaskName-Partition 0 with producer.
2017-01-16 15:09:43.589 [main] MetricsSnapshotReporter [INFO] Registering TaskName-Partition 2 with producer.
2017-01-16 15:09:43.589 [main] MetricsSnapshotReporter [INFO] Registering TaskName-Partition 4 with producer.
2017-01-16 15:09:43.589 [main] MetricsSnapshotReporter [INFO] Registering TaskName-Partition 6 with producer.
2017-01-16 15:09:43.590 [main] SamzaContainer [INFO] Starting JVM metrics.
2017-01-16 15:09:43.590 [main] SamzaContainer [INFO] Starting metrics reporters.
2017-01-16 15:09:43.591 [main] MetricsSnapshotReporter [INFO] Registering samza-container-0 with producer.
2017-01-16 15:09:43.591 [main] MetricsSnapshotReporter [INFO] Starting producer.
2017-01-16 15:09:43.591 [main] MetricsSnapshotReporter [INFO] Starting reporter timer.
2017-01-16 15:09:43.592 [main] SamzaContainer [INFO] Registering task instances with offsets.
2017-01-16 15:09:43.595 [main] SamzaContainer [INFO] Starting offset manager.
2017-01-16 15:09:43.597 [main] KafkaUtil [INFO] Attempting to create topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1.
2017-01-16 15:09:43.606 [ZkClient-EventThread-24-zk11:3181,zk12:3181,zk13:3181] ZkEventThread [INFO] Starting ZkClient event thread.
2017-01-16 15:09:43.611 [main] ZooKeeper [INFO] Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2017-01-16 15:09:43.611 [main] ZooKeeper [INFO] Client environment:host.name=rflow61
2017-01-16 15:09:43.611 [main] ZooKeeper [INFO] Client environment:java.version=1.8.0_77
2017-01-16 15:09:43.611 [main] ZooKeeper [INFO] Client environment:java.vendor=Oracle Corporation
2017-01-16 15:09:43.611 [main] ZooKeeper [INFO] Client environment:java.home=/usr/java/jdk1.8.0_77/jre
2017-01-16 15:09:43.611 [main] ZooKeeper [INFO] Client environment:java.class.path=/usr/hdp/current/hadoop-client/conf:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/accumulo-core-1.7.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/accumulo-fate-1.7.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/accumulo-start-1.7.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/activation-1.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/ahocorasick-0.3.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/akka-actor_2.10-2.1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/antfact-avro-1.0.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/aopalliance-1.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/apacheds-i18n-2.0.0-M15.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/api-asn1-api-1.0.0-M20.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002
/__package/lib/api-util-1.0.0-M20.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/asm-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/avro-1.7.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/broker-kafka-1.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/camel-core-2.12.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/camel-spring-2.12.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/canal-common-1.0-SNAPSHOT.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/canal-persistent-hstore-1.0-SNAPSHOT.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-1.0.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-beanutils-1.7.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-beanutils-core-1.8.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-cli-1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-codec-1.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/applicati
on_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-collections-3.2.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-compress-1.4.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-configuration-1.6.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-daemon-1.0.13.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-dbutils-1.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-digester-1.8.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-httpclient-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-io-2.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-lang-2.6.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-lang3-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-logging-1.1.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-logging-api-1.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/
lib/commons-math-2.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-math3-3.1.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-net-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-vfs2-2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/config-1.0.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/curator-client-2.7.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/curator-framework-2.7.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/curator-recipes-2.7.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/gmbal-api-only-3.0.0-b023.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/grizzled-slf4j_2.10-1.0.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/grizzly-framework-2.1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/grizzly-http-2.1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/grizzly-http-server-2.1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/applic
ation_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/grizzly-http-servlet-2.1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/grizzly-rcm-2.1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/gson-2.2.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/guava-19.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/guice-3.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/guice-servlet-3.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-annotations-2.7.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-auth-2.7.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-client-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-common-2.7.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-hdfs-2.7.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-mapreduce-client-app-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-mapreduce-cli
ent-common-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-mapreduce-client-core-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-mapreduce-client-jobclient-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-mapreduce-client-shuffle-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-yarn-api-2.7.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-yarn-client-2.7.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-yarn-common-2.7.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hadoop-yarn-server-common-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/hstore-common-1.0.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/htrace-core-3.1.0-incubating.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/httpclient-4.2.5.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/httpcore-4.2.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/httpmime-4.2.3.ja
r:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/irclib-1.10.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jackson-core-asl-1.9.13.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jackson-core-lgpl-1.9.7.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jackson-jaxrs-1.9.13.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jackson-mapper-asl-1.9.13.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jackson-mapper-lgpl-1.9.7.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jackson-xc-1.9.13.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/javax.inject-1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/java-xmlbuilder-0.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/javax.servlet-3.0.0.v201112011016.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/javax.servlet-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/javax.servlet-api-3.0.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_148229986
8039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jaxb-api-2.2.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jaxb-impl-2.2.3-1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jcommander-1.32.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jersey-client-1.9.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jersey-core-1.9.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jersey-grizzly2-1.9.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jersey-guice-1.9.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jersey-json-1.9.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jersey-server-1.9.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jersey-test-framework-core-1.9.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jersey-test-framework-grizzly2-1.9.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jets3t-0.9.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jettison-1.1.jar:/data/hadoop/ya
rn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-6.1.26.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-continuation-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-http-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-io-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-security-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-server-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-servlet-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-util-6.1.26.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-util-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-webapp-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jetty-xml-8.1.8.v20121106.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jk-analyzer-1.4.5.jar:/data/hadoop/yarn/local/usercache/ant/a
ppcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jline-0.9.94.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/joda-convert-1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/joda-time-2.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jopt-simple-3.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jsch-0.1.42.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/json-20090211.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jsoup-1.8.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/jsr305-3.0.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/juniversalchardet-1.0.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/kafka_2.10-0.8.2.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/kafka_2.8.0-0.8.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/kafka-clients-0.8.2.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/leveldbjni-all-1.8.jar:/data/hadoop/yarn/l
ocal/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/libthrift-0.9.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/log4j-1.2.17.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/logback-core-1.1.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/lucene-analyzers-common-5.3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/lucene-core-5.3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/lz4-1.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/management-api-3.0.0-b012.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/maven-scm-api-1.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/maven-scm-provider-svn-commons-1.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/maven-scm-provider-svnexe-1.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/metrics-annotation-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/metrics-core-2.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_
1482299868039_0032_01_000002/__package/lib/metrics-core-3.0.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/metrics-core-3.1.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/mime-util-2.1.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/mongo-java-driver-2.12.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/netty-3.6.2.Final.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/netty-all-4.0.23.Final.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/org.osgi.compendium-1.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/org.osgi.core-1.2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/paranamer-2.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/plexus-utils-1.5.6.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/protobuf-java-2.5.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/regexp-1.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/rl_2.10-0.4.4.jar:/data/hadoop/yarn/local/usercache/ant/app
cache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/rocksdbjni-3.13.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/samza-api-0.11.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/samza-core_2.10-0.11.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/samza-kafka_2.10-0.11.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/samza-kv_2.10-0.11.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/samza-kv-rocksdb_2.10-0.11.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/samza-log4j-0.11.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/samza-yarn_2.10-0.11.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/scala-compiler-2.10.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/scala-library-2.10.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/scala-reflect-2.10.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/scalate-core_2.10-1.6.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_
01_000002/__package/lib/scalate-util_2.10-1.6.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/scalatra_2.10-2.2.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/scalatra-common_2.10-2.2.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/scalatra-scalate_2.10-2.2.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/secbase-osgi-1.2.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/servlet-api-2.5.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/slf4j-api-1.6.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/slf4j-log4j12-1.6.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/snappy-java-1.1.1.6.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/spring-aop-3.2.4.RELEASE.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/spring-beans-3.2.4.RELEASE.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/spring-context-3.2.4.RELEASE.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/spring-core-3.2.4.RELEASE.jar:/data/h
adoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/spring-expression-3.2.4.RELEASE.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/spring-tx-3.2.4.RELEASE.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/stax-api-1.0-2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/weibo-common-1.3.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/xercesImpl-2.9.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/xml-apis-1.3.04.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/xmlenc-0.52.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/xz-1.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/zkclient-0.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/zookeeper-3.4.6.jar
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:java.library.path=::/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native::/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.5.0.0-1245/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.5.0.0-1245/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:java.io.tmpdir=/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/tmp
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:java.compiler=<NA>
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:os.name=Linux
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:os.arch=amd64
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:os.version=2.6.32-504.el6.x86_64
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:user.name=yarn
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:user.home=/home/yarn
2017-01-16 15:09:43.612 [main] ZooKeeper [INFO] Client environment:user.dir=/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002
2017-01-16 15:09:43.613 [main] ZooKeeper [INFO] Initiating client connection, connectString=zk11:3181,zk12:3181,zk13:3181 sessionTimeout=6000 watcher=org.I0Itec.zkclient.ZkClient@aafcffa
2017-01-16 15:09:43.622 [SAMZA-METRIC-SNAPSHOT-REPORTER] KafkaSystemProducer [INFO] Creating a new producer for system kafka.
2017-01-16 15:09:43.628 [main-SendThread(172.19.105.246:3181)] ClientCnxn [INFO] Opening socket connection to server 172.19.105.246/172.19.105.246:3181. Will not attempt to authenticate using SASL (unknown error)
2017-01-16 15:09:43.629 [main-SendThread(172.19.105.246:3181)] ClientCnxn [INFO] Socket connection established to 172.19.105.246/172.19.105.246:3181, initiating session
2017-01-16 15:09:43.630 [SAMZA-METRIC-SNAPSHOT-REPORTER] ProducerConfig [INFO] ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
acks = 1
batch.size = 16384
reconnect.backoff.ms = 10
bootstrap.servers = [buka1:9096, buka2:9096, buka3:9096]
receive.buffer.bytes = 32768
retry.backoff.ms = 100
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
retries = 2147483647
max.request.size = 1048576
block.on.buffer.full = true
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
metrics.sample.window.ms = 30000
send.buffer.bytes = 131072
max.in.flight.requests.per.connection = 1
metrics.num.samples = 2
linger.ms = 0
client.id = samza_producer-canal_status_persistent_hstore-1
2017-01-16 15:09:43.653 [main-SendThread(172.19.105.246:3181)] ClientCnxn [INFO] Session establishment complete on server 172.19.105.246/172.19.105.246:3181, sessionid = 0xb58fbdd701a4055, negotiated timeout = 20000
2017-01-16 15:09:43.655 [main-EventThread] ZkClient [INFO] zookeeper state changed (SyncConnected)
2017-01-16 15:09:43.697 [ZkClient-EventThread-24-zk11:3181,zk12:3181,zk13:3181] ZkEventThread [INFO] Terminate ZkClient event thread.
2017-01-16 15:09:43.714 [main] ZooKeeper [INFO] Session: 0xb58fbdd701a4055 closed
2017-01-16 15:09:43.714 [main-EventThread] ClientCnxn [INFO] EventThread shut down
2017-01-16 15:09:43.714 [main] KafkaUtil [INFO] Topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 already exists.
2017-01-16 15:09:43.715 [main] KafkaUtil [INFO] Validating topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1. Expecting partition count: 1
2017-01-16 15:09:43.717 [main] VerifiableProperties [INFO] Verifying properties
2017-01-16 15:09:43.717 [main] VerifiableProperties [INFO] Property client.id is overridden to samza_checkpoint_manager-canal_status_persistent_hstore-1
2017-01-16 15:09:43.717 [main] VerifiableProperties [INFO] Property metadata.broker.list is overridden to buka1:9096,buka2:9096,buka3:9096
2017-01-16 15:09:43.717 [main] VerifiableProperties [INFO] Property request.timeout.ms is overridden to 30000
2017-01-16 15:09:43.718 [main] ClientUtils$ [INFO] Fetching metadata from broker id:0,host:buka1,port:9096 with correlation id 0 for 1 topic(s) Set(__samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1)
2017-01-16 15:09:43.722 [main] SyncProducer [INFO] Connected to buka1:9096 for producing
2017-01-16 15:09:43.749 [main] SyncProducer [INFO] Disconnecting from buka1:9096
2017-01-16 15:09:43.750 [main] KafkaUtil [INFO] Successfully validated topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1.
2017-01-16 15:09:43.751 [main] KafkaCheckpointManager [INFO] Reading checkpoint for taskName Partition 6
2017-01-16 15:09:43.751 [main] KafkaCheckpointManager [INFO] No TaskName to checkpoint mapping provided. Reading for first time.
2017-01-16 15:09:43.756 [main] KafkaCheckpointManager [INFO] Connecting to leader 172.19.105.20:9096 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and to fetch all checkpoint messages.
2017-01-16 15:09:43.769 [main] KafkaCheckpointManager [INFO] Got offset 0 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and partition 0. Attempting to fetch messages for checkpoint log.
2017-01-16 15:09:43.777 [main] KafkaCheckpointManager [INFO] Get latest offset 4460 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and partition 0.
2017-01-16 15:09:44.700 [main] KafkaCheckpointManager [INFO] Got checkpoint state for taskName Partition 6: Checkpoint [offsets={SystemStreamPartition [kafka, tweets_distinctContent_test, 6]=1299280079}]
2017-01-16 15:09:44.700 [main] KafkaCheckpointManager [INFO] Reading checkpoint for taskName Partition 0
2017-01-16 15:09:44.700 [main] KafkaCheckpointManager [INFO] Already existing checkpoint mapping. Merging new offsets
2017-01-16 15:09:44.701 [main] KafkaCheckpointManager [INFO] Connecting to leader 172.19.105.20:9096 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and to fetch all checkpoint messages.
2017-01-16 15:09:44.701 [main] KafkaCheckpointManager [INFO] Got offset 4460 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and partition 0. Attempting to fetch messages for checkpoint log.
2017-01-16 15:09:44.710 [main] KafkaCheckpointManager [INFO] Get latest offset 4460 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and partition 0.
2017-01-16 15:09:44.711 [main] KafkaCheckpointManager [INFO] Got checkpoint state for taskName Partition 0: Checkpoint [offsets={SystemStreamPartition [kafka, tweets_distinctContent_test, 0]=1299268605}]
2017-01-16 15:09:44.711 [main] KafkaCheckpointManager [INFO] Reading checkpoint for taskName Partition 2
2017-01-16 15:09:44.711 [main] KafkaCheckpointManager [INFO] Already existing checkpoint mapping. Merging new offsets
2017-01-16 15:09:44.711 [main] KafkaCheckpointManager [INFO] Connecting to leader 172.19.105.20:9096 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and to fetch all checkpoint messages.
2017-01-16 15:09:44.711 [main] KafkaCheckpointManager [INFO] Got offset 4460 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and partition 0. Attempting to fetch messages for checkpoint log.
2017-01-16 15:09:44.719 [main] KafkaCheckpointManager [INFO] Get latest offset 4460 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and partition 0.
2017-01-16 15:09:44.720 [main] KafkaCheckpointManager [INFO] Got checkpoint state for taskName Partition 2: Checkpoint [offsets={SystemStreamPartition [kafka, tweets_distinctContent_test, 2]=1299312099}]
2017-01-16 15:09:44.720 [main] KafkaCheckpointManager [INFO] Reading checkpoint for taskName Partition 4
2017-01-16 15:09:44.720 [main] KafkaCheckpointManager [INFO] Already existing checkpoint mapping. Merging new offsets
2017-01-16 15:09:44.720 [main] KafkaCheckpointManager [INFO] Connecting to leader 172.19.105.20:9096 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and to fetch all checkpoint messages.
2017-01-16 15:09:44.720 [main] KafkaCheckpointManager [INFO] Got offset 4460 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and partition 0. Attempting to fetch messages for checkpoint log.
2017-01-16 15:09:44.730 [main] KafkaCheckpointManager [INFO] Get latest offset 4460 for topic __samza_checkpoint_ver_1_for_canal-status-persistent-hstore_1 and partition 0.
2017-01-16 15:09:44.730 [main] KafkaCheckpointManager [INFO] Got checkpoint state for taskName Partition 4: Checkpoint [offsets={SystemStreamPartition [kafka, tweets_distinctContent_test, 4]=1299300037}]
2017-01-16 15:09:44.732 [main] OffsetManager [INFO] Checkpointed offset is currently 1299280079 for SystemStreamPartition [kafka, tweets_distinctContent_test, 6]
2017-01-16 15:09:44.732 [main] OffsetManager [INFO] Checkpointed offset is currently 1299268605 for SystemStreamPartition [kafka, tweets_distinctContent_test, 0]
2017-01-16 15:09:44.732 [main] OffsetManager [INFO] Checkpointed offset is currently 1299312099 for SystemStreamPartition [kafka, tweets_distinctContent_test, 2]
2017-01-16 15:09:44.732 [main] OffsetManager [INFO] Checkpointed offset is currently 1299300037 for SystemStreamPartition [kafka, tweets_distinctContent_test, 4]
2017-01-16 15:09:44.762 [main] OffsetManager [INFO] Successfully loaded last processed offsets: {Partition 0={SystemStreamPartition [kafka, tweets_distinctContent_test, 0]=1299268605}, Partition 2={SystemStreamPartition [kafka, tweets_distinctContent_test, 2]=1299312099}, Partition 4={SystemStreamPartition [kafka, tweets_distinctContent_test, 4]=1299300037}, Partition 6={SystemStreamPartition [kafka, tweets_distinctContent_test, 6]=1299280079}}
2017-01-16 15:09:44.763 [main] OffsetManager [INFO] Successfully loaded starting offsets: Map(Partition 6 -> Map(SystemStreamPartition [kafka, tweets_distinctContent_test, 6] -> 1299280080), Partition 0 -> Map(SystemStreamPartition [kafka, tweets_distinctContent_test, 0] -> 1299268606), Partition 2 -> Map(SystemStreamPartition [kafka, tweets_distinctContent_test, 2] -> 1299312100), Partition 4 -> Map(SystemStreamPartition [kafka, tweets_distinctContent_test, 4] -> 1299300038))
2017-01-16 15:09:44.763 [main] SamzaContainer [INFO] Registering localityManager for the container
2017-01-16 15:09:44.763 [main] CoordinatorStreamSystemProducer [INFO] Starting coordinator stream producer.
2017-01-16 15:09:44.764 [main] SamzaContainer [INFO] Writing container locality and JMX address to Coordinator Stream
2017-01-16 15:09:44.765 [main] LocalityManager [INFO] Container 0 started at rflow61
2017-01-16 15:09:44.771 [main] KafkaSystemProducer [INFO] Creating a new producer for system kafka.
2017-01-16 15:09:44.771 [main] ProducerConfig [INFO] ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
acks = 1
batch.size = 16384
reconnect.backoff.ms = 10
bootstrap.servers = [buka1:9096, buka2:9096, buka3:9096]
receive.buffer.bytes = 32768
retry.backoff.ms = 100
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
retries = 2147483647
max.request.size = 1048576
block.on.buffer.full = true
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
metrics.sample.window.ms = 30000
send.buffer.bytes = 131072
max.in.flight.requests.per.connection = 1
metrics.num.samples = 2
linger.ms = 0
client.id = samza_producer-canal_status_persistent_hstore-1
2017-01-16 15:09:45.088 [main] SamzaContainer [INFO] Starting task instance stores.
2017-01-16 15:09:45.090 [main] TaskStorageManager [INFO] Validating change log streams
2017-01-16 15:09:45.091 [main] TaskStorageManager [INFO] Got change log stream metadata: Map()
2017-01-16 15:09:45.093 [main] TaskStorageManager [INFO] Assigning oldest change log offsets for taskName Partition 0: Map()
2017-01-16 15:09:45.095 [main] TaskStorageManager [INFO] Validating change log streams
2017-01-16 15:09:45.095 [main] TaskStorageManager [INFO] Got change log stream metadata: Map()
2017-01-16 15:09:45.095 [main] TaskStorageManager [INFO] Assigning oldest change log offsets for taskName Partition 2: Map()
2017-01-16 15:09:45.095 [main] TaskStorageManager [INFO] Validating change log streams
2017-01-16 15:09:45.096 [main] TaskStorageManager [INFO] Got change log stream metadata: Map()
2017-01-16 15:09:45.096 [main] TaskStorageManager [INFO] Assigning oldest change log offsets for taskName Partition 4: Map()
2017-01-16 15:09:45.096 [main] TaskStorageManager [INFO] Validating change log streams
2017-01-16 15:09:45.096 [main] TaskStorageManager [INFO] Got change log stream metadata: Map()
2017-01-16 15:09:45.096 [main] TaskStorageManager [INFO] Assigning oldest change log offsets for taskName Partition 6: Map()
2017-01-16 15:09:45.096 [main] SamzaContainer [INFO] Starting host statistics monitor
2017-01-16 15:09:45.097 [main] SamzaContainer [INFO] Registering task instances with producers.
2017-01-16 15:09:45.099 [main] SamzaContainer [INFO] Starting producer multiplexer.
2017-01-16 15:09:45.660 [main] SamzaContainer [INFO] Initializing stream tasks.
2017-01-16 15:09:45.661 [main] SamzaContainer [INFO] Registering task instances with consumers.
2017-01-16 15:09:45.669 [main] SamzaContainer [INFO] Starting consumer multiplexer.
2017-01-16 15:09:45.672 [main] KafkaSystemConsumer [INFO] Refreshing brokers for: Map([tweets_distinctContent_test,0] -> 1299268606, [tweets_distinctContent_test,4] -> 1299300038, [tweets_distinctContent_test,6] -> 1299280080, [tweets_distinctContent_test,2] -> 1299312100)
2017-01-16 15:09:45.677 [main] BrokerProxy [INFO] Creating new SimpleConsumer for host 172.19.105.22:9096 for system kafka
2017-01-16 15:09:45.679 [main] GetOffset [INFO] Validating offset 1299268606 for topic and partition [tweets_distinctContent_test,0]
2017-01-16 15:09:45.761 [main] GetOffset [INFO] Able to successfully read from offset 1299268606 for topic and partition [tweets_distinctContent_test,0]. Using it to instantiate consumer.
2017-01-16 15:09:45.761 [main] BrokerProxy [INFO] Starting BrokerProxy for 172.19.105.22:9096
2017-01-16 15:09:45.762 [main] GetOffset [INFO] Validating offset 1299300038 for topic and partition [tweets_distinctContent_test,4]
2017-01-16 15:09:45.789 [main] GetOffset [INFO] Able to successfully read from offset 1299300038 for topic and partition [tweets_distinctContent_test,4]. Using it to instantiate consumer.
2017-01-16 15:09:45.790 [main] BrokerProxy [INFO] Creating new SimpleConsumer for host 172.19.105.21:9096 for system kafka
2017-01-16 15:09:45.790 [main] GetOffset [INFO] Validating offset 1299280080 for topic and partition [tweets_distinctContent_test,6]
2017-01-16 15:09:46.266 [main] GetOffset [INFO] Able to successfully read from offset 1299280080 for topic and partition [tweets_distinctContent_test,6]. Using it to instantiate consumer.
2017-01-16 15:09:46.266 [main] BrokerProxy [INFO] Starting BrokerProxy for 172.19.105.21:9096
2017-01-16 15:09:46.266 [main] GetOffset [INFO] Validating offset 1299312100 for topic and partition [tweets_distinctContent_test,2]
2017-01-16 15:09:47.542 [main] GetOffset [INFO] Able to successfully read from offset 1299312100 for topic and partition [tweets_distinctContent_test,2]. Using it to instantiate consumer.
2017-01-16 15:09:47.572 [main] SamzaContainer [INFO] Entering run loop.
2017-01-16 15:09:47.577 [main] TaskInstance [INFO] SystemStreamPartition [kafka, tweets_distinctContent_test, 4] is catched up.
2017-01-16 15:09:47.723 [main] ZlibFactory [INFO] Successfully loaded & initialized native-zlib library
2017-01-16 15:09:47.724 [main] CodecPool [INFO] Got brand-new compressor [.deflate]
2017-01-16 15:09:47.729 [main] HStoreSystemStatusProducer [INFO] Open file: /eefung/shuqi/samza-test/sina/20170116150947_14bb4073.opened
2017-01-16 15:09:47.746 [main] CodecPool [INFO] Got brand-new compressor [.deflate]
2017-01-16 15:09:47.747 [main] HStoreSystemStatusProducer [INFO] Open file: /eefung/shuqi/samza-test/sina-interaction/20170116150947_42ee4109.opened
2017-01-16 15:09:47.763 [main] CodecPool [INFO] Got brand-new compressor [.deflate]
2017-01-16 15:09:47.763 [main] HStoreSystemStatusProducer [INFO] Open file: /eefung/shuqi/samza-test/twitter/20170116150947_6693fefa.opened
2017-01-16 15:09:47.780 [main] CodecPool [INFO] Got brand-new compressor [.deflate]
2017-01-16 15:09:47.780 [main] HStoreSystemStatusProducer [INFO] Open file: /eefung/shuqi/samza-test/twitter-interaction/20170116150947_9eb7be58.opened
2017-01-16 15:09:47.788 [main] CodecPool [INFO] Got brand-new compressor [.deflate]
2017-01-16 15:09:47.788 [main] HStoreSystemStatusProducer [INFO] Open file: /eefung/shuqi/samza-test/tencent/20170116150947_158615c1.opened
2017-01-16 15:09:47.796 [main] CodecPool [INFO] Got brand-new compressor [.deflate]
2017-01-16 15:09:47.796 [main] HStoreSystemStatusProducer [INFO] Open file: /eefung/shuqi/samza-test/tencent-interaction/20170116150947_47e4cd69.opened
2017-01-16 15:09:47.835 [main] TaskInstance [INFO] SystemStreamPartition [kafka, tweets_distinctContent_test, 0] is catched up.
2017-01-16 15:09:47.930 [main] TaskInstance [INFO] SystemStreamPartition [kafka, tweets_distinctContent_test, 6] is catched up.
2017-01-16 15:09:48.584 [main] ProducerConfig [INFO] ProducerConfig values:
compression.type = none
metric.reporters = []
metadata.max.age.ms = 300000
metadata.fetch.timeout.ms = 60000
acks = all
batch.size = 16384
reconnect.backoff.ms = 10
bootstrap.servers = [buka1:9096, buka2:9096, buka3:9096]
receive.buffer.bytes = 32768
retry.backoff.ms = 100
buffer.memory = 33554432
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
retries = 2147483647
max.request.size = 1048576
block.on.buffer.full = true
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
metrics.sample.window.ms = 30000
send.buffer.bytes = 131072
max.in.flight.requests.per.connection = 1
metrics.num.samples = 2
linger.ms = 0
client.id = samza_checkpoint_manager-canal_status_persistent_hstore-1
2017-01-16 15:09:49.776 [main] TaskInstance [INFO] SystemStreamPartition [kafka, tweets_distinctContent_test, 2] is catched up.
2017-01-16 15:11:43.859 [kafka-producer-network-thread | samza_producer-canal_status_persistent_hstore-1] Selector [WARN] Error in I/O with buka1/172.19.105.21
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
at java.lang.Thread.run(Thread.java:745)
2017-01-16 15:11:45.235 [kafka-producer-network-thread | samza_producer-canal_status_persistent_hstore-1] Selector [WARN] Error in I/O with /172.19.105.22
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
at java.lang.Thread.run(Thread.java:745)
2017-01-16 15:11:45.356 [kafka-producer-network-thread | samza_producer-canal_status_persistent_hstore-1] Selector [WARN] Error in I/O with buka1/172.19.105.21
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
at java.lang.Thread.run(Thread.java:745)
2017-01-16 15:11:48.853 [kafka-producer-network-thread | samza_checkpoint_manager-canal_status_persistent_hstore-1] Selector [WARN] Error in I/O with buka3/172.19.105.22
java.io.EOFException
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
at org.apache.kafka.common.network.Selector.poll(Selector.java:248)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122)
at java.lang.Thread.run(Thread.java:745)
2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down, will wait up to 5000 ms
2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down consumer multiplexer.
2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.22:9096
2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple consumer...
2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed at 172.19.105.22:9096 for client samza_consumer-canal_status_persistent_hstore-1] BrokerProxy [INFO] Got interrupt exception in broker proxy thread.
2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.21:9096
2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple consumer...
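The tail of the log above ("Shutting down, will wait up to 5000 ms" on Thread-9, then the ordered shutdown of consumers and broker proxies) is the JVM shutdown hook doing its work, which is why a plain `kill <pid>` (SIGTERM) is graceful while `kill -9` skips close()/stop() entirely. Below is a minimal, self-contained sketch of that hook-plus-bounded-wait pattern. The class and method names are illustrative only, not Samza's actual API; in Samza the hook is registered by SamzaContainer and the 5000 ms wait corresponds to the task.shutdown.ms setting:

```java
// Illustrative sketch of the shutdown-hook pattern seen in the container log.
// Names (ShutdownSketch, requestShutdown) are hypothetical, not Samza classes.
public class ShutdownSketch {
    // Flag the run loop polls; the hook flips it to request a graceful exit.
    static volatile boolean running = true;

    // Ask the run loop to stop, then wait a bounded grace period for it,
    // analogous to "Shutting down, will wait up to 5000 ms".
    static void requestShutdown(long waitMs, Thread runLoop) {
        System.out.println("Shutting down, will wait up to " + waitMs + " ms");
        running = false;
        try {
            runLoop.join(waitMs); // bounded wait, like task.shutdown.ms
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread runLoop = new Thread(() -> {
            while (running) {
                try {
                    Thread.sleep(10); // stand-in for processing one message batch
                } catch (InterruptedException e) {
                    return;
                }
            }
            // In a real container, task close()/stop() would run here.
            System.out.println("Shutdown complete.");
        });
        runLoop.start();
        // Registering the hook is what makes SIGTERM graceful;
        // SIGKILL (kill -9) bypasses hooks, so cleanup never runs.
        Runtime.getRuntime().addShutdownHook(
                new Thread(() -> requestShutdown(5000, runLoop)));
        Thread.sleep(50);  // simulate some uptime
        System.exit(0);    // like `kill <pid>`: runs the hook, then exits
    }
}
```

If the log ends before "Shutdown complete." (as it appears to here), the run loop did not exit within the grace period, or the process was killed hard before the hook could finish.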
LogType:stderr
LogLength:141
Log Contents:
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)
LogType:stdout
LogLength:30591
Log Contents:
home_dir=/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002
framework base (location of this script). base_dir=/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package
/usr/java/latest/bin/java -Xmx3276m -server -Dsamza.container.id=0 -Dsamza.container.name=samza-container-0 -Dlog4j.configuration=file:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/log4j.xml -Dsamza.log.dir=/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002 -Djava.io.tmpdir=/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/tmp -XX:+PrintGCDateStamps -Xloggc:/data/hadoop/yarn/log/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10241024 -d64 -cp /usr/hdp/current/hadoop-client/conf:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/accumulo-core-1.7.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/accumulo-fate-1.7.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/accumulo-start-1.7.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/activation-1.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/ahocorasick-0.3.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/akka-actor_2.10-2.1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/antfact-avro-1.0.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e2
8_1482299868039_0032_01_000002/__package/lib/aopalliance-1.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/apacheds-i18n-2.0.0-M15.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/api-asn1-api-1.0.0-M20.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/api-util-1.0.0-M20.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/asm-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/avro-1.7.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/broker-kafka-1.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/camel-core-2.12.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/camel-spring-2.12.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/canal-common-1.0-SNAPSHOT.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/canal-persistent-hstore-1.0-SNAPSHOT.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-1.0.3.jar:/data/hadoop/yar
n/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-beanutils-1.7.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-beanutils-core-1.8.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-cli-1.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-codec-1.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-collections-3.2.2.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-compress-1.4.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-configuration-1.6.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-daemon-1.0.13.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-dbutils-1.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-digester-1.8.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-httpclient-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-io-2.4.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/contain
er_e28_1482299868039_0032_01_000002/__package/lib/commons-lang-2.6.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-lang3-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-logging-1.1.3.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-logging-api-1.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-math-2.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-math3-3.1.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-net-3.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/commons-vfs2-2.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/config-1.0.0.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/curator-client-2.7.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/curator-framework-2.7.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/curator-recipes-2.7.1.jar:/data/hadoop/yarn/local/usercache/ant/appcache/application_1482299868039_0032/container_e28_1482299868039_0032_01_000002/__package/lib/gmbal-api-only-3.0.0-b023.jar:/data/hadoop/yarn/loc
[... remainder of the container classpath truncated; it enumerates the __package/lib jars (Hadoop 2.7.3/2.2.0 clients, kafka-clients-0.8.2.1, kafka_2.10-0.8.2.1, the samza-*_2.10-0.11.0 set, Scala 2.10.4, etc.) and ends with the main class org.apache.samza.container.SamzaContainer]
————————
舒琦 (Shu Qi)
Address: 6F, Unit 1, Building A4, Lugu Enterprise Plaza, 27 Wenxuan Road, Yuelu District, Changsha
Website: http://www.eefung.com
Weibo: http://weibo.com/eefung
Postal code: 410013
Phone: 400-677-0986
Fax: 0731-88519609
> On 17 Jan 2017, at 11:24, 舒琦 <sh...@eefung.com> wrote:
>
> Sorry, I forgot to attach the log file.
>
>
>> On 17 Jan 2017, at 10:40, 舒琦 <shuqi@eefung.com> wrote:
>>
>> Hi,
>>
>> Actually I checked the log using “tail” on the YARN local data dir where the container is running. I had already found the container log in HDFS, but I can’t tell the format of the log.
>>
>>
>>> On 16 Jan 2017, at 19:40, Liu Bo <di...@gmail.com> wrote:
>>>
>>> Hi
>>>
>>> I don't think you can view Samza container logs in the web UI the way you can MapReduce job history; try checking the dump folder on HDFS.
>>>
>>> There should be one aggregated log file per machine, in a folder named after the job_id.
>>>
>>> On 16 January 2017 at 15:30, 舒琦 <shuqi@eefung.com> wrote:
>>> Hi,
>>>
>>> Thanks for your help.
>>>
>>> Here are 2 questions:
>>>
>>> 1. I have defined my own HDFS producer, which implements SystemProducer and overrides the stop method (I log something in the first line of stop), but when I kill the app the log is not printed out. The tricky thing is that the logic in stop sometimes executes and sometimes does not.
>>>
>>> Below is stop method:
>>>
>>> @Override
>>> public void stop() {
>>>     try {
>>>         LOGGER.info("Begin to close files");
>>>         closeFiles();
>>>     } catch (IOException e) {
>>>         LOGGER.error("Error when close Files", e);
>>>     }
>>>
>>>     if (fs != null) {
>>>         try {
>>>             fs.close();
>>>         } catch (IOException e) {
>>>             // do nothing
>>>         }
>>>     }
>>> }
>>>
>>> Below is the log:
>>>
>>> 2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down, will wait up to 5000 ms
>>> 2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
>>> 2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down consumer multiplexer.
>>> 2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.22:9096
>>> 2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple consumer...
>>> 2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed at 172.19.105.22:9096 for client samza_consumer-canal_status_persistent_hstore-1] BrokerProxy [INFO] Got interrupt exception in broker proxy thread.
>>> 2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.21:9096
>>> 2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple consumer...
>>>
>>> You can see that the log line “Begin to close files” is not printed, so of course the logic is not executed.
>>>
>>> 2. The Hadoop cluster I use is HDP-2.5.0, and log aggregation is enabled, but the container logs are not collected; only the AM log can be seen.
>>>
>>>
>>>
>>>
>>> ————————
>>> ShuQi
>>>
>>>> On 16 Jan 2017, at 10:39, Liu Bo <diablo47@gmail.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> *container log will be removed automatically,*
>>>>
>>>>
>>>> you can turn on YARN log aggregation, so that terminated YARN jobs' logs
>>>> will be dumped to HDFS
>>>>
>>>> On 14 January 2017 at 07:44, Yi Pan <nickpan47@gmail.com> wrote:
>>>>
>>>>> Hi, Qi,
>>>>>
>>>>> Sorry to reply late. I am curious on your comment that the close and stop
>>>>> methods are not called. When user initiated a kill request, the graceful
>>>>> shutdown sequence is triggered by the shutdown hook added to
>>>>> SamzaContainer. The shutdown sequence is the following in the code:
>>>>> {code}
>>>>> info("Shutting down.")
>>>>>
>>>>> shutdownConsumers
>>>>> shutdownTask
>>>>> shutdownStores
>>>>> shutdownDiskSpaceMonitor
>>>>> shutdownHostStatisticsMonitor
>>>>> shutdownProducers
>>>>> shutdownLocalityManager
>>>>> shutdownOffsetManager
>>>>> shutdownMetrics
>>>>> shutdownSecurityManger
>>>>>
>>>>> info("Shutdown complete.")
>>>>> {code}
>>>>>
>>>>> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
>>>>> SystemProducer.close() is invoked in shutdownProducers.
>>>>>
>>>>> Could you explain why you are not able to shutdown a Samza job gracefully?
>>>>>
>>>>> Thanks!
>>>>>
>>>>> -Yi
>>>>>
>>>>> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <shuqi@eefung.com> wrote:
>>>>>
>>>>>> Hi Guys,
>>>>>>
>>>>>> How can I stop running samza job gracefully except killing it?
>>>>>>
>>>>>> Because when samza job was killed, the close and stop method in
>>>>>> BaseMessageChooser and SystemProducer will not be called and the
>>>>> container
>>>>>> log will be removed automatically, how can resolve this?
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> ————————
>>>>>> ShuQi
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> All the best
>>>>
>>>> Liu Bo
>>>
>>>
>>>
>>>
>>> --
>>> All the best
>>>
>>> Liu Bo
Re: How to gracefully stop samza job
Posted by 舒琦 <sh...@eefung.com>.
Sorry, I forgot to attach the log file.
Re: How to gracefully stop samza job
Posted by 舒琦 <sh...@eefung.com>.
Hi,
I used AggregatedLogFormat to read the log file; it is the same as what I get using “tail” on the YARN local data dir. Please check the attachment.
In the log file you can find lines like “Open file: /eefung/shuqi/samza-test”, but no lines starting with “Begin to close files”.
Another problem: the warning “samza_checkpoint_manager-canal_status_persistent_hstore-1] Selector [WARN] Error in I/O with buka3/172.19.105.22 java.io.EOFException” is reported about every 2 minutes, but it does not seem to affect reading and writing data from Kafka. The Kafka broker version I use is 0.10.1.0, the Samza version is 0.11.0, and the Kafka client version Samza uses is 0.8.2.1.
Thanks.
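By the way, the “Shutting down, will wait up to 5000 ms” line in the container log suggests the shutdown hook bounds how long the whole shutdown sequence may run: if the earlier steps (consumers, task, stores) overrun the bound, the JVM can exit before shutdownProducers runs, which would explain why stop() fires only sometimes. A minimal sketch of that pattern (the class, timings, and flag are illustrative, not Samza's actual code):

```java
public class ShutdownTimeoutDemo {
    static volatile boolean producerStopped = false;

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for the container's shutdown sequence running on its own thread.
        Thread shutdownSequence = new Thread(() -> {
            try {
                Thread.sleep(800);       // earlier steps (consumers, task, stores) run long
                producerStopped = true;  // SystemProducer.stop() would only be reached here
            } catch (InterruptedException ignored) {
            }
        });
        shutdownSequence.start();
        shutdownSequence.join(500);      // bounded wait, like the 5000 ms seen in the log
        // The bound expired before the sequence finished, so the producer was never stopped.
        System.out.println("producer stopped: " + producerStopped);
        shutdownSequence.interrupt();
    }
}
```

If that is what is happening, raising the shutdown bound might give stop() time to finish; in our version it appears to be the `task.shutdown.ms` setting (default 5000 ms), but please check the docs for your Samza version.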
> On 17 Jan 2017, at 10:40, 舒琦 <sh...@eefung.com> wrote:
>
> Hi,
>
> Actually I checked the log using “tail” on the YARN local data dir where the container is running. I had already found the container log in HDFS, but I can’t tell the format of the log.
>
>
>> On 16 Jan 2017, at 19:40, Liu Bo <di...@gmail.com> wrote:
>>
>> Hi
>>
>> I don't think you can view Samza container logs in the web UI the way you can MapReduce job history; try checking the dump folder on HDFS.
>>
>> There should be one aggregated log file per machine, in a folder named after the job_id.
>>
>> On 16 January 2017 at 15:30, 舒琦 <shuqi@eefung.com> wrote:
>> Hi,
>>
>> Thanks for your help.
>>
>> Here are 2 questions:
>>
>> 1. I have defined my own HDFS producer, which implements SystemProducer and overrides the stop method (I log something in the first line of stop), but when I kill the app the log is not printed out. The tricky thing is that the logic in stop sometimes executes and sometimes does not.
>>
>> Below is stop method:
>>
>> @Override
>> public void stop() {
>>     try {
>>         LOGGER.info("Begin to close files");
>>         closeFiles();
>>     } catch (IOException e) {
>>         LOGGER.error("Error when close Files", e);
>>     }
>>
>>     if (fs != null) {
>>         try {
>>             fs.close();
>>         } catch (IOException e) {
>>             // do nothing
>>         }
>>     }
>> }
>>
>> Below is the log:
>>
>> 2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down, will wait up to 5000 ms
>> 2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
>> 2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down consumer multiplexer.
>> 2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.22:9096
>> 2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple consumer...
>> 2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed at 172.19.105.22:9096 for client samza_consumer-canal_status_persistent_hstore-1] BrokerProxy [INFO] Got interrupt exception in broker proxy thread.
>> 2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.21:9096
>> 2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple consumer...
>>
>> You can see that the log line “Begin to close files” is not printed, so of course the logic is not executed.
>>
>> 2. The Hadoop cluster I use is HDP-2.5.0, and log aggregation is enabled, but the container logs are not collected; only the AM log can be seen.
>>
>>
>>
>>
>> ————————
>> ShuQi
>>
>>> On 16 Jan 2017, at 10:39, Liu Bo <diablo47@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> *container log will be removed automatically,*
>>>
>>>
>>> you can turn on YARN log aggregation, so that terminated YARN jobs' logs
>>> will be dumped to HDFS
>>>
>>> On 14 January 2017 at 07:44, Yi Pan <nickpan47@gmail.com> wrote:
>>>
>>>> Hi, Qi,
>>>>
>>>> Sorry to reply late. I am curious on your comment that the close and stop
>>>> methods are not called. When user initiated a kill request, the graceful
>>>> shutdown sequence is triggered by the shutdown hook added to
>>>> SamzaContainer. The shutdown sequence is the following in the code:
>>>> {code}
>>>> info("Shutting down.")
>>>>
>>>> shutdownConsumers
>>>> shutdownTask
>>>> shutdownStores
>>>> shutdownDiskSpaceMonitor
>>>> shutdownHostStatisticsMonitor
>>>> shutdownProducers
>>>> shutdownLocalityManager
>>>> shutdownOffsetManager
>>>> shutdownMetrics
>>>> shutdownSecurityManger
>>>>
>>>> info("Shutdown complete.")
>>>> {code}
>>>>
>>>> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
>>>> SystemProducer.close() is invoked in shutdownProducers.
>>>>
>>>> Could you explain why you are not able to shutdown a Samza job gracefully?
>>>>
>>>> Thanks!
>>>>
>>>> -Yi
>>>>
>>>> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <shuqi@eefung.com> wrote:
>>>>
>>>>> Hi Guys,
>>>>>
>>>>> How can I stop running samza job gracefully except killing it?
>>>>>
>>>>> Because when samza job was killed, the close and stop method in
>>>>> BaseMessageChooser and SystemProducer will not be called and the
>>>> container
>>>>> log will be removed automatically, how can resolve this?
>>>>>
>>>>> Thanks.
>>>>>
>>>>> ————————
>>>>> ShuQi
>>>>
>>>
>>>
>>>
>>> --
>>> All the best
>>>
>>> Liu Bo
>>
>>
>>
>>
>> --
>> All the best
>>
>> Liu Bo
>
Re: How to gracefully stop samza job
Posted by 舒琦 <sh...@eefung.com>.
Hi,
Actually I checked the log using “tail” on the YARN local data dir where the container is running. I had already found the container log in HDFS, but I can’t tell the format of the log.
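In case it helps, instead of decoding the AggregatedLogFormat file by hand, the YARN CLI can render aggregated logs once an application has finished. The application and container ids below are just the ones from this job; substitute your own:

```shell
# Dump all aggregated container logs for a finished application.
yarn logs -applicationId application_1482299868039_0032 > app_0032.log

# Some Hadoop 2.x releases can narrow the output to one container
# (older releases also require -nodeAddress alongside -containerId).
yarn logs -applicationId application_1482299868039_0032 \
    -containerId container_e28_1482299868039_0032_01_000002
```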
> On 16 Jan 2017, at 19:40, Liu Bo <di...@gmail.com> wrote:
>
> Hi
>
> I don't think you can view Samza container logs in the web UI the way you can MapReduce job history; try checking the dump folder on HDFS.
>
> There should be one aggregated log file per machine, in a folder named after the job_id.
>
> On 16 January 2017 at 15:30, 舒琦 <shuqi@eefung.com> wrote:
> Hi,
>
> Thanks for your help.
>
> Here are 2 questions:
>
> 1. I have defined my own HDFS producer which implemented SystemProducer and overwrite stop method(I log something in the first line of stop method), but when I kill the app, the log are not printed out. The tricky thing is the logic defined in stop method sometimes can be executed and sometimes not.
>
> Below is stop method:
>
> @Override
> public void stop() {
>     try {
>         LOGGER.info("Begin to close files");
>         closeFiles();
>     } catch (IOException e) {
>         LOGGER.error("Error when close Files", e);
>     }
>
>     if (fs != null) {
>         try {
>             fs.close();
>         } catch (IOException e) {
>             // do nothing
>         }
>     }
> }
>
> Below is the log:
>
> 2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down, will wait up to 5000 ms
> 2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
> 2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down consumer multiplexer.
> 2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.22:9096
> 2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple consumer...
> 2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed at 172.19.105.22:9096 for client samza_consumer-canal_status_persistent_hstore-1] BrokerProxy [INFO] Got interrupt exception in broker proxy thread.
> 2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.21:9096
> 2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple consumer…
>
> You can see the log “Begin to close files” are not printed out and of course the logic is not executed.
>
> 2. The hadoop cluster I use is “HDP-2.5.0”,the log aggregation is also enabled, but logs of containers can not be collected, only the log of am can be seen.
>
>
>
>
> ————————
> ShuQi
>
>> On 16 Jan 2017, at 10:39, Liu Bo <diablo47@gmail.com> wrote:
>>
>> Hi,
>>
>> *container log will be removed automatically,*
>>
>>
>> you can turn on yarn log aggregation, so that terminated yarn jobs' log
>> will be dumped to HDFS
>>
>> On 14 January 2017 at 07:44, Yi Pan <nickpan47@gmail.com> wrote:
>>
>>> Hi, Qi,
>>>
>>> Sorry to reply late. I am curious on your comment that the close and stop
>>> methods are not called. When user initiated a kill request, the graceful
>>> shutdown sequence is triggered by the shutdown hook added to
>>> SamzaContainer. The shutdown sequence is the following in the code:
>>> {code}
>>> info("Shutting down.")
>>>
>>> shutdownConsumers
>>> shutdownTask
>>> shutdownStores
>>> shutdownDiskSpaceMonitor
>>> shutdownHostStatisticsMonitor
>>> shutdownProducers
>>> shutdownLocalityManager
>>> shutdownOffsetManager
>>> shutdownMetrics
>>> shutdownSecurityManger
>>>
>>> info("Shutdown complete.")
>>> {code}
>>>
>>> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
>>> SystemProducer.close() is invoked in shutdownProducers.
>>>
>>> Could you explain why you are not able to shutdown a Samza job gracefully?
>>>
>>> Thanks!
>>>
>>> -Yi
>>>
>>> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <shuqi@eefung.com> wrote:
>>>
>>>> Hi Guys,
>>>>
>>>> How can I stop running samza job gracefully except killing it?
>>>>
>>>> Because when samza job was killed, the close and stop method in
>>>> BaseMessageChooser and SystemProducer will not be called and the
>>> container
>>>> log will be removed automatically, how can resolve this?
>>>>
>>>> Thanks.
>>>>
>>>> ————————
>>>> ShuQi
>>>
>>
>>
>>
>> --
>> All the best
>>
>> Liu Bo
>
>
>
>
> --
> All the best
>
> Liu Bo
Re: How to gracefully stop samza job
Posted by Liu Bo <di...@gmail.com>.
Hi
I don't think you can view Samza container logs in the web UI the way you can MR job
history; try checking the dump folder on HDFS.
There should be one aggregated log file per machine, in a folder named
after the job_id.
On 16 January 2017 at 15:30, 舒琦 <sh...@eefung.com> wrote:
> Hi,
>
> Thanks for your help.
>
> Here are 2 questions:
>
> 1. I have defined my own HDFS producer which implemented SystemProducer
> and overwrite stop method(I log something in the first line of stop
> method), but when I kill the app, the log are not printed out. The tricky
> thing is the logic defined in stop method sometimes can be executed and
> sometimes not.
>
> Below is stop method:
>
> @Override
> public void stop() {
>     try {
>         LOGGER.info("Begin to close files");
>         closeFiles();
>     } catch (IOException e) {
>         LOGGER.error("Error when close Files", e);
>     }
>
>     if (fs != null) {
>         try {
>             fs.close();
>         } catch (IOException e) {
>             // do nothing
>         }
>     }
> }
>
>
> Below is the log:
>
> 2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down,
> will wait up to 5000 ms
> 2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
> 2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down
> consumer multiplexer.
> 2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down
> BrokerProxy for 172.19.105.22:9096
> 2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple
> consumer...
> 2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed at
> 172.19.105.22:9096 for client samza_consumer-canal_status_persistent_hstore-1]
> BrokerProxy [INFO] Got interrupt exception in broker proxy thread.
> 2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down
> BrokerProxy for 172.19.105.21:9096
> 2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple consumer…
>
> You can see the log “Begin to close files” are not printed out and of
> course the logic is not executed.
>
> 2. The hadoop cluster I use is “HDP-2.5.0”,the log aggregation is also
> enabled, but logs of containers can not be collected, only the log of am
> can be seen.
>
>
>
> ————————
> ShuQi
>
> On 16 Jan 2017, at 10:39, Liu Bo <di...@gmail.com> wrote:
>
> Hi,
>
> *container log will be removed automatically,*
>
>
> you can turn on yarn log aggregation, so that terminated yarn jobs' log
> will be dumped to HDFS
>
> On 14 January 2017 at 07:44, Yi Pan <ni...@gmail.com> wrote:
>
> Hi, Qi,
>
> Sorry to reply late. I am curious on your comment that the close and stop
> methods are not called. When user initiated a kill request, the graceful
> shutdown sequence is triggered by the shutdown hook added to
> SamzaContainer. The shutdown sequence is the following in the code:
> {code}
> info("Shutting down.")
>
> shutdownConsumers
> shutdownTask
> shutdownStores
> shutdownDiskSpaceMonitor
> shutdownHostStatisticsMonitor
> shutdownProducers
> shutdownLocalityManager
> shutdownOffsetManager
> shutdownMetrics
> shutdownSecurityManger
>
> info("Shutdown complete.")
> {code}
>
> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
> SystemProducer.close() is invoked in shutdownProducers.
>
> Could you explain why you are not able to shutdown a Samza job gracefully?
>
> Thanks!
>
> -Yi
>
> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <sh...@eefung.com> wrote:
>
> Hi Guys,
>
> How can I stop running samza job gracefully except killing it?
>
> Because when samza job was killed, the close and stop method in
> BaseMessageChooser and SystemProducer will not be called and the
>
> container
>
> log will be removed automatically, how can resolve this?
>
> Thanks.
>
> ————————
> ShuQi
>
>
>
>
>
> --
> All the best
>
> Liu Bo
>
>
>
--
All the best
Liu Bo
Re: How to gracefully stop samza job
Posted by 舒琦 <sh...@eefung.com>.
Hi,
After killing the app, none of the containers hang; all the processes are gone, so I can’t take a thread dump.
Here I only need to write data to HDFS, so my implementation of SystemFactory returns null from both the getConsumer and getAdmin methods and only provides a valid SystemProducer.
Is this a problem?
Thanks.
————————
Shu Qi
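As an illustration of handing back inert objects instead of null from a factory, here is a minimal sketch. Note the interfaces below are simplified local stand-ins, not the real Samza `SystemConsumer`/`SystemAdmin` API, so this only shows the shape of the pattern:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;

// Simplified stand-ins for the Samza interfaces (illustration only).
interface SystemConsumer { void start(); void stop(); }
interface SystemAdmin { Map<String, Object> getStreamMetadata(Set<String> streams); }

public class NoOpSystemParts {
    // An inert consumer: safe to start/stop, never delivers messages.
    public static SystemConsumer noOpConsumer() {
        return new SystemConsumer() {
            public void start() { /* nothing to do */ }
            public void stop()  { /* nothing to block on during shutdown */ }
        };
    }

    // An inert admin: reports empty metadata instead of being null.
    public static SystemAdmin noOpAdmin() {
        return streams -> Collections.emptyMap();
    }
}
```

Returning such no-ops keeps the container's shutdown sequence from ever touching a null reference for a system that only produces.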
> On 18 Jan 2017, at 01:59, Yi Pan <ni...@gmail.com> wrote:
>
> You probably should return a valid SystemAdmin object, but returning null
> for SystemConsumer should be OK. Again, two questions:
> 1) Did the container hangs during the shutdown? Or it just crashes w/
> exception? Since stderr does not show anything, I was assuming that the
> container hangs???
> 2) If the container hangs, could you take a thread dump?
>
> Thanks!
>
> -Yi
>
> On Tue, Jan 17, 2017 at 1:50 AM, 舒琦 <shuqi@eefung.com> wrote:
>
>> Hi,
>>
>> My SystemFactory implementation return null for both 『getConsumer』 and
>> 『getAdmin』, is this the cause of the problem?
>>
>> Thanks.
>>
>> ————————
>> 舒琦
>> 地址:长沙市岳麓区文轩路27号麓谷企业广场A4栋1单元6F
>> 网址:http://www.eefung.com
>> 微博:http://weibo.com/eefung
>> 邮编:410013
>> 电话:400-677-0986
>> 传真:0731-88519609
>>
>>> On 17 Jan 2017, at 17:18, Yi Pan <ni...@gmail.com> wrote:
>>>
>>> Hi, Qi,
>>>
>>> In your log, the log line stops at "closing simple consumer...". It is
>> part of the shutdownConsumers() method in the shutdown sequence. Are you
>> sure that the container process actually proceed further in the shutdown
>> sequence? If the container process does not proceed further (i.e. somehow
>> stuck at certain steps before shutdownProducers() method), your producer
>> stop() method will not be executed. I noticed that in your log file, there
>> is not even a line "Shutting down task instance stream tasks.", which means
>> your program does not even executed shutdownTasks() in the shutdown
>> sequence (right after the shutdownConsumers()). Since in your stderr, there
>> is no exception reported either, can you check your implementation of
>> HStoreSystemConsumer to see whether the consumer hangs on shutdown? A
>> thread-dump would be super helpful here.
>>>
>>> On Sun, Jan 15, 2017 at 11:30 PM, 舒琦 <shuqi@eefung.com> wrote:
>>> Hi,
>>>
>>> Thanks for your help.
>>>
>>> Here are 2 questions:
>>>
>>> 1. I have defined my own HDFS producer which implemented SystemProducer
>> and overwrite stop method(I log something in the first line of stop
>> method), but when I kill the app, the log are not printed out. The tricky
>> thing is the logic defined in stop method sometimes can be executed and
>> sometimes not.
>>>
>>> Below is stop method:
>>>
>>> @Override
>>> public void stop() {
>>>     try {
>>>         LOGGER.info("Begin to close files");
>>>         closeFiles();
>>>     } catch (IOException e) {
>>>         LOGGER.error("Error when close Files", e);
>>>     }
>>>
>>>     if (fs != null) {
>>>         try {
>>>             fs.close();
>>>         } catch (IOException e) {
>>>             // do nothing
>>>         }
>>>     }
>>> }
>>>
>>> Below is the log:
>>>
>>> 2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down,
>> will wait up to 5000 ms
>>> 2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
>>> 2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down
>> consumer multiplexer.
>>> 2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down
>> BrokerProxy for 172.19.105.22:9096
>>> 2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple
>> consumer...
>>> 2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed
>> at 172.19.105.22:9096 for client
>> samza_consumer-canal_status_persistent_hstore-1] BrokerProxy [INFO] Got
>> interrupt exception in broker proxy thread.
>>> 2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down
>> BrokerProxy for 172.19.105.21:9096
>>> 2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple
>> consumer…
>>>
>>> You can see the log “Begin to close files” are not printed out and of
>> course the logic is not executed.
>>>
>>> 2. The hadoop cluster I use is “HDP-2.5.0”,the log aggregation is also
>> enabled, but logs of containers can not be collected, only the log of am
>> can be seen.
>>>
>>>
>>>
>>>
>>> ————————
>>> ShuQi
>>>
>>>> On 16 Jan 2017, at 10:39, Liu Bo <diablo47@gmail.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> *container log will be removed automatically,*
>>>>
>>>> you can turn on yarn log aggregation, so that terminated yarn jobs' log
>>>> will be dumped to HDFS
>>>>
>>>> On 14 January 2017 at 07:44, Yi Pan <nickpan47@gmail.com> wrote:
>>>>
>>>>> Hi, Qi,
>>>>>
>>>>> Sorry to reply late. I am curious on your comment that the close and
>> stop
>>>>> methods are not called. When user initiated a kill request, the
>> graceful
>>>>> shutdown sequence is triggered by the shutdown hook added to
>>>>> SamzaContainer. The shutdown sequence is the following in the code:
>>>>> {code}
>>>>> info("Shutting down.")
>>>>>
>>>>> shutdownConsumers
>>>>> shutdownTask
>>>>> shutdownStores
>>>>> shutdownDiskSpaceMonitor
>>>>> shutdownHostStatisticsMonitor
>>>>> shutdownProducers
>>>>> shutdownLocalityManager
>>>>> shutdownOffsetManager
>>>>> shutdownMetrics
>>>>> shutdownSecurityManger
>>>>>
>>>>> info("Shutdown complete.")
>>>>> {code}
>>>>>
>>>>> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
>>>>> SystemProducer.close() is invoked in shutdownProducers.
>>>>>
>>>>> Could you explain why you are not able to shutdown a Samza job
>> gracefully?
>>>>>
>>>>> Thanks!
>>>>>
>>>>> -Yi
>>>>>
>>>>> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <shuqi@eefung.com> wrote:
>>>>>
>>>>>> Hi Guys,
>>>>>>
>>>>>> How can I stop running samza job gracefully except killing it?
>>>>>>
>>>>>> Because when samza job was killed, the close and stop method in
>>>>>> BaseMessageChooser and SystemProducer will not be called and the
>>>>> container
>>>>>> log will be removed automatically, how can resolve this?
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> ————————
>>>>>> ShuQi
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> All the best
>>>>
>>>> Liu Bo
Re: How to gracefully stop samza job
Posted by Yi Pan <ni...@gmail.com>.
You probably should return a valid SystemAdmin object, but returning null
for SystemConsumer should be OK. Again, two questions:
1) Does the container hang during the shutdown, or does it just crash with an
exception? Since stderr does not show anything, I was assuming that the
container hangs.
2) If the container hangs, could you take a thread dump?
Thanks!
-Yi
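If attaching jstack to the container process is awkward, a thread dump can also be captured from inside the JVM with plain JDK management APIs. This is generic Java, not Samza-specific, and one caveat: ThreadInfo.toString() may truncate very deep stacks.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {
    // Render a jstack-like dump of all live threads, including lock info.
    public static String dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            sb.append(info.toString()); // thread name, state, and stack frames
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Could be wired to a debug endpoint or a signal handler in the job.
        System.out.println(dump());
    }
}
```

Printing this from a watchdog thread when shutdown takes too long would show exactly which step the container is stuck in.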
On Tue, Jan 17, 2017 at 1:50 AM, 舒琦 <sh...@eefung.com> wrote:
> Hi,
>
> My SystemFactory implementation return null for both 『getConsumer』 and
> 『getAdmin』, is this the cause of the problem?
>
> Thanks.
>
> ————————
> Shu Qi
>
> > On 17 Jan 2017, at 17:18, Yi Pan <ni...@gmail.com> wrote:
> >
> > Hi, Qi,
> >
> > In your log, the log line stops at "closing simple consumer...". It is
> part of the shutdownConsumers() method in the shutdown sequence. Are you
> sure that the container process actually proceed further in the shutdown
> sequence? If the container process does not proceed further (i.e. somehow
> stuck at certain steps before shutdownProducers() method), your producer
> stop() method will not be executed. I noticed that in your log file, there
> is not even a line "Shutting down task instance stream tasks.", which means
> your program does not even executed shutdownTasks() in the shutdown
> sequence (right after the shutdownConsumers()). Since in your stderr, there
> is no exception reported either, can you check your implementation of
> HStoreSystemConsumer to see whether the consumer hangs on shutdown? A
> thread-dump would be super helpful here.
> >
> > On Sun, Jan 15, 2017 at 11:30 PM, 舒琦 <shuqi@eefung.com> wrote:
> > Hi,
> >
> > Thanks for your help.
> >
> > Here are 2 questions:
> >
> > 1. I have defined my own HDFS producer which implemented SystemProducer
> and overwrite stop method(I log something in the first line of stop
> method), but when I kill the app, the log are not printed out. The tricky
> thing is the logic defined in stop method sometimes can be executed and
> sometimes not.
> >
> > Below is stop method:
> >
> > @Override
> > public void stop() {
> >     try {
> >         LOGGER.info("Begin to close files");
> >         closeFiles();
> >     } catch (IOException e) {
> >         LOGGER.error("Error when close Files", e);
> >     }
> >
> >     if (fs != null) {
> >         try {
> >             fs.close();
> >         } catch (IOException e) {
> >             // do nothing
> >         }
> >     }
> > }
> >
> > Below is the log:
> >
> > 2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down,
> will wait up to 5000 ms
> > 2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
> > 2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down
> consumer multiplexer.
> > 2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down
> BrokerProxy for 172.19.105.22:9096
> > 2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple
> consumer...
> > 2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed
> at 172.19.105.22:9096 for client
> samza_consumer-canal_status_persistent_hstore-1] BrokerProxy [INFO] Got
> interrupt exception in broker proxy thread.
> > 2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down
> BrokerProxy for 172.19.105.21:9096
> > 2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple
> consumer…
> >
> > You can see the log “Begin to close files” are not printed out and of
> course the logic is not executed.
> >
> > 2. The hadoop cluster I use is “HDP-2.5.0”,the log aggregation is also
> enabled, but logs of containers can not be collected, only the log of am
> can be seen.
> >
> >
> >
> >
> > ————————
> > ShuQi
> >
> >> On 16 Jan 2017, at 10:39, Liu Bo <diablo47@gmail.com> wrote:
> >>
> >> Hi,
> >>
> >> *container log will be removed automatically,*
> >>
> >> you can turn on yarn log aggregation, so that terminated yarn jobs' log
> >> will be dumped to HDFS
> >>
> >> On 14 January 2017 at 07:44, Yi Pan <nickpan47@gmail.com> wrote:
> >>
> >>> Hi, Qi,
> >>>
> >>> Sorry to reply late. I am curious on your comment that the close and
> stop
> >>> methods are not called. When user initiated a kill request, the
> graceful
> >>> shutdown sequence is triggered by the shutdown hook added to
> >>> SamzaContainer. The shutdown sequence is the following in the code:
> >>> {code}
> >>> info("Shutting down.")
> >>>
> >>> shutdownConsumers
> >>> shutdownTask
> >>> shutdownStores
> >>> shutdownDiskSpaceMonitor
> >>> shutdownHostStatisticsMonitor
> >>> shutdownProducers
> >>> shutdownLocalityManager
> >>> shutdownOffsetManager
> >>> shutdownMetrics
> >>> shutdownSecurityManger
> >>>
> >>> info("Shutdown complete.")
> >>> {code}
> >>>
> >>> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
> >>> SystemProducer.close() is invoked in shutdownProducers.
> >>>
> >>> Could you explain why you are not able to shutdown a Samza job
> gracefully?
> >>>
> >>> Thanks!
> >>>
> >>> -Yi
> >>>
> >>> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <shuqi@eefung.com> wrote:
> >>>
> >>>> Hi Guys,
> >>>>
> >>>> How can I stop running samza job gracefully except killing it?
> >>>>
> >>>> Because when samza job was killed, the close and stop method in
> >>>> BaseMessageChooser and SystemProducer will not be called and the
> >>> container
> >>>> log will be removed automatically, how can resolve this?
> >>>>
> >>>> Thanks.
> >>>>
> >>>> ————————
> >>>> ShuQi
> >>>
> >>
> >>
> >>
> >> --
> >> All the best
> >>
> >> Liu Bo
> >
> >
>
>
Re: How to gracefully stop samza job
Posted by 舒琦 <sh...@eefung.com>.
Hi,
My SystemFactory implementation returns null for both getConsumer and getAdmin; is this the cause of the problem?
Thanks.
————————
Shu Qi
> On 17 Jan 2017, at 17:18, Yi Pan <ni...@gmail.com> wrote:
>
> Hi, Qi,
>
> In your log, the log line stops at "closing simple consumer...". It is part of the shutdownConsumers() method in the shutdown sequence. Are you sure that the container process actually proceed further in the shutdown sequence? If the container process does not proceed further (i.e. somehow stuck at certain steps before shutdownProducers() method), your producer stop() method will not be executed. I noticed that in your log file, there is not even a line "Shutting down task instance stream tasks.", which means your program does not even executed shutdownTasks() in the shutdown sequence (right after the shutdownConsumers()). Since in your stderr, there is no exception reported either, can you check your implementation of HStoreSystemConsumer to see whether the consumer hangs on shutdown? A thread-dump would be super helpful here.
>
> On Sun, Jan 15, 2017 at 11:30 PM, 舒琦 <shuqi@eefung.com> wrote:
> Hi,
>
> Thanks for your help.
>
> Here are 2 questions:
>
> 1. I have defined my own HDFS producer which implemented SystemProducer and overwrite stop method(I log something in the first line of stop method), but when I kill the app, the log are not printed out. The tricky thing is the logic defined in stop method sometimes can be executed and sometimes not.
>
> Below is stop method:
>
> @Override
> public void stop() {
>     try {
>         LOGGER.info("Begin to close files");
>         closeFiles();
>     } catch (IOException e) {
>         LOGGER.error("Error when close Files", e);
>     }
>
>     if (fs != null) {
>         try {
>             fs.close();
>         } catch (IOException e) {
>             // do nothing
>         }
>     }
> }
>
> Below is the log:
>
> 2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down, will wait up to 5000 ms
> 2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
> 2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down consumer multiplexer.
> 2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.22:9096
> 2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple consumer...
> 2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed at 172.19.105.22:9096 for client samza_consumer-canal_status_persistent_hstore-1] BrokerProxy [INFO] Got interrupt exception in broker proxy thread.
> 2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.21:9096
> 2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple consumer…
>
> You can see the log “Begin to close files” are not printed out and of course the logic is not executed.
>
> 2. The hadoop cluster I use is “HDP-2.5.0”,the log aggregation is also enabled, but logs of containers can not be collected, only the log of am can be seen.
>
>
>
>
> ————————
> ShuQi
>
>> On 16 Jan 2017, at 10:39, Liu Bo <diablo47@gmail.com> wrote:
>>
>> Hi,
>>
>> *container log will be removed automatically,*
>>
>> you can turn on yarn log aggregation, so that terminated yarn jobs' log
>> will be dumped to HDFS
>>
>> On 14 January 2017 at 07:44, Yi Pan <nickpan47@gmail.com> wrote:
>>
>>> Hi, Qi,
>>>
>>> Sorry to reply late. I am curious on your comment that the close and stop
>>> methods are not called. When user initiated a kill request, the graceful
>>> shutdown sequence is triggered by the shutdown hook added to
>>> SamzaContainer. The shutdown sequence is the following in the code:
>>> {code}
>>> info("Shutting down.")
>>>
>>> shutdownConsumers
>>> shutdownTask
>>> shutdownStores
>>> shutdownDiskSpaceMonitor
>>> shutdownHostStatisticsMonitor
>>> shutdownProducers
>>> shutdownLocalityManager
>>> shutdownOffsetManager
>>> shutdownMetrics
>>> shutdownSecurityManger
>>>
>>> info("Shutdown complete.")
>>> {code}
>>>
>>> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
>>> SystemProducer.close() is invoked in shutdownProducers.
>>>
>>> Could you explain why you are not able to shutdown a Samza job gracefully?
>>>
>>> Thanks!
>>>
>>> -Yi
>>>
>>> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <shuqi@eefung.com> wrote:
>>>
>>>> Hi Guys,
>>>>
>>>> How can I stop running samza job gracefully except killing it?
>>>>
>>>> Because when samza job was killed, the close and stop method in
>>>> BaseMessageChooser and SystemProducer will not be called and the
>>> container
>>>> log will be removed automatically, how can resolve this?
>>>>
>>>> Thanks.
>>>>
>>>> ————————
>>>> ShuQi
>>>
>>
>>
>>
>> --
>> All the best
>>
>> Liu Bo
>
>
Re: How to gracefully stop samza job
Posted by Yi Pan <ni...@gmail.com>.
Hi, Qi,
In your log, the output stops at "closing simple consumer...". That is part
of the shutdownConsumers() method in the shutdown sequence. Are you sure
that the container process actually proceeds further in the shutdown
sequence? If the container process does not proceed further (i.e. it is
somehow stuck at some step before the shutdownProducers() method), your
producer's stop() method will not be executed. I noticed that your log file
does not even contain the line "Shutting down task instance stream tasks.",
which means your program never even executed shutdownTasks() in the shutdown
sequence (right after shutdownConsumers()). Since there is no exception
reported in your stderr either, can you check your implementation of
HStoreSystemConsumer to see whether the consumer hangs on shutdown? A
thread dump would be super helpful here.
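The failure mode described here (one stuck step starving every later step) follows from the shutdown steps running sequentially on a single thread. As a hypothetical illustration only, not Samza's actual code, each step could be bounded by a timeout so a hung consumer cannot keep producers from closing:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedShutdown {
    // Run one shutdown step, giving up (but proceeding) after timeoutMs.
    public static boolean runStep(String name, Runnable step, long timeoutMs) {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        Future<?> f = ex.submit(step);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;                      // step finished in time
        } catch (TimeoutException e) {
            System.err.println(name + " timed out; moving on");
            return false;                     // abandon the stuck step, continue
        } catch (Exception e) {
            return false;                     // step failed; still continue
        } finally {
            ex.shutdownNow();                 // interrupt the stuck step's thread
        }
    }

    public static void main(String[] args) {
        // Simulated hang in an early step, bounded at 100 ms...
        runStep("shutdownConsumers", () -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
        }, 100);
        // ...so the later step still gets a chance to run.
        runStep("shutdownProducers", () -> System.out.println("producers closed"), 100);
    }
}
```

With this structure, a consumer that hangs on shutdown would be abandoned after the timeout and the producer's stop()/close() would still run.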
On Sun, Jan 15, 2017 at 11:30 PM, 舒琦 <sh...@eefung.com> wrote:
> Hi,
>
> Thanks for your help.
>
> Here are 2 questions:
>
> 1. I have defined my own HDFS producer which implemented SystemProducer
> and overwrite stop method(I log something in the first line of stop
> method), but when I kill the app, the log are not printed out. The tricky
> thing is the logic defined in stop method sometimes can be executed and
> sometimes not.
>
> Below is stop method:
>
> @Override
> public void stop() {
>     try {
>         LOGGER.info("Begin to close files");
>         closeFiles();
>     } catch (IOException e) {
>         LOGGER.error("Error when close Files", e);
>     }
>
>     if (fs != null) {
>         try {
>             fs.close();
>         } catch (IOException e) {
>             // do nothing
>         }
>     }
> }
>
>
> Below is the log:
>
> 2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down,
> will wait up to 5000 ms
> 2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
> 2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down
> consumer multiplexer.
> 2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down
> BrokerProxy for 172.19.105.22:9096
> 2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple
> consumer...
> 2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed at
> 172.19.105.22:9096 for client samza_consumer-canal_status_persistent_hstore-1]
> BrokerProxy [INFO] Got interrupt exception in broker proxy thread.
> 2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down
> BrokerProxy for 172.19.105.21:9096
> 2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple consumer…
>
> You can see the log “Begin to close files” are not printed out and of
> course the logic is not executed.
>
> 2. The hadoop cluster I use is “HDP-2.5.0”,the log aggregation is also
> enabled, but logs of containers can not be collected, only the log of am
> can be seen.
>
>
>
> ————————
> ShuQi
>
> On 16 Jan 2017, at 10:39, Liu Bo <di...@gmail.com> wrote:
>
> Hi,
>
> *container log will be removed automatically,*
>
> you can turn on yarn log aggregation, so that terminated yarn jobs' log
> will be dumped to HDFS
>
> On 14 January 2017 at 07:44, Yi Pan <ni...@gmail.com> wrote:
>
> Hi, Qi,
>
> Sorry to reply late. I am curious on your comment that the close and stop
> methods are not called. When user initiated a kill request, the graceful
> shutdown sequence is triggered by the shutdown hook added to
> SamzaContainer. The shutdown sequence is the following in the code:
> {code}
> info("Shutting down.")
>
> shutdownConsumers
> shutdownTask
> shutdownStores
> shutdownDiskSpaceMonitor
> shutdownHostStatisticsMonitor
> shutdownProducers
> shutdownLocalityManager
> shutdownOffsetManager
> shutdownMetrics
> shutdownSecurityManger
>
> info("Shutdown complete.")
> {code}
>
> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
> SystemProducer.close() is invoked in shutdownProducers.
>
> Could you explain why you are not able to shutdown a Samza job gracefully?
>
> Thanks!
>
> -Yi
>
> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <sh...@eefung.com> wrote:
>
> Hi Guys,
>
> How can I stop running samza job gracefully except killing it?
>
> Because when samza job was killed, the close and stop method in
> BaseMessageChooser and SystemProducer will not be called and the
>
> container
>
> log will be removed automatically, how can resolve this?
>
> Thanks.
>
> ————————
> ShuQi
>
>
>
>
>
> --
> All the best
>
> Liu Bo
>
>
>
Re: How to gracefully stop samza job
Posted by 舒琦 <sh...@eefung.com>.
Hi,
Thanks for your help.
Here are 2 questions:
1. I have defined my own HDFS producer, which implements SystemProducer and overrides the stop method (I log something in the first line of stop), but when I kill the app, the log is not printed out. The tricky thing is that the logic defined in stop sometimes executes and sometimes does not.
Below is stop method:
@Override
public void stop() {
    try {
        LOGGER.info("Begin to close files");
        closeFiles();
    } catch (IOException e) {
        LOGGER.error("Error when close Files", e);
    }

    if (fs != null) {
        try {
            fs.close();
        } catch (IOException e) {
            // do nothing
        }
    }
}
Below is the log:
2017-01-16 15:13:35.273 [Thread-9] SamzaContainer [INFO] Shutting down, will wait up to 5000 ms
2017-01-16 15:13:35.284 [main] SamzaContainer [INFO] Shutting down.
2017-01-16 15:13:35.285 [main] SamzaContainer [INFO] Shutting down consumer multiplexer.
2017-01-16 15:13:35.287 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.22:9096
2017-01-16 15:13:35.288 [main] BrokerProxy [INFO] closing simple consumer...
2017-01-16 15:13:35.340 [SAMZA-BROKER-PROXY-BrokerProxy thread pointed at 172.19.105.22:9096 for client samza_consumer-canal_status_persistent_hstore-1] BrokerProxy [INFO] Got interrupt exception in broker proxy thread.
2017-01-16 15:13:35.340 [main] BrokerProxy [INFO] Shutting down BrokerProxy for 172.19.105.21:9096
2017-01-16 15:13:35.341 [main] BrokerProxy [INFO] closing simple consumer…
You can see that the log line “Begin to close files” is not printed out, and of course the logic is not executed.
2. The Hadoop cluster I use is HDP-2.5.0. Log aggregation is enabled, but the container logs cannot be collected; only the AM log can be seen.
————————
ShuQi
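One defensive pattern for an intermittently skipped stop() like the one above: make the close idempotent and also register it as a JVM shutdown hook, so the files are closed at most once even if both the container's shutdown sequence and the hook fire, and at least attempted even if the sequence stalls earlier. This is a generic sketch; closeFiles here is a hypothetical stand-in for the real HDFS file-flushing logic.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SafeCloser {
    private final AtomicBoolean closed = new AtomicBoolean(false);
    private final Runnable closeFiles;   // stand-in for the real HDFS close logic

    public SafeCloser(Runnable closeFiles) {
        this.closeFiles = closeFiles;
        // Safety net: runs on normal JVM termination even if stop() never does.
        Runtime.getRuntime().addShutdownHook(new Thread(this::stop));
    }

    // Idempotent: only the first caller (stop() or the hook) closes the files.
    public void stop() {
        if (closed.compareAndSet(false, true)) {
            closeFiles.run();
        }
    }
}
```

Note the hook only helps when the JVM exits normally (e.g. SIGTERM); a SIGKILL from YARN skips hooks entirely, which is one reason graceful shutdown must finish within the kill grace period.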
> On 16 Jan 2017, at 10:39, Liu Bo <di...@gmail.com> wrote:
>
> Hi,
>
> *container log will be removed automatically,*
>
> you can turn on yarn log aggregation, so that terminated yarn jobs' log
> will be dumped to HDFS
>
> On 14 January 2017 at 07:44, Yi Pan <ni...@gmail.com> wrote:
>
>> Hi, Qi,
>>
>> Sorry to reply late. I am curious on your comment that the close and stop
>> methods are not called. When user initiated a kill request, the graceful
>> shutdown sequence is triggered by the shutdown hook added to
>> SamzaContainer. The shutdown sequence is the following in the code:
>> {code}
>> info("Shutting down.")
>>
>> shutdownConsumers
>> shutdownTask
>> shutdownStores
>> shutdownDiskSpaceMonitor
>> shutdownHostStatisticsMonitor
>> shutdownProducers
>> shutdownLocalityManager
>> shutdownOffsetManager
>> shutdownMetrics
>> shutdownSecurityManger
>>
>> info("Shutdown complete.")
>> {code}
>>
>> in which, MessageChooser.stop() is invoked in shutdownConsumers, and
>> SystemProducer.close() is invoked in shutdownProducers.
>>
>> Could you explain why you are not able to shutdown a Samza job gracefully?
>>
>> Thanks!
>>
>> -Yi
>>
>> On Mon, Dec 12, 2016 at 6:33 PM, 舒琦 <sh...@eefung.com> wrote:
>>
>>> Hi Guys,
>>>
>>> How can I stop running samza job gracefully except killing it?
>>>
>>> Because when samza job was killed, the close and stop method in
>>> BaseMessageChooser and SystemProducer will not be called and the
>> container
>>> log will be removed automatically, how can resolve this?
>>>
>>> Thanks.
>>>
>>> ————————
>>> ShuQi
>>
>
>
>
> --
> All the best
>
> Liu Bo
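For readers following the shutdown discussion above: the sequence Yi Pan quotes is driven by a JVM shutdown hook registered by the container. A minimal sketch of that pattern is below; the class and method names are illustrative placeholders, not Samza's actual code. Note that a plain kill (SIGTERM) triggers shutdown hooks, while `kill -9` (SIGKILL) bypasses them entirely, which is one common reason `stop()`/`close()` never run.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a container that shuts its components down in a fixed order,
// mirroring the quoted sequence: consumers first, producers later.
public class ContainerShutdownSketch {
    static final List<String> shutdownOrder = new ArrayList<>();

    static void shutdownConsumers() { shutdownOrder.add("consumers"); } // MessageChooser.stop() would run here
    static void shutdownTask()      { shutdownOrder.add("task"); }
    static void shutdownStores()    { shutdownOrder.add("stores"); }
    static void shutdownProducers() { shutdownOrder.add("producers"); } // SystemProducer.close() would run here

    static void shutdownAll() {
        shutdownConsumers();
        shutdownTask();
        shutdownStores();
        shutdownProducers();
    }

    public static void main(String[] args) {
        // SIGTERM runs this hook before the JVM exits; SIGKILL does not.
        Runtime.getRuntime().addShutdownHook(new Thread(ContainerShutdownSketch::shutdownAll));
        System.out.println("running; send SIGTERM for a graceful shutdown");
    }
}
```

In practice this means stopping the job through YARN (e.g. `yarn application -kill <appId>`) or a plain `kill` of the container process should reach the hook, whereas `kill -9` will not.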