Posted to user@kylin.apache.org by ShaoFeng Shi <sh...@apache.org> on 2017/12/02 14:41:42 UTC

Re: Apache kylin 2.1 on Spark

Hi Manoj,

From the log in the first email, I can see the location is correct, for
example: /opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/htrace-core-3.2.0-incubating.jar

But it is still not clear why it reports another, wrong folder. (It does
not come from configuration.)

Actually, since v2.1 Kylin no longer needs to upload the HBase jars to
Spark; we will remove this step in the next release.
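
By the way, your log also shows "Neither spark.yarn.jars nor
spark.yarn.archive is set, falling back to uploading libraries under
SPARK_HOME", which matches the commented-out spark.yarn.jar line in your
kylin.properties. A rough sketch of the usual workaround (the HDFS path
below is just the spark-libs.jar location taken from your own log; adjust
it for your cluster):

    # package the Spark jars once and put the archive on HDFS
    jar cv0f spark-libs.jar -C $KYLIN_HOME/spark/jars/ .
    hadoop fs -put spark-libs.jar hdfs://sfpdev/tenants/rft/rcmo/kylin/spark/

    # then point Kylin's Spark conf at it in kylin.properties
    kylin.engine.spark-conf.spark.yarn.archive=hdfs://sfpdev/tenants/rft/rcmo/kylin/spark/spark-libs.jar

With this set, Spark reuses the archive from HDFS instead of re-uploading
the libraries on every submit.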

Regarding the second question: for a CDH release you don't need to specify
the hdp.version property, but leaving it there has no impact on CDH.
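
If you want to keep the file tidy on CDH, those three lines from your mail
can simply be commented out, e.g. in kylin.properties:

    #kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
    #kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
    #kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current

Each kylin.engine.spark-conf.* entry is passed through to spark-submit as a
--conf flag, which is why you can see them echoed on the command line at the
top of your error log.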

2017-11-30 21:17 GMT+08:00 Kumar, Manoj H <ma...@jpmorgan.com>:

> Can you please update on this? What kylin.properties do I need to set for
> Cloudera? It seems it's related to the YARN launch.
>
>
>
> Is this required for Cloudera, or is it specific to HDP (Hortonworks)?
>
>
>
> kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
> kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
> kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
>
>
>
> Regards,
>
> Manoj
>
>
>
> *From:* Kumar, Manoj H
> *Sent:* Thursday, November 30, 2017 4:53 PM
> *To:* 'user@kylin.apache.org'
> *Subject:* RE: Apache kylin 2.1 on Spark
>
>
>
>
>
>
>
> Can you please tell me how this path is getting formed? Does it come from a
> properties file or from the code base? How is this resource list being
> formed? The jar file path is not correct.
>
>
>
>
>
> 17/11/30 06:16:56 INFO yarn.Client: Source and destination file systems are the same. Not copying hdfs://sfpdev/tenants/rft/rcmo/kylin/spark/spark-libs.jar
>
> 17/11/30 06:16:56 INFO yarn.Client: Uploading resource file:/apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/lib/.9.1-1.cdh5.9.1.p0.4/jars/htrace-core-3.2.0-incubating.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-client-1.2.0-cdh5.9.1.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9 -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32626/CDH-5.9.1-1.cdh5.9
>
>
>
> Exception in thread "main" java.io.FileNotFoundException: File file:/apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/lib/.9.1-1.cdh5.9.1.p0.4/jars/htrace-core-3.2.0-incubating.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-client-1.2.0-cdh5.9.1.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9 does not exist
>         at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:537)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:750)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:527)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
>         at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:371)
>         at org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:490)
>         at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$10.apply(Client.scala:588)
>         at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$10.apply(Client.scala:587)
>         at scala.Option.foreach(Option.scala:257)
>         at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:587)
>         at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:882)
>         at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:171)
>         at org.apache.spark.deploy.yarn.Client.run(Client.scala:1167)
>         at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1226)
>         at org.apache.spark.deploy.yarn.Client.main(Client.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
>
>
>
> Regards,
>
> Manoj
>
>
>
> *From:* Kumar, Manoj H
> *Sent:* Thursday, November 30, 2017 3:01 PM
> *To:* 'user@kylin.apache.org'
> *Subject:* Apache kylin 2.1 on Spark
>
>
>
> Please advise on this, as I am running the cube building process using the
> Spark engine. What setting is missing here?
>
>
>
> kylin.env.hadoop-conf-dir=/etc/hive/conf
> #
> ## Estimate the RDD partition numbers
> kylin.engine.spark.rdd-partition-cut-mb=100
> #
> ## Minimal partition numbers of rdd
> kylin.engine.spark.min-partition=1
> #
> ## Max partition numbers of rdd
> kylin.engine.spark.max-partition=5000
> #
> ## Spark conf (default is in spark/conf/spark-defaults.conf)
> kylin.engine.spark-conf.spark.master=yarn
> kylin.engine.spark-conf.spark.submit.deployMode=cluster
> kylin.engine.spark-conf.spark.yarn.queue=RCMO_Pool
> kylin.engine.spark-conf.spark.executor.memory=4G
> kylin.engine.spark-conf.spark.executor.cores=2
> kylin.engine.spark-conf.spark.executor.instances=1
> kylin.engine.spark-conf.spark.eventLog.enabled=true
> kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
> kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
> kylin.engine.spark-conf.spark.hadoop.yarn.timeline-service.enabled=false
> #
> ## manually upload spark-assembly jar to HDFS and then set this property will avoid repeatedly uploading jar at runtime
> ##kylin.engine.spark-conf.spark.yarn.jar=hdfs://namenode:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
> kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec
>
>
>
>
>
>
>
> 2017-11-30 04:16:50,156 ERROR [Job 50b5d7ce-35e6-438d-94f9-0b969adfc1bb-192] spark.SparkExecutable:133 : error run spark job:
> java.io.IOException: OS command error exit with 1 -- export HADOOP_CONF_DIR=/etc/hive/conf && /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/spark/bin/spark-submit --class org.apache.kylin.common.util.SparkEntry --conf spark.executor.instances=1 --conf spark.yarn.queue=RCMO_Pool --conf spark.history.fs.logDirectory=hdfs:///kylin/spark-history --conf spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec --conf spark.master=yarn --conf spark.hadoop.yarn.timeline-service.enabled=false --conf spark.executor.memory=4G --conf spark.eventLog.enabled=true --conf spark.eventLog.dir=hdfs:///kylin/spark-history --conf spark.executor.cores=2 --conf spark.submit.deployMode=cluster --files /etc/hbase/conf.cloudera.hbase/hbase-site.xml --jars /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/spark/jars/htrace-core-3.0.4.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/htrace-core-3.2.0-incubating.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-client-1.2.0-cdh5.9.1.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-common-1.2.0-cdh5.9.1.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-protocol-1.2.0-cdh5.9.1.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/metrics-core-2.2.0.jar,/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/guava-12.0.1.jar,/apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/lib/kylin-job-2.1.0.jar -className org.apache.kylin.engine.spark.SparkCubingByLayer -hiveTable db_rft_rcmo_rfda.kylin_intermediate_drr_cube_saprk_15128ffc_67b5_476b_a942_48645346b64f -segmentId 15128ffc-67b5-476b-a942-48645346b64f -confPath /apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/conf -output hdfs://sfpdev/tenants/rft/rcmo/kylin/ns_rft_rcmo_creg_poc-kylin_metadata/kylin-50b5d7ce-35e6-438d-94f9-0b969adfc1bb/DRR_CUBE_SAPRK/cuboid/ -cubename DRR_CUBE_SAPRK
>
> 17/11/30 04:16:10 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm76
> 17/11/30 04:16:11 INFO yarn.Client: Requesting a new application from cluster with 19 NodeManagers
> 17/11/30 04:16:11 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (272850 MB per container)
> 17/11/30 04:16:11 INFO yarn.Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
> 17/11/30 04:16:11 INFO yarn.Client: Setting up container launch context for our AM
> 17/11/30 04:16:11 INFO yarn.Client: Setting up the launch environment for our AM container
> 17/11/30 04:16:11 INFO yarn.Client: Preparing resources for our AM container
> 17/11/30 04:16:11 INFO security.HDFSCredentialProvider: getting token for namenode: hdfs://sfpdev/user/a_rcmo_nd
> 17/11/30 04:16:11 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 704970 for a_rcmo_nd on ha-hdfs:sfpdev
> 17/11/30 04:16:14 INFO hive.metastore: Trying to connect to metastore with URI thrift://bdtpisr3n1.svr.us.jpmchase.net:9083
> 17/11/30 04:16:14 INFO hive.metastore: Connected to metastore.
> 17/11/30 04:16:15 WARN token.Token: Cannot find class for token kind HIVE_DELEGATION_TOKEN
> 17/11/30 04:16:15 INFO security.HiveCredentialProvider: Get Token from hive metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 25 61 5f 72 63 6d 6f 5f 6e 64 40 4e 41 45 41 53 54 2e 41 44 2e 4a 50 4d 4f 52 47 41 4e 43 48 41 53 45 2e 43 4f 4d 04 68 69 76 65 00 8a 01 60 0c 36 56 a7 8a 01 60 30 42 da a7 8e 39 8a 22
> 17/11/30 04:16:15 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
> 17/11/30 04:16:19 INFO yarn.Client: Uploading resource file:/tmp/spark-8cfdcc86-1bf2-4baf-b4c1-4f776c708490/__spark_libs__4437186145258387608.zip -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/__spark_libs__4437186145258387608.zip
> 17/11/30 04:16:21 INFO yarn.Client: Uploading resource file:/apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/lib/kylin-job-2.1.0.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/kylin-job-2.1.0.jar
> 17/11/30 04:16:21 INFO yarn.Client: Uploading resource file:/apps/rft/rcmo/apps/kylin/kylin_namespace/apache-kylin-2.1.0-KYLIN-2846-cdh57/spark/jars/htrace-core-3.0.4.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/htrace-core-3.0.4.jar
> 17/11/30 04:16:21 INFO yarn.Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/htrace-core-3.2.0-incubating.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/htrace-core-3.2.0-incubating.jar
> 17/11/30 04:16:21 INFO yarn.Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-client-1.2.0-cdh5.9.1.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/hbase-client-1.2.0-cdh5.9.1.jar
> 17/11/30 04:16:21 INFO yarn.Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-common-1.2.0-cdh5.9.1.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/hbase-common-1.2.0-cdh5.9.1.jar
> 17/11/30 04:16:22 INFO yarn.Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/hbase-protocol-1.2.0-cdh5.9.1.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/hbase-protocol-1.2.0-cdh5.9.1.jar
> 17/11/30 04:16:22 INFO yarn.Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/metrics-core-2.2.0.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/metrics-core-2.2.0.jar
>
> 17/11/30 04:16:22 INFO yarn.Client: Uploading resource file:/opt/cloudera/parcels/CDH-5.9.1-1.cdh5.9.1.p0.4/jars/guava-12.0.1.jar -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/guava-12.0.1.jar
> 17/11/30 04:16:22 INFO yarn.Client: Uploading resource file:/etc/hbase/conf.cloudera.hbase/hbase-site.xml -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/hbase-site.xml
> 17/11/30 04:16:22 INFO yarn.Client: Uploading resource file:/tmp/spark-8cfdcc86-1bf2-4baf-b4c1-4f776c708490/__spark_conf__5140031382599260375.zip -> hdfs://sfpdev/user/a_rcmo_nd/.sparkStaging/application_1509132635807_32198/__spark_conf__.zip
>
> 17/11/30 04:16:22 INFO spark.SecurityManager: Changing view acls to: a_rcmo_nd
> 17/11/30 04:16:22 INFO spark.SecurityManager: Changing modify acls to: a_rcmo_nd
> 17/11/30 04:16:22 INFO spark.SecurityManager: Changing view acls groups to:
> 17/11/30 04:16:22 INFO spark.SecurityManager: Changing modify acls groups to:
> 17/11/30 04:16:22 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(a_rcmo_nd); groups with view permissions: Set(); users with modify permissions: Set(a_rcmo_nd); groups with modify permissions: Set()
>
> 17/11/30 04:16:22 INFO yarn.Client: Submitting application application_1509132635807_32198 to ResourceManager
> 17/11/30 04:16:22 INFO impl.YarnClientImpl: Submitted application application_1509132635807_32198
> 17/11/30 04:16:23 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:23 INFO yarn.Client:
>          client token: Token { kind: YARN_CLIENT_TOKEN, service:  }
>          diagnostics: N/A
>          ApplicationMaster host: N/A
>          ApplicationMaster RPC port: -1
>          queue: root.RCMO_Pool
>          start time: 1512033382431
>          final status: UNDEFINED
>          tracking URL: http://bdtpisr3n2.svr.us.jpmchase.net:8088/proxy/application_1509132635807_32198/
>          user: a_rcmo_nd
> 17/11/30 04:16:24 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:25 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:26 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:27 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:28 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:29 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:30 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:31 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:32 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:33 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:34 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:35 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:36 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:37 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:38 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
> 17/11/30 04:16:39 INFO yarn.Client: Application report for application_1509132635807_32198 (state: ACCEPTED)
>
> 17/11/30 04:16:49 INFO yarn.Client: Application report for application_1509132635807_32198 (state: FAILED)
> 17/11/30 04:16:49 INFO yarn.Client:
>          client token: N/A
>          diagnostics: Application application_1509132635807_32198 failed 2 times due to AM Container for appattempt_1509132635807_32198_000002 exited with exitCode: 15
> For more detailed output, check application tracking page: http://bdtpisr3n2.svr.us.jpmchase.net:8088/proxy/application_1509132635807_32198/Then, click on links to logs of each attempt.
> Diagnostics: Exception from container-launch.
> Container id: container_e113_1509132635807_32198_02_000001
> Exit code: 15
> Stack trace: ExitCodeException exitCode=15:
>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
>         at org.apache.hadoop.util.Shell.run(Shell.java:504)
>         at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)
>         at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:373)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>         at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
>
> Shell output: main : command provided 1
>
>
>
>
>
> Regards,
>
> Manoj
>
>
>
>



-- 
Best regards,

Shaofeng Shi 史少锋