Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/07/26 09:25:20 UTC

[jira] [Resolved] (SPARK-16723) exception in thread main org.apache.spark.sparkexception application finished with failed status

     [ https://issues.apache.org/jira/browse/SPARK-16723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-16723.
-------------------------------
    Resolution: Invalid

[~soma] JIRA isn't really for discussion or troubleshooting. If this is still mostly a question it belongs on the mailing list. Pasting miles of logs generally isn't useful. Narrow it down to the relevant parts.

> exception in thread main org.apache.spark.sparkexception application finished with failed status
> ------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-16723
>                 URL: https://issues.apache.org/jira/browse/SPARK-16723
>             Project: Spark
>          Issue Type: Question
>          Components: Streaming
>    Affects Versions: 1.6.2
>         Environment: Dataproc cluster from Google
>            Reporter: Asmaa Ali 
>              Labels: beginner
>   Original Estimate: 60h
>  Remaining Estimate: 60h
>
> What is the reason for this exception?
> cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit --class SparkBWA --master yarn-cluster --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589
> Using properties file: /usr/lib/spark/conf/spark-defaults.conf
> Adding default property: spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> Adding default property: spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
> Adding default property: spark.eventLog.enabled=true
> Adding default property: spark.driver.maxResultSize=1920m
> Adding default property: spark.shuffle.service.enabled=true
> Adding default property: spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
> Adding default property: spark.sql.parquet.cacheMetadata=false
> Adding default property: spark.driver.memory=3840m
> Adding default property: spark.dynamicAllocation.maxExecutors=10000
> Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
> Adding default property: spark.yarn.am.memoryOverhead=558
> Adding default property: spark.yarn.am.memory=5586m
> Adding default property: spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> Adding default property: spark.master=yarn-cluster
> Adding default property: spark.executor.memory=5586m
> Adding default property: spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
> Adding default property: spark.dynamicAllocation.enabled=true
> Adding default property: spark.executor.cores=2
> Adding default property: spark.yarn.executor.memoryOverhead=558
> Adding default property: spark.dynamicAllocation.minExecutors=1
> Adding default property: spark.dynamicAllocation.initialExecutors=10000
> Adding default property: spark.akka.frameSize=512
> Parsed arguments:
> master yarn-cluster
> deployMode null
> executorMemory 1500m
> executorCores 1
> totalExecutorCores null
> propertiesFile /usr/lib/spark/conf/spark-defaults.conf
> driverMemory 1500m
> driverCores null
> driverExtraClassPath null
> driverExtraLibraryPath null
> driverExtraJavaOptions -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> supervise false
> queue null
> numExecutors null
> files null
> pyFiles null
> archives file:/home/cancerdetector/SparkBWA/build/./bwa.zip
> mainClass SparkBWA
> primaryResource file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
> name SparkBWA
> childArgs [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastq Output_ERR000589]
> jars null
> packages null
> packagesExclusions null
> repositories null
> verbose true
> Spark properties used, including those specified through
> --conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
> spark.yarn.am.memoryOverhead -> 558
> spark.driver.memory -> 1500m
> spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
> spark.executor.memory -> 5586m
> spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
> spark.eventLog.enabled -> true
> spark.scheduler.minRegisteredResourcesRatio -> 0.0
> spark.dynamicAllocation.maxExecutors -> 10000
> spark.akka.frameSize -> 512
> spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> spark.sql.parquet.cacheMetadata -> false
> spark.shuffle.service.enabled -> true
> spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
> spark.dynamicAllocation.initialExecutors -> 10000
> spark.dynamicAllocation.minExecutors -> 1
> spark.yarn.executor.memoryOverhead -> 558
> spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
> spark.yarn.am.memory -> 5586m
> spark.driver.maxResultSize -> 1920m
> spark.master -> yarn-cluster
> spark.dynamicAllocation.enabled -> true
> spark.executor.cores -> 2
> Main class:
> org.apache.spark.deploy.yarn.Client
> Arguments:
> --name
> SparkBWA
> --driver-memory
> 1500m
> --executor-memory
> 1500m
> --executor-cores
> 1
> --archives
> file:/home/cancerdetector/SparkBWA/build/./bwa.zip
> --jar
> file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
> --class
> SparkBWA
> --arg
> -algorithm
> --arg
> mem
> --arg
> -reads
> --arg
> paired
> --arg
> -index
> --arg
> /Data/HumanBase/hg38
> --arg
> -partitions
> --arg
> 32
> --arg
> ERR000589_1.filt.fastq
> --arg
> ERR000589_2.filt.fastq
> --arg
> Output_ERR000589
> System properties:
> spark.yarn.am.memoryOverhead -> 558
> spark.driver.memory -> 1500m
> spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
> spark.executor.memory -> 1500m
> spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
> spark.eventLog.enabled -> true
> spark.scheduler.minRegisteredResourcesRatio -> 0.0
> SPARK_SUBMIT -> true
> spark.dynamicAllocation.maxExecutors -> 10000
> spark.akka.frameSize -> 512
> spark.sql.parquet.cacheMetadata -> false
> spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> spark.app.name -> SparkBWA
> spark.shuffle.service.enabled -> true
> spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
> spark.dynamicAllocation.initialExecutors -> 10000
> spark.dynamicAllocation.minExecutors -> 1
> spark.yarn.executor.memoryOverhead -> 558
> spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> spark.submit.deployMode -> cluster
> spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
> spark.yarn.am.memory -> 5586m
> spark.driver.maxResultSize -> 1920m
> spark.master -> yarn-cluster
> spark.dynamicAllocation.enabled -> true
> spark.executor.cores -> 1
> Classpath elements:
> spark.yarn.am.memory is set but does not apply in cluster mode.
> spark.yarn.am.memoryOverhead is set but does not apply in cluster mode.
> 16/07/22 16:21:11 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
> 16/07/22 16:21:12 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1467990031555_0089
> Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0089 finished with failed status
> at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
> at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
> at org.apache.spark.deploy.yarn.Client.main(Client.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
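
The generic "Application ... finished with failed status" raised by `Client.run` only reports that the YARN application died; the real error lives in the container logs, which is the part worth attaching to a mailing-list post. A typical way to retrieve them (assuming YARN log aggregation is enabled on the cluster; the application id is the one printed in the output above):

```shell
# Fetch the aggregated container logs for the failed application.
# The id comes from the "Submitted application ..." line above.
yarn logs -applicationId application_1467990031555_0089

# Alternatively, open the YARN ResourceManager web UI (port 8088 by
# default) and follow the application's "Logs" link for each attempt.
```

In yarn-cluster mode the first container runs the application master and the driver, so its stderr usually contains the actual stack trace behind the failure.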

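On the two "does not apply in cluster mode" warnings in the log: in yarn-cluster mode the driver runs inside the YARN application master, so `spark.yarn.am.memory` and `spark.yarn.am.memoryOverhead` (which size a separate client-mode AM) are ignored. A minimal sketch of the cluster-mode equivalents, with values taken from the command above purely for illustration:

```shell
# In yarn-cluster mode the AM *is* the driver, so size it through the
# driver settings rather than the spark.yarn.am.* properties:
spark-submit \
  --class SparkBWA \
  --master yarn-cluster \
  --driver-memory 1500m \
  --conf spark.yarn.driver.memoryOverhead=558 \
  ./SparkBWA.jar
```

`spark.yarn.driver.memoryOverhead` is the Spark 1.6-era property name; later releases renamed it to `spark.driver.memoryOverhead`.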


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
