Posted to user@spark.apache.org by aminn_524 <am...@yahoo.com> on 2014/07/04 12:14:43 UTC

Spark Stdout and stderr

I am running spark-1.0.0, connecting to a Spark standalone cluster that
has one master and two slaves. I ran wordcount.py via spark-submit; it reads
data from HDFS and also writes the results back to HDFS. So far everything is
fine and the results are correctly written into HDFS. But what concerns me is
that when I check stdout for each worker, it is empty. I don't know whether it
is supposed to be empty? And I got the following in stderr:

stderr log page for Some(app-20140704174955-0002)

Spark Executor Command: "java" "-cp" "::/usr/local/spark-1.0.0/conf:/usr/local/spark-1.0.0/assembly/target/scala-2.10/spark-assembly-1.0.0-hadoop1.2.1.jar:/usr/local/hadoop/conf" "-XX:MaxPermSize=128m" "-Xms512M" "-Xmx512M" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "akka.tcp://spark@master:54477/user/CoarseGrainedScheduler" "0" "slave2" "1" "akka.tcp://sparkWorker@slave2:41483/user/Worker" "app-20140704174955-0002"
========================================


14/07/04 17:50:14 ERROR CoarseGrainedExecutorBackend: Driver Disassociated [akka.tcp://sparkExecutor@slave2:33758] -> [akka.tcp://spark@master:54477] disassociated! Shutting down.
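
For reference, here is a minimal sketch of the kind of wordcount.py job described above, written against the Spark 1.0.x Python API. The HDFS paths, host/port, and app name are illustrative assumptions, not taken from my actual setup:

# wordcount.py - minimal PySpark word count sketch (paths/app name are placeholders)
# Submitted with something like:
#   spark-submit --master spark://master:7077 wordcount.py
from pyspark import SparkContext

if __name__ == "__main__":
    # The master URL is supplied by spark-submit, so only the app name is set here.
    sc = SparkContext(appName="PythonWordCount")

    # Read input from HDFS, split into words, count, and write the result back to HDFS.
    lines = sc.textFile("hdfs://master:9000/user/hduser/input.txt")
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.saveAsTextFile("hdfs://master:9000/user/hduser/wordcount-output")

    sc.stop()

Note that a script like this never prints anything from code running on the executors; all its output goes to HDFS, so empty per-worker stdout files would at least be consistent with a job of this shape.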



--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/sparck-Stdout-and-stderr-tp8799.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.