Posted to user@spark.apache.org by Siddharth Ubale <si...@syncoms.com> on 2016/01/20 13:29:47 UTC

Container exited with a non-zero exit code 1 - Spark Job on YARN

Hi,

I am running a Spark job on the YARN cluster.
The Spark job is a Spark Streaming application which reads JSON from a Kafka topic, inserts the JSON values into HBase tables via Phoenix, and then sends out certain messages to a websocket if the JSON satisfies certain criteria.

My cluster is a 3-node cluster with 24 GB RAM and 24 cores in total.

Now:
1. When I submit the job with 10 GB memory, the application fails, saying that memory is insufficient to run the job.
2. When the job is submitted with 6 GB RAM, it does not always run successfully (an example submit command is sketched below). Common issues faced:
                a. Container exited with a non-zero exit code 1, and after multiple such warnings the job is finished.
                b. The failed job notifies that it was unable to find a file in HDFS which is something like _hadoop_conf_xxxxxx.zip
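
For reference, the submit command is roughly along these lines (the class name, jar name and the exact driver/executor memory split below are placeholders for illustration, not the real values):

                spark-submit --master yarn --deploy-mode cluster \
                  --driver-memory 2g --executor-memory 2g --num-executors 2 \
                  --class com.example.StreamingJob streaming-job.jar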

Can someone please let me know why I am seeing the above 2 issues?

Thanks,
Siddharth Ubale,


RE: Container exited with a non-zero exit code 1 - Spark Job on YARN

Posted by Siddharth Ubale <si...@syncoms.com>.
Hi Wellington,

Thanks for the reply.

I have kept the default values for the 2 properties you mention below.
The zip file is expected by the Spark job in the Spark staging folder in HDFS; none of the documentation mentions this file.
I have also noticed one more thing: whenever YARN allocates containers on the machine from which I am submitting the job, the Spark job runs; otherwise it always fails.
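
In case it is relevant, this is where I understand the zip should be staged (the user name and application id below are placeholders; the path is the default Spark staging location as far as I know):

                hdfs dfs -ls /user/<submitting-user>/.sparkStaging/<application-id>/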

Thanks,
Siddharth Ubale



-----Original Message-----
From: Wellington Chevreuil [mailto:wellington.chevreuil@gmail.com] 
Sent: Thursday, January 21, 2016 3:44 PM
To: Siddharth Ubale <si...@syncoms.com>
Subject: Re: Container exited with a non-zero exit code 1 - Spark Job on YARN

Hi,

For the memory issues, you might need to review the current values for the maximum allowed container memory in your YARN configuration. Check the values currently defined for the "yarn.nodemanager.resource.memory-mb" and "yarn.scheduler.maximum-allocation-mb" properties.
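
For example, in yarn-site.xml the two properties look something like this (the figures below are only illustrative; size them according to what each of your nodes can actually spare):

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>20480</value> <!-- total memory YARN may hand out on one node -->
    </property>
    <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>8192</value> <!-- largest single container the scheduler will grant -->
    </property>

Any single container request above yarn.scheduler.maximum-allocation-mb is rejected outright, which could be why the 10 GB submission is refused.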

Regarding the file issue, is the file available on HDFS? Is there anything else writing to or changing the file while the job runs?



