Posted to dev@ambari.apache.org by "Dmytro Shkvyra (JIRA)" <ji...@apache.org> on 2014/01/24 19:03:39 UTC

[jira] [Created] (AMBARI-4416) BUG-12604 HDFS start failed on 2.1.1 stack

Dmytro Shkvyra created AMBARI-4416:
--------------------------------------

             Summary: BUG-12604 HDFS start failed on 2.1.1 stack
                 Key: AMBARI-4416
                 URL: https://issues.apache.org/jira/browse/AMBARI-4416
             Project: Ambari
          Issue Type: Bug
          Components: controller
    Affects Versions: 1.5.0
         Environment: 2-node CentOS6.4 cluster
ambari-server --version: 1.5.0.13
ambari-server --hash: b7f6163a5cf728fb8ed4c750fc10fef799597d7c
HDP 2.1.1
            Reporter: Dmytro Shkvyra
            Assignee: Dmytro Shkvyra
             Fix For: 1.5.0


*STR:*
# Deployed a minimal cluster with HDFS and ZooKeeper.
# HDFS start failed.
# Added YARN+MR2, Nagios, and Ganglia.
# HDFS start still failed in the same way.

Output:
{noformat}
Fail: Execution of 'ulimit -c unlimited && if [ `ulimit -c` != 'unlimited' ]; then exit 77; fi &&  export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode' returned 1. 
-bash: line 0: ulimit: core file size: cannot modify limit: Operation not permitted
{noformat}
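
For context, the failing 'ulimit -c unlimited' is the guard the start script runs before launching the SecondaryNameNode; "Operation not permitted" means the hard limit on core file size for the service user is below 'unlimited', so the soft limit cannot be raised and the script exits with code 77. A minimal way to inspect and, if needed, raise that limit (assuming root access and the default 'hdfs' service account, which is an assumption here) could look like:

{noformat}
# Check the hard limit on core file size as the HDFS service user;
# a non-root process cannot raise its soft limit above this value.
su -s /bin/bash hdfs -c 'ulimit -Hc'

# If the hard limit is not 'unlimited', one option (as root) is to raise it
# in /etc/security/limits.conf, e.g.:
#   hdfs  hard  core  unlimited
#   hdfs  soft  core  unlimited
# ('hdfs' is the assumed service account; substitute the actual user.)
{noformat}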

The complete log folders are attached.


