Posted to dev@ambari.apache.org by "anubhav singh (JIRA)" <ji...@apache.org> on 2014/04/24 22:27:15 UTC
[jira] [Created] (AMBARI-5566) NameNode fails to start
anubhav singh created AMBARI-5566:
-------------------------------------
Summary: NameNode fails to start
Key: AMBARI-5566
URL: https://issues.apache.org/jira/browse/AMBARI-5566
Project: Ambari
Issue Type: Bug
Components: Ambari-SCOM
Affects Versions: 1.5.0
Reporter: anubhav singh
Ambari fails to start the NameNode even after a successful installation. The agent log from the failing start command follows:
2014-04-24 12:53:32,513 - Skipping Execute['sh /tmp/checkForFormat.sh hdfs /etc/hadoop/conf /var/run/hadoop/hdfs/namenode/formatted/ /data/hadoop/hdfs/namenode'] due to not_if
2014-04-24 12:53:32,513 - Execute['mkdir -p /var/run/hadoop/hdfs/namenode/formatted/'] {}
2014-04-24 12:53:32,535 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2014-04-24 12:53:32,539 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2014-04-24 12:53:32,539 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}
2014-04-24 12:53:32,540 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ls /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` >/dev/null 2>&1', 'ignore_failures': True}
2014-04-24 12:53:32,567 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2014-04-24 12:53:32,567 - Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'] {'not_if': 'ls /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` >/dev/null 2>&1', 'user': 'hdfs'}
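For context, the not_if guards on the two Execute resources above implement a simple liveness check: the pid file is treated as stale unless the pid it records maps to a running process. A minimal, self-contained sketch of that check (a throwaway pid file under /tmp stands in for Ambari's real /var/run path, and portable `ps -p` replaces the log's backtick `ps` form):

```shell
#!/bin/sh
# Sketch of the not_if liveness check shown in the log above.
# /tmp/demo-namenode.pid stands in for /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid.
PIDFILE=/tmp/demo-namenode.pid
echo 4999999 > "$PIDFILE"   # a pid above the usual kernel pid_max, so no live process matches

# Same shape as the guard: the pid file exists AND its pid belongs to a live process.
if ls "$PIDFILE" >/dev/null 2>&1 && ps -p "$(cat "$PIDFILE")" >/dev/null 2>&1; then
  echo "namenode appears to be running - skipping start, keeping pid file"
else
  echo "stale or missing pid file - safe to delete and start"
  rm -f "$PIDFILE"
fi
```

Because the guard evaluated false here (pid file present but the daemon was dead), Ambari deleted the pid file and ran the start command, as the next log lines show.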
2014-04-24 12:53:36,689 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 95, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/namenode.py", line 38, in start
    namenode(action="start")
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_namenode.py", line 45, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/utils.py", line 63, in service
    not_if=service_is_up
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
    raise ex
Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-vmhost4-vm0.frem.wandisco.com.out
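hadoop-daemon.sh returning 1 only says the JVM exited; the .out file it mentions usually holds just the startup banner, while the root cause is typically in the matching .log file in the same directory. A hedged sketch of the scan worth running next (demonstrated against a throwaway log under /tmp so it is self-contained; the FATAL line is illustrative, not taken from this report, and the real directory would be /var/log/hadoop/hdfs):

```shell
#!/bin/sh
# A throwaway copy of the log directory keeps this sketch self-contained;
# on the cluster, point LOGDIR at /var/log/hadoop/hdfs instead.
LOGDIR=/tmp/demo-nn-logs
mkdir -p "$LOGDIR"
printf '%s\n' \
  '2014-04-24 12:53:36,600 INFO  namenode.NameNode: STARTUP_MSG: host = vmhost4-vm0' \
  '2014-04-24 12:53:36,650 FATAL namenode.NameNode: java.io.IOException: illustrative failure' \
  > "$LOGDIR/hadoop-hdfs-namenode-demo.log"

# Scan the daemon .log file (not the .out file) for the first fatal error.
grep -E 'FATAL|ERROR' "$LOGDIR"/hadoop-hdfs-namenode-*.log | tail -n 20
```

Whatever FATAL/ERROR line this surfaces on the affected host is the actual reason the NameNode exited and would be the useful detail to attach to this issue.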
--
This message was sent by Atlassian JIRA
(v6.2#6252)