Posted to dev@ambari.apache.org by 赵全超 <zh...@ondemand.cn> on 2014/07/11 10:56:10 UTC
Ambari-1.5.1 Agent: starting HDFS fails, throwing the exception "Too small new size specified"
HDFS: HDP-2.1
2014-07-11 15:24:24,930 - Execute['ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode'] {'not_if': 'ls /var/run/hadoop/hadoop/hadoop-hadoop-namenode.pid >/dev/null 2>&1 && ps `cat /var/run/hadoop/hadoop/hadoop-hadoop-namenode.pid` >/dev/null 2>&1', 'user': 'hadoop'}
2014-07-11 15:24:29,028 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 106, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/namenode.py", line 38, in start
    namenode(action="start")
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs_namenode.py", line 45, in namenode
    create_log_dir=True
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS/package/scripts/utils.py", line 63, in service
    not_if=service_is_up
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 239, in action_run
    raise ex
Fail: Execution of 'ulimit -c unlimited; export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode' returned 1. starting namenode, logging to /var/log/hadoop/hadoop/hadoop-hadoop-namenode-hadoop12.out
Error occurred during initialization of VM
Too small new size specified
I set the NameNode heap size to 200 in hadoop-env.sh; why does this exception occur?
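For context on the JVM constraint behind this message: HotSpot rejects a startup configuration in which the young-generation ("new") size does not fit inside the total heap, and a JVM size value written without a unit suffix (e.g. `200` instead of `200m`) is interpreted as bytes, not megabytes. The sketch below is a hypothetical validator (not Ambari code) that mirrors those two rules, assuming the standard `k`/`m`/`g` suffix semantics of the `java` launcher:

```python
# Hypothetical sanity check mirroring the JVM constraints behind
# "Too small new size specified": the young generation must be strictly
# smaller than the heap, and a bare number like "200" means bytes.

UNITS = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}

def parse_jvm_size(value):
    """Parse a JVM memory size such as '200m' into bytes."""
    value = value.strip().lower()
    if value and value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value)  # no suffix: the JVM reads this as bytes

def check_heap(max_heap, new_size):
    """Return a warning string if the settings could not start a JVM."""
    mx, mn = parse_jvm_size(max_heap), parse_jvm_size(new_size)
    if mn >= mx:
        return "new size %s does not fit inside heap %s" % (new_size, max_heap)
    return None

# "200" without a unit is only 200 bytes -- far smaller than any
# young-generation size, which matches the failure in the log above.
print(check_heap("200", "100m"))   # warns: new size does not fit
print(check_heap("200m", "100m"))  # OK -> None
```

Under this reading, a likely fix is writing the heap size with an explicit unit (e.g. `200m`) in hadoop-env.sh and keeping any `-XX:NewSize`/`-Xmn` value below it; this is an inference from the error text, not something stated in the thread.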