Posted to dev@ambari.apache.org by "Andrew Onischuk (JIRA)" <ji...@apache.org> on 2015/12/08 13:22:10 UTC
[jira] [Created] (AMBARI-14264) Some component fails to start at
single node cluster via Blueprint when AMS is not on cluster
Andrew Onischuk created AMBARI-14264:
----------------------------------------
Summary: Some components fail to start on a single-node cluster created via Blueprint when AMS is not on the cluster
Key: AMBARI-14264
URL: https://issues.apache.org/jira/browse/AMBARI-14264
Project: Ambari
Issue Type: Bug
Reporter: Andrew Onischuk
Assignee: Andrew Onischuk
Fix For: 2.2.0
Set up a single-node cluster via Blueprint using the following blueprint:
{
  "configurations": [],
  "host_groups": [
    {
      "name": "host1",
      "cardinality": "1",
      "components": [
        { "name": "ZOOKEEPER_SERVER" },
        { "name": "ZOOKEEPER_CLIENT" },
        { "name": "NIMBUS" },
        { "name": "SUPERVISOR" },
        { "name": "STORM_UI_SERVER" },
        { "name": "DRPC_SERVER" }
      ]
    }
  ],
  "Blueprints": {
    "blueprint_name": "STORM",
    "stack_name": "HDP",
    "stack_version": "2.3"
  }
}
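Registering a blueprint like the one above is done with a POST to the blueprints endpoint; a minimal sketch, where the host name, port, credentials, and the `storm-blueprint.json` file name are placeholders, not values from this ticket:

```shell
# Register the blueprint under the name STORM.
# Ambari's REST API requires the X-Requested-By header on modifying requests.
curl -u admin:admin -H "X-Requested-By: ambari" \
  -X POST -d @storm-blueprint.json \
  http://c6401.ambari.apache.org:8080/api/v1/blueprints/STORM
```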
Then create the cluster with this cluster creation template:
{
  "blueprint": "STORM",
  "default_password": "password",
  "config_recommendation_strategy": "NEVER_APPLY",
  "host_groups": [
    {
      "name": "host1",
      "hosts": [
        {
          "fqdn": "c6401.ambari.apache.org",
          "ip": "192.168.64.101"
        }
      ]
    }
  ]
}
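Submitting the creation template is a POST to the clusters endpoint; again a sketch with placeholder host, credentials, and file name. The response carries a `/requests/<id>` href that can be polled to watch the START tasks, which is where the failures below show up:

```shell
# Create cluster cl1 from the registered STORM blueprint.
curl -u admin:admin -H "X-Requested-By: ambari" \
  -X POST -d @storm-cluster.json \
  http://c6401.ambari.apache.org:8080/api/v1/clusters/cl1

# Poll the returned request to follow task progress (request id is illustrative).
curl -u admin:admin \
  "http://c6401.ambari.apache.org:8080/api/v1/clusters/cl1/requests/1"
```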
The SUPERVISOR START task fails:
{
  "href" : "http://172.22.123.182:8080/api/v1/clusters/cl1/requests/5/tasks/12",
  "Tasks" : {
    "attempt_cnt" : 1,
    "cluster_name" : "cl1",
    "command" : "START",
    "command_detail" : "SUPERVISOR START",
    "end_time" : 1448629012734,
    "error_log" : "/var/lib/ambari-agent/data/errors-12.txt",
    "exit_code" : 1,
    "host_name" : "os-r6-aqtpzu-ambari-rare-4-re-5.novalocal",
    "id" : 12,
    "output_log" : "/var/lib/ambari-agent/data/output-12.txt",
    "request_id" : 5,
    "role" : "SUPERVISOR",
    "stage_id" : 2,
    "start_time" : 1448628896461,
    "status" : "FAILED",
"stderr" : "Traceback (most recent call last):\n File \"/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/supervisor.py\", line 104, in <module>\n Supervisor().execute()\n File \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", line 217, in execute\n method(env)\n File \"/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/supervisor.py\", line 87, in start\n service(\"supervisor\", action=\"start\")\n File \"/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/service.py\", line 77, in service\n path = params.storm_bin_dir)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 154, in __init__\n self.env.run()\n File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 158, in run\n self.run_action(resource, action)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 121, in run_action\n provider_action()\n File \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\", line 238, in action_run\n tries=self.resource.tries, try_sleep=self.resource.try_sleep)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 70, in inner\n result = function(command, **kwargs)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 92, in checked_call\n tries=tries, try_sleep=try_sleep)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 140, in _call_wrapper\n result = _call(command, **kwargs_copy)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 291, in _call\n raise Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. 
######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra",
"stdout" : "2015-11-27 12:55:57,680 - Group['hadoop'] {}\n2015-11-27 12:55:57,683 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}\n2015-11-27 12:55:57,684 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}\n2015-11-27 12:55:57,685 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}\n2015-11-27 12:55:57,686 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2015-11-27 12:55:57,838 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}\n2015-11-27 12:55:57,854 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if\n2015-11-27 12:55:57,872 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}\n2015-11-27 12:55:58,293 - Directory['/var/log/storm'] {'owner': 'storm', 'group': 'hadoop', 'recursive': True, 'mode': 0777}\n2015-11-27 12:55:58,376 - Directory['/var/run/storm'] {'owner': 'storm', 'cd_access': 'a', 'group': 'hadoop', 'mode': 0755, 'recursive': True}\n2015-11-27 12:55:58,569 - Directory['/hadoop/storm'] {'owner': 'storm', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}\n2015-11-27 12:55:58,736 - Directory['/usr/hdp/current/storm-supervisor/conf'] {'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}\n2015-11-27 12:55:58,796 - Changing group for /usr/hdp/current/storm-supervisor/conf from 0 to hadoop\n2015-11-27 12:55:59,000 - File['/usr/hdp/current/storm-supervisor/conf/config.yaml'] {'owner': 'storm', 'content': Template('config.yaml.j2'), 'group': 'hadoop'}\n2015-11-27 12:55:59,100 - File['/usr/hdp/current/storm-supervisor/conf/storm.yaml'] 
{'owner': 'storm', 'content': InlineTemplate(...), 'group': 'hadoop'}\n2015-11-27 12:55:59,296 - File['/usr/hdp/current/storm-supervisor/conf/storm-env.sh'] {'content': InlineTemplate(...), 'owner': 'storm'}\n2015-11-27 12:55:59,361 - Writing File['/usr/hdp/current/storm-supervisor/conf/storm-env.sh'] because contents don't match\n2015-11-27 12:55:59,412 - Directory['/usr/hdp/current/storm-supervisor/log4j2'] {'owner': 'storm', 'group': 'hadoop', 'recursive': True, 'mode': 0755}\n2015-11-27 12:55:59,482 - File['/usr/hdp/current/storm-supervisor/log4j2/cluster.xml'] {'content': InlineTemplate(...), 'owner': 'storm'}\n2015-11-27 12:55:59,574 - File['/usr/hdp/current/storm-supervisor/log4j2/worker.xml'] {'content': InlineTemplate(...), 'owner': 'storm'}\n2015-11-27 12:55:59,664 - Execute['source /usr/hdp/current/storm-supervisor/conf/storm-env.sh ; export PATH=$JAVA_HOME/bin:$PATH ; storm supervisor > /var/log/storm/supervisor.out 2>&1'] {'wait_for_finish': False, 'path': ['/usr/hdp/current/storm-supervisor/bin'], 'user': 'storm', 'not_if': \"ambari-sudo.sh su storm -l -s /bin/bash -c 'ls /var/run/storm/supervisor.pid >/dev/null 2>&1 && ps -p `cat /var/run/storm/supervisor.pid` >/dev/null 2>&1'\"}\n2015-11-27 12:55:59,722 - Execute['/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid'] {'logoutput': True, 'path': ['/usr/hdp/current/storm-supervisor/bin'], 'tries': 6, 'user': 'storm', 'try_sleep': 10}\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 12:56:00,161 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. 
######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 12:56:10,981 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 12:56:21,347 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 12:56:31,671 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 12:56:42,034 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.supervisor$ | awk {'print $1'} > /var/run/storm/supervisor.pid' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra",
    "structured_out" : {
      "version" : "2.3.4.0-3349"
    }
  }
}
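The exit code comes from the pid-capture step: after launching the daemon in the background, Ambari runs `jps -l | grep storm.daemon.supervisor$ && jps -l | grep ... | awk {'print $1'} > supervisor.pid`. When no `storm.daemon.supervisor` line is present in the `jps` output, `grep` exits 1, the `&&` chain short-circuits with status 1, and `checked_call` raises `Fail` after the retries (tries=6, try_sleep=10) are exhausted. A minimal sketch of that failure mode, with a hypothetical `jps_stub` standing in for `jps -l` while the daemon is absent:

```shell
# Stub for `jps -l` output on a host where the supervisor JVM never came up:
# there is no line ending in storm.daemon.supervisor.
jps_stub() {
  printf '1234 sun.tools.jps.Jps\n'
  printf '5678 org.apache.zookeeper.server.quorum.QuorumPeerMain\n'
}

pidfile=$(mktemp)
if jps_stub | grep 'storm\.daemon\.supervisor$' > /dev/null; then
  # Happy path: extract the pid of the matching JVM into the pid file.
  jps_stub | awk '/storm\.daemon\.supervisor$/ {print $1}' > "$pidfile"
  result="captured"
else
  # This is the branch Ambari hits here: grep finds no match and exits 1,
  # so the whole `grep && ... > pidfile` chain returns 1 and is retried.
  result="not-running"
fi
echo "$result"
```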
The NIMBUS START task fails the same way:
{
  "href" : "http://172.22.123.182:8080/api/v1/clusters/cl1/requests/12/tasks/31",
  "Tasks" : {
    "attempt_cnt" : 1,
    "cluster_name" : "cl1",
    "command" : "START",
    "command_detail" : "NIMBUS START",
    "end_time" : 1448629968713,
    "error_log" : "/var/lib/ambari-agent/data/errors-31.txt",
    "exit_code" : 1,
    "host_name" : "os-r6-aqtpzu-ambari-rare-4-re-5.novalocal",
    "id" : 31,
    "output_log" : "/var/lib/ambari-agent/data/output-31.txt",
    "request_id" : 12,
    "role" : "NIMBUS",
    "stage_id" : 0,
    "start_time" : 1448629907648,
    "status" : "FAILED",
"stderr" : "Traceback (most recent call last):\n File \"/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/nimbus.py\", line 149, in <module>\n Nimbus().execute()\n File \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", line 217, in execute\n method(env)\n File \"/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/nimbus.py\", line 70, in start\n service(\"nimbus\", action=\"start\")\n File \"/var/lib/ambari-agent/cache/common-services/STORM/0.9.1.2.1/package/scripts/service.py\", line 77, in service\n path = params.storm_bin_dir)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 154, in __init__\n self.env.run()\n File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 158, in run\n self.run_action(resource, action)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 121, in run_action\n provider_action()\n File \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\", line 238, in action_run\n tries=self.resource.tries, try_sleep=self.resource.try_sleep)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 70, in inner\n result = function(command, **kwargs)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 92, in checked_call\n tries=tries, try_sleep=try_sleep)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 140, in _call_wrapper\n result = _call(command, **kwargs_copy)\n File \"/usr/lib/python2.6/site-packages/resource_management/core/shell.py\", line 291, in _call\n raise Fail(err_msg)\nresource_management.core.exceptions.Fail: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. 
######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra",
"stdout" : "2015-11-27 13:11:52,585 - Group['hadoop'] {}\n2015-11-27 13:11:52,588 - User['storm'] {'gid': 'hadoop', 'groups': ['hadoop']}\n2015-11-27 13:11:52,589 - User['zookeeper'] {'gid': 'hadoop', 'groups': ['hadoop']}\n2015-11-27 13:11:52,590 - User['ambari-qa'] {'gid': 'hadoop', 'groups': ['users']}\n2015-11-27 13:11:52,591 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2015-11-27 13:11:52,734 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}\n2015-11-27 13:11:52,749 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if\n2015-11-27 13:11:52,767 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}\n2015-11-27 13:11:53,209 - Directory['/var/log/storm'] {'owner': 'storm', 'group': 'hadoop', 'recursive': True, 'mode': 0777}\n2015-11-27 13:11:53,295 - Directory['/var/run/storm'] {'owner': 'storm', 'cd_access': 'a', 'group': 'hadoop', 'mode': 0755, 'recursive': True}\n2015-11-27 13:11:53,493 - Directory['/hadoop/storm'] {'owner': 'storm', 'mode': 0755, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}\n2015-11-27 13:11:53,645 - Directory['/usr/hdp/current/storm-nimbus/conf'] {'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}\n2015-11-27 13:11:53,713 - Changing group for /usr/hdp/current/storm-nimbus/conf from 0 to hadoop\n2015-11-27 13:11:54,004 - File['/usr/hdp/current/storm-nimbus/conf/config.yaml'] {'owner': 'storm', 'content': Template('config.yaml.j2'), 'group': 'hadoop'}\n2015-11-27 13:11:54,109 - File['/usr/hdp/current/storm-nimbus/conf/storm.yaml'] {'owner': 
'storm', 'content': InlineTemplate(...), 'group': 'hadoop'}\n2015-11-27 13:11:54,434 - File['/usr/hdp/current/storm-nimbus/conf/storm-env.sh'] {'content': InlineTemplate(...), 'owner': 'storm'}\n2015-11-27 13:11:54,517 - Writing File['/usr/hdp/current/storm-nimbus/conf/storm-env.sh'] because contents don't match\n2015-11-27 13:11:54,554 - Directory['/usr/hdp/current/storm-nimbus/log4j2'] {'owner': 'storm', 'group': 'hadoop', 'recursive': True, 'mode': 0755}\n2015-11-27 13:11:54,635 - File['/usr/hdp/current/storm-nimbus/log4j2/cluster.xml'] {'content': InlineTemplate(...), 'owner': 'storm'}\n2015-11-27 13:11:54,750 - File['/usr/hdp/current/storm-nimbus/log4j2/worker.xml'] {'content': InlineTemplate(...), 'owner': 'storm'}\n2015-11-27 13:11:54,857 - Ranger admin not installed\n2015-11-27 13:11:54,858 - Execute['source /usr/hdp/current/storm-nimbus/conf/storm-env.sh ; export PATH=$JAVA_HOME/bin:$PATH ; storm nimbus > /var/log/storm/nimbus.out 2>&1'] {'wait_for_finish': False, 'path': ['/usr/hdp/current/storm-nimbus/bin'], 'user': 'storm', 'not_if': \"ambari-sudo.sh su storm -l -s /bin/bash -c 'ls /var/run/storm/nimbus.pid >/dev/null 2>&1 && ps -p `cat /var/run/storm/nimbus.pid` >/dev/null 2>&1'\"}\n2015-11-27 13:11:54,956 - Execute['/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid'] {'logoutput': True, 'path': ['/usr/hdp/current/storm-nimbus/bin'], 'tries': 6, 'user': 'storm', 'try_sleep': 10}\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 13:11:55,513 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. 
######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 13:12:06,641 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 13:12:17,310 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 13:12:27,627 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n2015-11-27 13:12:37,969 - Retrying after 10 seconds. Reason: Execution of '/usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ && /usr/jdk64/jdk1.8.0_60/bin/jps -l | grep storm.daemon.nimbus$ | awk {'print $1'} > /var/run/storm/nimbus.pid' returned 1. ######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra\n######## Hortonworks #############\nThis is MOTD message, added for testing in qe infra",
    "structured_out" : {
      "version" : "2.3.4.0-3349"
    }
  }
}
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)