Posted to dev@ambari.apache.org by "Hudson (JIRA)" <ji...@apache.org> on 2015/05/18 19:10:00 UTC

[jira] [Commented] (AMBARI-11212) Some services start fails due to Permission denied exception

    [ https://issues.apache.org/jira/browse/AMBARI-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14548305#comment-14548305 ] 

Hudson commented on AMBARI-11212:
---------------------------------

SUCCESS: Integrated in Ambari-trunk-Commit #2634 (See [https://builds.apache.org/job/Ambari-trunk-Commit/2634/])
AMBARI-11212. Some services start fails due to Permission denied exception (aonishuk) (aonishuk: http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=7389c57f75c5b09b78a319b946aaaca69bc9f643)
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_datanode.py
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_snamenode.py
* ambari-server/src/main/resources/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs.py
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_zkfc.py
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_journalnode.py
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_nfsgateway.py
* ambari-server/src/test/python/stacks/2.0.6/HDFS/test_namenode.py
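
The patch touches hdfs.py plus the per-component stack tests, which points at the install step for /var/lib/ambari-agent/lib/fast-hdfs-resource.jar: the agent writes that jar as root, but the HdfsResource provider later launches it with "hadoop ... jar ..." as the hdfs user. A minimal sketch of that kind of change follows; the helper function name and the 0644 mode are assumptions for illustration, not the literal diff:

    # Sketch only -- not the actual AMBARI-11212 patch. Assumption: the fix
    # gives the helper jar an explicit, world-readable mode so non-root
    # service users (e.g. hdfs) can open it via "hadoop jar".
    from resource_management.core.resources.system import File
    from resource_management.core.source import StaticFile

    def install_fast_hdfs_resource_jar():
        # Meant to run from the HDFS service scripts, inside the agent's
        # active Environment, like the other File resources in hdfs.py.
        File("/var/lib/ambari-agent/lib/fast-hdfs-resource.jar",
             content=StaticFile("fast-hdfs-resource.jar"),
             mode=0644)  # Python 2 octal literal; 0644 is an assumed value

If that is the shape of the fix, the listed test_*.py files would mostly change to expect the extra mode on this File resource for each HDFS component.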


> Some services start fails due to Permission denied exception
> ------------------------------------------------------------
>
>                 Key: AMBARI-11212
>                 URL: https://issues.apache.org/jira/browse/AMBARI-11212
>             Project: Ambari
>          Issue Type: Bug
>            Reporter: Andrew Onischuk
>            Assignee: Andrew Onischuk
>             Fix For: 2.1.0
>
>
>     "stderr" : "2015-05-18 01:11:42,014 - Error while executing command 'start':\nTraceback (most recent call last):\n  File \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", line 214, in execute\n    method(env)\n  File \"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py\", line 74, in start\n    namenode(action=\"start\", rolling_restart=rolling_restart, env=env)\n  File \"/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py\", line 89, in thunk\n    return fn(*args, **kwargs)\n  File \"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py\", line 120, in namenode\n    create_hdfs_directories(dfs_check_nn_status_cmd)\n  File \"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py\", line 185, in create_hdfs_directories\n    only_if=check #skip creation when HA not active\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 148, in __init__\n    self.env.run()\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 152, in run\n    self.run_action(resource, action)\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 118, in run_action\n    provider_action()\n  File \"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py\", line 114, in action_execute\n    logoutput=logoutput,\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 148, in __init__\n    self.env.run()\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 152, in run\n    self.run_action(resource, action)\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 118, in run_action\n    provider_action()\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\", line 246, in action_run\n    raise ex\nFail: Execution of 'hadoop --config /usr/hdp/current/hadoop-client/conf jar /var/lib/ambari-agent/lib/fast-hdfs-resource.jar /var/lib/ambari-agent/data/hdfs_resources.json' returned 1. WARNING: Use \"yarn jar\" to launch YARN applications.\r\nException in thread \"main\" java.io.IOException: Error opening job jar: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar\r\n\tat org.apache.hadoop.util.RunJar.run(RunJar.java:160)\r\n\tat org.apache.hadoop.util.RunJar.main(RunJar.java:136)\r\nCaused by: java.io.FileNotFoundException: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar (Permission denied)\r\n\tat java.util.zip.ZipFile.open(Native Method)\r\n\tat java.util.zip.ZipFile.<init>(ZipFile.java:215)\r\n\tat java.util.zip.ZipFile.<init>(ZipFile.java:145)\r\n\tat java.util.jar.JarFile.<init>(JarFile.java:154)\r\n\tat java.util.jar.JarFile.<init>(JarFile.java:91)\r\n\tat org.apache.hadoop.util.RunJar.run(RunJar.java:158)\r\n\t... 1 more"
>     
>     
>     "stdout" : "2015-05-18 01:11:22,436 - Group['hadoop'] {'ignore_failures': False}\n2015-05-18 01:11:22,438 - Modifying group hadoop\n2015-05-18 01:11:22,525 - Group['users'] {'ignore_failures': False}\n2015-05-18 01:11:22,526 - Modifying group users\n2015-05-18 01:11:22,592 - Group['knox'] {'ignore_failures': False}\n2015-05-18 01:11:22,592 - Modifying group knox\n2015-05-18 01:11:22,644 - Group['spark'] {'ignore_failures': False}\n2015-05-18 01:11:22,644 - Modifying group spark\n2015-05-18 01:11:22,699 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}\n2015-05-18 01:11:22,699 - Modifying user oozie\n2015-05-18 01:11:22,741 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:22,742 - Modifying user hive\n2015-05-18 01:11:22,784 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}\n2015-05-18 01:11:22,785 - Modifying user ambari-qa\n2015-05-18 01:11:22,831 - User['flume'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:22,831 - Modifying user flume\n2015-05-18 01:11:22,873 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:22,873 - Modifying user hdfs\n2015-05-18 01:11:22,915 - User['knox'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:22,916 - Modifying user knox\n2015-05-18 01:11:22,957 - User['storm'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:22,958 - Modifying user storm\n2015-05-18 01:11:22,998 - User['spark'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:22,999 - Modifying user spark\n2015-05-18 01:11:23,041 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,042 - Modifying user mapred\n2015-05-18 01:11:23,082 - User['accumulo'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,082 - Modifying user accumulo\n2015-05-18 01:11:23,125 - User['hbase'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,126 - Modifying user hbase\n2015-05-18 01:11:23,165 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}\n2015-05-18 01:11:23,166 - Modifying user tez\n2015-05-18 01:11:23,205 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,205 - Modifying user zookeeper\n2015-05-18 01:11:23,245 - User['falcon'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,246 - Modifying user falcon\n2015-05-18 01:11:23,285 - User['sqoop'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,286 - Modifying user sqoop\n2015-05-18 01:11:23,330 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,331 - Modifying user yarn\n2015-05-18 01:11:23,376 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,376 - Modifying user hcat\n2015-05-18 01:11:23,422 - User['ams'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,423 - Modifying user ams\n2015-05-18 01:11:23,462 - User['atlas'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}\n2015-05-18 01:11:23,463 - Modifying user atlas\n2015-05-18 01:11:23,504 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': 
StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2015-05-18 01:11:23,506 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}\n2015-05-18 01:11:23,543 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if\n2015-05-18 01:11:23,544 - Directory['/hadoop/hbase'] {'owner': 'hbase', 'recursive': True, 'mode': 0775, 'cd_access': 'a'}\n2015-05-18 01:11:23,545 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}\n2015-05-18 01:11:23,546 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}\n2015-05-18 01:11:23,582 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/hadoop/hbase'] due to not_if\n2015-05-18 01:11:23,583 - Group['hdfs'] {'ignore_failures': False}\n2015-05-18 01:11:23,583 - Modifying group hdfs\n2015-05-18 01:11:23,638 - User['hdfs'] {'ignore_failures': False, 'groups': [u'hadoop', 'users', 'hdfs', 'hadoop', u'hdfs']}\n2015-05-18 01:11:23,638 - Modifying user hdfs\n2015-05-18 01:11:23,678 - Directory['/etc/hadoop'] {'mode': 0755}\n2015-05-18 01:11:23,695 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}\n2015-05-18 01:11:23,713 - Execute['('setenforce', '0')'] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}\n2015-05-18 01:11:23,861 - Directory['/grid/0/log/hadoop'] {'owner': 'root', 'mode': 0775, 'group': 'hadoop', 'recursive': True, 'cd_access': 'a'}\n2015-05-18 01:11:23,862 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True, 'cd_access': 'a'}\n2015-05-18 01:11:23,863 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True, 'cd_access': 'a'}\n2015-05-18 01:11:23,869 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}\n2015-05-18 01:11:23,871 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}\n2015-05-18 01:11:23,872 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}\n2015-05-18 01:11:23,885 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}\n2015-05-18 01:11:23,886 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}\n2015-05-18 01:11:23,887 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}\n2015-05-18 01:11:23,895 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'group': 'hadoop'}\n2015-05-18 01:11:23,896 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'mode': 0755}\n2015-05-18 01:11:24,231 - Directory['/etc/security/limits.d'] 
{'owner': 'root', 'group': 'root', 'recursive': True}\n2015-05-18 01:11:24,238 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}\n2015-05-18 01:11:24,239 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}\n2015-05-18 01:11:24,251 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml\n2015-05-18 01:11:24,251 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}\n2015-05-18 01:11:24,260 - Writing File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] because contents don't match\n2015-05-18 01:11:24,261 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}\n2015-05-18 01:11:24,272 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml\n2015-05-18 01:11:24,273 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}\n2015-05-18 01:11:24,278 - Writing File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] because contents don't match\n2015-05-18 01:11:24,279 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}\n2015-05-18 01:11:24,290 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml\n2015-05-18 01:11:24,290 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}\n2015-05-18 01:11:24,296 - Writing File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] because contents don't match\n2015-05-18 01:11:24,296 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}\n2015-05-18 01:11:24,307 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml\n2015-05-18 01:11:24,307 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}\n2015-05-18 01:11:24,349 - Writing File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] because contents don't match\n2015-05-18 01:11:24,350 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}\n2015-05-18 01:11:24,360 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml\n2015-05-18 01:11:24,361 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}\n2015-05-18 01:11:24,382 - Writing File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] because contents don't match\n2015-05-18 
01:11:24,384 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}\n2015-05-18 01:11:24,385 - File['/var/lib/ambari-agent/lib/fast-hdfs-resource.jar'] {'content': StaticFile('fast-hdfs-resource.jar')}\n2015-05-18 01:11:24,441 - Package['hadoop-lzo'] {}\n2015-05-18 01:11:25,090 - Skipping installation of existing package hadoop-lzo\n2015-05-18 01:11:25,091 - Package['lzo'] {}\n2015-05-18 01:11:25,759 - Skipping installation of existing package lzo\n2015-05-18 01:11:25,760 - Package['hadoop-lzo-native'] {}\n2015-05-18 01:11:26,413 - Skipping installation of existing package hadoop-lzo-native\n2015-05-18 01:11:26,413 - Package['hadooplzo_2_3_*'] {}\n2015-05-18 01:11:27,567 - Skipping installation of existing package hadooplzo_2_3_*\n2015-05-18 01:11:27,568 - Directory['/grid/0/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'recursive': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}\n2015-05-18 01:11:27,570 - Ranger admin not installed\n2015-05-18 01:11:27,571 - Execute['ls /grid/0/hadoop/hdfs/namenode | wc -l  | grep -q ^0$'] {}\n2015-05-18 01:11:27,608 - Execute['yes Y | hdfs --config /usr/hdp/current/hadoop-client/conf namenode -format'] {'path': ['/usr/hdp/current/hadoop-client/bin'], 'user': 'hdfs'}\n2015-05-18 01:11:30,327 - Directory['/grid/0/hadoop/hdfs/namenode/namenode-formatted/'] {'recursive': True}\n2015-05-18 01:11:30,328 - Creating directory Directory['/grid/0/hadoop/hdfs/namenode/namenode-formatted/']\n2015-05-18 01:11:30,332 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}\n2015-05-18 01:11:30,333 - Writing File['/etc/hadoop/conf/dfs.exclude'] because it doesn't exist\n2015-05-18 01:11:30,333 - Changing owner for /etc/hadoop/conf/dfs.exclude from 0 to hdfs\n2015-05-18 01:11:30,334 - Changing group for /etc/hadoop/conf/dfs.exclude from 0 to hadoop\n2015-05-18 01:11:30,334 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}\n2015-05-18 01:11:30,335 - Changing owner for /var/run/hadoop from 0 to hdfs\n2015-05-18 01:11:30,335 - Changing group for /var/run/hadoop from 0 to hadoop\n2015-05-18 01:11:30,336 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}\n2015-05-18 01:11:30,337 - Directory['/grid/0/log/hadoop/hdfs'] {'owner': 'hdfs', 'recursive': True}\n2015-05-18 01:11:30,337 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ls /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` >/dev/null 2>&1'}\n2015-05-18 01:11:30,374 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ls /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid >/dev/null 2>&1 && ps -p `cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid` >/dev/null 2>&1'}\n2015-05-18 01:11:34,608 - call['hadoop dfsadmin -fs hdfs://ip-172-31-29-42.ec2.internal:8020 -safemode get | grep 'Safe mode is OFF''] {}\n2015-05-18 01:11:38,219 - call returned (0, 'DEPRECATED: Use of this script to execute hdfs command is deprecated.\\r\\nInstead use the hdfs command for it.\\r\\n\\r\\nSafe mode is OFF\\r')\n2015-05-18 01:11:38,220 - Execute['hadoop dfsadmin -fs hdfs://ip-172-31-29-42.ec2.internal:8020 -safemode get | grep 'Safe mode 
is OFF''] {'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 40, 'only_if': None, 'user': 'hdfs', 'try_sleep': 10}\n2015-05-18 01:11:41,635 - HdfsResource['/tmp'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'kinit_path_local': '/usr/bin/kinit', 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0777}\n2015-05-18 01:11:41,638 - HdfsResource['/user/ambari-qa'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'kinit_path_local': '/usr/bin/kinit', 'user': 'hdfs', 'owner': 'ambari-qa', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'mode': 0770}\n2015-05-18 01:11:41,638 - HdfsResource['None'] {'security_enabled': False, 'only_if': None, 'keytab': [EMPTY], 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'kinit_path_local': '/usr/bin/kinit', 'user': 'hdfs', 'action': ['execute'], 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf'}\n2015-05-18 01:11:41,638 - File['/var/lib/ambari-agent/data/hdfs_resources.json'] {'content': '[{\"action\": \"create\", \"owner\": \"hdfs\", \"type\": \"directory\", \"target\": \"/tmp\", \"mode\": \"777\"}, {\"action\": \"create\", \"owner\": \"ambari-qa\", \"type\": \"directory\", \"target\": \"/user/ambari-qa\", \"mode\": \"770\"}]', 'owner': 'hdfs'}\n2015-05-18 01:11:41,639 - Writing File['/var/lib/ambari-agent/data/hdfs_resources.json'] because it doesn't exist\n2015-05-18 01:11:41,639 - Changing owner for /var/lib/ambari-agent/data/hdfs_resources.json from 0 to hdfs\n2015-05-18 01:11:41,640 - Execute['hadoop --config /usr/hdp/current/hadoop-client/conf jar /var/lib/ambari-agent/lib/fast-hdfs-resource.jar /var/lib/ambari-agent/data/hdfs_resources.json'] {'logoutput': None, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'user': 'hdfs'}\n2015-05-18 01:11:42,014 - Error while executing command 'start':\nTraceback (most recent call last):\n  File \"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py\", line 214, in execute\n    method(env)\n  File \"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py\", line 74, in start\n    namenode(action=\"start\", rolling_restart=rolling_restart, env=env)\n  File \"/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py\", line 89, in thunk\n    return fn(*args, **kwargs)\n  File \"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py\", line 120, in namenode\n    create_hdfs_directories(dfs_check_nn_status_cmd)\n  File \"/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py\", line 185, in create_hdfs_directories\n    only_if=check #skip creation when HA not active\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", line 148, in __init__\n    self.env.run()\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 152, in run\n    self.run_action(resource, action)\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 118, in run_action\n    provider_action()\n  File \"/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py\", line 114, in action_execute\n    logoutput=logoutput,\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/base.py\", 
line 148, in __init__\n    self.env.run()\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 152, in run\n    self.run_action(resource, action)\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/environment.py\", line 118, in run_action\n    provider_action()\n  File \"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py\", line 246, in action_run\n    raise ex\nFail: Execution of 'hadoop --config /usr/hdp/current/hadoop-client/conf jar /var/lib/ambari-agent/lib/fast-hdfs-resource.jar /var/lib/ambari-agent/data/hdfs_resources.json' returned 1. WARNING: Use \"yarn jar\" to launch YARN applications.\r\nException in thread \"main\" java.io.IOException: Error opening job jar: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar\r\n\tat org.apache.hadoop.util.RunJar.run(RunJar.java:160)\r\n\tat org.apache.hadoop.util.RunJar.main(RunJar.java:136)\r\nCaused by: java.io.FileNotFoundException: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar (Permission denied)\r\n\tat java.util.zip.ZipFile.open(Native Method)\r\n\tat java.util.zip.ZipFile.<init>(ZipFile.java:215)\r\n\tat java.util.zip.ZipFile.<init>(ZipFile.java:145)\r\n\tat java.util.jar.JarFile.<init>(JarFile.java:154)\r\n\tat java.util.jar.JarFile.<init>(JarFile.java:91)\r\n\tat org.apache.hadoop.util.RunJar.run(RunJar.java:158)\r\n\t... 1 more"
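
The stdout above also shows the root cause directly: the jar is declared with File['/var/lib/ambari-agent/lib/fast-hdfs-resource.jar'] {'content': StaticFile('fast-hdfs-resource.jar')} and no mode, while the follow-up Execute of "hadoop --config ... jar ..." runs with user 'hdfs'. A hypothetical stand-alone check (not part of the patch) to confirm the permission gap on an affected host:

    # Hypothetical diagnostic, not part of the Ambari change: report whether
    # the hdfs user can read the jar the HdfsResource provider is about to run.
    import os
    import pwd
    import stat

    JAR = "/var/lib/ambari-agent/lib/fast-hdfs-resource.jar"

    st = os.stat(JAR)
    hdfs = pwd.getpwnam("hdfs")
    mode = stat.S_IMODE(st.st_mode)

    # Simplified check: primary group only, supplementary groups ignored.
    readable = (
        (st.st_uid == hdfs.pw_uid and mode & stat.S_IRUSR) or
        (st.st_gid == hdfs.pw_gid and mode & stat.S_IRGRP) or
        (mode & stat.S_IROTH)
    )
    print("%s mode=%o owner uid=%d readable by hdfs: %s" % (JAR, mode, st.st_uid, bool(readable)))

On a host hitting this bug, the expected result is a root-owned file with no world-read bit, matching the java.io.FileNotFoundException (Permission denied) in the stderr.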



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)