Posted to user@ambari.apache.org by Greg Hill <gr...@RACKSPACE.COM> on 2015/03/18 17:22:27 UTC

Did something get broken for webhcat today?

Starting this morning, we began seeing this on every single install.  I think someone at Hortonworks pushed out a broken RPM or something.  Any ideas?  This is rather urgent, as we are no longer able to provision HDP 2.2 clusters at all because of it.


2015-03-18 15:58:05,982 - Group['hadoop'] {'ignore_failures': False}
2015-03-18 15:58:05,984 - Modifying group hadoop
2015-03-18 15:58:06,080 - Group['nobody'] {'ignore_failures': False}
2015-03-18 15:58:06,081 - Modifying group nobody
2015-03-18 15:58:06,219 - Group['users'] {'ignore_failures': False}
2015-03-18 15:58:06,220 - Modifying group users
2015-03-18 15:58:06,370 - Group['nagios'] {'ignore_failures': False}
2015-03-18 15:58:06,371 - Modifying group nagios
2015-03-18 15:58:06,474 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'nobody']}
2015-03-18 15:58:06,475 - Modifying user nobody
2015-03-18 15:58:06,558 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,559 - Modifying user hive
2015-03-18 15:58:06,634 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,635 - Modifying user mapred
2015-03-18 15:58:06,722 - User['nagios'] {'gid': 'nagios', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,723 - Modifying user nagios
2015-03-18 15:58:06,841 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-03-18 15:58:06,842 - Modifying user ambari-qa
2015-03-18 15:58:06,963 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,964 - Modifying user zookeeper
2015-03-18 15:58:07,093 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-03-18 15:58:07,094 - Modifying user tez
2015-03-18 15:58:07,217 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,218 - Modifying user hdfs
2015-03-18 15:58:07,354 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,355 - Modifying user yarn
2015-03-18 15:58:07,485 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,486 - Modifying user hcat
2015-03-18 15:58:07,629 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-03-18 15:58:07,631 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
2015-03-18 15:58:07,768 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] due to not_if
2015-03-18 15:58:07,769 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-03-18 15:58:07,770 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2015-03-18 15:58:07,895 - Skipping Link['/etc/hadoop/conf'] due to not_if
2015-03-18 15:58:07,960 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs'}
2015-03-18 15:58:08,092 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
2015-03-18 15:58:08,240 - Skipping Execute['/bin/echo 0 > /selinux/enforce'] due to only_if
2015-03-18 15:58:08,241 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'hadoop', 'mode': 0775, 'recursive': True}
2015-03-18 15:58:08,244 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-03-18 15:58:08,250 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-03-18 15:58:08,278 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,288 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,295 - File['/etc/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-03-18 15:58:08,322 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,325 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-03-18 15:58:08,330 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2015-03-18 15:58:09,219 - HdfsDirectory['/user/hcat'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0755, 'owner': 'hcat', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action': ['create_delayed']}
2015-03-18 15:58:09,220 - HdfsDirectory['None'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'action': ['create'], 'bin_dir': '/usr/hdp/current/hadoop-client/bin'}
2015-03-18 15:58:09,228 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` /user/hcat && hadoop --config /etc/hadoop/conf fs -chmod  755 /user/hcat && hadoop --config /etc/hadoop/conf fs -chown  hcat /user/hcat'] {'not_if': "su - hdfs -c 'export PATH=$PATH:/usr/hdp/current/hadoop-client/bin ; hadoop --config /etc/hadoop/conf fs -ls /user/hcat'", 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2015-03-18 15:58:37,822 - Directory['/var/run/webhcat'] {'owner': 'hcat', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2015-03-18 15:58:37,823 - Changing group for /var/run/webhcat from 0 to hadoop
2015-03-18 15:58:37,823 - Directory['/var/log/webhcat'] {'owner': 'hcat', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2015-03-18 15:58:37,824 - Creating directory Directory['/var/log/webhcat']
2015-03-18 15:58:37,824 - Changing owner for /var/log/webhcat from 0 to hcat
2015-03-18 15:58:37,824 - Changing group for /var/log/webhcat from 0 to hadoop
2015-03-18 15:58:37,824 - Directory['/etc/hive-webhcat/conf'] {'owner': 'hcat', 'group': 'hadoop', 'recursive': True}
2015-03-18 15:58:37,825 - Changing owner for /etc/hive-webhcat/conf from 0 to hcat
2015-03-18 15:58:37,825 - Changing group for /etc/hive-webhcat/conf from 0 to hadoop
2015-03-18 15:58:37,893 - ExecuteHadoop['fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive/hive.tar.gz'] {'logoutput': True, 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hcat', 'conf_dir': '/etc/hadoop/conf'}
2015-03-18 15:58:37,896 - Execute['hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive/hive.tar.gz'] {'logoutput': True, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hcat', 'try_sleep': 0}
2015-03-18 15:58:43,597 - -bash: line 1: 2.2.3.0-2611/hive/hive.tar.gz: No such file or directory
2015-03-18 15:58:43,599 - HdfsDirectory['hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0555, 'owner': 'hdfs', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action': ['create']}
2015-03-18 15:58:43,601 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive && hadoop --config /etc/hadoop/conf fs -chmod  555 hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive && hadoop --config /etc/hadoop/conf fs -chown  hdfs hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive'] {'not_if': "su - hdfs -c 'export PATH=$PATH:/usr/hdp/current/hadoop-client/bin ; hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041\n2.2.3.0-2611/hive'", 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2015-03-18 15:58:54,904 - ExecuteHadoop['fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig/pig.tar.gz'] {'logoutput': True, 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hcat', 'conf_dir': '/etc/hadoop/conf'}
2015-03-18 15:58:54,906 - Execute['hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig/pig.tar.gz'] {'logoutput': True, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hcat', 'try_sleep': 0}
2015-03-18 15:59:00,322 - -bash: line 1: 2.2.3.0-2611/pig/pig.tar.gz: No such file or directory
2015-03-18 15:59:00,323 - HdfsDirectory['hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0555, 'owner': 'hdfs', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action': ['create']}
2015-03-18 15:59:00,326 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig && hadoop --config /etc/hadoop/conf fs -chmod  555 hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig && hadoop --config /etc/hadoop/conf fs -chown  hdfs hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig'] {'not_if': "su - hdfs -c 'export PATH=$PATH:/usr/hdp/current/hadoop-client/bin ; hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041\n2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041\n2.2.3.0-2611/pig'", 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2015-03-18 15:59:11,576 - ExecuteHadoop['fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce/hadoop-streaming.jar'] {'logoutput': True, 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hcat', 'conf_dir': '/etc/hadoop/conf'}
2015-03-18 15:59:11,578 - Execute['hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce/hadoop-streaming.jar'] {'logoutput': True, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hcat', 'try_sleep': 0}
2015-03-18 15:59:17,094 - -bash: line 1: 2.2.3.0-2611/mapreduce/hadoop-streaming.jar: No such file or directory
2015-03-18 15:59:17,097 - HdfsDirectory['hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0555, 'owner': 'hdfs', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action': ['create']}
2015-03-18 15:59:17,099 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce && hadoop --config /etc/hadoop/conf fs -chmod  555 hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce && hadoop --config /etc/hadoop/conf fs -chown  hdfs hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce'] {'not_if': '...', 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2015-03-18 15:59:28,070 - Could not find file: /usr/hdp/current/sqoop-client/sqoop.tar.gz
2015-03-18 15:59:28,071 - XmlConfig['webhcat-site.xml'] {'owner': 'hcat', 'group': 'hadoop', 'conf_dir': '/etc/hive-webhcat/conf', 'configuration_attributes': ..., 'configurations': ...}
2015-03-18 15:59:28,090 - Generating config: /etc/hive-webhcat/conf/webhcat-site.xml
2015-03-18 15:59:28,091 - File['/etc/hive-webhcat/conf/webhcat-site.xml'] {'owner': 'hcat', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-03-18 15:59:28,092 - Writing File['/etc/hive-webhcat/conf/webhcat-site.xml'] because it doesn't exist
2015-03-18 15:59:28,093 - Changing owner for /etc/hive-webhcat/conf/webhcat-site.xml from 0 to hcat
2015-03-18 15:59:28,093 - Changing group for /etc/hive-webhcat/conf/webhcat-site.xml from 0 to hadoop
2015-03-18 15:59:28,095 - File['/etc/hive-webhcat/conf/webhcat-env.sh'] {'content': InlineTemplate(...), 'owner': 'hcat', 'group': 'hadoop'}
2015-03-18 15:59:28,096 - Writing File['/etc/hive-webhcat/conf/webhcat-env.sh'] because it doesn't exist
2015-03-18 15:59:28,096 - Changing owner for /etc/hive-webhcat/conf/webhcat-env.sh from 0 to hcat
2015-03-18 15:59:28,096 - Changing group for /etc/hive-webhcat/conf/webhcat-env.sh from 0 to hadoop
2015-03-18 15:59:28,097 - Execute['env HADOOP_HOME=/usr/hdp/current/hadoop-client /usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start'] {'not_if': 'ls /var/run/webhcat/webhcat.pid >/dev/null 2>&1 && ps `cat /var/run/webhcat/webhcat.pid` >/dev/null 2>&1', 'user': 'hcat'}
2015-03-18 15:59:28,179 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat_server.py", line 39, in start
    webhcat_service(action = 'start')
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat_service.py", line 33, in webhcat_service
    not_if=no_op_test
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 241, in action_run
    raise ex
Fail: Execution of 'env HADOOP_HOME=/usr/hdp/current/hadoop-client /usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start' returned 127. env: /usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh: No such file or directory

Re: COMMERCIAL:Re: COMMERCIAL:Re: Did something get broken for webhcat today?

Posted by Greg Hill <gr...@RACKSPACE.COM>.
Thanks, that seems to do it.

Greg

From: Jeff Sposetti <je...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 12:22 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: COMMERCIAL:Re: Did something get broken for webhcat today?

See if the API call here helps…might be what you are looking for…

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-Step4:SetupStackRepositories%28Optional%29
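
For reference, a sketch of the kind of call that page describes: a PUT against the stack's repository resource to pin the base URL back to the GA repo. AMBARI_HOST and the admin:admin credentials are placeholders, and the redhat6/HDP-2.2 identifiers are assumptions that should be adjusted to match your OS and stack:

curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"Repositories": {"base_url": "http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0", "verify_base_url": true}}' \
  http://AMBARI_HOST:8080/api/v1/stacks/HDP/versions/2.2/operating_systems/redhat6/repositories/HDP-2.2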



From: Greg Hill <gr...@RACKSPACE.COM>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 1:11 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: COMMERCIAL:Re: Did something get broken for webhcat today?

Ok, I'll see if I can figure out the API equivalent.  We are automating everything since we provide HDP clusters as a service.

Greg

From: Yusaku Sako <yu...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 12:06 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?

Greg,

Ambari automatically retrieves the repo info for the latest maintenance version of the stack.
For example, if you select "HDP 2.2", it will pull the latest HDP 2.2.x version.
It seems like HDP 2.2.3 was released last night, so when you install a new cluster it tries to install with 2.2.3.
Since you already have HDP 2.2.0 bits pre-installed on your image, you need to explicitly set the repo URL to the 2.2.0 bits in the Select Stack page, as Jeff mentioned.

This is only true for new clusters being installed.
When adding hosts to existing clusters, Ambari will continue to use the repo URL that you originally used to install the cluster.

Yusaku
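
A quick way to check which base URL Ambari currently has registered for the stack before kicking off an install (same placeholder host and credentials as the PUT sketch above; adjust the OS and repo id for your cluster):

curl -u admin:admin http://AMBARI_HOST:8080/api/v1/stacks/HDP/versions/2.2/operating_systems/redhat6/repositories/HDP-2.2

The Repositories object in the response includes the base_url that new installs will use.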

From: Greg Hill <gr...@RACKSPACE.COM>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Thursday, March 19, 2015 1:56 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: Did something get broken for webhcat today?

We did install that repo when we built the images we're using:

wget -O /etc/yum.repos.d/hdp.repo http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo

We preinstall a lot of packages on the images to reduce install time, including Ambari.  So our version of Ambari didn't change, and we didn't inject those new repos.  Does Ambari self-update or phone home to get the latest repos?  I can't figure out how the new repo got injected.

Greg


From: Jeff Sposetti <je...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 11:48 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?


In Ambari Web > Admin > Stack (or during install, on Select Stack, expand Advanced Repository Options): can you update your HDP repo Base URL to use the HDP 2.2 GA repository (instead of what it's pulling, which is 2.2.3.0)?


http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0


________________________________
From: Greg Hill <gr...@RACKSPACE.COM>
Sent: Wednesday, March 18, 2015 12:41 PM
To: user@ambari.apache.org
Subject: Re: Did something get broken for webhcat today?

We didn't change anything.  Ambari 1.7.0, HDP 2.2.  Repos are:

[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.2]
name=HDP
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.3.0
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1



From: Jeff Sposetti <je...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 11:26 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?

Are you using Ambari trunk or Ambari 2.0.0 branch builds?

Also please confirm: your HDP repos have not changed (i.e., are you using local repos for the HDP stack packages)?


Re: COMMERCIAL:Re: Did something get broken for webhcat today?

Posted by Jeff Sposetti <je...@hortonworks.com>.
See if the API call here helps…might be what you are looking for…

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-Step4:SetupStackRepositories%28Optional%29

Re: COMMERCIAL:Re: Did something get broken for webhcat today?

Posted by Greg Hill <gr...@RACKSPACE.COM>.
Ok, I'll see if I can figure out the API equivalent.  We are automating everything since we provide HDP clusters as a service.

Greg

From: Yusaku Sako <yu...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Date: Wednesday, March 18, 2015 at 12:06 PM
To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?

Greg,

Ambari does automatically retrieve the repo info for the latest maintenance version of the stack.
For example, if you select "HDP 2.2", it will pull the latest HDP 2.2.x version.
It seems like HDP 2.2.3 was released last night, so when you are installing a new cluster it is trying to install with 2.2.3.
Since you already have HDP 2.2.0 bits pre-installed on your image, you need to explicitly set the repo URL to 2.2.0 bits in the Select Stack page, as Jeff mentioned.

This is only true for new clusters being installed.
For adding hosts to existing clusters, it will continue to use the repo URL that you originally used to install the cluster with.

Yusaku

From: Greg Hill <gr...@RACKSPACE.COM>>
Reply-To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Date: Thursday, March 19, 2015 1:56 AM
To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Subject: Re: Did something get broken for webhcat today?

We did install that repo when we built the images we're using:

wget -O /etc/yum.repos.d/hdp.repo http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo

We preinstall a lot of packages on the images to reduce install time, including ambari.  So our version of Ambari didn't change, and we didn't inject those new repos.  Does ambari self-update or phone home to get the latest repos?  I can't figure out how the new repo got injected.

Greg


From: Jeff Sposetti <je...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Date: Wednesday, March 18, 2015 at 11:48 AM
To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?


In Ambari Web > Admin > Stack (or during install, on Select Stack, expand Advanced Repository Options): can you update your HDP repo Base URL to use the HDP 2.2 GA repository (instead of what it's pulling, which is 2.2.3.0)?


http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0


________________________________
From: Greg Hill <gr...@RACKSPACE.COM>>
Sent: Wednesday, March 18, 2015 12:41 PM
To: user@ambari.apache.org<ma...@ambari.apache.org>
Subject: Re: Did something get broken for webhcat today?

We didn't change anything.  Ambari 1.7.0, HDP 2.2.  Repos are:

[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.2]
name=HDP
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.3.0
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1



From: Jeff Sposetti <je...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 11:26 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?

Are you using Ambari trunk or Ambari 2.0.0 branch builds?

Also please confirm that your HDP repos have not changed (i.e., are you using local repos for the HDP stack packages)?

From: Greg Hill <gr...@RACKSPACE.COM>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 12:22 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Did something get broken for webhcat today?

Starting this morning, we started seeing this on every single install.  I think someone at Hortonworks pushed out a broken RPM or something.  Any ideas?  This is rather urgent as we are no longer able to provision HDP 2.2 clusters at all because of it.


2015-03-18 15:58:05,982 - Group['hadoop'] {'ignore_failures': False}
2015-03-18 15:58:05,984 - Modifying group hadoop
2015-03-18 15:58:06,080 - Group['nobody'] {'ignore_failures': False}
2015-03-18 15:58:06,081 - Modifying group nobody
2015-03-18 15:58:06,219 - Group['users'] {'ignore_failures': False}
2015-03-18 15:58:06,220 - Modifying group users
2015-03-18 15:58:06,370 - Group['nagios'] {'ignore_failures': False}
2015-03-18 15:58:06,371 - Modifying group nagios
2015-03-18 15:58:06,474 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'nobody']}
2015-03-18 15:58:06,475 - Modifying user nobody
2015-03-18 15:58:06,558 - User['hive'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,559 - Modifying user hive
2015-03-18 15:58:06,634 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,635 - Modifying user mapred
2015-03-18 15:58:06,722 - User['nagios'] {'gid': 'nagios', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,723 - Modifying user nagios
2015-03-18 15:58:06,841 - User['ambari-qa'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-03-18 15:58:06,842 - Modifying user ambari-qa
2015-03-18 15:58:06,963 - User['zookeeper'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,964 - Modifying user zookeeper
2015-03-18 15:58:07,093 - User['tez'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'users']}
2015-03-18 15:58:07,094 - Modifying user tez
2015-03-18 15:58:07,217 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,218 - Modifying user hdfs
2015-03-18 15:58:07,354 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,355 - Modifying user yarn
2015-03-18 15:58:07,485 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,486 - Modifying user hcat
2015-03-18 15:58:07,629 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-03-18 15:58:07,631 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
2015-03-18 15:58:07,768 - Skipping Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 2>/dev/null'] due to not_if
2015-03-18 15:58:07,769 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-03-18 15:58:07,770 - Link['/etc/hadoop/conf'] {'not_if': 'ls /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2015-03-18 15:58:07,895 - Skipping Link['/etc/hadoop/conf'] due to not_if
2015-03-18 15:58:07,960 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs'}
2015-03-18 15:58:08,092 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 'test -f /selinux/enforce'}
2015-03-18 15:58:08,240 - Skipping Execute['/bin/echo 0 > /selinux/enforce'] due to only_if
2015-03-18 15:58:08,241 - Directory['/var/log/hadoop'] {'owner': 'root', 'group': 'hadoop', 'mode': 0775, 'recursive': True}
2015-03-18 15:58:08,244 - Directory['/var/run/hadoop'] {'owner': 'root', 'group': 'root', 'recursive': True}
2015-03-18 15:58:08,250 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'recursive': True}
2015-03-18 15:58:08,278 - File['/etc/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,288 - File['/etc/hadoop/conf/health_check'] {'content': Template('health_check-v2.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,295 - File['/etc/hadoop/conf/log4j.properties'] {'content': '...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-03-18 15:58:08,322 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,325 - File['/etc/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2015-03-18 15:58:08,330 - File['/etc/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2015-03-18 15:58:09,219 - HdfsDirectory['/user/hcat'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0755, 'owner': 'hcat', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action': ['create_delayed']}
2015-03-18 15:58:09,220 - HdfsDirectory['None'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'action': ['create'], 'bin_dir': '/usr/hdp/current/hadoop-client/bin'}
2015-03-18 15:58:09,228 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` /user/hcat && hadoop --config /etc/hadoop/conf fs -chmod  755 /user/hcat && hadoop --config /etc/hadoop/conf fs -chown  hcat /user/hcat'] {'not_if': "su - hdfs -c 'export PATH=$PATH:/usr/hdp/current/hadoop-client/bin ; hadoop --config /etc/hadoop/conf fs -ls /user/hcat'", 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2015-03-18 15:58:37,822 - Directory['/var/run/webhcat'] {'owner': 'hcat', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2015-03-18 15:58:37,823 - Changing group for /var/run/webhcat from 0 to hadoop
2015-03-18 15:58:37,823 - Directory['/var/log/webhcat'] {'owner': 'hcat', 'group': 'hadoop', 'recursive': True, 'mode': 0755}
2015-03-18 15:58:37,824 - Creating directory Directory['/var/log/webhcat']
2015-03-18 15:58:37,824 - Changing owner for /var/log/webhcat from 0 to hcat
2015-03-18 15:58:37,824 - Changing group for /var/log/webhcat from 0 to hadoop
2015-03-18 15:58:37,824 - Directory['/etc/hive-webhcat/conf'] {'owner': 'hcat', 'group': 'hadoop', 'recursive': True}
2015-03-18 15:58:37,825 - Changing owner for /etc/hive-webhcat/conf from 0 to hcat
2015-03-18 15:58:37,825 - Changing group for /etc/hive-webhcat/conf from 0 to hadoop
2015-03-18 15:58:37,893 - ExecuteHadoop['fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive/hive.tar.gz'] {'logoutput': True, 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hcat', 'conf_dir': '/etc/hadoop/conf'}
2015-03-18 15:58:37,896 - Execute['hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive/hive.tar.gz'] {'logoutput': True, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hcat', 'try_sleep': 0}
2015-03-18 15:58:43,597 - -bash: line 1: 2.2.3.0-2611/hive/hive.tar.gz: No such file or directory
2015-03-18 15:58:43,599 - HdfsDirectory['hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0555, 'owner': 'hdfs', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action': ['create']}
2015-03-18 15:58:43,601 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive && hadoop --config /etc/hadoop/conf fs -chmod  555 hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive && hadoop --config /etc/hadoop/conf fs -chown  hdfs hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive'] {'not_if': "su - hdfs -c 'export PATH=$PATH:/usr/hdp/current/hadoop-client/bin ; hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041\n2.2.3.0-2611/hive'", 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2015-03-18 15:58:54,904 - ExecuteHadoop['fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig/pig.tar.gz'] {'logoutput': True, 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hcat', 'conf_dir': '/etc/hadoop/conf'}
2015-03-18 15:58:54,906 - Execute['hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig/pig.tar.gz'] {'logoutput': True, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hcat', 'try_sleep': 0}
2015-03-18 15:59:00,322 - -bash: line 1: 2.2.3.0-2611/pig/pig.tar.gz: No such file or directory
2015-03-18 15:59:00,323 - HdfsDirectory['hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0555, 'owner': 'hdfs', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action': ['create']}
2015-03-18 15:59:00,326 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig && hadoop --config /etc/hadoop/conf fs -chmod  555 hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig && hadoop --config /etc/hadoop/conf fs -chown  hdfs hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig'] {'not_if': "su - hdfs -c 'export PATH=$PATH:/usr/hdp/current/hadoop-client/bin ; hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041\n2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041\n2.2.3.0-2611/pig'", 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2015-03-18 15:59:11,576 - ExecuteHadoop['fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce/hadoop-streaming.jar'] {'logoutput': True, 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'user': 'hcat', 'conf_dir': '/etc/hadoop/conf'}
2015-03-18 15:59:11,578 - Execute['hadoop --config /etc/hadoop/conf fs -ls hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce/hadoop-streaming.jar'] {'logoutput': True, 'path': ['/usr/hdp/current/hadoop-client/bin'], 'tries': 1, 'user': 'hcat', 'try_sleep': 0}
2015-03-18 15:59:17,094 - -bash: line 1: 2.2.3.0-2611/mapreduce/hadoop-streaming.jar: No such file or directory
2015-03-18 15:59:17,097 - HdfsDirectory['hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce'] {'security_enabled': False, 'keytab': [EMPTY], 'conf_dir': '/etc/hadoop/conf', 'hdfs_user': 'hdfs', 'kinit_path_local': '', 'mode': 0555, 'owner': 'hdfs', 'bin_dir': '/usr/hdp/current/hadoop-client/bin', 'action': ['create']}
2015-03-18 15:59:17,099 - Execute['hadoop --config /etc/hadoop/conf fs -mkdir `rpm -q hadoop | grep -q "hadoop-1" || echo "-p"` hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce && hadoop --config /etc/hadoop/conf fs -chmod  555 hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce && hadoop --config /etc/hadoop/conf fs -chown  hdfs hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/hive hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/pig hdfs:///hdp/apps/2.2.0.0-2041
2.2.3.0-2611/mapreduce'] {'not_if': '...', 'user': 'hdfs', 'path': ['/usr/hdp/current/hadoop-client/bin']}
2015-03-18 15:59:28,070 - Could not find file: /usr/hdp/current/sqoop-client/sqoop.tar.gz
2015-03-18 15:59:28,071 - XmlConfig['webhcat-site.xml'] {'owner': 'hcat', 'group': 'hadoop', 'conf_dir': '/etc/hive-webhcat/conf', 'configuration_attributes': ..., 'configurations': ...}
2015-03-18 15:59:28,090 - Generating config: /etc/hive-webhcat/conf/webhcat-site.xml
2015-03-18 15:59:28,091 - File['/etc/hive-webhcat/conf/webhcat-site.xml'] {'owner': 'hcat', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2015-03-18 15:59:28,092 - Writing File['/etc/hive-webhcat/conf/webhcat-site.xml'] because it doesn't exist
2015-03-18 15:59:28,093 - Changing owner for /etc/hive-webhcat/conf/webhcat-site.xml from 0 to hcat
2015-03-18 15:59:28,093 - Changing group for /etc/hive-webhcat/conf/webhcat-site.xml from 0 to hadoop
2015-03-18 15:59:28,095 - File['/etc/hive-webhcat/conf/webhcat-env.sh'] {'content': InlineTemplate(...), 'owner': 'hcat', 'group': 'hadoop'}
2015-03-18 15:59:28,096 - Writing File['/etc/hive-webhcat/conf/webhcat-env.sh'] because it doesn't exist
2015-03-18 15:59:28,096 - Changing owner for /etc/hive-webhcat/conf/webhcat-env.sh from 0 to hcat
2015-03-18 15:59:28,096 - Changing group for /etc/hive-webhcat/conf/webhcat-env.sh from 0 to hadoop
2015-03-18 15:59:28,097 - Execute['env HADOOP_HOME=/usr/hdp/current/hadoop-client /usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start'] {'not_if': 'ls /var/run/webhcat/webhcat.pid >/dev/null 2>&1 && ps `cat /var/run/webhcat/webhcat.pid` >/dev/null 2>&1', 'user': 'hcat'}
2015-03-18 15:59:28,179 - Error while executing command 'start':
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 123, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat_server.py", line 39, in start
    webhcat_service(action = 'start')
  File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat_service.py", line 33, in webhcat_service
    not_if=no_op_test
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 148, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 149, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 115, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py", line 241, in action_run
    raise ex
Fail: Execution of 'env HADOOP_HOME=/usr/hdp/current/hadoop-client /usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh start' returned 127. env: /usr/hdp/current/hive-webhcat/sbin/webhcat_server.sh: No such file or directory

Re: Did something get broken for webhcat today?

Posted by Yusaku Sako <yu...@hortonworks.com>.
Greg,

Ambari does automatically retrieve the repo info for the latest maintenance version of the stack.
For example, if you select "HDP 2.2", it will pull the latest HDP 2.2.x version.
It seems like HDP 2.2.3 was released last night, so when you install a new cluster it tries to install 2.2.3.
Since you already have HDP 2.2.0 bits pre-installed on your image, you need to explicitly set the repo URL to the 2.2.0 bits on the Select Stack page, as Jeff mentioned.

This is only true for new clusters being installed.
For adding hosts to existing clusters, Ambari will continue to use the repo URL that you originally used to install the cluster.

Yusaku
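
To see which base_url Ambari actually resolved for the stack, the same repository resource can be read back (endpoint shape assumed as in the PUT sketch above; AMBARI_HOST and the credentials are placeholders):

    # Read back the repo definition that will be handed to the agents
    curl -s -u admin:admin \
      'http://AMBARI_HOST:8080/api/v1/stacks/HDP/versions/2.2/operating_systems/redhat6/repositories/HDP-2.2' \
      | grep '"base_url"'

If that shows the 2.2.3.0 updates URL, new installs will pull 2.2.3 regardless of what the image has preinstalled.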

Re: Did something get broken for webhcat today?

Posted by Greg Hill <gr...@RACKSPACE.COM>.
We did install that repo when we built the images we're using:

wget -O /etc/yum.repos.d/hdp.repo http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo

We preinstall a lot of packages on the images to reduce install time, including Ambari.  So our version of Ambari didn't change, and we didn't inject those new repos.  Does Ambari self-update or phone home to get the latest repos?  I can't figure out how the new repo got injected.

Greg
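
One way to catch this drift at image-build or provision time is to compare what the enabled repo will serve against what the image already carries; a sketch, assuming HDP 2.2's versioned package naming:

    # Repo side: the newest hadoop build the enabled HDP repo offers
    yum -q list available 'hadoop_2_2_*' --showduplicates | tail -n 3
    # Image side: the version directories actually baked into the image
    ls /usr/hdp/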



Re: Did something get broken for webhcat today?

Posted by Jeff Sposetti <je...@hortonworks.com>.
In Ambari Web > Admin > Stack (or during install, on Select Stack, expand Advanced Repository Options): can you update your HDP repo Base URL to use the HDP 2.2 GA repository (instead of what it's pulling, which is 2.2.3.0)?


http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0
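
If hosts already have the 2.2.3.0 repo file written out, a hypothetical per-host cleanup (file path and URLs taken from the repo listings earlier in this thread; the Base URL in Ambari still has to be corrected, or the file will be regenerated with the old value):

    # Point the generated repo file back at GA and drop stale yum metadata
    sed -i 's|/2.x/updates/2.2.3.0|/2.x/GA/2.2.0.0|' /etc/yum.repos.d/HDP.repo
    yum clean all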


________________________________
From: Greg Hill <gr...@RACKSPACE.COM>
Sent: Wednesday, March 18, 2015 12:41 PM
To: user@ambari.apache.org
Subject: Re: Did something get broken for webhcat today?

We didn't change anything.  Ambari 1.7.0, HDP 2.2.  Repos are:

[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.2]
name=HDP
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.3.0
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1



From: Jeff Sposetti <je...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Date: Wednesday, March 18, 2015 at 11:26 AM
To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?

Are you using ambari trunk or ambari 2.0.0 branch builds?

Also please confirm: your HDP repos have not changed (I.e. Are you using local repos for the HDP stack packages)?

From: Greg Hill <gr...@RACKSPACE.COM>>
Reply-To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Date: Wednesday, March 18, 2015 at 12:22 PM
To: "user@ambari.apache.org<ma...@ambari.apache.org>" <us...@ambari.apache.org>>
Subject: Did something get broken for webhcat today?

Starting this morning, we started seeing this on every single install.  I think someone at Hortonworks pushed out a broken RPM or something.  Any ideas?  This is rather urgent as we are no longer able to provision HDP 2.2 clusters at all because of it.



Re: Did something get broken for webhcat today?

Posted by Greg Hill <gr...@RACKSPACE.COM>.
Interestingly:

[root@gateway-1 ~]# yum list installed hadoop*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
hadoop-lzo.x86_64                              0.6.0-1                          @HDP-UTILS-1.1.0.20
hadoop-lzo-native.x86_64                       0.6.0-1                          @HDP-UTILS-1.1.0.20
hadoop_2_2_0_0_2041.x86_64                     2.6.0.2.2.0.0-2041.el6           @HDP-2.2.0.0
hadoop_2_2_0_0_2041-client.x86_64              2.6.0.2.2.0.0-2041.el6           @HDP-2.2.0.0
hadoop_2_2_0_0_2041-hdfs.x86_64                2.6.0.2.2.0.0-2041.el6           @HDP-2.2.0.0
hadoop_2_2_0_0_2041-mapreduce.x86_64           2.6.0.2.2.0.0-2041.el6           @HDP-2.2.0.0
hadoop_2_2_0_0_2041-yarn.x86_64                2.6.0.2.2.0.0-2041.el6           @HDP-2.2.0.0
hadoop_2_2_3_0_2611.x86_64                     2.6.0.2.2.3.0-2611.el6           @HDP-2.2
hadoop_2_2_3_0_2611-client.x86_64              2.6.0.2.2.3.0-2611.el6           @HDP-2.2
hadoop_2_2_3_0_2611-hdfs.x86_64                2.6.0.2.2.3.0-2611.el6           @HDP-2.2
hadoop_2_2_3_0_2611-libhdfs.x86_64             2.6.0.2.2.3.0-2611.el6           @HDP-2.2
hadoop_2_2_3_0_2611-mapreduce.x86_64           2.6.0.2.2.3.0-2611.el6           @HDP-2.2
hadoop_2_2_3_0_2611-yarn.x86_64                2.6.0.2.2.3.0-2611.el6           @HDP-2.2
hadooplzo_2_2_3_0_2611.x86_64                  0.6.0.2.2.3.0-2611.el6           @HDP-2.2
hadooplzo_2_2_3_0_2611-native.x86_64           0.6.0.2.2.3.0-2611.el6           @HDP-2.2

Looks like I have multiple versions installed.  Because Hortonworks stopped following sane packaging practices and put the version numbers in the package names, yum doesn't recognize them as the same packages, so it just installed the new versions alongside the old rather than updating them.
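
For anyone else digging: the side-by-side install is visible straight from rpm, independent of yum's view (plain rpm query options, nothing HDP-specific):

# print NAME VERSION-RELEASE for every installed hadoop* package
rpm -qa --queryformat '%{NAME} %{VERSION}-%{RELEASE}\n' | grep '^hadoop' | sort

Both the hadoop_2_2_0_0_2041 and hadoop_2_2_3_0_2611 names come back as distinct packages, which is exactly why yum installed one next to the other instead of upgrading.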

I also don't understand how the repo got moved from the 2.2.0 one to the 2.2.3 one without me changing it manually.  Does Ambari update the repos automatically, without any input from the user?
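
One way to narrow that down is to check when the repo file was last rewritten and whether an agent command did it (a sketch, assuming the stock ambari-agent paths; if the agent rewrote it, its command output logs usually show the Repository[...] resource):

# modification time of the repo file
stat -c '%y' /etc/yum.repos.d/HDP.repo
# which agent command outputs mention writing an HDP repository?
grep -l "Repository\['HDP" /var/lib/ambari-agent/data/output-*.txt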

Greg


Re: Did something get broken for webhcat today?

Posted by Greg Hill <gr...@RACKSPACE.COM>.
We didn't change anything.  Ambari 1.7.0, HDP 2.2.  Repos are:

[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.2]
name=HDP
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.3.0
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
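
For anyone comparing against upstream, this asks the enabled HDP repo what it serves right now (stock yum options; repo id taken from the HDP.repo file above):

# list every hadoop build the HDP-2.2 repo currently offers
yum --disablerepo='*' --enablerepo=HDP-2.2 --showduplicates list available 'hadoop*'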




Re: Did something get broken for webhcat today?

Posted by Jeff Sposetti <je...@hortonworks.com>.
Are you using Ambari trunk or Ambari 2.0.0 branch builds?

Also, please confirm that your HDP repos have not changed (i.e., are you using local repos for the HDP stack packages)?
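
A quick way to answer both from an affected node (plain rpm/yum, nothing Ambari-specific):

# which Ambari build is actually installed?
rpm -q ambari-server ambari-agent
# and which baseurl is yum actually using for each enabled repo?
yum -v repolist enabled | grep -E 'Repo-id|Repo-baseurl'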
