Posted to dev@ambari.apache.org by Alejandro Fernandez <af...@hortonworks.com> on 2014/10/29 00:18:29 UTC

Re: Review Request 27311: Ambari to manage tarballs on HDFS

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27311/
-----------------------------------------------------------

(Updated Oct. 28, 2014, 11:18 p.m.)


Review request for Ambari, Dmytro Sen, Mahadev Konar, Raja Aluri, Sumit Mohanty, Srimanth Gunturi, Sid Wagle, Vinod Kumar Vavilapalli, and Yusaku Sako.


Bugs: AMBARI-7842
    https://issues.apache.org/jira/browse/AMBARI-7842


Repository: ambari


Description
-------

With HDP 2.2, Ambari needs to copy the tarballs/jars from the local file system to well-known, stack-versioned locations in HDFS.
The tarballs/jars no longer have a version number (either the component version or the HDP stack version + build) in the name, but the destination folder in HDFS does contain the HDP version (e.g., 2.2.0.0-999).

```
/hdp/apps/$(hdp-stack-version)
  |---- mapreduce/mapreduce.tar.gz
  |---- mapreduce/hadoop-streaming.jar (needed by WebHCat; on the local file system this is a symlink to a versioned jar, so the copy must follow the link)
  |---- tez/tez.tar.gz
  |---- pig/pig.tar.gz
  |---- hive/hive.tar.gz
  |---- sqoop/sqoop.tar.gz
```

Furthermore, the folders created in HDFS need permissions of 0555, while the files need 0444.
The owner should be hdfs, and the group should be hadoop.
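
For reference, here is a minimal sketch of the copy-and-chmod flow, written against the plain `hdfs dfs` CLI rather than Ambari's actual resource_management resources. This is illustrative only: the real implementation lives in dynamic_variable_interpretation.py, and the `hdfs()` helper and `copy_tarball()` function below are assumptions made for the example.

```python
import os.path
import subprocess

# Run "hdfs dfs" subcommands as the hdfs user (illustrative helper, not the
# code under review).
HDFS = ["sudo", "-u", "hdfs", "hdfs", "--config", "/etc/hadoop/conf", "dfs"]

def hdfs(args):
    subprocess.check_call(HDFS + args)

def copy_tarball(local_source, dest_folder):
    """Copy one unversioned tarball/jar into its versioned HDFS folder."""
    dest_file = dest_folder.rstrip("/") + "/" + os.path.basename(local_source)
    hdfs(["-mkdir", "-p", dest_folder])            # e.g. /hdp/apps/2.2.0.0-999/tez
    hdfs(["-put", "-f", local_source, dest_file])  # -put reads through local symlinks
    hdfs(["-chown", "hdfs:hadoop", dest_folder])   # owner hdfs, group hadoop
    hdfs(["-chown", "hdfs:hadoop", dest_file])
    hdfs(["-chmod", "555", dest_folder])           # folders: dr-xr-xr-x
    hdfs(["-chmod", "444", dest_file])             # files:   -r--r--r--

# Example:
# copy_tarball("/usr/hdp/current/tez-client/lib/tez.tar.gz",
#              "/hdp/apps/2.2.0.0-999/tez")
```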


Diffs
-----

  ambari-common/src/main/python/resource_management/libraries/functions/dynamic_variable_interpretation.py 728620e 
  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_server.py 5e2000d 
  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat.py 62d37a8 
  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/historyserver.py 29692fc 
  ambari-server/src/main/resources/stacks/HDP/2.2/configuration/cluster-env.xml da15055 
  ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/configuration/webhcat-site.xml 5182d82 
  ambari-server/src/main/resources/stacks/HDP/2.2/services/TEZ/configuration/tez-site.xml 9bac52a 
  ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration-mapred/mapred-site.xml 10b621f 
  ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml 6ba2c95 
  ambari-server/src/test/python/stacks/2.2/configs/default.json 888b0ca 
  ambari-server/src/test/python/stacks/2.2/configs/secured.json 7607a5d 
  ambari-web/app/data/HDP2/site_properties.js a13e94a 

Diff: https://reviews.apache.org/r/27311/diff/


Testing (updated)
-------

mvn clean test
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 24:34.850s
[INFO] Finished at: Tue Oct 28 15:52:11 PDT 2014
[INFO] Final Memory: 64M/618M
[INFO] ------------------------------------------------------------------------

Ran a system test; service checks for MR, YARN, Pig, and Hive passed.

1. Set properties
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env tez_tar_source                                     "/usr/hdp/current/tez-client/lib/tez.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env tez_tar_destination_folder                         "hdfs:///hdp/apps/{{ hdp_stack_version }}/tez/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env hive_tar_source                                    "/usr/hdp/current/hive-client/hive.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env hive_tar_destination_folder                        "hdfs:///hdp/apps/{{ hdp_stack_version }}/hive/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env pig_tar_source                                     "/usr/hdp/current/pig-client/pig.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env pig_tar_destination_folder                         "hdfs:///hdp/apps/{{ hdp_stack_version }}/pig/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env hadoop-streaming_tar_source                        "/usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env hadoop-streaming_tar_destination_folder            "hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env sqoop_tar_source                                   "/usr/hdp/current/sqoop-client/sqoop.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env sqoop_tar_destination_folder                       "hdfs:///hdp/apps/{{ hdp_stack_version }}/sqoop/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env mapreduce_tar_source                               "/usr/hdp/current/hadoop-client/mapreduce.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env mapreduce_tar_destination_folder                   "hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/"
Verify properties were saved
http://c6408.ambari.apache.org:8080/api/v1/clusters/dev/configurations?type=cluster-env&tag=version1414527575195546269
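
The {{ hdp_stack_version }} token in the destination folders above is a placeholder that gets rendered with the concrete stack version at runtime. A minimal sketch of that substitution, using a hypothetical helper name (the real logic lives in dynamic_variable_interpretation.py):

```python
def interpret_dynamic_version(template, hdp_stack_version):
    """Render the {{ hdp_stack_version }} placeholder in a config value.
    Hypothetical helper, shown only to illustrate the substitution."""
    return template.replace("{{ hdp_stack_version }}", hdp_stack_version)

# "hdfs:///hdp/apps/{{ hdp_stack_version }}/tez/" with version "2.2.0.0-1114"
# renders to "hdfs:///hdp/apps/2.2.0.0-1114/tez/"
print(interpret_dynamic_version("hdfs:///hdp/apps/{{ hdp_stack_version }}/tez/",
                                "2.2.0.0-1114"))
```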

2. Save the changed properties in tez-site, mapred-site, yarn-site, and webhcat-site, because they need to be set through the UI.

3. Set a web server symlink to pick up the changes to site_properties.js.

4. Copy changed files
yes | cp /vagrant/ambari/ambari-common/src/main/python/resource_management/libraries/functions/dynamic_variable_interpretation.py  /usr/lib/ambari-server/lib/resource_management/libraries/functions/dynamic_variable_interpretation.py
yes | cp /vagrant/ambari/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_server.py            /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_server.py
yes | cp /vagrant/ambari/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat.py                /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat.py
yes | cp /vagrant/ambari/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/historyserver.py          /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/historyserver.py

5. Delete existing tarballs in HDFS, and prepare new tarballs without the version number
[hdfs@c6408 ~]$ hdfs --config /etc/hadoop/conf dfs -rm -r /hdp/apps/2.2.0.0-1114/
[root@c6408 ~]# cd /usr/hdp/current
[root@c6408 current]# cp tez-client/lib/tez-*.tar.gz tez-client/lib/tez.tar.gz
[root@c6408 current]# cp hive-client/hive-*.tar.gz   hive-client/hive.tar.gz
[root@c6408 current]# cp pig-client/pig-*.tar.gz     pig-client/pig.tar.gz
[root@c6408 current]# cp sqoop-client/sqoop-*.tar.gz sqoop-client/sqoop.tar.gz
[root@c6408 current]# cp hadoop-client/mr-*.tar.gz   hadoop-client/mapreduce.tar.gz

6. Restart services: MR, YARN, Hive (including WebHCat), and Oozie.
Re-run the Tez client script (in order to copy the tarball):
python /var/lib/ambari-agent/cache/stacks/HDP/2.1/services/TEZ/package/scripts/tez_client.py CONFIGURE /var/lib/ambari-agent/data/command-169.json /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS /var/lib/ambari-agent/data/output-169.txt DEBUG /var/lib/ambari-agent/data/tmp
Check that the files were copied:
[hdfs@c6408 ~]$ hdfs --config /etc/hadoop/conf dfs -ls /hdp/apps/2.2.0.0-1114/
mapreduce/mapreduce.tar.gz and hadoop-streaming.jar
hive
tez
pig
sqoop

7. Run service checks for MR, YARN, Tez, Hive, and Oozie.

8. Check file and folder permissions: files should be 0444 ("-r--r--r--") and folders 0555 ("dr-xr-xr-x").

[hdfs@c6408 ~]$ hdfs --config /etc/hadoop/conf dfs -ls /hdp/apps/2.2.0.0-1114/
Found 5 items
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:08 /hdp/apps/2.2.0.0-1114/hive
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:02 /hdp/apps/2.2.0.0-1114/mapreduce
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:09 /hdp/apps/2.2.0.0-1114/pig
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:02 /hdp/apps/2.2.0.0-1114/sqoop
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:00 /hdp/apps/2.2.0.0-1114/tez
[hdfs@c6408 ~]$ hdfs --config /etc/hadoop/conf dfs -ls /hdp/apps/2.2.0.0-1114/mapreduce/
Found 2 items
-r--r--r--   3 hdfs hadoop     104936 2014-10-28 21:02 /hdp/apps/2.2.0.0-1114/mapreduce/hadoop-streaming.jar
-r--r--r--   3 hdfs hadoop  183280502 2014-10-28 20:51 /hdp/apps/2.2.0.0-1114/mapreduce/mapreduce.tar.gz


Thanks,

Alejandro Fernandez


Re: Review Request 27311: Ambari to manage tarballs on HDFS

Posted by Sumit Mohanty <sm...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27311/#review58935
-----------------------------------------------------------

Ship it!

- Sumit Mohanty




Re: Review Request 27311: Ambari to manage tarballs on HDFS

Posted by Alejandro Fernandez <af...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27311/
-----------------------------------------------------------

(Updated Oct. 28, 2014, 11:56 p.m.)


Review request for Ambari, Dmytro Sen, Mahadev Konar, Raja Aluri, Sumit Mohanty, Srimanth Gunturi, Sid Wagle, Vinod Kumar Vavilapalli, and Yusaku Sako.


Bugs: AMBARI-7842
    https://issues.apache.org/jira/browse/AMBARI-7842


Repository: ambari


Description
-------

With HDP 2.2, Ambari needs to copy the tarballs/jars from the local file system to well-known, stack-versioned locations in HDFS.
The tarballs/jars no longer have a version number (either the component version or the HDP stack version + build) in the name, but the destination folder in HDFS does contain the HDP version (e.g., 2.2.0.0-999).

```
/hdp/apps/$(hdp-stack-version)
  |---- mapreduce/mapreduce.tar.gz
  |---- mapreduce/hadoop-streaming.jar (needed by WebHCat; on the local file system this is a symlink to a versioned jar, so the copy must follow the link)
  |---- tez/tez.tar.gz
  |---- pig/pig.tar.gz
  |---- hive/hive.tar.gz
  |---- sqoop/sqoop.tar.gz
```

Furthermore, the folders created in HDFS need permissions of 0555, while the files need 0444.
The owner should be hdfs, and the group should be hadoop.


Diffs
-----

  ambari-common/src/main/python/resource_management/libraries/functions/dynamic_variable_interpretation.py 728620e 
  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_server.py 5e2000d 
  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat.py 62d37a8 
  ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/historyserver.py 29692fc 
  ambari-server/src/main/resources/stacks/HDP/2.2/configuration/cluster-env.xml da15055 
  ambari-server/src/main/resources/stacks/HDP/2.2/services/HIVE/configuration/webhcat-site.xml 5182d82 
  ambari-server/src/main/resources/stacks/HDP/2.2/services/TEZ/configuration/tez-site.xml 9bac52a 
  ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration-mapred/mapred-site.xml 10b621f 
  ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration/yarn-site.xml 6ba2c95 
  ambari-server/src/test/python/stacks/2.2/configs/default.json 888b0ca 
  ambari-server/src/test/python/stacks/2.2/configs/secured.json 7607a5d 
  ambari-web/app/data/HDP2/site_properties.js a13e94a 

Diff: https://reviews.apache.org/r/27311/diff/


Testing (updated)
-------

mvn clean test
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 24:34.850s
[INFO] Finished at: Tue Oct 28 15:52:11 PDT 2014
[INFO] Final Memory: 64M/618M
[INFO] ------------------------------------------------------------------------

Ran a system test; service checks for MR, YARN, Pig, Hive, and Oozie passed.

1. Set properties
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env tez_tar_source                                     "/usr/hdp/current/tez-client/lib/tez.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env tez_tar_destination_folder                         "hdfs:///hdp/apps/{{ hdp_stack_version }}/tez/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env hive_tar_source                                    "/usr/hdp/current/hive-client/hive.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env hive_tar_destination_folder                        "hdfs:///hdp/apps/{{ hdp_stack_version }}/hive/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env pig_tar_source                                     "/usr/hdp/current/pig-client/pig.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env pig_tar_destination_folder                         "hdfs:///hdp/apps/{{ hdp_stack_version }}/pig/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env hadoop-streaming_tar_source                        "/usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env hadoop-streaming_tar_destination_folder            "hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env sqoop_tar_source                                   "/usr/hdp/current/sqoop-client/sqoop.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env sqoop_tar_destination_folder                       "hdfs:///hdp/apps/{{ hdp_stack_version }}/sqoop/"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env mapreduce_tar_source                               "/usr/hdp/current/hadoop-client/mapreduce.tar.gz"
/var/lib/ambari-server/resources/scripts/configs.sh set localhost dev cluster-env mapreduce_tar_destination_folder                   "hdfs:///hdp/apps/{{ hdp_stack_version }}/mapreduce/"
Verify properties were saved
http://c6408.ambari.apache.org:8080/api/v1/clusters/dev/configurations?type=cluster-env&tag=version1414527575195546269

2. Save the changed properties in tez-site, mapred-site, yarn-site, and webhcat-site, because they need to be set through the UI.

3. Set a web server symlink to pick up the changes to site_properties.js.

4. Copy changed files
yes | cp /vagrant/ambari/ambari-common/src/main/python/resource_management/libraries/functions/dynamic_variable_interpretation.py  /usr/lib/ambari-server/lib/resource_management/libraries/functions/dynamic_variable_interpretation.py
yes | cp /vagrant/ambari/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_server.py            /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_server.py
yes | cp /vagrant/ambari/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat.py                /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/webhcat.py
yes | cp /vagrant/ambari/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/YARN/package/scripts/historyserver.py          /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/historyserver.py

5. Delete existing tarballs in HDFS, and prepare new tarballs without the version number
[hdfs@c6408 ~]$ hdfs --config /etc/hadoop/conf dfs -rm -r /hdp/apps/2.2.0.0-1114/
[root@c6408 ~]# cd /usr/hdp/current
[root@c6408 current]# cp tez-client/lib/tez-*.tar.gz tez-client/lib/tez.tar.gz
[root@c6408 current]# cp hive-client/hive-*.tar.gz   hive-client/hive.tar.gz
[root@c6408 current]# cp pig-client/pig-*.tar.gz     pig-client/pig.tar.gz
[root@c6408 current]# cp sqoop-client/sqoop-*.tar.gz sqoop-client/sqoop.tar.gz
[root@c6408 current]# cp hadoop-client/mr-*.tar.gz   hadoop-client/mapreduce.tar.gz

6. Restart services: MR, YARN, Hive (including WebHCat), and Oozie.
Re-run the Tez client script (in order to copy the tarball):
python /var/lib/ambari-agent/cache/stacks/HDP/2.1/services/TEZ/package/scripts/tez_client.py CONFIGURE /var/lib/ambari-agent/data/command-169.json /var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HDFS /var/lib/ambari-agent/data/output-169.txt DEBUG /var/lib/ambari-agent/data/tmp
Check that the files were copied:
[hdfs@c6408 ~]$ hdfs --config /etc/hadoop/conf dfs -ls /hdp/apps/2.2.0.0-1114/
mapreduce/mapreduce.tar.gz and hadoop-streaming.jar
hive
tez
pig
sqoop

7. Run service checks for MR, YARN, Tez, Hive, and Oozie.

8. Check file and folder permissions: files should be 0444 ("-r--r--r--") and folders 0555 ("dr-xr-xr-x").

[hdfs@c6408 ~]$ hdfs --config /etc/hadoop/conf dfs -ls /hdp/apps/2.2.0.0-1114/
Found 5 items
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:08 /hdp/apps/2.2.0.0-1114/hive
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:02 /hdp/apps/2.2.0.0-1114/mapreduce
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:09 /hdp/apps/2.2.0.0-1114/pig
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:02 /hdp/apps/2.2.0.0-1114/sqoop
dr-xr-xr-x   - hdfs hdfs          0 2014-10-28 21:00 /hdp/apps/2.2.0.0-1114/tez
[hdfs@c6408 ~]$ hdfs --config /etc/hadoop/conf dfs -ls /hdp/apps/2.2.0.0-1114/mapreduce/
Found 2 items
-r--r--r--   3 hdfs hadoop     104936 2014-10-28 21:02 /hdp/apps/2.2.0.0-1114/mapreduce/hadoop-streaming.jar
-r--r--r--   3 hdfs hadoop  183280502 2014-10-28 20:51 /hdp/apps/2.2.0.0-1114/mapreduce/mapreduce.tar.gz


Thanks,

Alejandro Fernandez


Re: Review Request 27311: Ambari to manage tarballs on HDFS

Posted by Mahadev Konar <ma...@apache.org>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/27311/#review58922
-----------------------------------------------------------

Ship it!

- Mahadev Konar

