Posted to dev@ambari.apache.org by Tom Beerbower <tb...@hortonworks.com> on 2016/02/01 18:05:37 UTC

Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/
-----------------------------------------------------------

Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.


Bugs: AMBARI-14853
    https://issues.apache.org/jira/browse/AMBARI-14853


Repository: ambari


Description
-------

Three additional steps need to be done to install Atlas 0.6 via Ambari.

1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’

    atlas.notification.embedded = false
    atlas.kafka.data = /tmp 
    atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
    atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
    atlas.kafka.hook.group.id = atlas
    atlas.kafka.entities.group.id = entities


* Note: 
For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
The directory specified in “atlas.kafka.data” should exist. (A short sketch of deriving the host-dependent values above follows these steps.)

2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
    export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}

*Note:
It is important that the atlas directories are prepended to the existing classpath.

3. Restart the Atlas and Hive services after the cluster is fully provisioned
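
As noted in step 1, the host-dependent values can be derived from the cluster's host lists rather than hard-coded. A rough sketch of that derivation (illustrative only; the function and argument names are placeholders rather than the actual params.py code, and the default ports simply match the values shown above):

    # Illustrative sketch: build the Kafka/ZooKeeper connection strings for
    # application.properties from lists of cluster hosts. The function name,
    # argument names and default ports (6667 for Kafka, 2181 for ZooKeeper)
    # are assumptions for this example.
    def build_atlas_kafka_props(kafka_broker_hosts, zookeeper_hosts,
                                kafka_port=6667, zk_port=2181):
        bootstrap_servers = ",".join(
            "{0}:{1}".format(host, kafka_port) for host in kafka_broker_hosts)
        zookeeper_connect = ",".join(
            "{0}:{1}".format(host, zk_port) for host in zookeeper_hosts)
        return {
            "atlas.kafka.bootstrap.servers": bootstrap_servers,
            "atlas.kafka.zookeeper.connect": zookeeper_connect,
        }

    # Example:
    #   build_atlas_kafka_props(["c6401.ambari.apache.org"], ["c6401.ambari.apache.org"])
    #   -> {"atlas.kafka.bootstrap.servers": "c6401.ambari.apache.org:6667",
    #       "atlas.kafka.zookeeper.connect": "c6401.ambari.apache.org:2181"}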


Diffs
-----

  ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
  ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
  ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
  ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
  ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
  ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
  ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 

Diff: https://reviews.apache.org/r/43050/diff/


Testing
-------

Manually tested and verified the configuration and Atlas operation.

mvn clean test : all tests pass

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:02 h
[INFO] Finished at: 2016-02-01T11:56:24-05:00
[INFO] Final Memory: 44M/1696M
[INFO] ------------------------------------------------------------------------


Thanks,

Tom Beerbower


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Alejandro Fernandez <af...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117335
-----------------------------------------------------------




ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml (line 72)
<https://reviews.apache.org/r/43050/#comment178492>

    Even with the find/replace-with construct, what we really want to do is append, which I believe we don't have a way of doing today.
    
    cc Nate Cole and Jonathan Hurley


- Alejandro Fernandez


On Feb. 1, 2016, 5:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 5:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Tom Beerbower <tb...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/
-----------------------------------------------------------

(Updated Feb. 2, 2016, 9:17 p.m.)


Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.


Changes
-------

update patch


Bugs: AMBARI-14853
    https://issues.apache.org/jira/browse/AMBARI-14853


Repository: ambari


Description
-------

Three additional steps need to be done to install Atlas 0.6 via Ambari.

1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’

    atlas.notification.embedded = false
    atlas.kafka.data = /tmp 
    atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
    atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
    atlas.kafka.hook.group.id = atlas
    atlas.kafka.entities.group.id = entities


* Note: 
For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
The directory specified in “atlas.kafka.data” should exist.

2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
    export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}

*Note:
It is important that the atlas directories are prepended to the existing classpath.

3. Restart the Atlas and Hive services after the cluster is fully provisioned


Diffs (updated)
-----

  ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
  ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
  ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
  ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
  ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
  ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
  ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 

Diff: https://reviews.apache.org/r/43050/diff/


Testing
-------

Manually tested and verified the configuration and Atlas operation.

mvn clean test : all tests pass

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:02 h
[INFO] Finished at: 2016-02-01T11:56:24-05:00
[INFO] Final Memory: 44M/1696M
[INFO] ------------------------------------------------------------------------


Thanks,

Tom Beerbower


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Robert Levas <rl...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117235
-----------------------------------------------------------


Ship it!




Ship It!

- Robert Levas


On Feb. 1, 2016, 12:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 12:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Nate Cole <nc...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117398
-----------------------------------------------------------


Ship it!




Ship It!

- Nate Cole


On Feb. 1, 2016, 12:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 12:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Tom Beerbower <tb...@hortonworks.com>.

> On Feb. 1, 2016, 6:30 p.m., Nate Cole wrote:
> > Should have some python tests for Atlas scripts.

Thanks for the review!

I had assumed that the assertions in the existing tests would cover new properties added to the configuration templates.  For example, in test_metadata_server.py ... 


      self.assertResourceCalled('PropertiesFile',
                                '/etc/atlas/conf/application.properties',
                                properties=appprops,
                                owner='atlas',
                                group='hadoop',
                                mode=0644,
                                
Doesn't this assert that the application.properties file is created with the given set of properties (from the default.json that I updated with the new property values)?
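
For example, an explicit check on the new keys could be added along these lines (just a sketch; 'appprops' is the same dict passed to assertResourceCalled above, built from the updated default.json):

    # Sketch: assert that the new Kafka-related properties from default.json
    # end up in the application.properties dict used by the test.
    for key in ('atlas.notification.embedded',
                'atlas.kafka.data',
                'atlas.kafka.bootstrap.servers',
                'atlas.kafka.zookeeper.connect',
                'atlas.kafka.hook.group.id',
                'atlas.kafka.entities.group.id'):
        self.assertTrue(key in appprops)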


> On Feb. 1, 2016, 6:30 p.m., Nate Cole wrote:
> > ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py, lines 417-418
> > <https://reviews.apache.org/r/43050/diff/1/?file=1228061#file1228061line417>
> >
> >     How could these ever get set?  I wouldn't think Ambari agent code execution is setting any environment variables like this.

I guess not.  I cut and pasted this code from the Atlas params.py file.  So, how in the Hive params python script can I determine the atlas directories?  Thanks.
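
One option might be to fall back to the conventional install locations when nothing else is available (sketch only; the paths are the same ones used in the hive-env export, and the variable names are placeholders):

    # Sketch: derive the Atlas directories from their conventional install
    # locations instead of environment variables, and only build the hook
    # classpath entry when Atlas is actually present on the host.
    import os

    atlas_conf_dir = '/etc/atlas/conf'
    atlas_home_dir = '/usr/hdp/current/atlas-server'

    if os.path.isdir(atlas_home_dir):
        atlas_hook_classpath = '{0}:{1}/hook/hive'.format(atlas_conf_dir, atlas_home_dir)
    else:
        atlas_hook_classpath = ''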


- Tom


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117234
-----------------------------------------------------------


On Feb. 1, 2016, 5:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 5:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Nate Cole <nc...@hortonworks.com>.

> On Feb. 1, 2016, 1:30 p.m., Nate Cole wrote:
> > ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py, lines 417-418
> > <https://reviews.apache.org/r/43050/diff/1/?file=1228061#file1228061line417>
> >
> >     How could these ever get set?  I wouldn't think Ambari agent code execution is setting any environment variables like this.
> 
> Tom Beerbower wrote:
>     I guess not.  I cut and pasted this code from the Atlas params.py file.  So, how in the Hive params python script can I determine the atlas directories?  Thanks.

I see that you're setting a bunch of stuff in params.py and params_linux.py, but no script uses them - they're all used for writing out the xml file?  It should be possible to capture the XmlConfig (or whatever) to make sure the substitution makes it into the persisted XML file.  If it's too daunting, I won't hold up the review for it.
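
Something along these lines, for example (just a sketch; 'get_rendered_hive_env' is a stand-in for however the test harness exposes the rendered content, not an existing helper):

    # Sketch: verify the Atlas entries are prepended to HADOOP_CLASSPATH in
    # the rendered hive-env content. 'get_rendered_hive_env' is hypothetical.
    expected = 'export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:'
    content = get_rendered_hive_env()
    self.assertTrue(expected in content)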


- Nate


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117234
-----------------------------------------------------------


On Feb. 1, 2016, 12:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 12:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Nate Cole <nc...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117234
-----------------------------------------------------------



Should have some python tests for Atlas scripts.


ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py (lines 417 - 418)
<https://reviews.apache.org/r/43050/#comment178362>

    How could these ever get set?  I wouldn't think Ambari agent code execution is setting any environment variables like this.


- Nate Cole


On Feb. 1, 2016, 12:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 12:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Tom Beerbower <tb...@hortonworks.com>.

> On Feb. 2, 2016, 12:02 a.m., Alejandro Fernandez wrote:
> > ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml, line 72
> > <https://reviews.apache.org/r/43050/diff/1/?file=1228062#file1228062line72>
> >
> >     RU/EU from HDP 2.2 to 2.3 will need to make these config changes by modifying the config packs.
> >     E.g.,
> >     ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/config-upgrade.xml
> >     <replace key="content" find="foo" replace-with="bar" />

Thanks for the review!

So, would it be something like this? ... 

    <replace key="content" find="export HADOOP_CLASSPATH=" replace-with="export HADOOP_CLASSPATH={{atlas_conf_dir}}:{{atlas_home_dir}}/hook/hive:" />
    
What if the content doesn't contain "export HADOOP_CLASSPATH="?  That will likely be the case, I think.  In that case I want to add the entire line ...

    export HADOOP_CLASSPATH={{atlas_conf_dir}}:{{atlas_home_dir}}/hook/hive:${HADOOP_CLASSPATH}

Is there a way to do that through the upgrade mechanism?


- Tom


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117309
-----------------------------------------------------------


On Feb. 1, 2016, 5:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 5:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Tom Beerbower <tb...@hortonworks.com>.

> On Feb. 2, 2016, 12:02 a.m., Alejandro Fernandez wrote:
> > ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml, line 72
> > <https://reviews.apache.org/r/43050/diff/1/?file=1228062#file1228062line72>
> >
> >     RU/EU from HDP 2.2 to 2.3 will need to make these config changes by modifying the config packs.
> >     E.g.,
> >     ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/config-upgrade.xml
> >     <replace key="content" find="foo" replace-with="bar" />
> 
> Tom Beerbower wrote:
>     Thanks for the review!
>     
>     So, would it be something like this? ... 
>     
>         <replace key="content" find="export HADOOP_CLASSPATH=" replace-with="export HADOOP_CLASSPATH={{atlas_conf_dir}}:{{atlas_home_dir}}/hook/hive:" />
>         
>     What if the content doesn't contain "export HADOOP_CLASSPATH="?  That will likely be the case, I think.  In that case I want to add the entire line ...
>     
>         export HADOOP_CLASSPATH={{atlas_conf_dir}}:{{atlas_home_dir}}/hook/hive:${HADOOP_CLASSPATH}
>     
>     Is there a way to do that through the upgrade mechanism?

I'll make the RU change in a separate patch.

I've opened a Jira to track this ...

https://issues.apache.org/jira/browse/AMBARI-14888


- Tom


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117309
-----------------------------------------------------------


On Feb. 2, 2016, 9:17 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 2, 2016, 9:17 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Alejandro Fernandez <af...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117309
-----------------------------------------------------------




ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py (line 119)
<https://reviews.apache.org/r/43050/#comment178469>

    Should check that length is > 0 before getting index [0]
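
    For example (illustrative only; the function and argument names here are placeholders, not the exact params.py code):

        # Sketch: only index into the Kafka broker host list when it is
        # non-empty; fall back to None otherwise.
        def first_kafka_bootstrap_server(kafka_broker_hosts, port=6667):
            if len(kafka_broker_hosts) > 0:
                return '{0}:{1}'.format(kafka_broker_hosts[0], port)
            return None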



ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml (line 72)
<https://reviews.apache.org/r/43050/#comment178471>

    RU/EU from HDP 2.2 to 2.3 will need to make these config changes by modifying the config packs.
    E.g.,
    ambari-server/src/main/resources/stacks/HDP/2.3/upgrades/config-upgrade.xml
    <replace key="content" find="foo" replace-with="bar" />


- Alejandro Fernandez


On Feb. 1, 2016, 5:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 5:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by Tom Beerbower <tb...@hortonworks.com>.

> On Feb. 2, 2016, 4:25 p.m., John Speidel wrote:
> > ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml, line 150
> > <https://reviews.apache.org/r/43050/diff/1/?file=1228057#file1228057line150>
> >
> >     The mechanism of setting the topology-related properties (Kafka and ZooKeeper) in the agent will result in issues with blueprint installs.
> >
> >     Because each node in a blueprint-provisioned cluster is provisioned independently of other nodes, the blueprint topology manager needs to block provisioning of all nodes until all hosts required for configuration topology resolution are known.  This is determined in BlueprintConfigurationProcessor.  Because these configurations are not registered with the blueprint configuration processor, it is possible for the Atlas host to be provisioned before the Kafka hosts are known, which results in the agent failing to set these topology-related properties.
> >
> >     In BlueprintConfigurationProcessor you will need to register updaters for all of the topology-related properties, in this case "atlas.kafka.bootstrap.servers" and "atlas.kafka.zookeeper.connect".
> >     
> >     For example:
> >     atlasPropsMap.put("atlas.kafka.bootstrap.servers", new MultipleHostTopologyUpdater("KAFKA_BROKER"));

Thanks for the review!

I'll make the blueprint changes in a separate patch.

I've opened a new Jira to track this ...

https://issues.apache.org/jira/browse/AMBARI-14887


- Tom


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117413
-----------------------------------------------------------


On Feb. 2, 2016, 9:17 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 2, 2016, 9:17 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>


Re: Review Request 43050: Atlas Integration: Support deploying latest Atlas (which depends on Kafka) using Ambari

Posted by John Speidel <js...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/43050/#review117413
-----------------------------------------------------------



Although the issue that I raised is important, I think that it is OK to merge the patch as-is and to resolve this issue in a follow-up patch.


ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml (line 150)
<https://reviews.apache.org/r/43050/#comment178594>

    The mechanism of setting the topology-related properties (Kafka and ZooKeeper) in the agent will result in issues with blueprint installs.

    Because each node in a blueprint-provisioned cluster is provisioned independently of other nodes, the blueprint topology manager needs to block provisioning of all nodes until all hosts required for configuration topology resolution are known.  This is determined in BlueprintConfigurationProcessor.  Because these configurations are not registered with the blueprint configuration processor, it is possible for the Atlas host to be provisioned before the Kafka hosts are known, which results in the agent failing to set these topology-related properties.

    In BlueprintConfigurationProcessor you will need to register updaters for all of the topology-related properties, in this case "atlas.kafka.bootstrap.servers" and "atlas.kafka.zookeeper.connect".
    
    For example:
    atlasPropsMap.put("atlas.kafka.bootstrap.servers", new MultipleHostTopologyUpdater("KAFKA_BROKER"));


- John Speidel


On Feb. 1, 2016, 5:05 p.m., Tom Beerbower wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/43050/
> -----------------------------------------------------------
> 
> (Updated Feb. 1, 2016, 5:05 p.m.)
> 
> 
> Review request for Ambari, John Speidel, Nate Cole, and Robert Levas.
> 
> 
> Bugs: AMBARI-14853
>     https://issues.apache.org/jira/browse/AMBARI-14853
> 
> 
> Repository: ambari
> 
> 
> Description
> -------
> 
> Three additional steps need to be done to install Atlas 0.6 via Ambari.
> 
> 1. Add new Atlas Kafka related properties to the Atlas configuration ‘application.properties’
> 
>     atlas.notification.embedded = false
>     atlas.kafka.data = /tmp 
>     atlas.kafka.bootstrap.servers = c6401.ambari.apache.org:6667
>     atlas.kafka.zookeeper.connect = c6401.ambari.apache.org:2181
>     atlas.kafka.hook.group.id = atlas
>     atlas.kafka.entities.group.id = entities
> 
> 
> * Note: 
> For “atlas.kafka.bootstrap.servers” and “atlas.kafka.zookeeper.connect”, modify host names based on your cluster topology.  
> The directory specified in “atlas.kafka.data” should exist.
> 
> 2. Add an export of HADOOP_CLASSPATH which includes the required atlas directories to hive-env.xml in the 2.3 HDP stack
>     export HADOOP_CLASSPATH=/etc/atlas/conf:/usr/hdp/current/atlas-server/hook/hive:${HADOOP_CLASSPATH}
> 
> *Note:
> It is important that the atlas directories are prepended to the existing classpath.
> 
> 3. Restart the Atlas and Hive services after the cluster is fully provisioned
> 
> 
> Diffs
> -----
> 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/configuration/application-properties.xml 82dacb6 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/metainfo.xml 2600fc4 
>   ambari-server/src/main/resources/common-services/ATLAS/0.1.0.2.3/package/scripts/params.py 1a0c67b 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/configuration/hive-env.xml 6db42c9 
>   ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/params_linux.py a2131b0 
>   ambari-server/src/main/resources/stacks/HDP/2.3/services/HIVE/configuration/hive-env.xml 92c0c03 
>   ambari-server/src/test/python/stacks/2.3/configs/default.json 21bff13 
> 
> Diff: https://reviews.apache.org/r/43050/diff/
> 
> 
> Testing
> -------
> 
> Manual test and verify configuration and Atlas operation.
> 
> mvn clean test : all tests pass
> 
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 01:02 h
> [INFO] Finished at: 2016-02-01T11:56:24-05:00
> [INFO] Final Memory: 44M/1696M
> [INFO] ------------------------------------------------------------------------
> 
> 
> Thanks,
> 
> Tom Beerbower
> 
>