Posted to user@ambari.apache.org by Vitaly Brodetskyi <vb...@hortonworks.com> on 2018/07/26 19:21:55 UTC

Re: install HDP3.0 using ambari

Hi Lian Jiang


    According to the stack trace from the SPARK2 service, it looks like you don't have the HIVE service installed on your cluster. If the HIVE service were installed, the "hive-env" config would definitely be available. As far as I know, HIVE is a required service for SPARK2.
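
A quick way to double-check is to ask the Ambari REST API whether the HIVE service exists in the cluster. This is just a sketch in Python; host, cluster name, and credentials are placeholders:

import requests

# 200 means the HIVE service is part of the cluster; 404 means it is not.
r = requests.get(
    "http://ambari-host:8080/api/v1/clusters/mycluster/services/HIVE",
    auth=("admin", "admin"),
)
print(r.status_code)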


Regards

Vitalyi

________________________________
From: Lian Jiang <ji...@gmail.com>
Sent: July 26, 2018, 22:08
To: user@ambari.apache.org
Subject: Re: install HDP3.0 using ambari

During migration, spark2 and livy2-server fail to start due to:


2018-07-26 18:18:09,024 - The 'livy2-server' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (3.0.0.0-1634). This is the version that will be reported.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/livy2_server.py", line 148, in <module>
    LivyServer().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 353, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/livy2_server.py", line 43, in install
    import params
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/params.py", line 220, in <module>
    if hive_metastore_db_type == "mssql":
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/config_dictionary.py", line 73, in __getattr__
    raise Fail("Configuration parameter '" + self.name + "' was not found in configurations dictionary!")
resource_management.core.exceptions.Fail: Configuration parameter 'hive-env' was not found in configurations dictionary!
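
For context, Ambari's ConfigDictionary raises Fail instead of KeyError when a config type is absent, which is why params.py blows up on the hive-env lookup. A minimal sketch of the access pattern and a guarded variant (the 'hive' fallback is a hypothetical default, not what the stack script does; this only runs inside an agent command context):

from resource_management.libraries.script.script import Script

config = Script.get_config()

# The failing pattern: ConfigDictionary raises Fail when 'hive-env' is absent.
# hive_user = config['configurations']['hive-env']['hive_user']

# A guarded variant; the 'hive' fallback is a hypothetical default.
if 'hive-env' in config['configurations']:
    hive_user = config['configurations']['hive-env']['hive_user']
else:
    hive_user = 'hive'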


I observed that:
/var/lib/ambari-agent/cache/stacks/HDP/2.6/services/YARN/configuration/yarn-site.xml does have:

<property>
    <name>yarn.nodemanager.kill-escape.user</name>
    <value>hive</value>
    <depends-on>
      <property>
        <type>hive-env</type>
        <name>hive_user</name>
      </property>
    </depends-on>
    <on-ambari-upgrade add="false"/>
  </property>

/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/YARN/configuration/yarn-site.xml doesn't.


The following files are identical:

/var/lib/ambari-agent/cache/stacks/HDP/2.6/services/SPARK2/configuration/livy2-env.xml

vs
/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/configuration/livy2-env.xml



/var/lib/ambari-agent/cache/stacks/HDP/2.6/services/SPARK2/configuration/livy2-conf.xml

vs
/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/configuration/livy2-conf.xml



I don't see anything wrong in
/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/configuration/spark2-defaults.xml
/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/configuration/spark2-env.xml

either.



Any ideas would be highly appreciated! Thanks.

On Tue, Jul 24, 2018 at 3:56 PM, Lian Jiang <ji...@gmail.com> wrote:
Thanks. I will try option 1, given that I cannot find enough documentation/examples online for the blueprint schema changes.

On Tue, Jul 24, 2018 at 3:49 PM, Benoit Perroud <be...@noisette.ch> wrote:
HDP 3 don’t have any more spark (1.x), only spark2.

In general, old blueprints are not fully compatible and have to be tweaked a bit.

I see two options from where you are:

1) Upgrade your current blueprint, i.e. use it with HDP 2.6+, run the upgrade wizard from Ambari 2.7 to HDP 3, and export a new version of the blueprint (a sketch of the export call is below).
2) Manually update the blueprint and remove the spark-defaults section it has. This still doesn't guarantee the blueprint will work; you might need to do more customisation.
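
For option 1, the export step can be done with the ?format=blueprint query against the upgraded cluster. A Python sketch; host, cluster name, and credentials are placeholders:

import requests

# Export the upgraded cluster's configuration as a blueprint (option 1 above).
resp = requests.get(
    "http://ambari-host:8080/api/v1/clusters/mycluster?format=blueprint",
    auth=("admin", "admin"),
)
resp.raise_for_status()
with open("hdp3-blueprint.json", "w") as f:
    f.write(resp.text)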

Benoit




On 25 Jul 2018, at 00:05, Lian Jiang <ji...@gmail.com> wrote:

Thanks Benoit for the advice.

I switched to Ambari 2.7. However, when I created the cluster, it failed due to "config types are not defined in the stack: [spark-defaults]".

The links below point to a spec older than Ambari 2.7.
https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-BlueprintStructure
https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_using_ambari_blueprints.html

https://github.com/apache/ambari/tree/release-2.7.0/ambari-server/src/main/resources/stacks/HDP does not have HDP3.0. This makes it hard to troubleshoot.
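
One workaround in the meantime: the running Ambari 2.7 server ships the stack definitions itself (on the server host, under /var/lib/ambari-server/resources/stacks) and exposes them over REST, so you can list the config types a service supports. A sketch with placeholder host and credentials:

import requests

# List the config types the SPARK2 service defines in the HDP 3.0 stack.
r = requests.get(
    "http://ambari-host:8080/api/v1/stacks/HDP/versions/3.0"
    "/services/SPARK2/configurations",
    auth=("admin", "admin"),
)
r.raise_for_status()
types = {item["StackConfigurations"]["type"] for item in r.json()["items"]}
print(sorted(types))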

Do you know where I can find the source code of the HDP3.0 Ambari stack so that I can check which configs are supported in the new Ambari?

Thanks.



On Mon, Jul 23, 2018 at 2:35 PM, Benoit Perroud <be...@noisette.ch> wrote:
Are you using Ambari 2.7?

Make sure you upgrade Ambari to 2.7 first, since this version is required for HDP 3.

Benoit


On 23 Jul 2018, at 23:32, Lian Jiang <ji...@gmail.com> wrote:

Hi,

I am using ambari blueprint to install HDP 3.0 and cannot register the vdf file.

The VDF file is (the URL works):

{
  "VersionDefinition": {
     "version_url": "http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml"
  }
}
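
For reference, registration is a POST to the version_definitions endpoint; a sketch with placeholder host and credentials:

import json
import requests

# Register the VDF; the metainfo error below is returned by this call.
body = {"VersionDefinition": {"version_url":
    "http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml"}}
r = requests.post(
    "http://ambari-host:8080/api/v1/version_definitions",
    auth=("admin", "admin"),
    headers={"X-Requested-By": "ambari"},
    data=json.dumps(body),
)
print(r.status_code, r.text)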

The error is "An internal system exception occurred: Stack data, Stack HDP 3.0 is not found in Ambari metainfo"

Any idea? Thanks.






Re: Re: install HDP3.0 using ambari

Posted by Lian Jiang <ji...@gmail.com>.
Ambari 2.7 cannot create Ranger repositories for components such as HDFS.
The error is:

2018-07-27 21:58:24,712 [http-bio-6080-exec-9] INFO  org.apache.ranger.common.RESTErrorUtil (RESTErrorUtil.java:345) - Request failed. loginId=amb_ranger_admin, logMessage=User is not allowed to access the API
javax.ws.rs.WebApplicationException
        at org.apache.ranger.common.RESTErrorUtil.createRESTException(RESTErrorUtil.java:337)
        at org.apache.ranger.security.context.RangerPreAuthSecurityHandler.isAPISpnegoAccessible(RangerPreAuthSecurityHandler.java:106)



My settings are:

"ranger-env": {
  "properties_attributes": {},
  "properties": {
    "admin_username": "admin",
    "admin_password": "%SERVICE_PASSWORD%",
    "bind_anonymous": "false",
    "create_db_dbuser": "false",
    "is_solrCloud_enabled": "true",
    "keyadmin_user_password": "%SERVICE_PASSWORD%",
    "rangertagsync_user_password": "%SERVICE_PASSWORD%",
    "rangerusersync_user_password": "%SERVICE_PASSWORD%",
    "ranger_admin_username": "amb_ranger_admin",
    "ranger_admin_password": "%SERVICE_PASSWORD%",
    "ranger-hbase-plugin-enabled": "Yes",
    "ranger-hdfs-plugin-enabled": "Yes",
    "ranger-hive-plugin-enabled": "Yes",
    "ranger-kafka-plugin-enabled": "Yes",
    "ranger-knox-plugin-enabled": "Yes",
    "ranger-storm-plugin-enabled": "Yes",
    "ranger-yarn-plugin-enabled": "Yes",
    "ranger_group": "ranger",
    "ranger_user": "ranger",
    "ranger_privelege_user_jdbc_url": "jdbc:oracle:thin:@//%ORACEL_DB_HOST%",
    "xasecure.audit.destination.hdfs": "true",
    "xasecure.audit.destination.hdfs.dir": "hdfs://%ENV%-cluster/ranger/audit",
    "xasecure.audit.destination.solr": "true",
    "xasecure.audit.destination.solr.zookeepers": "%ENV%-namenode.%SUBNET%.%VCN%.oraclevcn.com:2181/infra-solr"
  }
}

I see that amb_ranger_admin has the User role instead of the Admin role in the Ranger admin UI. However, the same settings worked in HDP2.6. What changed in HDP3.0 so that amb_ranger_admin can no longer create repositories? I'd appreciate any clue.
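
To confirm the role programmatically, Ranger's xusers endpoint can be queried; a sketch assuming the stock API path, with placeholder host and credentials:

import requests

# Check which role Ranger assigned to amb_ranger_admin.
# ROLE_SYS_ADMIN is expected; ROLE_USER matches the error above.
r = requests.get(
    "http://ranger-host:6080/service/xusers/users/userName/amb_ranger_admin",
    auth=("admin", "admin"),
    headers={"Accept": "application/json"},
)
r.raise_for_status()
print(r.json().get("userRoleList"))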


Re: Re: install HDP3.0 using ambari

Posted by Lian Jiang <ji...@gmail.com>.
Thanks. I don't have Hive installed in HDP2.6, which ships Spark 2.2.0. I didn't know that Spark 2.3.1 introduces a hard dependency on Hive, and I did not find such info in the Spark 2.3.1 documentation.

I will try installing Hive on my HDP3.0 cluster.
