Posted to user@ambari.apache.org by Andrey Klochkov <ak...@griddynamics.com> on 2016/06/14 21:19:01 UTC

Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Hi,
Should Ambari upload tarballs to /hdp/apps/<hdp-version> in HDFS when
creating a cluster from a blueprint?

It seems it doesn't do that in our case. I see that there's an
Ambaripreupload.py script in the Ambari sources that does that, but I can't
figure out how that script is supposed to be executed. There are no references
to it in the source code and nothing in the logs. How can I troubleshoot this?
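
For reference, this is roughly how I have been checking whether anything landed
in HDFS (just a sketch on my side; the stack version and the tarball subpaths
below are what I'd expect from a typical HDP layout, not something I found
documented for blueprint deployments):

    #!/usr/bin/env python
    # Check whether /hdp/apps/<hdp-version> and the usual tarballs exist in HDFS.
    # The version string and the subpaths are examples/assumptions -- adjust them
    # for your own stack.
    import subprocess

    HDP_VERSION = "2.4.2.0-258"                  # our stack version
    BASE = "/hdp/apps/%s" % HDP_VERSION          # where the tarballs are expected
    EXPECTED = ["mapreduce/mapreduce.tar.gz", "tez/tez.tar.gz", "hive/hive.tar.gz"]

    for path in [BASE] + ["%s/%s" % (BASE, p) for p in EXPECTED]:
        # 'hdfs dfs -test -e' exits 0 if the path exists, non-zero otherwise
        rc = subprocess.call(["hdfs", "dfs", "-test", "-e", path])
        print("%-8s %s" % ("OK" if rc == 0 else "MISSING", path))

In our case even the top-level /hdp directory comes back as missing.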

Thanks!

-- 
Andrey Klochkov

Re: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Posted by Andrey Klochkov <ak...@griddynamics.com>.
Thanks, Sumit. The version is 2.4.2.0-258. Today we recreated the
environment and this time it worked fine. I'm not sure what the problem was. We
will be creating and recreating these environments, so I'll update this thread
if we hit it again.

On Tue, Jun 14, 2016 at 7:23 PM, Sumit Mohanty <sm...@hortonworks.com>
wrote:

> Blueprint deployments also upload the tarballs as needed. The pre-upload
> Python script is for different scenarios.
>
>
> What version of Ambari are you using, and what version of the stack is it?
>
>
>
> ------------------------------
> *From:* Alejandro Fernandez
> *Sent:* Tuesday, June 14, 2016 5:18 PM
> *To:* Andrey Klochkov; Sumit Mohanty; Andrew Onishuk
> *Cc:* user@ambari.apache.org
>
> *Subject:* Re: Ambari not creating /hdp/apps/<hdp-version> when using a
> blueprint
>
> + Andrew and Sumit, who may know more about how BPs are deployed
>
> From: Andrey Klochkov <ak...@griddynamics.com>
> Date: Tuesday, June 14, 2016 at 5:12 PM
> To: Alejandro Fernandez <af...@hortonworks.com>
> Cc: "user@ambari.apache.org" <us...@ambari.apache.org>
> Subject: Re: Ambari not creating /hdp/apps/<hdp-version> when using a
> blueprint
>
> Alejandro,
> All the tarballs are missing, the "/hdp" directory itself doesn't exist.
> Ambari shows that all services are up but I'm getting FileNotFoundException
> for mapreduce.tar.gz when trying to run MR jobs.
>
> How can I execute these ops after installation?
>
> Can somebody check if these are invoked when deploying via blueprints?
>
> Thanks for your help!
>
> On Tue, Jun 14, 2016 at 4:32 PM, Alejandro Fernandez <
> afernandez@hortonworks.com> wrote:
>
>> Which tarballs are missing?
>>
>> They are uploaded to HDFS when certain services start, e.g., Hive Server,
>> History Server, Tez Service Check, Spark Service Check.
>> You can add logging to the method copy_to_hdfs in copy_tarball.py to
>> ensure it is called.
>> Even after installation, you can perform these ops to try uploading
>> tarballs.
>>
>> For VMs deployed via blueprints, I'm not quite sure whether we skip those
>> steps to improve start-up time.
>>
>> Thanks,
>> Alejandro
>>
>> From: Andrey Klochkov <ak...@griddynamics.com>
>> Reply-To: "user@ambari.apache.org" <us...@ambari.apache.org>
>> Date: Tuesday, June 14, 2016 at 2:19 PM
>> To: "user@ambari.apache.org" <us...@ambari.apache.org>
>> Subject: Ambari not creating /hdp/apps/<hdp-version> when using a
>> blueprint
>>
>> Hi,
>> Should Ambari upload tarballs to /hdp/apps/<hdp-version> in HDFS when
>> creating a cluster from a blueprint?
>>
>> It seems it doesn't do that in our case. I see that there's an
>> Ambaripreupload.py script in the Ambari sources that does that, but I can't
>> figure out how that script is supposed to be executed. There are no references
>> to it in the source code and nothing in the logs. How can I troubleshoot this?
>>
>> Thanks!
>>
>> --
>> Andrey Klochkov
>>
>>
>
>
> --
> Andrey Klochkov
> Grid Dynamics
> Skype: aklochkov_gd
> www.griddynamics.com
> aklochkov@griddynamics.com
>



-- 
Andrey Klochkov
Grid Dynamics
Skype: aklochkov_gd
www.griddynamics.com
aklochkov@griddynamics.com

Re: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Posted by Sumit Mohanty <sm...@hortonworks.com>.
Blueprint deployments also upload the tarballs as needed. The pre-upload Python script is for different scenarios.


What version of Ambari are you using, and what version of the stack is it?



________________________________
From: Alejandro Fernandez
Sent: Tuesday, June 14, 2016 5:18 PM
To: Andrey Klochkov; Sumit Mohanty; Andrew Onishuk
Cc: user@ambari.apache.org
Subject: Re: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

+ Andrew and Sumit, who may know more about how BPs are deployed

From: Andrey Klochkov <ak...@griddynamics.com>
Date: Tuesday, June 14, 2016 at 5:12 PM
To: Alejandro Fernandez <af...@hortonworks.com>
Cc: "user@ambari.apache.org" <us...@ambari.apache.org>
Subject: Re: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Alejandro,
All the tarballs are missing, the "/hdp" directory itself doesn't exist. Ambari shows that all services are up but I'm getting FileNotFoundException for mapreduce.tar.gz when trying to run MR jobs.

How can I execute these ops after installation?

Can somebody check if these are invoked when deploying via blueprints?

Thanks for your help!

On Tue, Jun 14, 2016 at 4:32 PM, Alejandro Fernandez <af...@hortonworks.com> wrote:
Which tarballs are missing?

They are uploaded to HDFS when certain services start, e.g., Hive Server, History Server, Tez Service Check, Spark Service Check.
You can add logging to the method copy_to_hdfs in copy_tarball.py to ensure it is called.
Even after installation, you can perform these ops to try uploading tarballs.

For VMs deployed via blueprints, I'm not quite sure whether we skip those steps to improve start-up time.

Thanks,
Alejandro

From: Andrey Klochkov <ak...@griddynamics.com>
Reply-To: "user@ambari.apache.org" <us...@ambari.apache.org>
Date: Tuesday, June 14, 2016 at 2:19 PM
To: "user@ambari.apache.org" <us...@ambari.apache.org>
Subject: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Hi,
Should Ambari upload tarballs to /hdp/apps/<hdp-version> in HDFS when creating a cluster from a blueprint?

It seems it doesn't do that in our case. I see that there's an Ambaripreupload.py script in the Ambari sources that does that, but I can't figure out how that script is supposed to be executed. There are no references to it in the source code and nothing in the logs. How can I troubleshoot this?

Thanks!

--
Andrey Klochkov




--
Andrey Klochkov
Grid Dynamics
Skype: aklochkov_gd
www.griddynamics.com
aklochkov@griddynamics.com

Re: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Posted by Alejandro Fernandez <af...@hortonworks.com>.
+ Andrew and Sumit, who may know more about how BPs are deployed

From: Andrey Klochkov <ak...@griddynamics.com>
Date: Tuesday, June 14, 2016 at 5:12 PM
To: Alejandro Fernandez <af...@hortonworks.com>
Cc: "user@ambari.apache.org" <us...@ambari.apache.org>
Subject: Re: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Alejandro,
All the tarballs are missing, the "/hdp" directory itself doesn't exist. Ambari shows that all services are up but I'm getting FileNotFoundException for mapreduce.tar.gz when trying to run MR jobs.

How can I execute these ops after installation?

Can somebody check if these are invoked when deploying via blueprints?

Thanks for your help!

On Tue, Jun 14, 2016 at 4:32 PM, Alejandro Fernandez <af...@hortonworks.com> wrote:
Which tarballs are missing?

They are uploaded to HDFS when certain services start, e.g., Hive Server, History Server, Tez Service Check, Spark Service Check.
You can add logging to the method copy_to_hdfs in copy_tarball.py to ensure it is called.
Even after installation, you can perform these ops to try uploading tarballs.

For VMs deployed via blueprints, I'm not quite sure whether we skip those steps to improve start-up time.

Thanks,
Alejandro

From: Andrey Klochkov <ak...@griddynamics.com>
Reply-To: "user@ambari.apache.org" <us...@ambari.apache.org>
Date: Tuesday, June 14, 2016 at 2:19 PM
To: "user@ambari.apache.org" <us...@ambari.apache.org>
Subject: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Hi,
Should Ambari upload tarballs to /hdp/apps/<hdp-version> in HDFS when creating a cluster from a blueprint?

It seems it doesn't do that in our case. I see that there's an Ambaripreupload.py script in the Ambari sources that does that, but I can't figure out how that script is supposed to be executed. There are no references to it in the source code and nothing in the logs. How can I troubleshoot this?

Thanks!

--
Andrey Klochkov




--
Andrey Klochkov
Grid Dynamics
Skype: aklochkov_gd
www.griddynamics.com
aklochkov@griddynamics.com

Re: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Posted by Andrey Klochkov <ak...@griddynamics.com>.
Alejandro,
All the tarballs are missing, the "/hdp" directory itself doesn't exist.
Ambari shows that all services are up but I'm getting FileNotFoundException
for mapreduce.tar.gz when trying to run MR jobs.

How can I execute these ops after installation?
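
In case it helps to be concrete, would something like the following be a sane
manual workaround for the MapReduce tarball in the meantime? (Just a sketch on
my side; the local path, HDFS path, ownership and permissions are what I'd
guess from a typical HDP layout, so please correct me if they're wrong.)

    # Manually upload mapreduce.tar.gz to the location YARN/MapReduce expects.
    # Paths, owner and modes below are assumptions, not an Ambari-documented procedure.
    import subprocess

    HDP_VERSION = "2.4.2.0-258"
    LOCAL_TARBALL = "/usr/hdp/%s/hadoop/mapreduce.tar.gz" % HDP_VERSION  # assumed local path
    HDFS_DIR = "/hdp/apps/%s/mapreduce" % HDP_VERSION

    def hdfs(*args):
        # Run an 'hdfs dfs' command as the hdfs superuser; raise if it fails.
        subprocess.check_call(["sudo", "-u", "hdfs", "hdfs", "dfs"] + list(args))

    hdfs("-mkdir", "-p", HDFS_DIR)
    hdfs("-put", "-f", LOCAL_TARBALL, HDFS_DIR + "/mapreduce.tar.gz")
    hdfs("-chown", "-R", "hdfs:hadoop", "/hdp")
    hdfs("-chmod", "-R", "555", HDFS_DIR)                  # directories read-only
    hdfs("-chmod", "444", HDFS_DIR + "/mapreduce.tar.gz")  # tarball read-only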

Can somebody check if these are invoked when deploying via blueprints?

Thanks for your help!

On Tue, Jun 14, 2016 at 4:32 PM, Alejandro Fernandez <
afernandez@hortonworks.com> wrote:

> Which tarballs are missing?
>
> They are uploaded to HDFS when certain services start, e.g., Hive Server,
> History Server, Tez Service Check, Spark Service Check.
> You can add logging to the method copy_to_hdfs in copy_tarball.py to
> ensure it is called.
> Even after installation, you can perform these ops to try uploading
> tarballs.
>
> For VMs deployed via blueprints, I'm not quite sure whether we skip those
> steps to improve start-up time.
>
> Thanks,
> Alejandro
>
> From: Andrey Klochkov <ak...@griddynamics.com>
> Reply-To: "user@ambari.apache.org" <us...@ambari.apache.org>
> Date: Tuesday, June 14, 2016 at 2:19 PM
> To: "user@ambari.apache.org" <us...@ambari.apache.org>
> Subject: Ambari not creating /hdp/apps/<hdp-version> when using a
> blueprint
>
> Hi,
> Should Ambari upload tarballs to /hdp/apps/<hdp-version> in HDFS when
> creating a cluster from a blueprint?
>
> It seems it doesn't do that in our case. I see that there's an
> Ambaripreupload.py script in the Ambari sources that does that, but I can't
> figure out how that script is supposed to be executed. There are no references
> to it in the source code and nothing in the logs. How can I troubleshoot this?
>
> Thanks!
>
> --
> Andrey Klochkov
>
>


-- 
Andrey Klochkov
Grid Dynamics
Skype: aklochkov_gd
www.griddynamics.com
aklochkov@griddynamics.com

Re: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Posted by Alejandro Fernandez <af...@hortonworks.com>.
Which tarballs are missing?

They are uploaded to HDFS when certain services start, e.g., Hive Server, History Server, Tez Service Check, Spark Service Check.
You can add logging to the method copy_to_hdfs in copy_tarball.py to ensure it is called.
Even after installation, you can perform these ops to try uploading tarballs.
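
To make the logging idea concrete, something along these lines near the top of
copy_to_hdfs should show up in the agent command output (typically under
/var/lib/ambari-agent/data/output-*.txt) whenever the function is reached. This
is only a sketch: the module path and the real argument list of copy_to_hdfs
differ between Ambari versions, so adapt it to what is actually in your
copy_tarball.py.

    # resource_management/libraries/functions/copy_tarball.py (path may vary by version)
    from resource_management.core.logger import Logger

    def copy_to_hdfs(name, user_group, owner, **kwargs):  # abbreviated/assumed signature
        Logger.info("copy_to_hdfs called for tarball '%s' (owner=%s, group=%s)"
                    % (name, owner, user_group))
        # ... existing implementation continues unchanged ...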

For VMs deployed via blueprints, I'm not quite sure whether we skip those steps to improve start-up time.

Thanks,
Alejandro

From: Andrey Klochkov <ak...@griddynamics.com>
Reply-To: "user@ambari.apache.org" <us...@ambari.apache.org>
Date: Tuesday, June 14, 2016 at 2:19 PM
To: "user@ambari.apache.org" <us...@ambari.apache.org>
Subject: Ambari not creating /hdp/apps/<hdp-version> when using a blueprint

Hi,
Should Ambari upload tarballs to /hdp/apps/<hdp-version> in HDFS when creating a cluster from a blueprint?

It seems it doesn't do that in our case. I see that there's an Ambaripreupload.py script in the Ambari sources that does that, but I can't figure out how that script is supposed to be executed. There are no references to it in the source code and nothing in the logs. How can I troubleshoot this?

Thanks!

--
Andrey Klochkov