Posted to user@hive.apache.org by Mudit Kumar <mu...@askme.in> on 2016/07/27 19:31:20 UTC

Hive on spark

Hi All,

I need to configure a Hive cluster that uses Spark as the execution engine (on YARN).
I already have a running Hadoop cluster.

Can someone point me to relevant documentation?

TIA.

Thanks,
Mudit


Re: Hive on spark

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi,

You can download the PDF from here:
<https://talebzadehmich.files.wordpress.com/2016/08/hive_on_spark_only.pdf>

HTH

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


Disclaimer: Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




Re: Hive on spark

Posted by Chandrakanth Akkinepalli <ch...@gmail.com>.
Hi Dr.Mich,
Can you please share your London meetup presentation. Curious to see the comparison according to you of various query engines.

Thanks,
Chandra


Re: Hive on spark

Posted by Mudit Kumar <mu...@askme.in>.
Thanks Guys for the help!

Thanks,
Mudit


Re: Hive on spark

Posted by Mich Talebzadeh <mi...@gmail.com>.
Hi,

I gave a presentation in London on 20th July on this subject. In it I
explained how to make Spark work as an execution engine for Hive:

Query Engines for Hive, MR, Spark, Tez and LLAP – Considerations!
<http://www.meetup.com/futureofdata-london/events/232423292/>

Let me see if I can send the presentation.

Cheers


Dr Mich Talebzadeh







Re: Hive on spark

Posted by karthi keyan <ka...@gmail.com>.
Mudit,

This link should guide you:
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

Thanks,
Karthik
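
For quick reference, the Getting Started guide above boils down to a few configuration steps. A minimal sketch follows; the tuning values and the table name are illustrative only, and the exact jar-linking step and supported Hive/Spark version pairs depend on your releases, so check the guide:

```sql
-- Minimal Hive on Spark setup (illustrative sketch, not a complete recipe).
--
-- 1. Make Spark's jars visible to Hive, e.g. by linking them into
--    $HIVE_HOME/lib or via hive.aux.jars.path (see the Getting Started guide;
--    the mechanism differs between Spark 1.x assembly jars and Spark 2.x).
--
-- 2. Point Hive at YARN-backed Spark and switch the execution engine,
--    either in hive-site.xml or per session:
set hive.execution.engine=spark;
set spark.master=yarn;
set spark.executor.memory=2g;      -- example values; size to your cluster
set spark.executor.instances=4;

-- 3. Run a query; progress should now report "Hive on Spark" stages:
select count(*) from some_table;   -- some_table is a placeholder
```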


Re: Hive on spark

Posted by Mudit Kumar <mu...@askme.in>.
Yes Mich, exactly.

Thanks,
Mudit


Re: Hive on spark

Posted by Mich Talebzadeh <mi...@gmail.com>.
You mean you want to run Hive using Spark as the execution engine, which
uses YARN by default?


Something like below

hive> select max(id) from oraclehadoop.dummy_parquet;
Starting Spark Job = 8218859d-1d7c-419c-adc7-4de175c3ca6d
Query Hive on Spark job[1] stages:
2
3
Status: Running (Hive on Spark job[1])
Job Progress Format
CurrentTime StageId_StageAttemptId:
SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount
[StageCost]
2016-07-27 20:38:17,269 Stage-2_0: 0(+8)/24     Stage-3_0: 0/1
2016-07-27 20:38:20,298 Stage-2_0: 8(+4)/24     Stage-3_0: 0/1
2016-07-27 20:38:22,309 Stage-2_0: 11(+1)/24    Stage-3_0: 0/1
2016-07-27 20:38:23,330 Stage-2_0: 12(+8)/24    Stage-3_0: 0/1
2016-07-27 20:38:26,360 Stage-2_0: 17(+7)/24    Stage-3_0: 0/1
2016-07-27 20:38:27,386 Stage-2_0: 20(+4)/24    Stage-3_0: 0/1
2016-07-27 20:38:28,391 Stage-2_0: 21(+3)/24    Stage-3_0: 0/1
2016-07-27 20:38:29,395 Stage-2_0: 24/24 Finished       Stage-3_0: 1/1
Finished
Status: Finished successfully in 13.14 seconds
OK
100000000
Time taken: 13.426 seconds, Fetched: 1 row(s)
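
The progress lines above follow the Job Progress Format stated in the header: each stage entry reads Succeeded(+Running-Failed)/Total, so "8(+4)/24" means 8 tasks succeeded, 4 running, out of 24. As a hypothetical illustration (this helper is not part of Hive), a few lines of Python make the fields explicit:

```python
import re

# One stage entry of Hive on Spark's Job Progress Format:
#   Stage-<id>_<attempt>: Succeeded(+Running-Failed)/Total
# The "(+Running-Failed)" part is omitted once a stage is finished.
STAGE_RE = re.compile(
    r"Stage-(?P<stage>\d+)_(?P<attempt>\d+):\s+"
    r"(?P<succeeded>\d+)"
    r"(?:\(\+(?P<running>\d+)(?:-(?P<failed>\d+))?\))?"
    r"/(?P<total>\d+)"
)

def parse_stage(text):
    """Parse the first stage entry found in a progress line, or None."""
    m = STAGE_RE.search(text)
    if m is None:
        return None
    # Groups that did not participate (running/failed) default to 0.
    return {k: int(v) for k, v in m.groupdict(default="0").items()}

line = "2016-07-27 20:38:20,298 Stage-2_0: 8(+4)/24     Stage-3_0: 0/1"
print(parse_stage(line))
# -> {'stage': 2, 'attempt': 0, 'succeeded': 8, 'running': 4, 'failed': 0, 'total': 24}
```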


HTH

Dr Mich Talebzadeh





