Posted to user@hive.apache.org by 469564481 <46...@qq.com> on 2016/04/08 12:30:25 UTC

Re: Work on Spark engine for Hive

I have not installed the Spark engine.
I can connect to Hive over JDBC and execute SQL (CREATE, DROP, ...), but the ODBC test case (HiveClientTest) can connect to Hive yet cannot execute SQL.
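
Roughly what my JDBC test does is the sketch below; the host, port, credentials and table name are placeholders rather than my actual settings, and it assumes the hive-jdbc driver is on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HiveJdbcSmokeTest {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC driver class shipped with Hive
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            // Placeholder host, port, database and credentials
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement()) {

                // DDL statements like these succeed when going through JDBC
                stmt.execute("CREATE TABLE IF NOT EXISTS jdbc_smoke_test (id INT, name STRING)");
                stmt.execute("DROP TABLE IF EXISTS jdbc_smoke_test");
            }
        }
    }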




------------------ Original message ------------------
From: "Mich Talebzadeh" <mi...@gmail.com>
Sent: Friday, 8 April 2016, 5:02 PM
To: "user" <us...@hive.apache.org>; "user @spark" <us...@spark.apache.org>

Subject: Work on Spark engine for Hive



Hi,


Is there any scheduled work to enable Hive to use a recent version of the Spark engine?


This is becoming an issue as some applications have to rely on the MapReduce engine for operations on Hive 2, which is serial and slow.


Thanks


 
Dr Mich Talebzadeh
 
 
 
LinkedIn  https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
 
 
 
http://talebzadehmich.wordpress.com

Re: Work on Spark engine for Hive

Posted by Szehon Ho <sz...@cloudera.com>.
I only know that the latest released CDH version does ship Hive (based on
1.2) on Spark 1.6, though admittedly I have not tested the Hive 2.0 branch
on that. So I would recommend you try the latest 1.6-based Spark assembly
from CDH (the version that we test) to rule out the possibility that the
problem comes from building it differently.

If there are still issues, it seems the dependency conflict would be down to
the Hive 2.x branch. That's my hunch, as it's plausible that a lot more
libraries have been added by the community to the Hive 2.x branch than to
Hive 1.x. Let us know your findings after trying that one, and we can look
further.

Thanks,
Szehon

On Fri, Apr 8, 2016 at 1:03 PM, Mich Talebzadeh <mi...@gmail.com>
wrote:

> The fundamental problem seems to be the spark-assembly-n.n.n-hadoopn.n.n.jar
> libraries, which are incompatible and cause issues. For example, Hive does
> not work with the existing Spark 1.6.1 binaries. In other words, if you set
> hive.execution.engine as follows in $HIVE_HOME/conf/hive-site.xml:
>
>     <property>
>       <name>hive.execution.engine</name>
>       <value>spark</value>
>       <description>
>         Expects one of [mr, tez, spark].
>         Chooses execution engine. Options are: mr (Map reduce, default),
>         tez, spark. While MR remains the default engine for historical
>         reasons, it is itself a historical engine and is deprecated in the
>         Hive 2 line. It may be removed without further warning.
>       </description>
>     </property>
>
> Hive will crash.
>
> In short, it currently only works for me with Spark 1.3.1: install the
> Spark 1.3.1 binaries and put the Spark assembly jar
> spark-assembly-1.3.1-hadoop2.4.0.jar (extracted from a Spark 1.3.1 source
> build) into $HIVE_HOME/lib.
>
> Afterwards, whenever you invoke Hive, you will need to initialise the
> session with the following:
>
> set spark.home=/usr/lib/spark-1.3.1-bin-hadoop2.6;
> set hive.execution.engine=spark;
> set spark.master=yarn-client;
>
> This is just a work-around, which is not what you want.
>
> HTH
>
>
>
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 8 April 2016 at 19:16, Szehon Ho <sz...@cloudera.com> wrote:
>
>> Yes, that is a good goal that we will have to get to eventually. To be
>> honest, I was not aware that it is not working.
>>
>> Can you let us know what is broken with Hive 2 on Spark 1.6.1, preferably
>> by filing a JIRA on the HIVE side?
>>
>> On Fri, Apr 8, 2016 at 7:47 AM, Mich Talebzadeh <
>> mich.talebzadeh@gmail.com> wrote:
>>
>>> This is a different thing. The question is when Hive 2 will be able to
>>> run on the installed Spark 1.6.1 binaries as its execution engine.
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>> On 8 April 2016 at 11:30, 469564481 <46...@qq.com> wrote:
>>>
>>>> I have not installed the Spark engine.
>>>> I can connect to Hive over JDBC and execute SQL (CREATE, DROP, ...),
>>>> but the ODBC test case (HiveClientTest) can connect to Hive yet cannot
>>>> execute SQL.
>>>>
>>>>
>>>> ------------------ Original message ------------------
>>>> From: "Mich Talebzadeh" <mi...@gmail.com>
>>>> Sent: Friday, 8 April 2016, 5:02 PM
>>>> To: "user" <us...@hive.apache.org>; "user @spark" <user@spark.apache.org>
>>>> Subject: Work on Spark engine for Hive
>>>>
>>>> Hi,
>>>>
>>>> Is there any scheduled work to enable Hive to use a recent version of
>>>> the Spark engine?
>>>>
>>>> This is becoming an issue as some applications have to rely on the
>>>> MapReduce engine for operations on Hive 2, which is serial and slow.
>>>>
>>>> Thanks
>>>>
>>>> Dr Mich Talebzadeh
>>>>
>>>>
>>>>
>>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>
>>>>
>>>>
>>>> http://talebzadehmich.wordpress.com
>>>>
>>>>
>>>>
>>>
>>>
>>
>


Re: Work on Spark engine for Hive

Posted by Mich Talebzadeh <mi...@gmail.com>.
The fundamental problem seems to be the spark-assembly-n.n.n-hadoopn.n.n.jar
libraries, which are incompatible and cause issues. For example, Hive does not
work with the existing Spark 1.6.1 binaries. In other words, if you set
hive.execution.engine as follows in $HIVE_HOME/conf/hive-site.xml:

    <property>
      <name>hive.execution.engine</name>
      <value>spark</value>
      <description>
        Expects one of [mr, tez, spark].
        Chooses execution engine. Options are: mr (Map reduce, default), tez,
        spark. While MR remains the default engine for historical reasons, it
        is itself a historical engine and is deprecated in the Hive 2 line.
        It may be removed without further warning.
      </description>
    </property>

Hive will crash.

In short, it currently only works for me with Spark 1.3.1: install the Spark
1.3.1 binaries and put the Spark assembly jar
spark-assembly-1.3.1-hadoop2.4.0.jar (extracted from a Spark 1.3.1 source
build) into $HIVE_HOME/lib.

Afterwards, whenever you invoke Hive, you will need to initialise the session
with the following:

set spark.home=/usr/lib/spark-1.3.1-bin-hadoop2.6;
set hive.execution.engine=spark;
set spark.master=yarn-client;

This is just a work-around, which is not what you want.
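
For clients that come in over JDBC rather than the Hive CLI, a sketch of issuing the same three settings at the start of each session is below; the HiveServer2 host, port, credentials and table name are placeholders, and it assumes the hive-jdbc driver is on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveOnSparkSession {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");

            // Placeholder HiveServer2 endpoint and credentials
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "hive", "");
                 Statement stmt = conn.createStatement()) {

                // Same per-session initialisation as the CLI work-around above
                stmt.execute("set spark.home=/usr/lib/spark-1.3.1-bin-hadoop2.6");
                stmt.execute("set hive.execution.engine=spark");
                stmt.execute("set spark.master=yarn-client");

                // Queries in the rest of this session should now run on the
                // Spark engine (some_table is a placeholder)
                try (ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM some_table")) {
                    while (rs.next()) {
                        System.out.println("row count: " + rs.getLong(1));
                    }
                }
            }
        }
    }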

HTH






Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 8 April 2016 at 19:16, Szehon Ho <sz...@cloudera.com> wrote:

> Yes, that is a good goal that we will have to get to eventually. To be
> honest, I was not aware that it is not working.
>
> Can you let us know what is broken with Hive 2 on Spark 1.6.1, preferably
> by filing a JIRA on the HIVE side?
>
> On Fri, Apr 8, 2016 at 7:47 AM, Mich Talebzadeh <mich.talebzadeh@gmail.com
> > wrote:
>
>> This is a different thing. The question is when Hive 2 will be able to
>> run on the installed Spark 1.6.1 binaries as its execution engine.
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 8 April 2016 at 11:30, 469564481 <46...@qq.com> wrote:
>>
>>> I have not installed the Spark engine.
>>> I can connect to Hive over JDBC and execute SQL (CREATE, DROP, ...),
>>> but the ODBC test case (HiveClientTest) can connect to Hive yet cannot
>>> execute SQL.
>>>
>>>
>>> ------------------ Original message ------------------
>>> From: "Mich Talebzadeh" <mi...@gmail.com>
>>> Sent: Friday, 8 April 2016, 5:02 PM
>>> To: "user" <us...@hive.apache.org>; "user @spark" <us...@spark.apache.org>
>>>
>>> Subject: Work on Spark engine for Hive
>>>
>>> Hi,
>>>
>>> Is there any scheduled work to enable Hive to use a recent version of
>>> the Spark engine?
>>>
>>> This is becoming an issue as some applications have to rely on the
>>> MapReduce engine for operations on Hive 2, which is serial and slow.
>>>
>>> Thanks
>>>
>>> Dr Mich Talebzadeh
>>>
>>>
>>>
>>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>
>>>
>>>
>>> http://talebzadehmich.wordpress.com
>>>
>>>
>>>
>>
>>
>


Re: Work on Spark engine for Hive

Posted by Szehon Ho <sz...@cloudera.com>.
Yes, that is a good goal that we will have to get to eventually. To be
honest, I was not aware that it is not working.

Can you let us know what is broken with Hive 2 on Spark 1.6.1, preferably
by filing a JIRA on the HIVE side?

On Fri, Apr 8, 2016 at 7:47 AM, Mich Talebzadeh <mi...@gmail.com>
wrote:

> This is a different thing. The question is when Hive 2 will be able to
> run on the installed Spark 1.6.1 binaries as its execution engine.
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 8 April 2016 at 11:30, 469564481 <46...@qq.com> wrote:
>
>> I have not installed the Spark engine.
>> I can connect to Hive over JDBC and execute SQL (CREATE, DROP, ...), but
>> the ODBC test case (HiveClientTest) can connect to Hive yet cannot execute SQL.
>>
>>
>> ------------------ Original message ------------------
>> From: "Mich Talebzadeh" <mi...@gmail.com>
>> Sent: Friday, 8 April 2016, 5:02 PM
>> To: "user" <us...@hive.apache.org>; "user @spark" <us...@spark.apache.org>
>>
>> Subject: Work on Spark engine for Hive
>>
>> Hi,
>>
>> Is there any scheduled work to enable Hive to use a recent version of the
>> Spark engine?
>>
>> This is becoming an issue as some applications have to rely on the
>> MapReduce engine for operations on Hive 2, which is serial and slow.
>>
>> Thanks
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>
>


Re: Work on Spark engine for Hive

Posted by Mich Talebzadeh <mi...@gmail.com>.
This is a different thing. The question is when Hive 2 will be able to run
on the installed Spark 1.6.1 binaries as its execution engine.

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 8 April 2016 at 11:30, 469564481 <46...@qq.com> wrote:

> I have not installed the Spark engine.
> I can connect to Hive over JDBC and execute SQL (CREATE, DROP, ...), but
> the ODBC test case (HiveClientTest) can connect to Hive yet cannot execute SQL.
>
>
> ------------------ Original message ------------------
> From: "Mich Talebzadeh" <mi...@gmail.com>
> Sent: Friday, 8 April 2016, 5:02 PM
> To: "user" <us...@hive.apache.org>; "user @spark" <us...@spark.apache.org>
>
> Subject: Work on Spark engine for Hive
>
> Hi,
>
> Is there any scheduled work to enable Hive to use a recent version of the
> Spark engine?
>
> This is becoming an issue as some applications have to rely on the
> MapReduce engine for operations on Hive 2, which is serial and slow.
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
