Posted to dev@spark.apache.org by Rostyslav Sotnychenko <r....@gmail.com> on 2016/06/15 09:35:39 UTC

Hive-on-Spark with access to Hive from Spark jobs

Hello!

I have a question regarding Hive and Spark.

As far as I know, in order to use Hive-on-Spark one needs to compile Spark
without the Hive profile, but that means it won't be possible to access
Hive from normal Spark jobs.

How is the community going to address this issue? By making two different
spark-assembly jars, or something else?
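
For reference, here is a rough sketch of the two builds in question
(using the Maven profile names from the Spark build documentation of
that era; Hadoop version profiles are omitted for brevity):

    # Spark built WITH Hive support, so jobs can use HiveContext:
    ./build/mvn -Pyarn -Phive -Phive-thriftserver -DskipTests clean package

    # Spark built WITHOUT the Hive profile, as Hive-on-Spark expects:
    ./build/mvn -Pyarn -DskipTests clean package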


Thanks,
Rostyslav

Re: Hive-on-Spark with access to Hive from Spark jobs

Posted by Reynold Xin <rx...@databricks.com>.
Are you running Spark on YARN, Mesos, or Standalone? For all of them you
can make the Hive dependency part of your application itself, and then you
can manage this pretty easily.
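
One way to do that, sketched as an sbt build (the artifact coordinates
are real, but the application name and versions here are illustrative):

    // build.sbt -- bundle Spark's Hive support with the application
    // instead of relying on the cluster's spark-assembly jar.
    name := "my-spark-app"                // hypothetical name
    scalaVersion := "2.10.6"

    libraryDependencies ++= Seq(
      // Supplied by the cluster at runtime:
      "org.apache.spark" %% "spark-core" % "1.6.1" % "provided",
      "org.apache.spark" %% "spark-sql"  % "1.6.1" % "provided",
      // Deliberately NOT "provided": packaged into the application jar,
      // so HiveContext works even on a Spark build without -Phive.
      "org.apache.spark" %% "spark-hive" % "1.6.1"
    )

Building a fat jar (e.g. with the sbt-assembly plugin) then ships
spark-hive and its transitive Hive dependencies alongside the
application, independent of how the cluster's Spark was compiled.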

