Posted to user@spark.apache.org by Jenny Zhao <li...@gmail.com> on 2014/07/22 19:19:05 UTC

Spark sql with hive table running on Yarn-cluster mode

Hi,

When running Spark SQL, the datanucleus*.jar files are automatically added
to the classpath. This works fine in Spark standalone mode and yarn-client
mode. However, in yarn-cluster mode I have to explicitly pass these jars
with the --jars option when submitting the job, otherwise the job fails.
Why doesn't it work in yarn-cluster mode?

Thank you for your help!

Jenny

Re: Spark sql with hive table running on Yarn-cluster mode

Posted by Andrew Or <an...@databricks.com>.
Hi Jenny,

It doesn't work in yarn-cluster mode simply because the code that adds
these jars to the classpath is missing for that deploy mode. For now, you
will have to add them manually using `--jars`. Thanks for letting us know;
we will fix this by 1.1.
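As a sketch of the workaround, the submission could look something like the
following. The jar location (lib/ under SPARK_HOME) and the application
jar/class names are assumptions; adjust them to your installation.

```shell
#!/bin/sh
# Hypothetical example: collect the datanucleus jars shipped with Spark
# (location assumed to be $SPARK_HOME/lib) into a comma-separated list,
# then pass them explicitly via --jars for yarn-cluster mode.
DATANUCLEUS_JARS=$(ls "$SPARK_HOME"/lib/datanucleus-*.jar | tr '\n' ',')

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MySparkSQLApp \
  --jars "${DATANUCLEUS_JARS%,}" \
  my-spark-sql-app.jar
```

Note that `--jars` takes a comma-separated list, not a colon-separated
classpath, which is why the newlines from `ls` are translated to commas
and the trailing comma stripped.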

Andrew

