Posted to dev@spark.apache.org by Jeff Zhang <zj...@gmail.com> on 2015/12/04 03:40:01 UTC

Spark doesn't unset HADOOP_CONF_DIR when testing?

I tried to run HiveSparkSubmitSuite on my local box, but it fails. The cause
is that Spark is still using my local single-node Hadoop cluster when running
the unit test. I don't think that makes sense: these environment variables
should be unset before testing, and I suspect dev/run-tests doesn't do that
either.
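
For illustration, here is a minimal sketch in Scala (the object and method
names are hypothetical, not Spark's actual test code) of how a suite that
forks spark-submit, as HiveSparkSubmitSuite does, could scrub Hadoop-related
variables from the child process environment:

    import scala.collection.JavaConverters._

    // Minimal sketch, hypothetical names: fork a test command with
    // Hadoop-related variables removed, so the child JVM cannot pick
    // up a locally installed cluster's configuration.
    object CleanEnvLauncher {
      private val hadoopEnvVars =
        Seq("HADOOP_CONF_DIR", "HADOOP_HOME", "YARN_CONF_DIR")

      def launch(command: Seq[String]): Process = {
        val builder = new ProcessBuilder(command.asJava)
        hadoopEnvVars.foreach(v => builder.environment().remove(v))
        builder.start()
      }
    }

The JVM offers no supported way to unset a variable in its own process, so
filtering the environment at the point where the child process is forked is
the natural place to do it.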

Here's the error message:

Cause: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-xr-x
[info]   at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
[info]   at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
[info]   at org.apache.spark.sql.hive.HiveContext.executionHive$lzycompute(HiveContext.scala:162)
[info]   at org.apache.spark.sql.hive.HiveContext.executionHive(HiveContext.scala:160)
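
The error itself just means that the Hive scratch directory /tmp/hive, on
whichever HDFS the test connected to, is not world-writable. When you do
intend to run against a cluster, the usual workaround is
`hadoop fs -chmod 777 /tmp/hive`, or equivalently through the FileSystem
API (a sketch; it assumes the default Configuration resolves to the cluster
that produced the error above):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}
    import org.apache.hadoop.fs.permission.FsPermission

    // Sketch: widen the Hive scratch dir, same effect as
    // `hadoop fs -chmod 777 /tmp/hive`.
    val fs = FileSystem.get(new Configuration())
    val mode = Integer.parseInt("777", 8).toShort // octal 0777
    fs.setPermission(new Path("/tmp/hive"), new FsPermission(mode))

None of that should be necessary for a unit test, though, which is exactly
why the variables ought to be unset.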



-- 
Best Regards

Jeff Zhang

Re: Spark doesn't unset HADOOP_CONF_DIR when testing?

Posted by Jeff Zhang <zj...@gmail.com>.
Thanks Josh, created https://issues.apache.org/jira/browse/SPARK-12166



On Mon, Dec 7, 2015 at 4:32 AM, Josh Rosen <jo...@databricks.com> wrote:

> I agree that we should unset this in our tests. Want to file a JIRA and
> submit a PR to do this?


-- 
Best Regards

Jeff Zhang

Re: Spark doesn't unset HADOOP_CONF_DIR when testing?

Posted by Josh Rosen <jo...@databricks.com>.
I agree that we should unset this in our tests. Want to file a JIRA and
submit a PR to do this?
