Posted to issues@spark.apache.org by "Jeff Zhang (JIRA)" <ji...@apache.org> on 2016/06/17 07:07:05 UTC

[jira] [Created] (SPARK-16013) Add option to disable HiveContext in spark-shell/pyspark

Jeff Zhang created SPARK-16013:
----------------------------------

             Summary: Add option to disable HiveContext in spark-shell/pyspark
                 Key: SPARK-16013
                 URL: https://issues.apache.org/jira/browse/SPARK-16013
             Project: Spark
          Issue Type: Improvement
          Components: PySpark, Spark Shell
    Affects Versions: 1.6.1
            Reporter: Jeff Zhang


In Spark 2.0 we can disable HiveContext by setting spark.sql.catalogImplementation (a sketch of that workaround follows the list below), but in Spark 1.6 there seems to be no option to turn off HiveContext. This brings several issues that I can see.
* If a user starts multiple spark-shell sessions on the same machine without specifying hive-site.xml, the embedded Derby metastore conflicts, because only one process can hold the Derby lock at a time.
* Any downstream Spark project that wants to reuse the spark-shell code has to rely on whether the Hive profile was turned on at build time to decide whether a HiveContext is created (roughly sketched at the end of this description). This doesn't make sense to me.
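
For reference, the Spark 2.0 behaviour mentioned above looks roughly like this (a minimal sketch, assuming a Spark 2.0 build; spark.sql.catalogImplementation accepts "hive" or "in-memory", and the same value can also be passed to spark-shell via --conf):

    // Spark 2.0: build a session backed by the in-memory catalog instead of Hive
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")                  // illustrative master, not required
      .appName("no-hive-sketch")           // hypothetical app name
      .config("spark.sql.catalogImplementation", "in-memory")
      .getOrCreate()

    // The session now uses the in-memory catalog, so no Hive metastore
    // (and no embedded Derby instance) is created.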

Although Spark 2.0 doesn't have this issue, most people are still using Spark 1.x, so I think this feature would benefit users. I can create a PR if this feature sounds reasonable.
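
To illustrate the second bullet, this is roughly the kind of fallback code a downstream project has to carry on 1.6 today (a sketch only; the helper name is made up, but the reflection mirrors how the shell decides between HiveContext and a plain SQLContext):

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext

    // Use HiveContext only if Spark was built with the Hive profile, otherwise fall back.
    def createSqlContext(sc: SparkContext): SQLContext =
      try {
        val cls = Class.forName("org.apache.spark.sql.hive.HiveContext")
        cls.getConstructor(classOf[SparkContext]).newInstance(sc).asInstanceOf[SQLContext]
      } catch {
        case _: ClassNotFoundException | _: NoClassDefFoundError =>
          new SQLContext(sc)  // Hive classes not on the classpath
      }

An explicit option to skip the HiveContext branch would make this decision a runtime setting instead of something that depends on how Spark was compiled.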



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
