Posted to issues@spark.apache.org by "Zongheng Yang (JIRA)" <ji...@apache.org> on 2014/06/04 21:25:01 UTC

[jira] [Commented] (SPARK-1508) Add support for reading from SparkConf

    [ https://issues.apache.org/jira/browse/SPARK-1508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018043#comment-14018043 ] 

Zongheng Yang commented on SPARK-1508:
--------------------------------------

WIP PR: https://github.com/apache/spark/pull/956

We'd want to support:

(1) API calls on SQLConf objects to get/set properties.
(2) Support the various forms of the SQL/HiveQL SET command, e.g. "SET key=val", "SET key", and a bare "SET", with each form reading from or writing to the SQLConf object.
(3) Make sql("SET ...").collect() (and likewise hql(); perhaps some other operations too) return the expected results, i.e. the key/value pairs. Doing this requires some refactoring of the QueryExecution pipeline. A rough sketch of the intended usage follows below.
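A minimal sketch of what (1) to (3) might look like from a user's point of view. The setConf/getConf accessors and the spark.sql.shuffle.partitions key are used here only for illustration; the WIP PR above is the authoritative source for the actual API.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("sqlconf-sketch"))
    val sqlContext = new SQLContext(sc)

    // (1) Programmatic get/set of a property on the SQL-specific conf.
    sqlContext.setConf("spark.sql.shuffle.partitions", "10")
    val n = sqlContext.getConf("spark.sql.shuffle.partitions")

    // (2) The same property set through a SQL SET command should go
    // through the same SQLConf object, so (1) and (2) stay consistent.
    sqlContext.sql("SET spark.sql.shuffle.partitions=10")

    // (3) Collecting a SET query should return the key/value pair(s),
    // e.g. something like Array([spark.sql.shuffle.partitions,10]).
    sqlContext.sql("SET spark.sql.shuffle.partitions").collect()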

> Add support for reading from SparkConf
> --------------------------------------
>
>                 Key: SPARK-1508
>                 URL: https://issues.apache.org/jira/browse/SPARK-1508
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Michael Armbrust
>            Assignee: Zongheng Yang
>             Fix For: 1.1.0
>
>
> Right now we have no ability to configure things in Spark SQL.  A good start would be passing a SparkConf through the planner so that users could override the number of partitions used during an Exchange.
> Note that while current Spark confs are immutable after the context is created, we want some ability to change settings on a per-query basis.
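The description above asks for per-query overrides of the number of partitions used by an Exchange. A minimal sketch of that, again assuming the setConf accessor and the spark.sql.shuffle.partitions key (illustrative names only), plus a hypothetical table named records that has already been registered:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(new SparkConf().setAppName("per-query-conf"))
    val sqlContext = new SQLContext(sc)

    // Runs with whatever partition count the planner currently defaults to
    // for the shuffle behind the GROUP BY ("records" is a hypothetical,
    // already-registered table).
    sqlContext.sql("SELECT key, COUNT(*) FROM records GROUP BY key").collect()

    // Override the Exchange partition count for subsequent queries without
    // touching the SparkConf, which is immutable once the context exists.
    sqlContext.setConf("spark.sql.shuffle.partitions", "4")
    sqlContext.sql("SELECT key, COUNT(*) FROM records GROUP BY key").collect()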


