Posted to issues@spark.apache.org by "guojh (JIRA)" <ji...@apache.org> on 2018/10/30 08:54:00 UTC

[jira] [Comment Edited] (SPARK-25880) user set some hadoop configurations can not work

    [ https://issues.apache.org/jira/browse/SPARK-25880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16668263#comment-16668263 ] 

guojh edited comment on SPARK-25880 at 10/30/18 8:53 AM:
---------------------------------------------------------

The root cause of this issue is that Spark uses the SparkContext's conf to construct the Hadoop conf that is broadcast, so entries loaded from spark-defaults.conf override values the user sets later. So we should filter out the properties that start with 'spark.hadoop.'.
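The precedence problem described above can be sketched as follows. This is an illustrative model only, not Spark's actual code: the function name, the dict-based "conf" objects, and the apply_fix flag are all hypothetical, chosen to show how a 'spark.hadoop.'-prefixed default can clobber a value set later with SET, and how filtering those keys restores the user's override.

```python
# Hypothetical sketch of the reported precedence bug (not Spark's real API).
SPARK_HADOOP_PREFIX = "spark.hadoop."

def build_hadoop_conf(spark_conf, session_overrides, apply_fix):
    """Merge the two config sources into the Hadoop conf to be broadcast.

    spark_conf:        key/value pairs loaded from spark-defaults.conf
    session_overrides: Hadoop keys the user set via spark-sql's SET command
    apply_fix:         if True, skip a 'spark.hadoop.' entry whenever the
                       user has already overridden the same Hadoop key
    """
    hadoop_conf = dict(session_overrides)
    for key, value in spark_conf.items():
        if not key.startswith(SPARK_HADOOP_PREFIX):
            continue
        hadoop_key = key[len(SPARK_HADOOP_PREFIX):]
        if apply_fix and hadoop_key in hadoop_conf:
            continue  # proposed fix: keep the user's SET value
        hadoop_conf[hadoop_key] = value  # bug: the default wins unconditionally
    return hadoop_conf

# The scenario from the issue: a default of 100000 in spark-defaults.conf,
# then the user tries to lower it to 50000 with SET.
defaults = {"spark.hadoop.mapreduce.input.fileinputformat.split.maxsize": "100000"}
overrides = {"mapreduce.input.fileinputformat.split.maxsize": "50000"}

buggy = build_hadoop_conf(defaults, overrides, apply_fix=False)
fixed = build_hadoop_conf(defaults, overrides, apply_fix=True)
print(buggy["mapreduce.input.fileinputformat.split.maxsize"])  # 100000
print(fixed["mapreduce.input.fileinputformat.split.maxsize"])  # 50000
```

Without the filter the broadcast conf keeps 100000 from spark-defaults.conf; with it, the user's SET value of 50000 takes effect.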


was (Author: gjhkael):
The root cause of this issue is that use the sparkContext's conf to construct the hadoop conf which is use to broadcasted. So, We should filter the properties that startwith 'spark.hadoop.'

> user set some hadoop configurations can not work
> ------------------------------------------------
>
>                 Key: SPARK-25880
>                 URL: https://issues.apache.org/jira/browse/SPARK-25880
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 2.2.0
>            Reporter: guojh
>            Priority: Major
>
> When a user sets a Hadoop configuration in spark-defaults.conf, for instance:
> spark.hadoop.mapreduce.input.fileinputformat.split.maxsize   100000
> and then uses spark-sql's SET command to override this configuration, the new value cannot override the one set in spark-defaults.conf.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org