Posted to dev@iceberg.apache.org by GitBox <gi...@apache.org> on 2018/12/06 05:55:51 UTC

[GitHub] rdsr commented on a change in pull request #7: Allow custom hadoop properties to be loaded in the Spark data source

rdsr commented on a change in pull request #7: Allow custom hadoop properties to be loaded in the Spark data source
URL: https://github.com/apache/incubator-iceberg/pull/7#discussion_r239335625
 
 

 ##########
 File path: spark/src/main/java/com/netflix/iceberg/spark/source/IcebergSource.java
 ##########
 @@ -109,10 +113,18 @@ protected SparkSession lazySparkSession() {
     return lazySpark;
   }
 
-  protected Configuration lazyConf() {
+  protected Configuration lazyBaseConf() {
     if (lazyConf == null) {
       this.lazyConf = lazySparkSession().sparkContext().hadoopConfiguration();
     }
     return lazyConf;
   }
+
+  protected Configuration mergeIcebergHadoopConfs(Configuration baseConf, Map<String, String> options) {
 
 Review comment:
   Is this method meant to be overridden in subclasses? Or would it be simpler to make it private static?
   
   Also, would it be simpler not to modify the passed-in configuration (baseConf) and instead return a new Configuration object? That would make the usage simpler, IMO. For instance, on line 60 we create a copy of baseConf before passing it in, and on line 62, because we modify the passed-in configuration, withTableConfs and conf end up being the same object, which I'm not sure was intended.
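   
   For illustration only, a copying, private static variant could look roughly like this. The method body isn't shown in this hunk, so the "iceberg.hadoop." option prefix used for filtering below is an assumption, not the PR's actual logic:
   
      import java.util.Map;
      import org.apache.hadoop.conf.Configuration;
      
      // Hypothetical sketch: merge Iceberg-prefixed options into a *copy* of the
      // base conf instead of mutating the caller's Configuration. The
      // "iceberg.hadoop." prefix is assumed for illustration only.
      private static Configuration mergeIcebergHadoopConfs(
          Configuration baseConf, Map<String, String> options) {
        // Hadoop's copy constructor leaves baseConf untouched.
        Configuration merged = new Configuration(baseConf);
        options.forEach((key, value) -> {
          if (key.startsWith("iceberg.hadoop.")) {
            merged.set(key.substring("iceberg.hadoop.".length()), value);
          }
        });
        return merged;
      }
   
   With that shape, the call site on line 60 wouldn't need its own defensive copy, and withTableConfs and conf would be distinct objects.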

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services