Posted to dev@iceberg.apache.org by GitBox <gi...@apache.org> on 2018/12/11 00:26:12 UTC

[GitHub] mccheah commented on a change in pull request #6: Support customizing the location where data is written in Spark

mccheah commented on a change in pull request #6: Support customizing the location where data is written in Spark
URL: https://github.com/apache/incubator-iceberg/pull/6#discussion_r240433428
 
 

 ##########
 File path: spark/src/main/java/com/netflix/iceberg/spark/source/IcebergSource.java
 ##########
 @@ -89,7 +92,11 @@ public DataSourceReader createReader(DataSourceOptions options) {
           .toUpperCase(Locale.ENGLISH));
     }
 
-    return Optional.of(new Writer(table, lazyConf(), format));
+    String dataLocation = options.get(TableProperties.WRITE_NEW_DATA_LOCATION)
+        .orElse(table.properties().getOrDefault(
+            TableProperties.WRITE_NEW_DATA_LOCATION,
+            new Path(new Path(table.location()), "data").toString()));
+    return Optional.of(new Writer(table, lazyConf(), format, dataLocation));
 
 Review comment:
   I think for our use case, specifying the write location in the table property would be sufficient. I also don't see a downside to the extra flexibility of allowing an override via the data source options, but we could defer that feature until later.
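   For illustration, here is a minimal, self-contained sketch of the resolution order the diff above implements: data source option first, then table property, then a default "data" folder under the table location. The helper name resolveDataLocation, the use of plain string concatenation instead of Hadoop's Path, and the literal property key (a stand-in for TableProperties.WRITE_NEW_DATA_LOCATION) are simplifications for the example, not the actual Iceberg code.

       import java.util.HashMap;
       import java.util.Map;
       import java.util.Optional;

       public class DataLocationResolution {
         // Stand-in for TableProperties.WRITE_NEW_DATA_LOCATION in the real code.
         private static final String WRITE_NEW_DATA_LOCATION = "write.folder-storage.path";

         // Resolves where new data files are written, mirroring the precedence in the
         // diff above: data source option, then table property, then "<table location>/data".
         static String resolveDataLocation(
             Optional<String> optionValue, Map<String, String> tableProperties, String tableLocation) {
           return optionValue.orElse(
               tableProperties.getOrDefault(WRITE_NEW_DATA_LOCATION, tableLocation + "/data"));
         }

         public static void main(String[] args) {
           Map<String, String> props = new HashMap<>();

           // No option, no property: fall back to a "data" folder under the table location.
           System.out.println(
               resolveDataLocation(Optional.empty(), props, "s3://bucket/warehouse/db/table"));
           // -> s3://bucket/warehouse/db/table/data

           // Table property set: it overrides the default.
           props.put(WRITE_NEW_DATA_LOCATION, "s3://bucket/custom/table-data");
           System.out.println(
               resolveDataLocation(Optional.empty(), props, "s3://bucket/warehouse/db/table"));
           // -> s3://bucket/custom/table-data

           // Data source option set: it takes precedence over the table property.
           System.out.println(
               resolveDataLocation(
                   Optional.of("s3://bucket/per-write/override"), props, "s3://bucket/warehouse/db/table"));
           // -> s3://bucket/per-write/override
         }
       }

   With this precedence, a per-write override supplied in the data source options wins over the table property, which in turn wins over the default location under the table root.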
   
   
