Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2020/10/27 11:59:31 UTC

[GitHub] [iceberg] rymurr commented on pull request #1640: Allow loading custom Catalog implementation in Spark and Flink

rymurr commented on pull request #1640:
URL: https://github.com/apache/iceberg/pull/1640#issuecomment-717194528


   
   > I think we also want to pass options from the catalog config in Flink and Spark, where users can pass properties like `uri` and `warehouse`. Could you add a `Map` to this to pass the catalog config options?
   
   I like the `Map`-over-`Configuration` suggestion as well. In #1587 I made the constructor take `String name, Map props, Configuration conf`, since it still needs a `HadoopFileIO`.
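   
   For illustration only, a constructor along those lines might look like the sketch below. The class name and property keys are hypothetical, not taken from either PR:
   
   ```java
   import java.util.Map;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.iceberg.hadoop.HadoopFileIO;
   
   // Hypothetical catalog showing the proposed constructor shape: the name
   // identifies the catalog, the Map carries options such as `uri` and
   // `warehouse`, and the Configuration is still needed for a HadoopFileIO.
   public class MyCustomCatalog {
     private final String name;
     private final Map<String, String> properties;
     private final HadoopFileIO fileIO;
   
     public MyCustomCatalog(String name, Map<String, String> properties, Configuration conf) {
       this.name = name;
       this.properties = properties;
       this.fileIO = new HadoopFileIO(conf);
   
       // Catalog-specific options come from the map rather than the Configuration:
       String uri = properties.get("uri");
       String warehouse = properties.get("warehouse");
       // ... connect to the metastore at `uri`, create tables under `warehouse` ...
     }
   }
   ```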
   
   Has anyone thought about how to do this for the `IcebergSource`? As far as I understand, `df.write().format("iceberg")` currently uses Hive/HDFS regardless of these settings.
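   
   To make the two paths concrete, here is a sketch assuming the `SparkCatalog`/`catalog-impl` wiring discussed in this PR; the exact property keys and the custom catalog class are illustrative:
   
   ```java
   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;
   import org.apache.spark.sql.SparkSession;
   
   public class CatalogVsSourceExample {
     public static void main(String[] args) {
       // Catalog-based path: options such as uri/warehouse travel through the
       // catalog config, which is what the Map parameter would receive.
       SparkSession spark = SparkSession.builder()
           .config("spark.sql.catalog.my_catalog", "org.apache.iceberg.spark.SparkCatalog")
           .config("spark.sql.catalog.my_catalog.catalog-impl", "com.example.MyCustomCatalog")
           .config("spark.sql.catalog.my_catalog.uri", "thrift://metastore:9083")
           .config("spark.sql.catalog.my_catalog.warehouse", "hdfs://nn:8020/warehouse")
           .getOrCreate();
   
       Dataset<Row> df = spark.range(3).toDF();
   
       // Source-based path: this goes through IcebergSource and, per the
       // question above, appears to use Hive/HDFS regardless of those settings.
       df.write().format("iceberg").save("db.table");
     }
   }
   ```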


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org