Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/04/19 15:11:51 UTC

[GitHub] [spark] cloud-fan opened a new pull request #24416: [SPARK-27521][SQL] move data source v2 to catalyst module

cloud-fan opened a new pull request #24416: [SPARK-27521][SQL] move data source v2 to catalyst module
URL: https://github.com/apache/spark/pull/24416
 
 
   ## What changes were proposed in this pull request?
   
    Currently we are in a strange state: some data source v2 interfaces (the catalog-related ones) live in sql/catalyst, while others (Table, ScanBuilder, DataReader, etc.) live in sql/core.
   
    I don't see a reason to keep the data source v2 API spread across two modules. If we have to pick one module, I think sql/catalyst is the one to go with.
   
    The catalyst module already contains some user-facing classes such as DataType and InternalRow. We also have to update `Analyzer` and `SessionCatalog`, which are in the catalyst module, to support the new catalog plugin.
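    
    To make the coupling concrete, here is a minimal, illustrative sketch (not taken from this PR; the class and method names are hypothetical): even a trivial connector already depends on catalyst types such as StructType and InternalRow, so keeping the v2 interfaces in sql/core while their building blocks live in sql/catalyst splits one API surface across two modules.
    
    ```scala
    // Hypothetical connector sketch, not part of this PR. It only illustrates
    // that a data source v2 implementation already leans on sql/catalyst types
    // (StructType, InternalRow) even while the v2 interfaces (Table,
    // ScanBuilder, ...) live in sql/core.
    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.types.{IntegerType, StructField, StructType}
    
    class DemoTable /* would extend the v2 Table / SupportsRead interfaces */ {
      // The table schema is expressed with catalyst's StructType.
      def schema: StructType = StructType(Seq(StructField("id", IntegerType)))
    
      // Rows handed back to Spark are catalyst InternalRows.
      def nextRow(): InternalRow = InternalRow(1)
    }
    ```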
   
    This PR solves the problem we ran into in https://github.com/apache/spark/pull/24246
   
   ## How was this patch tested?
   
    Existing tests.
