Posted to reviews@spark.apache.org by HyukjinKwon <gi...@git.apache.org> on 2018/08/08 06:17:35 UTC
[GitHub] spark pull request #21118: SPARK-23325: Use InternalRow when reading with Da...
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/21118#discussion_r208466428
--- Diff: sql/core/src/main/java/org/apache/spark/sql/sources/v2/reader/DataSourceReader.java ---
@@ -76,5 +76,5 @@
* If this method fails (by throwing an exception), the action will fail and no Spark job will be
* submitted.
*/
- List<InputPartition<Row>> planInputPartitions();
+ List<InputPartition<InternalRow>> planInputPartitions();
--- End diff --
I am sorry for asking a question on an old PR like this, and it might not be directly related to this PR, but please allow me to ask it here. Does this mean developers should produce `InternalRow` here for each partition? `InternalRow` is under catalyst and is not meant to be exposed.
---
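To illustrate the concern in the question above: after the diff, a data source implementation must return partitions typed to `InternalRow` instead of `Row`. The sketch below is a minimal, self-contained illustration only; `InternalRow` and `InputPartition` here are hypothetical stand-ins with the same names as the real Spark classes, not the actual catalyst types.

```java
import java.util.Arrays;
import java.util.List;

public class PlanPartitionsSketch {

    // Stand-in for org.apache.spark.sql.catalyst.InternalRow, which is an
    // internal (catalyst) representation rather than the public Row API.
    interface InternalRow {}

    // Stand-in for the DataSourceReader's per-partition handle.
    interface InputPartition<T> {}

    // One partition covering a half-open row range [start, end).
    static class RangePartition implements InputPartition<InternalRow> {
        final int start;
        final int end;
        RangePartition(int start, int end) {
            this.start = start;
            this.end = end;
        }
    }

    // After the change in the diff, the planning method is typed to
    // InternalRow, so every partition a developer returns must be too.
    static List<InputPartition<InternalRow>> planInputPartitions() {
        return Arrays.asList(
            new RangePartition(0, 50),
            new RangePartition(50, 100));
    }

    public static void main(String[] args) {
        // Two planned partitions in this sketch.
        System.out.println(planInputPartitions().size());
    }
}
```

The sketch only shows the signature change the diff makes; the practical question raised in the review is that implementing it forces source developers to construct catalyst `InternalRow` instances, a type outside the stable public API.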
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org