Posted to issues@spark.apache.org by "Chao Sun (Jira)" <ji...@apache.org> on 2021/08/10 22:36:00 UTC

[jira] [Issue Comment Deleted] (SPARK-34861) Support nested column in Spark vectorized readers

     [ https://issues.apache.org/jira/browse/SPARK-34861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chao Sun updated SPARK-34861:
-----------------------------
    Comment: was deleted

(was: Synced with [~chengsu] offline and I will take over this JIRA.)

> Support nested column in Spark vectorized readers
> -------------------------------------------------
>
>                 Key: SPARK-34861
>                 URL: https://issues.apache.org/jira/browse/SPARK-34861
>             Project: Spark
>          Issue Type: Umbrella
>          Components: SQL
>    Affects Versions: 3.2.0
>            Reporter: Cheng Su
>            Priority: Minor
>
> This is the umbrella task to track overall progress. The goal is to support nested column types (struct, array, and map) in the Spark vectorized readers for Parquet and ORC; currently, neither vectorized reader supports them. We implemented a nested-column vectorized reader for FB-ORC in our internal fork of Spark and saw a performance improvement over the non-vectorized reader when reading nested columns. In addition, this can also improve performance for non-nested columns when a single query reads nested and non-nested columns together.
>  
> Parquet: [https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala#L173]
>  
> ORC:
> [https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFileFormat.scala#L138]
>  
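To illustrate what "vectorized" reading of a nested type means, here is a minimal, hedged sketch (plain Python, not Spark or Parquet/ORC code) of the columnar layout such readers typically decode into: an array<int> column stored as flat offsets and values vectors rather than one object per row. All names here are illustrative, not part of any Spark API.

```python
# Hedged sketch: a columnar (Arrow/Parquet-style) representation of a
# nested column such as array<int>. A vectorized reader decodes into
# flat child vectors plus offsets instead of per-row objects.

def to_columnar(rows):
    """Flatten a list of int-lists into (offsets, values) vectors."""
    offsets, values = [0], []
    for row in rows:
        values.extend(row)
        offsets.append(len(values))  # end offset of this row's slice
    return offsets, values

def get_row(offsets, values, i):
    """Reassemble row i by slicing the flat values vector."""
    return values[offsets[i]:offsets[i + 1]]

rows = [[1, 2], [], [3, 4, 5]]
offsets, values = to_columnar(rows)
# offsets -> [0, 2, 2, 5]; values -> [1, 2, 3, 4, 5]
assert [get_row(offsets, values, i) for i in range(3)] == rows
```

Operating on the flat values vector in batches is what gives the speedup over row-at-a-time reading; the offsets vector preserves the nesting boundaries.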



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org