Posted to issues@spark.apache.org by "Jiri Humpolicek (Jira)" <ji...@apache.org> on 2021/11/23 13:05:00 UTC

[jira] [Commented] (SPARK-37450) Spark SQL reads unnecessary nested fields (another type of pruning case)

    [ https://issues.apache.org/jira/browse/SPARK-37450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17448001#comment-17448001 ] 

Jiri Humpolicek commented on SPARK-37450:
-----------------------------------------

The same situation occurs with the {{size}} function:
{code:scala}
read.select(size('items)).explain(true)
// ReadSchema: struct<items:array<struct<itemData:string,itemId:bigint>>>
// (the full element struct is read, although size only needs the array length)
{code}
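A possible workaround, as a hedged sketch (the projected-field trick and the expected ReadSchema below are my assumptions, not verified output): projecting a single nested field before calling {{size}} gives schema pruning something to narrow the read to.
{code:scala}
// Hedged sketch: take the size of a single projected nested field rather than
// the whole array of structs. items.itemId yields array<bigint>, whose length
// matches that of items.
read.select(size($"items.itemId")).explain(true)
// Expected ReadSchema if pruning applies: struct<items:array<struct<itemId:bigint>>>
{code}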

> Spark SQL reads unnecessary nested fields (another type of pruning case)
> ------------------------------------------------------------------------
>
>                 Key: SPARK-37450
>                 URL: https://issues.apache.org/jira/browse/SPARK-37450
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.2.0
>            Reporter: Jiri Humpolicek
>            Priority: Major
>
> Based on [SPARK-34638|https://issues.apache.org/jira/browse/SPARK-34638], I may have found another nested-field pruning case. Here the full nested schema is read even though the query only computes {{count}}.
> Example:
> 1) Loading data
> {code:scala}
> val jsonStr = """{
>  "items": [
>    {"itemId": 1, "itemData": "a"},
>    {"itemId": 2, "itemData": "b"}
>  ]
> }"""
> val df = spark.read.json(Seq(jsonStr).toDS)
> df.write.format("parquet").mode("overwrite").saveAsTable("persisted")
> {code}
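> To confirm the nested layout, the persisted schema can be printed (an illustrative sketch; the output shown is inferred from the JSON above):
> {code:scala}
> spark.table("persisted").printSchema
> // root
> //  |-- items: array (nullable = true)
> //  |    |-- element: struct (containsNull = true)
> //  |    |    |-- itemData: string (nullable = true)
> //  |    |    |-- itemId: long (nullable = true)
> {code}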
> 2) Reading the query with explain
> {code:scala}
> val read = spark.table("persisted")
> spark.conf.set("spark.sql.optimizer.nestedSchemaPruning.enabled", true)
> read.select(explode($"items").as('item)).select(count(lit(true))).explain(true)
> // ReadSchema: struct<items:array<struct<itemData:string,itemId:bigint>>>
> // (both nested fields are read although the query references neither)
> {code}
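> For contrast, a hedged sketch (expected output, not verified here): selecting a concrete nested field after the explode does give the pruning rule something to act on, so the ReadSchema should narrow accordingly.
> {code:scala}
> // Hedged sketch: referencing only item.itemId should let nested schema
> // pruning drop itemData from the read schema.
> read.select(explode($"items").as('item)).select($"item.itemId").explain(true)
> // Expected ReadSchema with pruning: struct<items:array<struct<itemId:bigint>>>
> {code}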


