Posted to issues@spark.apache.org by "Jiale He (Jira)" <ji...@apache.org> on 2022/12/28 10:00:00 UTC
[jira] [Updated] (SPARK-41741) [SQL] ParquetFilters StringStartsWith push down matching string does not use UTF-8
[ https://issues.apache.org/jira/browse/SPARK-41741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jiale He updated SPARK-41741:
-----------------------------
Description:
Hello ~
I found a problem, but there is a way to work around it, so I am not sure whether an issue should be opened for a fix.
With Parquet filter pushdown enabled, a query that uses a like '***%' predicate may fail with an error if the system default encoding is not UTF-8.
As far as I know, there are two ways to work around this problem:
1. spark.executor.extraJavaOptions="-Dfile.encoding=UTF-8"
2. spark.sql.parquet.filterPushdown.string.startsWith=false
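As a sketch, the two workarounds above could be passed at submit time like this (the application jar name is a placeholder, not from the original report):

```shell
# Workaround 1: force UTF-8 as the executor JVM default charset
spark-submit \
  --conf "spark.executor.extraJavaOptions=-Dfile.encoding=UTF-8" \
  app.jar

# Workaround 2: disable the startsWith pushdown for Parquet string columns
spark-submit \
  --conf "spark.sql.parquet.filterPushdown.string.startsWith=false" \
  app.jar
```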
The following is the information needed to reproduce this problem.
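The reproduction in the original report relies on an attached Parquet file and a screenshot that do not survive in this plain-text view. As a minimal sketch of the suspected root cause (the class name and prefix below are illustrative, not Spark code): Parquet stores strings as UTF-8-encoded binary, while String.getBytes() with no charset argument uses the JVM default charset (file.encoding), so a non-ASCII prefix encodes to different bytes when the default is not UTF-8. Here ISO-8859-1 stands in for a non-UTF-8 default:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class StartsWithEncodingDemo {
    public static void main(String[] args) {
        // Non-ASCII prefix, as in a query like: WHERE col LIKE '中文%'
        String prefix = "中文";

        // Parquet stores strings as UTF-8 encoded binary.
        byte[] utf8 = prefix.getBytes(StandardCharsets.UTF_8);

        // String.getBytes() with no charset uses the JVM default
        // (file.encoding); ISO-8859-1 simulates a non-UTF-8 default here.
        byte[] nonUtf8 = prefix.getBytes(StandardCharsets.ISO_8859_1);

        // The byte sequences disagree, so a pushed-down StartsWith
        // comparison built from the wrong bytes cannot match the data.
        System.out.println("JVM default charset: " + Charset.defaultCharset());
        System.out.println("UTF-8 bytes: " + utf8.length);          // 6
        System.out.println("non-UTF-8 bytes: " + nonUtf8.length);   // 2
        System.out.println("same bytes: " + Arrays.equals(utf8, nonUtf8)); // false
    }
}
```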
> [SQL] ParquetFilters StringStartsWith push down matching string does not use UTF-8
> ----------------------------------------------------------------------------------
>
> Key: SPARK-41741
> URL: https://issues.apache.org/jira/browse/SPARK-41741
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 2.4.0
> Reporter: Jiale He
> Priority: Major
> Attachments: part-00000-30432312-7cdb-43ef-befe-93bcfd174878-c000.snappy.parquet
>
>
> Hello ~
>
> I found a problem, but there is a way to work around it, so I am not sure whether an issue should be opened for a fix.
>
> With Parquet filter pushdown enabled, a query that uses a like '***%' predicate may fail with an error if the system default encoding is not UTF-8.
>
> As far as I know, there are two ways to work around this problem:
> 1. spark.executor.extraJavaOptions="-Dfile.encoding=UTF-8"
> 2. spark.sql.parquet.filterPushdown.string.startsWith=false
>
> The following is the information needed to reproduce this problem.
>
>
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org