Posted to issues@arrow.apache.org by "Wes McKinney (JIRA)" <ji...@apache.org> on 2019/06/18 17:05:00 UTC
[jira] [Resolved] (ARROW-4076) [Python] schema validation and filters
[ https://issues.apache.org/jira/browse/ARROW-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Wes McKinney resolved ARROW-4076.
---------------------------------
Resolution: Fixed
Issue resolved by pull request 4600
[https://github.com/apache/arrow/pull/4600]
> [Python] schema validation and filters
> --------------------------------------
>
> Key: ARROW-4076
> URL: https://issues.apache.org/jira/browse/ARROW-4076
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Reporter: George Sakkis
> Assignee: Joris Van den Bossche
> Priority: Minor
> Labels: datasets, easyfix, parquet, pull-request-available
> Fix For: 0.14.0
>
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Currently [schema validation|https://github.com/apache/arrow/blob/758bd557584107cb336cbc3422744dacd93978af/python/pyarrow/parquet.py#L900] of {{ParquetDataset}} takes place before filtering. This may raise a {{ValueError}} if the schema differs across dataset pieces, even if the mismatched pieces would be subsequently filtered out. I think validation should happen after filtering to prevent such spurious errors:
> {noformat}
> --- a/pyarrow/parquet.py
> +++ b/pyarrow/parquet.py
> @@ -878,13 +878,13 @@
> if split_row_groups:
> raise NotImplementedError("split_row_groups not yet implemented")
>
> - if validate_schema:
> - self.validate_schemas()
> -
> if filters is not None:
> filters = _check_filters(filters)
> self._filter(filters)
>
> + if validate_schema:
> + self.validate_schemas()
> +
> def validate_schemas(self):
> open_file = self._get_open_file_func()
> {noformat}
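> To illustrate why the reorder matters, here is a minimal, self-contained sketch. The {{Piece}}/{{Dataset}} classes below are hypothetical stand-ins, not pyarrow's actual implementation; they only model the filter-then-validate ordering from the patch above:
> {noformat}
# Hypothetical sketch (not pyarrow's real classes): shows that running
# partition filtering before schema validation lets pieces the filter
# discards never trigger a spurious schema-mismatch ValueError.

class Piece:
    def __init__(self, partition, schema):
        self.partition = partition  # e.g. {"year": 2019}
        self.schema = schema        # e.g. ("a", "b")

class Dataset:
    def __init__(self, pieces, filters=None, validate_schema=True):
        self.pieces = list(pieces)
        # Filter first, so excluded pieces never reach validation.
        if filters is not None:
            self._filter(filters)
        if validate_schema:
            self.validate_schemas()

    def _filter(self, filters):
        # `filters`: (partition_key, value) equality pairs.
        self.pieces = [
            p for p in self.pieces
            if all(p.partition.get(k) == v for k, v in filters)
        ]

    def validate_schemas(self):
        schemas = {p.schema for p in self.pieces}
        if len(schemas) > 1:
            raise ValueError("Schema in pieces is different: %r" % schemas)

# Two pieces share a schema; an older partition has an extra column.
pieces = [
    Piece({"year": 2019}, ("a", "b")),
    Piece({"year": 2019}, ("a", "b")),
    Piece({"year": 2018}, ("a", "b", "c")),
]

# Filtering first drops the mismatched 2018 piece before validation,
# so constructing the dataset raises no ValueError.
ds = Dataset(pieces, filters=[("year", 2019)])
assert len(ds.pieces) == 2
> {noformat}
> With the real API as of the fix, the equivalent call would be along the lines of {{pq.ParquetDataset(path, filters=[...])}}, where the mismatched pieces excluded by the filters no longer cause validation to fail.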
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)