Posted to issues@arrow.apache.org by "ASF GitHub Bot (Jira)" <ji...@apache.org> on 2020/05/04 12:15:00 UTC
[jira] [Updated] (ARROW-8644) [Python] Dask integration tests
failing due to change in not including partition columns
[ https://issues.apache.org/jira/browse/ARROW-8644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ASF GitHub Bot updated ARROW-8644:
----------------------------------
Labels: pull-request-available (was: )
> [Python] Dask integration tests failing due to change in not including partition columns
> ----------------------------------------------------------------------------------------
>
> Key: ARROW-8644
> URL: https://issues.apache.org/jira/browse/ARROW-8644
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Reporter: Joris Van den Bossche
> Priority: Major
> Labels: pull-request-available
> Time Spent: 10m
> Remaining Estimate: 0h
>
> In ARROW-3861 (https://github.com/apache/arrow/pull/7050), I "fixed" a bug where the partition columns were always included, even when the user made a manual column selection.
> But apparently, Dask was relying on this behaviour. See the failing nightly integration tests: https://circleci.com/gh/ursa-labs/crossbow/11854?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
> So the best option might be to keep the "old" behaviour for the legacy ParquetDataset; when using the new datasets API ({{use_legacy_dataset=False}}), you get the new / correct behaviour.
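> A minimal pure-Python sketch of the two column-selection semantics being discussed (these are illustrative functions, not pyarrow's actual implementation):

```python
def select_columns_legacy(requested, partition_columns):
    """Legacy ParquetDataset behaviour: partition columns are always
    appended to the result, even when the user made a manual selection."""
    return requested + [c for c in partition_columns if c not in requested]


def select_columns_new(requested, partition_columns):
    """New datasets API behaviour: the manual selection is honoured as-is;
    partition columns are only returned if explicitly requested."""
    return list(requested)


# Example: a dataset partitioned on "year"/"month", user asks only for "value"
partition_cols = ["year", "month"]
print(select_columns_legacy(["value"], partition_cols))  # ['value', 'year', 'month']
print(select_columns_new(["value"], partition_cols))     # ['value']
```

> Under the proposal above, the legacy code path would keep the first semantics for backwards compatibility with Dask, while the new API uses the second.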
--
This message was sent by Atlassian Jira
(v8.3.4#803005)