Posted to dev@arrow.apache.org by "Joris Van den Bossche (Jira)" <ji...@apache.org> on 2020/04/30 08:58:00 UTC

[jira] [Created] (ARROW-8644) [Python] Dask integration tests failing due to change in not including partition columns

Joris Van den Bossche created ARROW-8644:
--------------------------------------------

             Summary: [Python] Dask integration tests failing due to change in not including partition columns
                 Key: ARROW-8644
                 URL: https://issues.apache.org/jira/browse/ARROW-8644
             Project: Apache Arrow
          Issue Type: Bug
          Components: Python
            Reporter: Joris Van den Bossche


In ARROW-3861 (https://github.com/apache/arrow/pull/7050), I "fixed" a bug where the partition columns were always included in the result, even when the user did a manual column selection.

But apparently, this behaviour was being relied upon by dask. See the failing nightly integration tests: https://circleci.com/gh/ursa-labs/crossbow/11854?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link

So the best option might be to keep the "old" behaviour for the legacy ParquetDataset; when using the new datasets API ({{use_legacy_dataset=False}}), you get the new / correct behaviour.
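As a rough sketch of what that proposal would mean (assuming a hypothetical partitioned dataset under {{data/}} with data columns "a" / "b" and a partition column "year", and using the {{use_legacy_dataset}} keyword of {{pyarrow.parquet.read_table}}):

{code:python}
import pyarrow.parquet as pq

# Hypothetical layout (not part of the issue itself):
#   data/year=2019/part-0.parquet
#   data/year=2020/part-0.parquet
# with data columns "a", "b" and partition column "year".

# Legacy ParquetDataset path: keep the pre-ARROW-3861 behaviour, where the
# partition column is appended even with an explicit column selection.
legacy = pq.read_table("data/", columns=["a"], use_legacy_dataset=True)
# legacy.column_names -> ["a", "year"]   (old behaviour that dask relies on)

# New datasets API: only the requested columns are returned.
new = pq.read_table("data/", columns=["a"], use_legacy_dataset=False)
# new.column_names -> ["a"]              (new / correct behaviour)
{code}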



--
This message was sent by Atlassian Jira
(v8.3.4#803005)