Posted to jira@arrow.apache.org by "Frederik Fabritius (Jira)" <ji...@apache.org> on 2022/06/22 06:38:00 UTC
[jira] [Comment Edited] (ARROW-16872) open_csv throws ArrowInvalid if csv does not end with a new line and is above 16384 lines
[ https://issues.apache.org/jira/browse/ARROW-16872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17557267#comment-17557267 ]
Frederik Fabritius edited comment on ARROW-16872 at 6/22/22 6:37 AM:
---------------------------------------------------------------------
Hm, it happened both in production on a Google Cloud Compute VM (Debian instance), and I ran the minimal reproduction test locally on my x86 Mac.
When I tried to run this in my dev environment this morning it worked, but I found that it does not work in a fresh environment installed via `mamba` (a faster conda).
Here are steps to fully reproduce it:
```shell
mamba create --name test pyarrow=8 python=3.9
conda activate test
python
```
And then in the python console, use the same reproduction test as before:
```python
import pyarrow as pa
import pyarrow.csv
from io import BytesIO
for _ in pa.csv.open_csv(BytesIO('\n'.join(['review_id,filter_outcome'] + ['62593aaec7628b203bad4c6e,fabrication']*16385).encode())): pass
```
So it seems to happen only in a fresh environment, where `pa.csv.open_csv` has not run successfully before. It could be a mamba issue, a conda issue, a packaging issue, or a compilation/ABI issue; I have not dug deeper into what is going on.
> open_csv throws ArrowInvalid if csv does not end with a new line and is above 16384 lines
> -----------------------------------------------------------------------------------------
>
> Key: ARROW-16872
> URL: https://issues.apache.org/jira/browse/ARROW-16872
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 7.0.0, 8.0.0
> Reporter: Frederik Fabritius
> Priority: Major
> Labels: csvparser, open_csv
>
> `pyarrow.csv.open_csv` throws `ArrowInvalid` if the CSV does not end with a newline and has more than 16384 lines. Tested with both pyarrow 7.0.0 and 8.0.0. The error was seen both in the production app and on a developer laptop.
>
> Here's a minimal case for reproducing the issue:
> ```python
> import pyarrow as pa
> import pyarrow.csv
> from io import BytesIO
> for _ in pa.csv.open_csv(BytesIO('\n'.join(['review_id,filter_outcome'] + ['62593aaec7628b203bad4c6e,fabrication']*16385).encode())): pass
> ```
>
> Error is thrown:
> ArrowInvalid: CSV parse error: Expected 2 columns, got 1:
--
This message was sent by Atlassian Jira
(v8.20.7#820007)