Posted to issues@arrow.apache.org by "Igor Yastrebov (JIRA)" <ji...@apache.org> on 2019/07/17 10:02:00 UTC
[jira] [Updated] (ARROW-5966) [Python] Capacity error when converting large string numpy array to arrow array
[ https://issues.apache.org/jira/browse/ARROW-5966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Igor Yastrebov updated ARROW-5966:
----------------------------------
External issue URL: (was: https://github.com/apache/arrow/issues/1855)
Description:
Trying to create a large string array fails with
ArrowCapacityError: Encoded string length exceeds maximum size (2GB)
instead of creating a chunked array.
A reproducible example:
{code:python}
import uuid

import numpy as np
import pyarrow as pa

li = []
for i in range(100000000):
    li.append(uuid.uuid4().hex)
arr = np.array(li)
parr = pa.array(arr)
{code}
Is this a regression, or was it never properly fixed? See [https://github.com/apache/arrow/issues/1855].
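A possible workaround, sketched below under the assumption that each chunk's encoded string data stays under the 2 GB limit: slice the numpy array and build a {{pa.chunked_array}} from per-slice conversions instead of one call to {{pa.array}}. The helper name and chunk size are illustrative, not part of the pyarrow API.

{code:python}
import numpy as np
import pyarrow as pa

def to_chunked_string_array(arr, chunk_size=10_000_000):
    # Hypothetical helper: convert each slice separately so no single
    # pa.array call exceeds the 2 GB encoded-string capacity, then
    # combine the pieces into a single ChunkedArray.
    chunks = [pa.array(arr[i:i + chunk_size])
              for i in range(0, len(arr), chunk_size)]
    return pa.chunked_array(chunks)

# Small demonstration: 10 strings split into chunks of 4 -> 3 chunks.
small = np.array(['a' * 8] * 10)
parr = to_chunked_string_array(small, chunk_size=4)
{code}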
was:
Trying to create a large string array fails with
ArrowCapacityError: Encoded string length exceeds maximum size (2GB)
instead of creating a chunked array.
A reproducible example:
{code:python}
import uuid

import numpy as np
import pyarrow as pa

li = []
for i in range(100000000):
    li.append(uuid.uuid4().hex)
arr = np.array(li)
parr = pa.array(arr)
{code}
Is this a regression, or was it never properly fixed?
> [Python] Capacity error when converting large string numpy array to arrow array
> -------------------------------------------------------------------------------
>
> Key: ARROW-5966
> URL: https://issues.apache.org/jira/browse/ARROW-5966
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.13.0, 0.14.0
> Reporter: Igor Yastrebov
> Priority: Major
>
> Trying to create a large string array fails with
> ArrowCapacityError: Encoded string length exceeds maximum size (2GB)
> instead of creating a chunked array.
>
> A reproducible example:
> {code:python}
> import uuid
> import numpy as np
> import pyarrow as pa
> li = []
> for i in range(100000000):
>     li.append(uuid.uuid4().hex)
> arr = np.array(li)
> parr = pa.array(arr)
> {code}
> Is this a regression, or was it never properly fixed? See [https://github.com/apache/arrow/issues/1855].
>
>
--
This message was sent by Atlassian JIRA
(v7.6.14#76016)