Posted to jira@arrow.apache.org by "Ian Cook (Jira)" <ji...@apache.org> on 2021/01/29 04:13:00 UTC
[jira] [Comment Edited] (ARROW-6582) [R] Arrow to R fails with embedded nuls in strings
[ https://issues.apache.org/jira/browse/ARROW-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17274132#comment-17274132 ]
Ian Cook edited comment on ARROW-6582 at 1/29/21, 4:12 AM:
-----------------------------------------------------------
For future reference/testing purposes: attached is a tiny uncompressed Parquet file named {{embedded_nul.parquet}} that contains a single string column named {{x}} with a single record containing a one-byte string consisting of an embedded nul. The file was written with Spark 3.0.1 using this PySpark code:
{code:python}
from pyspark.sql.types import *
schema = StructType([StructField("x", StringType(), True)])
json = '[{"x": "\\u0000"}]'
df = spark.read.schema(schema).json(sc.parallelize([json]))
df.repartition(1).write.parquet('embedded_nul.parquet', compression = 'none')
{code}
> [R] Arrow to R fails with embedded nuls in strings
> --------------------------------------------------
>
> Key: ARROW-6582
> URL: https://issues.apache.org/jira/browse/ARROW-6582
> Project: Apache Arrow
> Issue Type: Bug
> Components: R
> Affects Versions: 0.14.1
> Environment: Windows 10
> R 3.4.4
> Reporter: John Cassil
> Assignee: Neal Richardson
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: embedded_nul.parquet
>
> Time Spent: 3h 40m
> Remaining Estimate: 0h
>
> Apologies if this issue isn't categorized or documented appropriately. Please be gentle! :)
> As a heavy R user who normally interacts with Parquet files using sparklyr, I recently decided to try arrow::read_parquet() on a few Parquet files that were on my local machine rather than in Hadoop. Several attempts failed due to embedded nuls. For example:
> {code}
> try({df <- read_parquet('out_2019-09_data_1.snappy.parquet') })
> Error in Table__to_dataframe(x, use_threads = option_use_threads()) :
>   embedded nul in string: 'INSTALL BOTH LEFT FRONT AND RIGHT FRONT TORQUE ARMS\0 ARMS'
> {code}
> Is there a solution to this?
> I have also hit roadblocks with embedded nuls in CSVs in the past using data.table::fread(), but readr::read_delim() seems to handle them gracefully, proceeding with just a warning.
> Apologies that I do not have a handy reprex. I don't know if I can even recreate a parquet file with embedded nuls using arrow if it won't let me read one in, and I can't share this file due to company restrictions.
> Please let me know how I can be of any more help!
--
This message was sent by Atlassian Jira
(v8.3.4#803005)