Posted to jira@arrow.apache.org by "Akash Shah (Jira)" <ji...@apache.org> on 2020/10/16 06:26:00 UTC
[jira] [Updated] (ARROW-10324) function read_parquet(*,as_data_frame=TRUE) fails when embedded nuls present.
[ https://issues.apache.org/jira/browse/ARROW-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Akash Shah updated ARROW-10324:
-------------------------------
Docs Text:
> sessionInfo()
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.5 LTS
Matrix products: default
BLAS: /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.7.1
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.7.1
locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C LC_TIME=en_US.UTF-8
[4] LC_COLLATE=en_US.UTF-8 LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] stringr_1.4.0 dplyr_1.0.2 tictoc_1.0 arrow_1.0.1 sparklyr_1.4.0
Description:
For the following code snippet
{code}
library(arrow)
download.file('https://github.com/akashshah59/embedded_nul_parquet/raw/main/CC-MAIN-20200702045758-20200702075758-00007.parquet','sample.parquet')
read_parquet(file = 'sample.parquet',as_data_frame = TRUE)
{code}
I get:
{code}
Error in Table__to_dataframe(x, use_threads = option_use_threads()) : embedded nul in string: '\0 at \0'
{code}
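A side note on why this fails only at the data.frame step: Arrow itself stores the string without complaint, but R character vectors cannot contain embedded nuls, so the conversion in Table__to_dataframe() has nowhere to put the value. The byte-level cleanup is trivial once a string value is isolated, as this Python sketch (using the offending value quoted in the error message above) illustrates; the hard part is isolating the string values in the first place:

```python
# The offending value from the error message above, as raw bytes.
value = b"\x00 at \x00"

# Python byte strings tolerate embedded nuls, so once the string bytes
# are isolated from the binary stream the cleanup is a one-liner.
cleaned = value.replace(b"\x00", b"").decode("utf-8")
print(repr(cleaned))  # ' at '
```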
So I thought: what if I could read the file as raw bytes and replace the embedded nul characters (\0) myself?
{code}
parquet <- read_parquet(file = 'sample.parquet',as_data_frame = FALSE)
raw <- write_to_raw(parquet,format = "file")
print(raw)
{code}
In this case, I get a stream of raw bytes in which string data and binary structure are interleaved, so it is very hard to tell which 00 bytes are embedded nuls inside string values and which are structural; blindly removing every 00 byte would corrupt the stream.
{code}
[1] 41 52 52 4f 57 31 00 00 ff ff ff ff d0 02 00 00 10 00 00 00 00 00 0a 00 0c 00 06 00
[29] 05 00 08 00 0a 00 00 00 00 01 04 00 0c 00 00 00 08 00 08 00 00 00 04 00 08 00 00 00
[57] 04 00 00 00 0d 00 00 00 70 02 00 00 38 02 00 00 10 02 00 00 d0 01 00 00 a4 01 00 00
[85] 74 01 00 00 34 01 00 00 04 01 00 00 cc 00 00 00 9c 00 00 00 64 00 00 00 34 00 00 00
[113] 04 00 00 00 d4 fd ff ff 00 00 01 05 14 00 00 00 0c 00 00 00 04 00 00 00 00 00 00 00
[141] c4 fd ff ff 0a 00 00 00 77 61 72 63 5f 6c 61 6e 67 73 00 00 00 fe ff ff 00 00 01 05
[169] 14 00 00 00 0c 00 00 00 04 00 00 00 00 00 00 00 f0 fd ff ff 0b 00 00 00 6c 61 6e 67
[197] 5f 64 65 74 65 63 74 00 2c fe ff ff 00 00 01 03 18 00 00 00 0c 00 00 00 04 00
{code}
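The stream is not entirely indecipherable, though. Assuming write_to_raw(format = "file") emits the Arrow IPC file format (which the leading bytes suggest), parts of the dump above can be decoded by hand: it opens with the magic ARROW1 plus two bytes of zero padding, and later byte runs spell out the column names. This also shows why deleting every 00 byte would break the file, since most of them belong to padding and little-endian length/offset fields rather than to string data:

```python
# Leading bytes of the dump printed above.
head = bytes.fromhex("41 52 52 4f 57 31 00 00 ff ff ff ff d0 02 00 00")
assert head[:6] == b"ARROW1"     # Arrow IPC file magic
assert head[6:8] == b"\x00\x00"  # zero padding, not string data

# Two later runs from the dump decode to column names.
assert bytes.fromhex("77 61 72 63 5f 6c 61 6e 67 73").decode() == "warc_langs"
assert bytes.fromhex("6c 61 6e 67 5f 64 65 74 65 63 74").decode() == "lang_detect"

# The trailing d0 02 00 00 is a little-endian length field, not text.
assert int.from_bytes(bytes.fromhex("d0 02 00 00"), "little") == 0x2D0  # 720

# Stripping every 00 byte shortens the stream, so all offsets break.
stripped = head.replace(b"\x00", b"")
print(len(head), len(stripped))  # 16 12
```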
Is there a way to handle embedded nuls while reading an Apache Parquet file?
Issue Type: Improvement (was: Bug)
> function read_parquet(*,as_data_frame=TRUE) fails when embedded nuls present.
> ------------------------------------------------------------------------------
>
> Key: ARROW-10324
> URL: https://issues.apache.org/jira/browse/ARROW-10324
> Project: Apache Arrow
> Issue Type: Improvement
> Components: R
> Reporter: Akash Shah
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)