Posted to jira@arrow.apache.org by "Wes McKinney (Jira)" <ji...@apache.org> on 2020/10/15 21:22:00 UTC

[jira] [Updated] (ARROW-10276) [Python] Armv7 orc and flight not supported for build. Compat error on using with spark

     [ https://issues.apache.org/jira/browse/ARROW-10276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wes McKinney updated ARROW-10276:
---------------------------------
    Summary: [Python] Armv7 orc and flight not supported for build. Compat error on using with spark  (was: Armv7 orc and flight not supported for build. Compat error on using with spark)

> [Python] Armv7 orc and flight not supported for build. Compat error on using with spark
> ---------------------------------------------------------------------------------------
>
>                 Key: ARROW-10276
>                 URL: https://issues.apache.org/jira/browse/ARROW-10276
>             Project: Apache Arrow
>          Issue Type: Bug
>    Affects Versions: 0.17.0
>            Reporter: utsav
>            Priority: Major
>         Attachments: arrow_compat_error, build_pip_wheel.sh, dpu_stream_spark.ipynb, get_arrow_and_create_venv.sh, run_build.sh
>
>
> I'm using an Arm Cortex-A9 processor on the Xilinx Pynq Z2 board. People have tried to build for the Raspberry Pi 3 in previous posts without luck.
> I figured out how to build pyarrow successfully for armv7 using the script below, but the build fails when the orc and flight flags are enabled. People looked into armv7 builds in ARROW-8420, but I don't know whether they hit the same issues.
> When I try converting a Spark dataframe to pandas using pyarrow, it complains about a missing compat attribute. I have attached images below.
> Any help would be appreciated. Thanks
> Spark Version: 2.4.5.
>  The code is as follows:
> ```
> import pandas as pd
>
> # df is an existing Spark DataFrame, with
> # spark.sql.execution.arrow.enabled set to true
> df_pd = df.toPandas()
> npArr = df_pd.to_numpy()
> ```
> The error is as follows:
> ```
> /opt/spark/python/pyspark/sql/dataframe.py:2110: UserWarning: toPandas attempted Arrow optimization because 'spark.sql.execution.arrow.enabled' is set to true; however, failed by the reason below:
>  module 'pyarrow' has no attribute 'compat'
>  Attempting non-optimization as 'spark.sql.execution.arrow.fallback.enabled' is set to true.
>  warnings.warn(msg)
> ``` 
>  
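The `compat` error above typically means the installed pyarrow is newer than what Spark 2.4.x's Arrow integration expects: Spark 2.4 still imports `pyarrow.compat`, a module that later pyarrow releases (such as the 0.17.0 built here) no longer ship. A minimal sketch of a pre-flight version check, where the `(0, 14)` cutoff is an assumption based on the pyarrow versions contemporaneous with Spark 2.4, not a value confirmed in this report:

```python
def arrow_too_new(version, max_supported=(0, 14)):
    """Return True if a pyarrow version string is newer than the latest
    release assumed to work with Spark 2.4.x's Arrow optimization.

    max_supported is an illustrative cutoff, not an official limit.
    """
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) > max_supported

# The reporter's build (pyarrow 0.17.0) trips the guard;
# an older release does not.
print(arrow_too_new("0.17.0"))  # True
print(arrow_too_new("0.14.1"))  # False
```

In practice the guard would be run against `pyarrow.__version__` before enabling `spark.sql.execution.arrow.enabled`, so the mismatch surfaces as a clear message instead of the `AttributeError` fallback warning shown above.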



--
This message was sent by Atlassian Jira
(v8.3.4#803005)