Posted to issues@spark.apache.org by "Vitamin_C (JIRA)" <ji...@apache.org> on 2019/03/21 15:37:00 UTC

[jira] [Closed] (SPARK-27230) After creating a table in hive2.3.4, querying it from pyspark through HiveContext cannot read the table's data and raises an error

     [ https://issues.apache.org/jira/browse/SPARK-27230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vitamin_C closed SPARK-27230.
-----------------------------

> After creating a table in hive2.3.4, querying it from pyspark through HiveContext cannot read the table's data and raises an error
> ----------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-27230
>                 URL: https://issues.apache.org/jira/browse/SPARK-27230
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 2.3.3
>         Environment: ubuntu 16.04
> hadoop-2.8.5
> spark-2.3.3
> hive-2.3.4
>            Reporter: Vitamin_C
>            Priority: Minor
>
> I created the table mdw.t_sd_mobile_user_log in Hive,
> then ran the following query with pyspark:
> from pyspark.sql import HiveContext
> sq = HiveContext(sc)
> sq.sql('show databases').show()
> sq.sql('use mdw').show()
> sq.sql('show tables').show()
> sq.sql('select * from mdw.t_sd_mobile_user_log').show()
> This raised the following error:
> Traceback (most recent call last):
>   File "/usr/local/spark-2.3.3/python/lib/pyspark.zip/pyspark/sql/dataframe.py", line 350, in show
>     print(self._jdf.showString(n, 20, vertical))
>   File "/usr/local/spark-2.3.3/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
>     answer, self.gateway_client, self.target_id, self.name)
>   File "/usr/local/spark-2.3.3/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
>     return f(*a, **kw)
>   File "/usr/local/spark-2.3.3/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
>     format(target_id, ".", name), value)
> Py4JJavaError: An error occurred while calling o280.showString.
> : java.lang.AssertionError: assertion failed: No plan for HiveTableRelation `mdw`.`t_sd_mobile_user_log`, org.apache.hadoop.hive.serde2.OpenCSVSerde, [imei#272, start_time#273, end_time#274, type1#275, jizhan_num#276, platform#277, app_type#278, app_name#279, sz_ll#280, xz_ll#281], [statis_day#282]
>   at scala.Predef$.assert(Predef.scala:170) at
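> The assertion is Spark's planner failing to produce a physical plan for a Hive table stored with org.apache.hadoop.hive.serde2.OpenCSVSerde. A minimal workaround sketch, reusing the sq context defined above and bypassing the serde by reading the table's CSV files directly; the warehouse path, delimiter and header option below are assumptions, not taken from this report:
>
> # Hypothetical location of the table's files; the real one is shown by
> # 'DESCRIBE FORMATTED mdw.t_sd_mobile_user_log' in Hive. For a partitioned
> # table, the statis_day=... subdirectories sit under this directory.
> csv_path = 'hdfs:///user/hive/warehouse/mdw.db/t_sd_mobile_user_log'
>
> # OpenCSVSerde exposes every column as string, so reading with the native
> # CSV reader (string columns by default) matches what Hive would return.
> df = sq.read.csv(csv_path, sep=',', header=False)
> df.show()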
>  
>  
> However, when I create the table in spark-sql instead, I can read it from pyspark in the same way.
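> A sketch of that spark-sql route: declaring an equivalent table through Spark's native CSV data source, which the planner can handle. The column names and the statis_day partition column come from the error above; the string types (all OpenCSVSerde exposes anyway), the new table name, the delimiter and the LOCATION are assumptions:
>
> sq.sql("""
>     CREATE TABLE IF NOT EXISTS mdw.t_sd_mobile_user_log_csv (
>         imei STRING, start_time STRING, end_time STRING, type1 STRING,
>         jizhan_num STRING, platform STRING, app_type STRING, app_name STRING,
>         sz_ll STRING, xz_ll STRING, statis_day STRING
>     )
>     USING CSV
>     OPTIONS (sep ',', header 'false')
>     PARTITIONED BY (statis_day)
>     LOCATION 'hdfs:///user/hive/warehouse/mdw.db/t_sd_mobile_user_log'
> """)
> # Register the existing statis_day=... partition directories, then query.
> sq.sql('MSCK REPAIR TABLE mdw.t_sd_mobile_user_log_csv')
> sq.sql('select * from mdw.t_sd_mobile_user_log_csv').show()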



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org