Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:04:18 UTC
[jira] [Updated] (SPARK-18780) "org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree fromunixtime(cast(…))"
[ https://issues.apache.org/jira/browse/SPARK-18780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-18780:
---------------------------------
Labels: bulk-closed (was: )
> "org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding attribute, tree fromunixtime(cast(…))"
> ---------------------------------------------------------------------------------------------------------------
>
> Key: SPARK-18780
> URL: https://issues.apache.org/jira/browse/SPARK-18780
> Project: Spark
> Issue Type: Bug
> Components: Spark Shell, SQL
> Affects Versions: 1.6.0
> Environment: hdp 2.4.0.0-169 with 10 servers in CentOS 6.5;
> spark 1.6.0 hive 1.2.1 hadoop 2.7.1
> Reporter: SunYonggang
> Priority: Minor
> Labels: bulk-closed
>
> In spark-shell, I generate an RDD from hiveContext.sql(hql_content). The query parses fine, but when I invoke an action such as first or collect, it fails with "org.apache.spark.sql.catalyst.errors.package$TreeNodeException: Binding
> attribute, tree fromunixtime(cast(…))".
> I searched JIRA and did not find a matching issue or solution. Is this a bug in Spark 1.6.0?
> The SQL is as follows: "select a, collect_set(b)[0], (from_unixtime(cast(round(unix_timestamp(c, 'yyyyMMddHHmmSS') / 60) * 60 as bigint))) as minute from table_name group by a, (from_unixtime(cast(round(unix_timestamp(c, 'yyyyMMddHHmmSS') / 60) * 60 as bigint))) having length(a) = 32"
> 😇 This SQL runs fine in Hive, but in spark-shell it produces the error message above. 😖
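A workaround that often sidesteps this class of attribute-binding errors (a sketch only, not verified against Spark 1.6.0; table and column names are taken verbatim from the report) is to compute the rounded timestamp once in a subquery and group by its alias, so the projected expression and the GROUP BY key bind to the same attribute instead of two separately analyzed copies of the same expression tree:

```sql
SELECT a,
       collect_set(b)[0],
       minute
FROM (
    -- Round the unix timestamp down to whole minutes once, under an alias.
    -- The format pattern is copied as-is from the report; note that in
    -- SimpleDateFormat 'SS' means milliseconds, while seconds would be 'ss'.
    SELECT a,
           b,
           from_unixtime(cast(round(unix_timestamp(c, 'yyyyMMddHHmmSS') / 60) * 60 AS bigint)) AS minute
    FROM table_name
) t
GROUP BY a, minute
HAVING length(a) = 32
```

Grouping by the alias keeps the analyzer from having to match the full from_unixtime(cast(round(…))) expression between the aggregate output and the grouping list, which is where the binding failure was reported to occur.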
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org