Posted to issues@spark.apache.org by "Kent Yao (Jira)" <ji...@apache.org> on 2019/11/27 03:58:00 UTC
[jira] [Commented] (SPARK-29836) PostgreSQL dialect: cast
[ https://issues.apache.org/jira/browse/SPARK-29836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983150#comment-16983150 ]
Kent Yao commented on SPARK-29836:
----------------------------------
This work seems to introduce a lot of duplicated logic; the main difference is whether to return null or to throw an exception.
I know that Presto has a function `try` that works like this:
{code:sql}
presto> select cast('blablah' as boolean);
Query 20191126_041930_00002_46mbf failed: Cannot cast 'blablah' to BOOLEAN
presto> select try(cast('blablah' as boolean));
_col0
-------
NULL
(1 row)
Query 20191126_041941_00003_46mbf, FINISHED, 1 node
Splits: 17 total, 17 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]
{code}
Can we simply add an implicit `try` for the Spark dialect, and change the related code to throw exceptions by default? CC [~cloud_fan] [~maropu] [~dongjoon] [~hyukjin.kwon] [~yumwang]
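To illustrate the semantics being proposed (not actual Spark or Presto code), here is a minimal Python sketch: a strict cast that throws on bad input, and a hypothetical `try_` wrapper that converts the evaluation error into NULL (None), mirroring what Presto's `try(...)` does around `cast`. The accepted boolean literals below are an assumption for the example, not a spec.

```python
def strict_cast_boolean(s):
    """Strict cast: raise on unrecognized input (throw-by-default behavior)."""
    truthy = {"t", "true", "yes", "on", "1"}   # assumed literals for this sketch
    falsy = {"f", "false", "no", "off", "0"}
    v = s.strip().lower()
    if v in truthy:
        return True
    if v in falsy:
        return False
    raise ValueError(f"Cannot cast '{s}' to BOOLEAN")

def try_(f, *args):
    """Presto-style try(): turn an evaluation error into NULL (None)."""
    try:
        return f(*args)
    except ValueError:
        return None

print(strict_cast_boolean("true"))            # True
print(try_(strict_cast_boolean, "blablah"))   # None, instead of an error
```

With this split, the dialect only needs one strict cast implementation, and the lenient null-returning variant falls out of wrapping it in `try`.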
> PostgreSQL dialect: cast
> ------------------------
>
> Key: SPARK-29836
> URL: https://issues.apache.org/jira/browse/SPARK-29836
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 3.0.0
> Reporter: wuyi
> Priority: Major
>
> SparkSQL and PostgreSQL have many differences in default cast behavior between types. We should make SparkSQL's cast behavior consistent with PostgreSQL's when spark.sql.dialect is configured as PostgreSQL.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)