Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2018/12/16 02:16:36 UTC

[GitHub] cloud-fan commented on issue #23308: [SPARK-26308][SQL] Infer abstract decimal type for java/scala BigDecimal

URL: https://github.com/apache/spark/pull/23308#issuecomment-447612359
 
 
   I think we need to separate two cases (see the sketch after this list):
   1. Inferring the data type of user-provided data, e.g. turning a `Seq[Product]` into a `Dataset`. Here we must produce a specific decimal type, because a `Dataset` must have a concrete schema.
   2. Inferring the input data types of a UDF that will process Spark-provided data. Here the inferred type is only used to validate the UDF's input expressions, so an abstract decimal type suffices: it just indicates that any decimal type is allowed.
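   To make the two cases concrete, here is a minimal, self-contained sketch (all names are illustrative; it assumes a local `SparkSession`, and relies on Spark's default mapping of Scala `BigDecimal` to `decimal(38,18)`):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

case class Price(amount: BigDecimal)

object DecimalInferenceDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("decimal-demo").getOrCreate()
    import spark.implicits._

    // Case 1: schema inference over user data must pick one concrete
    // decimal type; Spark maps scala BigDecimal to decimal(38,18) by default.
    val ds = Seq(Price(BigDecimal("1.23"))).toDS()
    ds.printSchema() // amount: decimal(38,18)

    // Case 2: a UDF over BigDecimal should accept any decimal input.
    // Pinning its expected input to one fixed DecimalType forces a cast
    // on this decimal(10,2) column instead of simply accepting it.
    val addOne = udf((d: java.math.BigDecimal) => d.add(java.math.BigDecimal.ONE))
    val df = ds.selectExpr("CAST(amount AS DECIMAL(10,2)) AS amount")
    df.select(addOne($"amount")).show()

    spark.stop()
  }
}
```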
   
   Regarding the implementation, I think introducing an abstract decimal type is too big a change. I have a new idea: do not let `ScalaUDF` extend any type-check trait. Instead, add rules in `TypeCoercion` and implement `ScalaUDF.checkInputTypes`, so that we do the type casting and type validation entirely ourselves and can special-case decimal types there.
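   To illustrate that direction, here is a hypothetical sketch (assumed names and signatures, not actual Spark internals) of a self-managed input check that treats any concrete decimal as satisfying an expected decimal argument, while falling back to plain type equality elsewhere:

```scala
import org.apache.spark.sql.types.{DataType, DecimalType}

object UdfTypeCheckSketch {
  // Validate UDF inputs ourselves instead of relying on a type-check trait:
  // any precision/scale satisfies an expected decimal, so the analyzer does
  // not have to cast every decimal input to one fixed DecimalType.
  def checkInputTypes(expected: Seq[DataType], actual: Seq[DataType]): Boolean =
    expected.length == actual.length && expected.zip(actual).forall {
      // Special case: any concrete decimal matches an expected decimal.
      case (_: DecimalType, _: DecimalType) => true
      // Otherwise require an exact match (coercion rules run beforehand).
      case (e, a) => e == a
    }
}
```

   A matching `TypeCoercion` rule would then only insert casts for the non-decimal mismatches that this check reports.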
