Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/06/09 09:40:00 UTC
[jira] [Commented] (SPARK-8159) Improve expression coverage
[ https://issues.apache.org/jira/browse/SPARK-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14578488#comment-14578488 ]
Sean Owen commented on SPARK-8159:
----------------------------------
I suppose it's not a big deal, but do there really need to be _hundreds_ of JIRAs to track each function? Is there no logical grouping of these that forms tasks?
> Improve expression coverage
> ---------------------------
>
> Key: SPARK-8159
> URL: https://issues.apache.org/jira/browse/SPARK-8159
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Reporter: Reynold Xin
>
> This is an umbrella ticket to track new expressions we are adding to SQL/DataFrame.
> For each new expression, we should:
> 1. Add a new Expression implementation in org.apache.spark.sql.catalyst.expressions
> 2. If applicable, implement the code generated version (by implementing genCode).
> 3. Add comprehensive unit tests (for all the data types the expressions support).
> 4. If applicable, add a new function for DataFrame in org.apache.spark.sql.functions, and python/pyspark/sql/functions.py for Python.
> For date/time functions, put them in expressions/datetime.scala, and create a DateTimeFunctionSuite.scala for testing.
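As an illustration of step 1 above, the shape of a new expression looks roughly like the following. This is a simplified, self-contained sketch of the pattern only, not the real Catalyst API: the actual org.apache.spark.sql.catalyst.expressions.Expression trait has more members (dataType, nullable, children, the genCode hook from step 2, etc.), and the names Literal, AttributeRef, and Upper here are loose stand-ins for illustration.

```scala
// Simplified sketch of the Catalyst expression-tree pattern (not the real API).
sealed trait Expression {
  // Evaluate this expression against one input row, modeled here as a Map.
  def eval(input: Map[String, Any]): Any
}

// A constant value.
case class Literal(value: Any) extends Expression {
  def eval(input: Map[String, Any]): Any = value
}

// A reference to a column of the input row.
case class AttributeRef(name: String) extends Expression {
  def eval(input: Map[String, Any]): Any = input(name)
}

// A hypothetical "new expression" being added, e.g. an uppercase string function.
case class Upper(child: Expression) extends Expression {
  def eval(input: Map[String, Any]): Any = child.eval(input) match {
    case null      => null            // propagate nulls, as SQL semantics require
    case s: String => s.toUpperCase
  }
}

object ExpressionDemo {
  def main(args: Array[String]): Unit = {
    val expr = Upper(AttributeRef("name"))
    println(expr.eval(Map("name" -> "spark"))) // SPARK
    println(expr.eval(Map("name" -> null)))    // null
  }
}
```

Step 3's unit tests would then exercise eval over every supported input type, including null, and step 4 would expose the expression as a DataFrame function in org.apache.spark.sql.functions and python/pyspark/sql/functions.py.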
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org