Posted to reviews@spark.apache.org by phegstrom <gi...@git.apache.org> on 2018/10/01 14:57:06 UTC

[GitHub] spark pull request #22227: [SPARK-25202] [SQL] Implements split with limit s...

Github user phegstrom commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22227#discussion_r221641267
  
    --- Diff: R/pkg/R/functions.R ---
    @@ -3404,19 +3404,27 @@ setMethod("collect_set",
     #' Equivalent to \code{split} SQL function.
     #'
     #' @rdname column_string_functions
    +#' @param limit determines the length of the returned array.
    +#'              \itemize{
    +#'              \item \code{limit > 0}: length of the array will be at most \code{limit}
    +#'              \item \code{limit <= 0}: the returned array can have any length
    +#'              }
    +#'
     #' @aliases split_string split_string,Column-method
     #' @examples
     #'
     #' \dontrun{
     #' head(select(df, split_string(df$Sex, "a")))
     #' head(select(df, split_string(df$Class, "\\d")))
    +#' head(select(df, split_string(df$Class, "\\d", 2)))
     #' # This is equivalent to the following SQL expression
     #' head(selectExpr(df, "split(Class, '\\\\d')"))}
    --- End diff --
    
    Yes, will make that change @viirya @felixcheung.
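    
    For context, here is a minimal SparkR sketch of the limit semantics documented in the diff above. It assumes the patch from this PR is applied; the sample data frame and the column name "path" are illustrative only, not taken from the PR.
    
        # Illustrative sketch of split_string with the new limit argument (SparkR)
        library(SparkR)
        sparkR.session()
    
        # Toy data frame with a single delimited string column
        df <- createDataFrame(data.frame(path = "a/b/c/d", stringsAsFactors = FALSE))
    
        # limit omitted (or limit <= 0): the returned array can have any length
        # -> list("a", "b", "c", "d")
        head(select(df, split_string(df$path, "/")))
    
        # limit = 2: length of the array will be at most 2
        # -> list("a", "b/c/d")
        head(select(df, split_string(df$path, "/", 2)))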


---
