Posted to commits@spark.apache.org by me...@apache.org on 2015/07/30 17:20:55 UTC

spark git commit: [SPARK-] [MLLIB] minor fix on tokenizer doc

Repository: spark
Updated Branches:
  refs/heads/master d212a3142 -> 9c0501c5d


[SPARK-] [MLLIB] minor fix on tokenizer doc

A trivial fix for the comments of RegexTokenizer.

This may be too small a change, but I just noticed it and think the current wording can be quite misleading. I can create a JIRA if necessary.

Author: Yuhao Yang <hh...@gmail.com>

Closes #7791 from hhbyyh/docFix and squashes the following commits:

cdf2542 [Yuhao Yang] minor fix on tokenizer doc


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9c0501c5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/9c0501c5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/9c0501c5

Branch: refs/heads/master
Commit: 9c0501c5d04d83ca25ce433138bf64df6a14dc58
Parents: d212a31
Author: Yuhao Yang <hh...@gmail.com>
Authored: Thu Jul 30 08:20:52 2015 -0700
Committer: Xiangrui Meng <me...@databricks.com>
Committed: Thu Jul 30 08:20:52 2015 -0700

----------------------------------------------------------------------
 mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/9c0501c5/mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
----------------------------------------------------------------------
diff --git a/mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala b/mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
index 0b3af47..248288c 100644
--- a/mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
+++ b/mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
@@ -50,7 +50,7 @@ class Tokenizer(override val uid: String) extends UnaryTransformer[String, Seq[S
 /**
  * :: Experimental ::
  * A regex based tokenizer that extracts tokens either by using the provided regex pattern to split
- * the text (default) or repeatedly matching the regex (if `gaps` is true).
+ * the text (default) or repeatedly matching the regex (if `gaps` is false).
  * Optional parameters also allow filtering tokens using a minimal length.
  * It returns an array of strings that can be empty.
  */
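
For readers skimming the diff: the doc fix matters because `gaps` flips the meaning of the pattern. Below is a minimal sketch of both modes against the ml.feature RegexTokenizer API; the column names and sample sentence are illustrative, not part of this commit, and an existing SparkContext `sc` is assumed.

  import org.apache.spark.sql.SQLContext
  import org.apache.spark.ml.feature.RegexTokenizer

  val sqlContext = new SQLContext(sc)
  val df = sqlContext.createDataFrame(Seq(
    (0, "Logistic,regression,models,are,neat")
  )).toDF("id", "sentence")

  // gaps = true (the default): the pattern matches the separators,
  // so the text is split on every occurrence of the pattern.
  val splitter = new RegexTokenizer()
    .setInputCol("sentence")
    .setOutputCol("words")
    .setPattern(",")
    .setGaps(true)

  // gaps = false: the pattern matches the tokens themselves,
  // i.e. the regex is applied repeatedly and each match becomes a token.
  val matcher = new RegexTokenizer()
    .setInputCol("sentence")
    .setOutputCol("words")
    .setPattern("\\w+")
    .setGaps(false)
    .setMinTokenLength(2) // the optional minimal-length filter the doc mentions

  splitter.transform(df).select("words").show()
  matcher.transform(df).select("words").show()

Both tokenizers yield the same five tokens on this input; the point of the corrected comment is only which value of `gaps` selects split-on-pattern behavior versus match-the-pattern behavior.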

