Posted to reviews@spark.apache.org by mengxr <gi...@git.apache.org> on 2014/08/20 20:17:12 UTC

[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

GitHub user mengxr opened a pull request:

    https://github.com/apache/spark/pull/2061

    [SPARK-3143][MLLIB] add tf-idf user guide

    Moved TF-IDF before Word2Vec because the former is more basic. I also added a link for Word2Vec. @atalwalkar

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mengxr/spark tfidf-doc

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/2061.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2061
    
----
commit a5ea4b4ba5b09ab0bb87d52119fd5131fc473550
Author: Xiangrui Meng <me...@databricks.com>
Date:   2014-08-20T18:15:06Z

    add tf-idf user guide

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org


[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/2061#issuecomment-52850157
  
      [QA tests have finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18992/consoleFull) for PR 2061 at commit [`ca04c70`](https://github.com/apache/spark/commit/ca04c70d38c1294274833c3ba2c09ddf694b11d6).
     * This patch **passes** unit tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by atalwalkar <gi...@git.apache.org>.
Github user atalwalkar commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16501343
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     * Table of contents
     {:toc}
     
    +
    +## TF-IDF
    +
    +[Term frequency-inverse document frequency (TF-IDF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) is a feature 
    +vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus.
    +Denote a term by `$t$`, a document by `$d$`, and the corpus by `$D$`.
    +Term frequency `$TF(t, d)$` is the number of times that term `$t$` appears in document `$d$`.
    +And document frequency `$DF(t, D)$` is the number of documents that contains term `$t$`.
    +If we only use term frequency to measure the importance, it is very easy to over-emphasize terms that
    +appear very often but carry little information about the document, e.g., "a", "the", and "of".
    +If a term appears very often across the corpus, it means it doesn't carry special information about
    +a particular document.
    +Inverse document frequency is a numerical measure of how much information a term provides:
    +`\[
    +IDF(t, D) = \log \frac{|D| + 1}{DF(t, D) + 1},
    +\]`
    +where `$|D|$` is the total number of documents in the corpus.
    +Since logarithm is used, if a term appears in all documents, its IDF value becomes 0.
    +Note that a smoothing term is applied to avoid dividing by zero for terms outside the corpus.
    +The TF-IDF measure is simply the product of TF and IDF:
    +`\[
    +TFIDF(t, d, D) = TF(t, d) \cdot IDF(t, D).
    +\]`
    +There are several variants on the definition of term frequency and document frequency.
    +In MLlib, we separate TF and IDF to make them flexible.
    +
    +Our implementation of term frequency utilizes the
    +[hashing trick](http://en.wikipedia.org/wiki/Feature_hashing).
    +A raw feature is mapped into an index (term) by applying a hash function.
    +Then term frequencies are calculated based on the mapped indices.
    +This approach saves the global term-to-index map, which is expensive for a large corpus,
    +but it suffers from hash collision, where different raw features may become the same term after hashing.
    --- End diff --
    
    "This approach avoids the need to compute a global term-to-index map, which can be expensive for a large corpus,
    but it suffers from potential hash collisions, where different raw features may become the same term after hashing."
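
    To make the hashing trick concrete, here is a minimal self-contained
    sketch of the idea in plain Scala. This is not MLlib's `HashingTF`
    implementation; the non-negative modulo bucketing and the 1,000-bucket
    dimension are illustrative assumptions.

        // Sketch of the hashing trick: terms go straight to hashed
        // bucket indices, so no global term-to-index map is needed.
        object HashingTrickSketch {
          val numFeatures = 1000 // target feature dimension (bucket count)

          // Map a raw term to a bucket index via its hash code.
          def indexOf(term: String): Int =
            ((term.hashCode % numFeatures) + numFeatures) % numFeatures

          def main(args: Array[String]): Unit = {
            val doc = Seq("a", "spark", "user", "guide", "a")
            // Term frequencies are counted over hashed indices; two
            // distinct terms hashing to the same index collide and
            // their counts are summed.
            val tf: Map[Int, Int] =
              doc.groupBy(indexOf).map { case (i, ts) => (i, ts.size) }
            println(tf)
          }
        }

    Because only hashed indices are kept, the vector dimension is fixed
    up front, which is also exactly why two raw features can end up
    sharing a bucket.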




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/2061#issuecomment-52829311
  
      [QA tests have finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18966/consoleFull) for PR 2061 at commit [`a5ea4b4`](https://github.com/apache/spark/commit/a5ea4b4ba5b09ab0bb87d52119fd5131fc473550).
     * This patch **passes** unit tests.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/spark/pull/2061




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16504428
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     * Table of contents
     {:toc}
     
    +
    +## TF-IDF
    +
    +[Term frequency-inverse document frequency (TF-IDF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) is a feature 
    +vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus.
    +Denote a term by `$t$`, a document by `$d$`, and the corpus by `$D$`.
    +Term frequency `$TF(t, d)$` is the number of times that term `$t$` appears in document `$d$`.
    +And document frequency `$DF(t, D)$` is the number of documents that contains term `$t$`.
    --- End diff --
    
    done
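
    As a worked illustration of the definitions quoted earlier in this
    thread (all counts made up): take a corpus of `$|D| = 100$` documents,
    a term `$t$` that appears in `$DF(t, D) = 9$` of them, and
    `$TF(t, d) = 3$` occurrences in document `$d$`. Assuming the natural
    logarithm (the base is left unspecified in the guide),

        \[
        IDF(t, D) = \log \frac{100 + 1}{9 + 1} = \log 10.1 \approx 2.31,
        \qquad
        TFIDF(t, d, D) = 3 \times 2.31 \approx 6.94.
        \]

    As a sanity check on the smoothing: a term appearing in all 100
    documents gets `$IDF = \log \frac{100 + 1}{100 + 1} = \log 1 = 0$`,
    matching the guide's note that such terms carry no distinguishing
    information.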




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16504432
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     [...]
    +This approach saves the global term-to-index map, which is expensive for a large corpus,
    +but it suffers from hash collision, where different raw features may become the same term after hashing.
    --- End diff --
    
    done




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16504078
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     [...]
    +This approach saves the global term-to-index map, which is expensive for a large corpus,
    +but it suffers from hash collision, where different raw features may become the same term after hashing.
    +To reduce the chance of collision, we can increase the target feature dimension, i.e., 
    +the number of buckets of the hash table.
    --- End diff --
    
    We use `2^20`. I will mention the default value.
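
    Sketched below are both ways of setting the dimension in Scala. The
    `1 << 20` value mirrors the `2^20` default mentioned above; the no-arg
    constructor picking up that default is stated here as an assumption to
    check against your Spark version.

        import org.apache.spark.mllib.feature.HashingTF

        // No-arg constructor: picks up the library default
        // (2^20 = 1,048,576 buckets, per the comment above).
        val hashingTFDefault = new HashingTF()

        // Explicit dimension: more buckets lower the collision
        // probability at the cost of higher-dimensional (sparse) vectors.
        val hashingTFExplicit = new HashingTF(numFeatures = 1 << 20)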




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/2061#issuecomment-52843305
  
      [QA tests have started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18992/consoleFull) for PR 2061 at commit [`ca04c70`](https://github.com/apache/spark/commit/ca04c70d38c1294274833c3ba2c09ddf694b11d6).
     * This patch merges cleanly.




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16513141
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     * Table of contents
     {:toc}
     
    +
    +## TF-IDF
    +
    +[Term frequency-inverse document frequency (TF-IDF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) is a feature 
    +vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus.
    +Denote a term by `$t$`, a document by `$d$`, and the corpus by `$D$`.
    +Term frequency `$TF(t, d)$` is the number of times that term `$t$` appears in document `$d$`.
    +And document frequency `$DF(t, D)$` is the number of documents that contains term `$t$`.
    +If we only use term frequency to measure the importance, it is very easy to over-emphasize terms that
    +appear very often but carry little information about the document, e.g., "a", "the", and "of".
    +If a term appears very often across the corpus, it means it doesn't carry special information about
    +a particular document.
    +Inverse document frequency is a numerical measure of how much information a term provides:
    +`\[
    +IDF(t, D) = \log \frac{|D| + 1}{DF(t, D) + 1},
    +\]`
    +where `$|D|$` is the total number of documents in the corpus.
    +Since logarithm is used, if a term appears in all documents, its IDF value becomes 0.
    +Note that a smoothing term is applied to avoid dividing by zero for terms outside the corpus.
    +The TF-IDF measure is simply the product of TF and IDF:
    +`\[
    +TFIDF(t, d, D) = TF(t, d) \cdot IDF(t, D).
    +\]`
    +There are several variants on the definition of term frequency and document frequency.
    +In MLlib, we separate TF and IDF to make them flexible.
    +
    +Our implementation of term frequency utilizes the
    +[hashing trick](http://en.wikipedia.org/wiki/Feature_hashing).
    +A raw feature is mapped into an index (term) by applying a hash function.
    +Then term frequencies are calculated based on the mapped indices.
    +This approach saves the global term-to-index map, which is expensive for a large corpus,
    +but it suffers from hash collision, where different raw features may become the same term after hashing.
    +To reduce the chance of collision, we can increase the target feature dimension, i.e., 
    +the number of buckets of the hash table.
    +
    +**Note:** MLlib doesn't provide tools for text segmentation.
    +We refer users to the [Stanford NLP Group](http://nlp.stanford.edu/) and 
    +[scalanlp/chalk](https://github.com/scalanlp/chalk).
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +
    +TF and IDF are implemented in [HashingTF](api/scala/index.html#org.apache.spark.mllib.feature.HashingTF)
    +and [IDF](api/scala/index.html#org.apache.spark.mllib.feature.IDF).
    +`HashingTF` takes an `RDD[Iterable[_]]` as the input.
    +Each record could be an iterable of strings or other types.
    +
    +{% highlight scala %}
    +import org.apache.spark.rdd.RDD
    +import org.apache.spark.SparkContext
    +import org.apache.spark.mllib.feature.HashingTF
    +import org.apache.spark.mllib.linalg.Vector
    +
    +val sc: SparkContext = ...
    +
    +// Load documents (one per line).
    +val documents: RDD[Seq[String]] = sc.textFile("...").map(_.split(" ").toSeq)
    +
    +val numFeatures = 1000000
    +val hashingTF = new HashingTF(numFeatures)
     +val tf: RDD[Vector] = hashingTF.transform(documents)
    +{% endhighlight %}
    +
     +While applying `HashingTF` only needs a single pass over the data, applying `IDF` needs two passes: 
    +first to compute the IDF vector and second to scale the term frequencies by IDF.
    +
    +{% highlight scala %}
    +import org.apache.spark.mllib.feature.IDF
    +
    +// ... continue from the previous example
    +tf.cache()
    +val idf = new IDF().fit(tf)
    +val tfidf: RDD[Vector] = idf.transform(tf)
    +{% endhighlight %}
    +</div>
    +</div>
    +
     ## Word2Vec 
     
    -Word2Vec computes distributed vector representation of words. The main advantage of the distributed
    +[Word2Vec](https://code.google.com/p/word2vec/) computes distributed vector representation of words.
    --- End diff --
    
    This is independent of this PR. Does the current doc look good to you?




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16513128
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     * Table of contents
     {:toc}
     
    +
    +## TF-IDF
    +
    +[Term frequency-inverse document frequency (TF-IDF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) is a feature 
    +vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus.
    +Denote a term by `$t$`, a document by `$d$`, and the corpus by `$D$`.
    +Term frequency `$TF(t, d)$` is the number of times that term `$t$` appears in document `$d$`.
    +And document frequency `$DF(t, D)$` is the number of documents that contains term `$t$`.
    +If we only use term frequency to measure the importance, it is very easy to over-emphasize terms that
    +appear very often but carry little information about the document, e.g., "a", "the", and "of".
    +If a term appears very often across the corpus, it means it doesn't carry special information about
    +a particular document.
    +Inverse document frequency is a numerical measure of how much information a term provides:
    +`\[
    +IDF(t, D) = \log \frac{|D| + 1}{DF(t, D) + 1},
    +\]`
    +where `$|D|$` is the total number of documents in the corpus.
    +Since logarithm is used, if a term appears in all documents, its IDF value becomes 0.
    +Note that a smoothing term is applied to avoid dividing by zero for terms outside the corpus.
    +The TF-IDF measure is simply the product of TF and IDF:
    +`\[
    +TFIDF(t, d, D) = TF(t, d) \cdot IDF(t, D).
    +\]`
    +There are several variants on the definition of term frequency and document frequency.
    +In MLlib, we separate TF and IDF to make them flexible.
    +
    +Our implementation of term frequency utilizes the
    +[hashing trick](http://en.wikipedia.org/wiki/Feature_hashing).
    +A raw feature is mapped into an index (term) by applying a hash function.
    +Then term frequencies are calculated based on the mapped indices.
    +This approach saves the global term-to-index map, which is expensive for a large corpus,
    +but it suffers from hash collision, where different raw features may become the same term after hashing.
    +To reduce the chance of collision, we can increase the target feature dimension, i.e., 
    +the number of buckets of the hash table.
    +
    +**Note:** MLlib doesn't provide tools for text segmentation.
    +We refer users to the [Stanford NLP Group](http://nlp.stanford.edu/) and 
    +[scalanlp/chalk](https://github.com/scalanlp/chalk).
    +
    +<div class="codetabs">
    +<div data-lang="scala" markdown="1">
    +
    +TF and IDF are implemented in [HashingTF](api/scala/index.html#org.apache.spark.mllib.feature.HashingTF)
    +and [IDF](api/scala/index.html#org.apache.spark.mllib.feature.IDF).
    +`HashingTF` takes an `RDD[Iterable[_]]` as the input.
    +Each record could be an iterable of strings or other types.
    +
    +{% highlight scala %}
    +import org.apache.spark.rdd.RDD
    +import org.apache.spark.SparkContext
    +import org.apache.spark.mllib.feature.HashingTF
    +import org.apache.spark.mllib.linalg.Vector
    +
    +val sc: SparkContext = ...
    +
    +// Load documents (one per line).
    +val documents: RDD[Seq[String]] = sc.textFile("...").map(_.split(" ").toSeq)
    +
    +val numFeatures = 1000000
    +val hashingTF = new HashingTF(numFeatures)
    +val tf: RDD[Vector] = hasingTF.transform(documents)
    +{% endhighlight %}
    +
    +While applying `HashingTF` only needs a single pass to the data, applying `IDF` needs two passes: 
    +first to compute the IDF vector and second to scale the term frequencies by IDF.
    +
    +{% highlight scala %}
    +import org.apache.spark.mllib.feature.IDF
    +
    +// ... continue from the previous example
    +tf.cache()
    +val idf = new IDF().fit(tf)
    +val tfidf: RDD[Vector] = idf.transform(tf)
    +{% endhighlight %}
    +</div>
    +</div>
    +
     ## Word2Vec 
     
    -Word2Vec computes distributed vector representation of words. The main advantage of the distributed
    +[Word2Vec](https://code.google.com/p/word2vec/) computes distributed vector representation of words.
    --- End diff --
    
    It is used in the original paper and the term "distributed" is from http://www.indiana.edu/~clcl/BEAGLE/Jones_Mewhort_PR.pdf . I have trouble understanding "distributed vector representation" as well. I think "distributed" means we map a single word to multiple values ....
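
    A side-by-side illustration of the two readings (the values and the
    100-dimensional size are made up): a one-hot encoding concentrates a
    word in a single index over the vocabulary `$V$`, while a distributed
    representation spreads it over many real-valued coordinates,

        \[
        w_{\text{spark}}^{\text{one-hot}} = (0, 0, \ldots, 1, \ldots, 0) \in \{0, 1\}^{|V|},
        \qquad
        w_{\text{spark}}^{\text{distributed}} = (0.21, -0.47, 0.05, \ldots) \in \mathbb{R}^{100}.
        \]

    So "distributed" here describes the representation itself, not the
    computation being spread across a cluster.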




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by atalwalkar <gi...@git.apache.org>.
Github user atalwalkar commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16501444
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     [...]
    +This approach saves the global term-to-index map, which is expensive for a large corpus,
    +but it suffers from hash collision, where different raw features may become the same term after hashing.
    +To reduce the chance of collision, we can increase the target feature dimension, i.e., 
    +the number of buckets of the hash table.
    --- End diff --
    
    Is there a default value that we use for number of hash buckets?  In VW, the default is 2^18 = 262K.




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by atalwalkar <gi...@git.apache.org>.
Github user atalwalkar commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16501062
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     * Table of contents
     {:toc}
     
    +
    +## TF-IDF
    +
    +[Term frequency-inverse document frequency (TF-IDF)](http://en.wikipedia.org/wiki/Tf%E2%80%93idf) is a feature 
    +vectorization method widely used in text mining to reflect the importance of a term to a document in the corpus.
    +Denote a term by `$t$`, a document by `$d$`, and the corpus by `$D$`.
    +Term frequency `$TF(t, d)$` is the number of times that term `$t$` appears in document `$d$`.
    +And document frequency `$DF(t, D)$` is the number of documents that contains term `$t$`.
    --- End diff --
    
    "...`$d$`. And..." -> "...`$d$`, while..."




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by mengxr <gi...@git.apache.org>.
Github user mengxr commented on the pull request:

    https://github.com/apache/spark/pull/2061#issuecomment-52865276
  
    I've merged this into master and branch-1.1. Thanks @atalwalkar for reviewing!




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the pull request:

    https://github.com/apache/spark/pull/2061#issuecomment-52819381
  
      [QA tests have started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18966/consoleFull) for PR 2061 at commit [`a5ea4b4`](https://github.com/apache/spark/commit/a5ea4b4ba5b09ab0bb87d52119fd5131fc473550).
     * This patch merges cleanly.




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by atalwalkar <gi...@git.apache.org>.
Github user atalwalkar commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16510399
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     [...]
    +
     ## Word2Vec 
     
    -Word2Vec computes distributed vector representation of words. The main advantage of the distributed
    +[Word2Vec](https://code.google.com/p/word2vec/) computes distributed vector representation of words.
    --- End diff --
    
    What does "distributed" mean in "distributed vector representation"?  Does it refer to the fact that the computation is distributed?  If so, could we say "...computes vector representation of words in a distributed fashion."




[GitHub] spark pull request: [SPARK-3143][MLLIB] add tf-idf user guide

Posted by atalwalkar <gi...@git.apache.org>.
Github user atalwalkar commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2061#discussion_r16513296
  
    --- Diff: docs/mllib-feature-extraction.md ---
    @@ -7,9 +7,87 @@ displayTitle: <a href="mllib-guide.html">MLlib</a> - Feature Extraction
     [...]
    +
     ## Word2Vec 
     
    -Word2Vec computes distributed vector representation of words. The main advantage of the distributed
    +[Word2Vec](https://code.google.com/p/word2vec/) computes distributed vector representation of words.
    --- End diff --
    
    yes, the TF-IDF stuff LGTM.

