Posted to user@mahout.apache.org by Robin Anil <ro...@gmail.com> on 2010/02/10 11:19:15 UTC

Mahout Usage and Beyond

Hi Mahouters
      I am trying to find out how you are using Mahout for your work or
project, and which of the algorithms in Mahout are most important to you
for that work. And finally, what do you expect to see in Mahout (a kind
of wish list)? It won't take much of your time; please reply with these
details. It will help a great deal in figuring out what we need to
prioritize.

Thanks
Robin

Re: Mahout Usage and Beyond

Posted by Robin Anil <ro...@gmail.com>.
The keyword extraction is as discussed in:

   -
   http://www.lucidimagination.com/search/document/d051123800ab6ce7/collocations_in_mahout#26634d6364c2c0d2
   -
   http://www.lucidimagination.com/search/document/b8d5bb0745eef6e8/n_grams_for_terms#f16fa54417697d8e

Re: Mahout Usage and Beyond

Posted by Grant Ingersoll <gs...@apache.org>.
It's been a while since I last did it, but it was pretty straightforward.  Just not sure on the M/R aspects of it yet (but, since it is essentially PageRank, I know it can be done).

On Feb 11, 2010, at 9:04 AM, Claudio Martella wrote:

> That sounds nice, I'll try to contribute too. Give me "a call" if you
> need some contribution :)
> 
> 
> Grant Ingersoll wrote:
>> On Feb 11, 2010, at 4:09 AM, Claudio Martella wrote:
>> 
>> 
>>> I don't know what kind of algorithm you're using; have you ever thought
>>> of TextRank? PageRank applied to automatic keyword extraction.
>>> 
>> 
>> I've done a sequential implementation of TextRank in the past and would like to do one for Mahout one of these days.
>> 
> 
> 
> -- 
> Claudio Martella
> Digital Technologies
> Unit Research & Development - Analyst
> 
> TIS innovation park
> Via Siemens 19 | Siemensstr. 19
> 39100 Bolzano | 39100 Bozen
> Tel. +39 0471 068 123
> Fax  +39 0471 068 129
> claudio.martella@tis.bz.it http://www.tis.bz.it
> 
> Short information regarding use of personal data. According to Section 13 of Italian Legislative Decree no. 196 of 30 June 2003, we inform you that we process your personal data in order to fulfil contractual and fiscal obligations and also to send you information regarding our services and events. Your personal data are processed with and without electronic means and by respecting data subjects' rights, fundamental freedoms and dignity, particularly with regard to confidentiality, personal identity and the right to personal data protection. At any time and without formalities you can write an e-mail to privacy@tis.bz.it in order to object the processing of your personal data for the purpose of sending advertising materials and also to exercise the right to access personal data and other rights referred to in Section 7 of Decree 196/2003. The data controller is TIS Techno Innovation Alto Adige, Siemens Street n. 19, Bolzano. You can find the complete information on the web site www.tis.bz.it.
> 
> 

--------------------------
Grant Ingersoll
http://www.lucidimagination.com/

Search the Lucene ecosystem using Solr/Lucene: http://www.lucidimagination.com/search


Re: Mahout Usage and Beyond

Posted by Claudio Martella <cl...@tis.bz.it>.
That sounds nice, I'll try to contribute too. Give me "a call" if you
need some contribution :)


Grant Ingersoll wrote:
> On Feb 11, 2010, at 4:09 AM, Claudio Martella wrote:
>
>   
>> I don't know what kind of algorithm you're using; have you ever thought
>> of TextRank? PageRank applied to automatic keyword extraction.
>>     
>
> I've done a sequential implementation of TextRank in the past and would like to do one for Mahout one of these days.
>   


-- 
Claudio Martella
Digital Technologies
Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax  +39 0471 068 129
claudio.martella@tis.bz.it http://www.tis.bz.it




Re: Mahout Usage and Beyond

Posted by Grant Ingersoll <gs...@apache.org>.
On Feb 11, 2010, at 4:09 AM, Claudio Martella wrote:

> I don't know what kind of algorithm you're using; have you ever thought
> of TextRank? PageRank applied to automatic keyword extraction.

I've done a sequential implementation of TextRank in the past and would like to do one for Mahout one of these days.
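For readers who haven't seen it, here is a minimal sequential sketch of the idea in Python (my own toy illustration, not Grant's implementation and not Mahout code): build a word co-occurrence graph over a token stream, run plain PageRank on it, and take the top-scoring words as keyword candidates.

```python
from collections import defaultdict

def textrank_keywords(words, window=2, damping=0.85, iterations=50, top_n=5):
    """Score candidate keywords by running PageRank over a word
    co-occurrence graph (edges link words within `window` of each other)."""
    # Build an undirected co-occurrence graph.
    neighbors = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[j] != w:
                neighbors[w].add(words[j])
                neighbors[words[j]].add(w)
    # Standard PageRank power iteration with uniform initial scores.
    scores = {w: 1.0 for w in neighbors}
    for _ in range(iterations):
        new_scores = {}
        for w in neighbors:
            rank = sum(scores[v] / len(neighbors[v]) for v in neighbors[w])
            new_scores[w] = (1 - damping) + damping * rank
        scores = new_scores
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

TextRank as published also filters candidates by part of speech and collapses adjacent high-scoring words into phrases; this sketch skips both steps.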

Re: Mahout Usage and Beyond

Posted by Grant Ingersoll <gs...@apache.org>.
On Feb 15, 2010, at 9:30 AM, Isabel Drost wrote:

> On Fri Ted Dunning <te...@gmail.com> wrote:
>> So I think that we are actually at about 7 or 8 / 10 with several
>> interesting additions.
>> 
>> More than the original 10, we need realistic and simple examples.
> 
> And probably some real numbers on performance: How many examples/
> dimensions can we deal with? How well do the current implementations
> scale with increasing data volume/ increasing number of machines
> available?
> 
> I know that benchmarks should always be taken with a grain of salt,
> however it would still be nice to be able to quantify improvements in
> performance from release to release (based on whatever we define as
> desirable improvements).

The Open Relevance Project is starting to get some traction with some useful datasets and tools that are all completely open.  We can start leveraging these soon, too.

-Grant

Re: Mahout Usage and Beyond

Posted by Isabel Drost <is...@apache.org>.
On Fri Ted Dunning <te...@gmail.com> wrote:
> So I think that we are actually at about 7 or 8 / 10 with several
> interesting additions.
> 
> More than the original 10, we need realistic and simple examples.

And probably some real numbers on performance: How many examples/
dimensions can we deal with? How well do the current implementations
scale with increasing data volume/ increasing number of machines
available?

I know that benchmarks should always be taken with a grain of salt,
however it would still be nice to be able to quantify improvements in
performance from release to release (based on whatever we define as
desirable improvements).

Isabel

Re: Mahout Usage and Beyond

Posted by Ted Dunning <te...@gmail.com>.
On Fri, Feb 12, 2010 at 4:27 AM, Robin Anil <ro...@gmail.com> wrote:

>   1. Locally Weighted Linear Regression
>

Not sure how important this one is.


>    2. Naive Bayes(We have this and CBayes as a bonus)
>   3. Gaussian Discriminative Analysis (GDA)
>

DP clustering does this, effectively, I think.


>    4. Logistic Regression (LR) (In development)
>

SGD.  In dev as you say.


>    5. k-means(we have this and kmeans++ is in development)
>   6. Neural Network (NN)
>

SGD could implement this if we like.  Not sure that we need M/R to get speed
here.


>    7. Principal Components Analysis (PCA)
>

= SVD and Jake's contribution.


>    8. Independent Component Analysis (ICA)
>   9. Expectation Maximization (EM) (we have it in a Pig script and in a
>   couple of algorithms; not generic yet)
>

DP clustering is a version of this for some applications.


>    10. Support Vector Machine (SVM) (in development - the Pegasos version)
>


So I think that we are actually at about 7 or 8 / 10 with several
interesting additions.

More than the original 10, we need realistic and simple examples.

-- 
Ted Dunning, CTO
DeepDyve

Re: Mahout Usage and Beyond

Posted by Grant Ingersoll <gs...@apache.org>.
On Feb 12, 2010, at 7:27 AM, Robin Anil wrote:

>   1. Locally Weighted Linear Regression
>   2. Naive Bayes(We have this and CBayes as a bonus)
>   3. Gaussian Discriminative Analysis (GDA)
>   4. Logistic Regression (LR) (In development)
>   5. k-means(we have this and kmeans++ is in development)
>   6. Neural Network (NN)
>   7. Principal Components Analysis (PCA)
>   8. Independent Component Analysis (ICA)
>   9. Expectation Maximization (EM) (we have it in a Pig script and in a
>   couple of algorithms; not generic yet)
>   10. Support Vector Machine (SVM) (in development - the Pegasos version)
> 
> Apart from this we have Recommenders, Couple of clustering algorithms,
> PFPGrowth, LDA, SGD, DF
> 
> This is where we are right now 4.5/10 + bonuses. I guess at 0.2 release we
> are alright, but at 2.5 years, we need more hands :)

Right, we have done other things, and Mahout really has reached critical mass in a lot of ways and has good momentum.  It never was a requirement that we do all of these algorithms, but it would still be good to have them.

We also have Winnow/Perceptron in non-M/R form, I believe.

-Grant

Re: Mahout Usage and Beyond

Posted by Robin Anil <ro...@gmail.com>.
   1. Locally Weighted Linear Regression
   2. Naive Bayes (we have this, and CBayes as a bonus)
   3. Gaussian Discriminative Analysis (GDA)
   4. Logistic Regression (LR) (in development)
   5. k-means (we have this, and k-means++ is in development)
   6. Neural Network (NN)
   7. Principal Components Analysis (PCA)
   8. Independent Component Analysis (ICA)
   9. Expectation Maximization (EM) (we have it in a Pig script and in a
   couple of algorithms; not generic yet)
   10. Support Vector Machine (SVM) (in development - the Pegasos version)

Apart from these we have recommenders, a couple of clustering algorithms,
PFPGrowth, LDA, SGD, and DF.

This is where we are right now: 4.5/10, plus bonuses. I guess at the 0.2
release we are all right, but 2.5 years in, we need more hands :)

Robin

Re: Mahout Usage and Beyond

Posted by Grant Ingersoll <gs...@apache.org>.
I'd love to see us finish the 10 algorithms in the Ng paper that we started with ;-)

-Grant
On Feb 12, 2010, at 3:52 AM, Robin Anil wrote:

> Any more feedback on the original topic, i.e. "your" use of Mahout and
> your wish list?
> 
> Robin


Re: Mahout Usage and Beyond

Posted by Grant Ingersoll <gs...@apache.org>.
I'd also add that the other thing that would be great is Tika integration for the DocumentVectorizer (which is seriously cool already!).  Then, if I had a huge number of Word/HTML/PDF files on HDFS, I could run the DocumentVectorizer and the output would be Mahout vectors.
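To illustrate only the shape of the pipeline being wished for (a crude stand-in sits where Tika would go; the function names below are hypothetical illustrations, and the DocumentVectorizer's real API is not shown): extract plain text per document, then emit term-frequency vectors.

```python
import re
from collections import Counter

def extract_text(raw_bytes, content_type):
    """Stand-in for a Tika call: a real implementation would dispatch to a
    parser for Word/HTML/PDF. Here we only handle plain text and HTML."""
    text = raw_bytes.decode("utf-8", errors="ignore")
    if content_type == "text/html":
        text = re.sub(r"<[^>]+>", " ", text)  # crude tag stripping
    return text

def vectorize(docs):
    """Map {doc_id: (bytes, content_type)} to term-frequency vectors."""
    vectors = {}
    for doc_id, (raw, ctype) in docs.items():
        tokens = re.findall(r"[a-z0-9]+", extract_text(raw, ctype).lower())
        vectors[doc_id] = Counter(tokens)
    return vectors
```

The real integration would replace `extract_text` with Tika's parsers and write Mahout vector files rather than in-memory Counters.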

On Feb 12, 2010, at 3:52 AM, Robin Anil wrote:

> Any more feedback on the original topic, i.e. "your" use of Mahout and
> your wish list?
> 
> Robin



Re: Mahout Usage and Beyond

Posted by Robin Anil <ro...@gmail.com>.
Any more feedback on the original topic, i.e. "your" use of Mahout and
your wish list?

Robin

Re: Mahout Usage and Beyond

Posted by Ted Dunning <te...@gmail.com>.
You should also group together queries from the same user in a single
session.  That, together with your clicks, can give you valuable data.

On Thu, Feb 11, 2010 at 1:46 AM, Andrew Wang <an...@gmail.com>wrote:

> Following related research papers, we assume that different queries whose
> clicked result documents are the same are similar. So each query and the
> result documents clicked by users can be represented as a vector, or
> together as a matrix:
> Query1: DocID1, DocID2, DocIDX,...
> Query2: DocIDX, DocIDY, DocIDZ,...
> QueryX: ..........
>
>



-- 
Ted Dunning, CTO
DeepDyve

Re: Mahout Usage and Beyond

Posted by "Ankur C. Goel" <ga...@yahoo-inc.com>.
The data you mention could potentially be of very high dimension.  It seems you are looking for near neighbours of a query.
Locality-Sensitive Hashing (LSH) is known to answer such queries efficiently in high-dimensional spaces.  Most search engines use it in one form or another to de-duplicate crawled web pages.
You can check out the following links for references

http://www2007.org/papers/paper570.pdf
http://www1.cs.columbia.edu/~radev/set/DupeDetection.pdf

A while back I implemented this in map-reduce, and more recently as a Pig script.

The good part about the Pig script is that I could implement it with just a couple of UDFs and immediately validate it on a small toy dataset without Hadoop.
The Pig script itself is also quite small (just 43 lines) and hence more readable.
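As a toy in-memory illustration of the LSH idea for set-valued data (MinHash with the usual banding trick; an assumed sketch, not Ankur's map-reduce or Pig implementation): highly similar sets agree on many per-hash minima, so banding their signatures makes them collide in at least one bucket with high probability.

```python
import random

def minhash_signature(item_set, hash_seeds, prime=2_147_483_647):
    """MinHash signature: for each seeded hash function, keep the minimum
    hash value over the set's elements."""
    return [min((a * hash(x) + b) % prime for x in item_set)
            for a, b in hash_seeds]

def lsh_buckets(named_sets, num_hashes=20, bands=5, seed=42):
    """Band the signatures; names colliding in any band's bucket become
    candidate near-duplicate pairs."""
    rng = random.Random(seed)
    seeds = [(rng.randrange(1, 1 << 31), rng.randrange(0, 1 << 31))
             for _ in range(num_hashes)]
    rows = num_hashes // bands
    buckets = {}
    for name, s in named_sets.items():
        sig = minhash_signature(s, seeds)
        for b in range(bands):
            key = (b, tuple(sig[b * rows:(b + 1) * rows]))
            buckets.setdefault(key, []).append(name)
    # Keep only buckets that actually produced candidate pairs.
    return {k: v for k, v in buckets.items() if len(v) > 1}
```

The map-reduce version follows the same shape: map each item to its (band, band-signature) keys, then reduce by key to emit candidate pairs.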

-@nkur

On 2/11/10 3:16 PM, "Andrew Wang" <an...@gmail.com> wrote:

Hi Claudio,

I am working on query clustering. As you know, in a search engine people
usually enter queries of only a few words. Finding similar queries is my
job, and from those similar queries I'd like to find synonym words.

Finding similar queries from the query content alone is impossible,
because each query has only a few words (two or three).

Following related research papers, we assume that different queries
whose clicked result documents are the same are similar. So each query
and the result documents clicked by users can be represented as a
vector, or together as a matrix:
Query1: DocID1, DocID2, DocIDX,...
Query2: DocIDX, DocIDY, DocIDZ,...
QueryX: ..........

QueryX is the query content entered by users; DocID is a document
clicked by a user who issued the query QueryX.

Through Hadoop I build this matrix and would like to cluster the similar
queries. I reviewed the source code of KMeans in Mahout; I will give it
a try.

Claudio, I don't know much about TextRank. Searching Google, I found
"TextRank: Bringing Order into Texts"; it sounds useful for text
processing and should be helpful for me. Thanks!



Re: Mahout Usage and Beyond

Posted by Claudio Martella <cl...@tis.bz.it>.
Hi Andrew,

I understand your problem: you also want to cluster on a behavioral
basis (clicks). The paper I told you about (the one you quoted
correctly) is more about how to extract keywords from content, in a
manner that is more "precise" than plain term frequency. That's good if
you want to build a vector of a document's content. It was a suggestion
for the Mahout project too, as another possible way of extracting
vectors from documents. I'm new to Mahout; I'll try to contribute an
implementation soon.


Andrew Wang wrote:
> Hi Claudio,
>
> I am working on query clustering. As you know, in a search engine people
> usually enter queries of only a few words. Finding similar queries is my
> job, and from those similar queries I'd like to find synonym words.
>
> Finding similar queries from the query content alone is impossible,
> because each query has only a few words (two or three).
>
> Following related research papers, we assume that different queries
> whose clicked result documents are the same are similar. So each query
> and the result documents clicked by users can be represented as a
> vector, or together as a matrix:
> Query1: DocID1, DocID2, DocIDX,...
> Query2: DocIDX, DocIDY, DocIDZ,...
> QueryX: ..........
>
> QueryX is the query content entered by users; DocID is a document
> clicked by a user who issued the query QueryX.
>
> Through Hadoop I build this matrix and would like to cluster the similar
> queries. I reviewed the source code of KMeans in Mahout; I will give it
> a try.
>
> Claudio, I don't know much about TextRank. Searching Google, I found
> "TextRank: Bringing Order into Texts"; it sounds useful for text
> processing and should be helpful for me. Thanks!


-- 
Claudio Martella
Digital Technologies
Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax  +39 0471 068 129
claudio.martella@tis.bz.it http://www.tis.bz.it




Re: Mahout Usage and Beyond

Posted by Andrew Wang <an...@gmail.com>.
Hi Claudio,

I am working on query clustering. As you know, in a search engine people
usually enter queries of only a few words. Finding similar queries is my
job, and from those similar queries I'd like to find synonym words.

Finding similar queries from the query content alone is impossible,
because each query has only a few words (two or three).

Following related research papers, we assume that different queries
whose clicked result documents are the same are similar. So each query
and the result documents clicked by users can be represented as a
vector, or together as a matrix:
Query1: DocID1, DocID2, DocIDX,...
Query2: DocIDX, DocIDY, DocIDZ,...
QueryX: ..........

QueryX is the query content entered by users; DocID is a document
clicked by a user who issued the query QueryX.

Through Hadoop I build this matrix and would like to cluster the similar
queries. I reviewed the source code of KMeans in Mahout; I will give it
a try.

Claudio, I don't know much about TextRank. Searching Google, I found
"TextRank: Bringing Order into Texts"; it sounds useful for text
processing and should be helpful for me. Thanks!
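As a toy illustration of this construction (plain Python with assumed helper names; a production job would build the matrix with Hadoop and cluster it with Mahout's KMeans): represent each query by the set of documents clicked for it, then group queries whose click vectors are cosine-similar. Treating clicks as binary sets keeps the sketch short; real click counts would call for weighted vectors.

```python
import math
from collections import defaultdict

def click_vectors(click_log):
    """Turn (query, doc_id) click pairs into binary doc-click vectors,
    represented as sets of clicked doc IDs per query."""
    vecs = defaultdict(set)
    for query, doc in click_log:
        vecs[query].add(doc)
    return dict(vecs)

def cosine(a, b):
    """Cosine similarity between two binary vectors stored as sets."""
    return len(a & b) / math.sqrt(len(a) * len(b)) if a and b else 0.0

def group_similar(vecs, threshold=0.5):
    """Greedy grouping: put each query into the first existing group whose
    representative is cosine-similar above `threshold`."""
    groups = []
    for q, docs in vecs.items():
        for g in groups:
            if cosine(docs, vecs[g[0]]) >= threshold:
                g.append(q)
                break
        else:
            groups.append([q])
    return groups
```

The greedy pass stands in for a real clustering step; with the same vectors you could instead feed Mahout's k-means and pick k by inspection.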

On Thu, Feb 11, 2010 at 5:09 PM, Claudio Martella <
claudio.martella@tis.bz.it> wrote:

> I don't know what kind of algorithm you're using; have you ever thought
> of TextRank? PageRank applied to automatic keyword extraction.


-- 
http://anqiang1900.blog.163.com/

Re: Mahout Usage and Beyond

Posted by Claudio Martella <cl...@tis.bz.it>.
I don't know what kind of algorithm you're using; have you ever thought
of TextRank? PageRank applied to automatic keyword extraction.



Andrew Wang wrote:
> OK, I will give it a try.
>
> PS: The solution for vectorizing documents sounds cool; I look forward to
> it!
>
> On Thu, Feb 11, 2010 at 4:31 PM, Robin Anil <ro...@gmail.com> wrote:
>
>   
>> Thanks for replying. The clustering algorithms do work with 0.19, and in
>> the coming release we are including a Hadoop-based solution for
>> vectorizing documents. Hope you will like it.
>>
>> Robin
>>
>>
>> On Thu, Feb 11, 2010 at 1:46 PM, Andrew Wang
>> <andrew.wang.1900@gmail.com> wrote:
>>> Hi, Robin
>>>
>>> In my work, I have a lot of query logs produced by a search engine, and
>>> we use Hadoop as our tool to analyse that data. Sometimes I'd like to do
>>> data mining jobs such as clustering similar queries, or classifying
>>> them. At first I thought Mahout might be another option for doing data
>>> mining (as you know, Weka is my favorite data mining tool). But as I
>>> tried to integrate Mahout into my project, I found two major obstacles
>>> that keep me from moving further:
>>>
>>> First, in my company, Hadoop 0.19 is the platform provided for our daily
>>> jobs. As far as I know, Mahout depends on Hadoop 0.20 or above. This
>>> prevents me from benefiting from the functions Mahout provides.
>>>
>>> Secondly, the input data apparently has to be indexed by Lucene first
>>> (right or wrong?) before it can be imported into Mahout. This confuses
>>> me very much, because so much of our data is stored in HDFS. In order to
>>> use Mahout, I would have to check out all the data first, index it with
>>> Lucene, and so on. That is hard to accept.
>>>
>>> So I haven't used Mahout in my daily work. However, I always keep an eye
>>> on Mahout; maybe someday I will benefit from it.
>>>
>>> What do others think?
>>>
>>> On Wed, Feb 10, 2010 at 6:19 PM, Robin Anil <ro...@gmail.com> wrote:
>>>
>>>> Hi Mahouters
>>>>      I am trying to find out how you are using Mahout for your work or
>>>> project, or which among the algorithms in Mahout are more important for
>>>> you to do that work. And finally, what do you expect to see in Mahout
>>>> (a kind of a wish list)? It won't take much of your time. Please reply
>>>> with these details. It will help a great deal in figuring out what we
>>>> need to prioritize.
>>>>
>>>> Thanks
>>>> Robin
>>>>
>>>>         
>>>
>>> --
>>> http://anqiang1900.blog.163.com/
>>>
>>>       
>
>
>
>   


-- 
Claudio Martella
Digital Technologies
Unit Research & Development - Analyst

TIS innovation park
Via Siemens 19 | Siemensstr. 19
39100 Bolzano | 39100 Bozen
Tel. +39 0471 068 123
Fax  +39 0471 068 129
claudio.martella@tis.bz.it http://www.tis.bz.it




Re: Mahout Usage and Beyond

Posted by Andrew Wang <an...@gmail.com>.
OK, I will give it a try.

PS: The solution for vectorizing documents sounds cool; I look forward to it!

On Thu, Feb 11, 2010 at 4:31 PM, Robin Anil <ro...@gmail.com> wrote:

> Thanks for replying. The clustering algorithms do work with 0.19, and in
> the coming release we are including a Hadoop-based solution for
> vectorizing documents. Hope you will like it.
>
> Robin
>
>
> On Thu, Feb 11, 2010 at 1:46 PM, Andrew Wang
> <andrew.wang.1900@gmail.com> wrote:
>
> > Hi, Robin
> >
> > In my work, I have a lot of query logs produced by a search engine, and
> > we use Hadoop as our tool to analyse that data. Sometimes I'd like to
> > do data mining jobs such as clustering similar queries, or classifying
> > them. At first I thought Mahout might be another option for doing data
> > mining (as you know, Weka is my favorite data mining tool). But as I
> > tried to integrate Mahout into my project, I found two major obstacles
> > that keep me from moving further:
> >
> > First, in my company, Hadoop 0.19 is the platform provided for our
> > daily jobs. As far as I know, Mahout depends on Hadoop 0.20 or above.
> > This prevents me from benefiting from the functions Mahout provides.
> >
> > Secondly, the input data apparently has to be indexed by Lucene first
> > (right or wrong?) before it can be imported into Mahout. This confuses
> > me very much, because so much of our data is stored in HDFS. In order
> > to use Mahout, I would have to check out all the data first, index it
> > with Lucene, and so on. That is hard to accept.
> >
> > So I haven't used Mahout in my daily work. However, I always keep an
> > eye on Mahout; maybe someday I will benefit from it.
> >
> > What do others think?
> >
> > On Wed, Feb 10, 2010 at 6:19 PM, Robin Anil <ro...@gmail.com> wrote:
> >
> > > Hi Mahouters
> > >      I am trying to find out how you are using Mahout for your work
> > > or project, or which among the algorithms in Mahout are more
> > > important for you to do that work. And finally, what do you expect to
> > > see in Mahout (a kind of a wish list)? It won't take much of your
> > > time. Please reply with these details. It will help a great deal in
> > > figuring out what we need to prioritize.
> > >
> > > Thanks
> > > Robin
> > >
> >
> >
> >
> > --
> > http://anqiang1900.blog.163.com/
> >
>



-- 
http://anqiang1900.blog.163.com/

Re: Mahout Usage and Beyond

Posted by Robin Anil <ro...@gmail.com>.
Thanks for replying. The clustering algorithms do work with 0.19, and in the
coming release we are including a Hadoop-based solution for vectorizing
documents. Hope you will like it.

Robin
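
As a rough illustration of what "vectorizing documents" means here: turning
raw text into weighted term vectors that clustering algorithms can consume,
with no Lucene index required. The sketch below is plain Python and purely
illustrative; it is not Mahout's implementation, and the TF-IDF weighting
shown is just one common choice.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Turn raw documents into sparse TF-IDF vectors (term -> weight)."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        # Weight = term frequency * inverse document frequency.
        vectors.append({t: (c / len(toks)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Robin's Hadoop-based vectorizer presumably does the equivalent at scale; the
point here is only that a Lucene index is one possible input path, not a
prerequisite for the vector math itself.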


On Thu, Feb 11, 2010 at 1:46 PM, Andrew Wang <an...@gmail.com> wrote:

> Hi, Robin
>
> In my work, I have a lot of query logs produced by a search engine, and
> we use Hadoop as our tool to analyse that data. Sometimes I'd like to do
> data mining jobs such as clustering similar queries, or classifying them.
> At first I thought Mahout might be another option for doing data mining
> (as you know, Weka is my favorite data mining tool). But as I tried to
> integrate Mahout into my project, I found two major obstacles that keep
> me from moving further:
>
> First, in my company, Hadoop 0.19 is the platform provided for our daily
> jobs. As far as I know, Mahout depends on Hadoop 0.20 or above. This
> prevents me from benefiting from the functions Mahout provides.
>
> Secondly, the input data apparently has to be indexed by Lucene first
> (right or wrong?) before it can be imported into Mahout. This confuses me
> very much, because so much of our data is stored in HDFS. In order to use
> Mahout, I would have to check out all the data first, index it with
> Lucene, and so on. That is hard to accept.
>
> So I haven't used Mahout in my daily work. However, I always keep an eye
> on Mahout; maybe someday I will benefit from it.
>
> What do others think?
>
> On Wed, Feb 10, 2010 at 6:19 PM, Robin Anil <ro...@gmail.com> wrote:
>
> > Hi Mahouters
> >      I am trying to find out how you are using Mahout for your work or
> > project, or which among the algorithms in Mahout are more important for
> > you to do that work. And finally, what do you expect to see in Mahout
> > (a kind of a wish list)? It won't take much of your time. Please reply
> > with these details. It will help a great deal in figuring out what we
> > need to prioritize.
> >
> > Thanks
> > Robin
> >
>
>
>
> --
> http://anqiang1900.blog.163.com/
>

Re: Mahout Usage and Beyond

Posted by Andrew Wang <an...@gmail.com>.
Hi, Robin

In my work, I have a lot of query logs produced by a search engine, and we
use Hadoop as our tool to analyse that data. Sometimes I'd like to do data
mining jobs such as clustering similar queries, or classifying them. At
first I thought Mahout might be another option for doing data mining (as
you know, Weka is my favorite data mining tool). But as I tried to
integrate Mahout into my project, I found two major obstacles that keep me
from moving further:

First, in my company, Hadoop 0.19 is the platform provided for our daily
jobs. As far as I know, Mahout depends on Hadoop 0.20 or above. This
prevents me from benefiting from the functions Mahout provides.

Secondly, the input data apparently has to be indexed by Lucene first
(right or wrong?) before it can be imported into Mahout. This confuses me
very much, because so much of our data is stored in HDFS. In order to use
Mahout, I would have to check out all the data first, index it with Lucene,
and so on. That is hard to accept.

So I haven't used Mahout in my daily work. However, I always keep an eye on
Mahout; maybe someday I will benefit from it.

What do others think?

On Wed, Feb 10, 2010 at 6:19 PM, Robin Anil <ro...@gmail.com> wrote:

> Hi Mahouters
>      I am trying to find out how you are using Mahout for your work or
> project, or which among the algorithms in Mahout are more important for you
> to do that work. And finally, what do you expect to see in Mahout (a kind
> of a wish list)? It won't take much of your time. Please reply with these
> details. It will help a great deal in figuring out what we need to
> prioritize.
>
> Thanks
> Robin
>



-- 
http://anqiang1900.blog.163.com/