Posted to user@mahout.apache.org by "Bai, Gang" <de...@baigang.net> on 2011/03/22 14:24:35 UTC

About GBDT support

Hi all,

Is there any plan to implement Gradient Boosted Decision Trees in Mahout? I
think it is highly anticipated because of its extensive application in web data
mining.

Best regards,
-BaiGang

Re: About GBDT support

Posted by "Bai, Gang" <de...@baigang.net>.
On Fri, Mar 25, 2011 at 11:00 AM, Bai, Gang <de...@baigang.net> wrote:

>
> As the nature of boosting method, GBDT is hard to parallelize (in a whole
> ensemble manner). So it typically requires multiple stages. Also, efficiency
> is an urgent issue.
>
Correction: the last sentence above should read "Also, efficiency is not an
urgent issue."
An awkward typo.

Re: About GBDT support

Posted by Jerry Ye <je...@yahoo-inc.com>.
Hi Peter,
The MPI launcher I wrote essentially tricks Hadoop into thinking it is running mappers that take a really long time, while actually launching MPI jobs.  The main advantage is that I get to use existing Hadoop clusters to launch my MPI jobs.  The only way this would fit in with the Mahout project is to have Mahout launch these types of jobs and essentially act as a workflow manager.  Also, keep in mind that the MPI code would be in C/C++ as well.  Using RPC and implementing some of MPI's functionality in Mahout might be a better alternative.

My understanding is that there will be an MPI application manager in Hadoop YARN/NextGen.  So you don't have to wait too long if all you want to do is launch MPI jobs on an existing Hadoop cluster.
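The launcher trick described above can be sketched with plain ProcessBuilder. This is an illustrative stand-in, not Jerry's actual code: the real launcher would run inside a long-lived Hadoop Mapper, and the command would be an mpirun invocation rather than the placeholder used here.

```java
import java.io.IOException;
import java.util.List;

// Sketch of the "long-running mapper that really launches an external job"
// idea. In the actual launcher this logic would live inside a Hadoop Mapper
// and the command would be something like "mpirun -np 16 ./gbdt"; here it is
// a plain process launch so the shape of the trick is visible. All names are
// illustrative.
public class MpiLauncherSketch {

    // Launch the external command and block until it exits, exactly as the
    // fake "mapper" would: Hadoop sees a task that runs for a long time,
    // while the real work happens in the child process.
    public static int launchAndWait(List<String> command) {
        try {
            Process p = new ProcessBuilder(command)
                    .inheritIO()   // forward child output to the task's logs
                    .start();
            return p.waitFor();    // the "mapper" stays alive until MPI exits
        } catch (IOException | InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Stand-in for: launchAndWait(List.of("mpirun", "-np", "16", "./gbdt"))
        int exit = launchAndWait(List.of("sh", "-c", "exit 0"));
        System.out.println("exit=" + exit);
    }
}
```

The exit code of the child becomes the success/failure signal for the wrapping task, which is how Hadoop's retry machinery can still apply.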

- jerry



Re: About GBDT support

Posted by Peter Higdon <pe...@gmail.com>.
Hi Jerry,

Does this mean that your modification of OpenMPI to run on Hadoop is not
suitable for inclusion in the Mahout project?

It may not be HPC, but it would offer versatile algorithm options while staying
in a familiar, de facto standard environment.

Thanks!

Peter


Re: About GBDT support

Posted by "Bai, Gang" <de...@baigang.net>.
Hi Jerry,

Thanks for clarifying. Your notes on the difficulty of a fast and scalable
implementation on Hadoop, and on the recent academic work, are also very
helpful.

Your paper is indeed brilliant work. A personal reason why I hope for an
implementation of Google's method is... that I also work at Yahoo (I just
started last month as a campus new hire).

Thanks.
-BaiGang


Re: About GBDT support

Posted by Jake Mannix <ja...@gmail.com>.
On Thu, Sep 15, 2011 at 11:16 AM, Jerry Ye <je...@yahoo-inc.com> wrote:

> Hi Jake,
> This should work as well since the main pain points that I had with
> MapReduce was the lack of persistent memory between iterations and speedy
> communication.  Along the same lines, there is also a MPI application master
> in the works for Hadoop YARN/NextGen.
>

Well, Giraph "solves" this by running all iterations inside the same single
map-only task (no Reduce) and storing intermediate state between supersteps
in HDFS. If the task dies, then when it is retried it can pick up at the
superstep where it left off, reloading all current state back
into memory.
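The recovery scheme just described can be sketched with an in-memory map standing in for HDFS and a trivial doubling step standing in for one superstep's computation. This is a hypothetical toy, not Giraph's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of superstep checkpointing: after each superstep the worker
// writes its state to "HDFS" (a Map here), so a retried task can resume at
// the superstep it left off instead of starting over. Names are illustrative.
public class SuperstepCheckpointSketch {
    // Stand-in for HDFS: superstep number -> saved state.
    static final Map<Integer, Integer> hdfs = new HashMap<>();

    // Run supersteps [start..total), doubling the state each step
    // (a stand-in for one BSP iteration), checkpointing after each.
    static int run(int start, int total, int state) {
        for (int s = start; s < total; s++) {
            state = state * 2;        // "compute" phase of the superstep
            hdfs.put(s, state);       // checkpoint before the next superstep
        }
        return state;
    }

    public static void main(String[] args) {
        // First attempt dies after superstep 2 (we simply stop calling run).
        run(0, 2, 1);                 // checkpoints for supersteps 0 and 1
        // Retry: reload the last checkpoint and continue from superstep 2.
        int resumed = run(2, 5, hdfs.get(1));
        // Same result as an uninterrupted 5-superstep run from state 1.
        System.out.println(resumed == 32);
    }
}
```

The point is that the retried task re-reads only the last checkpoint, not the raw input, so recovery cost is one superstep rather than the whole job.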


> However, integrating either Giraph or MPI into Mahout would require some
> thought.
>

Integrating Giraph into Mahout doesn't require any MPI / C++ code; it's just
a single jar launchable via "hadoop jar classname [args]", so it actually
doesn't seem infeasible at all to me.  I don't know about your MPI work
- that may need to wait until YARN, yes.

  -jake



Re: About GBDT support

Posted by Jerry Ye <je...@yahoo-inc.com>.
Hi Jake,
This should work as well, since the main pain points I had with MapReduce were the lack of persistent memory between iterations and of speedy communication.  Along the same lines, there is also an MPI application master in the works for Hadoop YARN/NextGen.

However, integrating either Giraph or MPI into Mahout would require some thought.

- jerry





Re: About GBDT support

Posted by Jake Mannix <ja...@gmail.com>.
This is an old thread to resurrect from the dead, but I wonder if training
GBDT on a Pregel-like architecture (for example: Apache Giraph
<http://incubator.apache.org/giraph>, which loads the data set into memory
in long-lived Hadoop Mappers, then communicates via Hadoop RPC between
nodes while doing BSP iterations) would allow for "speedy communication of
optimal split points"?  The advantage of Giraph is that it runs on vanilla
Hadoop: no new schedulers or job trackers.

  -jake
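What "communicating optimal split points" amounts to can be made concrete with a sketch: each worker reduces its horizontal data shard to small per-threshold sufficient statistics, and only those statistics, never the samples, would need to cross the network before the coordinator scores every candidate split. This is a single-process, single-feature, squared-error illustration under assumed names, not code from either paper:

```java
// Sketch of PLANET-style split finding: each worker summarizes its shard as
// (count, label sum) for the points falling left of each candidate
// threshold; the coordinator merges these tiny arrays and picks the split
// that maximizes variance reduction. All names are illustrative.
public class SplitAggregationSketch {

    // Per-shard statistics: for each candidate threshold t, the count and
    // label sum of shard points with x <= thresholds[t].
    static double[][] localStats(double[] x, double[] y, double[] thresholds) {
        double[][] stats = new double[thresholds.length][2];
        for (int t = 0; t < thresholds.length; t++)
            for (int i = 0; i < x.length; i++)
                if (x[i] <= thresholds[t]) {
                    stats[t][0] += 1;
                    stats[t][1] += y[i];
                }
        return stats;
    }

    // Coordinator: merge shard statistics and pick the threshold maximizing
    // the between-group sum of squares sL^2/nL + sR^2/nR, which for fixed
    // totals is equivalent to maximizing variance reduction.
    static double bestThreshold(double[][][] shardStats, double[] thresholds,
                                double totalCount, double totalSum) {
        double best = Double.NEGATIVE_INFINITY;
        double bestT = thresholds[0];
        for (int t = 0; t < thresholds.length; t++) {
            double nL = 0, sL = 0;
            for (double[][] s : shardStats) { nL += s[t][0]; sL += s[t][1]; }
            double nR = totalCount - nL, sR = totalSum - sL;
            if (nL == 0 || nR == 0) continue;       // degenerate split
            double score = sL * sL / nL + sR * sR / nR;
            if (score > best) { best = score; bestT = thresholds[t]; }
        }
        return bestT;
    }

    public static void main(String[] args) {
        // Two shards of a toy data set with an obvious split at x = 2.5.
        double[] th = {1.5, 2.5, 3.5};
        double[][] s1 = localStats(new double[]{1, 3}, new double[]{0, 10}, th);
        double[][] s2 = localStats(new double[]{2, 4}, new double[]{0, 10}, th);
        System.out.println(bestThreshold(new double[][][]{s1, s2}, th, 4, 20));
    }
}
```

The statistics arrays are a few numbers per threshold, which is why exchanging them over RPC or MPI per tree level is cheap, while doing the same exchange through HDFS files per level is the bottleneck being discussed.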


Re: About GBDT support

Posted by Jerry Ye <je...@yahoo-inc.com>.
One of the authors of the Yahoo paper here. One of the main points of our paper was that an algorithm like GBDT needs a speedy way of communicating optimal split points and subsequently the partitioning of the samples.  As of right now, communication in MapReduce is essentially done by reading/writing files on HDFS.  This was exactly the reason why our pure MapReduce implementation of the algorithm was so slow.  In order to reduce communication costs, we looked into using MPI and wrote a launcher for it on top of Hadoop (mostly to utilize existing clusters).

Google's approach was quite different and trained approximate models while ours produced exactly the same model as the single machine implementation.  Like ours, their approach also deviated greatly from what a standard MapReduce job is and had to write what essentially was another job scheduler.

Recently, there was a NIPS workshop on distributed computing (http://lccc.eecs.berkeley.edu/) with 2 other variants of GBDT from Microsoft and Washington University.  The general message, however, was that we needed something better than writing to HDFS for communication and all of the solutions used something like MPI or another way of communication between nodes directly.

It would be great to have GBDT in Mahout, but it's not intuitive how to fit a fast and scalable implementation into the existing framework.

- jerry



Re: About GBDT support

Posted by "Bai, Gang" <de...@baigang.net>.
This post is probably OT since it's essentially about implementing an
algorithm for Mahout. :-)

By the nature of the boosting method, GBDT is hard to parallelize (in a
whole-ensemble manner), so it typically requires multiple stages. Also,
efficiency is not an urgent issue. A distributed edition of the algorithm
focuses on tackling huge amounts of data, which is impossible for a
traditional in-memory edition.

Both Yahoo and Google have articles addressing this problem. Yahoo's method*
is less prominent, IMHO. It mainly deals with scalability, i.e. partitioning
the data both horizontally (in the sample dimension) and vertically (in the
feature dimension), but leaves the communication costs unresolved, which
makes it less efficient than a sequential algorithm. As for Google's
method**, I have only skimmed the article and have not found time to read it
in detail. It is generally fancier, presenting the algorithm as well as
addressing several engineering issues.

I don't know how to start, or propose to start, an implementation of a
particular algorithm, but I recommend this one. I'll give assistance on this
project.

* Stochastic Gradient Boosted Distributed Decision Trees. CIKM 2009.
** PLANET: Massively Parallel Learning of Tree Ensembles with MapReduce.
VLDB 2009.
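The multi-stage, inherently sequential nature described above is visible in a miniature single-machine boosting loop: each stage fits a stump to the residuals of the ensemble built so far, so stage t cannot start before stage t-1 finishes. A toy sketch with squared loss and one-feature stumps (x assumed sorted ascending; not Mahout code):

```java
// Miniature GBDT loop for squared loss: each stage fits a one-split stump
// to the residuals of the current ensemble. Purely illustrative.
public class BoostingSketch {

    // A stump: predict `left` if x <= split, else `right`.
    static final class Stump {
        final double split, left, right;
        Stump(double split, double left, double right) {
            this.split = split; this.left = left; this.right = right;
        }
        double predict(double x) { return x <= split ? left : right; }
    }

    // Fit one stump to (x, residual) by trying each midpoint split.
    // Assumes x is sorted ascending.
    static Stump fitStump(double[] x, double[] r) {
        double bestErr = Double.POSITIVE_INFINITY;
        Stump best = null;
        for (int i = 0; i + 1 < x.length; i++) {
            double split = (x[i] + x[i + 1]) / 2;
            double sL = 0, nL = 0, sR = 0, nR = 0;
            for (int j = 0; j < x.length; j++)
                if (x[j] <= split) { sL += r[j]; nL++; } else { sR += r[j]; nR++; }
            double mL = sL / nL, mR = sR / nR;
            double err = 0;
            for (int j = 0; j < x.length; j++) {
                double d = r[j] - (x[j] <= split ? mL : mR);
                err += d * d;
            }
            if (err < bestErr) { bestErr = err; best = new Stump(split, mL, mR); }
        }
        return best;
    }

    // Sequential boosting: the residuals of stage t depend on stages < t.
    // Returns the training sum of squared errors after `stages` rounds.
    static double trainAndReport(double[] x, double[] y, int stages) {
        double[] pred = new double[x.length];
        for (int t = 0; t < stages; t++) {
            double[] r = new double[x.length];
            for (int j = 0; j < x.length; j++) r[j] = y[j] - pred[j];
            Stump s = fitStump(x, r);
            for (int j = 0; j < x.length; j++) pred[j] += s.predict(x[j]);
        }
        double sse = 0;
        for (int j = 0; j < x.length; j++) sse += (y[j] - pred[j]) * (y[j] - pred[j]);
        return sse;
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        double[] y = {1, 1, 3, 5};
        // More stages reduce the training error, but only one at a time.
        System.out.println(trainAndReport(x, y, 3) <= trainAndReport(x, y, 1));
    }
}
```

What can be parallelized is the inside of each stage, the stump fitting over a huge data set, which is exactly where the split-statistics communication issue from the papers above arises.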

Thanks,
-BaiGang

On Tue, Mar 22, 2011 at 10:23 PM, Ted Dunning <te...@gmail.com> wrote:

> I don't know of anybody currently planning to implement GBDT's.  A scalable
> implementation would probably be a good thing to have in Mahout.
>
> Decision tree methods can, however, be difficult to implement in a scalable
> way.
>
> How do you propose to do so?
>
>
> On Tue, Mar 22, 2011 at 6:24 AM, Bai, Gang <de...@baigang.net> wrote:
>
>> Hi all,
>>
>> Is there any plan to implement Gredient Boosted Decision Trees in Mahout?
>> I
>> think it's high expected because of its extensive application in web data
>> mining.
>>
>> Best regards,
>> -BaiGang
>>
>
>

Re: About GBDT support

Posted by Lance Norskog <go...@gmail.com>.
Also, please start a new thread for this problem. It has nothing to do
with the subject line.

On Wed, Mar 23, 2011 at 10:12 AM, Jeff Eastman <je...@narus.com> wrote:
> The console logs are not helpful. Need to see the command line arguments you are using. In particular, did you use the -cl option when running kmeans?
>
> -----Original Message-----
> From: vishnu krishnan [mailto:vgrkrishnan@gmail.com]
> Sent: Wednesday, March 23, 2011 10:02 AM
> To: user@mahout.apache.org
> Cc: Ted Dunning; Bai, Gang
> Subject: Re: About GBDT support
>
> We ran k-means news clustering with 53 news articles. We took the news
> articles and article IDs from a database, and we got output like this.
> What is meant by
>
> 0 belongs to cluster 1.0: []
> Is there only one cluster, and where is the article ID we had appended?
> How can we know which article belongs to which cluster? Kindly help me to
> rectify this problem?
>
>
>
> OUTPUT
>
> init:
> deps-module-jar:
> deps-ear-jar:
> deps-jar:
> compile-single:
> run-main:
> Mar 23, 2011 3:02:38 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Deleting newsClusters
> Mar 23, 2011 3:02:38 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
> Mar 23, 2011 3:02:38 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:39 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0001
> Mar 23, 2011 3:02:39 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:39 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0001_m_000000_0' to
> newsClusters/tokenized-documents
> Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0001_m_000000_0' done.
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 0%
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0001
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 5
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=8889540
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=9063087
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=0
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=53
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:40 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0002
> Mar 23, 2011 3:02:40 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:40 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Max Ngram size is 2
> Mar 23, 2011 3:02:40 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Emit Unitgrams is true
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 0% reduce 0%
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0002_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0002_m_000000_0' done.
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size:
> 1104153 bytes
> Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:41 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Min support is 5
> Mar 23, 2011 3:02:41 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Emit Unitgrams is true
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0002_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0002_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:42 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0002_r_000000_0' to
> newsClusters/wordcount/subgrams
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0002_r_000000_0' done.
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0002
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 14
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: org.apache.mahout.vectorizer.collocations.llr.CollocMapper$Count
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: NGRAM_TOTAL=12244
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: org.apache.mahout.vectorizer.collocations.llr.CollocReducer$Skipped
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: LESS_THAN_MIN_SUPPORT=22811
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=36602035
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=38059703
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=10348
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=30502
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=795
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=61004
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=1512720
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=53681
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=53681
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=30502
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:42 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0003
> Mar 23, 2011 3:02:42 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0003_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0003_m_000000_0' done.
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size: 17039
> bytes
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: NGram Total is 12244
> Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Min LLR value is 1.0
> Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Emit Unitgrams is true
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0003_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0003_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:42 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0003_r_000000_0' to
> newsClusters/wordcount/ngrams
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0003_r_000000_0' done.
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0003
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 12
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=55302923
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=55833922
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=707
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=0
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=795
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=707
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=1590
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=15447
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=0
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=795
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=795
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:43 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0004
> Mar 23, 2011 3:02:43 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0004_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0004_m_000000_0' done.
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size: 89679
> bytes
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0004_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0004_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:44 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0004_r_000000_0' to
> newsClusters/partial-vectors-0
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0004_r_000000_0' done.
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0004
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 12
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=73176996
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=73805246
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=53
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=0
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=53
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=106
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=89466
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=0
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=53
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=53
> Mar 23, 2011 3:02:44 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:45 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0005
> Mar 23, 2011 3:02:45 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0005_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0005_m_000000_0' done.
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size: 42707
> bytes
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0005_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0005_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:45 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0005_r_000000_0' to
> newsClusters/tf-vectors
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0005_r_000000_0' done.
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0005
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 12
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=90946183
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=91678644
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=53
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=0
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=53
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=106
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=42496
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=0
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=53
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=53
> Mar 23, 2011 3:02:46 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Deleting newsClusters/partial-vectors-0
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:46 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0006
> Mar 23, 2011 3:02:46 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0006_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0006_m_000000_0' done.
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size: 9914
> bytes
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0006_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0006_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:46 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0006_r_000000_0' to
> newsClusters/df-count
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0006_r_000000_0' done.
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0006
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 12
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=108620888
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=109456559
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=708
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=708
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=708
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=1416
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=56220
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=4685
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=4685
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=708
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:47 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0007
> Mar 23, 2011 3:02:47 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0007_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0007_m_000000_0' done.
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size: 42707
> bytes
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0007_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0007_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:47 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0007_r_000000_0' to
> newsClusters/partial-vectors-0
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0007_r_000000_0' done.
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0007
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 12
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=126340072
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=127288605
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=53
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=0
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=53
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=106
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=42496
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=0
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=53
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=53
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:48 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0008
> Mar 23, 2011 3:02:48 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0008_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0008_m_000000_0' done.
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size: 907
> bytes
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0008_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0008_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:49 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0008_r_000000_0' to
> newsClusters/tfidf-vectors
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0008_r_000000_0' done.
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0008
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 12
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=143935937
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=144993725
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=53
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=0
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=53
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=106
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=799
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=0
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=53
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=53
> Mar 23, 2011 3:02:49 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Deleting newsClusters/partial-vectors-0
> Mar 23, 2011 3:02:49 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Build Clusters Input: newsClusters/tfidf-vectors Out:
> newsClusters/canopy-centroids Measure:
> org.apache.mahout.common.distance.EuclideanDistanceMeasure@903025 t1: 250.0
> t2: 120.0
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:50 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0009
> Mar 23, 2011 3:02:50 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0009_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0009_m_000000_0' done.
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size: 17
> bytes
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0009_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0009_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:50 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0009_r_000000_0' to
> newsClusters/canopy-centroids/clusters-0
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0009_r_000000_0' done.
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0009
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 12
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=161475007
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=162703107
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=1
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=0
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=1
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=2
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=13
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=0
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=1
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=1
> Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Input: newsClusters/tfidf-vectors Clusters In:
> newsClusters/canopy-centroids/clusters-0 Out: newsClusters/clusters
> Distance: org.apache.mahout.common.distance.TanimotoDistanceMeasure
> Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: convergence: 0.01 max Iterations: 20 num Reduce Tasks:
> org.apache.mahout.math.VectorWritable Input Vectors: {}
> Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: K-Means Iteration 1
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:51 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0010
> Mar 23, 2011 3:02:51 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: io.sort.mb = 100
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: data buffer = 79691776/99614720
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> <init>
> INFO: record buffer = 262144/327680
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> flush
> INFO: Starting flush of map output
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
> sortAndSpill
> INFO: Finished spill 0
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0010_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0010_m_000000_0' done.
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Merging 1 sorted segments
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
> INFO: Down to the last merge-pass, with 1 segments left of total size: 29
> bytes
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0010_r_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0010_r_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:51 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0010_r_000000_0' to
> newsClusters/clusters/clusters-1
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO: reduce > reduce
> Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0010_r_000000_0' done.
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 100%
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0010
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 13
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Clustering
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Converged Clusters=1
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: FileSystemCounters
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_READ=179033398
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=180412134
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input groups=1
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine output records=1
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce shuffle bytes=0
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce output records=1
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=2
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output bytes=1325
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Combine input records=53
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=53
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
> INFO: Reduce input records=1
> Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Clustering data
> Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Running Clustering
> Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: Input: newsClusters/tfidf-vectors Clusters In:
> newsClusters/clusters/clusters-1 Out: newsClusters/clusters/clusteredPoints
> Distance: org.apache.mahout.common.distance.TanimotoDistanceMeasure@1958cc2
> Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
> INFO: convergence: 0.01 Input Vectors: org.apache.mahout.math.VectorWritable
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
> INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
> - already initialized
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
> configureCommandLineOptions
> WARNING: Use GenericOptionsParser for parsing the arguments. Applications
> should implement Tool for the same.
> Mar 23, 2011 3:02:52 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Running job: job_local_0011
> Mar 23, 2011 3:02:52 PM
> org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
> INFO: Total input paths to process : 1
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task done
> INFO: Task:attempt_local_0011_m_000000_0 is done. And is in the process of
> commiting
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task commit
> INFO: Task attempt_local_0011_m_000000_0 is allowed to commit now
> Mar 23, 2011 3:02:52 PM
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
> INFO: Saved output of task 'attempt_local_0011_m_000000_0' to
> newsClusters/clusters/clusteredPoints
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.LocalJobRunner$Job
> statusUpdate
> INFO:
> Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task sendDone
> INFO: Task 'attempt_local_0011_m_000000_0' done.
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: map 100% reduce 0%
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.JobClient
> monitorAndPrintJob
> INFO: Job complete: job_local_0011
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
> INFO: Counters: 5
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> INFO: FileSystemCounters
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
> 0 belongs to cluster 1.0: []
> 0 belongs to cluster 1.0: []
> INFO: FILE_BYTES_READ=98289121
> 0 belongs to cluster 1.0: []
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
> INFO: FILE_BYTES_WRITTEN=99057511
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
> INFO: Map-Reduce Framework
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
> INFO: Map input records=53
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
> INFO: Spilled Records=0
> Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
> INFO: Map output records=53
> BUILD SUCCESSFUL (total time: 15 seconds)
>



-- 
Lance Norskog
goksron@gmail.com

RE: About GBDT support

Posted by Jeff Eastman <je...@Narus.com>.
The console logs are not helpful; I need to see the command-line arguments you are using. In particular, did you use the -cl option when running kmeans?
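For reference, a minimal sketch of a kmeans invocation that includes the -cl flag. The paths and parameter values here are hypothetical, chosen only to mirror the directory names and settings visible in the log above; adjust them to your setup.

```shell
# Hypothetical Mahout kmeans invocation (paths/values are illustrative).
# Without -cl, kmeans only writes the cluster centroids (clusters-N);
# with -cl it runs a final classification pass that also writes
# clusteredPoints, mapping each input vector to its cluster.
bin/mahout kmeans \
  -i newsClusters/tfidf-vectors \
  -c newsClusters/canopy-centroids/clusters-0 \
  -o newsClusters/clusters \
  -dm org.apache.mahout.common.distance.TanimotoDistanceMeasure \
  -cd 0.01 \
  -x 20 \
  -cl
```

Note that to see your article IDs instead of empty brackets in the clusteredPoints output, the input vectors need to carry names, e.g. by vectorizing with the named-vector option (-nv) in seq2sparse.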

-----Original Message-----
From: vishnu krishnan [mailto:vgrkrishnan@gmail.com]
Sent: Wednesday, March 23, 2011 10:02 AM
To: user@mahout.apache.org
Cc: Ted Dunning; Bai, Gang
Subject: Re: About GBDT support

*We ran k-means news clustering on 53 news articles. We took the news articles and article IDs from a database, and we get output like the following. What does

0 belongs to cluster 1.0: []

mean? Is there only one cluster, and where is the article ID we had appended? How can we tell which article belongs to which cluster? Kindly help me rectify this problem.

*



OUTPUT

init:
deps-module-jar:
deps-ear-jar:
deps-jar:
compile-single:
run-main:
Mar 23, 2011 3:02:38 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting newsClusters
Mar 23, 2011 3:02:38 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
Mar 23, 2011 3:02:38 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:39 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0001
Mar 23, 2011 3:02:39 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
Mar 23, 2011 3:02:39 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0001_m_000000_0' to
newsClusters/tokenized-documents
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0001_m_000000_0' done.
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 0%
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0001
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=8889540
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=9063087
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=0
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:40 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:40 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0002
Mar 23, 2011 3:02:40 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:40 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Max Ngram size is 2
Mar 23, 2011 3:02:40 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Emit Unitgrams is true
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 0% reduce 0%
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0002_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0002_m_000000_0' done.
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size:
1104153 bytes
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:41 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Min support is 5
Mar 23, 2011 3:02:41 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Emit Unitgrams is true
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0002_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0002_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:42 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0002_r_000000_0' to
newsClusters/wordcount/subgrams
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0002_r_000000_0' done.
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0002
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 14
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: org.apache.mahout.vectorizer.collocations.llr.CollocMapper$Count
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: NGRAM_TOTAL=12244
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: org.apache.mahout.vectorizer.collocations.llr.CollocReducer$Skipped
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: LESS_THAN_MIN_SUPPORT=22811
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=36602035
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=38059703
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=10348
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=30502
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=795
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=61004
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=1512720
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=53681
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53681
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=30502
Mar 23, 2011 3:02:42 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:42 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0003
Mar 23, 2011 3:02:42 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0003_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0003_m_000000_0' done.
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 17039
bytes
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: NGram Total is 12244
Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Min LLR value is 1.0
Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Emit Unitgrams is true
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0003_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0003_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:42 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0003_r_000000_0' to
newsClusters/wordcount/ngrams
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0003_r_000000_0' done.
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0003
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=55302923
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=55833922
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=707
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=795
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=707
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=1590
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=15447
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=795
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=795
Mar 23, 2011 3:02:43 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:43 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0004
Mar 23, 2011 3:02:43 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0004_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0004_m_000000_0' done.
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 89679
bytes
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0004_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0004_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:44 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0004_r_000000_0' to
newsClusters/partial-vectors-0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0004_r_000000_0' done.
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0004
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=73176996
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=73805246
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=106
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=89466
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:45 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0005
Mar 23, 2011 3:02:45 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0005_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0005_m_000000_0' done.
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 42707
bytes
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0005_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0005_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:45 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0005_r_000000_0' to
newsClusters/tf-vectors
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0005_r_000000_0' done.
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0005
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=90946183
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=91678644
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=53
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=53
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=106
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=42496
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=53
Mar 23, 2011 3:02:46 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting newsClusters/partial-vectors-0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:46 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0006
Mar 23, 2011 3:02:46 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0006_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0006_m_000000_0' done.
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 9914
bytes
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0006_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0006_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:46 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0006_r_000000_0' to
newsClusters/df-count
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0006_r_000000_0' done.
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0006
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=108620888
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=109456559
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=708
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=708
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=708
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=1416
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=56220
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=4685
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=4685
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=708
Mar 23, 2011 3:02:47 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:47 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0007
Mar 23, 2011 3:02:47 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0007_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0007_m_000000_0' done.
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 42707
bytes
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0007_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0007_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:47 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0007_r_000000_0' to
newsClusters/partial-vectors-0
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0007_r_000000_0' done.
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0007
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=126340072
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=127288605
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=106
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=42496
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:48 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0008
Mar 23, 2011 3:02:48 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0008_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0008_m_000000_0' done.
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 907
bytes
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0008_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0008_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:49 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0008_r_000000_0' to
newsClusters/tfidf-vectors
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0008_r_000000_0' done.
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0008
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=143935937
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=144993725
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=53
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=53
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=106
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=799
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=53
Mar 23, 2011 3:02:49 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting newsClusters/partial-vectors-0
Mar 23, 2011 3:02:49 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Build Clusters Input: newsClusters/tfidf-vectors Out:
newsClusters/canopy-centroids Measure:
org.apache.mahout.common.distance.EuclideanDistanceMeasure@903025 t1: 250.0
t2: 120.0
Mar 23, 2011 3:02:49 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:50 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0009
Mar 23, 2011 3:02:50 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0009_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0009_m_000000_0' done.
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 17
bytes
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0009_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0009_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:50 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0009_r_000000_0' to
newsClusters/canopy-centroids/clusters-0
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0009_r_000000_0' done.
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0009
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=161475007
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=162703107
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=2
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=13
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=1
Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Input: newsClusters/tfidf-vectors Clusters In:
newsClusters/canopy-centroids/clusters-0 Out: newsClusters/clusters
Distance: org.apache.mahout.common.distance.TanimotoDistanceMeasure
Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: convergence: 0.01 max Iterations: 20 num Reduce Tasks:
org.apache.mahout.math.VectorWritable Input Vectors: {}
Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: K-Means Iteration 1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:51 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0010
Mar 23, 2011 3:02:51 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0010_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0010_m_000000_0' done.
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 29
bytes
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0010_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0010_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:51 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0010_r_000000_0' to
newsClusters/clusters/clusters-1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0010_r_000000_0' done.
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0010
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 13
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Clustering
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Converged Clusters=1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=179033398
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=180412134
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=2
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=1325
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=53
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=1
Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Clustering data
Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Running Clustering
Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Input: newsClusters/tfidf-vectors Clusters In:
newsClusters/clusters/clusters-1 Out: newsClusters/clusters/clusteredPoints
Distance: org.apache.mahout.common.distance.TanimotoDistanceMeasure@1958cc2
Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: convergence: 0.01 Input Vectors: org.apache.mahout.math.VectorWritable
Mar 23, 2011 3:02:52 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:52 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0011
Mar 23, 2011 3:02:52 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0011_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0011_m_000000_0 is allowed to commit now
Mar 23, 2011 3:02:52 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0011_m_000000_0' to
newsClusters/clusters/clusteredPoints
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0011_m_000000_0' done.
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 0%
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0011
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
INFO: FileSystemCounters
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
INFO: FILE_BYTES_READ=98289121
0 belongs to cluster 1.0: []
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=99057511
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=0
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
BUILD SUCCESSFUL (total time: 15 seconds)

Re: About GBDT support

Posted by vishnu krishnan <vg...@gmail.com>.
We ran k-means news clustering on 53 news articles. We took the article
text and article ID from a database, and we get output like this:

0 belongs to cluster 1.0: []

What does this mean? Is there only one cluster, and where is the article
ID we appended? How can we know which article belongs to which cluster?
Kindly help me rectify this problem.
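One possible explanation, since the vectorization code is not shown here, is that the documents were written as plain vectors rather than as Mahout's NamedVector, so the dump falls back to a default key of 0 instead of printing each article ID. If the dump lines did carry real IDs, recovering the article-to-cluster mapping would just be a matter of parsing lines of the form `<id> belongs to cluster <clusterId>: [...]`. A minimal, hypothetical sketch (the `article-*` IDs and the `ClusterDumpParser` class are illustrative, not part of Mahout):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ClusterDumpParser {

    // Parses lines of the form "<docId> belongs to cluster <clusterId>: [...]"
    // into a docId -> clusterId map, skipping lines that do not match.
    static Map<String, String> parse(String[] lines) {
        Map<String, String> assignments = new LinkedHashMap<>();
        for (String line : lines) {
            String[] parts = line.split(" belongs to cluster ");
            if (parts.length != 2) {
                continue; // not an assignment line
            }
            String docId = parts[0].trim();
            String clusterId = parts[1].split(":")[0].trim();
            assignments.put(docId, clusterId);
        }
        return assignments;
    }

    public static void main(String[] args) {
        String[] sample = {
            "article-17 belongs to cluster 1.0: []",
            "article-42 belongs to cluster 3.0: []"
        };
        System.out.println(parse(sample));
    }
}
```

This only helps once the first token is a real document ID, which is why wrapping each document vector in a NamedVector keyed by the article ID (before writing the sequence file) is the usual fix.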



OUTPUT

init:
deps-module-jar:
deps-ear-jar:
deps-jar:
compile-single:
run-main:
Mar 23, 2011 3:02:38 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting newsClusters
Mar 23, 2011 3:02:38 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Initializing JVM Metrics with processName=JobTracker, sessionId=
Mar 23, 2011 3:02:38 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:39 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0001
Mar 23, 2011 3:02:39 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
Mar 23, 2011 3:02:39 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0001_m_000000_0' to
newsClusters/tokenized-documents
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:39 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0001_m_000000_0' done.
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 0%
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0001
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=8889540
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=9063087
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=0
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:40 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:40 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0002
Mar 23, 2011 3:02:40 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:40 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:40 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Max Ngram size is 2
Mar 23, 2011 3:02:40 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Emit Unitgrams is true
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 0% reduce 0%
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0002_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0002_m_000000_0' done.
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size:
1104153 bytes
Mar 23, 2011 3:02:41 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:41 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Min support is 5
Mar 23, 2011 3:02:41 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Emit Unitgrams is true
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0002_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0002_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:42 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0002_r_000000_0' to
newsClusters/wordcount/subgrams
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0002_r_000000_0' done.
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0002
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 14
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: org.apache.mahout.vectorizer.collocations.llr.CollocMapper$Count
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: NGRAM_TOTAL=12244
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: org.apache.mahout.vectorizer.collocations.llr.CollocReducer$Skipped
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: LESS_THAN_MIN_SUPPORT=22811
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=36602035
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=38059703
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=10348
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=30502
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=795
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=61004
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=1512720
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=53681
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53681
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=30502
Mar 23, 2011 3:02:42 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:42 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0003
Mar 23, 2011 3:02:42 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0003_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0003_m_000000_0' done.
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 17039
bytes
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: NGram Total is 12244
Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Min LLR value is 1.0
Mar 23, 2011 3:02:42 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Emit Unitgrams is true
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0003_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0003_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:42 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0003_r_000000_0' to
newsClusters/wordcount/ngrams
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:42 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0003_r_000000_0' done.
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0003
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=55302923
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=55833922
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=707
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=795
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=707
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=1590
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=15447
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=795
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=795
Mar 23, 2011 3:02:43 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:43 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0004
Mar 23, 2011 3:02:43 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:43 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0004_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0004_m_000000_0' done.
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 89679
bytes
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0004_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0004_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:44 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0004_r_000000_0' to
newsClusters/partial-vectors-0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0004_r_000000_0' done.
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0004
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=73176996
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=73805246
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=106
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=89466
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=53
Mar 23, 2011 3:02:44 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:45 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0005
Mar 23, 2011 3:02:45 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0005_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0005_m_000000_0' done.
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 42707
bytes
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0005_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0005_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:45 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0005_r_000000_0' to
newsClusters/tf-vectors
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:45 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0005_r_000000_0' done.
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0005
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=90946183
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=91678644
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=53
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=53
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=106
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=42496
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=53
Mar 23, 2011 3:02:46 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting newsClusters/partial-vectors-0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:46 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0006
Mar 23, 2011 3:02:46 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0006_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0006_m_000000_0' done.
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 9914
bytes
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0006_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0006_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:46 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0006_r_000000_0' to
newsClusters/df-count
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:46 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0006_r_000000_0' done.
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0006
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=108620888
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=109456559
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=708
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=708
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=708
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=1416
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=56220
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=4685
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=4685
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=708
Mar 23, 2011 3:02:47 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:47 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0007
Mar 23, 2011 3:02:47 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0007_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0007_m_000000_0' done.
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 42707
bytes
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0007_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0007_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:47 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0007_r_000000_0' to
newsClusters/partial-vectors-0
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:47 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0007_r_000000_0' done.
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0007
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=126340072
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=127288605
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=106
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=42496
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=53
Mar 23, 2011 3:02:48 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:48 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0008
Mar 23, 2011 3:02:48 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0008_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:48 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0008_m_000000_0' done.
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 907
bytes
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0008_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0008_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:49 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0008_r_000000_0' to
newsClusters/tfidf-vectors
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0008_r_000000_0' done.
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0008
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=143935937
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=144993725
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=53
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=53
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=106
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=799
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=53
Mar 23, 2011 3:02:49 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Deleting newsClusters/partial-vectors-0
Mar 23, 2011 3:02:49 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Build Clusters Input: newsClusters/tfidf-vectors Out:
newsClusters/canopy-centroids Measure:
org.apache.mahout.common.distance.EuclideanDistanceMeasure@903025 t1: 250.0
t2: 120.0
Mar 23, 2011 3:02:49 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:49 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:50 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0009
Mar 23, 2011 3:02:50 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0009_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0009_m_000000_0' done.
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 17
bytes
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0009_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0009_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:50 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0009_r_000000_0' to
newsClusters/canopy-centroids/clusters-0
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:50 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0009_r_000000_0' done.
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0009
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 12
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=161475007
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=162703107
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=0
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=2
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=13
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=0
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=1
Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Input: newsClusters/tfidf-vectors Clusters In:
newsClusters/canopy-centroids/clusters-0 Out: newsClusters/clusters
Distance: org.apache.mahout.common.distance.TanimotoDistanceMeasure
Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: convergence: 0.01 max Iterations: 20 num Reduce Tasks:
org.apache.mahout.math.VectorWritable Input Vectors: {}
Mar 23, 2011 3:02:51 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: K-Means Iteration 1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:51 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0010
Mar 23, 2011 3:02:51 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: io.sort.mb = 100
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: data buffer = 79691776/99614720
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
<init>
INFO: record buffer = 262144/327680
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
flush
INFO: Starting flush of map output
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.MapTask$MapOutputBuffer
sortAndSpill
INFO: Finished spill 0
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0010_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0010_m_000000_0' done.
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Down to the last merge-pass, with 1 segments left of total size: 29
bytes
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0010_r_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0010_r_000000_0 is allowed to commit now
Mar 23, 2011 3:02:51 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0010_r_000000_0' to
newsClusters/clusters/clusters-1
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO: reduce > reduce
Mar 23, 2011 3:02:51 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0010_r_000000_0' done.
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 100%
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0010
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 13
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Clustering
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Converged Clusters=1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: FileSystemCounters
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_READ=179033398
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=180412134
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input groups=1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Combine output records=1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce shuffle bytes=0
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce output records=1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=2
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Map output bytes=1325
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Combine input records=53
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Counters log
INFO: Reduce input records=1
Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Clustering data
Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Running Clustering
Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: Input: newsClusters/tfidf-vectors Clusters In:
newsClusters/clusters/clusters-1 Out: newsClusters/clusters/clusteredPoints
Distance: org.apache.mahout.common.distance.TanimotoDistanceMeasure@1958cc2
Mar 23, 2011 3:02:52 PM org.slf4j.impl.JCLLoggerAdapter info
INFO: convergence: 0.01 Input Vectors: org.apache.mahout.math.VectorWritable
Mar 23, 2011 3:02:52 PM org.apache.hadoop.metrics.jvm.JvmMetrics init
INFO: Cannot initialize JVM Metrics with processName=JobTracker, sessionId=
- already initialized
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
configureCommandLineOptions
WARNING: Use GenericOptionsParser for parsing the arguments. Applications
should implement Tool for the same.
Mar 23, 2011 3:02:52 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Running job: job_local_0011
Mar 23, 2011 3:02:52 PM
org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0011_m_000000_0 is done. And is in the process of
commiting
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0011_m_000000_0 is allowed to commit now
Mar 23, 2011 3:02:52 PM
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0011_m_000000_0' to
newsClusters/clusters/clusteredPoints
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.LocalJobRunner$Job
statusUpdate
INFO:
Mar 23, 2011 3:02:52 PM org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0011_m_000000_0' done.
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: map 100% reduce 0%
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.JobClient
monitorAndPrintJob
INFO: Job complete: job_local_0011
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Counters: 5
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
INFO: FileSystemCounters
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
0 belongs to cluster 1.0: []
0 belongs to cluster 1.0: []
INFO: FILE_BYTES_READ=98289121
0 belongs to cluster 1.0: []
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: FILE_BYTES_WRITTEN=99057511
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Map-Reduce Framework
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Map input records=53
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Spilled Records=0
Mar 23, 2011 3:02:53 PM org.apache.hadoop.mapred.Counters log
INFO: Map output records=53
BUILD SUCCESSFUL (total time: 15 seconds)
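A note on the canopy step in the log above: the driver prints the two distance thresholds t1: 250.0 and t2: 120.0. A minimal sketch of the textbook canopy-generation rule these parameters drive (illustrative only, not Mahout's actual CanopyDriver code) might look like:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def canopies(points, t1, t2, distance=euclidean):
    """Textbook canopy clustering. t1 is the loose threshold, t2 the
    tight one (t2 < t1). Each remaining candidate seeds a canopy; any
    point within t1 of the center joins that canopy, and any candidate
    within t2 is removed so it cannot seed another one."""
    assert t2 < t1, "t2 must be the tighter threshold"
    result = []
    candidates = list(points)
    while candidates:
        center = candidates.pop(0)
        members = [p for p in points if distance(center, p) < t1]
        result.append((center, members))
        # Points within t2 of this center are consumed; the rest may
        # still seed (or belong to) other canopies.
        candidates = [p for p in candidates if distance(center, p) > t2]
    return result
```

The canopy centers produced this way are what the subsequent k-means pass in the log uses as its initial cluster seeds.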

Re: About GBDT support

Posted by Ted Dunning <te...@gmail.com>.
I don't know of anybody currently planning to implement GBDTs.  A scalable
implementation would probably be a good thing to have in Mahout.

Decision tree methods can, however, be difficult to implement in a scalable
way.

How do you propose to do so?

On Tue, Mar 22, 2011 at 6:24 AM, Bai, Gang <de...@baigang.net> wrote:

> Hi all,
>
> Is there any plan to implement Gradient Boosted Decision Trees in Mahout? I
> think it's highly expected because of its extensive application in web data
> mining.
>
> Best regards,
> -BaiGang
>
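For context on why scalability is the sticking point here: the core boosting loop is sequential, because each tree is fit to the residuals (negative gradients) left by the ensemble built so far. A minimal single-machine sketch with squared-error loss and one-feature regression stumps (illustrative only, not a proposed Mahout design):

```python
def fit_stump(xs, rs):
    """Find the single-threshold split on a 1-D feature minimizing SSE
    against the residuals rs. Returns (threshold, left_value, right_value)."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        lv = sum(left) / len(left) if left else 0.0
        rv = sum(right) / len(right) if right else 0.0
        sse = (sum((r - lv) ** 2 for r in left)
               + sum((r - rv) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    return best[1], best[2], best[3]

def gbdt_fit(xs, ys, rounds=20, lr=0.1):
    """Gradient boosting with squared-error loss: each stump is fit to
    the residuals of the current ensemble, then added with shrinkage lr.
    The outer loop is inherently sequential across rounds -- the part
    that is hard to parallelize -- while fitting each individual tree
    (split search over features/instances) can be distributed."""
    f0 = sum(ys) / len(ys)          # initial prediction: the mean
    stumps = []
    preds = [f0] * len(ys)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        t, lv, rv = fit_stump(xs, residuals)
        stumps.append((t, lv, rv))
        preds = [p + lr * (lv if x <= t else rv)
                 for p, x in zip(preds, xs)]
    return f0, lr, stumps

def gbdt_predict(model, x):
    f0, lr, stumps = model
    return f0 + sum(lr * (lv if x <= t else rv) for t, lv, rv in stumps)
```

A distributed version would parallelize the per-round split search (as the later MPI discussion in this thread does), but the round-to-round dependency on updated residuals remains.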