Posted to common-user@hadoop.apache.org by Y G <gy...@gmail.com> on 2009/09/22 10:44:56 UTC

wiki home page

hi all:
Someone modified the wiki home page (http://wiki.apache.org/hadoop) and left some "strange" remarks.

here is the remark:

> Big bug in Hadoop MapReduce <http://wiki.apache.org/hadoop/MapReduce>!!! When
> I use too Many Counters in a big job(processing about 4T data, 1 billion
> record), I often encounter dead lock problems with jobtracker. I doubt
> that codes in jobtracker has a fucky dead lock. So agony!!! And during 3
> months of hadoop running, I encounter so many 'read-only filesystem' under
> SUSE. if some nodes in read-only filesystem status, it will cause the job
> failed later.
> So agony!!!! Someone can discuss about it????
>

Has this been brought to anyone's attention before?
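
As background on the Counters complaint quoted above: here is a minimal sketch, assuming the current (0.20) org.apache.hadoop.mapreduce API, of how per-job Counters are normally declared. The class and enum names (CountingMapper, RecordStats) are made up for illustration, not taken from the remark. Counter updates travel with task heartbeats and are aggregated centrally on the JobTracker, which is why creating a very large number of distinct counters on a multi-terabyte, billion-record job puts pressure on it.

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical mapper with a small, fixed set of counters.
    public class CountingMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {

        // A bounded enum like this is the safe pattern; generating one
        // counter name per key or per record is what tends to overwhelm
        // the JobTracker on jobs of this size.
        enum RecordStats { GOOD, MALFORMED }

        private final LongWritable one = new LongWritable(1);

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            if (line.getLength() == 0) {
                // Each increment is cheap locally; the cost is the
                // per-counter bookkeeping on the JobTracker side.
                context.getCounter(RecordStats.MALFORMED).increment(1);
                return;
            }
            context.getCounter(RecordStats.GOOD).increment(1);
            context.write(line, one);
        }
    }

If a job really needs many dynamic counters, it may be safer to emit them as ordinary map output and aggregate them in a reducer instead.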
-----
Happy every day
Good health

Re: wiki home page

Posted by Y G <gy...@gmail.com>.
Now it is back to normal
-----
Happy every day
Good health

Re: wiki home page

Posted by "Eason.Lee" <le...@gmail.com>.
OH MY GOD!
