Posted to common-user@hadoop.apache.org by janani venkat <ja...@gmail.com> on 2010/04/09 12:18:54 UTC

distributed cache

Hi,
I'm quite new to Hadoop. I set up a single-node cluster and ran the sample
MapReduce programs on it; they worked fine.
1) I want to run the distributed cache code (on a single node or a 2-node
cluster) and view the output, but I don't understand how to specify the
input files, how to set up the paths in JobConf, or where to add the
functions described in the instructions (see the sketch after this list).
2) I also want to view the output files (logs).
3) The documentation talks about speculative execution, and it is set to
true by default in JobConf. But where exactly can the actual logic of
speculative execution be found in the Hadoop installation, i.e., the
specific code that gets executed when it is invoked?
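
For reference, a minimal sketch of the driver side of question 1, assuming
the old org.apache.hadoop.mapred API of that era; the class name, job name,
paths, and the lookup file are all made-up examples:

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheExample {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(CacheExample.class);
            conf.setJobName("distributed-cache-example");

            // Input and output are ordinary HDFS paths, taken here from the
            // command line: bin/hadoop jar cache.jar CacheExample in out
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            // The cached file must already be in HDFS before the job is
            // submitted, e.g.: bin/hadoop fs -put lookup.txt /user/janani/lookup.txt
            DistributedCache.addCacheFile(new URI("/user/janani/lookup.txt"), conf);

            JobClient.runJob(conf);
        }
    }

Each task can then open its local copy of the cached file by calling
DistributedCache.getLocalCacheFiles(job) inside the mapper's
configure(JobConf job) method.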


Waiting for guidance..

regards
KulliKarot

Re: distributed cache

Posted by janani venkat <ja...@gmail.com>.
Thanks, Raghava!
For the 3rd question, we want to have a look at the code first.

On Fri, Apr 9, 2010 at 10:28 PM, Raghava Mutharaju <
m.vijayaraghava@gmail.com> wrote:

> For the 3rd question, are you planning to change the code related to
> speculative execution or do you just want to have a look at it?

Re: distributed cache

Posted by Raghava Mutharaju <m....@gmail.com>.
Hi,

    I can answer the 2nd question.

>>> 2) I also want to view the output files (logs).
     Check the following link. It contains URLs for viewing the logs in the
Web UI:

http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_%28Single-Node_Cluster%29#Hadoop_Web_Interfaces

If that is not possible (the Web UI is the preferred way, at least for me),
the logs will be in ${HADOOP_LOG_DIR}; the default location is
${HADOOP_HOME}/logs, and the per-task logs are in its "userlogs" folder.
Both environment variables are generally set in hadoop-env.sh, so you can
check their values there.
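
For orientation, each task attempt gets its own directory under userlogs;
on a 0.20-style single-node setup the layout looks roughly like this (the
attempt id below is made up):

    ${HADOOP_HOME}/logs/userlogs/attempt_201004091218_0001_m_000000_0/
        stdout   <- whatever the task wrote to System.out
        stderr   <- whatever the task wrote to System.err
        syslog   <- the task's log4j output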

For the 3rd question, are you planning to change the code related to
speculative execution or do you just want to have a look at it?
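
If it is just a read-through you are after: in the 0.20-era source tree,
the scheduling side of speculative execution lives, as far as I can tell,
in org.apache.hadoop.mapred.JobInProgress (findSpeculativeTask) and
org.apache.hadoop.mapred.TaskInProgress (hasSpeculativeTask) on the
JobTracker side, though it is worth grepping your own version, since this
code moves between releases. The on/off switches, in contrast, are plain
JobConf settings:

    JobConf conf = new JobConf(MyJob.class);  // MyJob is a placeholder
    // Enable or disable speculation globally, or per phase; these map to
    // the mapred.map.tasks.speculative.execution and
    // mapred.reduce.tasks.speculative.execution properties.
    conf.setSpeculativeExecution(false);
    conf.setMapSpeculativeExecution(true);
    conf.setReduceSpeculativeExecution(false);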


Regards,
Raghava.

On Fri, Apr 9, 2010 at 6:18 AM, janani venkat <ja...@gmail.com> wrote:

> 2) I also want to view the output files (logs).
> 3) The documentation talks about speculative execution, and it is set to
> true by default in JobConf. But where exactly can the actual logic of
> speculative execution be found in the Hadoop installation, i.e., the
> specific code that gets executed when it is invoked?