Posted to hdfs-user@hadoop.apache.org by Alexander Hristov <al...@planetalia.com> on 2012/10/03 06:43:48 UTC

Classic (MapReduce 1) cluster in Hadoop 0.23 just won't listen

Hi again

Why does it seem to me that everything Hadoop 0.23-related is an uphill 
battle? :-(

I'm trying something as simple as running a classic (MapReduce 1) Hadoop 
cluster. Here's my configuration:

core-site.xml:
<configuration>
     <property>
       <name>fs.default.name</name>
       <value>hdfs://samplehost.com:9000</value>
     </property>
</configuration>

hdfs-site.xml
<configuration>
      <property>
          <name>dfs.replication</name>
          <value>3</value>
      </property>
</configuration>

mapred-site.xml
<configuration>
   <property>
     <name>mapreduce.framework.name</name>
     <value>classic</value>
   </property>
   <property>
     <name>mapred.job.tracker</name>
     <value>samplehost.com:9001</value>
   </property>
   <property>
     <name>mapreduce.jobtracker.address</name>
     <value>samplehost.com:9001</value>
   </property>
</configuration>

yarn-site.xml
<configuration>
</configuration>


Well, I start everything up and run netstat -l, and nothing is 
listening on port 9001. There are no errors in the logs, and no mention 
of that port either.
Naturally, all MapReduce examples fail with Connection Refused.

Starting the same cluster using a MapReduce 2 (YARN) configuration works 
properly.

Regards,

Alexander




Re: Classic (MapReduce 1) cluster in Hadoop 0.23 just won't listen

Posted by Harsh J <ha...@cloudera.com>.
I was incorrect about the below:

> The classic option exists to provide backward compatibility for users
> wanting to run an MR1 cluster (with JT, etc.).

It turns out "classic" is only there so that older clients can run on
YARN without any other changes: it roughly translates the JobTracker
address property into the YARN ResourceManager address, allowing a
near-seamless transition of jobs from 0.22 to 0.23.

Hence there is no support in the new Apache Hadoop 0.23/2.x libraries
for submitting a job to an MR1 cluster, nor for running an MR1 cluster
out of them. You'll have to look elsewhere for that, or try the
MR1-specific ideas I mentioned earlier.
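
For reference, what does work out of the box on 0.23 is the YARN mode.
A minimal sketch of that configuration (property names as I recall them
from the 0.23 cluster-setup docs; the host and port are only
placeholders, and the aux-service entries assume the stock
ShuffleHandler) would look roughly like:

mapred-site.xml
<configuration>
   <property>
     <name>mapreduce.framework.name</name>
     <value>yarn</value>
   </property>
</configuration>

yarn-site.xml
<configuration>
   <!-- placeholder host:port; use your ResourceManager's address -->
   <property>
     <name>yarn.resourcemanager.address</name>
     <value>samplehost.com:8032</value>
   </property>
   <!-- the MR2 shuffle service the NodeManagers must run -->
   <property>
     <name>yarn.nodemanager.aux-services</name>
     <value>mapreduce.shuffle</value>
   </property>
   <property>
     <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
     <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>
</configuration>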

On Wed, Oct 3, 2012 at 12:08 PM, Harsh J <ha...@cloudera.com> wrote:
> Hi,
>
> The classic option exists to provide backward compatibility for users
> wanting to run an MR1 cluster (with JT, etc.).
>
> With the inclusion of YARN and MR2 modes of runtime, Apache Hadoop
> removed MR1 services support:
>
> """
> ➜  mapred jobtracker
> Sorry, the jobtracker command is no longer supported.
> """
>
> So if you need MR1 running with few hassles, you'll have to either use
> the independent version of classic MR from an Apache Hadoop 0.22
> release (which had independent components, making this easy) and set
> it up as an independent cluster with 0.23 HDFS jars (API is compatible
> so should work), or use the MR1 tarball CDH4 offers (which is closer
> to 1.x MR1 feature-set), and strip out the CDH4 HDFS jars and then run
> the daemons via start-mapred.sh in either of them.
>
> If you instead want to use Apache Hadoop 1.x, you'll need to remove
> all HDFS references in the core-jar or exclude it from a build, and
> produce an MR-only deployable set. This is harder work to do.
>
> P.s. I haven't tried this out personally, but feel this may work.
>
> On Wed, Oct 3, 2012 at 11:36 AM, Alexander Hristov <al...@planetalia.com> wrote:
>> Thanks for replying.
>>
>> I'm using the 0.23.3 release as distributed, no previous versions.
>>
>> So what's the point in documenting a classic option, then, if it is not
>> available? I thought distributions were self-contained, or at least the docs
>> don't mention that you need any previous versions.
>>
>>
>>
>>> What is your 'classic' MapReduce bundle version? 0.23 ships no classic
>>> MapReduce services bundle in it AFAIK, only YARN+(MR2-App).
>>>
>>> Whatever version you're trying to use, make sure it is not using the
>>> older HDFS jars?
>>>
>>> On Wed, Oct 3, 2012 at 10:13 AM, Alexander Hristov <al...@planetalia.com>
>>> wrote:
>>>>
>>>> Hi again
>>>>
>>>> Why does it seem to me that everything Hadoop 0.23-related is an uphill
>>>> battle? :-(
>>>>
>>>> I'm trying something as simple as running a classic(MapReduce 1) Hadoop
>>>> cluster. Here's my configuration:
>>>>
>>>> core-site.xml:
>>>> <configuration>
>>>>      <property>
>>>>        <name>fs.default.name</name>
>>>>        <value>hdfs://samplehost.com:9000</value>
>>>>      </property>
>>>> </configuration>
>>>>
>>>> hdfs-site.xml
>>>> <configuration>
>>>>       <property>
>>>>           <name>dfs.replication</name>
>>>>           <value>3</value>
>>>>       </property>
>>>> </configuration>
>>>>
>>>> mapred-site.xml
>>>> <configuration>
>>>>    <property>
>>>>      <name>mapreduce.framework.name</name>
>>>>      <value>classic</value>
>>>>    </property>
>>>>    <property>
>>>>      <name>mapred.job.tracker</name>
>>>>      <value>samplehost.com:9001</value>
>>>>    </property>
>>>>      <property>
>>>>      <name> mapreduce.jobtracker.address</name>
>>>>      <value>samplehost.com:9001</value>
>>>>    </property>
>>>> </configuration>
>>>>
>>>> yarn-site.xml
>>>> <configuration>
>>>> </configuration>
>>>>
>>>>
>>>> Well, I start the thing and do a netstat -l , and there's no one
>>>> listening
>>>> on port 9001. There are no errors in the logs, and no mention of that
>>>> port,
>>>> either.
>>>> Obviously, all Map/Reduce examples fail with Connection Refused.
>>>>
>>>> Starting the same cluster using a MapReduce 2 (YARN) configuration works
>>>> properly.
>>>>
>>>> Regards,
>>>>
>>>> Alexander
>>>>
>>>>
>>>>
>>>
>>>
>>
>
>
>
> --
> Harsh J



-- 
Harsh J

Re: Classic (MapReduce 1) cluster in Hadoop 0.23 just won't listen

Posted by Harsh J <ha...@cloudera.com>.
Hi,

The classic option exists to provide backward compatibility for users
wanting to run an MR1 cluster (with JT, etc.).

With the inclusion of the YARN and MR2 runtimes, Apache Hadoop
removed support for the MR1 services:

"""
➜  mapred jobtracker
Sorry, the jobtracker command is no longer supported.
"""

So if you need MR1 running with minimal hassle, you'll have to either
take the standalone classic MR from an Apache Hadoop 0.22 release
(which still shipped independent components, making this easy) and set
it up as a separate cluster against the 0.23 HDFS jars (the API is
compatible, so it should work), or use the MR1 tarball that CDH4
offers (which is closer to the 1.x MR1 feature set) and strip out its
CDH4 HDFS jars; in either case you then run the daemons via
start-mapred.sh.
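
In that separate-MR1-cluster approach, the MR1 daemons keep their
old-style configuration and simply point at the 0.23 HDFS. A rough
sketch (reusing the host and ports from your mail purely as
placeholders):

core-site.xml (for the MR1 daemons)
<configuration>
   <property>
     <name>fs.default.name</name>
     <value>hdfs://samplehost.com:9000</value>
   </property>
</configuration>

mapred-site.xml (for the MR1 daemons)
<configuration>
   <property>
     <name>mapred.job.tracker</name>
     <value>samplehost.com:9001</value>
   </property>
</configuration>

Note there is no mapreduce.framework.name here: that property only
means something to the 0.23/2.x client libraries, and in MR1 it is the
JobTracker itself that listens on the mapred.job.tracker port.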

If you instead want to use Apache Hadoop 1.x, you'll need to remove
all HDFS references from its core jar (or exclude them from a build)
and produce an MR-only deployable set. That is harder work.

P.S. I haven't tried this out personally, but I believe it may work.

On Wed, Oct 3, 2012 at 11:36 AM, Alexander Hristov <al...@planetalia.com> wrote:
> Thanks for replying.
>
> I'm using the 0.23.3 release as distributed, no previous versions.
>
> So what's the point in documenting a classic option, then, if it is not
> available? I thought distributions were self-contained, or at least the docs
> don't mention that you need any previous versions.
>
>
>
>> What is your 'classic' MapReduce bundle version? 0.23 ships no classic
>> MapReduce services bundle in it AFAIK, only YARN+(MR2-App).
>>
>> Whatever version you're trying to use, make sure it is not using the
>> older HDFS jars?
>>
>> On Wed, Oct 3, 2012 at 10:13 AM, Alexander Hristov <al...@planetalia.com>
>> wrote:
>>>
>>> Hi again
>>>
>>> Why does it seem to me that everything Hadoop 0.23-related is an uphill
>>> battle? :-(
>>>
>>> I'm trying something as simple as running a classic(MapReduce 1) Hadoop
>>> cluster. Here's my configuration:
>>>
>>> core-site.xml:
>>> <configuration>
>>>      <property>
>>>        <name>fs.default.name</name>
>>>        <value>hdfs://samplehost.com:9000</value>
>>>      </property>
>>> </configuration>
>>>
>>> hdfs-site.xml
>>> <configuration>
>>>       <property>
>>>           <name>dfs.replication</name>
>>>           <value>3</value>
>>>       </property>
>>> </configuration>
>>>
>>> mapred-site.xml
>>> <configuration>
>>>    <property>
>>>      <name>mapreduce.framework.name</name>
>>>      <value>classic</value>
>>>    </property>
>>>    <property>
>>>      <name>mapred.job.tracker</name>
>>>      <value>samplehost.com:9001</value>
>>>    </property>
>>>      <property>
>>>      <name> mapreduce.jobtracker.address</name>
>>>      <value>samplehost.com:9001</value>
>>>    </property>
>>> </configuration>
>>>
>>> yarn-site.xml
>>> <configuration>
>>> </configuration>
>>>
>>>
>>> Well, I start the thing and do a netstat -l , and there's no one
>>> listening
>>> on port 9001. There are no errors in the logs, and no mention of that
>>> port,
>>> either.
>>> Obviously, all Map/Reduce examples fail with Connection Refused.
>>>
>>> Starting the same cluster using a MapReduce 2 (YARN) configuration works
>>> properly.
>>>
>>> Regards,
>>>
>>> Alexander
>>>
>>>
>>>
>>
>>
>



-- 
Harsh J

Re: Classic (MapReduce 1) cluster in Hadoop 0.23 just won't listen

Posted by Alexander Hristov <al...@planetalia.com>.
Thanks for replying.

I'm using the 0.23.3 release as distributed, no previous versions.

So what's the point of documenting a classic option, then, if it isn't 
available? I thought the distributions were self-contained; at least, 
the docs don't mention that you need any previous versions.


> What is your 'classic' MapReduce bundle version? 0.23 ships no classic
> MapReduce services bundle in it AFAIK, only YARN+(MR2-App).
>
> Whatever version you're trying to use, make sure it is not using the
> older HDFS jars?
>
> On Wed, Oct 3, 2012 at 10:13 AM, Alexander Hristov <al...@planetalia.com> wrote:
>> Hi again
>>
>> Why does it seem to me that everything Hadoop 0.23-related is an uphill
>> battle? :-(
>>
>> I'm trying something as simple as running a classic(MapReduce 1) Hadoop
>> cluster. Here's my configuration:
>>
>> core-site.xml:
>> <configuration>
>>      <property>
>>        <name>fs.default.name</name>
>>        <value>hdfs://samplehost.com:9000</value>
>>      </property>
>> </configuration>
>>
>> hdfs-site.xml
>> <configuration>
>>       <property>
>>           <name>dfs.replication</name>
>>           <value>3</value>
>>       </property>
>> </configuration>
>>
>> mapred-site.xml
>> <configuration>
>>    <property>
>>      <name>mapreduce.framework.name</name>
>>      <value>classic</value>
>>    </property>
>>    <property>
>>      <name>mapred.job.tracker</name>
>>      <value>samplehost.com:9001</value>
>>    </property>
>>      <property>
>>      <name> mapreduce.jobtracker.address</name>
>>      <value>samplehost.com:9001</value>
>>    </property>
>> </configuration>
>>
>> yarn-site.xml
>> <configuration>
>> </configuration>
>>
>>
>> Well, I start the thing and do a netstat -l , and there's no one listening
>> on port 9001. There are no errors in the logs, and no mention of that port,
>> either.
>> Obviously, all Map/Reduce examples fail with Connection Refused.
>>
>> Starting the same cluster using a MapReduce 2 (YARN) configuration works
>> properly.
>>
>> Regards,
>>
>> Alexander
>>
>>
>>
>
>


Re: Classic (MapReduce 1) cluster in Hadoop 0.23 just won't listen

Posted by Harsh J <ha...@cloudera.com>.
What is your 'classic' MapReduce bundle version? AFAIK, 0.23 ships no
classic MapReduce services bundle, only YARN plus the MR2 application.

Whatever version you're trying to use, make sure it is not using the
older HDFS jars.

On Wed, Oct 3, 2012 at 10:13 AM, Alexander Hristov <al...@planetalia.com> wrote:
> Hi again
>
> Why does it seem to me that everything Hadoop 0.23-related is an uphill
> battle? :-(
>
> I'm trying something as simple as running a classic(MapReduce 1) Hadoop
> cluster. Here's my configuration:
>
> core-site.xml:
> <configuration>
>     <property>
>       <name>fs.default.name</name>
>       <value>hdfs://samplehost.com:9000</value>
>     </property>
> </configuration>
>
> hdfs-site.xml
> <configuration>
>      <property>
>          <name>dfs.replication</name>
>          <value>3</value>
>      </property>
> </configuration>
>
> mapred-site.xml
> <configuration>
>   <property>
>     <name>mapreduce.framework.name</name>
>     <value>classic</value>
>   </property>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>samplehost.com:9001</value>
>   </property>
>     <property>
>     <name> mapreduce.jobtracker.address</name>
>     <value>samplehost.com:9001</value>
>   </property>
> </configuration>
>
> yarn-site.xml
> <configuration>
> </configuration>
>
>
> Well, I start the thing and do a netstat -l , and there's no one listening
> on port 9001. There are no errors in the logs, and no mention of that port,
> either.
> Obviously, all Map/Reduce examples fail with Connection Refused.
>
> Starting the same cluster using a MapReduce 2 (YARN) configuration works
> properly.
>
> Regards,
>
> Alexander
>
>
>



-- 
Harsh J
