Posted to common-user@hadoop.apache.org by Mohammad Mustaqeem <3m...@gmail.com> on 2013/05/08 15:43:52 UTC

Rack Aware Hadoop cluster

Hello everyone,
    I was searching for how to make the Hadoop cluster rack-aware, and I
found from here
http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness
that we can do this by setting the "topology.script.file.name" property. But it
is not written where to put this
<property>
        <name>topology.script.file.name</name>

<value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
</property>

I mean, in which configuration file?
I am using hadoop-2.0.3-alpha.


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270

Re: Rack Aware Hadoop cluster

Posted by Shahab Yunus <sh...@gmail.com>.
core-site.xml

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml
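In other words, the property goes into core-site.xml on the relevant daemons. A minimal sketch with an illustrative script path; note that in the Hadoop 2.x core-default.xml the key appears as net.topology.script.file.name, with the older topology.script.file.name kept as a deprecated alias:

```xml
<!-- core-site.xml -->
<configuration>
  <property>
    <name>topology.script.file.name</name>
    <value>/etc/hadoop/conf/rack.sh</value>
  </property>
</configuration>
```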


On Wed, May 8, 2013 at 9:43 AM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> Hello everyone,
>     I was searching for how to make the hadoop cluster rack-aware and I
> find out from here
> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
> we can do this by giving property of "topology.script.file.name". But
> here it is not written where to put this
> <property>
>         <name>topology.script.file.name</name>
>
> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> </property>
>
> Means in which configuration file.
> I am using hadoop-2.0.3-alpha.
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>

Re: Rack Aware Hadoop cluster

Posted by Chris Embree <ce...@gmail.com>.
Finally, one I can answer. :)  That should be in core-site.xml (unless it has
moved since 1.x).  It needs to be in the configuration for the NameNode(s)
and the JobTracker (or the ResourceManager under YARN).

In 1.x you need to restart NN and JT services for the script to take effect.


On Wed, May 8, 2013 at 9:43 AM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> Hello everyone,
>     I was searching for how to make the hadoop cluster rack-aware and I
> find out from here
> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
> we can do this by giving property of "topology.script.file.name". But
> here it is not written where to put this
> <property>
>         <name>topology.script.file.name</name>
>
> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> </property>
>
> Means in which configuration file.
> I am using hadoop-2.0.3-alpha.
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem has been resolved.
But a new issue arose. When I start the dfs, I get this line in the namenode
log - "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled". What does it
mean?
After that, nothing happens. JPS shows that the datanode is running, but the web
interface for dfshealth is not running.
Any idea?
Please help.


On Thu, May 9, 2013 at 10:00 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help
>
>
> On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
> > wrote:
>
>> That problem is being resolved.
>> But a new issue rises. When I start the dfs, I found a line namenode log
>> - "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
>> mean.
>> After it nothing happens. JPS shows that datanode is running but the web
>> interface for dfshealth is not running.
>> Any Idea?
>> Please help.
>>
>>
>> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:
>>
>>> That's the one I use too; I think it's on the Apache web site.
>>>
>>> Sent from my iPhone
>>>
>>> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>>>
>>> Here is a sample I stole from the web and modified slightly... I think.
>>>
>>> #!/bin/bash
>>> # bash is required: the ar=( ... ) array syntax below fails under plain sh
>>> HADOOP_CONF=/etc/hadoop/conf
>>>
>>> while [ $# -gt 0 ] ; do
>>>   nodeArg=$1
>>>   exec< ${HADOOP_CONF}/rack_info.txt
>>>   result=""
>>>   while read line ; do
>>>     ar=( $line )
>>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>>       result="${ar[1]}"
>>>     fi
>>>   done
>>>   shift
>>>   if [ -z "$result" ] ; then
>>>     echo -n "/default/rack "
>>>   else
>>>     echo -n "$result "
>>>   fi
>>>
>>> done
>>>
>>>
>>> The rack_info.txt file contains all hostname AND IP addresses for each
>>> node:
>>> 10.10.10.10  /dc1/rack1
>>> 10.10.10.11  /dc1/rack2
>>> datanode1  /dc1/rack1
>>> datanode2  /dc1/rack2
>>> ... etc.
>>>
>>>
>>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>>>
>>>> Look between the <code> blocks starting at line 1336.
>>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>>> documentation with a future Hadoop release. :)
>>>>
>>>> -- Adam
>>>>
>>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
>>>> >
>>>>  wrote:
>>>>
>>>> > If anybody have sample (topology.script.file.name) script then
>>>> please share it.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > @chris, I have tested it outside. It is working fine.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>> > Error in script.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com>
>>>> wrote:
>>>> > Your script has an error in it.  Please test your script using both
>>>> IP Addresses and Names, outside of hadoop.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > I have done this and found the following error in the log -
>>>> >
>>>> >
>>>> > 2013-05-08 18:53:45,221 WARN
>>>> org.apache.hadoop.net.ScriptBasedMapping: Exception running
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>>> error: "(" unexpected (expecting "done")
>>>> >
>>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>>> >       at
>>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>>> >       at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>>> >       at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>>> >       at
>>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>>> >       at
>>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>>> >       at
>>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>>> >       at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> >       at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>>> > 2013-05-08 18:53:45,223 ERROR
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>>> call returned null! Using /default-rack for host [127.0.0.1]
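The `Syntax error: "(" unexpected (expecting "done")` in that trace is characteristic of dash (the `/bin/sh` on many Debian/Ubuntu systems) rejecting bash array syntax, which suggests the rack script was executed without a `#!/bin/bash` shebang. A minimal reproduction of the difference (this root cause is an assumption; the failing script itself is not shown in the thread):

```shell
# ar=( ... ) is a bash array; POSIX sh/dash has no arrays and rejects the "(".
snippet='ar=( one two ); echo "${ar[0]}"'
bash -c "$snippet"              # prints: one
sh -c "$snippet" 2>&1 || true   # under dash: Syntax error: "(" unexpected
```

If that is indeed the cause, starting the topology script with `#!/bin/bash` and marking it executable keeps it from being handed to plain `sh`.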
>>>> >
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>> > You can put this parameter in core-site.xml or hdfs-site.xml.
>>>> > Both are parsed during HDFS startup.
>>>> >
>>>> > Leonid
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > Hello everyone,
>>>> >     I was searching for how to make the hadoop cluster rack-aware and
>>>> I find out from here
>>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "
>>>> topology.script.file.name". But here it is not written where to put
>>>> this
>>>> > <property>
>>>> >               <name>topology.script.file.name</name>
>>>> >
>>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>>> > </property>
>>>> >
>>>> > Means in which configuration file.
>>>> > I am using hadoop-2.0.3-alpha.
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>>
>>>>
>>>
>>
>>
>> --
>> *With regards ---*
>> *Mohammad Mustaqeem*,
>> M.Tech (CSE)
>> MNNIT Allahabad
>> 9026604270
>>
>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem is being resolved.
But a new issue arise. When I start the dfs, I got the a line in namenode
log - "2013-05-09 15:29:44, 270 INFO logs: Aliases are enabled" What its
mean?
After it, nothing happens. JPS shows that datanode is running but web
interface for dfshealth is not running.
Any Idea?
Please help.


On Thu, May 9, 2013 at 10:00 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help
>
>
> On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
> > wrote:
>
>> That problem is being resolved.
>> But a new issue rises. When I start the dfs, I found a line namenode log
>> - "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
>> mean.
>> After it nothing happens. JPS shows that datanode is running but the web
>> interface for dfshealth is not running.
>> Any Idea?
>> Please help.
>>
>>
>> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:
>>
>>> That's one I use too I think it's on apache web site
>>>
>>> Sent from my iPhone
>>>
>>> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>>>
>>> Here is a sample I stole from the web and modified slightly... I think.
>>>
>>> HADOOP_CONF=/etc/hadoop/conf
>>>
>>> while [ $# -gt 0 ] ; do
>>>   nodeArg=$1
>>>   exec< ${HADOOP_CONF}/rack_info.txt
>>>   result=""
>>>   while read line ; do
>>>     ar=( $line )
>>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>>       result="${ar[1]}"
>>>     fi
>>>   done
>>>   shift
>>>   if [ -z "$result" ] ; then
>>>     echo -n "/default/rack "
>>>   else
>>>     echo -n "$result "
>>>   fi
>>>
>>> done
>>>
>>>
>>> The rack_info.txt file contains all hostname AND IP addresses for each
>>> node:
>>> 10.10.10.10  /dc1/rack1
>>> 10.10.10.11  /dc1/rack2
>>> datanode1  /dc1/rack1
>>> datanode2  /dc1/rack2
>>> .. etch.
>>>
>>>
>>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>>>
>>>> Look between the <code> blocks starting at line 1336.
>>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>>> documentation with a future Hadoop release. :)
>>>>
>>>> -- Adam
>>>>
>>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
>>>> >
>>>>  wrote:
>>>>
>>>> > If anybody have sample (topology.script.file.name) script then
>>>> please share it.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > @chris, I have test it outside. It is working fine.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>> > Error in script.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com>
>>>> wrote:
>>>> > Your script has an error in it.  Please test your script using both
>>>> IP Addresses and Names, outside of hadoop.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > I have done this and found following error in log -
>>>> >
>>>> >
>>>> > 2013-05-08 18:53:45,221 WARN
>>>> org.apache.hadoop.net.ScriptBasedMapping: Exception running
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>>> error: "(" unexpected (expecting "done")
>>>> >
>>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>>> >       at
>>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>>> >       at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>>> >       at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>>> >       at
>>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>>> >       at
>>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>>> >       at
>>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>>> >       at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> >       at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>>> > 2013-05-08 18:53:45,223 ERROR
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>>> call returned null! Using /default-rack for host [127.0.0.1]
>>>> >
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>> > You can put this parameter to core-site.xml or hdfs-site.xml
>>>> > It both parsed during the HDFS startup.
>>>> >
>>>> > Leonid
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > Hello everyone,
>>>> >     I was searching for how to make the hadoop cluster rack-aware and
>>>> I find out from here
>>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awarenessthat we can do this by giving property of "
>>>> topology.script.file.name". But here it is not written where to put
>>>> this
>>>> > <property>
>>>> >               <name>topology.script.file.name</name>
>>>> >
>>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>>> > </property>
>>>> >
>>>> > Means in which configuration file.
>>>> > I am using hadoop-2.0.3-alpha.
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>>
>>>>
>>>
>>
>>
>> --
>> *With regards ---*
>> *Mohammad Mustaqeem*,
>> M.Tech (CSE)
>> MNNIT Allahabad
>> 9026604270
>>
>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem is being resolved.
But a new issue arise. When I start the dfs, I got the a line in namenode
log - "2013-05-09 15:29:44, 270 INFO logs: Aliases are enabled" What its
mean?
After it, nothing happens. JPS shows that datanode is running but web
interface for dfshealth is not running.
Any Idea?
Please help.


On Thu, May 9, 2013 at 10:00 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help
>
>
> On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
> > wrote:
>
>> That problem is being resolved.
>> But a new issue rises. When I start the dfs, I found a line namenode log
>> - "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
>> mean.
>> After it nothing happens. JPS shows that datanode is running but the web
>> interface for dfshealth is not running.
>> Any Idea?
>> Please help.
>>
>>
>> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:
>>
>>> That's one I use too I think it's on apache web site
>>>
>>> Sent from my iPhone
>>>
>>> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>>>
>>> Here is a sample I stole from the web and modified slightly... I think.
>>>
>>> HADOOP_CONF=/etc/hadoop/conf
>>>
>>> while [ $# -gt 0 ] ; do
>>>   nodeArg=$1
>>>   exec< ${HADOOP_CONF}/rack_info.txt
>>>   result=""
>>>   while read line ; do
>>>     ar=( $line )
>>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>>       result="${ar[1]}"
>>>     fi
>>>   done
>>>   shift
>>>   if [ -z "$result" ] ; then
>>>     echo -n "/default/rack "
>>>   else
>>>     echo -n "$result "
>>>   fi
>>>
>>> done
>>>
>>>
>>> The rack_info.txt file contains all hostname AND IP addresses for each
>>> node:
>>> 10.10.10.10  /dc1/rack1
>>> 10.10.10.11  /dc1/rack2
>>> datanode1  /dc1/rack1
>>> datanode2  /dc1/rack2
>>> .. etch.
>>>
>>>
>>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>>>
>>>> Look between the <code> blocks starting at line 1336.
>>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>>> documentation with a future Hadoop release. :)
>>>>
>>>> -- Adam
>>>>
>>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
>>>> >
>>>>  wrote:
>>>>
>>>> > If anybody have sample (topology.script.file.name) script then
>>>> please share it.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > @chris, I have test it outside. It is working fine.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>> > Error in script.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com>
>>>> wrote:
>>>> > Your script has an error in it.  Please test your script using both
>>>> IP Addresses and Names, outside of hadoop.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > I have done this and found following error in log -
>>>> >
>>>> >
>>>> > 2013-05-08 18:53:45,221 WARN
>>>> org.apache.hadoop.net.ScriptBasedMapping: Exception running
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>>> error: "(" unexpected (expecting "done")
>>>> >
>>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>>> >       at
>>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>>> >       at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>>> >       at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>>> >       at
>>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>>> >       at
>>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>>> >       at
>>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>>> >       at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> >       at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>>> > 2013-05-08 18:53:45,223 ERROR
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>>> call returned null! Using /default-rack for host [127.0.0.1]
>>>> >
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>> > You can put this parameter to core-site.xml or hdfs-site.xml
>>>> > It both parsed during the HDFS startup.
>>>> >
>>>> > Leonid
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > Hello everyone,
>>>> >     I was searching for how to make the hadoop cluster rack-aware and
>>>> I find out from here
>>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awarenessthat we can do this by giving property of "
>>>> topology.script.file.name". But here it is not written where to put
>>>> this
>>>> > <property>
>>>> >               <name>topology.script.file.name</name>
>>>> >
>>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>>> > </property>
>>>> >
>>>> > Means in which configuration file.
>>>> > I am using hadoop-2.0.3-alpha.
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>>
>>>>
>>>
>>
>>
>> --
>> *With regards ---*
>> *Mohammad Mustaqeem*,
>> M.Tech (CSE)
>> MNNIT Allahabad
>> 9026604270
>>
>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem is being resolved.
But a new issue arise. When I start the dfs, I got the a line in namenode
log - "2013-05-09 15:29:44, 270 INFO logs: Aliases are enabled" What its
mean?
After it, nothing happens. JPS shows that datanode is running but web
interface for dfshealth is not running.
Any Idea?
Please help.


On Thu, May 9, 2013 at 10:00 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help
>
>
> On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
> > wrote:
>
>> That problem is being resolved.
>> But a new issue rises. When I start the dfs, I found a line namenode log
>> - "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
>> mean.
>> After it nothing happens. JPS shows that datanode is running but the web
>> interface for dfshealth is not running.
>> Any Idea?
>> Please help.
>>
>>
>> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:
>>
>>> That's one I use too I think it's on apache web site
>>>
>>> Sent from my iPhone
>>>
>>> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>>>
>>> Here is a sample I stole from the web and modified slightly... I think.
>>>
>>> HADOOP_CONF=/etc/hadoop/conf
>>>
>>> while [ $# -gt 0 ] ; do
>>>   nodeArg=$1
>>>   exec< ${HADOOP_CONF}/rack_info.txt
>>>   result=""
>>>   while read line ; do
>>>     ar=( $line )
>>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>>       result="${ar[1]}"
>>>     fi
>>>   done
>>>   shift
>>>   if [ -z "$result" ] ; then
>>>     echo -n "/default/rack "
>>>   else
>>>     echo -n "$result "
>>>   fi
>>>
>>> done
>>>
>>>
>>> The rack_info.txt file contains all hostname AND IP addresses for each
>>> node:
>>> 10.10.10.10  /dc1/rack1
>>> 10.10.10.11  /dc1/rack2
>>> datanode1  /dc1/rack1
>>> datanode2  /dc1/rack2
>>> .. etch.
>>>
>>>
>>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>>>
>>>> Look between the <code> blocks starting at line 1336.
>>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>>> documentation with a future Hadoop release. :)
>>>>
>>>> -- Adam
>>>>
>>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
>>>> >
>>>>  wrote:
>>>>
>>>> > If anybody have sample (topology.script.file.name) script then
>>>> please share it.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > @chris, I have test it outside. It is working fine.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>> > Error in script.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com>
>>>> wrote:
>>>> > Your script has an error in it.  Please test your script using both
>>>> IP Addresses and Names, outside of hadoop.
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > I have done this and found following error in log -
>>>> >
>>>> >
>>>> > 2013-05-08 18:53:45,221 WARN
>>>> org.apache.hadoop.net.ScriptBasedMapping: Exception running
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>>> error: "(" unexpected (expecting "done")
>>>> >
>>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>>> >       at
>>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>>> >       at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>>> >       at
>>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>>> >       at
>>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>>> >       at
>>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>>> >       at
>>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>>> >       at
>>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>>> >       at
>>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> >       at
>>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>>> > 2013-05-08 18:53:45,223 ERROR
>>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>>> call returned null! Using /default-rack for host [127.0.0.1]
>>>> >
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>> > You can put this parameter in core-site.xml or hdfs-site.xml.
>>>> > Both are parsed during HDFS startup.
>>>> >
>>>> > Leonid
>>>> >
>>>> >
>>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>> > Hello everyone,
>>>> >     I was searching for how to make the hadoop cluster rack-aware and
>>>> I find out from here
>>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "
>>>> topology.script.file.name". But here it is not written where to put
>>>> this
>>>> > <property>
>>>> >               <name>topology.script.file.name</name>
>>>> >
>>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>>> > </property>
>>>> >
>>>> > Means in which configuration file.
>>>> > I am using hadoop-2.0.3-alpha.
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>> >
>>>> >
>>>> >
>>>> > --
>>>> > With regards ---
>>>> > Mohammad Mustaqeem,
>>>> > M.Tech (CSE)
>>>> > MNNIT Allahabad
>>>> > 9026604270
>>>> >
>>>> >
>>>>
>>>>
>>>
>>
>>
>> --
>> *With regards ---*
>> *Mohammad Mustaqeem*,
>> M.Tech (CSE)
>> MNNIT Allahabad
>> 9026604270
>>
>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270
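
The advice above, to test the topology script outside Hadoop with both IP addresses and hostnames, can be sketched as a small harness. The stub script and host list below are illustrative stand-ins, not part of the original thread; substitute your real rack.sh and your cluster's addresses.

```shell
#!/bin/sh
# Exercise a topology script by hand, passing both IPs and hostnames,
# since the NameNode may hand the script either form.
# The stub script below is a trivial stand-in that maps everything to
# /default-rack; replace TOPO with the path to your real rack.sh.
TOPO=$(mktemp)
cat > "$TOPO" <<'EOF'
#!/bin/sh
for arg in "$@"; do printf '/default-rack '; done
echo
EOF
chmod +x "$TOPO"

for host in 127.0.0.1 10.10.10.10 datanode1; do
  out=$(sh "$TOPO" "$host") || { echo "script failed on $host" >&2; exit 1; }
  echo "$host -> $out"
done
```

If the loop prints a rack for every host and never hits the failure branch, the script is at least syntactically sound under the same /bin/sh the NameNode will use.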

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem has been resolved.
But a new issue has come up. When I start the dfs, I found this line in the
namenode log - "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled".
What does it mean?
After that, nothing happens. jps shows that the datanode is running, but the
web interface for dfshealth is not running.
Any idea?
Please help.


On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help.
>
>
> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:
>
>> That's one I use too I think it's on apache web site
>>
>> Sent from my iPhone
>>
>> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>>
>> Here is a sample I stole from the web and modified slightly... I think.
>>
>> HADOOP_CONF=/etc/hadoop/conf
>>
>> while [ $# -gt 0 ] ; do
>>   nodeArg=$1
>>   exec< ${HADOOP_CONF}/rack_info.txt
>>   result=""
>>   while read line ; do
>>     ar=( $line )
>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>       result="${ar[1]}"
>>     fi
>>   done
>>   shift
>>   if [ -z "$result" ] ; then
>>     echo -n "/default/rack "
>>   else
>>     echo -n "$result "
>>   fi
>>
>> done
>>
>>
>> The rack_info.txt file contains all hostname AND IP addresses for each
>> node:
>> 10.10.10.10  /dc1/rack1
>> 10.10.10.11  /dc1/rack2
>> datanode1  /dc1/rack1
>> datanode2  /dc1/rack2
>> .. etc.
>>
>>
>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>>
>>> Look between the <code> blocks starting at line 1336.
>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>> documentation with a future Hadoop release. :)
>>>
>>> -- Adam
>>>
>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
>>>  wrote:
>>>
>>> > If anybody has a sample (topology.script.file.name) script then please
>>> share it.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > @chris, I have tested it outside. It is working fine.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>> lfedotov@hortonworks.com> wrote:
>>> > Error in script.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com>
>>> wrote:
>>> > Your script has an error in it.  Please test your script using both IP
>>> Addresses and Names, outside of hadoop.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > I have done this and found following error in log -
>>> >
>>> >
>>> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>> Exception running
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>> error: "(" unexpected (expecting "done")
>>> >
>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>> >       at
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>> >       at
>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>> >       at
>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>> >       at
>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>> >       at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>> >       at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>> > 2013-05-08 18:53:45,223 ERROR
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>> call returned null! Using /default-rack for host [127.0.0.1]
>>> >
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>> lfedotov@hortonworks.com> wrote:
>>> > You can put this parameter in core-site.xml or hdfs-site.xml.
>>> > Both are parsed during HDFS startup.
>>> >
>>> > Leonid
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > Hello everyone,
>>> >     I was searching for how to make the hadoop cluster rack-aware and
>>> I find out from here
>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "
>>> topology.script.file.name". But here it is not written where to put this
>>> > <property>
>>> >               <name>topology.script.file.name</name>
>>> >
>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>> > </property>
>>> >
>>> > Means in which configuration file.
>>> > I am using hadoop-2.0.3-alpha.
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>>
>>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270
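
The `Syntax error: "(" unexpected (expecting "done")` in the log above is the classic symptom of running a bash array construct such as `ar=( $line )` under /bin/sh when it is dash. A portable rewrite of the sample mapper that sticks to POSIX sh could look like the sketch below; the mapping-file path, file name, and host-to-rack entries are assumptions for illustration, not values from the thread.

```shell
#!/bin/sh
# POSIX-sh topology mapper: no bash arrays, so it also runs under dash.
# Each MAP_FILE line is "<host-or-ip> <rack>"; the default path is an
# assumption, override it with the MAP_FILE environment variable.
MAP_FILE=${MAP_FILE:-/etc/hadoop/conf/rack_info.txt}

resolve_racks() {
  for nodeArg in "$@"; do
    result=""
    # read splits each line into host and rack on whitespace; no arrays needed
    while read -r host rack; do
      [ "$host" = "$nodeArg" ] && result="$rack"
    done < "$MAP_FILE"
    if [ -z "$result" ]; then
      printf '/default-rack '
    else
      printf '%s ' "$result"
    fi
  done
  echo
}

# Demo against a throwaway mapping file so the sketch is self-contained.
MAP_FILE=$(mktemp)
cat > "$MAP_FILE" <<'EOF'
10.10.10.10 /dc1/rack1
datanode1 /dc1/rack1
datanode2 /dc1/rack2
EOF
resolve_racks datanode2 10.10.10.10 unknown-host
# prints: /dc1/rack2 /dc1/rack1 /default-rack
```

To wire it in, point topology.script.file.name at the script in core-site.xml on the NameNode and make sure the file is executable (`chmod +x`).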

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem is being resolved.
But a new issue rises. When I start the dfs, I found a line namenode
log - "2013-05-09
15:29:44,270 INFO logs: Aliases are enabled" What does it mean.
After it nothing happens. JPS shows that datanode is running but the web
interface for dfshealth is not running.
Any Idea?
Please help


On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help.
>
>
> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:
>
>> That's one I use too I think it's on apache web site
>>
>> Sent from my iPhone
>>
>> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>>
>> Here is a sample I stole from the web and modified slightly... I think.
>>
>> HADOOP_CONF=/etc/hadoop/conf
>>
>> while [ $# -gt 0 ] ; do
>>   nodeArg=$1
>>   exec< ${HADOOP_CONF}/rack_info.txt
>>   result=""
>>   while read line ; do
>>     ar=( $line )
>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>       result="${ar[1]}"
>>     fi
>>   done
>>   shift
>>   if [ -z "$result" ] ; then
>>     echo -n "/default/rack "
>>   else
>>     echo -n "$result "
>>   fi
>>
>> done
>>
>>
>> The rack_info.txt file contains all hostname AND IP addresses for each
>> node:
>> 10.10.10.10  /dc1/rack1
>> 10.10.10.11  /dc1/rack2
>> datanode1  /dc1/rack1
>> datanode2  /dc1/rack2
>> .. etch.
>>
>>
>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>>
>>> Look between the <code> blocks starting at line 1336.
>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>> documentation with a future Hadoop release. :)
>>>
>>> -- Adam
>>>
>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
>>>  wrote:
>>>
>>> > If anybody have sample (topology.script.file.name) script then please
>>> share it.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > @chris, I have test it outside. It is working fine.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>> lfedotov@hortonworks.com> wrote:
>>> > Error in script.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com>
>>> wrote:
>>> > Your script has an error in it.  Please test your script using both IP
>>> Addresses and Names, outside of hadoop.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > I have done this and found following error in log -
>>> >
>>> >
>>> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>> Exception running
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>> error: "(" unexpected (expecting "done")
>>> >
>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>> >       at
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>> >       at
>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>> >       at
>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>> >       at
>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>> >       at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>> >       at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>> > 2013-05-08 18:53:45,223 ERROR
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>> call returned null! Using /default-rack for host [127.0.0.1]
>>> >
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>> lfedotov@hortonworks.com> wrote:
>>> > You can put this parameter to core-site.xml or hdfs-site.xml
>>> > It both parsed during the HDFS startup.
>>> >
>>> > Leonid
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > Hello everyone,
>>> >     I was searching for how to make the hadoop cluster rack-aware and
>>> I find out from here
>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awarenessthat we can do this by giving property of "
>>> topology.script.file.name". But here it is not written where to put this
>>> > <property>
>>> >               <name>topology.script.file.name</name>
>>> >
>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>> > </property>
>>> >
>>> > Means in which configuration file.
>>> > I am using hadoop-2.0.3-alpha.
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>>
>>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem is being resolved.
But a new issue rises. When I start the dfs, I found a line namenode
log - "2013-05-09
15:29:44,270 INFO logs: Aliases are enabled" What does it mean.
After it nothing happens. JPS shows that datanode is running but the web
interface for dfshealth is not running.
Any Idea?
Please help


On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help.
>
>
> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:
>
>> That's one I use too I think it's on apache web site
>>
>> Sent from my iPhone
>>
>> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>>
>> Here is a sample I stole from the web and modified slightly... I think.
>>
>> HADOOP_CONF=/etc/hadoop/conf
>>
>> while [ $# -gt 0 ] ; do
>>   nodeArg=$1
>>   exec< ${HADOOP_CONF}/rack_info.txt
>>   result=""
>>   while read line ; do
>>     ar=( $line )
>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>       result="${ar[1]}"
>>     fi
>>   done
>>   shift
>>   if [ -z "$result" ] ; then
>>     echo -n "/default/rack "
>>   else
>>     echo -n "$result "
>>   fi
>>
>> done
>>
>>
>> The rack_info.txt file contains all hostname AND IP addresses for each
>> node:
>> 10.10.10.10  /dc1/rack1
>> 10.10.10.11  /dc1/rack2
>> datanode1  /dc1/rack1
>> datanode2  /dc1/rack2
>> .. etch.
>>
>>
>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>>
>>> Look between the <code> blocks starting at line 1336.
>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>> documentation with a future Hadoop release. :)
>>>
>>> -- Adam
>>>
>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
>>>  wrote:
>>>
>>> > If anybody have sample (topology.script.file.name) script then please
>>> share it.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > @chris, I have test it outside. It is working fine.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>> lfedotov@hortonworks.com> wrote:
>>> > Error in script.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com>
>>> wrote:
>>> > Your script has an error in it.  Please test your script using both IP
>>> Addresses and Names, outside of hadoop.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > I have done this and found following error in log -
>>> >
>>> >
>>> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>> Exception running
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>> error: "(" unexpected (expecting "done")
>>> >
>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>> >       at
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>> >       at
>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>> >       at
>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>> >       at
>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>> >       at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>> >       at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>> > 2013-05-08 18:53:45,223 ERROR
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>> call returned null! Using /default-rack for host [127.0.0.1]
>>> >
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>> lfedotov@hortonworks.com> wrote:
>>> > You can put this parameter to core-site.xml or hdfs-site.xml
>>> > It both parsed during the HDFS startup.
>>> >
>>> > Leonid
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > Hello everyone,
>>> >     I was searching for how to make the hadoop cluster rack-aware and
>>> I find out from here
>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awarenessthat we can do this by giving property of "
>>> topology.script.file.name". But here it is not written where to put this
>>> > <property>
>>> >               <name>topology.script.file.name</name>
>>> >
>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>> > </property>
>>> >
>>> > Means in which configuration file.
>>> > I am using hadoop-2.0.3-alpha.
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>>
>>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem is being resolved.
But a new issue rises. When I start the dfs, I found a line namenode
log - "2013-05-09
15:29:44,270 INFO logs: Aliases are enabled" What does it mean.
After it nothing happens. JPS shows that datanode is running but the web
interface for dfshealth is not running.
Any Idea?
Please help


On Thu, May 9, 2013 at 3:37 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> That problem is being resolved.
> But a new issue rises. When I start the dfs, I found a line namenode log -
> "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled" What does it
> mean.
> After it nothing happens. JPS shows that datanode is running but the web
> interface for dfshealth is not running.
> Any Idea?
> Please help.
>
>
> On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:
>
>> That's one I use too I think it's on apache web site
>>
>> Sent from my iPhone
>>
>> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>>
>> Here is a sample I stole from the web and modified slightly... I think.
>>
>> HADOOP_CONF=/etc/hadoop/conf
>>
>> while [ $# -gt 0 ] ; do
>>   nodeArg=$1
>>   exec< ${HADOOP_CONF}/rack_info.txt
>>   result=""
>>   while read line ; do
>>     ar=( $line )
>>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>>       result="${ar[1]}"
>>     fi
>>   done
>>   shift
>>   if [ -z "$result" ] ; then
>>     echo -n "/default/rack "
>>   else
>>     echo -n "$result "
>>   fi
>>
>> done
>>
>>
>> The rack_info.txt file contains all hostname AND IP addresses for each
>> node:
>> 10.10.10.10  /dc1/rack1
>> 10.10.10.11  /dc1/rack2
>> datanode1  /dc1/rack1
>> datanode2  /dc1/rack2
>> .. etch.
>>
>>
>> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>>
>>> Look between the <code> blocks starting at line 1336.
>>> http://lnkd.in/rJsqpV   Some day it will get included in the
>>> documentation with a future Hadoop release. :)
>>>
>>> -- Adam
>>>
>>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
>>>  wrote:
>>>
>>> > If anybody have sample (topology.script.file.name) script then please
>>> share it.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > @chris, I have test it outside. It is working fine.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>>> lfedotov@hortonworks.com> wrote:
>>> > Error in script.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com>
>>> wrote:
>>> > Your script has an error in it.  Please test your script using both IP
>>> Addresses and Names, outside of hadoop.
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > I have done this and found following error in log -
>>> >
>>> >
>>> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping:
>>> Exception running
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>> > org.apache.hadoop.util.Shell$ExitCodeException:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>>> error: "(" unexpected (expecting "done")
>>> >
>>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>> >       at
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>> >       at
>>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>> >       at
>>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>> >       at
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>> >       at
>>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>> >       at
>>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>> >       at
>>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>> >       at
>>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>> >       at java.security.AccessController.doPrivileged(Native Method)
>>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>>> >       at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>> > 2013-05-08 18:53:45,223 ERROR
>>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>>> call returned null! Using /default-rack for host [127.0.0.1]
>>> >
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>> lfedotov@hortonworks.com> wrote:
>>> > You can put this parameter to core-site.xml or hdfs-site.xml
>>> > It both parsed during the HDFS startup.
>>> >
>>> > Leonid
>>> >
>>> >
>>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>> > Hello everyone,
>>> >     I was searching for how to make the hadoop cluster rack-aware and
>>> I find out from here
>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awarenessthat we can do this by giving property of "
>>> topology.script.file.name". But here it is not written where to put this
>>> > <property>
>>> >               <name>topology.script.file.name</name>
>>> >
>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>> > </property>
>>> >
>>> > Means in which configuration file.
>>> > I am using hadoop-2.0.3-alpha.
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > With regards ---
>>> > Mohammad Mustaqeem,
>>> > M.Tech (CSE)
>>> > MNNIT Allahabad
>>> > 9026604270
>>> >
>>> >
>>>
>>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270
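To the original question, the answer given in the thread is core-site.xml (hdfs-site.xml is also parsed at HDFS startup). A minimal sketch of the property block, with a hypothetical script path; note that on Hadoop 2.x the preferred key is net.topology.script.file.name, while topology.script.file.name is kept as a deprecated alias:

```xml
<!-- core-site.xml on the NameNode; /etc/hadoop/conf/rack.sh is a
     hypothetical path, substitute your own. -->
<property>
  <name>net.topology.script.file.name</name>
  <value>/etc/hadoop/conf/rack.sh</value>
</property>
```

The script must be executable by the NameNode user, and the NameNode must be restarted for the change to take effect.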

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
That problem has been resolved.
But a new issue has come up. When I start the DFS, I see this line in the
NameNode log: "2013-05-09 15:29:44,270 INFO logs: Aliases are enabled". What
does it mean?
After that, nothing happens. JPS shows that the DataNode is running, but the
web interface for dfshealth is not reachable.
Any idea?
Please help.


On Thu, May 9, 2013 at 2:22 AM, Serge Blazhievsky <ha...@gmail.com>wrote:

> That's one I use too I think it's on apache web site
>
> Sent from my iPhone
>
> On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:
>
> Here is a sample I stole from the web and modified slightly... I think.
>
> HADOOP_CONF=/etc/hadoop/conf
>
> while [ $# -gt 0 ] ; do
>   nodeArg=$1
>   exec< ${HADOOP_CONF}/rack_info.txt
>   result=""
>   while read line ; do
>     ar=( $line )
>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>       result="${ar[1]}"
>     fi
>   done
>   shift
>   if [ -z "$result" ] ; then
>     echo -n "/default/rack "
>   else
>     echo -n "$result "
>   fi
>
> done
>
>
> The rack_info.txt file contains all hostname AND IP addresses for each
> node:
> 10.10.10.10  /dc1/rack1
> 10.10.10.11  /dc1/rack2
> datanode1  /dc1/rack1
> datanode2  /dc1/rack2
> .. etc.
>
>
> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>
>> Look between the <code> blocks starting at line 1336.
>> http://lnkd.in/rJsqpV   Some day it will get included in the
>> documentation with a future Hadoop release. :)
>>
>> -- Adam
>>
>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
>>  wrote:
>>
>> > If anybody has a sample (topology.script.file.name) script then please
>> share it.
>> >
>> >
>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
>> 3m.mustaqeem@gmail.com> wrote:
>> > @chris, I have tested it outside. It is working fine.
>> >
>> >
>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <
>> lfedotov@hortonworks.com> wrote:
>> > Error in script.
>> >
>> >
>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:
>> > Your script has an error in it.  Please test your script using both IP
>> Addresses and Names, outside of hadoop.
>> >
>> >
>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>> 3m.mustaqeem@gmail.com> wrote:
>> > I have done this and found following error in log -
>> >
>> >
>> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping:
>> Exception running
>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>> > org.apache.hadoop.util.Shell$ExitCodeException:
>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
>> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
>> error: "(" unexpected (expecting "done")
>> >
>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>> >       at
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>> >       at
>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>> >       at
>> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>> >       at
>> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>> >       at
>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>> >       at
>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>> >       at
>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>> >       at
>> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>> >       at
>> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>> >       at
>> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>> >       at
>> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>> >       at java.security.AccessController.doPrivileged(Native Method)
>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>> >       at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>> > 2013-05-08 18:53:45,223 ERROR
>> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
>> call returned null! Using /default-rack for host [127.0.0.1]
>> >
>> >
>> >
>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>> lfedotov@hortonworks.com> wrote:
>> > You can put this parameter to core-site.xml or hdfs-site.xml
>> > Both are parsed during HDFS startup.
>> >
>> > Leonid
>> >
>> >
>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>> 3m.mustaqeem@gmail.com> wrote:
>> > Hello everyone,
>> >     I was searching for how to make the hadoop cluster rack-aware and I
>> find out from here
>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "
>> topology.script.file.name". But here it is not written where to put this
>> > <property>
>> >               <name>topology.script.file.name</name>
>> >
>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>> > </property>
>> >
>> > Means in which configuration file.
>> > I am using hadoop-2.0.3-alpha.
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>>
>>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270
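A note on the 'Syntax error: "(" unexpected (expecting "done")' seen earlier in the thread: that is what dash prints when a script using bash arrays (`ar=( $line )`) is executed as /bin/sh. Either give the script a `#!/bin/bash` shebang or avoid arrays entirely. A minimal array-free sketch of the same lookup, assuming a mapping file with "host rack-path" lines (the file name and function name here are hypothetical, not from the thread):

```shell
#!/bin/sh
# POSIX-sh-safe rack lookup: no bash arrays, so it also works when
# invoked via /bin/sh (dash). Takes the mapping file as the first
# argument, then one or more hostnames/IPs to resolve.
resolve_rack() {
  mapf=$1
  shift
  for nodeArg in "$@"; do
    # First matching line wins; empty result falls back to the default rack.
    result=$(awk -v h="$nodeArg" '$1 == h { print $2; exit }' "$mapf")
    printf '%s ' "${result:-/default-rack}"
  done
}
```

Usage mirrors Hadoop's contract: the script is called with one or more node names and must print one rack path per argument, space-separated.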

Re: Rack Aware Hadoop cluster

Posted by Serge Blazhievsky <ha...@gmail.com>.
That's the one I use too; I think it's on the Apache web site.

Sent from my iPhone

On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:

> Here is a sample I stole from the web and modified slightly... I think.  
> 
> HADOOP_CONF=/etc/hadoop/conf
> 
> while [ $# -gt 0 ] ; do
>   nodeArg=$1
>   exec< ${HADOOP_CONF}/rack_info.txt
>   result=""
>   while read line ; do
>     ar=( $line )
>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>       result="${ar[1]}"
>     fi
>   done
>   shift
>   if [ -z "$result" ] ; then
>     echo -n "/default/rack "
>   else
>     echo -n "$result "
>   fi
> 
> done
> 
> 
> The rack_info.txt file contains all hostname AND IP addresses for each node:
> 10.10.10.10  /dc1/rack1
> 10.10.10.11  /dc1/rack2
> datanode1  /dc1/rack1
> datanode2  /dc1/rack2
> .. etc.
> 
> 
> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>> Look between the <code> blocks starting at line 1336.  http://lnkd.in/rJsqpV   Some day it will get included in the documentation with a future Hadoop release. :)
>> 
>> -- Adam
>> 
>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
>>  wrote:
>> 
>> > If anybody has a sample (topology.script.file.name) script then please share it.
>> >
>> >
>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
>> > @chris, I have tested it outside. It is working fine.
>> >
>> >
>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <lf...@hortonworks.com> wrote:
>> > Error in script.
>> >
>> >
>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:
>> > Your script has an error in it.  Please test your script using both IP Addresses and Names, outside of hadoop.
>> >
>> >
>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
>> > I have done this and found following error in log -
>> >
>> >
>> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>> > org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
>> >
>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>> >       at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>> >       at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>> >       at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>> >       at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>> >       at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>> >       at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>> >       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>> >       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>> >       at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>> >       at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>> >       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>> >       at java.security.AccessController.doPrivileged(Native Method)
>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>> >       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>> > 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
>> >
>> >
>> >
>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lf...@hortonworks.com> wrote:
>> > You can put this parameter to core-site.xml or hdfs-site.xml
>> > Both are parsed during HDFS startup.
>> >
>> > Leonid
>> >
>> >
>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
>> > Hello everyone,
>> >     I was searching for how to make the hadoop cluster rack-aware and I find out from here http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "topology.script.file.name". But here it is not written where to put this
>> > <property>
>> >               <name>topology.script.file.name</name>
>> >               <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>> > </property>
>> >
>> > Means in which configuration file.
>> > I am using hadoop-2.0.3-alpha.
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
> 
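
The answers above boil down to placing the property in core-site.xml on the NameNode. A minimal sketch (the script path is the one used in this thread and is only illustrative; adjust it to your install):

```xml
<!-- core-site.xml on the NameNode; the value is an illustrative path -->
<property>
  <name>topology.script.file.name</name>
  <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
</property>
```

After restarting HDFS, "hdfs dfsadmin -printTopology" should list each DataNode under its resolved rack; nodes the script cannot resolve fall back to /default-rack.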

Re: Rack Aware Hadoop cluster

Posted by Serge Blazhievsky <ha...@gmail.com>.
That's the one I use too; I think it's on the Apache web site.

Sent from my iPhone

On May 8, 2013, at 1:49 PM, Chris Embree <ce...@gmail.com> wrote:

> Here is a sample I stole from the web and modified slightly... I think.  
> 
> HADOOP_CONF=/etc/hadoop/conf
> 
> while [ $# -gt 0 ] ; do
>   nodeArg=$1
>   exec< ${HADOOP_CONF}/rack_info.txt
>   result=""
>   while read line ; do
>     ar=( $line )
>     if [ "${ar[0]}" = "$nodeArg" ] ; then
>       result="${ar[1]}"
>     fi
>   done
>   shift
>   if [ -z "$result" ] ; then
>     echo -n "/default/rack "
>   else
>     echo -n "$result "
>   fi
> 
> done
> 
> 
> The rack_info.txt file contains all hostname AND IP addresses for each node:
> 10.10.10.10  /dc1/rack1
> 10.10.10.11  /dc1/rack2
> datanode1  /dc1/rack1
> datanode2  /dc1/rack2
> .. etc.
> 
> 
> On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:
>> Look between the <code> blocks starting at line 1336.  http://lnkd.in/rJsqpV   Some day it will get included in the documentation with a future Hadoop release. :)
>> 
>> -- Adam
>> 
>> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
>>  wrote:
>> 
>> > If anybody has a sample (topology.script.file.name) script then please share it.
>> >
>> >
>> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
>> > @chris, I have tested it outside. It is working fine.
>> >
>> >
>> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <lf...@hortonworks.com> wrote:
>> > Error in script.
>> >
>> >
>> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:
>> > Your script has an error in it.  Please test your script using both IP Addresses and Names, outside of hadoop.
>> >
>> >
>> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
>> > I have done this and found following error in log -
>> >
>> >
>> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>> > org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
>> >
>> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
>> >       at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>> >       at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>> >       at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>> >       at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>> >       at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>> >       at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>> >       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>> >       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>> >       at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>> >       at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>> >       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>> >       at java.security.AccessController.doPrivileged(Native Method)
>> >       at javax.security.auth.Subject.doAs(Subject.java:415)
>> >       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>> > 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
>> >
>> >
>> >
>> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lf...@hortonworks.com> wrote:
>> > You can put this parameter to core-site.xml or hdfs-site.xml
>> > Both are parsed during HDFS startup.
>> >
>> > Leonid
>> >
>> >
>> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
>> > Hello everyone,
>> >     I was searching for how to make the hadoop cluster rack-aware and I find out from here http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "topology.script.file.name". But here it is not written where to put this
>> > <property>
>> >               <name>topology.script.file.name</name>
>> >               <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>> > </property>
>> >
>> > Means in which configuration file.
>> > I am using hadoop-2.0.3-alpha.
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
>> >
>> >
>> >
>> > --
>> > With regards ---
>> > Mohammad Mustaqeem,
>> > M.Tech (CSE)
>> > MNNIT Allahabad
>> > 9026604270
>> >
>> >
> 

Re: Rack Aware Hadoop cluster

Posted by Chris Embree <ce...@gmail.com>.
Here is a sample I stole from the web and modified slightly... I think.

HADOOP_CONF=/etc/hadoop/conf

while [ $# -gt 0 ] ; do
  nodeArg=$1
  exec< ${HADOOP_CONF}/rack_info.txt
  result=""
  while read line ; do
    ar=( $line )
    if [ "${ar[0]}" = "$nodeArg" ] ; then
      result="${ar[1]}"
    fi
  done
  shift
  if [ -z "$result" ] ; then
    echo -n "/default/rack "
  else
    echo -n "$result "
  fi

done


The rack_info.txt file contains all hostname AND IP addresses for each node:
10.10.10.10  /dc1/rack1
10.10.10.11  /dc1/rack2
datanode1  /dc1/rack1
datanode2  /dc1/rack2
.. etc.


On Wed, May 8, 2013 at 1:38 PM, Adam Faris <af...@linkedin.com> wrote:

> Look between the <code> blocks starting at line 1336.
> http://lnkd.in/rJsqpV   Some day it will get included in the
> documentation with a future Hadoop release. :)
>
> -- Adam
>
> On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
>  wrote:
>
> > If anybody has a sample (topology.script.file.name) script then please
> share it.
> >
> >
> > On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <
> 3m.mustaqeem@gmail.com> wrote:
> > @chris, I have tested it outside. It is working fine.
> >
> >
> > On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <lf...@hortonworks.com>
> wrote:
> > Error in script.
> >
> >
> > On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:
> > Your script has an error in it.  Please test your script using both IP
> Addresses and Names, outside of hadoop.
> >
> >
> > On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
> 3m.mustaqeem@gmail.com> wrote:
> > I have done this and found following error in log -
> >
> >
> > 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping:
> Exception running
> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
> > org.apache.hadoop.util.Shell$ExitCodeException:
> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
> /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax
> error: "(" unexpected (expecting "done")
> >
> >       at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
> >       at org.apache.hadoop.util.Shell.run(Shell.java:129)
> >       at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
> >       at
> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
> >       at
> org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
> >       at
> org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
> >       at
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
> >       at
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
> >       at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
> >       at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
> >       at
> org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
> >       at
> org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
> >       at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> >       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> >       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> >       at java.security.AccessController.doPrivileged(Native Method)
> >       at javax.security.auth.Subject.doAs(Subject.java:415)
> >       at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> > 2013-05-08 18:53:45,223 ERROR
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
> call returned null! Using /default-rack for host [127.0.0.1]
> >
> >
> >
> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lf...@hortonworks.com>
> wrote:
> > You can put this parameter to core-site.xml or hdfs-site.xml
> > Both are parsed during HDFS startup.
> >
> > Leonid
> >
> >
> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
> 3m.mustaqeem@gmail.com> wrote:
> > Hello everyone,
> >     I was searching for how to make the hadoop cluster rack-aware and I
> find out from here
> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "
> topology.script.file.name". But here it is not written where to put this
> > <property>
> >               <name>topology.script.file.name</name>
> >
> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> > </property>
> >
> > Means in which configuration file.
> > I am using hadoop-2.0.3-alpha.
> >
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
> >
> >
> >
> >
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
> >
> >
> >
> >
> >
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
> >
> >
> >
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
> >
>
>

> >       at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> >       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> > 2013-05-08 18:53:45,223 ERROR
> org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve
> call returned null! Using /default-rack for host [127.0.0.1]
> >
> >
> >
> > On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lf...@hortonworks.com>
> wrote:
> > You can put this parameter in core-site.xml or hdfs-site.xml.
> > Both are parsed during HDFS startup.
> >
> > Leonid
> >
> >
> > On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
> 3m.mustaqeem@gmail.com> wrote:
> > Hello everyone,
> >     I was searching for how to make the hadoop cluster rack-aware and I
> find out from here
> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "
> topology.script.file.name". But here it is not written where to put this
> > <property>
> >               <name>topology.script.file.name</name>
> >
> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> > </property>
> >
> > Means in which configuration file.
> > I am using hadoop-2.0.3-alpha.
> >
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
> >
> >
> >
> >
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
> >
> >
> >
> >
> >
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
> >
> >
> >
> >
> > --
> > With regards ---
> > Mohammad Mustaqeem,
> > M.Tech (CSE)
> > MNNIT Allahabad
> > 9026604270
> >
> >
>
>


Re: Rack Aware Hadoop cluster

Posted by Adam Faris <af...@linkedin.com>.
Look between the <code> blocks starting at line 1336.  http://lnkd.in/rJsqpV   Some day it will get included in the documentation with a future Hadoop release. :)

-- Adam

On May 8, 2013, at 10:29 AM, Mohammad Mustaqeem <3m...@gmail.com>
 wrote:

> If anybody has a sample (topology.script.file.name) script then please share it.
> 
> 
> On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
> @chris, I have tested it outside. It is working fine.
> 
> 
> On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <lf...@hortonworks.com> wrote:
> Error in script.
> 
> 
> On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:
> Your script has an error in it.  Please test your script using both IP Addresses and Names, outside of hadoop.
> 
> 
> On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
> I have done this and found following error in log - 
> 
> 
> 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1 
> org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
> 
> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
> 	at org.apache.hadoop.util.Shell.run(Shell.java:129)
> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
> 	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
> 	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
> 	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
> 
> 
> 
> On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lf...@hortonworks.com> wrote:
> You can put this parameter in core-site.xml or hdfs-site.xml.
> Both are parsed during HDFS startup.
> 
> Leonid
> 
> 
> On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <3m...@gmail.com> wrote:
> Hello everyone,
>     I was searching for how to make the hadoop cluster rack-aware and I find out from here http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that we can do this by giving property of "topology.script.file.name". But here it is not written where to put this 
> <property>
>         	<name>topology.script.file.name</name>
>         	<value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> </property> 
> 
> Means in which configuration file.
> I am using hadoop-2.0.3-alpha.
> 
> 
> -- 
> With regards ---
> Mohammad Mustaqeem,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
> 
> 
> 
> 
> 
> 
> -- 
> With regards ---
> Mohammad Mustaqeem,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
> 
> 
> 
> 
> 
> 
> 
> -- 
> With regards ---
> Mohammad Mustaqeem,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
> 
> 
> 
> 
> 
> -- 
> With regards ---
> Mohammad Mustaqeem,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
> 
> 


Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
If anybody has a sample (topology.script.file.name) script then please share
it.


On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> @chris, I have tested it outside. It is working fine.
>
>
> On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <lf...@hortonworks.com>wrote:
>
>> Error in script.
>>
>>
>> On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:
>>
>>> Your script has an error in it.  Please test your script using both IP
>>> Addresses and Names, outside of hadoop.
>>>
>>>
>>> On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>>
>>>> I have done this and found following error in log -
>>>>
>>>>
>>>> 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>>> org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
>>>>
>>>> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>>> 	at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>>> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>>> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>>> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>>> 	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>>> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>>> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>>> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>>> 	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>>> 	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>>> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>>> 	at java.security.AccessController.doPrivileged(Native Method)
>>>> 	at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>>> 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
>>>>
>>>>
>>>>
>>>> On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>>
>>>>> You can put this parameter in core-site.xml or hdfs-site.xml.
>>>>> Both are parsed during HDFS startup.
>>>>>
>>>>> Leonid
>>>>>
>>>>>
>>>>> On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>>>> 3m.mustaqeem@gmail.com> wrote:
>>>>>
>>>>>> Hello everyone,
>>>>>>     I was searching for how to make the hadoop cluster rack-aware and
>>>>>> I find out from here
>>>>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
>>>>>> we can do this by giving property of "topology.script.file.name".
>>>>>> But here it is not written where to put this
>>>>>> <property>
>>>>>>         <name>topology.script.file.name</name>
>>>>>>
>>>>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>>>>> </property>
>>>>>>
>>>>>> Means in which configuration file.
>>>>>> I am using hadoop-2.0.3-alpha.
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *With regards ---*
>>>>>> *Mohammad Mustaqeem*,
>>>>>> M.Tech (CSE)
>>>>>> MNNIT Allahabad
>>>>>> 9026604270
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *With regards ---*
>>>> *Mohammad Mustaqeem*,
>>>> M.Tech (CSE)
>>>> MNNIT Allahabad
>>>> 9026604270
>>>>
>>>>
>>>>
>>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
If anybody has a sample (topology.script.file.name) script, then please share
it.


On Wed, May 8, 2013 at 10:30 PM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> @chris, I have test it outside. It is working fine.
>
>
> On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <lf...@hortonworks.com>wrote:
>
>> Error in script.
>>
>>
>> On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:
>>
>>> Your script has an error in it.  Please test your script using both IP
>>> Addresses and Names, outside of hadoop.
>>>
>>>
>>> On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>>
>>>> I have done this and found following error in log -
>>>>
>>>>
>>>> 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>>> org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
>>>>
>>>> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>>> 	at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>>> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>>> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>>> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>>> 	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>>> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>>> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>>> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>>> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>>> 	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>>> 	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>>> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>>> 	at java.security.AccessController.doPrivileged(Native Method)
>>>> 	at javax.security.auth.Subject.doAs(Subject.java:415)
>>>> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>>> 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
>>>>
>>>>
>>>>
>>>> On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <
>>>> lfedotov@hortonworks.com> wrote:
>>>>
>>>>> You can put this parameter to core-site.xml or hdfs-site.xml
>>>>> It both parsed during the HDFS startup.
>>>>>
>>>>> Leonid
>>>>>
>>>>>
>>>>> On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>>>> 3m.mustaqeem@gmail.com> wrote:
>>>>>
>>>>>> Hello everyone,
>>>>>>     I was searching for how to make the hadoop cluster rack-aware and
>>>>>> I find out from here
>>>>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
>>>>>> we can do this by giving property of "topology.script.file.name".
>>>>>> But here it is not written where to put this
>>>>>> <property>
>>>>>>         <name>topology.script.file.name</name>
>>>>>>
>>>>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>>>>> </property>
>>>>>>
>>>>>> Means in which configuration file.
>>>>>> I am using hadoop-2.0.3-alpha.
>>>>>>
>>>>>>
>>>>>> --
>>>>>> *With regards ---*
>>>>>> *Mohammad Mustaqeem*,
>>>>>> M.Tech (CSE)
>>>>>> MNNIT Allahabad
>>>>>> 9026604270
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> *With regards ---*
>>>> *Mohammad Mustaqeem*,
>>>> M.Tech (CSE)
>>>> MNNIT Allahabad
>>>> 9026604270
>>>>
>>>>
>>>>
>>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270
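In answer to the request above, a minimal sample topology script might look like the following. It is kept strictly POSIX so it behaves the same under /bin/sh (dash) and bash; the subnets and rack names are invented for illustration, so map your own DataNode hostnames and IPs in a real deployment.

```shell
#!/bin/sh
# Sample topology.script.file.name script. Hadoop may pass several
# hostnames/IPs in a single invocation and expects exactly one rack
# path per argument, printed in order.
# (The 192.168.1.x / 192.168.2.x subnets and /rack1, /rack2 names
# are made-up values for this sketch.)

rack_of() {
  case "$1" in
    192.168.1.*) echo "/rack1" ;;
    192.168.2.*) echo "/rack2" ;;
    *)           echo "/default-rack" ;;  # unknown hosts fall back here
  esac
}

for node in "$@"; do
  rack_of "$node"
done
```

The script must print something for every argument and exit 0; otherwise the NameNode logs the "resolve call returned null! Using /default-rack" error shown in the log excerpts above.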

Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
@chris, I have tested it outside. It is working fine.


On Wed, May 8, 2013 at 7:48 PM, Leonid Fedotov <lf...@hortonworks.com>wrote:

> Error in script.
>
>
> On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:
>
>> Your script has an error in it.  Please test your script using both IP
>> Addresses and Names, outside of hadoop.
>>
>>
>> On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
>> 3m.mustaqeem@gmail.com> wrote:
>>
>>> I have done this and found following error in log -
>>>
>>>
>>> 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>>> org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
>>>
>>> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>>> 	at org.apache.hadoop.util.Shell.run(Shell.java:129)
>>> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>>> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>>> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>>> 	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>>> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>>> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>>> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>>> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>>> 	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>>> 	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>>> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>>> 	at java.security.AccessController.doPrivileged(Native Method)
>>> 	at javax.security.auth.Subject.doAs(Subject.java:415)
>>> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>>> 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
>>>
>>>
>>>
>>> On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lfedotov@hortonworks.com
>>> > wrote:
>>>
>>>> You can put this parameter to core-site.xml or hdfs-site.xml
>>>> It both parsed during the HDFS startup.
>>>>
>>>> Leonid
>>>>
>>>>
>>>> On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>>> 3m.mustaqeem@gmail.com> wrote:
>>>>
>>>>> Hello everyone,
>>>>>     I was searching for how to make the hadoop cluster rack-aware and
>>>>> I find out from here
>>>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
>>>>> we can do this by giving property of "topology.script.file.name". But
>>>>> here it is not written where to put this
>>>>> <property>
>>>>>         <name>topology.script.file.name</name>
>>>>>
>>>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>>>> </property>
>>>>>
>>>>> Means in which configuration file.
>>>>> I am using hadoop-2.0.3-alpha.
>>>>>
>>>>>
>>>>> --
>>>>> *With regards ---*
>>>>> *Mohammad Mustaqeem*,
>>>>> M.Tech (CSE)
>>>>> MNNIT Allahabad
>>>>> 9026604270
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> *With regards ---*
>>> *Mohammad Mustaqeem*,
>>> M.Tech (CSE)
>>> MNNIT Allahabad
>>> 9026604270
>>>
>>>
>>>
>>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270
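Following Chris's advice quoted above, the script is worth exercising by hand with both an IP and a hostname, and with plain sh as well as bash: the `Syntax error: "(" unexpected` line in the log is dash's error format, so a script using bash-only syntax can pass a manual `bash rack.sh` test and still fail when Hadoop runs it via /bin/sh. A self-contained sketch of such a check (it creates a throwaway stand-in script; point SCRIPT at the real rack.sh in practice):

```shell
#!/bin/sh
# Exercise a topology script the way the NameNode does: pass a
# hostname or IP, expect one rack path on stdout and a zero exit.
# A temporary stand-in script is generated so this example runs
# anywhere; substitute the path to your actual rack.sh.
SCRIPT=$(mktemp)
cat > "$SCRIPT" <<'EOF'
#!/bin/sh
for node in "$@"; do echo "/default-rack"; done
EOF

for host in 127.0.0.1 localhost; do
  # Run under plain sh, not bash, to catch bash-only syntax early.
  out=$(sh "$SCRIPT" "$host") || { echo "script failed for $host"; exit 1; }
  echo "$host -> $out"
done
rm -f "$SCRIPT"
```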

Re: Rack Aware Hadoop cluster

Posted by Leonid Fedotov <lf...@hortonworks.com>.
Error in script.


On Wed, May 8, 2013 at 7:11 AM, Chris Embree <ce...@gmail.com> wrote:

> Your script has an error in it.  Please test your script using both IP
> Addresses and Names, outside of hadoop.
>
>
> On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem <
> 3m.mustaqeem@gmail.com> wrote:
>
>> I have done this and found following error in log -
>>
>> 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
>> org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
>>
>> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
>> 	at org.apache.hadoop.util.Shell.run(Shell.java:129)
>> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
>> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
>> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
>> 	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
>> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
>> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
>> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
>> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
>> 	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
>> 	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
>> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
>> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
>> 	at java.security.AccessController.doPrivileged(Native Method)
>> 	at javax.security.auth.Subject.doAs(Subject.java:415)
>> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
>> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
>> 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
>>
>>
>>
>> On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lf...@hortonworks.com>wrote:
>>
>>> You can put this parameter to core-site.xml or hdfs-site.xml
>>> It both parsed during the HDFS startup.
>>>
>>> Leonid
>>>
>>>
>>> On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>>> 3m.mustaqeem@gmail.com> wrote:
>>>
>>>> Hello everyone,
>>>>     I was searching for how to make the hadoop cluster rack-aware and I
>>>> find out from here
>>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
>>>> we can do this by giving property of "topology.script.file.name". But
>>>> here it is not written where to put this
>>>> <property>
>>>>         <name>topology.script.file.name</name>
>>>>
>>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>>> </property>
>>>>
>>>> Means in which configuration file.
>>>> I am using hadoop-2.0.3-alpha.
>>>>
>>>>
>>>> --
>>>> *With regards ---*
>>>> *Mohammad Mustaqeem*,
>>>> M.Tech (CSE)
>>>> MNNIT Allahabad
>>>> 9026604270
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> *With regards ---*
>> *Mohammad Mustaqeem*,
>> M.Tech (CSE)
>> MNNIT Allahabad
>> 9026604270
>>
>>
>>
>

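For reference, a minimal fragment along the lines suggested above (core-site.xml on the NameNode) might look like the following. The path is the one from the original post; note that Hadoop 2.x prefers the property name `net.topology.script.file.name`, keeping the older `topology.script.file.name` as a deprecated alias:

```xml
<!-- core-site.xml on the NameNode; restart HDFS after editing. -->
<property>
  <!-- Hadoop 2.x name; "topology.script.file.name" is the deprecated alias. -->
  <name>net.topology.script.file.name</name>
  <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
</property>
```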
Re: Rack Aware Hadoop cluster

Posted by Chris Embree <ce...@gmail.com>.
Your script has an error in it.  Please test it with both IP addresses and
hostnames, outside of Hadoop.


On Wed, May 8, 2013 at 10:01 AM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> I have done this and found following error in log -
>
> 2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping: Exception running /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh 127.0.0.1
> org.apache.hadoop.util.Shell$ExitCodeException: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8: /home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: Syntax error: "(" unexpected (expecting "done")
>
> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
> 	at org.apache.hadoop.util.Shell.run(Shell.java:129)
> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
> 	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
> 	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
> 	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
> 	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
> 	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> 2013-05-08 18:53:45,223 ERROR org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The resolve call returned null! Using /default-rack for host [127.0.0.1]
>
>
>
> On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lf...@hortonworks.com>wrote:
>
>> You can put this parameter to core-site.xml or hdfs-site.xml
>> It both parsed during the HDFS startup.
>>
>> Leonid
>>
>>
>> On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <
>> 3m.mustaqeem@gmail.com> wrote:
>>
>>> Hello everyone,
>>>     I was searching for how to make the hadoop cluster rack-aware and I
>>> find out from here
>>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
>>> we can do this by giving property of "topology.script.file.name". But
>>> here it is not written where to put this
>>> <property>
>>>         <name>topology.script.file.name</name>
>>>
>>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>>> </property>
>>>
>>> Means in which configuration file.
>>> I am using hadoop-2.0.3-alpha.
>>>
>>>
>>> --
>>> *With regards ---*
>>> *Mohammad Mustaqeem*,
>>> M.Tech (CSE)
>>> MNNIT Allahabad
>>> 9026604270
>>>
>>>
>>>
>>
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>

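The `Syntax error: "(" unexpected (expecting "done")` in the log above is typical of bash-only syntax (arrays, for instance) being run by a plain POSIX shell such as dash, which is `/bin/sh` on many Debian/Ubuntu systems. A hypothetical rack.sh written in portable sh sidesteps the problem; the subnets below are placeholders, not taken from the original post:

```shell
#!/bin/sh
# Hypothetical topology script: Hadoop invokes it with one or more
# hostnames/IPs as arguments and must get one rack path per argument
# on stdout. Pure POSIX sh -- no bash arrays -- so it behaves the
# same under dash and bash.

resolve_rack() {
  case "$1" in
    10.1.1.*) echo "/rack1" ;;          # placeholder subnet
    10.1.2.*) echo "/rack2" ;;          # placeholder subnet
    *)        echo "/default-rack" ;;   # fallback, incl. 127.0.0.1
  esac
}

for node in "$@"; do
  resolve_rack "$node"
done
```

Testing it outside Hadoop, as suggested above, is as simple as `sh rack.sh 10.1.1.5 somehost 127.0.0.1` and checking that one rack path is printed per argument.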
Re: Rack Aware Hadoop cluster

Posted by Mohammad Mustaqeem <3m...@gmail.com>.
I have done this and found the following error in the log:

2013-05-08 18:53:45,221 WARN org.apache.hadoop.net.ScriptBasedMapping:
Exception running
/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh
127.0.0.1
org.apache.hadoop.util.Shell$ExitCodeException:
/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh: 8:
/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh:
Syntax error: "(" unexpected (expecting "done")

	at org.apache.hadoop.util.Shell.runCommand(Shell.java:202)
	at org.apache.hadoop.util.Shell.run(Shell.java:129)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:322)
	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:241)
	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:179)
	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.resolveNetworkLocation(DatanodeManager.java:454)
	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:713)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3459)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:881)
	at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:90)
	at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:18295)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
2013-05-08 18:53:45,223 ERROR
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: The
resolve call returned null! Using /default-rack for host [127.0.0.1]



On Wed, May 8, 2013 at 7:18 PM, Leonid Fedotov <lf...@hortonworks.com>wrote:

> You can put this parameter to core-site.xml or hdfs-site.xml
> It both parsed during the HDFS startup.
>
> Leonid
>
>
> On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem <3m.mustaqeem@gmail.com
> > wrote:
>
>> Hello everyone,
>>     I was searching for how to make the hadoop cluster rack-aware and I
>> find out from here
>> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
>> we can do this by giving property of "topology.script.file.name". But
>> here it is not written where to put this
>> <property>
>>         <name>topology.script.file.name</name>
>>
>> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
>> </property>
>>
>> Means in which configuration file.
>> I am using hadoop-2.0.3-alpha.
>>
>>
>> --
>> *With regards ---*
>> *Mohammad Mustaqeem*,
>> M.Tech (CSE)
>> MNNIT Allahabad
>> 9026604270
>>
>>
>>
>


-- 
*With regards ---*
*Mohammad Mustaqeem*,
M.Tech (CSE)
MNNIT Allahabad
9026604270
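The "Syntax error: \"(\" unexpected (expecting \"done\")" in the log above is the classic symptom of bash-only syntax (arrays, the `function` keyword, etc.) being run by /bin/sh, which is dash on many Linux distributions. A minimal sketch of a rack-mapping script in plain POSIX sh, assuming a hypothetical `topology.data` file of "host rack" pairs (the file name and format are illustrative, not from the thread):

```shell
#!/bin/sh
# Sketch of a POSIX-sh topology script. Hadoop invokes the script with one
# or more host names/IPs as arguments and expects one rack path per line.

# Look up one host in a "host rack" data file; fall back to /default-rack.
resolve_rack() {
  # $1 = host or IP, $2 = data file
  rack=$(awk -v h="$1" '$1 == h { print $2; exit }' "$2")
  echo "${rack:-/default-rack}"
}

# Demo with a hypothetical data file; a real script would loop over "$@"
# and point at a fixed path next to the Hadoop config.
printf '10.0.0.1 /rack1\n10.0.0.2 /rack2\n' > /tmp/topology.data
resolve_rack 10.0.0.1  /tmp/topology.data   # prints /rack1
resolve_rack 127.0.0.1 /tmp/topology.data   # prints /default-rack
```

Sticking to constructs dash accepts (no arrays, `$(...)` command substitution is fine) avoids the "( unexpected" failure regardless of which shell the NameNode uses to run the script.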

Re: Rack Aware Hadoop cluster

Posted by Leonid Fedotov <lf...@hortonworks.com>.
You can put this parameter in core-site.xml or hdfs-site.xml.
Both are parsed during HDFS startup.

Leonid


On Wed, May 8, 2013 at 6:43 AM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> Hello everyone,
>     I was searching for how to make the hadoop cluster rack-aware and I
> find out from here
> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
> we can do this by giving property of "topology.script.file.name". But
> here it is not written where to put this
> <property>
>         <name>topology.script.file.name</name>
>
> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> </property>
>
> Means in which configuration file.
> I am using hadoop-2.0.3-alpha.
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>
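One caveat worth noting, assuming the 2.x property rename applies to this release: in Hadoop 2.x the preferred key is `net.topology.script.file.name` (the 1.x name `topology.script.file.name` is kept only as a deprecated alias), so the core-site.xml entry on the NameNode host could read:

```xml
<!-- core-site.xml on the NameNode host; path taken from the thread -->
<property>
  <name>net.topology.script.file.name</name>
  <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
</property>
```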

Re: Rack Aware Hadoop cluster

Posted by Chris Embree <ce...@gmail.com>.
Finally, one I can answer. :)  That should be in core-site.xml (unless it has
moved since 1.x). It needs to be in the configuration for the NameNode(s)
and the JobTracker (ResourceManager on YARN).

In 1.x you need to restart NN and JT services for the script to take effect.


On Wed, May 8, 2013 at 9:43 AM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> Hello everyone,
>     I was searching for how to make the hadoop cluster rack-aware and I
> find out from here
> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
> we can do this by giving property of "topology.script.file.name". But
> here it is not written where to put this
> <property>
>         <name>topology.script.file.name</name>
>
> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> </property>
>
> Means in which configuration file.
> I am using hadoop-2.0.3-alpha.
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>

Re: Rack Aware Hadoop cluster

Posted by Shahab Yunus <sh...@gmail.com>.
core-site.xml

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/core-default.xml


On Wed, May 8, 2013 at 9:43 AM, Mohammad Mustaqeem
<3m...@gmail.com>wrote:

> Hello everyone,
>     I was searching for how to make the hadoop cluster rack-aware and I
> find out from here
> http://hadoop.apache.org/docs/r2.0.4-alpha/hadoop-project-dist/hadoop-common/ClusterSetup.html#Hadoop_Rack_Awareness that
> we can do this by giving property of "topology.script.file.name". But
> here it is not written where to put this
> <property>
>         <name>topology.script.file.name</name>
>
> <value>/home/mustaqeem/development/hadoop-2.0.3-alpha/etc/hadoop/rack.sh</value>
> </property>
>
> Means in which configuration file.
> I am using hadoop-2.0.3-alpha.
>
>
> --
> *With regards ---*
> *Mohammad Mustaqeem*,
> M.Tech (CSE)
> MNNIT Allahabad
> 9026604270
>
>
>
