Posted to common-user@hadoop.apache.org by Lixiang Ao <ao...@gmail.com> on 2013/04/18 11:04:32 UTC

Run multiple HDFS instances

Hi all,

Can I run multiple HDFS instances, that is, n separate namenodes and n
datanodes, on a single machine?

I've modified core-site.xml and hdfs-site.xml to avoid port and file
conflicts between the HDFS instances, but when I started the second one, I got
these errors:

Starting namenodes on [localhost]
localhost: namenode running as process 20544. Stop it first.
localhost: datanode running as process 20786. Stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 21074. Stop it first.

Is there a way to solve this?
Thank you in advance,

Lixiang Ao
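
For reference, a sketch of the kind of per-instance overrides this involves.
The property names are standard Hadoop 2.x configuration keys, but every port
and path below is illustrative rather than a required value:

# conf2/ holds the second instance's configuration; hadoop.tmp.dir keeps its
# name/data directories apart from the first instance's defaults.
mkdir -p conf2
cat > conf2/core-site.xml <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://localhost:9001</value></property>
  <property><name>hadoop.tmp.dir</name><value>/tmp/hadoop-instance2</value></property>
</configuration>
EOF

# Move every daemon port off the first instance's defaults.
cat > conf2/hdfs-site.xml <<'EOF'
<configuration>
  <property><name>dfs.namenode.http-address</name><value>localhost:50170</value></property>
  <property><name>dfs.namenode.secondary.http-address</name><value>localhost:50190</value></property>
  <property><name>dfs.datanode.address</name><value>localhost:50110</value></property>
  <property><name>dfs.datanode.http.address</name><value>localhost:50175</value></property>
  <property><name>dfs.datanode.ipc.address</name><value>localhost:50120</value></property>
</configuration>
EOF

Even with distinct ports and directories, the stock start scripts still
collide on the shared PID directory, which is what produces the "Stop it
first" messages above; the replies below address that part.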

Re: Run multiple HDFS instances

Posted by Lixiang Ao <ao...@gmail.com>.
Not really, federation provides separate namespaces, but I want it to look
like one namespace. My basic idea is to maintain a map from files to
namenodes: it receives RPC calls from clients and forwards them to the
specific namenode in charge of each file. It's challenging for me, but
I'll figure out whether it works.
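
For comparison: Hadoop 2.x federation also ships a static, client-side form
of such a file-to-namenode map, the ViewFs mount table. It resolves paths in
the client instead of forwarding RPCs through a proxy, so it is not the
dynamic scheme described above, but it illustrates the mapping. A minimal
sketch, with illustrative hostnames and ports:

# Client-side core-site.xml mounting two namespaces under one view.
cat > viewfs-core-site.xml <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>viewfs:///</value></property>
  <property>
    <name>fs.viewfs.mounttable.default.link./user</name>
    <value>hdfs://nn1.example.com:8020/user</value>
  </property>
  <property>
    <name>fs.viewfs.mounttable.default.link./data</name>
    <value>hdfs://nn2.example.com:8020/data</value>
  </property>
</configuration>
EOF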

On Friday, April 19, 2013, Hemanth Yamijala wrote:

> Are you trying to implement something like namespace federation, which is
> part of Hadoop 2.0?
> http://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-project-dist/hadoop-hdfs/Federation.html
>
>
> On Thu, Apr 18, 2013 at 10:02 PM, Lixiang Ao <aolixiang@gmail.com> wrote:
>
>> Actually I'm trying to do something like combining multiple namenodes so
>> that they present themselves to clients as a single namespace, implementing
>> basic namenode functionalities.
>>
>> On Thursday, April 18, 2013, Chris Embree wrote:
>>
>>> Glad you got this working... can you explain your use case a little?
>>> I'm trying to understand why you might want to do that.
>>>
>>>
>>> On Thu, Apr 18, 2013 at 11:29 AM, Lixiang Ao <ao...@gmail.com> wrote:
>>>
>>>> I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It
>>>> works!  Everything looks fine now.
>>>>
>>>> Seems the direct command "hdfs namenode" gives a better sense of control :)
>>>>
>>>> Thanks a lot.
>>>>
>>>> On Thursday, April 18, 2013, Harsh J wrote:
>>>>
>>>>> Yes you can, but if you want the scripts to work, you should have them
>>>>> use a different PID directory (I think it's called HADOOP_PID_DIR)
>>>>> every time you invoke them.
>>>>>
>>>>> I instead prefer to start the daemons via their direct commands, such
>>>>> as "hdfs namenode" and so on, and move them to the background, with a
>>>>> redirect for logging.
>>>>>
>>>>> On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao <ao...@gmail.com>
>>>>> wrote:
>>>>> > Hi all,
>>>>> >
>>>>> > Can I run multiple HDFS instances, that is, n separate namenodes and n
>>>>> > datanodes, on a single machine?
>>>>> >
>>>>> > I've modified core-site.xml and hdfs-site.xml to avoid port and file
>>>>> > conflicts between the HDFS instances, but when I started the second one,
>>>>> > I got these errors:
>>>>> >
>>>>> > Starting namenodes on [localhost]
>>>>> > localhost: namenode running as process 20544. Stop it first.
>>>>> > localhost: datanode running as process 20786. Stop it first.
>>>>> > Starting secondary namenodes [0.0.0.0]
>>>>> > 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
>>>>> >
>>>>> > Is there a way to solve this?
>>>>> > Thank you in advance,
>>>>> >
>>>>> > Lixiang Ao
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Harsh J
>>>>>
>>>>
>>>
>

Re: Run multiple HDFS instances

Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
Are you trying to implement something like namespace federation, which is
part of Hadoop 2.0?
http://hadoop.apache.org/docs/r2.0.3-alpha/hadoop-project-dist/hadoop-hdfs/Federation.html


On Thu, Apr 18, 2013 at 10:02 PM, Lixiang Ao <ao...@gmail.com> wrote:

> Actually I'm trying to do something like combining multiple namenodes so
> that they present themselves to clients as a single namespace, implementing
> basic namenode functionalities.
>
> On Thursday, April 18, 2013, Chris Embree wrote:
>
>> Glad you got this working... can you explain your use case a little?   I'm
>> trying to understand why you might want to do that.
>>
>>
>> On Thu, Apr 18, 2013 at 11:29 AM, Lixiang Ao <ao...@gmail.com> wrote:
>>
>>> I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works!
>>>  Everything looks fine now.
>>>
>>> Seems the direct command "hdfs namenode" gives a better sense of control :)
>>>
>>> Thanks a lot.
>>>
>>> On Thursday, April 18, 2013, Harsh J wrote:
>>>
>>>> Yes you can, but if you want the scripts to work, you should have them
>>>> use a different PID directory (I think it's called HADOOP_PID_DIR)
>>>> every time you invoke them.
>>>>
>>>> I instead prefer to start the daemons via their direct commands, such
>>>> as "hdfs namenode" and so on, and move them to the background, with a
>>>> redirect for logging.
>>>>
>>>> On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao <ao...@gmail.com>
>>>> wrote:
>>>> > Hi all,
>>>> >
>>>> > Can I run multiple HDFS instances, that is, n separate namenodes and n
>>>> > datanodes, on a single machine?
>>>> >
>>>> > I've modified core-site.xml and hdfs-site.xml to avoid port and file
>>>> > conflicts between the HDFS instances, but when I started the second one,
>>>> > I got these errors:
>>>> >
>>>> > Starting namenodes on [localhost]
>>>> > localhost: namenode running as process 20544. Stop it first.
>>>> > localhost: datanode running as process 20786. Stop it first.
>>>> > Starting secondary namenodes [0.0.0.0]
>>>> > 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
>>>> >
>>>> > Is there a way to solve this?
>>>> > Thank you in advance,
>>>> >
>>>> > Lixiang Ao
>>>>
>>>>
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>

Re: Run multiple HDFS instances

Posted by Lixiang Ao <ao...@gmail.com>.
Actually I'm trying to do something like combining multiple namenodes so
that they present themselves to clients as a single namespace, implementing
basic namenode functionalities.

On Thursday, April 18, 2013, Chris Embree wrote:

> Glad you got this working... can you explain your use case a little?   I'm
> trying to understand why you might want to do that.
>
>
> On Thu, Apr 18, 2013 at 11:29 AM, Lixiang Ao <aolixiang@gmail.com> wrote:
>
>> I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works!
>>  Everything looks fine now.
>>
>> Seems the direct command "hdfs namenode" gives a better sense of control :)
>>
>> Thanks a lot.
>>
>> On Thursday, April 18, 2013, Harsh J wrote:
>>
>>> Yes you can, but if you want the scripts to work, you should have them
>>> use a different PID directory (I think it's called HADOOP_PID_DIR)
>>> every time you invoke them.
>>>
>>> I instead prefer to start the daemons via their direct commands, such
>>> as "hdfs namenode" and so on, and move them to the background, with a
>>> redirect for logging.
>>>
>>> On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao <ao...@gmail.com> wrote:
>>> > Hi all,
>>> >
>>> > Can I run multiple HDFS instances, that is, n separate namenodes and n
>>> > datanodes, on a single machine?
>>> >
>>> > I've modified core-site.xml and hdfs-site.xml to avoid port and file
>>> > conflicts between the HDFS instances, but when I started the second one,
>>> > I got these errors:
>>> >
>>> > Starting namenodes on [localhost]
>>> > localhost: namenode running as process 20544. Stop it first.
>>> > localhost: datanode running as process 20786. Stop it first.
>>> > Starting secondary namenodes [0.0.0.0]
>>> > 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
>>> >
>>> > Is there a way to solve this?
>>> > Thank you in advance,
>>> >
>>> > Lixiang Ao
>>>
>>>
>>>
>>> --
>>> Harsh J
>>>
>>
>

Re: Run multiple HDFS instances

Posted by Chris Embree <ce...@gmail.com>.
Glad you got this working... can you explain your use case a little?   I'm
trying to understand why you might want to do that.


On Thu, Apr 18, 2013 at 11:29 AM, Lixiang Ao <ao...@gmail.com> wrote:

> I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works!
>  Everything looks fine now.
>
> Seems the direct command "hdfs namenode" gives a better sense of control :)
>
> Thanks a lot.
>
> On Thursday, April 18, 2013, Harsh J wrote:
>
>> Yes you can, but if you want the scripts to work, you should have them
>> use a different PID directory (I think it's called HADOOP_PID_DIR)
>> every time you invoke them.
>>
>> I instead prefer to start the daemons via their direct commands, such
>> as "hdfs namenode" and so on, and move them to the background, with a
>> redirect for logging.
>>
>> On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao <ao...@gmail.com> wrote:
>> > Hi all,
>> >
>> > Can I run multiple HDFS instances, that is, n separate namenodes and n
>> > datanodes, on a single machine?
>> >
>> > I've modified core-site.xml and hdfs-site.xml to avoid port and file
>> > conflicts between the HDFS instances, but when I started the second one,
>> > I got these errors:
>> >
>> > Starting namenodes on [localhost]
>> > localhost: namenode running as process 20544. Stop it first.
>> > localhost: datanode running as process 20786. Stop it first.
>> > Starting secondary namenodes [0.0.0.0]
>> > 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
>> >
>> > Is there a way to solve this?
>> > Thank you in advance,
>> >
>> > Lixiang Ao
>>
>>
>>
>> --
>> Harsh J
>>
>

Re: Run multiple HDFS instances

Posted by Lixiang Ao <ao...@gmail.com>.
I modified sbin/hadoop-daemon.sh, where HADOOP_PID_DIR is set. It works!
 Everything looks fine now.

Seems the direct command "hdfs namenode" gives a better sense of control :)

Thanks a lot.

On Thursday, April 18, 2013, Harsh J wrote:

> Yes you can, but if you want the scripts to work, you should have them
> use a different PID directory (I think it's called HADOOP_PID_DIR)
> every time you invoke them.
>
> I instead prefer to start the daemons via their direct commands, such
> as "hdfs namenode" and so on, and move them to the background, with a
> redirect for logging.
>
> On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao <ao...@gmail.com> wrote:
> > Hi all,
> >
> > Can I run multiple HDFS instances, that is, n separate namenodes and n
> > datanodes, on a single machine?
> >
> > I've modified core-site.xml and hdfs-site.xml to avoid port and file
> > conflicts between the HDFS instances, but when I started the second one,
> > I got these errors:
> >
> > Starting namenodes on [localhost]
> > localhost: namenode running as process 20544. Stop it first.
> > localhost: datanode running as process 20786. Stop it first.
> > Starting secondary namenodes [0.0.0.0]
> > 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
> >
> > Is there a way to solve this?
> > Thank you in advance,
> >
> > Lixiang Ao
>
>
>
> --
> Harsh J
>
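
An alternative to editing sbin/hadoop-daemon.sh itself: the script only
defaults HADOOP_PID_DIR when it is unset (at least in the 2.x scripts), so a
per-instance hadoop-env.sh achieves the same thing. A sketch, assuming a
second conf directory named conf2; the paths are purely illustrative:

# Give the second instance its own PID directory via its env file.
echo 'export HADOOP_PID_DIR=/tmp/hadoop-pids-instance2' >> conf2/hadoop-env.sh

# Start the second instance entirely from its own conf directory.
HADOOP_CONF_DIR="$PWD/conf2" sbin/start-dfs.sh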

Re: Run multiple HDFS instances

Posted by Harsh J <ha...@cloudera.com>.
Yes you can, but if you want the scripts to work, you should have them
use a different PID directory (I think it's called HADOOP_PID_DIR)
every time you invoke them.

I instead prefer to start the daemons via their direct commands, such
as "hdfs namenode" and so on, and move them to the background, with a
redirect for logging.
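
A sketch of that direct approach, assuming a second conf directory (conf2)
and illustrative log paths; the new namespace also needs a one-time format:

# One-time format of the second namespace (uses conf2's settings).
export HADOOP_CONF_DIR="$PWD/conf2"
hdfs namenode -format

# Start each daemon in the background with its logs redirected.
nohup hdfs namenode          > /tmp/hdfs2-namenode.log          2>&1 &
nohup hdfs datanode          > /tmp/hdfs2-datanode.log          2>&1 &
nohup hdfs secondarynamenode > /tmp/hdfs2-secondarynamenode.log 2>&1 &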

On Thu, Apr 18, 2013 at 2:34 PM, Lixiang Ao <ao...@gmail.com> wrote:
> Hi all,
>
> Can I run multiple HDFS instances, that is, n separate namenodes and n
> datanodes, on a single machine?
>
> I've modified core-site.xml and hdfs-site.xml to avoid port and file
> conflicts between the HDFS instances, but when I started the second one,
> I got these errors:
>
> Starting namenodes on [localhost]
> localhost: namenode running as process 20544. Stop it first.
> localhost: datanode running as process 20786. Stop it first.
> Starting secondary namenodes [0.0.0.0]
> 0.0.0.0: secondarynamenode running as process 21074. Stop it first.
>
> Is there a way to solve this?
> Thank you in advance,
>
> Lixiang Ao



-- 
Harsh J
