Posted to user@hadoop.apache.org by surfer <su...@crs4.it> on 2012/09/04 10:22:14 UTC

SNN

Hi

When I start my cluster (with start-dfs.sh), secondary namenodes are
started on all the machines in conf/slaves. I set conf/masters to a
single, different machine (along with dfs.http.address pointing to the
namenode), but it seems to be ignored. Any hint on what I'm doing wrong?

thanks
giovanni


hadoop version: 1.0.3

cluster:
1 machine with NN JT
1 machine with SNN (desired...)
61 machines with DN TT
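
For reference, in Hadoop 1.x the host list that start-dfs.sh uses for the
SecondaryNameNode comes from conf/masters, and the DataNode/TaskTracker
hosts from conf/slaves; a minimal sketch of the two files (hostnames below
are placeholders, not from this cluster):

```
# conf/masters -- one hostname per line; each runs a SecondaryNameNode
snn-host

# conf/slaves -- one hostname per line; each runs a DataNode/TaskTracker
worker01
worker02
```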

Re: SNN

Posted by surfer <su...@crs4.it>.
On 09/05/2012 09:45 AM, surfer wrote:
> On 09/04/2012 06:33 PM, Michael Segel wrote:
>> The other question you have to look at is the underlying start and stop script to see what is being passed on to them. 
>>
>> I thought there was a parameter that would overload the defaults where you specified the slaves and master files, but I could be wrong.
>>
>> Since this is raw Apache, I don't think that it sets up scripts in each machine's /etc/init.d directory, or does it? 
>>
>> If it does, then you may just want to roll your own start and stop script and then make sure that the admins have sudo privileges to run those scripts.
>>
>>
>> On Sep 4, 2012, at 11:05 AM, Terry Healy <th...@bnl.gov> wrote:
>>
>>> Can you please show contents of masters and slaves config files?
>>>
>>>
>>>
> ok, thank you michael for the hint (and terry for your answer)
>
> the problem arose from the change in hadoop-env.sh of the default
> setting for HADOOP_SLAVES. I choose a different location from the default.
>
> two changes makes the script succeed:
>
> 1) hadoop-config.sh sets HADOOP_SLAVES to "pathtoconf/masters" for the
> secondary namenode but right after that it executes the hadoop-env.sh
> that reverts it to "pathtoconf/slaves".
>
> So the solution is moving the three lines after the block in which
> HADOOP_SLAVES is set before the block.
>
> changes in hadoop-config in pathtohadoop/libexec/
> with diff:
> 56,59d55
> < if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> <   . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> < fi
> <
> 72a69,71
> > if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
> >   . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> > fi
>
> 2) slaves.sh assigns the value to HOSTLIST (which is created to store
> the HADOOP_SLAVES value if set) after it has called hadoop-config.sh
> without arguments
>
> The solution is again a movement of some code lines.
>
changes in slaves.sh  in pathtohadoop/bin
with diff:
> 41,45d40
> < # If the slaves file is specified in the command line,
> < # then it takes precedence over the definition in
> < # hadoop-env.sh. Save it here.
> < HOSTLIST=$HADOOP_SLAVES
> <
> 51a47,51
> > # If the slaves file is specified in the command line,
> > # then it takes precedence over the definition in
> > # hadoop-env.sh. Save it here.
> > HOSTLIST=$HADOOP_SLAVES
> >
>
> should I open a JIRA?
> thank you
> giovanni aka surfer

Re: SNN

Posted by surfer <su...@crs4.it>.
On 09/04/2012 06:33 PM, Michael Segel wrote:
> The other question you have to look at is the underlying start and stop script to see what is being passed on to them. 
>
> I thought there was a parameter that would overload the defaults where you specified the slaves and master files, but I could be wrong.
>
> Since this is raw Apache, I don't think that it sets up scripts in each machine's /etc/init.d directory, or does it? 
>
> If it does, then you may just want to roll your own start and stop script and then make sure that the admins have sudo privileges to run those scripts.
>
>
> On Sep 4, 2012, at 11:05 AM, Terry Healy <th...@bnl.gov> wrote:
>
>> Can you please show contents of masters and slaves config files?
>>
>>
>>
Ok, thank you Michael for the hint (and Terry for your answer).

The problem arose from my change, in hadoop-env.sh, of the default
setting for HADOOP_SLAVES: I chose a location different from the default.

Two changes make the scripts succeed:

1) hadoop-config.sh sets HADOOP_SLAVES to "pathtoconf/masters" for the
secondary namenode, but right after that it sources hadoop-env.sh,
which reverts it to "pathtoconf/slaves".

So the solution is to move the three lines that source hadoop-env.sh
from after the block in which HADOOP_SLAVES is set to before that block.

Changes to hadoop-config.sh in pathtohadoop/libexec/, shown as a diff:
56,59d55
< if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
<   . "${HADOOP_CONF_DIR}/hadoop-env.sh"
< fi
<
72a69,71
> if [ -f "${HADOOP_CONF_DIR}/hadoop-env.sh" ]; then
>   . "${HADOOP_CONF_DIR}/hadoop-env.sh"
> fi
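
The effect of the reordering can be sketched with a minimal stand-in (generic
file names, not the real Hadoop scripts): a value assigned before sourcing the
env file is clobbered by the HADOOP_SLAVES line in that file, so the env file
has to be sourced before the script picks the masters file.

```shell
# Simulate a hadoop-env.sh that sets a custom HADOOP_SLAVES location.
env_file="$(mktemp)"
echo 'HADOOP_SLAVES="/custom/conf/slaves"' > "$env_file"

# Buggy order: pick the masters file, then source the env file.
HADOOP_SLAVES="/conf/masters"
. "$env_file"
buggy="$HADOOP_SLAVES"          # override lost: /custom/conf/slaves

# Fixed order: source the env file first, then pick the masters file.
. "$env_file"
HADOOP_SLAVES="/conf/masters"
fixed="$HADOOP_SLAVES"          # override kept: /conf/masters

echo "buggy=$buggy fixed=$fixed"
rm -f "$env_file"
```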

2) slaves.sh assigns the HADOOP_SLAVES value (if set) to HOSTLIST, but it
must do so only after it has called hadoop-config.sh (which is invoked
without arguments); in the stock script the assignment comes first.

The solution is again to move a few lines; changes to slaves.sh in
pathtohadoop/bin, shown as a diff:

41,45d40
< # If the slaves file is specified in the command line,
< # then it takes precedence over the definition in
< # hadoop-env.sh. Save it here.
< HOSTLIST=$HADOOP_SLAVES
<
51a47,51
> # If the slaves file is specified in the command line,
> # then it takes precedence over the definition in
> # hadoop-env.sh. Save it here.
> HOSTLIST=$HADOOP_SLAVES
>
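
The HOSTLIST ordering can likewise be sketched with simplified stand-ins (not
the real scripts): the snapshot into HOSTLIST only sees the value chosen by
hadoop-config.sh if it is taken after that script has run.

```shell
hadoop_config() {
  # Stand-in for sourcing hadoop-config.sh, which picks the masters
  # file when starting the secondary namenode.
  HADOOP_SLAVES="/conf/masters"
}

HADOOP_SLAVES="/conf/slaves"     # default
HOSTLIST=$HADOOP_SLAVES          # buggy order: snapshot taken too early
hadoop_config
buggy="$HOSTLIST"                # stale: /conf/slaves

HADOOP_SLAVES="/conf/slaves"     # reset to the default
hadoop_config
HOSTLIST=$HADOOP_SLAVES          # fixed order: snapshot after the call
fixed="$HOSTLIST"                # correct: /conf/masters

echo "buggy=$buggy fixed=$fixed"
```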

Should I open a JIRA?
thank you
giovanni aka surfer

Re: SNN

Posted by Michael Segel <mi...@hotmail.com>.
The other thing you have to look at is the underlying start and stop
scripts, to see what is being passed to them.

I thought there was a parameter that would override the defaults for where
you specify the slaves and masters files, but I could be wrong.

Since this is raw Apache, I don't think that it sets up scripts in each machine's /etc/init.d directory, or does it? 

If it does, then you may just want to roll your own start and stop script and then make sure that the admins have sudo privileges to run those scripts.
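
A roll-your-own wrapper along those lines might look like the sketch below.
The /etc/init.d location and the HADOOP_HOME default are assumptions (the
Apache tarball does not install such a script); hadoop-daemon.sh is the stock
per-daemon control script in 1.x.

```shell
#!/bin/sh
# Hypothetical /etc/init.d/hadoop-secondarynamenode -- not shipped by the
# Apache tarball. The install path below is an assumption.
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop-1.0.3}

hadoop_snn_ctl() {
  case "$1" in
    # Delegate to the stock per-daemon control script.
    start|stop) "$HADOOP_HOME/bin/hadoop-daemon.sh" "$1" secondarynamenode ;;
    *)          echo "Usage: $0 {start|stop}" >&2; return 2 ;;
  esac
}

# Entry point when run as an init script: hadoop_snn_ctl "$1"
```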


On Sep 4, 2012, at 11:05 AM, Terry Healy <th...@bnl.gov> wrote:

> Can you please show contents of masters and slaves config files?
> 
> 
> On 09/04/2012 09:15 AM, surfer wrote:
>> On 09/04/2012 12:58 PM, Michel Segel wrote:
>>> Which distro?
>>> 
>>> Saw this happen, way back when with a Cloudera release. 
>>> 
>>> Check your config files too...
>>> 
>>> 
>>> Sent from a remote device. Please excuse any typos...
>>> 
>>> Mike Segel
>> thanks for your answer
>> 
>> the config files are these: https://gist.github.com/3620975
>> do you see in them something suspicious?
>> 
>> the release is from apache.  I took the 1.0.3 tar file.
>> 
>> 
> 


Re: SNN

Posted by Terry Healy <th...@bnl.gov>.
Can you please show contents of masters and slaves config files?


On 09/04/2012 09:15 AM, surfer wrote:
> On 09/04/2012 12:58 PM, Michel Segel wrote:
>> Which distro?
>>
>> Saw this happen, way back when with a Cloudera release. 
>>
>> Check your config files too...
>>
>>
>> Sent from a remote device. Please excuse any typos...
>>
>> Mike Segel
> thanks for your answer
> 
> the config files are these: https://gist.github.com/3620975
> do you see in them something suspicious?
> 
> the release is from apache.  I took the 1.0.3 tar file.
> 
> 

Re: SNN

Posted by surfer <su...@crs4.it>.
On 09/04/2012 12:58 PM, Michel Segel wrote:
> Which distro?
>
> Saw this happen, way back when with a Cloudera release. 
>
> Check your config files too...
>
>
> Sent from a remote device. Please excuse any typos...
>
> Mike Segel
Thanks for your answer.

The config files are these: https://gist.github.com/3620975
Do you see anything suspicious in them?

The release is from Apache; I took the 1.0.3 tar file.



Re: SNN

Posted by Michel Segel <mi...@hotmail.com>.
Which distro?

Saw this happen, way back when with a Cloudera release. 

Check your config files too...


Sent from a remote device. Please excuse any typos...

Mike Segel

On Sep 4, 2012, at 3:22 AM, surfer <su...@crs4.it> wrote:

> Hi
> 
> When I start my cluster (with start-dfs.sh), secondary namenodes are
> created on all the machines in conf/slaves. I set conf/masters to a
> single different machine (along with dfs.http.address pointing to the
> nameserver) but seems to be ignored. any hint of what I'm doing wrong?
> 
> thanks
> giovanni
> 
> 
> hadoop version: 1.0.3
> 
> cluster:
> 1 machine with NN JT
> 1 machine with SNN (desidered...)
> 61 machines with DN TT
> 
