Posted to mapreduce-user@hadoop.apache.org by Juan Carlos <ju...@gmail.com> on 2014/02/27 10:12:49 UTC

Re: Question about DataNode

Hi Edward,
maybe you are sending your request to the master from the slave. I'm not
sure, but I think the secondary never answers any requests, not even read
requests, and you have to modify your config files by hand to change your
slave into the master.

I haven't tested much with master/slave configurations; I have only tested
with QJM and NFS synchronization. In those cases, whenever you start the
namenodes they first synchronize, checking the journalnodes or NFS for
changes in the metadata, before they can be promoted to active namenode.
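For reference, in a QJM-style HA setup like the one described above, the
active/standby state can be inspected and switched by hand. A minimal
sketch, assuming NameNode IDs nn1 and nn2 from a typical hdfs-site.xml:

```shell
# Ask each NameNode for its current HA state (active or standby).
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Manually promote nn2; it must first have caught up with the shared
# edit log (JournalNodes or the NFS mount) before it can serve as active.
hdfs haadmin -transitionToActive nn2
```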


2014-02-27 9:52 GMT+01:00 EdwardKing <zh...@neusoft.com>:

> Two nodes, one master and one slave. I killed the DataNode on the slave,
> then created a directory with a dfs command from the master machine:
> [hadoop@master]$ ./start-all.sh
> [hadoop@slave]$ jps
> 9917 DataNode
> 10152 Jps
> [hadoop@slave]$ kill -9 9917
> [hadoop@master]$ hadoop dfs -mkdir test
> [hadoop@master]$ hadoop dfs -ls
> drwxr-xr-x   - hadoop supergroup          0 2014-02-27 00:15 test
> I assumed the test directory couldn't exist on the slave, because the
> slave's DataNode was killed. Right?
> Then I restarted the services from the master, as follows:
> [hadoop@master]$ ./start-all.sh
>
> This time I find that the slave also contains the test directory. Why?
> Can hadoop 2.2.0 recover automatically?
> Any ideas are appreciated.
> Best regards,
> Edward
>
>
>
>
>
> ---------------------------------------------------------------------------------------------------
> Confidentiality Notice: The information contained in this e-mail and any
> accompanying attachment(s)
> is intended only for the use of the intended recipient and may be
> confidential and/or privileged of
> Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader
> of this communication is
> not the intended recipient, unauthorized use, forwarding, printing,
> storing, disclosure or copying
> is strictly prohibited, and may be unlawful. If you have received this
> communication in error, please
> immediately notify the sender by return e-mail, and delete the original
> message and all copies from
> your system. Thank you.
>
> ---------------------------------------------------------------------------------------------------
>
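The behaviour Edward observes can be checked directly from the shell. A
sketch against a running cluster (these are standard hadoop 2.x CLI
commands; the path is illustrative):

```shell
# With the slave's DataNode killed, mkdir still succeeds, because it only
# records metadata on the NameNode; no blocks are written anywhere.
hdfs dfs -mkdir /user/hadoop/test

# The report lists the slave's DataNode as dead until it is restarted.
hdfs dfsadmin -report
```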

Re: Question about DataNode

Posted by Bertrand Dechoux <de...@gmail.com>.
I am not sure what your question is. You might want to be more explicit,
and to read more about the Hadoop architecture and the roles of the various
daemons. A directory is only metadata, so you can mess around with
DataNodes if you want, but they are not involved.
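Bertrand's point can be made concrete by contrasting a metadata-only
operation with one that actually writes blocks. A sketch, assuming every
DataNode has been stopped:

```shell
# Succeeds: creating a directory only updates the NameNode's namespace.
hdfs dfs -mkdir /tmp/meta-only

# Fails (or hangs retrying): writing a file needs live DataNodes to
# receive the blocks.
hdfs dfs -put /etc/hosts /tmp/meta-only/hosts
```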

Regards,

Bertrand Dechoux


