Posted to user@hadoop.apache.org by "Arthur.hk.chan@gmail.com" <ar...@gmail.com> on 2014/08/11 17:04:42 UTC

Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Hi 

I am running Hadoop 2.4.1 with YARN HA enabled (two master nodes, NM1 and NM2). When verifying ResourceManager failover, I use “kill -9” to terminate the ResourceManager on master node 1 (NM1). When I then run the test job, the failover of the ResourceManager keeps trying NM1 and NM2 non-stop. 

Does anyone have any idea what might be wrong here?  Thanks

Regards
Arthur



bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi  5 1010000000
Number of Maps  = 5
Samples per Map = 1010000000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Starting Job
14/08/11 22:35:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
14/08/11 22:35:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
14/08/11 22:35:30 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
14/08/11 22:35:34 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:37 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
14/08/11 22:35:39 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
14/08/11 22:35:40 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
….
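
For reference, the HA state of each ResourceManager can be checked from the command line after the kill (a minimal sketch, using the rm ids nm1 and nm2 that appear in the log above):

# Check which ResourceManager currently holds the active role
yarn rmadmin -getServiceState nm1
yarn rmadmin -getServiceState nm2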

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

   Could you show me the error message for rm2, please?


Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 10:17 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> Thank you very much!
>
> At the moment, if I run ./sbin/start-yarn.sh on rm1, the STANDBY ResourceManager
> on rm2 is not started accordingly.  Please advise what might be wrong?
> Thanks
>
> Regards
> Arthur
>
>
>
>
> On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Some questions:
> Q1) I need to start yarn on EACH master separately; is this normal? Is there
> a way to just run ./sbin/start-yarn.sh on rm1 and get the
> STANDBY ResourceManager on rm2 started as well?
>
> No, you need to start the RMs separately.
>
> Q2) How do I get alerts (e.g. by email) if the ACTIVE ResourceManager goes
> down in an auto-failover env? Or how do you monitor the status of the
> ACTIVE/STANDBY ResourceManagers?
>
> Interesting question. But one of the design goals of auto-failover is that the
> downtime of an RM is invisible to end users. End users can submit
> applications normally even if a failover happens.
>
> We can monitor the status of the RMs by using the command line (as you did
> previously) or from the web UI / web service
> (rm_address:portnumber/cluster/cluster). We can get the current status from
> there.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> It is a multiple-node cluster with two master nodes (rm1 and rm2); below is
>> my yarn-site.xml.
>>
>> At the moment, the ResourceManager HA works if:
>>
>> 1) at rm1, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> active
>>
>> yarn rmadmin -getServiceState rm2
>> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
>> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
>> MILLISECONDS)
>> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>>
>> 2) at rm2, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> standby
>>
>>
>> Some questions:
>> Q1) I need to start yarn on EACH master separately; is this normal? Is
>> there a way to just run ./sbin/start-yarn.sh on rm1 and get the
>> STANDBY ResourceManager on rm2 started as well?
>>
>> Q2) How do I get alerts (e.g. by email) if the ACTIVE ResourceManager goes
>> down in an auto-failover env? Or how do you monitor the status of the
>> ACTIVE/STANDBY ResourceManagers?
>>
>>
>> Regards
>> Arthur
>>
>>
>> <?xml version="1.0"?>
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services</name>
>>       <value>mapreduce_shuffle</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.resourcemanager.address</name>
>>       <value>192.168.1.1:8032</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.resource-tracker.address</name>
>>        <value>192.168.1.1:8031</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.admin.address</name>
>>        <value>192.168.1.1:8033</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.scheduler.address</name>
>>        <value>192.168.1.1:8030</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.local-dirs</name>
>>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>>        <final>true</final>
>>    </property>
>>
>>    <property>
>>        <name>yarn.web-proxy.address</name>
>>        <value>192.168.1.1:8888</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>    </property>
>>
>>
>>
>>
>>    <property>
>>       <name>yarn.nodemanager.resource.memory-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.minimum-allocation-mb</name>
>>       <value>9216</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.maximum-allocation-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>
>>
>>   <property>
>>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>>     <value>2000</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.cluster-id</name>
>>     <value>cluster_rm</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.rm-ids</name>
>>     <value>rm1,rm2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm1</name>
>>     <value>192.168.1.1</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm2</name>
>>     <value>192.168.1.2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.recovery.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.store.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>>   </property>
>>   <property>
>>       <name>yarn.resourcemanager.zk-address</name>
>>       <value>rm1:2181,m135:2181,m137:2181</value>
>>   </property>
>>   <property>
>>
>> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>>     <value>5000</value>
>>   </property>
>>
>>   <!-- RM1 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm1</name>
>>     <value>192.168.1.1:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>>     <value>192.168.1.1:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>>     <value>192.168.1.1:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>>     <value>192.168.1.1:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>>     <value>192.168.1.1:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm1</name>
>>     <value>192.168.1.1:23142</value>
>>   </property>
>>
>>
>>   <!-- RM2 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm2</name>
>>     <value>192.168.1.2:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>>     <value>192.168.1.2:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>>     <value>192.168.1.2:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>>     <value>192.168.1.2:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>>     <value>192.168.1.2:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm2</name>
>>     <value>192.168.1.2:23142</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.remote-app-log-dir</name>
>>     <value>/edh/hadoop_logs/hadoop/</value>
>>   </property>
>>
>> </configuration>
>>
>>
>>
>> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>>
>> Hey, Arthur:
>>
>>     Did you use a single-node cluster or a multiple-node cluster? Could you
>> share your configuration file (yarn-site.xml)? This looks like a
>> configuration issue.
>>
>> Thanks
>>
>> Xuan Gong
>>
>>
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
>> arthur.hk.chan@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If I have TWO nodes for ResourceManager HA, what are the correct
>>> steps and commands to start and stop the ResourceManagers in a ResourceManager
>>> HA cluster?
>>> Unlike ./sbin/start-dfs.sh (which can start all NNs from one NN), it
>>> seems that ./sbin/start-yarn.sh can only start YARN on one node at a
>>> time.
>>>
>>> Regards
>>> Arthur
>>>
>>>
>>>
>>
>
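
For Q1 and Q2 above, a minimal sketch of starting each RM by hand and polling its HA state for alerting. This assumes a standard Hadoop 2.4.1 sbin layout and a mailx-style mail command; admin@example.com is only a placeholder address:

# Run this on rm1 and again on rm2 to start the local ResourceManager daemon
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager

# Crude cron-able check: mail an alert if neither RM reports "active"
state1=$(yarn rmadmin -getServiceState rm1 2>/dev/null)
state2=$(yarn rmadmin -getServiceState rm2 2>/dev/null)
if [ "$state1" != "active" ] && [ "$state2" != "active" ]; then
  echo "No active ResourceManager: rm1=$state1 rm2=$state2" | mail -s "YARN RM alert" admin@example.com
fi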

Re: Hadoop 2.4.1 How to clear usercache

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
I restarted the cluster, and the usercache was all cleared automatically.  No longer an issue.  Thanks.

 
On 20 Aug, 2014, at 7:05 pm, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:

> Hi, 
> 
> I use Hadoop 2.4.1; in my cluster, Non DFS Used is 2.09 TB.
> 
> I found that these files are all under tmp/nm-local-dir/usercache
> 
> Is there any Hadoop command to remove these unused user cache files under tmp/nm-local-dir/usercache?
> 
> Regards
> Arthur
> 
> 


Hadoop 2.4.1 How to clear usercache

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi, 

I use Hadoop 2.4.1; in my cluster, Non DFS Used is 2.09 TB.

I found that these files are all under tmp/nm-local-dir/usercache

Is there any Hadoop command to remove these unused user cache files under tmp/nm-local-dir/usercache?

Regards
Arthur
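
For reference, there is no dedicated command for this in 2.4.1 as far as I know; a common workaround is to stop the NodeManager on each node and clear the cache by hand. A sketch, assuming the nm-local-dir path mentioned above (adjust it to your yarn.nodemanager.local-dirs):

# Stop the local NodeManager, remove the per-user cache, then restart it
$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager
rm -rf /tmp/nm-local-dir/usercache/*
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager

YARN also trims localized files on its own once the cache grows past yarn.nodemanager.localizer.cache.target-size-mb, so tuning that property may avoid manual cleanup.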



Re: Hadoop 2.4.1 Snappy Smoke Test failed

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Thanks for your reply.  However, I do not think it is a 32-bit issue, because my Hadoop is 64-bit (I compiled it from source).  I think the way I installed snappy may be wrong.
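
One quick check, for what it's worth: Hadoop 2.4.1 ships a checknative tool that reports whether libsnappy is visible to the native hadoop library. It should list snappy as true once the setup is right:

# From the Hadoop install directory: list the native libraries Hadoop can load
bin/hadoop checknative -a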

Arthur
On 19 Aug, 2014, at 11:53 pm, Andre Kelpe <ak...@concurrentinc.com> wrote:

> Could this be caused by the fact that hadoop no longer ships with 64bit libs? https://issues.apache.org/jira/browse/HADOOP-9911
> 
> - André
> 
> 
> On Tue, Aug 19, 2014 at 5:40 PM, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:
> Hi,
> 
> I am trying Snappy in Hadoop 2.4.1, here are my steps: 
> 
> (CentOS 64-bit)
> 1)
> yum install snappy snappy-devel
> 
> 2)
> added the following 
> (core-site.xml)
>    <property>
>     <name>io.compression.codecs</name>
>     <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
>    </property>
> 
> 3) 
> mapred-site.xml
>    <property>
>     <name>mapreduce.admin.map.child.java.opts</name>
>     <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
>     <final>true</final>
>    </property>
>    <property>
>     <name>mapreduce.admin.reduce.child.java.opts</name>
>     <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
>     <final>true</final>
>    </property>
> 
> 4) smoke test
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen 100000 /tmp/teragenout
> 
> I got the following warning, and actually no test file is created in HDFS:
> 
> 14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.map.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the map JVM env using mapreduce.admin.user.env config settings.
> 14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.reduce.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the reduce JVM env using mapreduce.admin.user.env config settings.
> 
> Can anyone please advise how to install and enable Snappy in Hadoop 2.4.1? What might be wrong? Is my new change in mapred-site.xml incorrect?
> 
> Regards
> Arthur
> 
> 
> 
> 
> 
> 
> 
> 
> -- 
> André Kelpe
> andre@concurrentinc.com
> http://concurrentinc.com


Re: Hadoop 2.4.1 Snappy Smoke Test failed

Posted by Andre Kelpe <ak...@concurrentinc.com>.
Could this be caused by the fact that hadoop no longer ships with 64bit
libs? https://issues.apache.org/jira/browse/HADOOP-9911
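
For what it's worth, whether a bundled native library is 32-bit or 64-bit can be checked directly on the install, assuming the default lib/native layout:

# "ELF 32-bit" here would confirm the stock 32-bit build described in HADOOP-9911
file $HADOOP_HOME/lib/native/libhadoop.so.1.0.0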

- André


On Tue, Aug 19, 2014 at 5:40 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> I am trying Snappy in Hadoop 2.4.1, here are my steps:
>
> (CentOS 64-bit)
> 1)
> yum install snappy snappy-devel
>
> 2)
> added the following
> (core-site.xml)
>    <property>
>     <name>io.compression.codecs</name>
>
> <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
>    </property>
>
> 3)
> mapred-site.xml
>    <property>
>     <name>mapreduce.admin.map.child.java.opts</name>
>     <value>-server -XX:NewRatio=8
> -Djava.library.path=/usr/lib/hadoop/lib/native/
> -Djava.net.preferIPv4Stack=true</value>
>     <final>true</final>
>    </property>
>    <property>
>     <name>mapreduce.admin.reduce.child.java.opts</name>
>     <value>-server -XX:NewRatio=8
> -Djava.library.path=/usr/lib/hadoop/lib/native/
> -Djava.net.preferIPv4Stack=true</value>
>     <final>true</final>
>    </property>
>
> 4) smoke test
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen
> 100000 /tmp/teragenout
>
> I got the following warning, and actually no test file is created in
> HDFS:
>
> 14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in
> mapreduce.admin.map.child.java.opts can cause programs to no longer
> function if hadoop native libraries are used. These values should be set as
> part of the LD_LIBRARY_PATH in the map JVM env using
> mapreduce.admin.user.env config settings.
> 14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in
> mapreduce.admin.reduce.child.java.opts can cause programs to no longer
> function if hadoop native libraries are used. These values should be set as
> part of the LD_LIBRARY_PATH in the reduce JVM env using
> mapreduce.admin.user.env config settings.
>
> Can anyone please advise how to install and enable Snappy in Hadoop 2.4.1?
> What might be wrong? Is my new change in mapred-site.xml incorrect?
>
> Regards
> Arthur
>
>
>
>
>
>


-- 
André Kelpe
andre@concurrentinc.com
http://concurrentinc.com


Hadoop 2.4.1 Snappy Smoke Test failed

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I am trying Snappy in Hadoop 2.4.1, here are my steps: 

(CentOS 64-bit)
1)
yum install snappy snappy-devel

2)
added the following 
(core-site.xml)
   <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

3) 
mapred-site.xml
   <property>
    <name>mapreduce.admin.map.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>
   <property>
    <name>mapreduce.admin.reduce.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>

4) smoke test
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen 100000 /tmp/teragenout

I got the following warning, and actually no test file is created in HDFS:

14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.map.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the map JVM env using mapreduce.admin.user.env config settings.
14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.reduce.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the reduce JVM env using mapreduce.admin.user.env config settings.

Can anyone please advise how to install and enable Snappy in Hadoop 2.4.1? What might be wrong? Is my new change in mapred-site.xml incorrect?
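
As the warning itself suggests, one way to address it is to move the native library path out of the *.child.java.opts values and into mapreduce.admin.user.env in mapred-site.xml; a sketch, reusing the path from step 3:

   <property>
    <name>mapreduce.admin.user.env</name>
    <value>LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native/</value>
   </property>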

Regards
Arthur






Hadoop 2.4.1 Snappy Smoke Test failed

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I am trying Snappy in Hadoop 2.4.1, here are my steps: 

(CentOS 64-bit)
1)
yum install snappy snappy-devel

2)
added the following 
(core-site.xml)
   <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

3) 
mapred-site.xml
   <property>
    <name>mapreduce.admin.map.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>
   <property>
    <name>mapreduce.admin.reduce.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>

4) smoke test
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen 100000 /tmp/teragenout

I got the following warning, actually there is no any test file created in hdfs:

14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.map.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the map JVM env using mapreduce.admin.user.env config settings.
14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.reduce.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the reduce JVM env using mapreduce.admin.user.env config settings.

Can anyone please advise how to install and enable SNAPPY in Hadoop 2.4.1? or what would be wrong? or is my new change in mapred-site.xml incorrect?

Regards
Arthur






Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

   Could you show me the error message for rm2. please ?


Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 10:17 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> Thank y very much!
>
> At the moment if I run ./sbin/start-yarn.sh in rm1, the standby STANDBY ResourceManager
> in rm2 is not started accordingly.  Please advise what would be wrong?
> Thanks
>
> Regards
> Arthur
>
>
>
>
> On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there
> a way that I just run ./sbin/start-yarn.sh in rm1 and get the
> STANDBY ResourceManager in rm2 started as well?
>
> No, need to start multiple RMs separately.
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
> Interesting question. But one of the design for auto-failover is that the
> down-time of RM is invisible to end users. The end users can submit
> applications normally even if the failover happens.
>
> We can monitor the status of RMs by using the command-line (you did
> previously) or from webUI/webService
> (rm_address:portnumber/cluster/cluster). We can get the current status from
> there.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is
>> my yarn-site.xml.
>>
>> At the moment, the ResourceManager HA works if:
>>
>> 1) at rm1, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> active
>>
>> yarn rmadmin -getServiceState rm2
>> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
>> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
>> MILLISECONDS)
>> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>>
>> 2) at rm2, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> standby
>>
>>
>> Some questions:
>> Q1)  I need start yarn in EACH master separately, is this normal? Is
>> there a way that I just run ./sbin/start-yarn.sh in rm1 and get the
>> STANDBY ResourceManager in rm2 started as well?
>>
>> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
>> down in an auto-failover env? or how do you monitor the status of
>> ACTIVE/STANDBY ResourceManager?
>>
>>
>> Regards
>> Arthur
>>
>>
>> <?xml version="1.0"?>
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services</name>
>>       <value>mapreduce_shuffle</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.resourcemanager.address</name>
>>       <value>192.168.1.1:8032</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.resource-tracker.address</name>
>>        <value>192.168.1.1:8031</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.admin.address</name>
>>        <value>192.168.1.1:8033</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.scheduler.address</name>
>>        <value>192.168.1.1:8030</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.loacl-dirs</name>
>>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>>        <final>true</final>
>>    </property>
>>
>>    <property>
>>        <name>yarn.web-proxy.address</name>
>>        <value>192.168.1.1:8888</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>    </property>
>>
>>
>>
>>
>>    <property>
>>       <name>yarn.nodemanager.resource.memory-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.minimum-allocation-mb</name>
>>       <value>9216</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.maximum-allocation-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>
>>
>>   <property>
>>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>>     <value>2000</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.cluster-id</name>
>>     <value>cluster_rm</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.rm-ids</name>
>>     <value>rm1,rm2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm1</name>
>>     <value>192.168.1.1</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm2</name>
>>     <value>192.168.1.2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.recovery.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.store.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>>   </property>
>>   <property>
>>       <name>yarn.resourcemanager.zk-address</name>
>>       <value>rm1:2181,m135:2181,m137:2181</value>
>>   </property>
>>   <property>
>>
>> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>>     <value>5000</value>
>>   </property>
>>
>>   <!-- RM1 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm1</name>
>>     <value>192.168.1.1:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>>     <value>192.168.1.1:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>>     <value>192.168.1.1:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>>     <value>192.168.1.1:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>>     <value>192.168.1.1:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm1</name>
>>     <value>192.168.1.1:23142</value>
>>   </property>
>>
>>
>>   <!-- RM2 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm2</name>
>>     <value>192.168.1.2:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>>     <value>192.168.1.2:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>>     <value>192.168.1.2:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>>     <value>192.168.1.2:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>>     <value>192.168.1.2:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm2</name>
>>     <value>192.168.1.2:23142</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.remote-app-log-dir</name>
>>     <value>/edh/hadoop_logs/hadoop/</value>
>>   </property>
>>
>> </configuration>
>>
>>
>>
>> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>>
>> Hey, Arthur:
>>
>>     Did you use single node cluster or multiple nodes cluster? Could you
>> share your configuration file (yarn-site.xml) ? This looks like a
>> configuration issue.
>>
>> Thanks
>>
>> Xuan Gong
>>
>>
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
>> arthur.hk.chan@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If I have TWO nodes for ResourceManager HA, what should be the correct
>>> steps and commands to start and stop ResourceManager in a ResourceManager
>>> HA cluster ?
>>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it
>>> seems that  ./sbin/start-yarn.sh can only start YARN in a node at a
>>> time.
>>>
>>> Regards
>>> Arthur
>>>
>>>
>>>
>>
>
> CONFIDENTIALITY NOTICE
> NOTICE: This message is intended for the use of the individual or entity
> to which it is addressed and may contain information that is confidential,
> privileged and exempt from disclosure under applicable law. If the reader
> of this message is not the intended recipient, you are hereby notified that
> any printing, copying, dissemination, distribution, disclosure or
> forwarding of this communication is strictly prohibited. If you have
> received this communication in error, please contact the sender immediately
> and delete it from your system. Thank You.
>
>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

   Could you show me the error message for rm2. please ?


Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 10:17 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> Thank y very much!
>
> At the moment if I run ./sbin/start-yarn.sh in rm1, the standby STANDBY ResourceManager
> in rm2 is not started accordingly.  Please advise what would be wrong?
> Thanks
>
> Regards
> Arthur
>
>
>
>
> On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there
> a way that I just run ./sbin/start-yarn.sh in rm1 and get the
> STANDBY ResourceManager in rm2 started as well?
>
> No, need to start multiple RMs separately.
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
> Interesting question. But one of the design for auto-failover is that the
> down-time of RM is invisible to end users. The end users can submit
> applications normally even if the failover happens.
>
> We can monitor the status of RMs by using the command-line (you did
> previously) or from webUI/webService
> (rm_address:portnumber/cluster/cluster). We can get the current status from
> there.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is
>> my yarn-site.xml.
>>
>> At the moment, the ResourceManager HA works if:
>>
>> 1) at rm1, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> active
>>
>> yarn rmadmin -getServiceState rm2
>> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
>> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
>> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
>> MILLISECONDS)
>> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
>> connection exception: java.net.ConnectException: Connection refused; For
>> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>>
>>
>> 2) at rm2, run ./sbin/start-yarn.sh
>>
>> yarn rmadmin -getServiceState rm1
>> standby
>>
>>
>> Some questions:
>> Q1)  I need start yarn in EACH master separately, is this normal? Is
>> there a way that I just run ./sbin/start-yarn.sh in rm1 and get the
>> STANDBY ResourceManager in rm2 started as well?
>>
>> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
>> down in an auto-failover env? or how do you monitor the status of
>> ACTIVE/STANDBY ResourceManager?
>>
>>
>> Regards
>> Arthur
>>
>>
>> <?xml version="1.0"?>
>> <configuration>
>>
>> <!-- Site specific YARN configuration properties -->
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services</name>
>>       <value>mapreduce_shuffle</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.resourcemanager.address</name>
>>       <value>192.168.1.1:8032</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.resource-tracker.address</name>
>>        <value>192.168.1.1:8031</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.admin.address</name>
>>        <value>192.168.1.1:8033</value>
>>    </property>
>>
>>    <property>
>>        <name>yarn.resourcemanager.scheduler.address</name>
>>        <value>192.168.1.1:8030</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.loacl-dirs</name>
>>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>>        <final>true</final>
>>    </property>
>>
>>    <property>
>>        <name>yarn.web-proxy.address</name>
>>        <value>192.168.1.1:8888</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>>    </property>
>>
>>
>>
>>
>>    <property>
>>       <name>yarn.nodemanager.resource.memory-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.minimum-allocation-mb</name>
>>       <value>9216</value>
>>    </property>
>>
>>    <property>
>>       <name>yarn.scheduler.maximum-allocation-mb</name>
>>       <value>18432</value>
>>    </property>
>>
>>
>>
>>   <property>
>>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>>     <value>2000</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.cluster-id</name>
>>     <value>cluster_rm</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.ha.rm-ids</name>
>>     <value>rm1,rm2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm1</name>
>>     <value>192.168.1.1</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.hostname.rm2</name>
>>     <value>192.168.1.2</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.recovery.enabled</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.store.class</name>
>>
>> <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>>   </property>
>>   <property>
>>       <name>yarn.resourcemanager.zk-address</name>
>>       <value>rm1:2181,m135:2181,m137:2181</value>
>>   </property>
>>   <property>
>>
>> <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>>     <value>5000</value>
>>   </property>
>>
>>   <!-- RM1 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm1</name>
>>     <value>192.168.1.1:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>>     <value>192.168.1.1:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>>     <value>192.168.1.1:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>>     <value>192.168.1.1:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>>     <value>192.168.1.1:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm1</name>
>>     <value>192.168.1.1:23142</value>
>>   </property>
>>
>>
>>   <!-- RM2 configs -->
>>   <property>
>>     <name>yarn.resourcemanager.address.rm2</name>
>>     <value>192.168.1.2:23140</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>>     <value>192.168.1.2:23130</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>>     <value>192.168.1.2:23189</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>>     <value>192.168.1.2:23188</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>>     <value>192.168.1.2:23125</value>
>>   </property>
>>   <property>
>>     <name>yarn.resourcemanager.admin.address.rm2</name>
>>     <value>192.168.1.2:23142</value>
>>   </property>
>>
>>   <property>
>>     <name>yarn.nodemanager.remote-app-log-dir</name>
>>     <value>/edh/hadoop_logs/hadoop/</value>
>>   </property>
>>
>> </configuration>
>>
>>
>>
>> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>>
>> Hey, Arthur:
>>
>>     Did you use a single-node cluster or a multi-node cluster? Could you
>> share your configuration file (yarn-site.xml)? This looks like a
>> configuration issue.
>>
>> Thanks
>>
>> Xuan Gong
>>
>>
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
>> arthur.hk.chan@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If I have TWO nodes for ResourceManager HA, what are the correct
>>> steps and commands to start and stop the ResourceManager in a
>>> ResourceManager HA cluster?
>>> Unlike ./sbin/start-dfs.sh (which can start all NNs from one NN), it
>>> seems that ./sbin/start-yarn.sh can only start YARN on one node at a
>>> time.
>>>
>>> Regards
>>> Arthur
>>>
>>>
>>>
>>
>

Hadoop 2.4.1 Snappy Smoke Test failed

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

I am trying Snappy in Hadoop 2.4.1, here are my steps: 

(CentOS 64-bit)
1)
yum install snappy snappy-devel

2)
added the following 
(core-site.xml)
   <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
   </property>

3) 
mapred-site.xml
   <property>
    <name>mapreduce.admin.map.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>
   <property>
    <name>mapreduce.admin.reduce.child.java.opts</name>
    <value>-server -XX:NewRatio=8 -Djava.library.path=/usr/lib/hadoop/lib/native/ -Djava.net.preferIPv4Stack=true</value>
    <final>true</final>
   </property>

4) smoke test
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar  teragen 100000 /tmp/teragenout

I got the following warning, and actually no test file was created in HDFS:

14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.map.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the map JVM env using mapreduce.admin.user.env config settings.
14/08/19 22:50:10 WARN mapred.YARNRunner: Usage of -Djava.library.path in mapreduce.admin.reduce.child.java.opts can cause programs to no longer function if hadoop native libraries are used. These values should be set as part of the LD_LIBRARY_PATH in the reduce JVM env using mapreduce.admin.user.env config settings.

Can anyone please advise how to install and enable Snappy in Hadoop 2.4.1? What could be wrong, or is my change in mapred-site.xml incorrect?
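
(My best guess at what the warning is asking for, sketched only and assuming the native libraries really do live under /usr/lib/hadoop/lib/native/: drop -Djava.library.path from the two child java.opts above and point LD_LIBRARY_PATH at that directory via mapreduce.admin.user.env in mapred-site.xml instead, e.g.

   <property>
    <name>mapreduce.admin.user.env</name>
    <!-- assumed path; adjust to wherever libsnappy/libhadoop actually live -->
    <value>LD_LIBRARY_PATH=/usr/lib/hadoop/lib/native/</value>
   </property>

If your build includes it, "hadoop checknative -a" should then show whether the snappy library is actually being picked up.)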

Regards
Arthur







Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

Thank you very much!

At the moment, if I run ./sbin/start-yarn.sh on rm1, the STANDBY ResourceManager on rm2 is not started accordingly. Please advise what could be wrong. Thanks

Regards
Arthur




On 12 Aug, 2014, at 1:13 pm, Xuan Gong <xg...@hortonworks.com> wrote:

> Some questions:
> Q1) I need to start YARN on EACH master separately; is this normal? Is there a way that I can just run ./sbin/start-yarn.sh on rm1 and get the STANDBY ResourceManager on rm2 started as well?
>
> No, you need to start the RMs separately.
>
> Q2) How can I get alerts (e.g. by email) if the ACTIVE ResourceManager goes down in an auto-failover environment? Or how do you monitor the status of the ACTIVE/STANDBY ResourceManager?
>
> Interesting question. But one of the design goals of auto-failover is that RM downtime is invisible to end users: end users can submit applications normally even if a failover happens.
>
> We can monitor the status of the RMs by using the command line (as you did previously) or from the web UI/web service (rm_address:portnumber/cluster/cluster). We can get the current status from there.
> 
> Thanks
> 
> Xuan Gong
> 
> 
> On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:
> Hi,
> 
> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is my yarn-site.xml.
> 
> At the moment, the ResourceManager HA works if:
> 
> 1) at rm1, run ./sbin/start-yarn.sh
> 
> yarn rmadmin -getServiceState rm1
> active
> 
> yarn rmadmin -getServiceState rm2
> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/192.168.1.1:23142. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
> 
> 
> 2) at rm2, run ./sbin/start-yarn.sh
> 
> yarn rmadmin -getServiceState rm1
> standby
> 
> 
> Some questions:
> Q1) I need to start YARN on EACH master separately; is this normal? Is there a way that I can just run ./sbin/start-yarn.sh on rm1 and get the STANDBY ResourceManager on rm2 started as well?
>
> Q2) How can I get alerts (e.g. by email) if the ACTIVE ResourceManager goes down in an auto-failover environment? Or how do you monitor the status of the ACTIVE/STANDBY ResourceManager?
> 
> 
> Regards
> Arthur
> 
> 
> <?xml version="1.0"?>
> <configuration>
> 
> <!-- Site specific YARN configuration properties -->
> 
>    <property>
>       <name>yarn.nodemanager.aux-services</name>
>       <value>mapreduce_shuffle</value>
>    </property>
> 
>    <property>
>       <name>yarn.resourcemanager.address</name>
>       <value>192.168.1.1:8032</value>
>    </property>
> 
>    <property>
>        <name>yarn.resourcemanager.resource-tracker.address</name>
>        <value>192.168.1.1:8031</value>
>    </property>
> 
>    <property>
>        <name>yarn.resourcemanager.admin.address</name>
>        <value>192.168.1.1:8033</value>
>    </property>
> 
>    <property>
>        <name>yarn.resourcemanager.scheduler.address</name>
>        <value>192.168.1.1:8030</value>
>    </property>
> 
>    <property>
>       <name>yarn.nodemanager.local-dirs</name>
>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>        <final>true</final>
>    </property>
> 
>    <property>
>        <name>yarn.web-proxy.address</name>
>        <value>192.168.1.1:8888</value>
>    </property>
> 
>    <property>
>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>    </property>
> 
> 
> 
> 
>    <property>
>       <name>yarn.nodemanager.resource.memory-mb</name>
>       <value>18432</value>
>    </property>
> 
>    <property>
>       <name>yarn.scheduler.minimum-allocation-mb</name>
>       <value>9216</value>
>    </property>
> 
>    <property>
>       <name>yarn.scheduler.maximum-allocation-mb</name>
>       <value>18432</value>
>    </property>
> 
> 
> 
>   <property>
>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>     <value>2000</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.cluster-id</name>
>     <value>cluster_rm</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.rm-ids</name>
>     <value>rm1,rm2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm1</name>
>     <value>192.168.1.1</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm2</name>
>     <value>192.168.1.2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.class</name>
>     <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.recovery.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.store.class</name>
>     <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>   </property>
>   <property>
>       <name>yarn.resourcemanager.zk-address</name>
>       <value>rm1:2181,m135:2181,m137:2181</value>
>   </property>
>   <property>
>     <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>     <value>5000</value>
>   </property>
> 
>   <!-- RM1 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm1</name>
>     <value>192.168.1.1:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>     <value>192.168.1.1:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>     <value>192.168.1.1:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>     <value>192.168.1.1:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>     <value>192.168.1.1:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm1</name>
>     <value>192.168.1.1:23142</value>
>   </property>
> 
> 
>   <!-- RM2 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm2</name>
>     <value>192.168.1.2:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>     <value>192.168.1.2:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>     <value>192.168.1.2:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>     <value>192.168.1.2:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>     <value>192.168.1.2:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm2</name>
>     <value>192.168.1.2:23142</value>
>   </property>
> 
>   <property>
>     <name>yarn.nodemanager.remote-app-log-dir</name>
>     <value>/edh/hadoop_logs/hadoop/</value>
>   </property>
> 
> </configuration>
> 
> 
> 
> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
> 
>> Hey, Arthur:
>> 
>>     Did you use a single-node cluster or a multi-node cluster? Could you share your configuration file (yarn-site.xml)? This looks like a configuration issue.
>> 
>> Thanks
>> 
>> Xuan Gong
>> 
>> 
>> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:
>> Hi,
>> 
>> If I have TWO nodes for ResourceManager HA, what are the correct steps and commands to start and stop the ResourceManager in a ResourceManager HA cluster?
>> Unlike ./sbin/start-dfs.sh (which can start all NNs from one NN), it seems that ./sbin/start-yarn.sh can only start YARN on one node at a time.
>> 
>> Regards
>> Arthur
>> 
>> 
> 
> 
> 


Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Some questions:
Q1) I need to start YARN on EACH master separately; is this normal? Is there
a way that I can just run ./sbin/start-yarn.sh on rm1 and get the
STANDBY ResourceManager on rm2 started as well?

No, you need to start the RMs separately.
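
For example (a sketch only, assuming the stock sbin scripts of a standard
Hadoop 2.4.1 install, with HADOOP_HOME pointing at that install), after
running ./sbin/start-yarn.sh on rm1 you could bring the second
ResourceManager up on rm2, and later stop it, with:

  # on rm2: start the second ResourceManager daemon
  $HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager

  # on rm2: stop it again
  $HADOOP_HOME/sbin/yarn-daemon.sh stop resourcemanager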

Q2) How can I get alerts (e.g. by email) if the ACTIVE ResourceManager goes
down in an auto-failover environment? Or how do you monitor the status of
the ACTIVE/STANDBY ResourceManager?

Interesting question. But one of the design goals of auto-failover is that
RM downtime is invisible to end users: end users can submit applications
normally even if a failover happens.

We can monitor the status of the RMs by using the command line (as you did
previously) or from the web UI/web service
(rm_address:portnumber/cluster/cluster). We can get the current status from
there.
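
For the alerting part of Q2, a very rough polling sketch (the mail command,
the address admin@example.com and running it from cron are placeholders and
assumptions, not anything taken from your setup) could look like:

  #!/bin/bash
  # Sketch: poll both RM ids from yarn.resourcemanager.ha.rm-ids and
  # send a mail if neither of them reports "active".
  active=""
  for id in rm1 rm2; do
    state=$(yarn rmadmin -getServiceState "$id" 2>/dev/null)
    echo "$id: $state"
    [ "$state" = "active" ] && active="$id"
  done
  if [ -z "$active" ]; then
    echo "No ACTIVE ResourceManager found" | mail -s "YARN RM alert" admin@example.com
  fi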

Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is my
> yarn-site.xml.
>
> At the moment, the ResourceManager HA works if:
>
> 1) at rm1, run ./sbin/start-yarn.sh
>
> yarn rmadmin -getServiceState rm1
> active
>
> yarn rmadmin -getServiceState rm2
> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
> MILLISECONDS)
> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
> connection exception: java.net.ConnectException: Connection refused; For
> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
>
> 2) at rm2, run ./sbin/start-yarn.sh
>
> yarn rmadmin -getServiceState rm1
> standby
>
>
> Some questions:
> Q1) I need to start YARN on EACH master separately; is this normal? Is there
> a way that I can just run ./sbin/start-yarn.sh on rm1 and get the
> STANDBY ResourceManager on rm2 started as well?
>
> Q2) How can I get alerts (e.g. by email) if the ACTIVE ResourceManager goes
> down in an auto-failover environment? Or how do you monitor the status of
> the ACTIVE/STANDBY ResourceManager?
>
>
> Regards
> Arthur
>
>
> <?xml version="1.0"?>
> <configuration>
>
> <!-- Site specific YARN configuration properties -->
>
>    <property>
>       <name>yarn.nodemanager.aux-services</name>
>       <value>mapreduce_shuffle</value>
>    </property>
>
>    <property>
>       <name>yarn.resourcemanager.address</name>
>       <value>192.168.1.1:8032</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.resource-tracker.address</name>
>        <value>192.168.1.1:8031</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.admin.address</name>
>        <value>192.168.1.1:8033</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.scheduler.address</name>
>        <value>192.168.1.1:8030</value>
>    </property>
>
>    <property>
>       <name>yarn.nodemanager.local-dirs</name>
>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>        <final>true</final>
>    </property>
>
>    <property>
>        <name>yarn.web-proxy.address</name>
>        <value>192.168.1.1:8888</value>
>    </property>
>
>    <property>
>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>    </property>
>
>
>
>
>    <property>
>       <name>yarn.nodemanager.resource.memory-mb</name>
>       <value>18432</value>
>    </property>
>
>    <property>
>       <name>yarn.scheduler.minimum-allocation-mb</name>
>       <value>9216</value>
>    </property>
>
>    <property>
>       <name>yarn.scheduler.maximum-allocation-mb</name>
>       <value>18432</value>
>    </property>
>
>
>
>   <property>
>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>     <value>2000</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.cluster-id</name>
>     <value>cluster_rm</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.rm-ids</name>
>     <value>rm1,rm2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm1</name>
>     <value>192.168.1.1</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm2</name>
>     <value>192.168.1.2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.class</name>
>     <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.recovery.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.store.class</name>
>     <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>   </property>
>   <property>
>       <name>yarn.resourcemanager.zk-address</name>
>       <value>rm1:2181,m135:2181,m137:2181</value>
>   </property>
>   <property>
>     <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>     <value>5000</value>
>   </property>
>
>   <!-- RM1 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm1</name>
>     <value>192.168.1.1:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>     <value>192.168.1.1:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>     <value>192.168.1.1:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>     <value>192.168.1.1:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>     <value>192.168.1.1:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm1</name>
>     <value>192.168.1.1:23142</value>
>   </property>
>
>
>   <!-- RM2 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm2</name>
>     <value>192.168.1.2:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>     <value>192.168.1.2:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>     <value>192.168.1.2:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>     <value>192.168.1.2:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>     <value>192.168.1.2:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm2</name>
>     <value>192.168.1.2:23142</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.remote-app-log-dir</name>
>     <value>/edh/hadoop_logs/hadoop/</value>
>   </property>
>
> </configuration>
>
>
>
> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Hey, Arthur:
>
>     Did you use single node cluster or multiple nodes cluster? Could you
> share your configuration file (yarn-site.xml) ? This looks like a
> configuration issue.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> If I have TWO nodes for ResourceManager HA, what should be the correct
>> steps and commands to start and stop ResourceManager in a ResourceManager
>> HA cluster ?
>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems
>> that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
>>
>> Regards
>> Arthur
>>
>>
>>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.

Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Some questions:
Q1) I need to start YARN on EACH master separately; is this normal? Is there
a way to just run ./sbin/start-yarn.sh on rm1 and have the
STANDBY ResourceManager on rm2 started as well?

No, there is no such option; you need to start the multiple RMs separately.
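
For example (a minimal sketch, assuming the stock sbin scripts shipped with
Hadoop 2.4.1 and that you run them from the Hadoop install directory on each
node), each RM can be started and stopped with the per-daemon script:

  # on rm1
  ./sbin/yarn-daemon.sh start resourcemanager
  # on rm2
  ./sbin/yarn-daemon.sh start resourcemanager
  # to stop the RM on a given node
  ./sbin/yarn-daemon.sh stop resourcemanager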

Q2) How can I get alerts (e.g. by email) if the ACTIVE ResourceManager goes down
in an auto-failover environment? Or how do you monitor the status of the
ACTIVE/STANDBY ResourceManagers?

Interesting question. One of the design goals of auto-failover is that RM
downtime is invisible to end users: they can keep submitting applications
normally even while a failover happens.

We can monitor the status of the RMs from the command line (as you did
previously) or from the web UI / web service
(rm_address:portnumber/cluster/cluster), which reports the current state.
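
As a minimal sketch of an email alert (assuming the yarn command is on the
PATH of the monitoring host and that a local mail command is configured; the
rm1/rm2 ids and the admin address are placeholders to adapt), a small script
run from cron could look like this:

  #!/bin/bash
  # Alert if neither ResourceManager reports the "active" HA state.
  for id in rm1 rm2; do
    state=$(yarn rmadmin -getServiceState "$id" 2>/dev/null)
    echo "$id: ${state:-unreachable}"
    if [ "$state" = "active" ]; then
      exit 0   # at least one active RM, nothing to report
    fi
  done
  echo "No active ResourceManager found" | mail -s "YARN RM alert" admin@example.com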

Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 5:12 PM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> it is a multiple-node cluster, two master nodes (rm1 and rm2), below is my
> yarn-site.xml.
>
> At the moment, the ResourceManager HA works if:
>
> 1) at rm1, run ./sbin/start-yarn.sh
>
> yarn rmadmin -getServiceState rm1
> active
>
> yarn rmadmin -getServiceState rm2
> 14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/
> 192.168.1.1:23142. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000
> MILLISECONDS)
> Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on
> connection exception: java.net.ConnectException: Connection refused; For
> more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
>
>
> 2) at rm2, run ./sbin/start-yarn.sh
>
> yarn rmadmin -getServiceState rm1
> standby
>
>
> Some questions:
> Q1)  I need start yarn in EACH master separately, is this normal? Is there
> a way that I just run ./sbin/start-yarn.sh in rm1 and get the
> STANDBY ResourceManager in rm2 started as well?
>
> Q2) How to get alerts (e.g. by email) if the ACTIVE ResourceManager is
> down in an auto-failover env? or how do you monitor the status of
> ACTIVE/STANDBY ResourceManager?
>
>
> Regards
> Arthur
>
>
> <?xml version="1.0"?>
> <configuration>
>
> <!-- Site specific YARN configuration properties -->
>
>    <property>
>       <name>yarn.nodemanager.aux-services</name>
>       <value>mapreduce_shuffle</value>
>    </property>
>
>    <property>
>       <name>yarn.resourcemanager.address</name>
>       <value>192.168.1.1:8032</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.resource-tracker.address</name>
>        <value>192.168.1.1:8031</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.admin.address</name>
>        <value>192.168.1.1:8033</value>
>    </property>
>
>    <property>
>        <name>yarn.resourcemanager.scheduler.address</name>
>        <value>192.168.1.1:8030</value>
>    </property>
>
>    <property>
>       <name>yarn.nodemanager.loacl-dirs</name>
>        <value>/edh/hadoop_data/mapred/nodemanager</value>
>        <final>true</final>
>    </property>
>
>    <property>
>        <name>yarn.web-proxy.address</name>
>        <value>192.168.1.1:8888</value>
>    </property>
>
>    <property>
>       <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>       <value>org.apache.hadoop.mapred.ShuffleHandler</value>
>    </property>
>
>
>
>
>    <property>
>       <name>yarn.nodemanager.resource.memory-mb</name>
>       <value>18432</value>
>    </property>
>
>    <property>
>       <name>yarn.scheduler.minimum-allocation-mb</name>
>       <value>9216</value>
>    </property>
>
>    <property>
>       <name>yarn.scheduler.maximum-allocation-mb</name>
>       <value>18432</value>
>    </property>
>
>
>
>   <property>
>     <name>yarn.resourcemanager.connect.retry-interval.ms</name>
>     <value>2000</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.cluster-id</name>
>     <value>cluster_rm</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.ha.rm-ids</name>
>     <value>rm1,rm2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm1</name>
>     <value>192.168.1.1</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.hostname.rm2</name>
>     <value>192.168.1.2</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.class</name>
>     <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.recovery.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.store.class</name>
>     <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
>   </property>
>   <property>
>       <name>yarn.resourcemanager.zk-address</name>
>       <value>rm1:2181,m135:2181,m137:2181</value>
>   </property>
>   <property>
>     <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
>     <value>5000</value>
>   </property>
>
>   <!-- RM1 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm1</name>
>     <value>192.168.1.1:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm1</name>
>     <value>192.168.1.1:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm1</name>
>     <value>192.168.1.1:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm1</name>
>     <value>192.168.1.1:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
>     <value>192.168.1.1:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm1</name>
>     <value>192.168.1.1:23142</value>
>   </property>
>
>
>   <!-- RM2 configs -->
>   <property>
>     <name>yarn.resourcemanager.address.rm2</name>
>     <value>192.168.1.2:23140</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address.rm2</name>
>     <value>192.168.1.2:23130</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.https.address.rm2</name>
>     <value>192.168.1.2:23189</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.webapp.address.rm2</name>
>     <value>192.168.1.2:23188</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
>     <value>192.168.1.2:23125</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.admin.address.rm2</name>
>     <value>192.168.1.2:23142</value>
>   </property>
>
>   <property>
>     <name>yarn.nodemanager.remote-app-log-dir</name>
>     <value>/edh/hadoop_logs/hadoop/</value>
>   </property>
>
> </configuration>
>
>
>
> On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:
>
> Hey, Arthur:
>
>     Did you use single node cluster or multiple nodes cluster? Could you
> share your configuration file (yarn-site.xml) ? This looks like a
> configuration issue.
>
> Thanks
>
> Xuan Gong
>
>
> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
>> Hi,
>>
>> If I have TWO nodes for ResourceManager HA, what should be the correct
>> steps and commands to start and stop ResourceManager in a ResourceManager
>> HA cluster ?
>> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems
>> that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
>>
>> Regards
>> Arthur
>>
>>
>>
>


Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

It is a multi-node cluster with two master nodes (rm1 and rm2); my yarn-site.xml is below.

At the moment, ResourceManager HA behaves as follows:

1) at rm1, run ./sbin/start-yarn.sh

yarn rmadmin -getServiceState rm1
active

yarn rmadmin -getServiceState rm2
14/08/12 07:47:59 INFO ipc.Client: Retrying connect to server: rm1/192.168.1.1:23142. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From rm2/192.168.1.2 to rm2:23142 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused


2) at rm2, run ./sbin/start-yarn.sh

yarn rmadmin -getServiceState rm1
standby


Some questions:
Q1) I need to start YARN on EACH master separately; is this normal? Is there a way to just run ./sbin/start-yarn.sh on rm1 and have the STANDBY ResourceManager on rm2 started as well?

Q2) How can I get alerts (e.g. by email) if the ACTIVE ResourceManager goes down in an auto-failover environment? Or how do you monitor the status of the ACTIVE/STANDBY ResourceManagers?


Regards
Arthur


<?xml version="1.0"?>
<configuration>

<!-- Site specific YARN configuration properties -->

   <property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
   </property>

   <property>
      <name>yarn.resourcemanager.address</name>
      <value>192.168.1.1:8032</value>
   </property>

   <property>
       <name>yarn.resourcemanager.resource-tracker.address</name>
       <value>192.168.1.1:8031</value>
   </property>

   <property>
       <name>yarn.resourcemanager.admin.address</name>
       <value>192.168.1.1:8033</value>
   </property>

   <property>
       <name>yarn.resourcemanager.scheduler.address</name>
       <value>192.168.1.1:8030</value>
   </property>

   <property>
      <name>yarn.nodemanager.loacl-dirs</name>
       <value>/edh/hadoop_data/mapred/nodemanager</value>
       <final>true</final>
   </property>

   <property>
       <name>yarn.web-proxy.address</name>
       <value>192.168.1.1:8888</value>
   </property>

   <property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
   </property>




   <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>18432</value>
   </property>

   <property>
      <name>yarn.scheduler.minimum-allocation-mb</name>
      <value>9216</value>
   </property>

   <property>
      <name>yarn.scheduler.maximum-allocation-mb</name>
      <value>18432</value>
   </property>



  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>2000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster_rm</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>192.168.1.1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>192.168.1.2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
      <name>yarn.resourcemanager.zk-address</name>
      <value>rm1:2181,m135:2181,m137:2181</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
    <value>5000</value>
  </property>

  <!-- RM1 configs -->
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>192.168.1.1:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>192.168.1.1:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm1</name>
    <value>192.168.1.1:23189</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>192.168.1.1:23188</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>192.168.1.1:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm1</name>
    <value>192.168.1.1:23142</value>
  </property>


  <!-- RM2 configs -->
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>192.168.1.2:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>192.168.1.2:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm2</name>
    <value>192.168.1.2:23189</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>192.168.1.2:23188</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>192.168.1.2:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm2</name>
    <value>192.168.1.2:23142</value>
  </property>

  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/edh/hadoop_logs/hadoop/</value>
  </property>

</configuration>



On 12 Aug, 2014, at 1:49 am, Xuan Gong <xg...@hortonworks.com> wrote:

> Hey, Arthur:
> 
>     Did you use single node cluster or multiple nodes cluster? Could you share your configuration file (yarn-site.xml) ? This looks like a configuration issue. 
> 
> Thanks
> 
> Xuan Gong
> 
> 
> On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:
> Hi,
> 
> If I have TWO nodes for ResourceManager HA, what should be the correct steps and commands to start and stop ResourceManager in a ResourceManager HA cluster ?
> Unlike ./sbin/start-dfs.sh (which can start all NNs from a NN), it seems that  ./sbin/start-yarn.sh can only start YARN in a node at a time.
> 
> Regards
> Arthur
> 
> 


Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by Xuan Gong <xg...@hortonworks.com>.
Hey, Arthur:

    Did you use a single-node cluster or a multi-node cluster? Could you
share your configuration file (yarn-site.xml)? This looks like a
configuration issue.

Thanks

Xuan Gong


On Mon, Aug 11, 2014 at 9:45 AM, Arthur.hk.chan@gmail.com <
arthur.hk.chan@gmail.com> wrote:

> Hi,
>
> If I have TWO nodes for ResourceManager HA, what are the correct
> steps and commands to start and stop the ResourceManager in a ResourceManager
> HA cluster?
> Unlike ./sbin/start-dfs.sh (which can start all NNs from one NN), it seems
> that ./sbin/start-yarn.sh can only start YARN on one node at a time.
>
> Regards
> Arthur
>
>
> On 11 Aug, 2014, at 11:04 pm, Arthur.hk.chan@gmail.com <
> arthur.hk.chan@gmail.com> wrote:
>
> Hi
>
> I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and
> NM2). When verifying ResourceManager failover, I used “kill -9” to terminate
> the ResourceManager on name node 1 (NM1). When I then run the test job, the
> ResourceManager failover seems to keep retrying NM1 and NM2
> non-stop.
>
> Does anyone have an idea what could be wrong here?  Thanks
>
> Regards
> Arthur
>
>
>
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar
> pi  5 1010000000
> Number of Maps  = 5
> Samples per Map = 1010000000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Starting Job
> 14/08/11 22:35:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:30 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:34 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:37 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> 14/08/11 22:35:39 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm2
> 14/08/11 22:35:40 INFO client.ConfiguredRMFailoverProxyProvider: Failing
> over to nm1
> ….
>
>
>
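
(If it helps to narrow this down, a minimal sketch of pulling the ResourceManager log on the rm2 host. It assumes the default log directory and daemon log naming of a plain 2.4.1 tarball install, so adjust the paths to your layout.)

  # On 192.168.1.2 (rm2): show the most recent ResourceManager log entries
  tail -n 200 $HADOOP_HOME/logs/yarn-*-resourcemanager-*.log

  # Focus on leader-election and state-store problems
  grep -iE 'elector|zookeeper|fatal|exception' \
    $HADOOP_HOME/logs/yarn-*-resourcemanager-*.log | tail -n 50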


Re: Hadoop 2.4.1 Verifying Automatic Failover Failed: ResourceManager

Posted by "Arthur.hk.chan@gmail.com" <ar...@gmail.com>.
Hi,

If I have TWO nodes for ResourceManager HA, what are the correct steps and commands to start and stop the ResourceManager in a ResourceManager HA cluster?
Unlike ./sbin/start-dfs.sh (which can start all NNs from one NN), it seems that ./sbin/start-yarn.sh can only start YARN on one node at a time.

Regards
Arthur


On 11 Aug, 2014, at 11:04 pm, Arthur.hk.chan@gmail.com <ar...@gmail.com> wrote:

> Hi 
> 
> I am running Hadoop 2.4.1 with YARN HA enabled (two name nodes, NM1 and NM2). When verifying ResourceManager failover, I used “kill -9” to terminate the ResourceManager on name node 1 (NM1). When I then run the test job, the ResourceManager failover seems to keep retrying NM1 and NM2 non-stop. 
> 
> Does anyone have an idea what could be wrong here?  Thanks
> 
> Regards
> Arthur
> 
> 
> 
> bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.1.jar pi  5 1010000000
> Number of Maps  = 5
> Samples per Map = 1010000000
> Wrote input for Map #0
> Wrote input for Map #1
> Wrote input for Map #2
> Wrote input for Map #3
> Wrote input for Map #4
> Starting Job
> 14/08/11 22:35:23 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:24 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> 14/08/11 22:35:25 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:28 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> 14/08/11 22:35:30 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:32 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> 14/08/11 22:35:34 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:37 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> 14/08/11 22:35:39 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm2
> 14/08/11 22:35:40 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to nm1
> ….
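
(A minimal sketch of the per-host approach, assuming a standard 2.4.1 layout under $HADOOP_HOME with the same yarn-site.xml deployed to both RM hosts. start-yarn.sh only launches the ResourceManager on the machine it is run from, so the peer RM is started and stopped with the single-daemon script.)

  # On rm1: start YARN as usual; this brings up the local ResourceManager
  # plus the NodeManagers listed in the slaves file
  ./sbin/start-yarn.sh

  # On rm2: start only the ResourceManager daemon
  ./sbin/yarn-daemon.sh start resourcemanager

  # To stop a ResourceManager, run this on the host in question
  ./sbin/yarn-daemon.sh stop resourcemanager

With automatic failover enabled, which RM ends up active is decided by the embedded elector rather than by the start order.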

