Posted to dev@ambari.apache.org by Dmitro Lisnichenko <dl...@hortonworks.com> on 2016/01/06 16:52:17 UTC
Review Request 41981: Ambari has lowercase hostnames while cluster is installed with uppercase hostnames.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/41981/
-----------------------------------------------------------
Review request for Ambari, Alexandr Antonenko, Jayush Luniya, Myroslav Papirkovskyy, and Sumit Mohanty.
Bugs: AMBARI-14453
https://issues.apache.org/jira/browse/AMBARI-14453
Repository: ambari
Description
-------
Steps followed:
1. A Hadoop cluster with the configuration below is installed successfully with uppercase hostnames.
{code}
CLUSTERNAME='N91'
MASTER1=N9-1-1.labs
MASTER2=N9-1-2.labs
DATANODE1=N9-1-3.labs
DATANODE2=N9-1-4.labs
DATANODE3=N9-1-5.labs
{code}
Snippet of the /etc/hosts file:
{code}
10.0.8.1 N9-1-1.labs N9-1-1 byn001-1 hadoopvm1-1
10.0.8.2 N9-1-2.labs N9-1-2 byn001-2 hadoopvm1-2
10.0.8.3 N9-1-3.labs N9-1-3 byn001-3 hadoopvm1-3
10.0.8.4 N9-1-4.labs N9-1-4 byn001-4 hadoopvm1-4
10.0.8.5 N9-1-5.labs N9-1-5 byn001-5 hadoopvm1-5
{code}
2. The Ambari Hosts page shows all hostnames in lowercase.
3. Attempts to run jobs fail with:
{code}
Error: java.net.ConnectException: Call From N9-1-4.labs/10.0.8.4 to n9-1-1.labs:8020 failed on connection exception: java.net.ConnectException: Connection refused;
{code}
Snippet of the MapReduce job failure:
{code}
out: 15/12/21 08:12:09 INFO mapreduce.Job: Task Id : attempt_1450702943449_0001_m_000002_0, Status : FAILED
out: Error: java.net.ConnectException: Call From N9-1-4.labs/10.0.8.4 to n9-1-1.labs:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
out: at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
out: at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
out: at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
out: at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
out: at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
at org.apache.hadoop.ipc.Client.call(Client.java:1431)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:252)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1315)
at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1311)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1311)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.needsTaskCommit(FileOutputCommitter.java:641)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.needsTaskCommit(FileOutputCommitter.java:630)
at org.apache.hadoop.mapred.Task.isCommitRequired(Task.java:1085)
at org.apache.hadoop.mapred.Task.done(Task.java:1042)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:345)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:612)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:710)
at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:373)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1493)
at org.apache.hadoop.ipc.Client.call(Client.java:1397)
... 27 more
{code}
4. HDFS, MapReduce, HBase, and YARN services that were up and running after installation go down.
Attached are the logs for the YARN and HBase services.
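The hostname mismatch described above can be illustrated without a cluster. The following is a minimal, hypothetical Python sketch (not the actual ambari-agent hostname.py code) of what an agent-side lowercasing step does to the uppercase names from /etc/hosts, and why a case-sensitive comparison elsewhere then fails:

```python
# Hypothetical sketch only; names and helper are assumptions, not Ambari code.

def normalize_hostname(fqdn):
    # Lowercasing is safe for DNS resolution (DNS names are case-insensitive,
    # RFC 4343), but any component that compares the normalized name against
    # the original uppercase /etc/hosts entry with a case-SENSITIVE string
    # comparison will no longer find a match.
    return fqdn.strip().lower()

hosts = ["N9-1-1.labs", "N9-1-2.labs", "N9-1-3.labs",
         "N9-1-4.labs", "N9-1-5.labs"]
normalized = [normalize_hostname(h) for h in hosts]

print(normalized[0])                 # n9-1-1.labs
print("N9-1-1.labs" in normalized)   # False: case-sensitive lookup misses
```

This mirrors what the bug report observes: Ambari displays n9-1-1.labs while the cluster was installed as N9-1-1.labs.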
Diffs
-----
ambari-agent/src/main/python/ambari_agent/hostname.py 87e1e0f
ambari-agent/src/test/python/ambari_agent/TestHeartbeat.py 26f6286
ambari-web/app/controllers/wizard/step2_controller.js 3b51761
ambari-web/app/mixins/wizard/assign_master_components.js 7dc267e
ambari-web/test/controllers/wizard/step2_test.js d62b247
Diff: https://reviews.apache.org/r/41981/diff/
Testing
-------
mvn clean test
Thanks,
Dmitro Lisnichenko
Re: Review Request 41981: Ambari has lowercase hostnames while cluster is installed with uppercase hostnames.
Posted by Jayush Luniya <jl...@hortonworks.com>.
> On Jan. 12, 2016, 8:09 p.m., Sumit Mohanty wrote:
> > Going through the latest comments - e.g. https://issues.apache.org/jira/browse/AMBARI-14453?focusedCommentId=15090328&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15090328
> >
> > It does not look like an issue that needs to be fixed in agent or BE. We should cancel the patch.
Can you cancel this review as it is no longer required?
- Jayush
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/41981/#review114059
-----------------------------------------------------------
Re: Review Request 41981: Ambari has lowercase hostnames while cluster is installed with uppercase hostnames.
Posted by Sumit Mohanty <sm...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/41981/#review114059
-----------------------------------------------------------
Going through the latest comments - e.g. https://issues.apache.org/jira/browse/AMBARI-14453?focusedCommentId=15090328&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15090328
It does not look like an issue that needs to be fixed in agent or BE. We should cancel the patch.
- Sumit Mohanty
Re: Review Request 41981: Ambari has lowercase hostnames while cluster is installed with uppercase hostnames.
Posted by Jayush Luniya <jl...@hortonworks.com>.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/41981/#review114053
-----------------------------------------------------------
-1. It doesn't look like lowercasing was causing those failures. We should be using lowercase hostnames.
- Jayush Luniya
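The recommendation to standardize on lowercase hostnames can also be approached from the consuming side: since DNS treats names case-insensitively (RFC 4343), hostname equality checks should ignore ASCII case. This is an illustrative sketch under that assumption, not Ambari code:

```python
# Illustrative helper (an assumption, not part of Ambari): compare two
# hostnames the way DNS does -- ignoring ASCII case and a trailing root dot.

def same_host(a, b):
    return a.rstrip(".").lower() == b.rstrip(".").lower()

print(same_host("N9-1-4.labs", "n9-1-4.labs"))   # True
print(same_host("n9-1-1.labs", "n9-1-2.labs"))   # False
```

With comparisons like this, the uppercase names in /etc/hosts and the lowercase names Ambari stores would match, regardless of which casing is canonical.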