Posted to issues@ozone.apache.org by "Attila Doroszlai (Jira)" <ji...@apache.org> on 2023/10/20 09:31:00 UTC

[jira] [Comment Edited] (HDDS-9512) Datanode in Ozone shares the same port as the datanode in HDFS.

    [ https://issues.apache.org/jira/browse/HDDS-9512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17777656#comment-17777656 ] 

Attila Doroszlai edited comment on HDDS-9512 at 10/20/23 9:30 AM:
------------------------------------------------------------------

Thanks [~wangyuanben] for reporting this issue.  I agree that we should change the default value of this port to avoid inconvenience for users.

bq. We can't run the datanode of HDFS when HddsDatanodeService is on as well.

Both Ozone and HDFS allow configuring the port.  So they can be run on the same node by configuring one of them to use a different port.
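
For example, the Ozone datanode client port could be overridden in ozone-site.xml like this (19864 is just an illustrative value; any free port would work):

{code:xml}
<property>
  <name>hdds.datanode.client.port</name>
  <value>19864</value>
</property>
{code}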


was (Author: adoroszlai):
Thanks [~wangyuanben] for reporting this issue.  I agree that we should change the default value of this port to avoid inconvenience for users.

bq. We can't run the datanode of HDFS when HddsDatanodeService is on as well.

Both Ozone and HDFS allows configuring the port.  So they can be run on the same node by configuring one of them to use a different port.

> Datanode in Ozone shares the same port as the datanode in HDFS.
> ---------------------------------------------------------------
>
>                 Key: HDDS-9512
>                 URL: https://issues.apache.org/jira/browse/HDDS-9512
>             Project: Apache Ozone
>          Issue Type: Bug
>          Components: Ozone Datanode
>            Reporter: WangYuanben
>            Priority: Major
>
> Currently on the master branch, we have the following config:
> {code:java}
> <property>
>   <name>hdds.datanode.client.port</name>
>   <value>9864</value>
>   <tag>OZONE, HDDS, MANAGEMENT</tag>
>   <description>
>     The port number of the Ozone Datanode client service.
>   </description>
> </property> {code}
> while HDFS has this config:
> {code:java}
> <property>  
>   <name>dfs.datanode.http.address</name>  
>   <value>0.0.0.0:9864</value>  
>   <description>    
>     The datanode http server address and port.  
>   </description>
> </property> {code}
> Obviously they share the same port, 9864. When starting HddsDatanodeService on a node where the HDFS datanode is running, we get this error:
> {code:java}
> Caused by: java.net.BindException: Problem binding to [0.0.0.0:9864] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:930)
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:826)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:680)
>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1288)
>         at org.apache.hadoop.ipc.Server.<init>(Server.java:3223)
>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1195)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:485)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:452)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:375)
>         at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:986)
>         at org.apache.hadoop.ozone.HddsDatanodeClientProtocolServer.startRpcServer(HddsDatanodeClientProtocolServer.java:138)
>         at org.apache.hadoop.ozone.HddsDatanodeClientProtocolServer.lambda$getRpcServer$0(HddsDatanodeClientProtocolServer.java:110)
>         at org.apache.hadoop.hdds.HddsUtils.preserveThreadName(HddsUtils.java:847)
>         at org.apache.hadoop.ozone.HddsDatanodeClientProtocolServer.getRpcServer(HddsDatanodeClientProtocolServer.java:110)
>         at org.apache.hadoop.ozone.HddsDatanodeClientProtocolServer.<init>(HddsDatanodeClientProtocolServer.java:64)
>         at org.apache.hadoop.ozone.HddsDatanodeService.start(HddsDatanodeService.java:317)
>         ... 13 more
> Caused by: java.net.BindException: Address already in use
>         at sun.nio.ch.Net.bind0(Native Method)
>         at sun.nio.ch.Net.bind(Net.java:433)
>         at sun.nio.ch.Net.bind(Net.java:425)
>         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>         at org.apache.hadoop.ipc.Server.bind(Server.java:663)
>         ... 26 more {code}
>  We can't run the datanode of HDFS when HddsDatanodeService is on as well.
>  
> This affects Ozone's compatibility with HDFS. Therefore, we should probably change this default port.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@ozone.apache.org
For additional commands, e-mail: issues-help@ozone.apache.org