Posted to hdfs-dev@hadoop.apache.org by "Ted Yu (JIRA)" <ji...@apache.org> on 2017/10/10 16:31:00 UTC

[jira] [Resolved] (HDFS-5718) TestHttpsFileSystem intermittently fails with Port in use error

     [ https://issues.apache.org/jira/browse/HDFS-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu resolved HDFS-5718.
--------------------------
    Resolution: Cannot Reproduce

> TestHttpsFileSystem intermittently fails with Port in use error
> ---------------------------------------------------------------
>
>                 Key: HDFS-5718
>                 URL: https://issues.apache.org/jira/browse/HDFS-5718
>             Project: Hadoop HDFS
>          Issue Type: Test
>            Reporter: Ted Yu
>            Priority: Minor
>
> From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1634/testReport/junit/org.apache.hadoop.hdfs.web/TestHttpsFileSystem/org_apache_hadoop_hdfs_web_TestHttpsFileSystem/ :
> {code}
> java.net.BindException: Port in use: localhost:50475
> 	at java.net.PlainSocketImpl.socketBind(Native Method)
> 	at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:383)
> 	at java.net.ServerSocket.bind(ServerSocket.java:328)
> 	at java.net.ServerSocket.<init>(ServerSocket.java:194)
> 	at javax.net.ssl.SSLServerSocket.<init>(SSLServerSocket.java:106)
> 	at com.sun.net.ssl.internal.ssl.SSLServerSocketImpl.<init>(SSLServerSocketImpl.java:108)
> 	at com.sun.net.ssl.internal.ssl.SSLServerSocketFactoryImpl.createServerSocket(SSLServerSocketFactoryImpl.java:72)
> 	at org.mortbay.jetty.security.SslSocketConnector.newServerSocket(SslSocketConnector.java:478)
> 	at org.mortbay.jetty.bio.SocketConnector.open(SocketConnector.java:73)
> 	at org.apache.hadoop.http.HttpServer.openListeners(HttpServer.java:973)
> 	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:914)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:412)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:769)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:315)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1846)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1746)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1203)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:673)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:342)
> 	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:323)
> 	at org.apache.hadoop.hdfs.web.TestHttpsFileSystem.setUp(TestHttpsFileSystem.java:64)
> {code}
> This could have been caused by a concurrent test binding the same fixed DataNode HTTPS port (50475).
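For reference, tests built on MiniDFSCluster usually avoid this class of failure by binding the web endpoints to an ephemeral port (port 0) so the OS picks a free one instead of the fixed default 50475. The sketch below is illustrative only: it assumes the standard DFSConfigKeys constants, uses a hypothetical class name, and is not the exact setup used by TestHttpsFileSystem.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Hypothetical sketch class, not part of the Hadoop test suite.
public class EphemeralPortClusterSketch {
  public static void main(String[] args) throws Exception {
    // Request ephemeral ports for the DataNode/NameNode web endpoints so a
    // leftover or concurrent listener on the default 50475 cannot cause a
    // "Port in use" BindException during cluster startup.
    Configuration conf = new HdfsConfiguration();
    conf.set(DFSConfigKeys.DFS_DATANODE_HTTPS_ADDRESS_KEY, "localhost:0");
    conf.set(DFSConfigKeys.DFS_DATANODE_HTTP_ADDRESS_KEY, "localhost:0");
    conf.set(DFSConfigKeys.DFS_NAMENODE_HTTPS_ADDRESS_KEY, "localhost:0");

    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build();
    try {
      cluster.waitActive();
      // The ports actually chosen can be read back from the cluster's
      // configuration once it is up.
    } finally {
      cluster.shutdown();
    }
  }
}
{code}

With ephemeral ports, two builds running the same test on one Jenkins host cannot collide, which is the scenario the reporter suspected above.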



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org