Posted to user@hadoop.apache.org by "wind.fly.vip@outlook.com" <wi...@outlook.com> on 2021/06/25 10:06:17 UTC

Need Help - java.io.EOFException when accessing a remote HDFS cluster from Windows 10

Hi, all:
    A 'java.io.EOFException' is thrown when I run the following code in IntelliJ IDEA:

// init configuration

Configuration conf = new Configuration();

// Set FileSystem URI
conf.set("fs.defaultFS", "hdfs://cxhadoop");

conf.set("dfs.nameservices", "cxhadoop");
conf.set("ha.zookeeper.quorum", "hadoop-slave1:2181,hadoop-slave2:2181,hadoop-slave3:2181");
conf.set("dfs.ha.namenodes.cxhadoop", "namenode82,namenode122");
conf.set("dfs.namenode.rpc-address.cxhadoop.namenode82", "hadoop-master:8020");
conf.set("dfs.namenode.rpc-address.cxhadoop.namenode122", "hadoop-slave1:8020");
conf.set("dfs.client.failover.proxy.provider.cxhadoop", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

System.setProperty("HADOOP_USER_NAME", "hdfs");
System.setProperty("hadoop.home.dir", "D:\\winutils");

// init FileSystem
FileSystem fs = FileSystem.get(conf);

String path = "/flink-checkpoints";
Path p = new Path(path);
if (fs.exists(p) && fs.isDirectory(p)) {
    FileStatus[] fileStatuses = fs.listStatus(p, new PathFilter() {
        @Override
        public boolean accept(Path path) {
            return path.toString().contains("chk-");
        }
    });
    Arrays.sort(fileStatuses, new Comparator<FileStatus>() {
        @Override
        public int compare(FileStatus o1, FileStatus o2) {
            // newest first; Long.compare also returns 0 on equal timestamps, keeping the Comparator contract
            return Long.compare(o2.getModificationTime(), o1.getModificationTime());
        }
    });
    for (FileStatus status : fileStatuses) {
        System.out.println(status.getPath().getName());
    }
}


fs.close();
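
For reference, a stripped-down sketch that issues only the same getFileInfo RPC that fails in the stack trace below, and prints the client-side Hadoop library version so it can be compared with the cluster, would look like this (the variable name checkFs is just illustrative):

// Print the client library version and issue a single getFileStatus call,
// which maps to the getFileInfo RPC shown in the stack trace.
System.out.println("Client Hadoop version: " + org.apache.hadoop.util.VersionInfo.getVersion());
// FileSystem implements Closeable, so try-with-resources closes it for us.
try (FileSystem checkFs = FileSystem.get(conf)) {
    System.out.println(checkFs.getFileStatus(new Path("/")));
}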

   The detailed stack trace is:

   Exception in thread "main" java.io.EOFException: End of File Exception between local host is: "SAIC-DEV00879/xxxxx"; destination host is: "hadoop-slave1":8020; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:824)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:788)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1495)
        at org.apache.hadoop.ipc.Client.call(Client.java:1437)
        at org.apache.hadoop.ipc.Client.call(Client.java:1347)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy11.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:874)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
        at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1697)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1491)
        at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1488)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1503)
        at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668)
        at com.wind.fly.hdfs.HDFSApiDemo.main(HDFSApiDemo.java:74)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1796)
        at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1165)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:1061)

My project is built with Maven; the relevant dependencies are:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>${hadoop.version}</version>
    <exclusions>
        <exclusion>
            <artifactId>zookeeper</artifactId>
            <groupId>org.apache.zookeeper</groupId>
        </exclusion>
        <exclusion>
            <artifactId>commons-io</artifactId>
            <groupId>commons-io</groupId>
        </exclusion>
        <exclusion>
            <artifactId>guava</artifactId>
            <groupId>com.google.guava</groupId>
        </exclusion>
        <exclusion>
            <artifactId>slf4j-api</artifactId>
            <groupId>org.slf4j</groupId>
        </exclusion>
        <exclusion>
            <artifactId>commons-codec</artifactId>
            <groupId>commons-codec</groupId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>${hadoop.version}</version>
    <exclusions>
        <exclusion>
            <artifactId>guava</artifactId>
            <groupId>com.google.guava</groupId>
        </exclusion>
    </exclusions>
</dependency>
<!--   hadoop client     -->
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
</dependency>
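
(For context, the ${hadoop.version} property resolves to 3.0.0 here, matching the cluster version listed below; the resolved Hadoop artifacts can be double-checked with "mvn dependency:tree -Dincludes=org.apache.hadoop" to rule out a mixed-version classpath.)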


In addition, my environment details are:

OS: Windows 10

JDK: 1.8

Hadoop version (client): 3.0.0

Hadoop version (remote cluster): 3.0.0

    From the stack trace we can see that the exception occurred while reading the first four bytes of the RPC response stream, but why? This problem has been bothering me for several days; I sincerely hope to get your help!
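
To illustrate what I mean (a toy sketch only, not the actual Hadoop client code): DataInputStream.readInt() throws EOFException when the stream ends before four bytes are available, so it looks as if the connection was closed before the NameNode wrote the response length prefix.

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;

public class ReadIntEofDemo {
    public static void main(String[] args) throws Exception {
        // An empty stream stands in for a connection the server closed
        // before writing the 4-byte response length.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(new byte[0]));
        try {
            in.readInt(); // needs 4 bytes; none arrive, so EOFException is thrown
        } catch (EOFException e) {
            System.out.println("EOFException: stream ended before the length prefix arrived");
        }
    }
}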

Best,

Junbao Zhang