Posted to issues@flink.apache.org by "Matthias Pohl (Jira)" <ji...@apache.org> on 2022/10/27 12:43:00 UTC

[jira] [Updated] (FLINK-29315) HDFSTest#testBlobServerRecovery fails on CI

     [ https://issues.apache.org/jira/browse/FLINK-29315?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matthias Pohl updated FLINK-29315:
----------------------------------
    Component/s: Test Infrastructure

> HDFSTest#testBlobServerRecovery fails on CI
> -------------------------------------------
>
>                 Key: FLINK-29315
>                 URL: https://issues.apache.org/jira/browse/FLINK-29315
>             Project: Flink
>          Issue Type: Technical Debt
>          Components: Connectors / FileSystem, Test Infrastructure, Tests
>    Affects Versions: 1.16.0, 1.15.2
>            Reporter: Chesnay Schepler
>            Priority: Blocker
>              Labels: pull-request-available
>
> The test started failing 2 days ago on different branches. I suspect something's wrong with the CI infrastructure.
> {code:java}
> Sep 15 09:11:22 [ERROR] Failures: 
> Sep 15 09:11:22 [ERROR]   HDFSTest.testBlobServerRecovery Multiple Failures (2 failures)
> Sep 15 09:11:22 	java.lang.AssertionError: Test failed Error while running command to get file permissions : java.io.IOException: Cannot run program "ls": error=1, Operation not permitted
> Sep 15 09:11:22 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> Sep 15 09:11:22 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:913)
> Sep 15 09:11:22 	at org.apache.hadoop.util.Shell.run(Shell.java:869)
> Sep 15 09:11:22 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1170)
> Sep 15 09:11:22 	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1264)
> Sep 15 09:11:22 	at org.apache.hadoop.util.Shell.execCommand(Shell.java:1246)
> Sep 15 09:11:22 	at org.apache.hadoop.fs.FileUtil.execCommand(FileUtil.java:1089)
> Sep 15 09:11:22 	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:697)
> Sep 15 09:11:22 	at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:672)
> Sep 15 09:11:22 	at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
> Sep 15 09:11:22 	at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
> Sep 15 09:11:22 	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
> Sep 15 09:11:22 	at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
> Sep 15 09:11:22 	at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
> Sep 15 09:11:22 	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
> Sep 15 09:11:22 	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> Sep 15 09:11:22 	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1501)
> Sep 15 09:11:22 	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:851)
> Sep 15 09:11:22 	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:485)
> Sep 15 09:11:22 	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:444)
> Sep 15 09:11:22 	at org.apache.flink.hdfstests.HDFSTest.createHDFS(HDFSTest.java:93)
> Sep 15 09:11:22 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> Sep 15 09:11:22 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Sep 15 09:11:22 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Sep 15 09:11:22 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> Sep 15 09:11:22 	... 67 more
> Sep 15 09:11:22 
> Sep 15 09:11:22 	java.lang.NullPointerException: <no message>
> {code}
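The trace above bottoms out in `ProcessBuilder.start`: Hadoop's `Shell`/`RawLocalFileSystem` code shells out to `ls` to read file permissions while `MiniDFSCluster` validates its storage directories, and the CI sandbox refuses to spawn the process (`error=1, Operation not permitted`). A minimal standalone probe (hypothetical, not part of the Flink test suite) that checks whether the environment allows spawning `ls` at all, independent of Hadoop:

```java
import java.io.IOException;

public class SpawnProbe {
    public static void main(String[] args) {
        try {
            // Same call path the Hadoop Shell utility ultimately hits:
            // spawn "ls" via ProcessBuilder and wait for it to finish.
            Process p = new ProcessBuilder("ls", "/").start();
            int exit = p.waitFor();
            System.out.println("spawn ok, exit=" + exit);
        } catch (IOException e) {
            // On the failing CI agents this is where
            // "Cannot run program \"ls\": error=1, Operation not permitted"
            // would surface.
            System.out.println("spawn failed: " + e.getMessage());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Running this inside the affected CI container (or its seccomp/security profile) would distinguish an infrastructure restriction on process spawning from a regression in the test itself.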



--
This message was sent by Atlassian Jira
(v8.20.10#820010)