Posted to issues@flink.apache.org by "Elango Ganesan (JIRA)" <ji...@apache.org> on 2019/01/31 15:00:07 UTC

[jira] [Updated] (FLINK-11496) FlinkS3 FileSystem is not handling multiple local temp directories

     [ https://issues.apache.org/jira/browse/FLINK-11496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Elango Ganesan updated FLINK-11496:
-----------------------------------
    Description: 
We think the Flink S3 filesystem, when creating its local temp directory, does not split the configured temp directory setting and therefore does not handle the availability of multiple local temp directories. As a result we see the exception below any time we run on an EC2 instance type with more than one ephemeral drive or EBS volume. A rough, illustrative sketch of the kind of splitting we mean follows the stack trace.

 

[https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-base/src/main/java/org/apache/flink/fs/s3/common/FlinkS3FileSystem.java#L101] 
 Timestamp: 2019-01-29, 12:42:39
 java.nio.file.NoSuchFileException: /mnt/yarn/usercache/hadoop/appcache/application_1548598173158_0004,/mnt1/yarn/usercache/hadoop/appcache/application_1548598173158_0004/.tmp_072167ee-6432-412c-809a-bd0599961cf0
 at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
 at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
 at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
 at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
 at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
 at java.nio.file.Files.newOutputStream(Files.java:216)
 at org.apache.flink.fs.s3.common.utils.RefCountedTmpFileCreator.apply(RefCountedTmpFileCreator.java:80)
 at org.apache.flink.fs.s3.common.utils.RefCountedTmpFileCreator.apply(RefCountedTmpFileCreator.java:39)
 at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.openNew(RefCountedBufferingFileStream.java:174)
 at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.boundedBufferingFileStream(S3RecoverableFsDataOutputStream.java:271)
 at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.newStream(S3RecoverableFsDataOutputStream.java:236)
 at org.apache.flink.fs.s3.common.writer.S3RecoverableWriter.open(S3RecoverableWriter.java:78)
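
For illustration only (this is not the actual FlinkS3FileSystem code; the class below is hypothetical), here is a minimal, self-contained sketch of the handling we mean, assuming the local temp directory setting arrives as a comma-separated list like the one in the NoSuchFileException above:

import java.io.File;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Illustrative sketch only, not Flink code: splits a comma-separated
 * temp-directory setting (as YARN hands it out when a node has several
 * local drives) into individual directories and rotates across them
 * when creating local temp files.
 */
public class LocalTmpDirectories {

    private final File[] directories;
    private final AtomicInteger nextIndex = new AtomicInteger(0);

    public LocalTmpDirectories(String configuredTmpDirs) {
        // Treating the whole comma-separated string as one path is what
        // produces the NoSuchFileException above; split it instead.
        String[] paths = configuredTmpDirs.split(",");
        directories = new File[paths.length];
        for (int i = 0; i < paths.length; i++) {
            directories[i] = new File(paths[i].trim());
        }
    }

    /** Picks the next directory round-robin and creates a uniquely named temp file in it. */
    public File createTempFile(String prefix) throws IOException {
        File dir = directories[Math.abs(nextIndex.getAndIncrement() % directories.length)];
        return File.createTempFile(prefix, ".tmp", dir);
    }
}

With the value split that way, each temp file would be created under one existing mount rather than under a path containing the literal comma-joined string.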

  was:
The Flink S3 filesystem, when creating its local temp directory, does not split the configured temp directory setting and therefore does not handle the availability of multiple local temp directories. As a result we see the exception below any time we run on an EC2 instance type with more than one ephemeral drive or EBS volume.

 

https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-base/src/main/java/org/apache/flink/fs/s3/common/FlinkS3FileSystem.java#L101 
Timestamp: 2019-01-29, 12:42:39
java.nio.file.NoSuchFileException: /mnt/yarn/usercache/hadoop/appcache/application_1548598173158_0004,/mnt1/yarn/usercache/hadoop/appcache/application_1548598173158_0004/.tmp_072167ee-6432-412c-809a-bd0599961cf0
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
        at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
        at java.nio.file.Files.newOutputStream(Files.java:216)
        at org.apache.flink.fs.s3.common.utils.RefCountedTmpFileCreator.apply(RefCountedTmpFileCreator.java:80)
        at org.apache.flink.fs.s3.common.utils.RefCountedTmpFileCreator.apply(RefCountedTmpFileCreator.java:39)
        at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.openNew(RefCountedBufferingFileStream.java:174)
        at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.boundedBufferingFileStream(S3RecoverableFsDataOutputStream.java:271)
        at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.newStream(S3RecoverableFsDataOutputStream.java:236)
        at org.apache.flink.fs.s3.common.writer.S3RecoverableWriter.open(S3RecoverableWriter.java:78)


> FlinkS3 FileSystem is not handling multiple local temp directories
> ------------------------------------------------------------------
>
>                 Key: FLINK-11496
>                 URL: https://issues.apache.org/jira/browse/FLINK-11496
>             Project: Flink
>          Issue Type: Bug
>            Reporter: Elango Ganesan
>            Priority: Major
>             Fix For: 1.7.1
>
>
> We think the Flink S3 filesystem, when creating its local temp directory, does not split the configured temp directory setting and therefore does not handle the availability of multiple local temp directories. As a result we see the exception below any time we run on an EC2 instance type with more than one ephemeral drive or EBS volume.
>  
> [https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-base/src/main/java/org/apache/flink/fs/s3/common/FlinkS3FileSystem.java#L101] 
>  Timestamp: 2019-01-29, 12:42:39
>  java.nio.file.NoSuchFileException: /mnt/yarn/usercache/hadoop/appcache/application_1548598173158_0004,/mnt1/yarn/usercache/hadoop/appcache/application_1548598173158_0004/.tmp_072167ee-6432-412c-809a-bd0599961cf0
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>  at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>  at java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
>  at java.nio.file.Files.newOutputStream(Files.java:216)
>  at org.apache.flink.fs.s3.common.utils.RefCountedTmpFileCreator.apply(RefCountedTmpFileCreator.java:80)
>  at org.apache.flink.fs.s3.common.utils.RefCountedTmpFileCreator.apply(RefCountedTmpFileCreator.java:39)
>  at org.apache.flink.fs.s3.common.utils.RefCountedBufferingFileStream.openNew(RefCountedBufferingFileStream.java:174)
>  at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.boundedBufferingFileStream(S3RecoverableFsDataOutputStream.java:271)
>  at org.apache.flink.fs.s3.common.writer.S3RecoverableFsDataOutputStream.newStream(S3RecoverableFsDataOutputStream.java:236)
>  at org.apache.flink.fs.s3.common.writer.S3RecoverableWriter.open(S3RecoverableWriter.java:78)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)