Posted to issues@flink.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2018/03/09 13:54:00 UTC

[jira] [Commented] (FLINK-8599) Improve the failure behavior of the FileInputFormat for bad files

    [ https://issues.apache.org/jira/browse/FLINK-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16392902#comment-16392902 ] 

ASF GitHub Bot commented on FLINK-8599:
---------------------------------------

Github user StephanEwen commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5521#discussion_r173454031
  
    --- Diff: flink-core/src/main/java/org/apache/flink/api/common/io/FileInputFormat.java ---
    @@ -819,6 +819,10 @@ public void open(FileInputSplit fileSplit) throws IOException {
     			this.stream = isot.waitForCompletion();
     			this.stream = decorateInputStream(this.stream, fileSplit);
     		}
    +		catch (FileNotFoundException e) {
    +			throw (FileNotFoundException)(new FileNotFoundException("Input split " + fileSplit.getPath() +
    --- End diff --
    
    I don't understand why "skip and continue" is in this message.
    Not all users of the `FileInputFormat` skip and continue. The interpretation of the exception should not be assumed when creating the exception.
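
The reviewer's point is that an enriched exception should add only factual context (e.g. the split's path), without presuming how the caller will react. A minimal sketch of that idea, outside of Flink (the helper name `withSplitContext` is hypothetical, not part of the actual patch):

```java
import java.io.FileNotFoundException;

public class SplitErrors {

    // Wraps a FileNotFoundException with the split's path only.
    // The message states a fact ("does not exist") and does not
    // presume how the caller will react (retry, skip, or fail).
    static FileNotFoundException withSplitContext(String splitPath, FileNotFoundException cause) {
        FileNotFoundException enriched =
                new FileNotFoundException("Input split " + splitPath + " does not exist");
        enriched.initCause(cause);
        return enriched;
    }

    public static void main(String[] args) {
        FileNotFoundException e =
                withSplitContext("s3a://bucket/file", new FileNotFoundException("raw"));
        System.out.println(e.getMessage());
        // prints: Input split s3a://bucket/file does not exist
    }
}
```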


> Improve the failure behavior of the FileInputFormat for bad files
> -----------------------------------------------------------------
>
>                 Key: FLINK-8599
>                 URL: https://issues.apache.org/jira/browse/FLINK-8599
>             Project: Flink
>          Issue Type: New Feature
>          Components: DataStream API
>    Affects Versions: 1.4.0, 1.3.2
>            Reporter: Chengzhi Zhao
>            Priority: Major
>
> We have an S3 path that Flink is monitoring for new files.
> {code:scala}
> val avroInputStream_activity = env.readFile(format, path, FileProcessingMode.PROCESS_CONTINUOUSLY, 10000)
> {code}
>  
> I am using both internal and external checkpointing. Suppose a bad file (for example, one with a different schema) is dropped into this folder: Flink will retry it several times. I want to remove those bad files and let the process continue. However, since the file path persists in the checkpoint, resuming from the external checkpoint throws the following error because the file no longer exists.
>  
> {code:java}
> java.io.IOException: Error opening the Input Split s3a://myfile [0,904]: No such file or directory: s3a://myfile{code}
>  
> As [~fhueske@gmail.com] suggested, we could check whether a path exists before trying to read a file, and ignore the input split instead of throwing an exception and causing a failure.
>  
> I am also thinking about adding an error output for bad files as an option for users: any bad files could be moved to a separate path for further analysis.
>  
> I'm not sure how people feel about this, but I'd like to contribute if it is considered an improvement.
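
The existence check suggested in the quoted issue can be sketched as follows. This uses only `java.nio` rather than Flink's actual `FileInputFormat`/`FileSystem` APIs, and the helper `openSplit` is a hypothetical illustration of the proposed "skip missing splits" behavior, not the real patch:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Optional;

public class SplitOpener {

    // Tries to open an input split's file. If the file has disappeared
    // (e.g. a bad file moved away between checkpoint and restore),
    // returns Optional.empty() so the caller can ignore the split
    // instead of failing the whole job on restore.
    static Optional<byte[]> openSplit(Path path) {
        if (!Files.exists(path)) {
            // File vanished: signal "skip this split" rather than throwing.
            return Optional.empty();
        }
        try {
            return Optional.of(Files.readAllBytes(path));
        } catch (IOException e) {
            // Other I/O problems are still genuine errors.
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Optional<byte[]> missing = openSplit(Paths.get("/no/such/file"));
        System.out.println(missing.isPresent());
        // prints: false  (the split would simply be skipped)
    }
}
```

Note that `Files.exists` followed by the read is a check-then-act race; in a real implementation one would likely catch the `FileNotFoundException` from the open call itself and decide there whether to skip.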



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)