Posted to dev@flink.apache.org by "Fabian Hueske (JIRA)" <ji...@apache.org> on 2018/01/23 09:13:00 UTC

[jira] [Created] (FLINK-8487) State loss after multiple restart attempts

Fabian Hueske created FLINK-8487:
------------------------------------

             Summary: State loss after multiple restart attempts
                 Key: FLINK-8487
                 URL: https://issues.apache.org/jira/browse/FLINK-8487
             Project: Flink
          Issue Type: Bug
          Components: State Backends, Checkpointing
    Affects Versions: 1.3.2
            Reporter: Fabian Hueske
             Fix For: 1.5.0, 1.4.1


A user [reported this issue|https://lists.apache.org/thread.html/9dc9b719cf8449067ad01114fedb75d1beac7b4dff171acdcc24903d@%3Cuser.flink.apache.org%3E] on the user@flink.apache.org mailing list and analyzed the situation.

Scenario:
- A program that reads from Kafka and computes counts in a keyed 15-minute tumbling window. The state backend is RocksDB and checkpointing is enabled (a self-contained sketch of such a job is given after the scenario below).

{code}
keyBy(0)
        .timeWindow(Time.of(window_size, TimeUnit.MINUTES))
        .allowedLateness(Time.of(late_by, TimeUnit.SECONDS))
        .reduce(new ReduceFunction(), new WindowFunction())
{code}

- At some point, HDFS went into safe mode due to NameNode issues.
- The following exception was thrown:

{code}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby. Visit https://s.apache.org/sbnn-error
    ..................

    at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.mkdirs(HadoopFileSystem.java:453)
    at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.mkdirs(SafetyNetWrapperFileSystem.java:111)
    at org.apache.flink.runtime.state.filesystem.FsCheckpointStreamFactory.createBasePath(FsCheckpointStreamFactory.java:132)
{code}

- Once the HDFS issues were resolved, the pipeline came back after a few restarts and checkpoint failures.

- It was evident that operator state was lost. Either the Kafka consumer kept advancing its offsets between a restart and the next checkpoint failure (about a minute's worth of data), or the state of the operator holding the partial aggregates was lost.
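
For reference, a minimal self-contained version of the kind of job described above could look roughly like the following. This is a sketch against Flink 1.3/1.4-era APIs; the Kafka topic, connector version, bootstrap servers, checkpoint path, restart-strategy values, and the bodies of the reduce/window functions are assumptions for illustration and are not taken from the user's report.

{code}
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.common.restartstrategy.RestartStrategies;
import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.windowing.WindowFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
import org.apache.flink.util.Collector;

public class WindowedCountJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint every minute into HDFS-backed RocksDB state (path is a placeholder).
        env.enableCheckpointing(60_000);
        env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints"));

        // Fixed-delay restart strategy, so the job restarts a few times on failure.
        env.setRestartStrategy(RestartStrategies.fixedDelayRestart(3, 10_000));

        // Kafka connection settings are placeholders.
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "kafka:9092");
        kafkaProps.setProperty("group.id", "windowed-count");

        DataStream<Tuple2<String, Long>> events = env
            .addSource(new FlinkKafkaConsumer010<>("events", new SimpleStringSchema(), kafkaProps))
            .map(new MapFunction<String, Tuple2<String, Long>>() {
                @Override
                public Tuple2<String, Long> map(String value) {
                    return Tuple2.of(value, 1L);
                }
            });

        events
            .keyBy(0)
            .timeWindow(Time.of(15, TimeUnit.MINUTES))
            .allowedLateness(Time.of(60, TimeUnit.SECONDS))
            .reduce(
                new ReduceFunction<Tuple2<String, Long>>() {
                    @Override
                    public Tuple2<String, Long> reduce(Tuple2<String, Long> a, Tuple2<String, Long> b) {
                        // Partial aggregate: sum the counts per key.
                        return Tuple2.of(a.f0, a.f1 + b.f1);
                    }
                },
                new WindowFunction<Tuple2<String, Long>, Tuple2<String, Long>, Tuple, TimeWindow>() {
                    @Override
                    public void apply(Tuple key, TimeWindow window,
                                      Iterable<Tuple2<String, Long>> reduced,
                                      Collector<Tuple2<String, Long>> out) {
                        // Emit the pre-aggregated count when the window fires.
                        out.collect(reduced.iterator().next());
                    }
                })
            .print();

        env.execute("windowed-count");
    }
}
{code}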

The user did some in-depth analysis (see [mail thread|https://lists.apache.org/thread.html/9dc9b719cf8449067ad01114fedb75d1beac7b4dff171acdcc24903d@%3Cuser.flink.apache.org%3E]) and might have (according to [~aljoscha]) identified the problem.

[~stefanrichter83@gmail.com], can you have a look at this issue and check if it is relevant?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)