Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2021/10/03 05:22:06 UTC

[GitHub] [flink-ml] gaoyunhaii opened a new pull request #16: [FLINK-9][iteration] Support per-round iteration

gaoyunhaii opened a new pull request #16:
URL: https://github.com/apache/flink-ml/pull/16


   Add support for the per-round iteration. This is done by
   
   1. Allowing users to specify which input data streams require replaying. A new `ReplayOperator` is inserted for these inputs.
   2. Using a per-round operator wrapper to wrap the operators inside the iteration.
   
   In the long run we would still need to support iteration bodies that mix all-round and per-round operators. The current implementation already provides support for this case and treats the pure per-round iteration as a special case.
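   The replay bookkeeping described above (replayed streams are placed before the non-replayed ones, so the first n indices are exactly the inputs that need a `ReplayOperator`) can be modelled with a small dependency-free sketch. The class and method names here are illustrative, not the actual Flink ML API:

   ```java
   import java.util.List;
   import java.util.Set;
   import java.util.stream.Collectors;
   import java.util.stream.IntStream;
   import java.util.stream.Stream;

   /** Sketch of the replayed-input bookkeeping; names are illustrative only. */
   public class ReplaySketch {

       /** Concatenates replayed streams before non-replayed ones. */
       public static <T> List<T> concat(List<T> replayed, List<T> nonReplayed) {
           return Stream.concat(replayed.stream(), nonReplayed.stream())
                   .collect(Collectors.toList());
       }

       /** Indices of the concatenated list that require a replay operator. */
       public static Set<Integer> replayedIndices(int replayedCount) {
           return IntStream.range(0, replayedCount).boxed().collect(Collectors.toSet());
       }

       public static void main(String[] args) {
           // One replayed input followed by two non-replayed ones: only index 0 replays.
           List<String> all = concat(List.of("train"), List.of("test", "validate"));
           Set<Integer> indices = replayedIndices(1);
           if (!all.equals(List.of("train", "test", "validate")) || !indices.equals(Set.of(0))) {
               throw new AssertionError();
           }
           System.out.println(all + " replayed=" + indices);
       }
   }
   ```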


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739982645



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/Iterations.java
##########
@@ -112,15 +145,400 @@ public static DataStreamList iterateBoundedStreamsUntilTermination(
             ReplayableDataStreamList dataStreams,
             IterationConfig config,
             IterationBody body) {
-        Preconditions.checkArgument(
-                config.getOperatorLifeCycle() == IterationConfig.OperatorLifeCycle.ALL_ROUND);
-        Preconditions.checkArgument(dataStreams.getReplayedDataStreams().size() == 0);
+        OperatorWrapper wrapper =
+                config.getOperatorLifeCycle() == IterationConfig.OperatorLifeCycle.ALL_ROUND
+                        ? new AllRoundOperatorWrapper<>()
+                        : new PerRoundOperatorWrapper<>();
 
-        return IterationFactory.createIteration(
+        List<DataStream<?>> allDatastreams = new ArrayList<>();
+        allDatastreams.addAll(dataStreams.getReplayedDataStreams());
+        allDatastreams.addAll(dataStreams.getNonReplayedStreams());
+
+        Set<Integer> replayedIndices =
+                IntStream.range(0, dataStreams.getReplayedDataStreams().size())
+                        .boxed()
+                        .collect(Collectors.toSet());
+
+        return createIteration(
                 initVariableStreams,
-                new DataStreamList(dataStreams.getNonReplayedStreams()),
+                new DataStreamList(allDatastreams),
+                replayedIndices,
                 body,
-                new AllRoundOperatorWrapper(),
+                wrapper,
                 true);
     }
+
+    @SuppressWarnings({"unchecked", "rawtypes"})
+    private static DataStreamList createIteration(
+            DataStreamList initVariableStreams,
+            DataStreamList dataStreams,
+            Set<Integer> replayedDataStreamIndices,
+            IterationBody body,
+            OperatorWrapper<?, IterationRecord<?>> initialOperatorWrapper,
+            boolean mayHaveCriteria) {
+        checkState(initVariableStreams.size() > 0, "There should be at least one variable stream");
+
+        IterationID iterationId = new IterationID();
+
+        List<TypeInformation<?>> initVariableTypeInfos = getTypeInfos(initVariableStreams);
+        List<TypeInformation<?>> dataStreamTypeInfos = getTypeInfos(dataStreams);
+
+        // Add heads and inputs
+        int totalInitVariableParallelism =
+                map(
+                                initVariableStreams,
+                                dataStream ->
+                                        dataStream.getParallelism() > 0
+                                                ? dataStream.getParallelism()
+                                                : dataStream
+                                                        .getExecutionEnvironment()
+                                                        .getConfig()
+                                                        .getParallelism())
+                        .stream()
+                        .mapToInt(i -> i)
+                        .sum();
+        DataStreamList initVariableInputs = addInputs(initVariableStreams, false);
+        DataStreamList headStreams =
+                addHeads(
+                        initVariableStreams,
+                        initVariableInputs,
+                        iterationId,
+                        totalInitVariableParallelism,
+                        false,
+                        0);
+
+        DataStreamList dataStreamInputs = addInputs(dataStreams, true);
+        if (replayedDataStreamIndices.size() > 0) {
+            dataStreamInputs =
+                    addReplayer(
+                            headStreams.get(0),
+                            dataStreams,
+                            dataStreamInputs,
+                            replayedDataStreamIndices);
+        }
+
+        // Create the iteration body. We map the inputs of iteration body into the draft sources,

Review comment:
       Creates?
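The parallelism computation in the hunk above (use each stream's parallelism when set, otherwise fall back to the environment default, then sum) can be sketched standalone; this is a simplified model with hypothetical names, not the Flink ML code itself:

```java
import java.util.List;

/** Sketch of totalInitVariableParallelism; names are illustrative only. */
public class ParallelismSketch {

    /** Sums declared parallelisms, substituting the default for unset (<= 0) values. */
    public static int totalParallelism(List<Integer> declared, int envDefault) {
        return declared.stream()
                .mapToInt(p -> p > 0 ? p : envDefault)
                .sum();
    }

    public static void main(String[] args) {
        // One stream with parallelism 4, one unset (-1); default 8 gives 4 + 8 = 12.
        int total = totalParallelism(List.of(4, -1), 8);
        if (total != 12) {
            throw new AssertionError("unexpected total: " + total);
        }
        System.out.println(total);
    }
}
```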







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739933707



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/config/IterationOptions.java
##########
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.config;
+
+import org.apache.flink.configuration.ConfigOption;
+
+import static org.apache.flink.configuration.ConfigOptions.key;
+
+/** The options for the iteration. */
+public class IterationOptions {
+
+    public static final ConfigOption<String> DATA_CACHE_PATH =

Review comment:
       Thanks for your explanation.







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739747861



##########
File path: flink-ml-iteration/src/test/java/org/apache/flink/iteration/datacache/nonkeyed/DataCacheWriteReadTest.java
##########
@@ -0,0 +1,181 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.datacache.nonkeyed;
+
+import org.apache.flink.api.common.typeutils.base.IntSerializer;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.runtime.fs.hdfs.HadoopFileSystem;
+import org.apache.flink.util.OperatingSystem;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.junit.AfterClass;
+import org.junit.Assume;
+import org.junit.BeforeClass;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+import java.util.stream.Collectors;
+import java.util.stream.IntStream;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+
+/** Tests the behavior of {@link DataCacheWriter}. */
+@RunWith(Parameterized.class)
+public class DataCacheWriteReadTest {

Review comment:
       Please extend `TestLogger`.







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739982935



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/Iterations.java
##########
@@ -112,15 +145,400 @@ public static DataStreamList iterateBoundedStreamsUntilTermination(
             ReplayableDataStreamList dataStreams,
             IterationConfig config,
             IterationBody body) {
-        Preconditions.checkArgument(
-                config.getOperatorLifeCycle() == IterationConfig.OperatorLifeCycle.ALL_ROUND);
-        Preconditions.checkArgument(dataStreams.getReplayedDataStreams().size() == 0);
+        OperatorWrapper wrapper =
+                config.getOperatorLifeCycle() == IterationConfig.OperatorLifeCycle.ALL_ROUND
+                        ? new AllRoundOperatorWrapper<>()
+                        : new PerRoundOperatorWrapper<>();
 
-        return IterationFactory.createIteration(
+        List<DataStream<?>> allDatastreams = new ArrayList<>();
+        allDatastreams.addAll(dataStreams.getReplayedDataStreams());
+        allDatastreams.addAll(dataStreams.getNonReplayedStreams());
+
+        Set<Integer> replayedIndices =
+                IntStream.range(0, dataStreams.getReplayedDataStreams().size())
+                        .boxed()
+                        .collect(Collectors.toSet());
+
+        return createIteration(
                 initVariableStreams,
-                new DataStreamList(dataStreams.getNonReplayedStreams()),
+                new DataStreamList(allDatastreams),
+                replayedIndices,
                 body,
-                new AllRoundOperatorWrapper(),
+                wrapper,
                 true);
     }
+
+    @SuppressWarnings({"unchecked", "rawtypes"})
+    private static DataStreamList createIteration(
+            DataStreamList initVariableStreams,
+            DataStreamList dataStreams,
+            Set<Integer> replayedDataStreamIndices,
+            IterationBody body,
+            OperatorWrapper<?, IterationRecord<?>> initialOperatorWrapper,
+            boolean mayHaveCriteria) {
+        checkState(initVariableStreams.size() > 0, "There should be at least one variable stream");
+
+        IterationID iterationId = new IterationID();
+
+        List<TypeInformation<?>> initVariableTypeInfos = getTypeInfos(initVariableStreams);
+        List<TypeInformation<?>> dataStreamTypeInfos = getTypeInfos(dataStreams);
+
+        // Add heads and inputs
+        int totalInitVariableParallelism =
+                map(
+                                initVariableStreams,
+                                dataStream ->
+                                        dataStream.getParallelism() > 0
+                                                ? dataStream.getParallelism()
+                                                : dataStream
+                                                        .getExecutionEnvironment()
+                                                        .getConfig()
+                                                        .getParallelism())
+                        .stream()
+                        .mapToInt(i -> i)
+                        .sum();
+        DataStreamList initVariableInputs = addInputs(initVariableStreams, false);
+        DataStreamList headStreams =
+                addHeads(
+                        initVariableStreams,
+                        initVariableInputs,
+                        iterationId,
+                        totalInitVariableParallelism,
+                        false,
+                        0);
+
+        DataStreamList dataStreamInputs = addInputs(dataStreams, true);
+        if (replayedDataStreamIndices.size() > 0) {
+            dataStreamInputs =
+                    addReplayer(
+                            headStreams.get(0),
+                            dataStreams,
+                            dataStreamInputs,
+                            replayedDataStreamIndices);
+        }
+
+        // Create the iteration body. We map the inputs of iteration body into the draft sources,
+        // which serve as the start points to build the draft subgraph.
+        StreamExecutionEnvironment env = initVariableStreams.get(0).getExecutionEnvironment();
+        DraftExecutionEnvironment draftEnv =
+                new DraftExecutionEnvironment(env, initialOperatorWrapper);
+        DataStreamList draftHeadStreams =
+                addDraftSources(headStreams, draftEnv, initVariableTypeInfos);
+        DataStreamList draftDataStreamInputs =
+                addDraftSources(dataStreamInputs, draftEnv, dataStreamTypeInfos);
+
+        IterationBodyResult iterationBodyResult =
+                body.process(draftHeadStreams, draftDataStreamInputs);
+        ensuresTransformationAdded(iterationBodyResult.getFeedbackVariableStreams(), draftEnv);
+        ensuresTransformationAdded(iterationBodyResult.getOutputStreams(), draftEnv);
+        draftEnv.copyToActualEnvironment();
+
+        // Add tails and co-locate them with the heads.
+        DataStreamList feedbackStreams =
+                getActualDataStreams(iterationBodyResult.getFeedbackVariableStreams(), draftEnv);
+        checkState(
+                feedbackStreams.size() == initVariableStreams.size(),
+                "The number of feedback streams "
+                        + feedbackStreams.size()
+                        + " does not match the initialized one "
+                        + initVariableStreams.size());
+        for (int i = 0; i < feedbackStreams.size(); ++i) {
+            checkState(
+                    feedbackStreams.get(i).getParallelism() == headStreams.get(i).getParallelism(),
+                    String.format(
+                            "The feedback stream %d have different parallelism %d with the initial stream, which is %d",
+                            i,
+                            feedbackStreams.get(i).getParallelism(),
+                            headStreams.get(i).getParallelism()));
+        }
+
+        DataStreamList tails = addTails(feedbackStreams, iterationId, 0);
+        for (int i = 0; i < headStreams.size(); ++i) {
+            String coLocationGroupKey = "co-" + iterationId.toHexString() + "-" + i;
+            headStreams.get(i).getTransformation().setCoLocationGroupKey(coLocationGroupKey);
+            tails.get(i).getTransformation().setCoLocationGroupKey(coLocationGroupKey);
+        }
+
+        checkState(
+                mayHaveCriteria || iterationBodyResult.getTerminationCriteria() == null,
+                "The current iteration type does not support the termination criteria.");
+
+        if (iterationBodyResult.getTerminationCriteria() != null) {
+            addCriteriaStream(
+                    iterationBodyResult.getTerminationCriteria(),
+                    iterationId,
+                    env,
+                    draftEnv,
+                    initVariableStreams,
+                    headStreams,
+                    totalInitVariableParallelism);
+        }
+
+        return addOutputs(getActualDataStreams(iterationBodyResult.getOutputStreams(), draftEnv));
+    }
+
+    private static DataStreamList addReplayer(
+            DataStream<?> firstHeadStream,
+            DataStreamList originalDataStreams,
+            DataStreamList dataStreamInputs,
+            Set<Integer> replayedDataStreamIndices) {
+
+        List<DataStream<?>> result = new ArrayList<>(dataStreamInputs.size());
+        for (int i = 0; i < dataStreamInputs.size(); ++i) {
+            if (!replayedDataStreamIndices.contains(i)) {
+                result.add(dataStreamInputs.get(i));
+                continue;
+            }
+
+            // Note that the HeadOperator broadcasts the globally aligned events,
+            // so this operator does not need to emit them to the side output specially.
+            DataStream<?> replayedInput =
+                    ((SingleOutputStreamOperator<IterationRecord<?>>) firstHeadStream)
+                            .getSideOutput(HeadOperator.ALIGN_NOTIFY_OUTPUT_TAG)
+                            .map(x -> x, dataStreamInputs.get(i).getType())
+                            .setParallelism(1)
+                            .name("signal-change-typeinfo")
+                            .broadcast()
+                            .union(dataStreamInputs.get(i))
+                            .transform(
+                                    "Replayer-"
+                                            + originalDataStreams
+                                                    .get(i)
+                                                    .getTransformation()
+                                                    .getName(),
+                                    dataStreamInputs.get(i).getType(),
+                                    (OneInputStreamOperator) new ReplayOperator<>())
+                            .setParallelism(dataStreamInputs.get(i).getParallelism());
+            result.add(replayedInput);
+        }
+
+        return new DataStreamList(result);
+    }
+
+    private static void addCriteriaStream(
+            DataStream<?> draftCriteriaStream,
+            IterationID iterationId,
+            StreamExecutionEnvironment env,
+            DraftExecutionEnvironment draftEnv,
+            DataStreamList initVariableStreams,
+            DataStreamList headStreams,
+            int totalInitVariableParallelism) {
+        // deal with the criteria streams

Review comment:
       deals?







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739980561



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/datacache/nonkeyed/DataCacheWriter.java
##########
@@ -0,0 +1,121 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.datacache.nonkeyed;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FSDataOutputStream;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.core.memory.DataOutputView;
+import org.apache.flink.core.memory.DataOutputViewStreamWrapper;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.function.Supplier;
+
+/** Records the data received and replays them when required. */
+public class DataCacheWriter<T> {
+
+    private final TypeSerializer<T> serializer;
+
+    private final FileSystem fileSystem;
+
+    private final Supplier<Path> pathGenerator;
+
+    private final List<Segment> finishSegments;
+
+    private SegmentWriter currentSegment;
+
+    public DataCacheWriter(
+            TypeSerializer<T> serializer, FileSystem fileSystem, Supplier<Path> pathGenerator)
+            throws IOException {
+        this.serializer = serializer;
+        this.fileSystem = fileSystem;
+        this.pathGenerator = pathGenerator;
+
+        this.finishSegments = new ArrayList<>();
+
+        currentSegment = new SegmentWriter(pathGenerator.get());

Review comment:
       this.currentSegment?
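The writer in the hunk above appends records to a current segment and keeps a list of finished ones. A minimal in-memory model (no Flink `FileSystem`; the `finishCurrentSegment` method name is an assumption, since the quoted hunk cuts off after the constructor) looks like this:

```java
import java.util.ArrayList;
import java.util.List;

/** In-memory model of DataCacheWriter's segment handling; illustrative only. */
public class InMemoryCacheWriter<T> {

    private final List<List<T>> finishedSegments = new ArrayList<>();
    private List<T> currentSegment = new ArrayList<>();

    /** Appends a record to the segment currently being written. */
    public void add(T record) {
        currentSegment.add(record);
    }

    /** Seals the current segment and opens a fresh one (hypothetical method name). */
    public void finishCurrentSegment() {
        if (!currentSegment.isEmpty()) {
            finishedSegments.add(currentSegment);
            currentSegment = new ArrayList<>();
        }
    }

    public List<List<T>> getFinishedSegments() {
        return finishedSegments;
    }

    public static void main(String[] args) {
        InMemoryCacheWriter<Integer> writer = new InMemoryCacheWriter<>();
        writer.add(1);
        writer.add(2);
        writer.finishCurrentSegment();
        writer.add(3);
        writer.finishCurrentSegment();
        if (writer.getFinishedSegments().size() != 2) {
            throw new AssertionError();
        }
        System.out.println(writer.getFinishedSegments());
    }
}
```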







[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
gaoyunhaii commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739966230



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/operator/ReplayOperator.java
##########
@@ -0,0 +1,191 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.operator;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.iteration.IterationRecord;
+import org.apache.flink.iteration.config.IterationOptions;
+import org.apache.flink.iteration.datacache.nonkeyed.DataCacheReader;
+import org.apache.flink.iteration.datacache.nonkeyed.DataCacheWriter;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTracker;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTrackerFactory;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTrackerListener;
+import org.apache.flink.iteration.typeinfo.IterationRecordSerializer;
+import org.apache.flink.runtime.state.StateInitializationContext;
+import org.apache.flink.streaming.api.graph.StreamConfig;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.streaming.runtime.tasks.StreamTask;
+import org.apache.flink.util.ExceptionUtils;
+
+import java.io.IOException;
+import java.util.UUID;
+import java.util.concurrent.Executor;
+import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicReference;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** Replays the data received in round 0 in the following rounds. */
+public class ReplayOperator<T> extends AbstractStreamOperator<IterationRecord<T>>
+        implements OneInputStreamOperator<IterationRecord<T>, IterationRecord<T>>,
+                OperatorEpochWatermarkTrackerListener {
+
+    private OperatorEpochWatermarkTracker progressTracker;

Review comment:
       I think it would work, but perhaps we could postpone the modification? The current implementation satisfies the contract of `OneInputStreamOperator`.







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739979978



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/datacache/nonkeyed/DataCacheReader.java
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.datacache.nonkeyed;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FSDataInputStream;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataInputViewStreamWrapper;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Reads the cached data from a list of paths. */
+public class DataCacheReader<T> implements Iterator<T> {
+
+    private final TypeSerializer<T> serializer;
+
+    private final FileSystem fileSystem;
+
+    private final List<Segment> segments;
+
+    @Nullable private SegmentReader currentSegmentReader;
+
+    public DataCacheReader(
+            TypeSerializer<T> serializer, FileSystem fileSystem, List<Segment> segments)
+            throws IOException {
+
+        for (Segment segment : segments) {
+            checkArgument(segment.getCount() > 0, "Do not support empty segment");
+        }
+
+        this.serializer = serializer;
+        this.fileSystem = fileSystem;
+        this.segments = segments;
+
+        if (segments.size() > 0) {
+            this.currentSegmentReader = new SegmentReader(0);
+        }
+    }
+
+    @Override
+    public boolean hasNext() {
+        return currentSegmentReader != null && currentSegmentReader.hasNext();
+    }
+
+    @Override
+    public T next() {
+        try {
+            T next = currentSegmentReader.next();
+
+            if (!currentSegmentReader.hasNext()) {
+                currentSegmentReader.close();
+                if (currentSegmentReader.index < segments.size() - 1) {
+                    currentSegmentReader = new SegmentReader(currentSegmentReader.index + 1);
+                } else {
+                    currentSegmentReader = null;
+                }
+            }
+
+            return next;
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    private class SegmentReader {
+
+        private final int index;
+
+        private final FSDataInputStream inputStream;
+
+        private final DataInputView inputView;
+
+        private int offset;
+
+        public SegmentReader(int index) throws IOException {
+            this.index = index;
+            inputStream = fileSystem.open(segments.get(index).getPath());
+            inputView = new DataInputViewStreamWrapper(inputStream);

Review comment:
       this.inputView?
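The reader in the hunk above eagerly advances to the next segment as soon as the current one is exhausted, which is why the constructor rejects empty segments. The same chaining logic, stripped of the file-system plumbing, can be modelled as an iterator over in-memory segments (illustrative only, not the Flink ML class):

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

/** In-memory model of DataCacheReader's segment chaining; illustrative only. */
public class ChainedSegmentIterator<T> implements Iterator<T> {

    private final List<List<T>> segments;
    private int segmentIndex;
    private Iterator<T> current;

    /** Segments must be non-empty, mirroring DataCacheReader's checkArgument. */
    public ChainedSegmentIterator(List<List<T>> segments) {
        for (List<T> segment : segments) {
            if (segment.isEmpty()) {
                throw new IllegalArgumentException("Do not support empty segment");
            }
        }
        this.segments = segments;
        this.current = segments.isEmpty() ? null : segments.get(0).iterator();
    }

    @Override
    public boolean hasNext() {
        return current != null && current.hasNext();
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        T next = current.next();
        // Eagerly advance past the exhausted segment, like DataCacheReader#next.
        if (!current.hasNext()) {
            current = (segmentIndex < segments.size() - 1)
                    ? segments.get(++segmentIndex).iterator()
                    : null;
        }
        return next;
    }
}
```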







[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
gaoyunhaii commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739941592



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/datacache/nonkeyed/DataCacheReader.java
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.datacache.nonkeyed;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FSDataInputStream;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataInputViewStreamWrapper;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Reads the cached data from a list of paths. */
+public class DataCacheReader<T> implements Iterator<T> {
+
+    private final TypeSerializer<T> serializer;
+
+    private final FileSystem fileSystem;
+
+    private final List<Segment> segments;
+
+    @Nullable private SegmentReader currentSegmentReader;
+
+    public DataCacheReader(
+            TypeSerializer<T> serializer, FileSystem fileSystem, List<Segment> segments)
+            throws IOException {
+
+        for (Segment segment : segments) {
+            checkArgument(segment.getCount() > 0, "Do not support empty segment");
+        }
+
+        this.serializer = serializer;
+        this.fileSystem = fileSystem;
+        this.segments = segments;
+
+        if (segments.size() > 0) {
+            this.currentSegmentReader = new SegmentReader(0);
+        }
+    }
+
+    @Override
+    public boolean hasNext() {
+        return currentSegmentReader != null && currentSegmentReader.hasNext();
+    }
+
+    @Override
+    public T next() {
+        try {
+            T next = currentSegmentReader.next();
+
+            if (!currentSegmentReader.hasNext()) {
+                currentSegmentReader.close();
+                if (currentSegmentReader.index < segments.size() - 1) {
+                    currentSegmentReader = new SegmentReader(currentSegmentReader.index + 1);
+                } else {
+                    currentSegmentReader = null;
+                }
+            }
+
+            return next;
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    private class SegmentReader {
+
+        private final int index;
+
+        private final FSDataInputStream inputStream;
+
+        private final DataInputView inputView;
+
+        private int offset;
+
+        public SegmentReader(int index) throws IOException {
+            this.index = index;
+            inputStream = fileSystem.open(segments.get(index).getPath());
+            inputView = new DataInputViewStreamWrapper(inputStream);
+        }
+
+        public boolean hasNext() {
+            return offset < segments.get(index).getCount();

Review comment:
       `offset` should be the position of the next record?







[GitHub] [flink-ml] gaoyunhaii closed pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
gaoyunhaii closed pull request #16:
URL: https://github.com/apache/flink-ml/pull/16


   





[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
gaoyunhaii commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739965998



##########
File path: flink-ml-iteration/src/test/java/org/apache/flink/iteration/itcases/BoundedPerRoundStreamIterationITCase.java
##########
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.itcases;
+
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.iteration.DataStreamList;
+import org.apache.flink.iteration.IterationBodyResult;
+import org.apache.flink.iteration.IterationConfig;
+import org.apache.flink.iteration.Iterations;
+import org.apache.flink.iteration.ReplayableDataStreamList;
+import org.apache.flink.iteration.config.IterationOptions;
+import org.apache.flink.iteration.itcases.operators.OutputRecord;
+import org.apache.flink.iteration.itcases.operators.SequenceSource;
+import org.apache.flink.iteration.itcases.operators.TwoInputReducePerRoundOperator;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.minicluster.MiniCluster;
+import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.SinkFunction;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import static org.apache.flink.iteration.itcases.UnboundedStreamIterationITCase.verifyResult;
+import static org.junit.Assert.assertEquals;
+
+/** Tests the per-round iterations. */
+public class BoundedPerRoundStreamIterationITCase {
+
+    @Rule public TemporaryFolder tempFolder = new TemporaryFolder();
+
+    private static BlockingQueue<OutputRecord<Integer>> result = new LinkedBlockingQueue<>();

Review comment:
       I modified the tests similarly to what we do for the other tests.







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739981170



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/datacache/nonkeyed/Segment.java
##########
@@ -0,0 +1,73 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.datacache.nonkeyed;
+
+import org.apache.flink.core.fs.Path;
+
+import java.io.Serializable;
+import java.util.Objects;
+
+/** A segment represents a single file for the cache. */
+public class Segment implements Serializable {
+
+    private final Path path;
+
+    /** The counts of the records in the file. */

Review comment:
       Counts --> count?







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739747754



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/PerRoundSubGraphBuilder.java
##########
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.iteration.compile.DraftExecutionEnvironment;
+import org.apache.flink.iteration.operator.OperatorWrapper;
+import org.apache.flink.iteration.operator.perround.PerRoundOperatorWrapper;
+import org.apache.flink.streaming.api.datastream.DataStream;
+
+import java.util.List;
+import java.util.Optional;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Allows to add per-round subgraph inside the iteration body. */

Review comment:
       a per-round subgraph?







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r736240831



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/PerRoundSubGraphBuilder.java
##########
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.iteration.compile.DraftExecutionEnvironment;
+import org.apache.flink.iteration.operator.OperatorWrapper;
+import org.apache.flink.iteration.operator.perround.PerRoundOperatorWrapper;
+import org.apache.flink.streaming.api.datastream.DataStream;
+
+import java.util.List;
+import java.util.Optional;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Allows to add per-round subgraph inside the iteration body. */
+@Internal
+public class PerRoundSubGraphBuilder {

Review comment:
       This class is not used.

##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/datacache/nonkeyed/DataCacheWriter.java
##########
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.datacache.nonkeyed;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FSDataOutputStream;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.core.memory.DataOutputView;
+import org.apache.flink.core.memory.DataOutputViewStreamWrapper;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.function.Supplier;
+
+/** Records the data received and replays them when required. */
+public class DataCacheWriter<T> {
+
+    private final TypeSerializer<T> serializer;
+
+    private final FileSystem fileSystem;
+
+    private final Supplier<Path> pathGenerator;
+
+    private final List<Segment> finishSegments;
+
+    private Path currentPath;
+
+    private FSDataOutputStream outputStream;
+
+    private DataOutputView outputView;
+
+    private int currentSegmentCount;
+
+    public DataCacheWriter(
+            TypeSerializer<T> serializer, FileSystem fileSystem, Supplier<Path> pathGenerator)
+            throws IOException {
+        this.serializer = serializer;
+        this.fileSystem = fileSystem;
+        this.pathGenerator = pathGenerator;
+
+        this.finishSegments = new ArrayList<>();
+
+        startNewSegment();
+    }
+
+    public void addRecord(T record) throws IOException {
+        serializer.serialize(record, outputView);
+        currentSegmentCount += 1;
+    }
+
+    public List<Segment> finishAddingRecords() throws IOException {
+        finishCurrentSegment();
+        return finishSegments;
+    }
+
+    public List<Segment> getFinishSegments() {
+        return finishSegments;
+    }
+
+    @VisibleForTesting
+    void startNewSegment() throws IOException {
+        this.currentPath = pathGenerator.get();
+        this.outputStream = fileSystem.create(currentPath, FileSystem.WriteMode.NO_OVERWRITE);
+        this.outputView = new DataOutputViewStreamWrapper(outputStream);

Review comment:
       I am a little curious why there is not a `SegmentWriter`, just as there is a `SegmentReader`. 
   We might be able to remove some of the mutable members such as `currentPath` & `outputStream`, etc.
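
   A minimal sketch of the kind of `SegmentWriter` suggested here (hypothetical names; the real class would wrap Flink's `FSDataOutputStream` and `DataOutputViewStreamWrapper` rather than `java.io` streams, and write to a `Path` from the path generator):

   ```java
   import java.io.ByteArrayOutputStream;
   import java.io.DataOutputStream;
   import java.io.IOException;
   import java.io.UncheckedIOException;

   // Hypothetical SegmentWriter: owns all per-segment mutable state
   // (the output stream and the record count), so the enclosing
   // DataCacheWriter could keep only final fields and simply create a
   // new SegmentWriter whenever a segment starts.
   class SegmentWriter {
       private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
       private final DataOutputStream out = new DataOutputStream(buffer);
       private int count; // records written to this segment so far

       void addRecord(int record) {
           try {
               out.writeInt(record); // stand-in for serializer.serialize(record, outputView)
               count++;
           } catch (IOException e) {
               throw new UncheckedIOException(e);
           }
       }

       // Finishes the segment and returns the number of records it holds.
       int finish() {
           try {
               out.flush();
           } catch (IOException e) {
               throw new UncheckedIOException(e);
           }
           return count;
       }

       byte[] data() {
           return buffer.toByteArray();
       }
   }
   ```

   With this shape, `startNewSegment()` would reduce to constructing a fresh `SegmentWriter`, and `finishCurrentSegment()` to calling `finish()` and recording the resulting segment.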

##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/IterationFactory.java
##########
@@ -125,6 +138,42 @@ public static DataStreamList createIteration(
         return addOutputs(getActualDataStreams(iterationBodyResult.getOutputStreams(), draftEnv));
     }
 
+    private static DataStreamList addReplayer(
+            DataStream<?> firstHeadStream,
+            DataStreamList originalDataStreams,
+            DataStreamList dataStreamInputs,
+            Set<Integer> replayedDataStreamIndices) {
+
+        List<DataStream<?>> result = new ArrayList<>(dataStreamInputs.size());
+        for (int i = 0; i < dataStreamInputs.size(); ++i) {
+            if (!replayedDataStreamIndices.contains(i)) {
+                result.add(dataStreamInputs.get(i));
+                continue;
+            }
+
+            DataStream<?> replayedInput =
+                    ((SingleOutputStreamOperator<IterationRecord<?>>) firstHeadStream)
+                            .getSideOutput(HeadOperator.ALIGN_NOTIFY_OUTPUT_TAG)
+                            .map(x -> x, dataStreamInputs.get(i).getType())
+                            .setParallelism(firstHeadStream.getParallelism())
+                            .name("signal-change-typeinfo")
+                            .broadcast()
+                            .union(dataStreamInputs.get(i))
+                            .transform(
+                                    "Replayer-"
+                                            + originalDataStreams
+                                                    .get(i)
+                                                    .getTransformation()
+                                                    .getName(),
+                                    dataStreamInputs.get(i).getType(),
+                                    (OneInputStreamOperator) new ReplayOperator<>())

Review comment:
       Maybe we could use the `StreamOperatorFactory`. WDYT?

##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/config/IterationOptions.java
##########
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.config;
+
+import org.apache.flink.configuration.ConfigOption;
+
+import static org.apache.flink.configuration.ConfigOptions.key;
+
+/** The options for the iteration. */
+public class IterationOptions {
+
+    public static final ConfigOption<String> DATA_CACHE_PATH =

Review comment:
       Maybe we could use `REPLAY_DATA_CACHE_PATH` if this is only used for replaying the data.

##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/Iterations.java
##########
@@ -89,7 +98,12 @@
     public static DataStreamList iterateUnboundedStreams(
             DataStreamList initVariableStreams, DataStreamList dataStreams, IterationBody body) {
         return IterationFactory.createIteration(
-                initVariableStreams, dataStreams, body, new AllRoundOperatorWrapper(), false);
+                initVariableStreams,

Review comment:
       I notice that the `IterationFactory` is almost the same as `Iterations`. Would you like to give some explanation of why we introduce `IterationFactory`? IMHO it is a little duplicated, which might introduce some burden for understanding.
    
    So maybe we could remove the `IterationFactory`. WDYT?

##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/operator/ReplayOperator.java
##########
@@ -0,0 +1,191 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.operator;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.iteration.IterationRecord;
+import org.apache.flink.iteration.config.IterationOptions;
+import org.apache.flink.iteration.datacache.nonkeyed.DataCacheReader;
+import org.apache.flink.iteration.datacache.nonkeyed.DataCacheWriter;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTracker;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTrackerFactory;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTrackerListener;
+import org.apache.flink.iteration.typeinfo.IterationRecordSerializer;
+import org.apache.flink.runtime.state.StateInitializationContext;
+import org.apache.flink.streaming.api.graph.StreamConfig;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.streaming.runtime.tasks.StreamTask;
+import org.apache.flink.util.ExceptionUtils;
+
+import java.io.IOException;
+import java.util.UUID;
+import java.util.concurrent.Executor;
+import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicReference;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** Replays the data received in the round 0 in the following round. */
+public class ReplayOperator<T> extends AbstractStreamOperator<IterationRecord<T>>
+        implements OneInputStreamOperator<IterationRecord<T>, IterationRecord<T>>,
+                OperatorEpochWatermarkTrackerListener {
+
+    private OperatorEpochWatermarkTracker progressTracker;

Review comment:
       I think maybe we could introduce the `OperatorFactory` to make the following member `final`.
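
   The refactoring proposed here, sketched with plain-Java stand-ins (no Flink types; `Tracker`, `ReplayOperatorSketch`, and `ReplayOperatorFactory` are hypothetical): when a factory constructs the operator, dependencies arrive through the constructor, so fields like `progressTracker` can be declared `final` instead of being assigned later during setup.

   ```java
   import java.util.function.Supplier;

   // Stand-in for OperatorEpochWatermarkTracker (hypothetical).
   class Tracker {}

   // With constructor injection the operator's field can be final.
   class ReplayOperatorSketch {
       private final Tracker progressTracker;

       ReplayOperatorSketch(Tracker progressTracker) {
           this.progressTracker = progressTracker;
       }

       Tracker tracker() {
           return progressTracker;
       }
   }

   // Stand-in for a StreamOperatorFactory: creates the fully
   // initialized operator in one step instead of mutating fields
   // after construction.
   class ReplayOperatorFactory implements Supplier<ReplayOperatorSketch> {
       @Override
       public ReplayOperatorSketch get() {
           return new ReplayOperatorSketch(new Tracker());
       }
   }
   ```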

##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/datacache/nonkeyed/DataCacheReader.java
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.datacache.nonkeyed;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FSDataInputStream;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataInputViewStreamWrapper;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Reads the cached data from a list of paths. */
+public class DataCacheReader<T> implements Iterator<T> {
+
+    private final TypeSerializer<T> serializer;
+
+    private final FileSystem fileSystem;
+
+    private final List<Segment> segments;
+
+    @Nullable private SegmentReader currentSegmentReader;
+
+    public DataCacheReader(
+            TypeSerializer<T> serializer, FileSystem fileSystem, List<Segment> segments)
+            throws IOException {
+
+        for (Segment segment : segments) {
+            checkArgument(segment.getCount() > 0, "Do not support empty segment");
+        }
+
+        this.serializer = serializer;
+        this.fileSystem = fileSystem;
+        this.segments = segments;
+
+        if (segments.size() > 0) {
+            this.currentSegmentReader = new SegmentReader(0);
+        }
+    }
+
+    @Override
+    public boolean hasNext() {
+        return currentSegmentReader != null && currentSegmentReader.hasNext();
+    }
+
+    @Override
+    public T next() {
+        try {
+            T next = currentSegmentReader.next();
+
+            if (!currentSegmentReader.hasNext()) {
+                currentSegmentReader.close();
+                if (currentSegmentReader.index < segments.size() - 1) {
+                    currentSegmentReader = new SegmentReader(currentSegmentReader.index + 1);
+                } else {
+                    currentSegmentReader = null;
+                }
+            }
+
+            return next;
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    private class SegmentReader {
+
+        private final int index;
+
+        private final FSDataInputStream inputStream;
+
+        private final DataInputView inputView;
+
+        private int offset;
+
+        public SegmentReader(int index) throws IOException {
+            this.index = index;
+            inputStream = fileSystem.open(segments.get(index).getPath());
+            inputView = new DataInputViewStreamWrapper(inputStream);
+        }
+
+        public boolean hasNext() {
+            return offset < segments.get(index).getCount();

Review comment:
       I think the meaning of `offset` is actually the count of records read from the file?
   It might be easier to understand if we used `count` instead of `offset`.
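
   The rename suggested here, as a stdlib-only sketch (hypothetical; the real reader uses Flink's `DataInputViewStreamWrapper` and compares against `segments.get(index).getCount()`): the counter tracks how many records have been consumed, so `count` describes it better than `offset`.

   ```java
   import java.io.ByteArrayInputStream;
   import java.io.DataInputStream;
   import java.io.IOException;
   import java.io.UncheckedIOException;

   // `count` is the number of records consumed so far (not a byte
   // offset); hasNext() compares it with the segment's record total.
   class SegmentReaderSketch {
       private final DataInputStream in;
       private final int total; // stand-in for segments.get(index).getCount()
       private int count;       // records read so far

       SegmentReaderSketch(byte[] data, int total) {
           this.in = new DataInputStream(new ByteArrayInputStream(data));
           this.total = total;
       }

       boolean hasNext() {
           return count < total;
       }

       int next() {
           try {
               int value = in.readInt(); // stand-in for serializer.deserialize(inputView)
               count++;
               return value;
           } catch (IOException e) {
               throw new UncheckedIOException(e);
           }
       }
   }
   ```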
   







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739747819



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/operator/HeadOperator.java
##########
@@ -176,6 +182,9 @@ public void handleOperatorEvent(OperatorEvent operatorEvent) {
                         0);
                 eventBroadcastOutput.broadcastEmit((StreamRecord) reusable);
                 numFeedbackRecordsPerEpoch.remove(globallyAlignedEvent.getEpoch());
+
+                // Also notify the listener

Review comment:
       Notifies?







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739747904



##########
File path: flink-ml-iteration/src/test/java/org/apache/flink/iteration/itcases/BoundedPerRoundStreamIterationITCase.java
##########
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.itcases;
+
+import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.configuration.RestOptions;
+import org.apache.flink.iteration.DataStreamList;
+import org.apache.flink.iteration.IterationBodyResult;
+import org.apache.flink.iteration.IterationConfig;
+import org.apache.flink.iteration.Iterations;
+import org.apache.flink.iteration.ReplayableDataStreamList;
+import org.apache.flink.iteration.config.IterationOptions;
+import org.apache.flink.iteration.itcases.operators.OutputRecord;
+import org.apache.flink.iteration.itcases.operators.SequenceSource;
+import org.apache.flink.iteration.itcases.operators.TwoInputReducePerRoundOperator;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.runtime.minicluster.MiniCluster;
+import org.apache.flink.runtime.minicluster.MiniClusterConfiguration;
+import org.apache.flink.streaming.api.datastream.DataStream;
+import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.SinkFunction;
+
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.TemporaryFolder;
+
+import java.io.IOException;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.BlockingQueue;
+import java.util.concurrent.LinkedBlockingQueue;
+
+import static org.apache.flink.iteration.itcases.UnboundedStreamIterationITCase.verifyResult;
+import static org.junit.Assert.assertEquals;
+
+/** Tests the per-round iterations. */
+public class BoundedPerRoundStreamIterationITCase {
+
+    @Rule public TemporaryFolder tempFolder = new TemporaryFolder();
+
+    private static BlockingQueue<OutputRecord<Integer>> result = new LinkedBlockingQueue<>();

Review comment:
       Could be `final`
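A minimal self-contained sketch of the reviewer's suggestion: since the queue reference is assigned once and never rebound, declaring it `final` lets the compiler enforce that. The class and element type here are simplified stand-ins, not the test's actual types.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class FinalFieldSketch {
    // Declared final: the reference is assigned exactly once at class
    // initialization and can never be reassigned afterwards.
    private static final BlockingQueue<Integer> RESULT = new LinkedBlockingQueue<>();

    public static void main(String[] args) {
        RESULT.add(42);
        System.out.println(RESULT.poll());
    }
}
```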




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739979898



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/datacache/nonkeyed/DataCacheReader.java
##########
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.datacache.nonkeyed;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FSDataInputStream;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.memory.DataInputView;
+import org.apache.flink.core.memory.DataInputViewStreamWrapper;
+
+import javax.annotation.Nullable;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Reads the cached data from a list of paths. */
+public class DataCacheReader<T> implements Iterator<T> {
+
+    private final TypeSerializer<T> serializer;
+
+    private final FileSystem fileSystem;
+
+    private final List<Segment> segments;
+
+    @Nullable private SegmentReader currentSegmentReader;
+
+    public DataCacheReader(
+            TypeSerializer<T> serializer, FileSystem fileSystem, List<Segment> segments)
+            throws IOException {
+
+        for (Segment segment : segments) {
+            checkArgument(segment.getCount() > 0, "Do not support empty segment");
+        }
+
+        this.serializer = serializer;
+        this.fileSystem = fileSystem;
+        this.segments = segments;
+
+        if (segments.size() > 0) {
+            this.currentSegmentReader = new SegmentReader(0);
+        }
+    }
+
+    @Override
+    public boolean hasNext() {
+        return currentSegmentReader != null && currentSegmentReader.hasNext();
+    }
+
+    @Override
+    public T next() {
+        try {
+            T next = currentSegmentReader.next();
+
+            if (!currentSegmentReader.hasNext()) {
+                currentSegmentReader.close();
+                if (currentSegmentReader.index < segments.size() - 1) {
+                    currentSegmentReader = new SegmentReader(currentSegmentReader.index + 1);
+                } else {
+                    currentSegmentReader = null;
+                }
+            }
+
+            return next;
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    private class SegmentReader {
+
+        private final int index;
+
+        private final FSDataInputStream inputStream;
+
+        private final DataInputView inputView;
+
+        private int offset;
+
+        public SegmentReader(int index) throws IOException {
+            this.index = index;
+            inputStream = fileSystem.open(segments.get(index).getPath());

Review comment:
       this.inputStream?
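Beyond the `this.` consistency the reviewer points at, the quoted class illustrates a general pattern: an `Iterator` that drains one segment at a time and advances to the next when the current one is exhausted. The sketch below mimics that pattern with in-memory lists standing in for the file-backed segments, and uses `this.` consistently in the constructor; it is an illustration, not the PR's actual `DataCacheReader`.

```java
import java.util.Iterator;
import java.util.List;

// Chains several "segments" into a single Iterator, advancing to the
// next segment whenever the current one runs out of elements.
public class ChainedSegmentIterator implements Iterator<Integer> {
    private final List<List<Integer>> segments;
    private int segmentIndex;
    private int offset;

    public ChainedSegmentIterator(List<List<Integer>> segments) {
        this.segments = segments; // `this.` used consistently
    }

    @Override
    public boolean hasNext() {
        // Skip exhausted (or empty) segments until one with data is found.
        while (segmentIndex < segments.size()
                && offset >= segments.get(segmentIndex).size()) {
            segmentIndex++;
            offset = 0;
        }
        return segmentIndex < segments.size();
    }

    @Override
    public Integer next() {
        hasNext(); // position on a non-exhausted segment
        return segments.get(segmentIndex).get(offset++);
    }

    public static void main(String[] args) {
        ChainedSegmentIterator it =
                new ChainedSegmentIterator(List.of(List.of(1, 2), List.of(3)));
        while (it.hasNext()) {
            System.out.println(it.next());
        }
    }
}
```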







[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
gaoyunhaii commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739941807



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/PerRoundSubGraphBuilder.java
##########
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration;
+
+import org.apache.flink.annotation.Internal;
+import org.apache.flink.iteration.compile.DraftExecutionEnvironment;
+import org.apache.flink.iteration.operator.OperatorWrapper;
+import org.apache.flink.iteration.operator.perround.PerRoundOperatorWrapper;
+import org.apache.flink.streaming.api.datastream.DataStream;
+
+import java.util.List;
+import java.util.Optional;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/** Allows to add per-round subgraph inside the iteration body. */

Review comment:
       This class is removed







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739970068



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/operator/ReplayOperator.java
##########
@@ -0,0 +1,191 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.operator;
+
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.iteration.IterationRecord;
+import org.apache.flink.iteration.config.IterationOptions;
+import org.apache.flink.iteration.datacache.nonkeyed.DataCacheReader;
+import org.apache.flink.iteration.datacache.nonkeyed.DataCacheWriter;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTracker;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTrackerFactory;
+import org.apache.flink.iteration.progresstrack.OperatorEpochWatermarkTrackerListener;
+import org.apache.flink.iteration.typeinfo.IterationRecordSerializer;
+import org.apache.flink.runtime.state.StateInitializationContext;
+import org.apache.flink.streaming.api.graph.StreamConfig;
+import org.apache.flink.streaming.api.operators.AbstractStreamOperator;
+import org.apache.flink.streaming.api.operators.OneInputStreamOperator;
+import org.apache.flink.streaming.api.operators.Output;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.streaming.runtime.tasks.StreamTask;
+import org.apache.flink.util.ExceptionUtils;
+
+import java.io.IOException;
+import java.util.UUID;
+import java.util.concurrent.Executor;
+import java.util.concurrent.Executors;
+import java.util.concurrent.atomic.AtomicReference;
+
+import static org.apache.flink.util.Preconditions.checkState;
+
+/** Replays the data received in the round 0 in the following round. */
+public class ReplayOperator<T> extends AbstractStreamOperator<IterationRecord<T>>
+        implements OneInputStreamOperator<IterationRecord<T>, IterationRecord<T>>,
+                OperatorEpochWatermarkTrackerListener {
+
+    private OperatorEpochWatermarkTracker progressTracker;

Review comment:
       I agree. But I think we might need a jira to track this improvement. WDYT?
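For readers skimming the thread, the idea behind the `ReplayOperator` under discussion can be sketched in a few lines: records seen in round 0 are cached, and later rounds are served from that cache instead of the upstream input. A plain in-memory list stands in for the file-backed `DataCacheWriter`/`DataCacheReader` pair; this is an illustrative simplification, not the operator's actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class ReplaySketch<T> {
    private final List<T> round0Cache = new ArrayList<>();

    // Called for each record received during round 0.
    public void processRound0(T record) {
        round0Cache.add(record);
    }

    // Called at the start of each subsequent round: re-emit the cache.
    public List<T> replay() {
        return new ArrayList<>(round0Cache);
    }

    public static void main(String[] args) {
        ReplaySketch<String> sketch = new ReplaySketch<>();
        sketch.processRound0("a");
        sketch.processRound0("b");
        System.out.println(sketch.replay());
    }
}
```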







[GitHub] [flink-ml] gaoyunhaii commented on a change in pull request #16: [FLINK-9][iteration] Support per-round iteration

Posted by GitBox <gi...@apache.org>.
gaoyunhaii commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739822786



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/config/IterationOptions.java
##########
@@ -0,0 +1,33 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.iteration.config;
+
+import org.apache.flink.configuration.ConfigOption;
+
+import static org.apache.flink.configuration.ConfigOptions.key;
+
+/** The options for the iteration. */
+public class IterationOptions {
+
+    public static final ConfigOption<String> DATA_CACHE_PATH =

Review comment:
       This would also be used for operators to cache records, like in `withBroadcast`. 
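To make the comment concrete, a consumer of such an option typically resolves the configured cache path and falls back to a default location when it is unset. The sketch below shows that lookup with a plain map standing in for Flink's `Configuration`; the key name is an assumption for illustration only.

```java
import java.util.HashMap;
import java.util.Map;

public class DataCachePathSketch {
    // Hypothetical key name for illustration; the real option is
    // IterationOptions.DATA_CACHE_PATH.
    static final String KEY = "iteration.data-cache.path";

    // Resolves the cache base path, falling back to the JVM temp dir
    // when the option is not configured.
    static String resolveCachePath(Map<String, String> conf) {
        return conf.getOrDefault(KEY, System.getProperty("java.io.tmpdir"));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(KEY, "/tmp/iteration-cache");
        System.out.println(resolveCachePath(conf));
    }
}
```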







[GitHub] [flink-ml] guoweiM commented on a change in pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
guoweiM commented on a change in pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#discussion_r739982825



##########
File path: flink-ml-iteration/src/main/java/org/apache/flink/iteration/Iterations.java
##########
@@ -112,15 +145,400 @@ public static DataStreamList iterateBoundedStreamsUntilTermination(
             ReplayableDataStreamList dataStreams,
             IterationConfig config,
             IterationBody body) {
-        Preconditions.checkArgument(
-                config.getOperatorLifeCycle() == IterationConfig.OperatorLifeCycle.ALL_ROUND);
-        Preconditions.checkArgument(dataStreams.getReplayedDataStreams().size() == 0);
+        OperatorWrapper wrapper =
+                config.getOperatorLifeCycle() == IterationConfig.OperatorLifeCycle.ALL_ROUND
+                        ? new AllRoundOperatorWrapper<>()
+                        : new PerRoundOperatorWrapper<>();
 
-        return IterationFactory.createIteration(
+        List<DataStream<?>> allDatastreams = new ArrayList<>();
+        allDatastreams.addAll(dataStreams.getReplayedDataStreams());
+        allDatastreams.addAll(dataStreams.getNonReplayedStreams());
+
+        Set<Integer> replayedIndices =
+                IntStream.range(0, dataStreams.getReplayedDataStreams().size())
+                        .boxed()
+                        .collect(Collectors.toSet());
+
+        return createIteration(
                 initVariableStreams,
-                new DataStreamList(dataStreams.getNonReplayedStreams()),
+                new DataStreamList(allDatastreams),
+                replayedIndices,
                 body,
-                new AllRoundOperatorWrapper(),
+                wrapper,
                 true);
     }
+
+    @SuppressWarnings({"unchecked", "rawtypes"})
+    private static DataStreamList createIteration(
+            DataStreamList initVariableStreams,
+            DataStreamList dataStreams,
+            Set<Integer> replayedDataStreamIndices,
+            IterationBody body,
+            OperatorWrapper<?, IterationRecord<?>> initialOperatorWrapper,
+            boolean mayHaveCriteria) {
+        checkState(initVariableStreams.size() > 0, "There should be at least one variable stream");
+
+        IterationID iterationId = new IterationID();
+
+        List<TypeInformation<?>> initVariableTypeInfos = getTypeInfos(initVariableStreams);
+        List<TypeInformation<?>> dataStreamTypeInfos = getTypeInfos(dataStreams);
+
+        // Add heads and inputs
+        int totalInitVariableParallelism =
+                map(
+                                initVariableStreams,
+                                dataStream ->
+                                        dataStream.getParallelism() > 0
+                                                ? dataStream.getParallelism()
+                                                : dataStream
+                                                        .getExecutionEnvironment()
+                                                        .getConfig()
+                                                        .getParallelism())
+                        .stream()
+                        .mapToInt(i -> i)
+                        .sum();
+        DataStreamList initVariableInputs = addInputs(initVariableStreams, false);
+        DataStreamList headStreams =
+                addHeads(
+                        initVariableStreams,
+                        initVariableInputs,
+                        iterationId,
+                        totalInitVariableParallelism,
+                        false,
+                        0);
+
+        DataStreamList dataStreamInputs = addInputs(dataStreams, true);
+        if (replayedDataStreamIndices.size() > 0) {
+            dataStreamInputs =
+                    addReplayer(
+                            headStreams.get(0),
+                            dataStreams,
+                            dataStreamInputs,
+                            replayedDataStreamIndices);
+        }
+
+        // Create the iteration body. We map the inputs of iteration body into the draft sources,
+        // which serve as the start points to build the draft subgraph.
+        StreamExecutionEnvironment env = initVariableStreams.get(0).getExecutionEnvironment();
+        DraftExecutionEnvironment draftEnv =
+                new DraftExecutionEnvironment(env, initialOperatorWrapper);
+        DataStreamList draftHeadStreams =
+                addDraftSources(headStreams, draftEnv, initVariableTypeInfos);
+        DataStreamList draftDataStreamInputs =
+                addDraftSources(dataStreamInputs, draftEnv, dataStreamTypeInfos);
+
+        IterationBodyResult iterationBodyResult =
+                body.process(draftHeadStreams, draftDataStreamInputs);
+        ensuresTransformationAdded(iterationBodyResult.getFeedbackVariableStreams(), draftEnv);
+        ensuresTransformationAdded(iterationBodyResult.getOutputStreams(), draftEnv);
+        draftEnv.copyToActualEnvironment();
+
+        // Add tails and co-locate them with the heads.

Review comment:
       Adds?
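One detail of the hunk above worth calling out: because the replayed streams are listed before the non-replayed ones in `allDatastreams`, their indices are simply `0..n-1`, which is what the `IntStream.range(...)` collect computes. A standalone sketch of that computation:

```java
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ReplayedIndicesSketch {
    public static void main(String[] args) {
        // Replayed streams come first in the combined list, so their
        // indices are exactly 0..replayedCount-1.
        int replayedCount = 3;
        Set<Integer> replayedIndices =
                IntStream.range(0, replayedCount)
                        .boxed()
                        .collect(Collectors.toSet());
        System.out.println(replayedIndices.size());
    }
}
```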







[GitHub] [flink-ml] gaoyunhaii commented on pull request #16: [FLINK-24653][iteration] Support per-round operators inside the iteration

Posted by GitBox <gi...@apache.org>.
gaoyunhaii commented on pull request #16:
URL: https://github.com/apache/flink-ml/pull/16#issuecomment-955991483


   Many thanks @guoweiM for the review! Will merge~

