Posted to issues@flink.apache.org by NicoK <gi...@git.apache.org> on 2017/09/27 09:43:05 UTC

[GitHub] flink pull request #4509: [FLINK-7406][network] Implement Netty receiver inc...

Github user NicoK commented on a diff in the pull request:

    https://github.com/apache/flink/pull/4509#discussion_r141283335
  
    --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/io/network/netty/CreditBasedClientHandler.java ---
    @@ -0,0 +1,283 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements.  See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership.  The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License.  You may obtain a copy of the License at
    + *
    + *     http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.flink.runtime.io.network.netty;
    +
    +import org.apache.flink.core.memory.MemorySegment;
    +import org.apache.flink.core.memory.MemorySegmentFactory;
    +import org.apache.flink.runtime.io.network.buffer.Buffer;
    +import org.apache.flink.runtime.io.network.buffer.FreeingBufferRecycler;
    +import org.apache.flink.runtime.io.network.netty.exception.LocalTransportException;
    +import org.apache.flink.runtime.io.network.netty.exception.RemoteTransportException;
    +import org.apache.flink.runtime.io.network.netty.exception.TransportException;
    +import org.apache.flink.runtime.io.network.partition.PartitionNotFoundException;
    +import org.apache.flink.runtime.io.network.partition.consumer.InputChannelID;
    +import org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannel;
    +
    +import org.apache.flink.shaded.guava18.com.google.common.collect.Maps;
    +import org.apache.flink.shaded.netty4.io.netty.channel.ChannelHandlerContext;
    +import org.apache.flink.shaded.netty4.io.netty.channel.ChannelInboundHandlerAdapter;
    +
    +import org.slf4j.Logger;
    +import org.slf4j.LoggerFactory;
    +
    +import java.io.IOException;
    +import java.net.SocketAddress;
    +import java.util.concurrent.ConcurrentHashMap;
    +import java.util.concurrent.ConcurrentMap;
    +import java.util.concurrent.atomic.AtomicReference;
    +
    +class CreditBasedClientHandler extends ChannelInboundHandlerAdapter {
    +
    +	private static final Logger LOG = LoggerFactory.getLogger(CreditBasedClientHandler.class);
    +
    +	private final ConcurrentMap<InputChannelID, RemoteInputChannel> inputChannels = new ConcurrentHashMap<>();
    +
    +	private final AtomicReference<Throwable> channelError = new AtomicReference<>();
    +
    +	/**
    +	 * Set of cancelled partition requests. A request is cancelled iff an input channel is cleared
    +	 * while data is still coming in for this channel.
    +	 */
    +	private final ConcurrentMap<InputChannelID, InputChannelID> cancelled = Maps.newConcurrentMap();
    --- End diff --
    
    I guess we can also use `ConcurrentHashMap` here directly and avoid the Guava usage.
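    
    For illustration only (a minimal sketch of the suggestion, not part of the actual change), the field could be declared against the JDK class directly; the required `java.util.concurrent` imports are already present in the diff above:
    
        // Effectively the same as Guava's Maps.newConcurrentMap(), which with default
        // settings is also backed by a ConcurrentHashMap, but without the shaded Guava dependency.
        private final ConcurrentMap<InputChannelID, InputChannelID> cancelled = new ConcurrentHashMap<>();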


---