Posted to dev@flink.apache.org by Ufuk Celebi <uc...@apache.org> on 2016/10/10 09:05:16 UTC

[RESULT] [VOTE] Release Apache Flink 1.1.3 (RC1)

This vote is cancelled in favour of RC2.

On Mon, Oct 10, 2016 at 10:45 AM, Maximilian Michels <mx...@apache.org> wrote:
> Thanks for checking out the fix, Stephan. I'll merge it now to the
> release-1.1 branch. Then we should be good to go for a new release
> candidate.
>
> +1 for adding a note to the release notes to avoid the "semi async" mode.
>
> -Max
>
>
> On Sat, Oct 8, 2016 at 2:13 AM, Stephan Ewen <se...@apache.org> wrote:
>> Thanks, Max, for finding this. I looked at the pull request, looks good.
>> Let's merge it and create another release candidate.
>>
>> For the testing of the RC1:
>>
>>  - Joined Kostas in verifying the Kafka 0.9 consumer behavior
>> for low-throughput streams
>>  - mvn clean verify for Scala 2.11, Hadoop 2.6.3
>>  - No changes to LICENSE or NOTICE necessary since the last release
>>
>> I found that when executing the tests, some RocksDB state backend tests
>> occasionally fail with a segfault. This seems to concern the "semi
>> async" mode.
>>
>> This may be just a test instability (I am pretty sure it is nothing
>> introduced in 1.1.3), as none of that code was touched, as far as I can
>> tell.
>>
>> However, I think it is prudent to add a note to the release notes
>> advising users to use the "fully async" mode for the RocksDB state
>> backend. That has the additional advantage that savepoints from that
>> mode will most likely be compatible with Flink 1.2, whereas savepoints
>> from the "semi async" mode will almost certainly not be.
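For readers following along: switching to the "fully async" mode is done on the state backend itself. The sketch below is not from this thread; the checkpoint URI is a placeholder, and it assumes the 1.1.x RocksDBStateBackend API where enableFullyAsyncSnapshots() was the opt-in switch.

```java
// Hypothetical sketch, assuming the Flink 1.1.x API: configure the
// RocksDB state backend with fully asynchronous snapshots instead of
// the default "semi async" mode. The checkpoint URI is a placeholder.
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FullyAsyncBackendExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        RocksDBStateBackend backend =
            new RocksDBStateBackend("hdfs:///flink/checkpoints");
        // Avoid the "semi async" default that the segfaults and the
        // 1.2 savepoint-compatibility concern relate to.
        backend.enableFullyAsyncSnapshots();

        env.setStateBackend(backend);
    }
}
```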
>>
>> Greetings,
>> Stephan
>>
>> On Fri, Oct 7, 2016 at 8:16 PM, Maximilian Michels <mx...@apache.org> wrote:
>>
>>> -1 overall (see below)
>>>
>>> +1 for:
>>>
>>> - scanned commit history for dubious changes
>>> - ran "mvn clean install -Dhadoop.version=2.6.0 -Pinclude-yarn-tests"
>>> successfully
>>> - started cluster via "./bin/start-cluster.sh"
>>> - ran batch and streaming examples via web interface and CLI
>>> - used web interface for monitoring
>>> - ran example job with quickstart project and staging repository
>>>
>>> -1 for:
>>>
>>> I ran into an issue while playing around with the
>>> ContinuousFileMonitoringFunction:
>>>
>>> env.readFile(
>>>     new TextInputFormat(new Path("/tmp/test")),
>>>     "/tmp/test",
>>>     FileProcessingMode.PROCESS_CONTINUOUSLY,
>>>     1000,
>>>     new FilePathFilter() {
>>>         @Override
>>>         public boolean filterPath(Path filePath) {
>>>             return filePath.toString().contains("filterthis");
>>>         }
>>>     });
>>>
>>> If my editor creates a temporary file which is moved while the
>>> monitoring function retrieves its file status via
>>> FileSystem.listStatus(path), then we crash with an IOException. Here's
>>> a pull request with a workaround:
>>> https://github.com/apache/flink/pull/2610 We should probably add a
>>> filter function to our FileSystem interface.
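The race described above is generic to directory monitoring: a file returned by a listing can vanish before its status is read. The standalone sketch below (plain Java NIO, not the actual Flink fix in the pull request) shows the idea of tolerating a vanished file instead of failing.

```java
// Hypothetical illustration of the workaround's idea: when scanning a
// monitored directory, a file listed at scan time may be moved or
// deleted (e.g. an editor's temp file) before we stat it. Treat that
// as "nothing to process" rather than crashing.
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class TolerantDirectoryScan {
    /** Returns the sizes of files currently in {@code dir}, skipping
     *  any file that disappears between listing and stat. */
    static List<Long> scan(Path dir) throws IOException {
        List<Long> sizes = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                try {
                    sizes.add(Files.size(p));
                } catch (NoSuchFileException e) {
                    // File vanished mid-scan; skip it rather than fail.
                }
            }
        }
        return sizes;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("scan-test");
        Files.write(dir.resolve("a.txt"), new byte[]{1, 2, 3});
        System.out.println(scan(dir)); // sizes of the surviving files
    }
}
```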
>>>
>>> Another thing I noticed is that the filterPath(..) method has to
>>> return true to filter a path out; usually you would expect it to do
>>> the opposite, like Flink's FilterFunction. In addition, I think having
>>> to specify the file path twice is kind of awkward. Actually, the second
>>> parameter of readFile(..) overwrites the path of the InputFormat.
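To make the inverted semantics concrete, here is a small standalone contrast (the method names are mine, not Flink API): FilePathFilter-style returns true to exclude, while a FilterFunction-style predicate returns true to keep.

```java
// Hypothetical illustration of the two filter conventions discussed
// above. filterPath: true means "exclude this path" (FilePathFilter
// style). keep: true means "keep this element" (FilterFunction style).
import java.util.List;
import java.util.stream.Collectors;

public class FilterSemantics {
    /** FilePathFilter style: true means "filter this path out". */
    static boolean filterPath(String path) {
        return path.contains("filterthis");
    }

    /** FilterFunction style: true means "keep this element". */
    static boolean keep(String path) {
        return !path.contains("filterthis");
    }

    public static void main(String[] args) {
        List<String> paths = List.of("/tmp/a", "/tmp/filterthis.tmp");

        // With filterPath, the predicate must be negated to keep paths:
        List<String> kept = paths.stream()
            .filter(p -> !filterPath(p))
            .collect(Collectors.toList());

        System.out.println(kept); // only the non-filtered path remains
    }
}
```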
>>>
>>> I would still give a +1 once we have merged the pull request.
>>>
>>> -Max
>>>
>>>
>>> On Fri, Oct 7, 2016 at 4:49 PM, Kostas Kloudas
>>> <k....@data-artisans.com> wrote:
>>> > Hi all,
>>> >
>>> > I tested the Kafka source and continuous file sources and everything
>>> > seems to be working fine.
>>> >
>>> > Kostas
>>> >
>>> >> On Oct 6, 2016, at 3:37 PM, Fabian Hueske <fh...@gmail.com> wrote:
>>> >>
>>> >> +1 to release (binding)
>>> >>
>>> >> - checked hashes and signatures
>>> >> - checked diffs against 1.1.2: no dependencies added or modified
>>> >> - successfully built Flink from source archive (Maven 3.3.3, Java
>>> >> 1.8.0_25 Oracle, OS X)
>>> >>  - mvn clean install (Scala 2.10)
>>> >>  - mvn clean install (Scala 2.11)
>>> >>  - mvn clean install -Dhadoop.profile=1 (Scala 2.10)
>>> >>
>>> >> Cheers, Fabian
>>> >>
>>> >> 2016-10-06 10:29 GMT+02:00 Ufuk Celebi <uc...@apache.org>:
>>> >>
>>> >>> Dear Flink community,
>>> >>>
>>> >>> Please vote on releasing the following candidate as Apache Flink
>>> >>> version 1.1.3.
>>> >>>
>>> >>> The commit to be voted on:
>>> >>> 3264a16 (http://git-wip-us.apache.org/repos/asf/flink/commit/3264a16)
>>> >>>
>>> >>> Branch:
>>> >>> release-1.1.3-rc1
>>> >>> (https://git1-us-west.apache.org/repos/asf/flink/repo?p=flink.git;a=shortlog;h=refs/heads/release-1.1.3-rc1)
>>> >>>
>>> >>> The release artifacts to be voted on can be found at:
>>> >>> http://people.apache.org/~uce/flink-1.1.3-rc1/
>>> >>>
>>> >>> The release artifacts are signed with the key with fingerprint
>>> >>> 9D403309:
>>> >>> http://www.apache.org/dist/flink/KEYS
>>> >>>
>>> >>> The staging repository for this release can be found at:
>>> >>> https://repository.apache.org/content/repositories/orgapacheflink-1104
>>> >>>
>>> >>> -------------------------------------------------------------
>>> >>>
>>> >>> The voting time is at least three days. The vote passes if a majority
>>> >>> of at least three +1 PMC votes are cast.
>>> >>>
>>> >>> The vote ends on Monday, October 10th, 2016, counting the weekend as a
>>> >>> single day.
>>> >>>
>>> >>> [ ] +1 Release this package as Apache Flink 1.1.3
>>> >>> [ ] -1 Do not release this package, because ...
>>> >>>
>>> >
>>>