Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/01/23 17:15:26 UTC

[GitHub] [flink] GJL opened a new pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

GJL opened a new pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937
 
 
   ## What is the purpose of the change
   
   *This adds release notes for Flink 1.10.*
   
   
   ## Brief change log
   
     - *Add release notes for Flink 1.10*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / **no**)
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / **no**)
     - The serializers: (yes / **no** / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / **no** / don't know)
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
     - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / **no**)
     - If yes, how is the feature documented? (**not applicable** / docs / JavaDocs / not documented)
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] flinkbot commented on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot commented on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577781448
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 91aa4fb6f751281910c9e470aa780fa7abe43aa1 (Thu Jan 23 17:19:07 UTC 2020)
   
 ✅ no warnings
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
 The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370570423
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
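
For example, to pin the previous client behaviour explicitly, the policy can be set in `flink-conf.yaml` (option name as used by Flink's classloading configuration; `parent-first` is the pre-1.10 hard-coded client behaviour):

```yaml
# flink-conf.yaml -- restore the previous (hard-coded) client classloading
classloader.resolve-order: parent-first
```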
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
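
As an illustration (the values below are made-up examples, not defaults), with the following configuration the highly available artifacts would end up under `hdfs:///flink/ha/my-cluster-id`:

```yaml
# flink-conf.yaml -- example values only
high-availability.storageDir: hdfs:///flink/ha
high-availability.cluster-id: my-cluster-id
```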
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
 
 Review comment:
   ```suggestion
   The Flink CLI no longer supports the deprecated command-line options
   ```


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370252425
 
 

 ##########
 File path: docs/release-notes/flink-1.10.zh.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which was used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users who experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink,
+instead of an instance of a class as is the case with the deprecated methods.
+This in turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
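
The resolution rule described above can be illustrated with a rough, self-contained sketch (this is not Flink's actual parser, just a toy model of dot-separated parts with backtick quoting for reserved keywords):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy parser: splits "catalog.database.object" on unquoted dots and
// strips backticks, which escape reserved SQL keywords such as `raw`.
public class IdentifierSketch {
    static List<String> parse(String path) {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean quoted = false;
        for (char c : path.toCharArray()) {
            if (c == '`') {
                quoted = !quoted;              // toggle quoting, drop the backtick
            } else if (c == '.' && !quoted) {
                parts.add(current.toString()); // unquoted dot separates parts
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        parts.add(current.toString());
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(parse("mycatalog.mydb.`raw`")); // prints [mycatalog, mydb, raw]
    }
}
```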
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
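
The new contract can be sketched in a few lines of plain Java (this simplified getter is hypothetical, not Flink's `Configuration` class; note that `NumberFormatException` is a subclass of `IllegalArgumentException`):

```java
// Simplified model of the 1.10 behaviour: an unset key still yields the
// default, but a malformed value now throws instead of being swallowed.
public class TypedGetterSketch {
    static int getInteger(String rawValue, int defaultValue) {
        if (rawValue == null) {
            return defaultValue;               // key not configured: default applies
        }
        return Integer.parseInt(rawValue);     // malformed: IllegalArgumentException
    }

    public static void main(String[] args) {
        System.out.println(getInteger(null, 42));  // prints 42
        System.out.println(getInteger("7", 42));   // prints 7
        // getInteger("not-a-number", 42) now throws instead of returning 42
    }
}
```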
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
 
 Review comment:
   @tillrohrmann 


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370252187
 
 

 ##########
 File path: docs/release-notes/flink-1.10.zh.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which was used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users who experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink,
+instead of an instance of a class as is the case with the deprecated methods.
+This in turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
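
Users who relied on the implicit `fixed-delay` strategy should now configure it explicitly; the snippet below uses made-up example values (the option names are the standard restart-strategy options):

```yaml
# flink-conf.yaml -- explicit cluster-level restart strategy (example values)
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```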
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
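
Users who want the previous memory-mapped behaviour back can opt in explicitly via the renamed option (assuming their containers have sufficient memory headroom):

```yaml
# flink-conf.yaml -- restore the pre-1.10 default (use with care on YARN)
taskmanager.network.blocking-shuffle.type: auto
```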
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed along with the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty is incurred during compaction.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users who find the
+performance decline critical can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB) which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases, users could encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
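
For instance, to raise the limit (the value below is an arbitrary example; the option takes a memory-size string):

```yaml
# flink-conf.yaml -- example only; the default is 2 mb
state.backend.rocksdb.write-batch-size: 4 mb
```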
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Starting with this release, PyFlink does not support Python 2. This is because [Python 2 has
+reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
+
+
+### Connectors
+#### Kinesis Connector License Change ([FLINK-12847](https://issues.apache.org/jira/browse/FLINK-12847))
+flink-connector-kinesis is now licensed under the Apache License, Version 2.0,
+and its artifacts will be deployed to Maven central as part of the Flink
+releases. Users no longer need to build the Kinesis connector from source themselves.
+
+
+### Miscellaneous Interface Changes
+#### ExecutionConfig#getGlobalJobParameters() cannot return null anymore ([FLINK-9787](https://issues.apache.org/jira/browse/FLINK-9787))
+`ExecutionConfig#getGlobalJobParameters` has been changed to never return
+`null`. Correspondingly,
+`ExecutionConfig#setGlobalJobParameters(GlobalJobParameters)` will not accept
+`null` values anymore.
+
+#### Change of contract in MasterTriggerRestoreHook interface ([FLINK-14344](https://issues.apache.org/jira/browse/FLINK-14344))
+Implementations of `MasterTriggerRestoreHook#triggerCheckpoint(long, long,
+Executor)` must now be non-blocking. Any blocking operation should be executed
+asynchronously, e.g., using the given executor.
+
+#### Client-/ and Server-Side Separation of HA Services ([FLINK-13750](https://issues.apache.org/jira/browse/FLINK-13750))
+The `HighAvailabilityServices` have been split up into client-side
+`ClientHighAvailabilityServices` and cluster-side `HighAvailabilityServices`.
+When implementing custom high availability services, users should follow this
+separation by overriding the factory method
+`HighAvailabilityServicesFactory#createClientHAServices(Configuration)`.
+Moreover, `HighAvailabilityServices#getWebMonitorLeaderRetriever()` should no
+longer be implemented since it has been deprecated.
+
+#### Deprecation of HighAvailabilityServices#getWebMonitorLeaderElectionService() ([FLINK-13977](https://issues.apache.org/jira/browse/FLINK-13977))
+Implementations of `HighAvailabilityServices` should implement
+`HighAvailabilityServices#getClusterRestEndpointLeaderElectionService()` instead
+of `HighAvailabilityServices#getWebMonitorLeaderElectionService()`.
+
+#### Interface Change in LeaderElectionService ([FLINK-14287](https://issues.apache.org/jira/browse/FLINK-14287))
+`LeaderElectionService#confirmLeadership(UUID, String)` now takes a second
+argument: the address under which the leader will be reachable. All custom
+`LeaderElectionService` implementations need to be updated accordingly.
+
+#### Deprecation of Checkpoint Lock ([FLINK-14857](https://issues.apache.org/jira/browse/FLINK-14857))
 
 Review comment:
   @rkhachatryan 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370252187
 
 

 ##########
 File path: docs/release-notes/flink-1.10.zh.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that users might observe different behaviour in their programs, in
+which case they should explicitly configure the classloading policy to
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to exhaust one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
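The described ordering can be sketched in plain Java. This is an illustrative sketch only: the class and method names are invented, and this is not Flink's actual classpath-assembly code.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class YarnshipOrderSketch {

    // Orders classpath entries lexicographically, with resource directories
    // placed before jar files, mirroring the documented --yarnship behavior.
    static List<String> orderClasspath(List<String> resourceDirs, List<String> jarFiles) {
        List<String> result = new ArrayList<>(resourceDirs);
        result.sort(Comparator.naturalOrder());

        List<String> sortedJars = new ArrayList<>(jarFiles);
        sortedJars.sort(Comparator.naturalOrder());

        result.addAll(sortedJars);
        return result;
    }
}
```

For example, shipping directories `res-b`, `res-a` and jars `b.jar`, `a.jar` would yield the classpath order `res-a`, `res-b`, `a.jar`, `b.jar`.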
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+The methods `registerTable()`/`registerDataStream()`/`registerDataSet()` have
+been deprecated in favor of `createTemporaryView()`, which better adheres to
+the corresponding SQL term.
+
+The `scan()` method has been deprecated in favor of the `from()` method.
+
+The methods `registerTableSource()`/`registerTableSink()` have been deprecated
+in favor of `ConnectTableDescriptor#createTemporaryTable()`. Unlike the
+deprecated methods, which take a class instance, that method expects only a
+set of string properties as a description of a TableSource or TableSink. This
+in turn makes it possible to reliably store those definitions in catalogs.
+
+The method `insertInto(String path, String... pathContinued)` has been removed
+in favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a string identifier, which is parsed
+into a 3-part identifier. The parser supports quoting the identifier and
+requires that any reserved SQL keywords be escaped.
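The quoting convention can be illustrated with a minimal parser sketch. This is not Flink's actual identifier parser; the class name and exact escaping rules here are simplifying assumptions (e.g., it does not handle doubled backticks inside quoted parts).

```java
import java.util.ArrayList;
import java.util.List;

public class IdentifierParseSketch {

    // Splits a dot-separated identifier such as catalog.database.table into its
    // parts, treating backtick-quoted segments (e.g. `raw`) as literal so that
    // reserved keywords and dots inside quotes survive the split.
    static List<String> parse(String identifier) {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean quoted = false;
        for (char c : identifier.toCharArray()) {
            if (c == '`') {
                quoted = !quoted;             // toggle quoting; backticks are not kept
            } else if (c == '.' && !quoted) {
                parts.add(current.toString()); // unquoted dot ends the current part
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        parts.add(current.toString());
        return parts;
    }
}
```

Under this sketch, `myCatalog.myDb.` + backtick-quoted `raw` parses into the three parts `myCatalog`, `myDb`, `raw`.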
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw an
+`IllegalArgumentException` if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
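The behavioral difference can be sketched as follows. This is an illustrative stand-in, not Flink's actual `Configuration` class; the method shape is an assumption made for the example.

```java
public class TypedConfigSketch {

    // New behavior in the spirit of typed ConfigOptions: a typed getter fails
    // fast with IllegalArgumentException when the raw value cannot be parsed,
    // instead of silently returning the default (the pre-1.10 behavior).
    static int getInteger(java.util.Map<String, String> conf, String key, int defaultValue) {
        String raw = conf.get(key);
        if (raw == null) {
            return defaultValue;             // unset keys still fall back to the default
        }
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                    "Value '" + raw + "' for key '" + key + "' cannot be parsed as an integer", e);
        }
    }
}
```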
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (previously 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
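The new selection rule can be summarized with a small sketch. This is a deliberate simplification of the documented behavior, not Flink's actual code, and the strategy names are used only as labels.

```java
public class RestartStrategySelectionSketch {

    // Cluster-level strategy is determined only by the `restart-strategy`
    // option and whether checkpointing is enabled; setting per-strategy
    // options such as restart-strategy.fixed-delay.attempts alone no longer
    // selects a strategy.
    static String resolveStrategy(String configuredStrategy, boolean checkpointingEnabled) {
        if (configuredStrategy != null) {
            return configuredStrategy;       // an explicit setting always wins
        }
        // No explicit strategy: fall back based on checkpointing.
        return checkpointingEnabled ? "fixed-delay" : "none";
    }
}
```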
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that a minor performance penalty is
+incurred during compaction, even if a Flink job does not store state with TTL.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is now deprecated because
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, at a performance cost of less than 5%. Users that find the
+performance decline critical can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB), which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases, users could encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Beginning with this release, PyFlink does not support Python 2. This is because [Python 2 has
+reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
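A filter in the spirit of this change can be sketched as follows; this is not the reporter's actual code, just the underlying check.

```java
public class FiniteValueFilterSketch {

    // Report a metric value only if it is finite; NaN and the infinities are
    // unsupported by InfluxDB and are silently skipped.
    static boolean shouldReport(double value) {
        return Double.isFinite(value);       // false for NaN, +Inf, and -Inf
    }
}
```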
+
+
+### Connectors
+#### Kinesis Connector License Change ([FLINK-12847](https://issues.apache.org/jira/browse/FLINK-12847))
+flink-connector-kinesis is now licensed under the Apache License, Version 2.0,
+and its artifacts will be deployed to Maven central as part of the Flink
+releases. Users no longer need to build the Kinesis connector from source themselves.
+
+
+### Miscellaneous Interface Changes
+#### ExecutionConfig#getGlobalJobParameters() cannot return null anymore ([FLINK-9787](https://issues.apache.org/jira/browse/FLINK-9787))
+`ExecutionConfig#getGlobalJobParameters` has been changed to never return
+`null`. Correspondingly,
+`ExecutionConfig#setGlobalJobParameters(GlobalJobParameters)` will not accept
+`null` values anymore.
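The contract reads like a classic null-object pattern, which can be sketched as below. The class and method names mimic, but are not, Flink's actual classes; the `EMPTY` sentinel is an assumption made for illustration.

```java
public class GlobalJobParametersSketch {

    // Stand-in for ExecutionConfig.GlobalJobParameters.
    public static class GlobalJobParameters {
        public static final GlobalJobParameters EMPTY = new GlobalJobParameters();
    }

    // Initialized to a non-null sentinel so the getter can never observe null.
    private GlobalJobParameters globalJobParameters = GlobalJobParameters.EMPTY;

    public GlobalJobParameters getGlobalJobParameters() {
        return globalJobParameters;          // never null by construction
    }

    public void setGlobalJobParameters(GlobalJobParameters params) {
        if (params == null) {
            throw new NullPointerException("globalJobParameters must not be null");
        }
        this.globalJobParameters = params;
    }
}
```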
+
+#### Change of contract in MasterTriggerRestoreHook interface ([FLINK-14344](https://issues.apache.org/jira/browse/FLINK-14344))
+Implementations of `MasterTriggerRestoreHook#triggerCheckpoint(long, long,
+Executor)` must now be non-blocking. Any blocking operation should be executed
+asynchronously, e.g., using the given executor.
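The recommended pattern can be sketched with plain `java.util.concurrent` types. This is a simplified stand-in, not the real `MasterTriggerRestoreHook` interface; the method shape and return type are assumptions for the example.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;

public class NonBlockingHookSketch {

    // The hook method itself returns immediately: any potentially blocking
    // work is handed to the provided executor, and the result is delivered
    // through the returned future.
    static CompletableFuture<String> triggerCheckpoint(long checkpointId, ExecutorService executor) {
        return CompletableFuture.supplyAsync(() -> {
            // Potentially blocking work, e.g. talking to an external system,
            // runs on the executor's thread rather than the caller's.
            return "checkpoint-" + checkpointId;
        }, executor);
    }
}
```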
+
+#### Client-/ and Server-Side Separation of HA Services ([FLINK-13750](https://issues.apache.org/jira/browse/FLINK-13750))
+The `HighAvailabilityServices` have been split up into client-side
+`ClientHighAvailabilityServices` and cluster-side `HighAvailabilityServices`.
+When implementing custom high availability services, users should follow this
+separation by overriding the factory method
+`HighAvailabilityServicesFactory#createClientHAServices(Configuration)`.
+Moreover, `HighAvailabilityServices#getWebMonitorLeaderRetriever()` should no
+longer be implemented since it has been deprecated.
+
+#### Deprecation of HighAvailabilityServices#getWebMonitorLeaderElectionService() ([FLINK-13977](https://issues.apache.org/jira/browse/FLINK-13977))
+Implementations of `HighAvailabilityServices` should implement
+`HighAvailabilityServices#getClusterRestEndpointLeaderElectionService()` instead
+of `HighAvailabilityServices#getWebMonitorLeaderElectionService()`.
+
+#### Interface Change in LeaderElectionService ([FLINK-14287](https://issues.apache.org/jira/browse/FLINK-14287))
+`LeaderElectionService#confirmLeadership(UUID, String)` now takes a second
+argument: the address under which the leader will be reachable. All custom
+`LeaderElectionService` implementations need to be updated accordingly.
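An implementation update then looks roughly like the sketch below. The interface shape mimics, but is not, Flink's actual interface, and the example address is invented.

```java
import java.util.UUID;

public class LeaderElectionSketch {

    // Stand-in for the updated interface: implementations now receive both the
    // leader session id and the address under which the leader is reachable.
    public interface LeaderElectionService {
        void confirmLeadership(UUID leaderSessionId, String leaderAddress);
    }

    // A trivial implementation that records what it was told, e.g. for tests.
    public static class RecordingService implements LeaderElectionService {
        public UUID confirmedSessionId;
        public String confirmedAddress;

        @Override
        public void confirmLeadership(UUID leaderSessionId, String leaderAddress) {
            this.confirmedSessionId = leaderSessionId;
            this.confirmedAddress = leaderAddress;
        }
    }
}
```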
+
+#### Deprecation of Checkpoint Lock ([FLINK-14857](https://issues.apache.org/jira/browse/FLINK-14857))
 
 Review comment:
   @rkhachatryan 


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371268234
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that users might observe different behaviour in their programs, in
+which case they should explicitly configure the classloading policy to
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to exhaust one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
 
 Review comment:
   Changed to _present perfect_ to be consistent.


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/146364581) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636) 
   * 58c736880ee94079caa1d05e09bfa0d9a50a392b Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146369508) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4637) 
   * d95c1df6cfa284bcc91bb40337aec77a99c0decf Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/146409222) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4640) 
   * 62b54f9b26b8546dedb250788596aa9a1fe649c8 Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/146580675) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4651) 
   * 0d1f1e34e2a77e05dfd81d9d1cdd0ab1698ec1a5 Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/146584628) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4653) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370759756
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,377 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+We strongly recommend using all other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that users might observe different behaviour in their programs. In
+that case, they should explicitly set the classloading policy to
+`parent-first`, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating slots across all
+registered TMs, Flink tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
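+
+For example, to enable the spread-out strategy (the option name is taken from
+the paragraph above):
+
+```yaml
+cluster.evenly-spread-out-slots: true
+```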
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
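+
+As a sketch, with a hypothetical configuration (the option names are real, the
+values are made up):
+
+```yaml
+high-availability.storageDir: hdfs:///flink/ha
+high-availability.cluster-id: my-cluster
+```
+
+Highly available artifacts would then be stored under
+`hdfs:///flink/ha/my-cluster`.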
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fallback to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### New Task Executor Memory Model ([FLINK-13980](https://issues.apache.org/jira/browse/FLINK-13980))
+With
+[FLIP-49](https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors),
+a new memory model has been introduced for the task executor. New configuration
+options have been introduced to control the memory consumption of the task
+executor process. This affects all types of deployments: standalone, YARN,
+Mesos, and the new active Kubernetes integration. The memory model of the job
+manager process has not been changed yet but it is planned to be updated as
+well.
+
+If you try to reuse your previous Flink configuration without any adjustments,
+the new memory model can result in differently computed memory parameters for
+the JVM and, thus, performance changes.
+
+Please check the user documentation <!-- TODO: insert link --> for more details.
+
+##### Deprecation and breaking changes
+The following options have been removed and have no effect anymore:
+
+<table class="table">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 30%">Deprecated/removed config option</th>
+      <th class="text-left">Note</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>taskmanager.memory.fraction</td>
+      <td>
+        See also the description of the new option
+        <code class="highlighter-rouge">taskmanager.memory.managed.fraction</code>.
+        Note that the new option has different semantics, so the value of the
+        deprecated option usually has to be adjusted.
+      </td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.off-heap</td>
+      <td>On-heap managed memory is no longer supported</td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.preallocate</td>
+      <td>Pre-allocation is no longer supported, and managed memory is always allocated lazily</td>
+    </tr>
+  </tbody>
+</table>
+
+
+The following options, if used, are interpreted as other new options in order to
+maintain backwards compatibility where it makes sense:
+
+<table class="table">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 30%">Deprecated config option</th>
+      <th class="text-left">Interpreted as</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>taskmanager.heap.size</td>
+      <td>
+        <ul>
+          <li>taskmanager.memory.flink.size for standalone deployment</li>
+          <li>taskmanager.memory.process.size for containerized deployments</li>
+        </ul>
+      </td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.size</td>
+      <td>taskmanager.memory.managed.size</td>
+    </tr>
+    <tr>
+      <td>taskmanager.network.memory.min</td>
+      <td>taskmanager.memory.network.min</td>
+    </tr>
+    <tr>
+      <td>taskmanager.network.memory.max</td>
+      <td>taskmanager.memory.network.max</td>
+    </tr>
+    <tr>
+      <td>taskmanager.network.memory.fraction</td>
+      <td>taskmanager.memory.network.fraction</td>
+    </tr>
+  </tbody>
+</table>
+
+
+The container cut-off configuration options, `containerized.heap-cutoff-ratio`
+and `containerized.heap-cutoff-min`, no longer have any effect on task executor
+processes, but they retain their previous semantics for the JobManager process.
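+
+As a migration sketch for a standalone deployment (the 4 GB value is
+hypothetical; see the mapping table above):
+
+```yaml
+# Flink 1.9:
+# taskmanager.heap.size: 4096m
+# Flink 1.10 equivalent for standalone deployments:
+taskmanager.memory.flink.size: 4096m
+```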
+
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned, old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor
+of `ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink,
+instead of a class instance as the deprecated methods do. This in turn makes it
+possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a string identifier that will be
+parsed into a 3-part identifier. The parser supports quoting the identifier
+and requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
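+
+For example, an explicit cluster-level configuration in `flink-conf.yaml`
+(the values are illustrative):
+
+```yaml
+restart-strategy: fixed-delay
+restart-strategy.fixed-delay.attempts: 3
+restart-strategy.fixed-delay.delay: 10 s
+```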
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty is incurred during compaction.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
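+
+For example, in `flink-conf.yaml`:
+
+```yaml
+# Disable the TTL compaction filter of the RocksDB state backend
+state.backend.rocksdb.ttl.compaction.filter.enabled: false
+```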
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users who find the
+performance decline critical can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
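+
+For example, in `flink-conf.yaml`:
+
+```yaml
+# Restore the pre-1.10 behavior of storing timers on the heap
+state.backend.rocksdb.timer-service.factory: HEAP
+```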
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB) which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases, users could encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
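+
+For example, in `flink-conf.yaml` (the value is illustrative):
+
+```yaml
+state.backend.rocksdb.write-batch-size: 4 mb
+```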
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Beginning with this release, PyFlink does not support Python 2. This is because [Python 2 has
+reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
+
+
+### Connectors
+#### Kinesis Connector License Change ([FLINK-12847](https://issues.apache.org/jira/browse/FLINK-12847))
+flink-connector-kinesis is now licensed under the Apache License, Version 2.0,
+and its artifacts will be deployed to Maven central as part of the Flink
+releases. Users no longer need to build the Kinesis connector from source themselves.
 
 Review comment:
   ```suggestion
   releases. Users no longer need to build the Kinesis connector from source themselves.
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370572504
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
 
 Review comment:
   ```suggestion
   `Double.NaN`, etc. .
   ```


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370252425
 
 

 ##########
 File path: docs/release-notes/flink-1.10.zh.md
 ##########
 @@ -0,0 +1,287 @@
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
 
 Review comment:
   @tillrohrmann 


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371267353
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use all other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
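
As a minimal sketch, the pre-FLIP-6-style spreading behaviour described above can be restored with a one-line entry in `flink-conf.yaml`:

```yaml
# Spread slots across all registered TaskManagers instead of
# filling up one TaskManager before using the next.
cluster.evenly-spread-out-slots: true
```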
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
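
For illustration, with a hypothetical configuration such as the following, the artifacts would end up under the combined path:

```yaml
high-availability.storageDir: hdfs:///flink/ha      # HA_STORAGE_DIR (example path)
high-availability.cluster-id: my-cluster            # HA_CLUSTER_ID (example id)
# Highly available artifacts are then stored under:
#   hdfs:///flink/ha/my-cluster/
```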
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove these command line options.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
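
A short illustration of the escaping rule (table and field names here are hypothetical):

```sql
-- `raw` must now be quoted with backticks when used as a field name:
SELECT `raw` FROM sensor_readings;

-- An unescaped reference such as  SELECT raw FROM sensor_readings
-- would be rejected by the parser, since RAW is a reserved keyword in 1.10.
```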
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` have been
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method has been deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` have been deprecated in
+favor of `ConnectTableDescriptor#createTemporaryTable()`. Unlike the deprecated
+methods, which expected a class instance, that method expects only a set of
+string properties as a description of a TableSource or TableSink. This in turn
+makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw
+`IllegalArgumentException` if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
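
The change in getter semantics can be sketched with a small stand-in class. This is not Flink's actual `Configuration` implementation, only an illustration of the new behaviour: a present-but-unparsable value now raises an exception, while a missing key still yields the default.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the behaviour described above; not Flink code.
public class ConfigDemo {
    private final Map<String, String> values = new HashMap<>();

    public void setString(String key, String value) {
        values.put(key, value);
    }

    // Flink 1.10 semantics: a missing key returns the default, but a value
    // that cannot be parsed throws IllegalArgumentException instead of
    // silently falling back to the default (the pre-1.10 behaviour).
    public int getInteger(String key, int defaultValue) {
        String raw = values.get(key);
        if (raw == null) {
            return defaultValue;
        }
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException(
                "Value '" + raw + "' for key '" + key + "' cannot be parsed as an integer", e);
        }
    }

    public static void main(String[] args) {
        ConfigDemo conf = new ConfigDemo();
        conf.setString("rest.port", "8081");
        if (conf.getInteger("rest.port", 0) != 8081) throw new AssertionError();
        if (conf.getInteger("missing.key", 42) != 42) throw new AssertionError();

        conf.setString("rest.port", "not-a-number");
        try {
            conf.getInteger("rest.port", 0);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // 1.10 behaviour: parse failure throws instead of returning the default.
        }
    }
}
```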
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
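
Users who depended on the old behaviour can set the delay back explicitly, e.g.:

```yaml
restart-strategy: fixed-delay
# Restore the pre-1.10 default of no delay between restart attempts.
restart-strategy.fixed-delay.delay: 0 s
```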
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
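
Users who understand the memory implications and still want the memory-mapped behaviour can opt in explicitly; a sketch, using the renamed option:

```yaml
# `file` is the new default; `mmap` restores the previous memory-mapped
# behaviour, which may exceed container memory budgets on YARN and similar.
taskmanager.network.blocking-shuffle.type: mmap
```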
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
 
 Review comment:
   I cannot think of a better phrasing but I think it is acceptable in the current state. There were two options but one got removed so you are left with the other option. Feel free to propose a phrasing.


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370569411
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
 
 Review comment:
   ```suggestion
   removing class relocations, the `s3-hadoop` and `s3-presto` filesystems can only be
   ```


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] AHeise commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
AHeise commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370529327
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
 
 Review comment:
   "Other filesystems are strongly recommended to be only used as plugins as we will continue to remove relocations".
   
   Except for that 👍 


[GitHub] [flink] alpinegizmo commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
alpinegizmo commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r372013058
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,377 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### New Task Executor Memory Model ([FLINK-13980](https://issues.apache.org/jira/browse/FLINK-13980))
+With
+[FLIP-49](https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors),
+a new memory model has been introduced for the task executor. New configuration
+options have been introduced to control the memory consumption of the task
+executor process. This affects all types of deployments: standalone, YARN,
+Mesos, and the new active Kubernetes integration. The memory model of the job
+manager process has not been changed yet but it is planned to be updated as
+well.
+
+If you try to reuse your previous Flink configuration without any adjustments,
+the new memory model can result in differently computed memory parameters for
+the JVM and, thus, performance changes.
+
+Please check the user documentation <!-- TODO: insert link --> for more details.
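
As a minimal example, under the new FLIP-49 model the total memory of the task executor process can be bounded with a single option (the size below is illustrative; verify option names against the linked documentation):

```yaml
# Total memory of the TaskManager process, covering JVM heap,
# managed memory, network buffers, and JVM overhead.
taskmanager.memory.process.size: 4096m
```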
+
+##### Deprecation and breaking changes
+The following options have been removed and have no effect anymore:
+
+<table class="table">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 30%">Deprecated/removed config option</th>
+      <th class="text-left">Note</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>taskmanager.memory.fraction</td>
      <td>
        See also the description of the new option
        <code class="highlighter-rouge">taskmanager.memory.managed.fraction</code>;
        note that it has different semantics, so the value of the deprecated
        option usually has to be adjusted
      </td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.off-heap</td>
+      <td>On-heap managed memory is no longer supported</td>
 
 Review comment:
   Without having more context, I misinterpreted this entry as a typo. If you don't know that this has been a boolean that selects between two mutually exclusive options, then one can get confused. I suggest adding a few more words to make this more obvious. Perhaps something like "Support for on-heap managed memory has been removed, leaving off-heap managed memory as the only possibility." or "This configuration item is no longer meaningful, since there is no longer an alternative to using off-heap managed memory."


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370570806
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
 
 Review comment:
   ```suggestion
   Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` were
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/146364581) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636) 
   * 58c736880ee94079caa1d05e09bfa0d9a50a392b Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146369508) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4637) 
   * d95c1df6cfa284bcc91bb40337aec77a99c0decf Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/146409222) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4640) 
   * 62b54f9b26b8546dedb250788596aa9a1fe649c8 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371268575
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+We strongly recommend using other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to fill up one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resource
+directories appearing first.
+
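The ordering rule above can be sketched in plain Python. This is an illustration of the stated rule only, not Flink's actual classpath-assembly code, and the entry names are made up:

```python
# Sketch of the --yarnship classpath ordering described above:
# resource directories come first, then jar files,
# each group sorted lexicographically.
shipped = ["libB.jar", "conf/", "a.jar", "resources/"]

dirs = sorted(e for e in shipped if e.endswith("/"))
jars = sorted(e for e in shipped if not e.endswith("/"))
classpath = dirs + jars

print(classpath)  # ['conf/', 'resources/', 'a.jar', 'libB.jar']
```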
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users who experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
 
 Review comment:
   I changed it to _present perfect_ to be consistent (instead of simple past).


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] dawidwys commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
dawidwys commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370531195
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+We strongly recommend using other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to fill up one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
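The difference between the two strategies can be made concrete with a small sketch. This is plain Python, not Flink's actual slot allocator; `fill_first` models the post-FLIP-6 default and `spread_out` the behaviour enabled by `cluster.evenly-spread-out-slots: true`:

```python
# Two slot-allocation strategies, sketched. Each task manager (TM) offers
# `slots_per_tm` slots; we place `tasks` tasks and record which TM index
# each task lands on.

def fill_first(num_tms, slots_per_tm, tasks):
    # Default since FLIP-6: exhaust one TM before touching the next.
    return [t // slots_per_tm for t in range(tasks)]

def spread_out(num_tms, slots_per_tm, tasks):
    # cluster.evenly-spread-out-slots: true — round-robin across all TMs.
    return [t % num_tms for t in range(tasks)]

print(fill_first(3, 2, 4))  # [0, 0, 1, 1] — first TM fully used
print(spread_out(3, 2, 4))  # [0, 1, 2, 0] — workload spread evenly
```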
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users who experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
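For illustration (hypothetical table and field names):

```sql
-- `raw` is now reserved; escape it with backticks when used as an identifier:
SELECT `raw` FROM my_table;
```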
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSinks
 
 Review comment:
   ```suggestion
   set of string properties as a description of a TableSource or TableSink
   ```
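The migrations described in the quoted notes could be sketched as follows. This is a non-runnable Java sketch; it assumes an existing `TableEnvironment` named `tEnv` and a `Table` named `orders`, both of which are hypothetical here:

```java
// Deprecated in 1.10:
tEnv.registerTable("Orders", orders);
// Preferred replacement, matching the SQL notion of a temporary view:
tEnv.createTemporaryView("Orders", orders);

// Deprecated:
Table t = tEnv.scan("Orders");
// Preferred:
Table t2 = tEnv.from("Orders");
```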


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/146364581) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636) 
   * 58c736880ee94079caa1d05e09bfa0d9a50a392b Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146369508) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4637) 
   * d95c1df6cfa284bcc91bb40337aec77a99c0decf Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/146409222) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4640) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370250929
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+We strongly recommend using other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to fill up one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove these command line options.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
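+
+For example:
+
+```yaml
+# Revert to the pre-1.10 scheduler implementation.
+jobmanager.scheduler: legacy
+```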
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
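+
+As a rough sketch (consult the linked documentation for the exact key names),
+an indexed Kafka property such as
+
+```yaml
+connector.properties.0.key: bootstrap.servers
+connector.properties.0.value: localhost:9092
+```
+
+is flattened to
+
+```yaml
+connector.properties.bootstrap.servers: localhost:9092
+```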
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink,
+instead of a class instance as with the deprecated methods. This in turn
+makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
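+
+A migration sketch (table and catalog names are illustrative; this is not a
+complete program):
+
+```java
+// Deprecated since 1.10:
+tableEnv.registerDataStream("MyTable", stream);
+Table oldStyle = tableEnv.scan("MyTable");
+
+// Preferred from 1.10 on; the identifier may be a 3-part path,
+// with reserved keywords escaped by backticks:
+tableEnv.createTemporaryView("`my-catalog`.`my-db`.MyTable", stream);
+Table newStyle = tableEnv.from("`my-catalog`.`my-db`.MyTable");
+```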
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw an
+`IllegalArgumentException` if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
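+
+Illustratively (the option shown is arbitrary):
+
+```java
+Configuration conf = new Configuration();
+conf.setString("taskmanager.numberOfTaskSlots", "not-a-number");
+
+// Flink 1.9: returned the option's default value.
+// Flink 1.10: throws IllegalArgumentException.
+conf.getInteger(TaskManagerOptions.NUM_TASK_SLOTS);
+```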
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (previously 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
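+
+For example, to configure the `fixed-delay` strategy explicitly:
+
+```yaml
+restart-strategy: fixed-delay
+restart-strategy.fixed-delay.attempts: 3
+restart-strategy.fixed-delay.delay: 10 s
+```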
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
 
 Review comment:
   @pnowojski 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371267556
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of allocating slots evenly across all
+registered TMs, Flink tended to exhaust one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove these command line options.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
 
 Review comment:
   Fixed differently by @dawidwys 


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370250434
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of allocating slots evenly across all
+registered TMs, Flink tended to exhaust one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove these command line options.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
 
 Review comment:
   @twalthr @dawidwys 


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370571712
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of allocating slots evenly across all
+registered TMs, Flink tended to exhaust one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove these command line options.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink,
+instead of a class instance as with the deprecated methods. This in turn
+makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw an
+`IllegalArgumentException` if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (previously 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed together with the
+configuration option `taskmanager.network.credit-model`. Flink now always
+uses credit-based flow control.
 
 Review comment:
   Rephrase the last sentence; without an option to configure, and no choice to be had, it seems odd to talk about "option".


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/146364581) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636) 
   * 58c736880ee94079caa1d05e09bfa0d9a50a392b Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146369508) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4637) 
   * d95c1df6cfa284bcc91bb40337aec77a99c0decf UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370250677
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
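The practical difference between the two policies can be sketched with a tiny model (illustrative Python only, not Flink code; the class name and versions below are invented):

```python
# Minimal model of classloader delegation: each "loader" is a dict from
# class name to the implementation that loader would provide.

def resolve(name, parent, child, policy):
    """Return the implementation of `name` under the given policy."""
    if policy == "parent-first":
        order = (parent, child)   # framework-bundled classes win
    elif policy == "child-first":
        order = (child, parent)   # classes from the user jar win
    else:
        raise ValueError(f"unknown policy: {policy}")
    for loader in order:
        if name in loader:
            return loader[name]
    raise KeyError(name)  # analogous to ClassNotFoundException

parent = {"com.example.Json": "framework-bundled v1"}
child = {"com.example.Json": "user-jar v2"}

assert resolve("com.example.Json", parent, child, "parent-first") == "framework-bundled v1"
assert resolve("com.example.Json", parent, child, "child-first") == "user-jar v2"
```

A program that relied on the client's old hard-coded behaviour corresponds to the `parent-first` branch above.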
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
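The two allocation strategies can be illustrated with a simplified slot model (a hypothetical Python sketch, not the actual SlotManager logic):

```python
# Each TaskManager is modeled as a dict with used/total slot counts.

def pick_tm(tms, spread_evenly):
    """Choose the TM that receives the next slot request."""
    available = [tm for tm in tms if tm["used"] < tm["total"]]
    if spread_evenly:
        # cluster.evenly-spread-out-slots: true -- prefer the least-utilized TM
        return min(available, key=lambda tm: tm["used"] / tm["total"])
    # default behaviour: keep filling the first TM that still has free slots
    return available[0]

tms = [{"name": "tm-1", "used": 2, "total": 4},
       {"name": "tm-2", "used": 0, "total": 4}]

assert pick_tm(tms, spread_evenly=False)["name"] == "tm-1"
assert pick_tm(tms, spread_evenly=True)["name"] == "tm-2"
```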
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
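A sketch of the resulting ordering (illustrative Python; the paths are made up, and Flink's exact tie-breaking may differ):

```python
# Resource directories come first, then jar files, each group sorted
# lexicographically -- the ordering described above.

def order_classpath(entries):
    dirs = sorted(e for e in entries if not e.endswith(".jar"))
    jars = sorted(e for e in entries if e.endswith(".jar"))
    return dirs + jars

shipped = ["libs/b.jar", "conf", "libs/a.jar", "assets"]
assert order_classpath(shipped) == ["assets", "conf", "libs/a.jar", "libs/b.jar"]
```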
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink,
+instead of a class instance, as the deprecated methods did. This in
+turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
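A simplified illustration of such parsing (hypothetical Python, not the actual parser; escaped backticks inside a segment are not handled here):

```python
# Split an identifier such as `catalog.database.table` into its parts
# while honouring backtick quoting, e.g. for segments that clash with
# reserved SQL keywords.

def split_identifier(path):
    parts, current, quoted = [], [], False
    for ch in path:
        if ch == "`":
            quoted = not quoted       # toggle quoted mode
        elif ch == "." and not quoted:
            parts.append("".join(current))
            current = []
        else:
            current.append(ch)
    parts.append("".join(current))
    return parts

assert split_identifier("cat.db.tbl") == ["cat", "db", "tbl"]
assert split_identifier("cat.db.`raw`") == ["cat", "db", "raw"]
assert split_identifier("`my.catalog`.db.tbl") == ["my.catalog", "db", "tbl"]
```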
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
 
 Review comment:
   @dawidwys 


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370250337
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSinks
 
 Review comment:
   Is it really TableSink[s]?


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   <!--
   Meta data
   Hash:91aa4fb6f751281910c9e470aa780fa7abe43aa1 Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589 TriggerType:PUSH TriggerID:91aa4fb6f751281910c9e470aa780fa7abe43aa1
   Hash:91aa4fb6f751281910c9e470aa780fa7abe43aa1 Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/145805589 TriggerType:PUSH TriggerID:91aa4fb6f751281910c9e470aa780fa7abe43aa1
   Hash:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/145946502 TriggerType:PUSH TriggerID:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8
   Hash:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602 TriggerType:PUSH TriggerID:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8
   Hash:a10c5f6747feef290d530df5d649bb00b8f4affd Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/146228665 TriggerType:PUSH TriggerID:a10c5f6747feef290d530df5d649bb00b8f4affd
   Hash:a10c5f6747feef290d530df5d649bb00b8f4affd Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629 TriggerType:PUSH TriggerID:a10c5f6747feef290d530df5d649bb00b8f4affd
   -->
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   <!--
   Meta data
   Hash:91aa4fb6f751281910c9e470aa780fa7abe43aa1 Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589 TriggerType:PUSH TriggerID:91aa4fb6f751281910c9e470aa780fa7abe43aa1
   Hash:91aa4fb6f751281910c9e470aa780fa7abe43aa1 Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/145805589 TriggerType:PUSH TriggerID:91aa4fb6f751281910c9e470aa780fa7abe43aa1
   Hash:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/145946502 TriggerType:PUSH TriggerID:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8
   Hash:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602 TriggerType:PUSH TriggerID:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8
   Hash:a10c5f6747feef290d530df5d649bb00b8f4affd Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/146228665 TriggerType:PUSH TriggerID:a10c5f6747feef290d530df5d649bb00b8f4affd
   Hash:a10c5f6747feef290d530df5d649bb00b8f4affd Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629 TriggerType:PUSH TriggerID:a10c5f6747feef290d530df5d649bb00b8f4affd
   Hash:e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Status:UNKNOWN URL:TBD TriggerType:PUSH TriggerID:e8c1e57135e0686292ed4c8bde9d67793eb1fa97
   -->
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371694040
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink,
+instead of a class instance, as the deprecated methods did. This in
+turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
 
 Review comment:
   Ok, fixed.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370252746
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
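+
+For example, the following `flink-conf.yaml` entry enables the spread-out
+strategy:
+
+```yaml
+cluster.evenly-spread-out-slots: true
+```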
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
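+
+For example, with the following configuration (the values shown are purely
+illustrative), artifacts would be stored under `hdfs:///flink/ha/my-cluster`:
+
+```yaml
+high-availability.storageDir: hdfs:///flink/ha
+high-availability.cluster-id: my-cluster
+```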
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fallback to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
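+
+For reference, the fallback setting in `flink-conf.yaml` looks as follows:
+
+```yaml
+jobmanager.scheduler: legacy
+```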
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. These old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. Unlike the deprecated
+methods, which expect an instance of a class, that method expects only a set
+of string properties as a description of a TableSource or TableSink. This in
+turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier, which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier,
+and any reserved SQL keywords must be escaped.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
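+
+For example, to explicitly configure a fixed-delay strategy at the cluster
+level (the attempt count and delay values below are illustrative):
+
+```yaml
+restart-strategy: fixed-delay
+restart-strategy.fixed-delay.attempts: 3
+restart-strategy.fixed-delay.delay: 10 s
+```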
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
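+
+Users who want to retain the previous memory-mapped behavior, and whose
+containers have sufficient memory, can set the renamed option explicitly
+(assuming `mmap` remains the name of the memory-mapped variant):
+
+```yaml
+taskmanager.network.blocking-shuffle.type: mmap
+```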
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty is incurred during compaction.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
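+
+For example, in `flink-conf.yaml`:
+
+```yaml
+state.backend.rocksdb.ttl.compaction.filter.enabled: false
+```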
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users who find the
+performance decline critical can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
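+
+The corresponding `flink-conf.yaml` entry to restore heap-based timers is:
+
+```yaml
+state.backend.rocksdb.timer-service.factory: HEAP
+```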
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB) which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases, users could encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
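+
+For example, to halve the default limit (the value format follows Flink's
+memory size notation; the concrete value below is illustrative):
+
+```yaml
+state.backend.rocksdb.write-batch-size: 1mb
+```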
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Beginning with this release, PyFlink does not support Python 2. This is because [Python 2 has
+reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
+
+
+### Connectors
+#### Kinesis Connector License Change ([FLINK-12847](https://issues.apache.org/jira/browse/FLINK-12847))
+flink-connector-kinesis is now licensed under the Apache License, Version 2.0,
+and its artifacts will be deployed to Maven central as part of the Flink
+releases. Users no longer need to build the Kinesis connector from source themselves.
+
+
+### Miscellaneous Interface Changes
+#### ExecutionConfig#getGlobalJobParameters() cannot return null anymore ([FLINK-9787](https://issues.apache.org/jira/browse/FLINK-9787))
+`ExecutionConfig#getGlobalJobParameters` has been changed to never return
+`null`. Likewise,
+`ExecutionConfig#setGlobalJobParameters(GlobalJobParameters)` will not accept
+`null` values anymore.
+
+#### Change of contract in MasterTriggerRestoreHook interface ([FLINK-14344](https://issues.apache.org/jira/browse/FLINK-14344))
+Implementations of `MasterTriggerRestoreHook#triggerCheckpoint(long, long,
+Executor)` must be non-blocking now. Any blocking operation should be executed
+asynchronously, e.g., using the given executor.
+
+#### Client-/ and Server-Side Separation of HA Services ([FLINK-13750](https://issues.apache.org/jira/browse/FLINK-13750))
+The `HighAvailabilityServices` have been split up into client-side
+`ClientHighAvailabilityServices` and cluster-side `HighAvailabilityServices`.
+When implementing custom high availability services, users should follow this
+separation by overriding the factory method
+`HighAvailabilityServicesFactory#createClientHAServices(Configuration)`.
+Moreover, `HighAvailabilityServices#getWebMonitorLeaderRetriever()` should no
+longer be implemented since it has been deprecated.
+
+#### Deprecation of HighAvailabilityServices#getWebMonitorLeaderElectionService() ([FLINK-13977](https://issues.apache.org/jira/browse/FLINK-13977))
+Implementations of `HighAvailabilityServices` should implement
+`HighAvailabilityServices#getClusterRestEndpointLeaderElectionService()` instead
+of `HighAvailabilityServices#getWebMonitorLeaderElectionService()`.
+
+#### Interface Change in LeaderElectionService ([FLINK-14287](https://issues.apache.org/jira/browse/FLINK-14287))
+`LeaderElectionService#confirmLeadership(UUID, String)` now takes a second
+argument, which is the address under which the leader will be
+reachable. All custom `LeaderElectionService` implementations will need to be
+updated accordingly.
+
+#### Deprecation of Checkpoint Lock ([FLINK-14857](https://issues.apache.org/jira/browse/FLINK-14857))
 
 Review comment:
   @rkhachatryan 


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370569295
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
 
 Review comment:
   ```suggestion
   relocations turned out to be problematic in combination with custom
   ```
   ?


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370570515
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
 
 Review comment:
   ```suggestion
   All Flink users are advised to remove this command-line option.
   ```


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371271225
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
 
 Review comment:
   Fixed but without the backticks. If you insist then we should add backticks in the beginning of the paragraph.


[GitHub] [flink] GJL edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-578044737
 
 
   To all mentioned devs: please proof read the sections relevant for you.


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370571041
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that users might observe different behaviour in their programs; in
+that case, they should explicitly configure the classloading policy to
+`parent-first`, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All high-availability artifacts stored by Flink are now located under
+`HA_STORAGE_DIR/HA_CLUSTER_ID`, where `HA_STORAGE_DIR` is configured via
+`high-availability.storageDir` and `HA_CLUSTER_ID` via
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command-line option, resource directories and JAR
+files are added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command-line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove them from their CLI invocations.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` have been
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method has been deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
 
 Review comment:
   ```suggestion
   `ConnectTableDescriptor#createTemporaryTable()`. This method expects only a
   ```


[GitHub] [flink] dawidwys commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
dawidwys commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370531467
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
 
 Review comment:
   ```suggestion
   `ConnectTableDescriptor#createTemporaryTable()`. The `ConnectTableDescriptor` approach expects only a
   ```


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370252275
 
 

 ##########
 File path: docs/release-notes/flink-1.10.zh.md
 ##########
 @@ -0,0 +1,287 @@
+Methods `registerTableSource()`/`registerTableSink()` have been deprecated in
+favor of `ConnectTableDescriptor#createTemporaryTable()`. This method expects
+only a set of string properties describing a `TableSource` or `TableSink`,
+rather than a class instance as the deprecated methods did. This in turn
+makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a string identifier that is parsed
+into a 3-part identifier (catalog, database, object name). The parser supports
+quoting identifiers with backticks; any reserved SQL keywords must be escaped.
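The quoting and splitting behavior described above can be sketched as a tiny parser. This is purely an illustration; the function name and logic are invented and simplified, and Flink's actual parser (used by `createTemporaryView()` and related methods) handles more cases, such as escaped backticks:

```python
def parse_identifier(path):
    """Split a dotted object path into its parts, honoring backticks.

    Illustrative sketch only: a backtick-quoted segment may contain
    dots and reserved keywords; unquoted dots separate the parts.
    """
    parts, buf, quoted = [], [], False
    for ch in path:
        if ch == "`":
            quoted = not quoted  # toggle quoted mode, drop the backtick
        elif ch == "." and not quoted:
            parts.append("".join(buf))  # an unquoted dot ends a part
            buf = []
        else:
            buf.append(ch)
    parts.append("".join(buf))
    return parts
```

For instance, a field escaped as a backtick-quoted `raw` parses to the plain name `raw` once the backticks are stripped, which is how the reserved-keyword escaping above works out in practice.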
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw an
+`IllegalArgumentException` if the configured value cannot be parsed into the
+required type. In previous Flink releases, the default value was returned in
+such cases.
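The behavioral change can be sketched as follows. The class and method names here are invented for illustration; Flink's `Configuration` is a Java class and throws `IllegalArgumentException`, for which Python's `ValueError` stands in:

```python
class StrictConfig:
    """Toy model of the Flink 1.10 Configuration getter behavior."""

    def __init__(self, raw):
        self._raw = raw  # str -> str mapping, like flink-conf.yaml entries

    def get_int(self, key, default):
        if key not in self._raw:
            return default  # a missing key still yields the default
        value = self._raw[key]
        try:
            return int(value)
        except ValueError:
            # Flink 1.9 silently returned `default` here; 1.10 raises.
            raise ValueError(
                f"Could not parse value '{value}' for key '{key}' as an int"
            )
```

A job that previously ran with a mistyped value (and silently used the default) now fails fast at configuration time instead.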
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised from 0 s to 1 s.
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
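For example, selecting a cluster-level fixed-delay strategy now requires setting `restart-strategy` explicitly in `flink-conf.yaml`; the attempt count and delay below are illustrative values, not recommendations:

```yaml
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```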
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that a minor performance penalty is
+incurred during compaction even if a Flink job does not store state with TTL.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
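The cleanup mechanism can be modeled with a toy sketch. The function name and data shapes are invented for illustration; the real RocksDB compaction filter operates on serialized state bytes inside RocksDB, not on Python tuples:

```python
def ttl_compaction_filter(entries, ttl_seconds, now):
    """Drop entries whose timestamp is older than the TTL.

    Toy model of background cleanup during a compaction pass: each
    entry is a (key, value, timestamp) tuple, and expired entries are
    filtered out instead of being rewritten to the new SST file.
    """
    return [
        (key, value, ts)
        for (key, value, ts) in entries
        if now - ts < ttl_seconds  # keep only unexpired entries
    ]
```

The per-entry check during compaction is the kind of work behind the minor overhead mentioned above, which is why the filter can be disabled when no state uses TTL.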
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, at a performance cost of less than 5%. Users who find the
+performance decline critical can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB) which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases, users could encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Beginning with this release, PyFlink no longer supports Python 2. This is
+because [Python 2 reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
+
+
+### Connectors
+#### Kinesis Connector License Change ([FLINK-12847](https://issues.apache.org/jira/browse/FLINK-12847))
+flink-connector-kinesis is now licensed under the Apache License, Version 2.0,
+and its artifacts will be deployed to Maven central as part of the Flink
+releases. Users no longer need to build the Kinesis connector from source themselves.
+
+
+### Miscellaneous Interface Changes
+#### ExecutionConfig#getGlobalJobParameters() cannot return null anymore ([FLINK-9787](https://issues.apache.org/jira/browse/FLINK-9787))
+`ExecutionConfig#getGlobalJobParameters` has been changed to never return
+`null`. Correspondingly,
+`ExecutionConfig#setGlobalJobParameters(GlobalJobParameters)` no longer
+accepts `null` values.
+
+#### Change of contract in MasterTriggerRestoreHook interface ([FLINK-14344](https://issues.apache.org/jira/browse/FLINK-14344))
+Implementations of `MasterTriggerRestoreHook#triggerCheckpoint(long, long,
+Executor)` must now be non-blocking. Any blocking operation should be executed
+asynchronously, e.g., using the given executor.
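The required pattern can be sketched as follows. The function names are invented and do not correspond to Flink's Java interface, but the shape is the same: hand the blocking work to the supplied executor and return a future immediately instead of waiting:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def trigger_checkpoint(executor, blocking_snapshot):
    """Non-blocking trigger: offload the blocking call, return a future."""
    return executor.submit(blocking_snapshot)  # returns without waiting

def slow_snapshot():
    time.sleep(0.1)  # stand-in for a blocking external call
    return "handle-42"
```

The caller gets control back right away and can collect the snapshot handle from the future once the external call completes.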
+
+#### Client-/ and Server-Side Separation of HA Services ([FLINK-13750](https://issues.apache.org/jira/browse/FLINK-13750))
 
 Review comment:
   @tillrohrmann 


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370570232
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
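For example, a minimal `flink-conf.yaml` fragment opting into the spreading strategy might look like this (a sketch based on the option named above):

```yaml
# Restore a pre-FLIP-6-like strategy: spread slots across all registered
# TaskManagers instead of exhausting one TaskManager at a time.
cluster.evenly-spread-out-slots: true
```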
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
 
 Review comment:
   ```suggestion
   When using the `--yarnship` command-line option, resource directories and jar
   ```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/146364581) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636) 
   * 58c736880ee94079caa1d05e09bfa0d9a50a392b Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146369508) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4637) 
   * d95c1df6cfa284bcc91bb40337aec77a99c0decf Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/146409222) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4640) 
   * 62b54f9b26b8546dedb250788596aa9a1fe649c8 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/146580675) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4651) 
   * 0d1f1e34e2a77e05dfd81d9d1cdd0ab1698ec1a5 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] GJL commented on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-578044737
 
 
  To all mentioned devs, please proofread.


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370782688
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,377 @@
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
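One reading of the ordering rule described above can be sketched as follows (an illustration only, not Flink's actual implementation; all names and paths are hypothetical):

```python
# Sketch of the described classpath ordering: resource directories first,
# then jar files, each group in lexicographical order.
def order_classpath(entries):
    dirs = sorted(e for e in entries if not e.endswith(".jar"))
    jars = sorted(e for e in entries if e.endswith(".jar"))
    return dirs + jars

shipped = ["lib/b.jar", "conf", "lib/a.jar", "resources"]
print(order_classpath(shipped))
# -> ['conf', 'resources', 'lib/a.jar', 'lib/b.jar']
```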
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### New Task Executor Memory Model ([FLINK-13980](https://issues.apache.org/jira/browse/FLINK-13980))
 
 Review comment:
   @azagrebin 


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371270368
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
 
 Review comment:
   See my other comment.


[GitHub] [flink] pnowojski commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
pnowojski commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370617530
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties describing a TableSource or TableSink, instead of a
+class instance as with the deprecated methods. This in turn makes it possible
+to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
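The 3-part identifier handling can be illustrated with a simplified sketch. This only mimics the behaviour described above; it is not Flink's actual parser, and it ignores edge cases such as escaped backticks inside quotes:

```python
# Simplified sketch: split a dot-separated, possibly backtick-quoted
# identifier into its (catalog, database, object) parts.
def split_identifier(path):
    parts, buf, quoted = [], "", False
    for ch in path:
        if ch == "`":
            quoted = not quoted        # toggle quoting; `my.table` stays one token
        elif ch == "." and not quoted:
            parts.append(buf)
            buf = ""
        else:
            buf += ch
    parts.append(buf)
    return parts

print(split_identifier("cat.db.`my.table`"))
# -> ['cat', 'db', 'my.table']
```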
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw an
+`IllegalArgumentException` if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
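The behavioural change can be sketched as follows (an illustration in Python, not Flink's actual `Configuration` class; the Java code throws `IllegalArgumentException` where this sketch raises `ValueError`):

```python
# Sketch: the old behaviour returned the default when a value could not be
# parsed; the new behaviour raises instead of silently falling back.
def get_int(config, key, default):
    raw = config.get(key)
    if raw is None:
        return default          # unset keys still yield the default
    try:
        return int(raw)
    except ValueError:
        # new behaviour: an unparsable value is an error, not the default
        raise ValueError(f"Could not parse value '{raw}' for key '{key}'.")

print(get_int({"parallelism": "4"}, "parallelism", 1))
# -> 4
```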
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
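Under the new rules, selecting a strategy therefore requires setting `restart-strategy` explicitly, for example (a sketch using the option names mentioned above; the values are illustrative):

```yaml
# The strategy is now determined only by `restart-strategy`; setting the
# attempts/delay options alone no longer selects fixed-delay implicitly.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```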
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
 
 Review comment:
   I would either rephrase it:
   ```
   The reason is that TaskManagers running on YARN with `auto`, could easily exceed 
   the memory budget of their container, due to incorrectly accounted memory-mapped 
   files memory usage.
   ```
   ```such as YARN``` is a bit too broad, as this problem is probably just about YARN, as other cluster managers (at least those that we checked) are more clever when it comes to mmapped files.


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] AHeise commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
AHeise commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371792616
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
 
 Review comment:
   For blog post I proposed: `s3-hadoop and s3-presto filesystems do no longer use relocations and need to be loaded through plugins, but now seamlessly integrate with all credential providers.` maybe that is more on point? wdyt? @GJL 
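
   To make the practical consequence concrete: as of 1.10 the s3 filesystem jars can no longer sit on the main classpath and must live in their own subdirectory under `plugins/`. A minimal sketch of the layout (the demo uses a temp dir and a placeholder jar; directory and version names are illustrative):

   ```shell
   # Simulate a Flink distribution in a temp dir (stand-in for a real install).
   FLINK_HOME="$(mktemp -d)"
   mkdir -p "$FLINK_HOME/opt" "$FLINK_HOME/plugins/s3-fs-hadoop"
   touch "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.10.0.jar"   # placeholder jar

   # The actual migration step: the filesystem jar moves from opt/ into its
   # own subdirectory under plugins/, one directory per plugin.
   cp "$FLINK_HOME"/opt/flink-s3-fs-hadoop-*.jar "$FLINK_HOME/plugins/s3-fs-hadoop/"
   ls "$FLINK_HOME/plugins/s3-fs-hadoop"
   ```

   Each plugin directory gets its own class loader, which is why the AWS SDK relocation is no longer needed.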


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370253396
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended that other filesystems also be used only as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
 
 Review comment:
   @tillrohrmann 
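
   As a concrete illustration of the setting discussed in this section (option name taken verbatim from the quoted notes):

   ```yaml
   # flink-conf.yaml (fragment)
   # Spread slots across all registered TaskManagers instead of exhausting one
   # TM before using the next -- approximates the pre-FLIP-6 behaviour.
   cluster.evenly-spread-out-slots: true
   ```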


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371268234
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended that other filesystems also be used only as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, the strategy tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned, old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
 
 Review comment:
   I changed it to _present perfect_ to be consistent (instead of simple past).


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370572257
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended that other filesystems also be used only as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, the strategy tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned, old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. In contrast to the deprecated
+methods, which expect class instances, that method expects only a set of
+string properties as a description of a TableSource or TableSink. This in
+turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw
+an `IllegalArgumentException` if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been increased from 0 s to 1 s.
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is now activated by default for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty is incurred during compaction.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users that find the
+performance decline critical, can set
 
 Review comment:
   ```suggestion
   performance decline critical can set
   ```
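
   On the neighbouring TTL note: the quoted text already names the escape hatch for users who see RocksDB compaction slow down. As a `flink-conf.yaml` fragment (option name taken from the quoted notes):

   ```yaml
   # flink-conf.yaml (fragment)
   # Disable the TTL compaction filter if RocksDB compaction shows a noticeable
   # slowdown; state with TTL then relies on the other cleanup strategies.
   state.backend.rocksdb.ttl.compaction.filter.enabled: false
   ```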


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370652275
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended that other filesystems also be used only as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, the strategy tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned, old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. In contrast to the deprecated
+methods, that method expects only a set of string properties as a description
+of a TableSource or TableSink instead of an instance of a class. This in turn
+makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
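
A minimal migration sketch for these renames; the `tEnv` and `clickStream` variables and all table names are illustrative assumptions, not from the release notes:

```java
import org.apache.flink.table.api.Table;

// Before: tEnv.registerDataStream("clicks", clickStream);
tEnv.createTemporaryView("clicks", clickStream);

// Before: Table t = tEnv.scan("clicks");
Table t = tEnv.from("clicks");

// Identifiers are parsed into 3-part paths; reserved keywords must be escaped:
Table r = tEnv.from("`my-catalog`.`my-db`.`raw`");

// Before: t.insertInto("my-catalog", "my-db", "sink");  // multi-part variant removed
t.insertInto("`my-catalog`.`my-db`.sink");
```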
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
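
To avoid relying on the derived default, the restart strategy can be pinned down explicitly in `flink-conf.yaml` (the values below are illustrative):

```yaml
# flink-conf.yaml — configure the cluster-level restart strategy explicitly
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 1 s   # note: the default delay was raised from 0 s to 1 s
```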
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
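
Deployments without container memory limits (e.g., standalone clusters) can opt back into memory-mapped subpartitions; a sketch, assuming `mmap` is the desired mode:

```yaml
# flink-conf.yaml — opt back into memory-mapped blocking shuffle
taskmanager.network.blocking-shuffle.type: mmap
```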
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty during compaction is incurred.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
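
The opt-out described above is a single entry in `flink-conf.yaml`:

```yaml
# flink-conf.yaml — disable the RocksDB TTL compaction filter
state.backend.rocksdb.ttl.compaction.filter.enabled: false
```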
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users who find the
+performance decline critical can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
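
Restoring the heap-based timer store is again a one-line config change:

```yaml
# flink-conf.yaml — restore the pre-1.10 heap-based timer store
state.backend.rocksdb.timer-service.factory: HEAP
```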
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB) which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases users may encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
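
A sketch of raising the flush threshold (the value is illustrative; the default is 2 MB):

```yaml
# flink-conf.yaml — raise the RocksDB write-batch flush threshold
state.backend.rocksdb.write-batch-size: 4 mb
```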
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Beginning with this release, PyFlink does not support Python 2. This is because [Python 2 has
+reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
 
 Review comment:
   Won't fix: https://english.stackexchange.com/questions/8382/when-etc-is-at-the-end-of-a-phrase-do-you-place-a-period-after-it

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370760801
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,377 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+We strongly recommend using other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
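
The pre-FLIP-6-style spreading strategy mentioned above is enabled as follows:

```yaml
# flink-conf.yaml — spread tasks across all available TaskManagers
cluster.evenly-spread-out-slots: true
```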
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### New Task Executor Memory Model ([FLINK-13980](https://issues.apache.org/jira/browse/FLINK-13980))
+With
+[FLIP-49](https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors),
+a new memory model has been introduced for the task executor. New configuration
+options have been introduced to control the memory consumption of the task
+executor process. This affects all types of deployments: standalone, YARN,
+Mesos, and the new active Kubernetes integration. The memory model of the job
+manager process has not been changed yet but it is planned to be updated as
+well.
+
+If you try to reuse your previous Flink configuration without any adjustments,
+the new memory model can result in differently computed memory parameters for
+the JVM and, thus, performance changes.
+
+Please check the user documentation <!-- TODO: insert link --> for more details.
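
A hedged sketch of how the new unified options might be set in `flink-conf.yaml`; the values are illustrative assumptions, not recommendations:

```yaml
# flink-conf.yaml — FLIP-49 task executor memory (set one of the two "total" options)
taskmanager.memory.process.size: 4096m   # total process memory, e.g. for containerized deployments
# taskmanager.memory.flink.size: 3072m   # total Flink memory, e.g. for standalone deployments
```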
+
+##### Deprecation and breaking changes
+The following options have been removed and have no effect anymore:
+
+<table class="table">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 30%">Deprecated/removed config option</th>
+      <th class="text-left">Note</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>taskmanager.memory.fraction</td>
+      <td>
+        Check the description of the new option
+        <code class="highlighter-rouge">taskmanager.memory.managed.fraction</code>.
+        Note that it has different semantics, so the value of the deprecated
+        option usually has to be adjusted
+      </td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.off-heap</td>
+      <td>On-heap managed memory is no longer supported</td>
 
 Review comment:
   on heap managed memory. Hence, there is no need for this config option anymore.


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370572010
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+We strongly recommend using other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is a reserved keyword now and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. In contrast to the deprecated
+methods, that method expects only a set of string properties as a description
+of a TableSource or TableSink instead of an instance of a class. This in turn
+makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty during compaction incurs.
 
 Review comment:
   ```suggestion
   store state with TTL, a minor performance penalty during compaction is incurred.
   ```


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/146364581) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636) 
   * 58c736880ee94079caa1d05e09bfa0d9a50a392b Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146369508) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4637) 
   * d95c1df6cfa284bcc91bb40337aec77a99c0decf Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/146409222) Azure: [PENDING](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4640) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   <!--
   Meta data
   Hash:91aa4fb6f751281910c9e470aa780fa7abe43aa1 Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589 TriggerType:PUSH TriggerID:91aa4fb6f751281910c9e470aa780fa7abe43aa1
   Hash:91aa4fb6f751281910c9e470aa780fa7abe43aa1 Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/145805589 TriggerType:PUSH TriggerID:91aa4fb6f751281910c9e470aa780fa7abe43aa1
   Hash:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Status:UNKNOWN URL:TBD TriggerType:PUSH TriggerID:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8
   -->
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370755693
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that users might observe different behaviour in their programs. In
+that case, the classloading policy should be configured explicitly to
+`parent-first`, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
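+
+For example, a minimal `flink-conf.yaml` fragment enabling this behaviour
+(all other options left at their defaults):
+
+```yaml
+# Spread slots across all registered TaskManagers instead of
+# filling up one TaskManager at a time.
+cluster.evenly-spread-out-slots: true
+```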
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line option
+`-yn`/`--yarncontainer`, which was used to specify the number of containers to
+start on YARN. This option had been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fallback to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
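+
+In `flink-conf.yaml`, this fallback looks as follows:
+
+```yaml
+# Revert to the pre-1.10 scheduler implementation.
+jobmanager.scheduler: legacy
+```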
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is a reserved keyword now and must be escaped with
+backticks when used as a SQL field or function name.
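+
+For example, a (hypothetical) column named `raw` now needs quoting in SQL:
+
+```sql
+-- `raw` is reserved as of Flink 1.10 and must be escaped with backticks
+SELECT `raw` FROM some_table;
+```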
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. Unlike the deprecated methods,
+which expected an instance of a class, that method expects only a set of string
+properties as a description of a TableSource or TableSink. This in turn makes
+it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty is incurred during compaction.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
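+
+For example, in `flink-conf.yaml`:
+
+```yaml
+# Disable the TTL compaction filter if RocksDB compaction
+# slows down noticeably.
+state.backend.rocksdb.ttl.compaction.filter.enabled: false
+```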
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users who find the
+performance decline critical can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB) which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases, users could encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
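+
+A sketch of such a tuning entry (the value below is only an illustrative
+choice, not a recommendation):
+
+```yaml
+# Flush RocksDB's WriteBatch once roughly this much memory is consumed
+# (default: 2 mb).
+state.backend.rocksdb.write-batch-size: 4 mb
+```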
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Beginning with this release, PyFlink does not support Python 2. This is because [Python 2 has
+reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
+
+
+### Connectors
+#### Kinesis Connector License Change ([FLINK-12847](https://issues.apache.org/jira/browse/FLINK-12847))
+flink-connector-kinesis is now licensed under the Apache License, Version 2.0,
+and its artifacts will be deployed to Maven central as part of the Flink
+releases. Users no longer need to build the Kinesis connector from source themselves.
+
+
+### Miscellaneous Interface Changes
+#### ExecutionConfig#getGlobalJobParameters() cannot return null anymore ([FLINK-9787](https://issues.apache.org/jira/browse/FLINK-9787))
+`ExecutionConfig#getGlobalJobParameters` has been changed to never return
+`null`. Conversely,
+`ExecutionConfig#setGlobalJobParameters(GlobalJobParameters)` will not accept
+`null` values anymore.
+
+#### Change of contract in MasterTriggerRestoreHook interface ([FLINK-14344](https://issues.apache.org/jira/browse/FLINK-14344))
+Implementations of `MasterTriggerRestoreHook#triggerCheckpoint(long, long,
+Executor)` must be non-blocking now. Any blocking operation should be executed
+asynchronously, e.g., using the given executor.
+
+#### Client-/ and Server-Side Separation of HA Services ([FLINK-13750](https://issues.apache.org/jira/browse/FLINK-13750))
 
 Review comment:
   Looks good to me.


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   <!--
   Meta data
   Hash:91aa4fb6f751281910c9e470aa780fa7abe43aa1 Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589 TriggerType:PUSH TriggerID:91aa4fb6f751281910c9e470aa780fa7abe43aa1
   Hash:91aa4fb6f751281910c9e470aa780fa7abe43aa1 Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/145805589 TriggerType:PUSH TriggerID:91aa4fb6f751281910c9e470aa780fa7abe43aa1
   Hash:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/145946502 TriggerType:PUSH TriggerID:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8
   Hash:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602 TriggerType:PUSH TriggerID:1cd3fcd7425caff4fd8cd78a5544812270a9e8b8
   Hash:a10c5f6747feef290d530df5d649bb00b8f4affd Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/146228665 TriggerType:PUSH TriggerID:a10c5f6747feef290d530df5d649bb00b8f4affd
   Hash:a10c5f6747feef290d530df5d649bb00b8f4affd Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629 TriggerType:PUSH TriggerID:a10c5f6747feef290d530df5d649bb00b8f4affd
   Hash:e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Status:CANCELED URL:https://travis-ci.com/flink-ci/flink/builds/146364581 TriggerType:PUSH TriggerID:e8c1e57135e0686292ed4c8bde9d67793eb1fa97
   Hash:e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Status:FAILURE URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636 TriggerType:PUSH TriggerID:e8c1e57135e0686292ed4c8bde9d67793eb1fa97
   Hash:58c736880ee94079caa1d05e09bfa0d9a50a392b Status:FAILURE URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4637 TriggerType:PUSH TriggerID:58c736880ee94079caa1d05e09bfa0d9a50a392b
   Hash:58c736880ee94079caa1d05e09bfa0d9a50a392b Status:SUCCESS URL:https://travis-ci.com/flink-ci/flink/builds/146369508 TriggerType:PUSH TriggerID:58c736880ee94079caa1d05e09bfa0d9a50a392b
   Hash:d95c1df6cfa284bcc91bb40337aec77a99c0decf Status:FAILURE URL:https://travis-ci.com/flink-ci/flink/builds/146409222 TriggerType:PUSH TriggerID:d95c1df6cfa284bcc91bb40337aec77a99c0decf
   Hash:d95c1df6cfa284bcc91bb40337aec77a99c0decf Status:SUCCESS URL:https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4640 TriggerType:PUSH TriggerID:d95c1df6cfa284bcc91bb40337aec77a99c0decf
   -->
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Travis: [CANCELED](https://travis-ci.com/flink-ci/flink/builds/146364581) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636) 
   * 58c736880ee94079caa1d05e09bfa0d9a50a392b Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146369508) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4637) 
   * d95c1df6cfa284bcc91bb40337aec77a99c0decf Travis: [FAILURE](https://travis-ci.com/flink-ci/flink/builds/146409222) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4640) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370570905
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that users might observe different behaviour in their programs. In
+that case, the classloading policy should be configured explicitly to
+`parent-first`, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, Flink tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line option
+`-yn`/`--yarncontainer`, which was used to specify the number of containers to
+start on YARN. This option had been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is a reserved keyword now and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
 
 Review comment:
   ```suggestion
   Methods `registerTableSource()`/`registerTableSink()` were deprecated in favor of
   ```


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370251049
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is a reserved keyword now and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink
+instead of the class instance that the deprecated methods expected. This in
+turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (previously 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
 
 Review comment:
   @azagrebin 

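The 3-part identifier parsing mentioned in the quoted notes ("The parser supports quoting the identifier") can be sketched as follows. This is an illustrative toy parser, not Flink's: handling of escaped backticks and reserved-keyword validation is omitted, and the backtick quote character is assumed from Flink SQL's dialect.

```python
def parse_identifier(path):
    """Split a table path such as `my catalog`.db.tbl into up to three
    parts (catalog, database, object), honoring backtick quoting.
    Sketch only; not Flink's real identifier parser.
    """
    parts, buf, quoted = [], [], False
    for ch in path:
        if ch == "`":
            quoted = not quoted          # toggle quoted mode
        elif ch == "." and not quoted:
            parts.append("".join(buf))   # unquoted dot separates parts
            buf = []
        else:
            buf.append(ch)
    parts.append("".join(buf))
    if quoted or not all(parts) or len(parts) > 3:
        raise ValueError("malformed identifier: " + path)
    return parts
```

A quoted part may contain dots or spaces, which is why a quoting mechanism is needed once string paths are parsed instead of taken literally.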

[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371842311
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
 
 Review comment:
   I can change it to that. However, I don't think the comma before _"but"_ is correct [1].
   
   [1] https://www.grammarly.com/blog/comma-before-but/

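The `parent-first` versus `child-first` classloading policies discussed in this thread can be modeled with a toy resolver. This is only a sketch of the delegation order; Flink's real classloaders are Java `ClassLoader` subclasses, and the names below are illustrative.

```python
class ToyClassLoader:
    """Minimal model of classloader delegation order."""

    def __init__(self, name, classes, parent=None, policy="parent-first"):
        self.name = name          # label for this loader (e.g. "system", "user")
        self.classes = classes    # set of class names this loader can resolve
        self.parent = parent
        self.policy = policy      # "parent-first" or "child-first"

    def load(self, cls):
        # parent-first: delegate to the parent before looking locally
        if self.policy == "parent-first" and self.parent is not None:
            try:
                return self.parent.load(cls)
            except KeyError:
                pass
        if cls in self.classes:
            return (self.name, cls)
        # child-first: fall back to the parent only after the local lookup
        if self.policy == "child-first" and self.parent is not None:
            return self.parent.load(cls)
        raise KeyError(cls)
```

With `parent-first`, a class present in both loaders resolves from the parent; with `child-first`, from the child, which is why switching the client's policy can change program behavior as the note warns.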

[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370250677
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is a reserved keyword now and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSink
+instead of the class instance that the deprecated methods expected. This in
+turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
 
 Review comment:
   @dawidwys 

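The stricter getter behavior described in the quoted notes for FLINK-14493 (throw on an unparseable value instead of silently returning the default) can be sketched like this. The class and method names are illustrative Python, not Flink's Java API.

```python
class Configuration:
    """Sketch of the 1.10 getter semantics: a missing key yields the
    default, but a present-yet-unparseable value raises an error
    (pre-1.10 behavior would have returned the default)."""

    def __init__(self, values):
        self._values = dict(values)

    def get_int(self, key, default):
        if key not in self._values:
            return default            # absent key: default, as before
        raw = self._values[key]
        try:
            return int(raw)
        except (TypeError, ValueError):
            # New behavior: surface the misconfiguration instead of
            # masking it with the default value.
            raise ValueError(
                f"Could not parse value '{raw}' for key '{key}' as int")
```

The change makes configuration mistakes visible at read time rather than silently running with defaults.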

[GitHub] [flink] tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370758196
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,377 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### New Task Executor Memory Model ([FLINK-13980](https://issues.apache.org/jira/browse/FLINK-13980))
+With
+[FLIP-49](https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors),
+a new memory model has been introduced for the task executor. New configuration
+options have been introduced to control the memory consumption of the task
+executor process. This affects all types of deployments: standalone, YARN,
+Mesos, and the new active Kubernetes integration. The memory model of the job
+manager process has not been changed yet but it is planned to be updated as
+well.
+
+If you try to reuse your previous Flink configuration without any adjustments,
+the new memory model can result in differently computed memory parameters for
+the JVM and, thus, performance changes.
+
+Please check the user documentation <!-- TODO: insert link --> for more details.
+
+##### Deprecation and breaking changes
+The following options have been removed and have no effect anymore:
+
+<table class="table">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 30%">Deprecated/removed config option</th>
+      <th class="text-left">Note</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>taskmanager.memory.fraction</td>
+      <td>
+        See also the description of the new option
+        <code class="highlighter-rouge">taskmanager.memory.managed.fraction</code>.
+        Note that it has different semantics, and the value of the deprecated
+        option usually has to be adjusted.
+      </td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.off-heap</td>
+      <td>On-heap managed memory is no longer supported</td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.preallocate</td>
+      <td>Pre-allocation is no longer supported, and managed memory is always allocated lazily</td>
+    </tr>
+  </tbody>
+</table>
+
+
+The following options, if used, are interpreted as other new options in order to
+maintain backwards compatibility where it makes sense:
+
+<table class="table">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 30%">Deprecated config option</th>
+      <th class="text-left">Interpreted as</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>taskmanager.heap.size</td>
+      <td>
+        <ul>
+          <li>taskmanager.memory.flink.size for standalone deployment</li>
+          <li>taskmanager.memory.process.size for containerized deployments</li>
+        </ul>
+      </td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.size</td>
+      <td>taskmanager.memory.managed.size</td>
+    </tr>
+    <tr>
+      <td>taskmanager.network.memory.min</td>
+      <td>taskmanager.memory.network.min</td>
+    </tr>
+    <tr>
+      <td>taskmanager.network.memory.max</td>
+      <td>taskmanager.memory.network.max</td>
+    </tr>
+    <tr>
+      <td>taskmanager.network.memory.fraction</td>
+      <td>taskmanager.memory.network.fraction</td>
+    </tr>
+  </tbody>
+</table>
+
+
+The container cut-off configuration options, `containerized.heap-cutoff-ratio`
+and `containerized.heap-cutoff-min`, no longer have any effect for task
+executor processes, but they still have the same semantics for the JobManager
+process.
+
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is a reserved keyword now and be must be escaped with
 
 Review comment:
   ```suggestion
   The identifier `raw` is a reserved keyword now and must be escaped with
   ```

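The backwards-compatibility table in the quoted notes maps deprecated memory options to their new names. A minimal sketch of that translation follows; the mapping is taken from the table above, the standalone-versus-containerized split for `taskmanager.heap.size` is reduced to a boolean parameter, and the function itself is illustrative rather than Flink code.

```python
# Deprecated option -> new option, per the compatibility table.
DEPRECATED_TO_NEW = {
    "taskmanager.memory.size": "taskmanager.memory.managed.size",
    "taskmanager.network.memory.min": "taskmanager.memory.network.min",
    "taskmanager.network.memory.max": "taskmanager.memory.network.max",
    "taskmanager.network.memory.fraction": "taskmanager.memory.network.fraction",
}


def translate(option, containerized=False):
    """Return the new option name a deprecated option is interpreted as.

    Unmapped options pass through unchanged."""
    if option == "taskmanager.heap.size":
        # Interpretation depends on the deployment mode.
        return ("taskmanager.memory.process.size" if containerized
                else "taskmanager.memory.flink.size")
    return DEPRECATED_TO_NEW.get(option, option)
```

This mirrors the notes' statement that the deprecated options, if used, are interpreted as the corresponding new options for backwards compatibility.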

[GitHub] [flink] tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370755387
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,377 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
 
 Review comment:
   ```suggestion
   behaviour, where Flink tries to spread out the workload across all currently available
   ```
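
For reference, the opt-in described in the quoted paragraph is a single entry in `flink-conf.yaml` (the option name is taken verbatim from the hunk):

```yaml
# Spread slots across all currently available TaskManagers instead of
# exhausting one TaskManager before allocating from the next.
cluster.evenly-spread-out-slots: true
```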


[GitHub] [flink] zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371364467
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+Other filesystems are strongly recommended to be only used as plugins.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This does mean that users might get different behaviour in their programs, in
+which case they should configure the classloading policy explicitly to use
+`parent-first` classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order with resources
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fallback to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is a reserved keyword now and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned, old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` become
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method becomes deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` become deprecated in favor of
+`ConnectTableDescriptor#createTemporaryTable()`. That method expects only a
+set of string properties as a description of a TableSource or TableSinks
+instead of an instance of a class in case of the deprecated methods. This in
+return makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` throw
+`IllegalArgumentException` now if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
 
 Review comment:
   Flink will now always use credit-based flow control.
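
Taken together, the network- and restart-related changes in this hunk translate into `flink-conf.yaml` entries such as the following (sketch; the removed option `taskmanager.network.credit-model` must simply be deleted from existing configurations, and the fixed-delay values are illustrative):

```yaml
# Renamed from taskmanager.network.bounded-blocking-subpartition-type;
# the default changed from 'auto' to 'file' so that memory-mapped
# subpartitions cannot exceed the container memory budget on YARN.
taskmanager.network.blocking-shuffle.type: file

# The cluster-level restart strategy is now determined solely by
# 'restart-strategy' (plus whether checkpointing is enabled), so it must
# be set explicitly if fixed-delay behaviour is desired.
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 1 s
```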


[GitHub] [flink] GJL commented on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577784526
 
 
   @morsapaes


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370251253
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty during compaction incurs.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users that find the
+performance decline critical, can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
 
 Review comment:
   @carp84 @Myasuka 
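
Both escape hatches mentioned in the State section of this hunk are plain `flink-conf.yaml` switches (option names taken verbatim from the hunk):

```yaml
# Keep timers on the JVM heap instead of in RocksDB; restores the pre-1.10
# behaviour at the cost of losing asynchronous snapshots for timer state.
state.backend.rocksdb.timer-service.factory: HEAP

# Disable the TTL compaction filter if RocksDB compaction degrades noticeably.
state.backend.rocksdb.ttl.compaction.filter.enabled: false
```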


[GitHub] [flink] alpinegizmo commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
alpinegizmo commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370736220
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,377 @@
+
+
+### Memory Management
+#### New Task Executor Memory Model ([FLINK-13980](https://issues.apache.org/jira/browse/FLINK-13980))
+With
+[FLIP-49](https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors),
+a new memory model has been introduced for the task executor. New configuration
+options have been introduced to control the memory consumption of the task
+executor process. This affects all types of deployments: standalone, YARN,
+Mesos, and the new active Kubernetes integration. The memory model of the job
+manager process has not been changed yet but it is planned to be updated as
+well.
+
+If you try to reuse your previous Flink configuration without any adjustments,
+the new memory model can result in differently computed memory parameters for
+the JVM and, thus, performance changes.
+
+Please check the user documentation <!-- TODO: insert link --> for more details.
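+
+As a sketch of the new model (option names as introduced by FLIP-49; the
+values below are purely illustrative), it is often sufficient to configure
+only the total process memory and let Flink derive the remaining components:
+
+```yaml
+# Total memory of the TaskExecutor process; JVM heap, managed memory,
+# network buffers, metaspace, and overhead are derived from this value.
+taskmanager.memory.process.size: 4096m
+
+# Fraction of Flink memory used as managed memory. Note: this replaces
+# the removed taskmanager.memory.fraction, but with different semantics.
+taskmanager.memory.managed.fraction: 0.4
+```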
+
+##### Deprecation and breaking changes
+The following options have been removed and have no effect anymore:
+
+<table class="table">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 30%">Deprecated/removed config option</th>
+      <th class="text-left">Note</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>taskmanager.memory.fraction</td>
+      <td>
+        See also the description of the new option
+        <code class="highlighter-rouge">taskmanager.memory.managed.fraction</code>.
+        Note that the new option has different semantics, so the value of the
+        deprecated option usually has to be adjusted
+      </td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.off-heap</td>
+      <td>On-heap managed memory is no longer supported</td>
 
 Review comment:
   Which is it that's no longer supported: off-heap or on-heap?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r371270287
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use all other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that user programs might behave differently; in that case, the
+classloading policy should be configured explicitly to `parent-first`
+classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of spreading the slots evenly across all
+registered TMs, Flink tended to exhaust one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove these command line options.
 
 Review comment:
   We are not consistent in the flink repository:
   ```
   grep -R "command-line" **/*.md | wc -l
         96
   ```
   vs
   ```
   grep -R "command line" **/*.md | wc -l
        254
   ```
   
   Also:
   https://en.wikipedia.org/wiki/Command-line_interface
   vs
   https://tutorials.ubuntu.com/tutorial/command-line-for-beginners#0


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r372337272
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,377 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use all other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that user programs might behave differently; in that case, the
+classloading policy should be configured explicitly to `parent-first`
+classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of spreading the slots evenly across all
+registered TMs, Flink tended to exhaust one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove these command line options.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only four.
+This prevents Fenzo from holding on to a large number of expired offers
+without returning them to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### New Task Executor Memory Model ([FLINK-13980](https://issues.apache.org/jira/browse/FLINK-13980))
+With
+[FLIP-49](https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors),
+a new memory model has been introduced for the task executor. New configuration
+options have been introduced to control the memory consumption of the task
+executor process. This affects all types of deployments: standalone, YARN,
+Mesos, and the new active Kubernetes integration. The memory model of the job
+manager process has not been changed yet but it is planned to be updated as
+well.
+
+If you try to reuse your previous Flink configuration without any adjustments,
+the new memory model can result in differently computed memory parameters for
+the JVM and, thus, performance changes.
+
+Please check the user documentation <!-- TODO: insert link --> for more details.
+
+##### Deprecation and breaking changes
+The following options have been removed and have no effect anymore:
+
+<table class="table">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 30%">Deprecated/removed config option</th>
+      <th class="text-left">Note</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>taskmanager.memory.fraction</td>
+      <td>
+        See also the description of the new option
+        <code class="highlighter-rouge">taskmanager.memory.managed.fraction</code>.
+        Note that the new option has different semantics, so the value of the
+        deprecated option usually has to be adjusted
+      </td>
+    </tr>
+    <tr>
+      <td>taskmanager.memory.off-heap</td>
+      <td>On-heap managed memory is no longer supported</td>
 
 Review comment:
   fixed in 62b54f9b26b8546dedb250788596aa9a1fe649c8


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370249170
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
 
 Review comment:
   @AHeise 


[GitHub] [flink] flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#issuecomment-577792274
 
 
   ## CI report:
   
   * 91aa4fb6f751281910c9e470aa780fa7abe43aa1 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145805589) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4589) 
   * 1cd3fcd7425caff4fd8cd78a5544812270a9e8b8 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/145946502) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4602) 
   * a10c5f6747feef290d530df5d649bb00b8f4affd Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/146228665) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4629) 
   * e8c1e57135e0686292ed4c8bde9d67793eb1fa97 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/146364581) Azure: [FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4636) 
   * 58c736880ee94079caa1d05e09bfa0d9a50a392b UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370252957
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use all other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that user programs might behave differently; in that case, the
+classloading policy should be configured explicitly to `parent-first`
+classloading, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of spreading the slots evenly across all
+registered TMs, Flink tended to exhaust one TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
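+
+For example, the following `flink-conf.yaml` snippet restores the spread-out
+scheduling behaviour:
+
+```yaml
+# Spread slots across all registered TaskManagers instead of
+# filling up one TaskManager before using the next.
+cluster.evenly-spread-out-slots: true
+```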
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
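+
+For example, with the following (illustrative) settings, highly available
+artifacts would be stored under `s3://flink-ha/my-cluster/`:
+
+```yaml
+high-availability.storageDir: s3://flink-ha
+high-availability.cluster-id: my-cluster
+```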
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. These options have been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove these command line options.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only four.
+This prevents Fenzo from holding on to a large number of expired offers
+without returning them to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
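+
+For example:
+
+```yaml
+# Fall back to the pre-1.10 scheduler in case of scheduling issues.
+jobmanager.scheduler: legacy
+```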
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned, old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` have been
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method has been deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` have been deprecated in
+favor of `ConnectTableDescriptor#createTemporaryTable()`. Unlike the deprecated
+methods, which expected a class instance, that method expects only a set of
+string properties describing a TableSource or TableSink. This in turn makes it
+possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw an
+`IllegalArgumentException` if the configured value cannot be parsed into
+the required type. In previous Flink releases the default value was returned
+in such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been increased from 0 s to 1 s.
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
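+
+For example, to pin the cluster-level restart strategy explicitly instead of
+deriving it from the checkpointing settings (values are illustrative):
+
+```yaml
+restart-strategy: fixed-delay
+restart-strategy.fixed-delay.attempts: 3
+restart-strategy.fixed-delay.delay: 10 s
+```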
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
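+
+Users who are not constrained by container memory limits can opt back into
+memory-mapped subpartitions explicitly (assuming `mmap` is among the accepted
+values of the renamed option):
+
+```yaml
+taskmanager.network.blocking-shuffle.type: mmap
+```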
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty is incurred during compaction.
+Users that experience noticeable performance degradation during RocksDB
+compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
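+
+For example:
+
+```yaml
+# Disable the TTL compaction filter if RocksDB compaction slows down
+# noticeably and the job does not use state with TTL.
+state.backend.rocksdb.ttl.compaction.filter.enabled: false
+```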
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users that find the
+performance decline critical, can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB) which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases users may encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Beginning from this release, PyFlink does not support Python 2. This is because [Python 2 has
+reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`,
+`Double.NaN`, etc.
+
+
+### Connectors
+#### Kinesis Connector License Change ([FLINK-12847](https://issues.apache.org/jira/browse/FLINK-12847))
+flink-connector-kinesis is now licensed under the Apache License, Version 2.0,
+and its artifacts will be deployed to Maven central as part of the Flink
+releases. Users no longer need to build the  Kinesis connector from source themselves.
+
+
+### Miscellaneous Interface Changes
+#### ExecutionConfig#getGlobalJobParameters() cannot return null anymore ([FLINK-9787](https://issues.apache.org/jira/browse/FLINK-9787))
+`ExecutionConfig#getGlobalJobParameters` has been changed to never return
+`null`. Conversely,
+`ExecutionConfig#setGlobalJobParameters(GlobalJobParameters)` will not accept
+`null` values anymore.
+
+#### Change of contract in MasterTriggerRestoreHook interface ([FLINK-14344](https://issues.apache.org/jira/browse/FLINK-14344))
+Implementations of `MasterTriggerRestoreHook#triggerCheckpoint(long, long,
+Executor)` must be non-blocking now. Any blocking operation should be executed
+asynchronously, e.g., using the given executor.
+
+#### Client-/ and Server-Side Separation of HA Services ([FLINK-13750](https://issues.apache.org/jira/browse/FLINK-13750))
 
 Review comment:
   @tillrohrmann 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

Posted by GitBox <gi...@apache.org>.
GJL commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10
URL: https://github.com/apache/flink/pull/10937#discussion_r370252275
 
 

 ##########
 File path: docs/release-notes/flink-1.10.zh.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+It is strongly recommended to use all other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that users might observe different behaviour in their programs. In
+that case, the classloading policy should be explicitly configured to
+`parent-first`, which was the previous (hard-coded) behaviour.
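+
+For example, to pin the previous client behaviour explicitly, the policy can
+be set in `flink-conf.yaml` (a sketch assuming the standard
+`classloader.resolve-order` option):
+
+```yaml
+# Use parent-first classloading, the previous hard-coded client behaviour.
+classloader.resolve-order: parent-first
+```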
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating slots across all
+registered TMs, the scheduler tended to exhaust one TM before using another.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
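+
+A minimal `flink-conf.yaml` snippet for the pre-FLIP-6-like behaviour:
+
+```yaml
+# Spread slot allocation across all available TaskManagers.
+cluster.evenly-spread-out-slots: true
+```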
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
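+
+As an illustration, with a hypothetical configuration such as the following,
+highly available artifacts would be stored under `hdfs:///flink/ha/my-cluster`:
+
+```yaml
+# Example values; adjust to your environment.
+high-availability.storageDir: hdfs:///flink/ha
+high-availability.cluster-id: my-cluster
+```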
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line option
+`-yn`/`--yarncontainer`, which was used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fallback to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
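+
+The fallback described above, as a `flink-conf.yaml` snippet:
+
+```yaml
+# Revert to the pre-1.10 scheduler implementation.
+jobmanager.scheduler: legacy
+```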
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned, old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+The methods `registerTable()`/`registerDataStream()`/`registerDataSet()` have
+been deprecated in favor of `createTemporaryView()`, which better adheres to
+the corresponding SQL term.
+
+The `scan()` method has been deprecated in favor of the `from()` method.
+
+The methods `registerTableSource()`/`registerTableSink()` have been deprecated
+in favor of `ConnectTableDescriptor#createTemporaryTable()`. Unlike the
+deprecated methods, which expected an instance of a `TableSource` or
+`TableSink` class, that method expects only a set of string properties as a
+description. This in turn makes it possible to reliably store those
+definitions in catalogs.
+
+The method `insertInto(String path, String... pathContinued)` has been removed
+in favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+The getters of `org.apache.flink.configuration.Configuration` now throw an
+`IllegalArgumentException` if the configured value cannot be parsed into the
+required type. In previous Flink releases, the default value was returned in
+such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e., `fixed-delay`
+and `failure-rate`, has been raised to 1 s (from originally 0 s).
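+
+Users who depend on the old timing can set the delay back explicitly, e.g. for
+the `fixed-delay` strategy (the `failure-rate` strategy has an analogous
+`restart-strategy.failure-rate.delay` option):
+
+```yaml
+# Restore the pre-1.10 default of no restart delay.
+restart-strategy.fixed-delay.delay: 0 s
+```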
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
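+
+To keep using the `fixed-delay` strategy, it now has to be selected
+explicitly; a sketch with illustrative values:
+
+```yaml
+restart-strategy: fixed-delay
+restart-strategy.fixed-delay.attempts: 3
+restart-strategy.fixed-delay.delay: 10 s
+```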
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
+The config option `taskmanager.network.bounded-blocking-subpartition-type` has
+been renamed to `taskmanager.network.blocking-shuffle.type`. Moreover, the
+default value of the aforementioned config option has been changed from `auto`
+to `file`. The reason is that TaskManagers running on cluster managers, such
+as YARN, could easily exceed the memory budget of their container when
+memory-mapping large result subpartitions.
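+
+Users who understand the memory implications and still want memory-mapped
+subpartitions can opt in again (assuming `mmap` remains an accepted value of
+the renamed option):
+
+```yaml
+# Caution: may exceed container memory budgets on YARN and similar setups.
+taskmanager.network.blocking-shuffle.type: mmap
+```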
+
+#### Removal of non-credit-based Network Flow Control ([FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516))
+The non-credit-based network flow control code was removed alongside of the
+configuration option `taskmanager.network.credit-model`. Credit-based flow
+control is now the only option.
+
+#### Removal of HighAvailabilityOptions#HA_JOB_DELAY ([FLINK-13885](https://issues.apache.org/jira/browse/FLINK-13885))
+The configuration option `high-availability.job.delay` has been removed
+since it is no longer used.
+
+
+### State
+#### Enable Background Cleanup of State with TTL by default ([FLINK-14898](https://issues.apache.org/jira/browse/FLINK-14898))
+[Background cleanup of expired state with TTL]({{ site.baseurl }}/dev/stream/state/state.html#cleanup-of-expired-state)
+is activated by default now for all state backends shipped with Flink.
+Note that the RocksDB state backend implements background cleanup by employing
+a compaction filter. This has the caveat that even if a Flink job does not
+store state with TTL, a minor performance penalty is incurred during
+compaction. Users who experience noticeable performance degradation during
+RocksDB compaction can disable the TTL compaction filter by setting the config option
+`state.backend.rocksdb.ttl.compaction.filter.enabled` to `false`.
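+
+The opt-out described above, as a `flink-conf.yaml` snippet:
+
+```yaml
+# Disable the RocksDB TTL compaction filter.
+state.backend.rocksdb.ttl.compaction.filter.enabled: false
+```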
+
+#### Deprecation of StateTtlConfig#Builder#cleanupInBackground() ([FLINK-15606](https://issues.apache.org/jira/browse/FLINK-15606))
+`StateTtlConfig#Builder#cleanupInBackground()` is deprecated now because the
+background cleanup of state with TTL is already enabled by default.
+
+#### Timers are stored in RocksDB by default when using RocksDBStateBackend ([FLINK-15637](https://issues.apache.org/jira/browse/FLINK-15637))
+The default timer store has been changed from Heap to RocksDB for the RocksDB
+state backend to support asynchronous snapshots for timer state and better
+scalability, with less than 5% performance cost. Users who find the
+performance decline critical can set
+`state.backend.rocksdb.timer-service.factory` to `HEAP` in `flink-conf.yaml`
+to restore the old behavior.
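+
+Restoring the old behavior in `flink-conf.yaml`:
+
+```yaml
+# Keep timers on the JVM heap (the pre-1.10 default for the RocksDB backend).
+state.backend.rocksdb.timer-service.factory: HEAP
+```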
+
+#### Removal of StateTtlConfig#TimeCharacteristic ([FLINK-15605](https://issues.apache.org/jira/browse/FLINK-15605))
+`StateTtlConfig#TimeCharacteristic` has been removed in favor of
+`StateTtlConfig#TtlTimeCharacteristic`.
+
+#### RocksDB Upgrade ([FLINK-14483](https://issues.apache.org/jira/browse/FLINK-14483))
+We have again released our own RocksDB build (FRocksDB) which is based on
+RocksDB version 5.17.2 with several feature backports for the [Write Buffer
+Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager) to
+enable limiting RocksDB's memory usage. The decision to release our own
+RocksDB build was made because later RocksDB versions suffer from a
+[performance regression under certain
+workloads](https://github.com/facebook/rocksdb/issues/5774).
+
+#### Improved RocksDB Savepoint Recovery ([FLINK-12785](https://issues.apache.org/jira/browse/FLINK-12785))
+In previous Flink releases, users could encounter an `OutOfMemoryError` when
+restoring from a RocksDB savepoint containing large KV pairs. For that reason
+we introduced a configurable memory limit in the `RocksDBWriteBatchWrapper`
+with a default value of 2 MB. RocksDB's WriteBatch will flush before the
+consumed memory limit is reached. If needed, the limit can be tuned via the
+`state.backend.rocksdb.write-batch-size` config option in `flink-conf.yaml`.
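+
+For example, to halve the default limit (the memory-size value syntax is an
+assumption here; `flink-conf.yaml` memory sizes generally accept units such as
+`kb` and `mb`):
+
+```yaml
+state.backend.rocksdb.write-batch-size: 1 mb
+```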
+
+
+### PyFlink
+#### Python 2 Support dropped ([FLINK-14469](https://issues.apache.org/jira/browse/FLINK-14469))
+Beginning with this release, PyFlink does not support Python 2. This is because [Python 2 has
+reached end of life on January 1,
+2020](https://www.python.org/doc/sunset-python-2/), and several third-party
+projects that PyFlink depends on are also dropping Python 2 support.
+
+
+### Monitoring
+#### InfluxdbReporter skips Inf and NaN ([FLINK-12147](https://issues.apache.org/jira/browse/FLINK-12147))
+The `InfluxdbReporter` now silently skips values that are unsupported by
+InfluxDB, such as `Double.POSITIVE_INFINITY`, `Double.NEGATIVE_INFINITY`, and
+`Double.NaN`.
+
+
+### Connectors
+#### Kinesis Connector License Change ([FLINK-12847](https://issues.apache.org/jira/browse/FLINK-12847))
+`flink-connector-kinesis` is now licensed under the Apache License, Version 2.0,
+and its artifacts will be deployed to Maven Central as part of the Flink
+releases. Users no longer need to build the Kinesis connector from source
+themselves.
+
+
+### Miscellaneous Interface Changes
+#### ExecutionConfig#getGlobalJobParameters() cannot return null anymore ([FLINK-9787](https://issues.apache.org/jira/browse/FLINK-9787))
+`ExecutionConfig#getGlobalJobParameters` has been changed to never return
+`null`. Likewise,
+`ExecutionConfig#setGlobalJobParameters(GlobalJobParameters)` no longer
+accepts `null` values.
+
+#### Change of contract in MasterTriggerRestoreHook interface ([FLINK-14344](https://issues.apache.org/jira/browse/FLINK-14344))
+Implementations of `MasterTriggerRestoreHook#triggerCheckpoint(long, long,
+Executor)` must be non-blocking now. Any blocking operation should be executed
+asynchronously, e.g., using the given executor.
+
+#### Client-/ and Server-Side Separation of HA Services ([FLINK-13750](https://issues.apache.org/jira/browse/FLINK-13750))
 
 Review comment:
   @tillrohrmann 

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services