Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/01/24 12:50:33 UTC

[GitHub] [flink] pnowojski commented on a change in pull request #10937: [FLINK-15743][docs] Add release notes for Flink 1.10

URL: https://github.com/apache/flink/pull/10937#discussion_r370617530
 
 

 ##########
 File path: docs/release-notes/flink-1.10.md
 ##########
 @@ -0,0 +1,287 @@
+---
+title: "Release Notes - Flink 1.10"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.10.
+
+* This will be replaced by the TOC
+{:toc}
+
+
+### Clusters & Deployment
+#### FileSystems should be loaded via Plugin Architecture ([FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956))
+In the s3-hadoop and s3-presto filesystems, classes from external
+dependencies, such as the AWS SDK, are no longer relocated. In the past, class
+relocation turned out to be problematic in combination with custom
+implementations of the `AWSCredentialsProvider` interface. As a consequence of
+removing class relocation, the s3-hadoop and s3-presto filesystems can only be
+used as [plugins]({{ site.baseurl }}/ops/filesystems/#pluggable-file-systems).
+We strongly recommend using all other filesystems only as plugins as well.
+
+#### Flink Client respects Classloading Policy ([FLINK-13749](https://issues.apache.org/jira/browse/FLINK-13749))
+The Flink client now also respects the configured classloading policy, i.e.,
+`parent-first` or `child-first` classloading. Previously, only cluster
+components such as the job manager or task manager supported this setting.
+This means that users might observe different behaviour in their programs. In
+that case, they should explicitly configure the classloading policy to
+`parent-first`, which was the previous (hard-coded) behaviour.
+
+#### Enable spreading out Tasks evenly across all TaskManagers ([FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122))
+When [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077)
+was rolled out with Flink 1.5.0, we changed how slots are allocated
+from TaskManagers (TMs). Instead of evenly allocating the slots from all
+registered TMs, we had the tendency to exhaust a TM before using another one.
+To use a scheduling strategy that is more similar to the pre-FLIP-6
+behaviour, where Flink tries to spread out the workload across all available
+TMs, one can set `cluster.evenly-spread-out-slots: true` in the
+`flink-conf.yaml`.
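To restore the pre-FLIP-6 spreading behaviour, the option (taken from the text above) goes into `flink-conf.yaml`:

```yaml
# Spread slots evenly across all registered TaskManagers instead of
# exhausting one TaskManager before using the next.
cluster.evenly-spread-out-slots: true
```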
+
+#### Directory Structure Change for highly available Artifacts ([FLINK-13633](https://issues.apache.org/jira/browse/FLINK-13633))
+All highly available artifacts stored by Flink will now be stored under
+`HA_STORAGE_DIR/HA_CLUSTER_ID` with `HA_STORAGE_DIR` configured by
+`high-availability.storageDir` and `HA_CLUSTER_ID` configured by
+`high-availability.cluster-id`.
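As a sketch, with the following (hypothetical) values in `flink-conf.yaml`, HA artifacts would now end up under `s3://bucket/flink-ha/my-cluster-id/`:

```yaml
# Resulting layout: HA_STORAGE_DIR/HA_CLUSTER_ID
high-availability.storageDir: s3://bucket/flink-ha
high-availability.cluster-id: my-cluster-id
```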
+
+#### Resources and JARs shipped via --yarnship will be ordered in the Classpath ([FLINK-13127](https://issues.apache.org/jira/browse/FLINK-13127))
+When using the `--yarnship` command line option, resource directories and jar
+files will be added to the classpath in lexicographical order, with resource
+directories appearing first.
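As an illustration only (the class and method names below are hypothetical, not Flink API), the resulting classpath ordering can be sketched as:

```java
// Hypothetical sketch (not Flink code): resource directories come first,
// then jar files, each group sorted lexicographically.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class YarnshipOrderSketch {
    static List<String> classpathOrder(List<String> resourceDirs, List<String> jars) {
        List<String> dirs = new ArrayList<>(resourceDirs);
        List<String> files = new ArrayList<>(jars);
        Collections.sort(dirs);   // lexicographical order within each group
        Collections.sort(files);
        List<String> classpath = new ArrayList<>(dirs);
        classpath.addAll(files);  // directories precede jar files
        return classpath;
    }

    public static void main(String[] args) {
        // prints [res-a, res-b, a.jar, b.jar]
        System.out.println(classpathOrder(
                List.of("res-b", "res-a"), List.of("b.jar", "a.jar")));
    }
}
```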
+
+#### Removal of --yn/--yarncontainer Command Line Options ([FLINK-12362](https://issues.apache.org/jira/browse/FLINK-12362))
+The Flink CLI no longer supports the deprecated command line options
+`-yn/--yarncontainer`, which were used to specify the number of containers to
+start on YARN. This option has been deprecated since the introduction of
+[FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077).
+All Flink users are advised to remove this command line option.
+
+#### Mesos Integration will reject expired Offers faster ([FLINK-14029](https://issues.apache.org/jira/browse/FLINK-14029))
+Flink's Mesos integration now rejects all expired offers instead of only 4.
+This improves the situation where Fenzo holds on to a lot of expired offers
+without giving them back to the Mesos resource manager.
+
+#### Scheduler Rearchitecture ([FLINK-14651](https://issues.apache.org/jira/browse/FLINK-14651))
+Flink's scheduler was refactored with the goal of making scheduling strategies
+customizable in the future. Users that experience issues related to scheduling
+can fall back to the legacy scheduler by setting `jobmanager.scheduler: legacy`
+in their `flink-conf.yaml`.
+
+
+### Memory Management
+#### Fine Grained Operator Resource Management ([FLINK-14058](https://issues.apache.org/jira/browse/FLINK-14058))
+<!-- wip -->
+#### FLIP-49
+<!-- wip -->
+
+
+### Table API & SQL
+#### Rename of ANY Type to RAW Type ([FLINK-14904](https://issues.apache.org/jira/browse/FLINK-14904))
+The identifier `raw` is now a reserved keyword and must be escaped with
+backticks when used as a SQL field or function name.
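For example (table and column names below are hypothetical), a column named `raw` now needs backtick escaping:

```sql
-- `raw` is reserved in Flink 1.10; unescaped use would fail to parse.
SELECT `raw` FROM my_table;
```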
+
+#### Rename of Table Connector Properties ([FLINK-14649](https://issues.apache.org/jira/browse/FLINK-14649))
+Some indexed properties for table connectors have been flattened and renamed
+for a better user experience when writing DDL statements. This affects the
+Kafka Connector properties `connector.properties` and
+`connector.specific-offsets`. Furthermore, the Elasticsearch Connector
+property `connector.hosts` is affected. The aforementioned, old properties are
+deprecated and will be removed in future versions. Please consult the [Table
+Connectors documentation]({{ site.baseurl }}/dev/table/connect.html#table-connectors)
+for the new property names.
+
+#### Methods for interacting with temporary Tables & Views ([FLINK-14490](https://issues.apache.org/jira/browse/FLINK-14490))
+Methods `registerTable()`/`registerDataStream()`/`registerDataSet()` have been
+deprecated in favor of `createTemporaryView()`, which better adheres to the
+corresponding SQL term.
+
+The `scan()` method has been deprecated in favor of the `from()` method.
+
+Methods `registerTableSource()`/`registerTableSink()` have been deprecated in
+favor of `ConnectTableDescriptor#createTemporaryTable()`. That method expects
+only a set of string properties as a description of a TableSource or
+TableSink, instead of a class instance as with the deprecated methods. This in
+turn makes it possible to reliably store those definitions in catalogs.
+
+Method `insertInto(String path, String... pathContinued)` has been removed in
+favor of `insertInto(String path)`.
+
+All the newly introduced methods accept a String identifier which will be
+parsed into a 3-part identifier. The parser supports quoting the identifier.
+It also requires escaping any reserved SQL keywords.
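A minimal sketch of this parsing behaviour (hypothetical code, not Flink's actual parser): the identifier splits on dots into up to three parts, while backtick-quoted segments are kept opaque.

```java
// Hypothetical sketch (not Flink code) of splitting a string identifier
// such as "my_catalog.my_db.`select`" into its parts. Backtick quoting
// protects dots and reserved keywords inside a segment.
import java.util.ArrayList;
import java.util.List;

public class IdentifierSketch {
    static List<String> parse(String identifier) {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean quoted = false;
        for (char c : identifier.toCharArray()) {
            if (c == '`') {
                quoted = !quoted;          // toggle quoting, drop the backtick
            } else if (c == '.' && !quoted) {
                parts.add(current.toString());
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        parts.add(current.toString());
        return parts;
    }

    public static void main(String[] args) {
        // prints [my_catalog, my_db, select]
        System.out.println(parse("my_catalog.my_db.`select`"));
    }
}
```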
+
+#### Removal of ExternalCatalog API ([FLINK-13697](https://issues.apache.org/jira/browse/FLINK-13697))
+The deprecated `ExternalCatalog` API has been dropped. This includes:
+
+* `ExternalCatalog` (and all dependent classes, e.g., `ExternalTable`)
+* `SchematicDescriptor`, `MetadataDescriptor`, `StatisticsDescriptor`
+
+Users are advised to use the [new Catalog API]({{ site.baseurl }}/dev/table/catalogs.html#catalog-api).
+
+
+### Configuration
+#### Introduction of Type Information for ConfigOptions ([FLINK-14493](https://issues.apache.org/jira/browse/FLINK-14493))
+Getters of `org.apache.flink.configuration.Configuration` now throw an
+`IllegalArgumentException` if the configured value cannot be parsed into the
+required type. In previous Flink releases the default value was returned in
+such cases.
+
+#### Increase of default Restart Delay ([FLINK-13884](https://issues.apache.org/jira/browse/FLINK-13884))
+The default restart delay for all shipped restart strategies, i.e.,
+`fixed-delay` and `failure-rate`, has been increased from 0 s to 1 s.
+
+#### Simplification of Cluster-Level Restart Strategy Configuration ([FLINK-13921](https://issues.apache.org/jira/browse/FLINK-13921))
+Previously, if the user had set `restart-strategy.fixed-delay.attempts` or
+`restart-strategy.fixed-delay.delay` but had not configured the option
+`restart-strategy`, the cluster-level restart strategy would have been
+`fixed-delay`. Now the cluster-level restart strategy is only determined by
+the config option `restart-strategy` and whether checkpointing is enabled. See
+[_"Task Failure Recovery"_]({{ site.baseurl }}/dev/task_failure_recovery.html)
+for details.
+
+#### Disable memory-mapped BoundedBlockingSubpartition by default ([FLINK-14952](https://issues.apache.org/jira/browse/FLINK-14952))
 
 Review comment:
  I would rephrase it, e.g.:
   ```
   The reason is that TaskManagers running on YARN with `auto`, could easily exceed 
   the memory budget of their container, due to incorrectly accounted memory-mapped 
   files memory usage.
   ```
   ```such as YARN``` is a bit too broad, as this problem is probably just about YARN, as other cluster managers (at least those that we checked) are more clever when it comes to mmapped files.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services