Posted to commits@spark.apache.org by pr...@apache.org on 2020/09/13 04:25:37 UTC

[spark-website] branch asf-site updated: Fix formatting in release notes.

This is an automated email from the ASF dual-hosted git repository.

prashant pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new ae6991e  Fix formatting in release notes.
ae6991e is described below

commit ae6991ef625dbcb5c27f71ad45426cdca13f501f
Author: Prashant Sharma <pr...@apache.org>
AuthorDate: Sun Sep 13 04:25:24 2020 +0000

    Fix formatting in release notes.
---
 releases/_posts/2020-09-12-spark-release-2-4-7.md |  37 +++++++-
 site/releases/spark-release-2-4-7.html            | 108 ++++++++++++++--------
 2 files changed, 108 insertions(+), 37 deletions(-)
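
For context, this appears to be a plain Markdown formatting fix: in Markdown, consecutive lines with no blank line between them are joined into a single paragraph, so the release-note entries were all rendering as one <p> block. Adding a blank line after each entry (the first half of the diff) makes the generated HTML wrap each JIRA item in its own <p> element (the second half of the diff). A minimal sketch of the behavior, using hypothetical issue ids:

    [SPARK-AAAA] - First issue title
    [SPARK-BBBB] - Second issue title

renders as a single paragraph containing both lines, whereas

    [SPARK-AAAA] - First issue title

    [SPARK-BBBB] - Second issue title

renders as two separate <p> elements.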

diff --git a/releases/_posts/2020-09-12-spark-release-2-4-7.md b/releases/_posts/2020-09-12-spark-release-2-4-7.md
index 6f4234d..e98e0ce 100644
--- a/releases/_posts/2020-09-12-spark-release-2-4-7.md
+++ b/releases/_posts/2020-09-12-spark-release-2-4-7.md
@@ -15,42 +15,76 @@ Spark 2.4.7 is a maintenance release containing stability, correctness, and secu
 
 ### Notable changes
 [SPARK-28818] - FrequentItems applies an incorrect schema to the resulting dataframe when nulls are present
+
 [SPARK-31511] - Make BytesToBytesMap iterator() thread-safe
+
 [SPARK-31703] - Changes made by SPARK-26985 break reading parquet files correctly in BigEndian architectures (AIX + LinuxPPC64)
+
 [SPARK-31854] - Different results of query execution with wholestage codegen on and off
+
 [SPARK-31903] - toPandas with Arrow enabled doesn't show metrics in Query UI.
+
 [SPARK-31923] - Event log cannot be generated when some internal accumulators use unexpected types
+
 [SPARK-31935] - Hadoop file system config should be effective in data source options
+
 [SPARK-31941] - Handling the exception in SparkUI for getSparkUser method
+
 [SPARK-31967] - Loading jobs UI page takes 40 seconds
+
 [SPARK-31968] - write.partitionBy() creates duplicate subdirectories when user provides duplicate columns
+
 [SPARK-31980] - Spark sequence() fails if start and end of range are identical dates
+
 [SPARK-31997] - Should drop test_udtf table when SingleSessionSuite completed
+
 [SPARK-32000] - Fix the flaky testcase for partially launched task in barrier-mode.
+
 [SPARK-32003] - Shuffle files for lost executor are not unregistered if fetch failure occurs after executor is lost
+
 [SPARK-32024] - Disk usage tracker went negative in HistoryServerDiskManager
+
 [SPARK-32028] - App id link in history summary page point to wrong application attempt
+
 [SPARK-32034] - Port HIVE-14817: Shutdown the SessionManager timeoutChecker thread properly upon shutdown
+
 [SPARK-32044] - [SS] 2.4 Kafka continuous processing print mislead initial offsets log
+
 [SPARK-32098] - Use iloc for positional slicing instead of direct slicing in createDataFrame with Arrow
+
 [SPARK-32115] - Incorrect results for SUBSTRING when overflow
+
 [SPARK-32131] - Fix AnalysisException messages at UNION/INTERSECT/EXCEPT/MINUS operations
+
 [SPARK-32167] - nullability of GetArrayStructFields is incorrect
+
 [SPARK-32214] - The type conversion function generated in makeFromJava for "other" type uses a wrong variable.
+
 [SPARK-32238] - Use Utils.getSimpleName to avoid hitting Malformed class name in ScalaUDF
+
 [SPARK-32280] - AnalysisException thrown when query contains several JOINs
+
 [SPARK-32300] - toPandas with no partitions should work
+
 [SPARK-32344] - Unevaluable expr is set to FIRST/LAST ignoreNullsExpr in distinct aggregates
+
 [SPARK-32364] - Use CaseInsensitiveMap for DataFrameReader/Writer options
+
 [SPARK-32372] - "Resolved attribute(s) XXX missing" after dudup conflict references
+
 [SPARK-32377] - CaseInsensitiveMap should be deterministic for addition
+
 [SPARK-32609] - Incorrect exchange reuse with DataSourceV2
+
 [SPARK-32672] - Data corruption in some cached compressed boolean columns
+
 [SPARK-32693] - Compare two dataframes with same schema except nullable property
+
 [SPARK-32771] - The example of expressions.Aggregator in Javadoc / Scaladoc is wrong
+
 [SPARK-32810] - CSV/JSON data sources should avoid globbing paths when inferring schema
-[SPARK-32812] - Run tests script for Python fails in certain environments
 
+[SPARK-32812] - Run tests script for Python fails in certain environments
 
 
 ### Dependency Changes
@@ -62,3 +96,4 @@ Spark 2.4.7 is a maintenance release containing stability, correctness, and secu
 You can consult JIRA for the [detailed changes](https://s.apache.org/v2.4.7-release-notes).
 
 We would like to acknowledge all community members for contributing patches to this release.
+
diff --git a/site/releases/spark-release-2-4-7.html b/site/releases/spark-release-2-4-7.html
index 5ed2e66..b008f13 100644
--- a/site/releases/spark-release-2-4-7.html
+++ b/site/releases/spark-release-2-4-7.html
@@ -206,42 +206,77 @@
 <p>Spark 2.4.7 is a maintenance release containing stability, correctness, and security fixes. This release is based on the branch-2.4 maintenance branch of Spark. We strongly recommend all 2.4 users to upgrade to this stable release.</p>
 
 <h3 id="notable-changes">Notable changes</h3>
-<p>[SPARK-28818] - FrequentItems applies an incorrect schema to the resulting dataframe when nulls are present
-[SPARK-31511] - Make BytesToBytesMap iterator() thread-safe
-[SPARK-31703] - Changes made by SPARK-26985 break reading parquet files correctly in BigEndian architectures (AIX + LinuxPPC64)
-[SPARK-31854] - Different results of query execution with wholestage codegen on and off
-[SPARK-31903] - toPandas with Arrow enabled doesn&#8217;t show metrics in Query UI.
-[SPARK-31923] - Event log cannot be generated when some internal accumulators use unexpected types
-[SPARK-31935] - Hadoop file system config should be effective in data source options
-[SPARK-31941] - Handling the exception in SparkUI for getSparkUser method
-[SPARK-31967] - Loading jobs UI page takes 40 seconds
-[SPARK-31968] - write.partitionBy() creates duplicate subdirectories when user provides duplicate columns
-[SPARK-31980] - Spark sequence() fails if start and end of range are identical dates
-[SPARK-31997] - Should drop test_udtf table when SingleSessionSuite completed
-[SPARK-32000] - Fix the flaky testcase for partially launched task in barrier-mode.
-[SPARK-32003] - Shuffle files for lost executor are not unregistered if fetch failure occurs after executor is lost
-[SPARK-32024] - Disk usage tracker went negative in HistoryServerDiskManager
-[SPARK-32028] - App id link in history summary page point to wrong application attempt
-[SPARK-32034] - Port HIVE-14817: Shutdown the SessionManager timeoutChecker thread properly upon shutdown
-[SPARK-32044] - [SS] 2.4 Kafka continuous processing print mislead initial offsets log
-[SPARK-32098] - Use iloc for positional slicing instead of direct slicing in createDataFrame with Arrow
-[SPARK-32115] - Incorrect results for SUBSTRING when overflow
-[SPARK-32131] - Fix AnalysisException messages at UNION/INTERSECT/EXCEPT/MINUS operations
-[SPARK-32167] - nullability of GetArrayStructFields is incorrect
-[SPARK-32214] - The type conversion function generated in makeFromJava for &#8220;other&#8221; type uses a wrong variable.
-[SPARK-32238] - Use Utils.getSimpleName to avoid hitting Malformed class name in ScalaUDF
-[SPARK-32280] - AnalysisException thrown when query contains several JOINs
-[SPARK-32300] - toPandas with no partitions should work
-[SPARK-32344] - Unevaluable expr is set to FIRST/LAST ignoreNullsExpr in distinct aggregates
-[SPARK-32364] - Use CaseInsensitiveMap for DataFrameReader/Writer options
-[SPARK-32372] - &#8220;Resolved attribute(s) XXX missing&#8221; after dudup conflict references
-[SPARK-32377] - CaseInsensitiveMap should be deterministic for addition
-[SPARK-32609] - Incorrect exchange reuse with DataSourceV2
-[SPARK-32672] - Data corruption in some cached compressed boolean columns
-[SPARK-32693] - Compare two dataframes with same schema except nullable property
-[SPARK-32771] - The example of expressions.Aggregator in Javadoc / Scaladoc is wrong
-[SPARK-32810] - CSV/JSON data sources should avoid globbing paths when inferring schema
-[SPARK-32812] - Run tests script for Python fails in certain environments</p>
+<p>[SPARK-28818] - FrequentItems applies an incorrect schema to the resulting dataframe when nulls are present</p>
+
+<p>[SPARK-31511] - Make BytesToBytesMap iterator() thread-safe</p>
+
+<p>[SPARK-31703] - Changes made by SPARK-26985 break reading parquet files correctly in BigEndian architectures (AIX + LinuxPPC64)</p>
+
+<p>[SPARK-31854] - Different results of query execution with wholestage codegen on and off</p>
+
+<p>[SPARK-31903] - toPandas with Arrow enabled doesn&#8217;t show metrics in Query UI.</p>
+
+<p>[SPARK-31923] - Event log cannot be generated when some internal accumulators use unexpected types</p>
+
+<p>[SPARK-31935] - Hadoop file system config should be effective in data source options</p>
+
+<p>[SPARK-31941] - Handling the exception in SparkUI for getSparkUser method</p>
+
+<p>[SPARK-31967] - Loading jobs UI page takes 40 seconds</p>
+
+<p>[SPARK-31968] - write.partitionBy() creates duplicate subdirectories when user provides duplicate columns</p>
+
+<p>[SPARK-31980] - Spark sequence() fails if start and end of range are identical dates</p>
+
+<p>[SPARK-31997] - Should drop test_udtf table when SingleSessionSuite completed</p>
+
+<p>[SPARK-32000] - Fix the flaky testcase for partially launched task in barrier-mode.</p>
+
+<p>[SPARK-32003] - Shuffle files for lost executor are not unregistered if fetch failure occurs after executor is lost</p>
+
+<p>[SPARK-32024] - Disk usage tracker went negative in HistoryServerDiskManager</p>
+
+<p>[SPARK-32028] - App id link in history summary page point to wrong application attempt</p>
+
+<p>[SPARK-32034] - Port HIVE-14817: Shutdown the SessionManager timeoutChecker thread properly upon shutdown</p>
+
+<p>[SPARK-32044] - [SS] 2.4 Kafka continuous processing print mislead initial offsets log</p>
+
+<p>[SPARK-32098] - Use iloc for positional slicing instead of direct slicing in createDataFrame with Arrow</p>
+
+<p>[SPARK-32115] - Incorrect results for SUBSTRING when overflow</p>
+
+<p>[SPARK-32131] - Fix AnalysisException messages at UNION/INTERSECT/EXCEPT/MINUS operations</p>
+
+<p>[SPARK-32167] - nullability of GetArrayStructFields is incorrect</p>
+
+<p>[SPARK-32214] - The type conversion function generated in makeFromJava for &#8220;other&#8221; type uses a wrong variable.</p>
+
+<p>[SPARK-32238] - Use Utils.getSimpleName to avoid hitting Malformed class name in ScalaUDF</p>
+
+<p>[SPARK-32280] - AnalysisException thrown when query contains several JOINs</p>
+
+<p>[SPARK-32300] - toPandas with no partitions should work</p>
+
+<p>[SPARK-32344] - Unevaluable expr is set to FIRST/LAST ignoreNullsExpr in distinct aggregates</p>
+
+<p>[SPARK-32364] - Use CaseInsensitiveMap for DataFrameReader/Writer options</p>
+
+<p>[SPARK-32372] - &#8220;Resolved attribute(s) XXX missing&#8221; after dudup conflict references</p>
+
+<p>[SPARK-32377] - CaseInsensitiveMap should be deterministic for addition</p>
+
+<p>[SPARK-32609] - Incorrect exchange reuse with DataSourceV2</p>
+
+<p>[SPARK-32672] - Data corruption in some cached compressed boolean columns</p>
+
+<p>[SPARK-32693] - Compare two dataframes with same schema except nullable property</p>
+
+<p>[SPARK-32771] - The example of expressions.Aggregator in Javadoc / Scaladoc is wrong</p>
+
+<p>[SPARK-32810] - CSV/JSON data sources should avoid globbing paths when inferring schema</p>
+
+<p>[SPARK-32812] - Run tests script for Python fails in certain environments</p>
 
 <h3 id="dependency-changes">Dependency Changes</h3>
 
@@ -252,6 +287,7 @@
 <p>We would like to acknowledge all community members for contributing patches to this release.</p>
 
 
+
 <p>
 <br/>
 <a href="/news/">Spark News Archive</a>

