Posted to commits@spark.apache.org by gu...@apache.org on 2020/08/22 01:10:34 UTC
[spark] branch branch-3.0 updated: [MINOR][DOCS] backport PR#29443 to fix typo in doc, log messages and comments
This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.0 by this push:
new 9ccc790 [MINOR][DOCS] backport PR#29443 to fix typo in doc, log messages and comments
9ccc790 is described below
commit 9ccc79036f0c562b4b3d0e38c8fdfbb70f6e3319
Author: Brandon Jiang <br...@users.noreply.github.com>
AuthorDate: Sat Aug 22 10:08:39 2020 +0900
[MINOR][DOCS] backport PR#29443 to fix typo in doc, log messages and comments
### What changes were proposed in this pull request?
Backport PR #29443 to fix typos in docs, log messages and comments.
### Why are the changes needed?
Typo fixes to improve readability.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
Manual testing has been performed to verify the updated content.
Closes #29512 from brandonJY/branch-3.0.
Authored-by: Brandon Jiang <br...@users.noreply.github.com>
Signed-off-by: HyukjinKwon <gu...@apache.org>
---
.../src/main/java/org/apache/spark/network/util/TransportConf.java | 2 +-
core/src/main/java/org/apache/spark/api/plugin/DriverPlugin.java | 2 +-
.../scala/org/apache/spark/resource/ResourceDiscoveryScriptPlugin.scala | 2 +-
docs/job-scheduling.md | 2 +-
docs/sql-ref-syntax-qry-select-groupby.md | 2 +-
docs/sql-ref-syntax-qry-select-hints.md | 2 +-
docs/sql-ref.md | 2 +-
launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java | 2 +-
.../main/java/org/apache/spark/sql/connector/catalog/TableCatalog.java | 2 +-
.../main/scala/org/apache/spark/sql/catalyst/QueryPlanningTracker.scala | 2 +-
.../spark/sql/execution/datasources/v2/ShowTablePropertiesExec.scala | 2 +-
11 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java b/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
index 6c37f9a..646e427 100644
--- a/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
+++ b/common/network-common/src/main/java/org/apache/spark/network/util/TransportConf.java
@@ -290,7 +290,7 @@ public class TransportConf {
}
/**
- * If enabled then off-heap byte buffers will be prefered for the shared ByteBuf allocators.
+ * If enabled then off-heap byte buffers will be preferred for the shared ByteBuf allocators.
*/
public boolean preferDirectBufsForSharedByteBufAllocators() {
return conf.getBoolean("spark.network.io.preferDirectBufs", true);
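For context, the option documented by the corrected comment above is a plain boolean setting that already defaults to `true`; a minimal sketch of setting it explicitly in `spark-defaults.conf` might look like:

```
# Prefer off-heap (direct) byte buffers for the shared ByteBuf allocators
spark.network.io.preferDirectBufs  true
```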
diff --git a/core/src/main/java/org/apache/spark/api/plugin/DriverPlugin.java b/core/src/main/java/org/apache/spark/api/plugin/DriverPlugin.java
index 0c0d0df..1d676ff 100644
--- a/core/src/main/java/org/apache/spark/api/plugin/DriverPlugin.java
+++ b/core/src/main/java/org/apache/spark/api/plugin/DriverPlugin.java
@@ -41,7 +41,7 @@ public interface DriverPlugin {
* initialization.
* <p>
* It's recommended that plugins be careful about what operations are performed in this call,
- * preferrably performing expensive operations in a separate thread, or postponing them until
+ * preferably performing expensive operations in a separate thread, or postponing them until
* the application has fully started.
*
* @param sc The SparkContext loading the plugin.
diff --git a/core/src/main/scala/org/apache/spark/resource/ResourceDiscoveryScriptPlugin.scala b/core/src/main/scala/org/apache/spark/resource/ResourceDiscoveryScriptPlugin.scala
index 11a9bb8..d861e91 100644
--- a/core/src/main/scala/org/apache/spark/resource/ResourceDiscoveryScriptPlugin.scala
+++ b/core/src/main/scala/org/apache/spark/resource/ResourceDiscoveryScriptPlugin.scala
@@ -29,7 +29,7 @@ import org.apache.spark.util.Utils.executeAndGetOutput
/**
* The default plugin that is loaded into a Spark application to control how custom
* resources are discovered. This executes the discovery script specified by the user
- * and gets the json output back and contructs ResourceInformation objects from that.
+ * and gets the json output back and constructs ResourceInformation objects from that.
* If the user specifies custom plugins, this is the last one to be executed and
* throws if the resource isn't discovered.
*
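As a small illustration of the JSON handshake the corrected comment describes (a sketch, not Spark's actual parsing code; the resource name and addresses below are hypothetical, while the `name`/`addresses` fields follow the documented discovery-script output format):

```python
import json

# A resource discovery script prints one JSON object to stdout; Spark parses
# it into a ResourceInformation (a resource name plus its addresses).
script_output = '{"name": "gpu", "addresses": ["0", "1"]}'

resource = json.loads(script_output)
print(resource["name"])       # -> gpu
print(resource["addresses"])  # -> ['0', '1']
```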
diff --git a/docs/job-scheduling.md b/docs/job-scheduling.md
index eaacfa4..5c19c77 100644
--- a/docs/job-scheduling.md
+++ b/docs/job-scheduling.md
@@ -95,7 +95,7 @@ varies across cluster managers:
In standalone mode, simply start your workers with `spark.shuffle.service.enabled` set to `true`.
In Mesos coarse-grained mode, run `$SPARK_HOME/sbin/start-mesos-shuffle-service.sh` on all
-slave nodes with `spark.shuffle.service.enabled` set to `true`. For instance, you may do so
+worker nodes with `spark.shuffle.service.enabled` set to `true`. For instance, you may do so
through Marathon.
In YARN mode, follow the instructions [here](running-on-yarn.html#configuring-the-external-shuffle-service).
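The standalone-mode instruction in the hunk above boils down to a single setting; a minimal `spark-defaults.conf` sketch (restarting the workers afterwards is assumed):

```
# Enable the external shuffle service so shuffle files outlive executors
spark.shuffle.service.enabled  true
```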
diff --git a/docs/sql-ref-syntax-qry-select-groupby.md b/docs/sql-ref-syntax-qry-select-groupby.md
index 6137c0d..934e5f7 100644
--- a/docs/sql-ref-syntax-qry-select-groupby.md
+++ b/docs/sql-ref-syntax-qry-select-groupby.md
@@ -58,7 +58,7 @@ aggregate_name ( [ DISTINCT ] expression [ , ... ] ) [ FILTER ( WHERE boolean_ex
* **grouping_expression**
- Specifies the critieria based on which the rows are grouped together. The grouping of rows is performed based on
+ Specifies the criteria based on which the rows are grouped together. The grouping of rows is performed based on
result values of the grouping expressions. A grouping expression may be a column alias, a column position
or an expression.
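To make the corrected `grouping_expression` description concrete, a hedged example (the `dealer` table and its columns are hypothetical) showing the three allowed forms:

```sql
-- Group by a column alias
SELECT upper(city) AS c, count(*) FROM dealer GROUP BY c;

-- Group by column position (1 = the first SELECT expression)
SELECT city, sum(quantity) FROM dealer GROUP BY 1;

-- Group by an expression
SELECT year(sale_date), count(*) FROM dealer GROUP BY year(sale_date);
```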
diff --git a/docs/sql-ref-syntax-qry-select-hints.md b/docs/sql-ref-syntax-qry-select-hints.md
index 247ce48..5f1cb4c 100644
--- a/docs/sql-ref-syntax-qry-select-hints.md
+++ b/docs/sql-ref-syntax-qry-select-hints.md
@@ -31,7 +31,7 @@ Hints give users a way to suggest how Spark SQL to use specific approaches to ge
### Partitioning Hints
-Partitioning hints allow users to suggest a partitioning stragety that Spark should follow. `COALESCE`, `REPARTITION`,
+Partitioning hints allow users to suggest a partitioning strategy that Spark should follow. `COALESCE`, `REPARTITION`,
and `REPARTITION_BY_RANGE` hints are supported and are equivalent to `coalesce`, `repartition`, and
`repartitionByRange` [Dataset APIs](api/scala/org/apache/spark/sql/Dataset.html), respectively. These hints give users
a way to tune performance and control the number of output files in Spark SQL. When multiple partitioning hints are
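The three partitioning hints named in the hunk above can be sketched as follows (the table name `t` and column `c` are hypothetical; the hint names come from the doc):

```sql
-- Reduce to 3 partitions without a full shuffle
SELECT /*+ COALESCE(3) */ * FROM t;

-- Shuffle into 10 partitions
SELECT /*+ REPARTITION(10) */ * FROM t;

-- Range-partition into 10 partitions by column c
SELECT /*+ REPARTITION_BY_RANGE(10, c) */ * FROM t;
```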
diff --git a/docs/sql-ref.md b/docs/sql-ref.md
index 8d0c673..6a87166 100644
--- a/docs/sql-ref.md
+++ b/docs/sql-ref.md
@@ -32,7 +32,7 @@ Spark SQL is Apache Spark's module for working with structured data. This guide
* [Integration with Hive UDFs/UDAFs/UDTFs](sql-ref-functions-udf-hive.html)
* [Identifiers](sql-ref-identifier.html)
* [Literals](sql-ref-literals.html)
- * [Null Semanitics](sql-ref-null-semantics.html)
+ * [Null Semantics](sql-ref-null-semantics.html)
* [SQL Syntax](sql-ref-syntax.html)
* [DDL Statements](sql-ref-syntax-ddl.html)
* [DML Statements](sql-ref-syntax-dml.html)
diff --git a/launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java b/launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java
index 3ff7787..d5a277b 100644
--- a/launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java
+++ b/launcher/src/main/java/org/apache/spark/launcher/LauncherServer.java
@@ -364,7 +364,7 @@ class LauncherServer implements Closeable {
*
* This method allows a short period for the above to happen (same amount of time as the
* connection timeout, which is configurable). This should be fine for well-behaved
- * applications, where they close the connection arond the same time the app handle detects the
+ * applications, where they close the connection around the same time the app handle detects the
* app has finished.
*
* In case the connection is not closed within the grace period, this method forcefully closes
diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableCatalog.java b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableCatalog.java
index 1809b9c..b818515 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableCatalog.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/connector/catalog/TableCatalog.java
@@ -176,7 +176,7 @@ public interface TableCatalog extends CatalogPlugin {
* @param newIdent the new table identifier of the table
* @throws NoSuchTableException If the table to rename doesn't exist or is a view
* @throws TableAlreadyExistsException If the new table name already exists or is a view
- * @throws UnsupportedOperationException If the namespaces of old and new identiers do not
+ * @throws UnsupportedOperationException If the namespaces of old and new identifiers do not
* match (optional)
*/
void renameTable(Identifier oldIdent, Identifier newIdent)
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/QueryPlanningTracker.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/QueryPlanningTracker.scala
index cd75407c..35551d8 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/QueryPlanningTracker.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/QueryPlanningTracker.scala
@@ -28,7 +28,7 @@ import org.apache.spark.util.BoundedPriorityQueue
* There are two separate concepts we track:
*
* 1. Phases: These are broad scope phases in query planning, as listed below, i.e. analysis,
- * optimizationm and physical planning (just planning).
+ * optimization and physical planning (just planning).
*
* 2. Rules: These are the individual Catalyst rules that we track. In addition to time, we also
* track the number of invocations and effective invocations.
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/ShowTablePropertiesExec.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/ShowTablePropertiesExec.scala
index fef63cb..95715fd 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/ShowTablePropertiesExec.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/ShowTablePropertiesExec.scala
@@ -36,7 +36,7 @@ case class ShowTablePropertiesExec(
import scala.collection.JavaConverters._
val toRow = RowEncoder(schema).resolveAndBind().createSerializer()
- // The reservered properties are accessible through DESCRIBE
+ // The reserved properties are accessible through DESCRIBE
val properties = catalogTable.properties.asScala
.filter { case (k, v) => !CatalogV2Util.TABLE_RESERVED_PROPERTIES.contains(k) }
propertyKey match {