Posted to commits@spark.apache.org by sr...@apache.org on 2016/02/11 10:30:46 UTC

spark git commit: [SPARK-13264][DOC] Removed multi-byte characters in spark-env.sh.template

Repository: spark
Updated Branches:
  refs/heads/master 18bcbbdd8 -> c2f21d889


[SPARK-13264][DOC] Removed multi-byte characters in spark-env.sh.template

In spark-env.sh.template there are multi-byte characters; this PR removes them.

Author: Sasaki Toru <sa...@nttdata.co.jp>

Closes #11149 from sasakitoa/remove_multibyte_in_sparkenv.
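
For context, the multi-byte characters in question are typographic quotes (e.g. the curly quotes visible in the hunks below), which occupy more than one byte in UTF-8. A minimal Scala sketch of how such characters could be located before fixing them by hand; the NonAsciiScanner object and its scan helper are illustrative only and are not part of this commit:

    import scala.io.Source

    // Hypothetical helper (not part of this commit): report characters outside
    // the ASCII range, i.e. those that take more than one byte in UTF-8.
    object NonAsciiScanner {
      def scan(path: String): Unit = {
        val source = Source.fromFile(path, "UTF-8")
        try {
          for ((line, idx) <- source.getLines().zipWithIndex) {
            // Any code point above U+007F encodes as multiple bytes in UTF-8.
            val offenders = line.zipWithIndex.collect {
              case (c, col) if c > '\u007f' =>
                f"'$c' (U+${c.toInt}%04X) at column ${col + 1}"
            }
            if (offenders.nonEmpty)
              println(s"$path:${idx + 1}: ${offenders.mkString(", ")}")
          }
        } finally source.close()
      }
    }

    // Example: NonAsciiScanner.scan("conf/spark-env.sh.template")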


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c2f21d88
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c2f21d88
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c2f21d88

Branch: refs/heads/master
Commit: c2f21d88981789fe3366f2c4040445aeff5bf083
Parents: 18bcbbd
Author: Sasaki Toru <sa...@nttdata.co.jp>
Authored: Thu Feb 11 09:30:36 2016 +0000
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Feb 11 09:30:36 2016 +0000

----------------------------------------------------------------------
 R/pkg/R/serialize.R                                                | 2 +-
 conf/spark-env.sh.template                                         | 2 +-
 docs/sql-programming-guide.md                                      | 2 +-
 .../org/apache/spark/ml/regression/LinearRegressionSuite.scala     | 2 +-
 sql/README.md                                                      | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/c2f21d88/R/pkg/R/serialize.R
----------------------------------------------------------------------
diff --git a/R/pkg/R/serialize.R b/R/pkg/R/serialize.R
index 095ddb9..70e87a9 100644
--- a/R/pkg/R/serialize.R
+++ b/R/pkg/R/serialize.R
@@ -54,7 +54,7 @@ writeObject <- function(con, object, writeType = TRUE) {
   # passing in vectors as arrays and instead require arrays to be passed
   # as lists.
   type <- class(object)[[1]]  # class of POSIXlt is c("POSIXlt", "POSIXt")
-  # Checking types is needed here, since ‘is.na’ only handles atomic vectors,
+  # Checking types is needed here, since 'is.na' only handles atomic vectors,
   # lists and pairlists
   if (type %in% c("integer", "character", "logical", "double", "numeric")) {
     if (is.na(object)) {

http://git-wip-us.apache.org/repos/asf/spark/blob/c2f21d88/conf/spark-env.sh.template
----------------------------------------------------------------------
diff --git a/conf/spark-env.sh.template b/conf/spark-env.sh.template
index 771251f..a031cd6 100755
--- a/conf/spark-env.sh.template
+++ b/conf/spark-env.sh.template
@@ -41,7 +41,7 @@
 # - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
 # - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
 # - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
-# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: ‘default’)
+# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: 'default')
 # - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
 # - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.
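
The variables documented in this hunk are read by Spark's launch scripts; several have SparkConf equivalents. A rough sketch of the corresponding properties, shown only for illustration, with values taken from the defaults the template documents:

    import org.apache.spark.SparkConf

    // Approximate SparkConf equivalents of the environment variables above.
    val conf = new SparkConf()
      .set("spark.executor.memory", "1g")  // SPARK_EXECUTOR_MEMORY
      .set("spark.driver.memory", "1g")    // SPARK_DRIVER_MEMORY
      .set("spark.yarn.queue", "default")  // SPARK_YARN_QUEUE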
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c2f21d88/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index ce53a39..d246100 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -2389,7 +2389,7 @@ let user control table caching explicitly:
     CACHE TABLE logs_last_month;
     UNCACHE TABLE logs_last_month;
 
-**NOTE:** `CACHE TABLE tbl` is now __eager__ by default not __lazy__. Don’t need to trigger cache materialization manually anymore.
+**NOTE:** `CACHE TABLE tbl` is now __eager__ by default not __lazy__. Don't need to trigger cache materialization manually anymore.
 
 Spark SQL newly introduced a statement to let user control table caching whether or not lazy since Spark 1.2.0:
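
The eager/lazy distinction this hunk documents can be exercised directly. A small sketch, assuming an existing SQLContext named sqlContext and a registered table logs_last_month as in the surrounding guide:

    // Eager by default since Spark 1.2.0: the cache is materialized immediately.
    sqlContext.sql("CACHE TABLE logs_last_month")
    // Lazy variant: the cache is materialized on first use of the table.
    sqlContext.sql("CACHE LAZY TABLE logs_last_month")
    sqlContext.sql("UNCACHE TABLE logs_last_month")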
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c2f21d88/mllib/src/test/scala/org/apache/spark/ml/regression/LinearRegressionSuite.scala
----------------------------------------------------------------------
diff --git a/mllib/src/test/scala/org/apache/spark/ml/regression/LinearRegressionSuite.scala b/mllib/src/test/scala/org/apache/spark/ml/regression/LinearRegressionSuite.scala
index 81fc660..3ae108d 100644
--- a/mllib/src/test/scala/org/apache/spark/ml/regression/LinearRegressionSuite.scala
+++ b/mllib/src/test/scala/org/apache/spark/ml/regression/LinearRegressionSuite.scala
@@ -956,7 +956,7 @@ class LinearRegressionSuite
        V1  -3.7271     2.9032  -1.284   0.3279
        V2   3.0100     0.6022   4.998   0.0378 *
        ---
-       Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
+       Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
 
        (Dispersion parameter for gaussian family taken to be 17.4376)
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c2f21d88/sql/README.md
----------------------------------------------------------------------
diff --git a/sql/README.md b/sql/README.md
index a13bdab..9ea271d 100644
--- a/sql/README.md
+++ b/sql/README.md
@@ -5,7 +5,7 @@ This module provides support for executing relational queries expressed in eithe
 
 Spark SQL is broken up into four subprojects:
  - Catalyst (sql/catalyst) - An implementation-agnostic framework for manipulating trees of relational operators and expressions.
- - Execution (sql/core) - A query planner / execution engine for translating Catalyst’s logical query plans into Spark RDDs.  This component also includes a new public interface, SQLContext, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
+ - Execution (sql/core) - A query planner / execution engine for translating Catalyst's logical query plans into Spark RDDs.  This component also includes a new public interface, SQLContext, that allows users to execute SQL or LINQ statements against existing RDDs and Parquet files.
  - Hive Support (sql/hive) - Includes an extension of SQLContext called HiveContext that allows users to write queries using a subset of HiveQL and access data from a Hive Metastore using Hive SerDes.  There are also wrappers that allows users to run queries that include Hive UDFs, UDAFs, and UDTFs.
  - HiveServer and CLI support (sql/hive-thriftserver) - Includes support for the SQL CLI (bin/spark-sql) and a HiveServer2 (for JDBC/ODBC) compatible server.
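
A short sketch of the SQLContext workflow this hunk describes (Spark 1.x API; the Parquet path and table name are placeholders, not from this commit):

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    val sc = new SparkContext(
      new SparkConf().setAppName("sql-readme-example").setMaster("local[*]"))
    val sqlContext = new SQLContext(sc)

    // Load a Parquet file as a DataFrame, register it, and query it with SQL.
    val logs = sqlContext.read.parquet("/tmp/logs.parquet")
    logs.registerTempTable("logs")
    sqlContext.sql("SELECT count(*) FROM logs").show()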
 

