Posted to commits@spark.apache.org by gu...@apache.org on 2018/03/23 12:01:09 UTC

spark git commit: [MINOR][R] Fix R lint failure

Repository: spark
Updated Branches:
  refs/heads/master 5fa438471 -> 92e952557


[MINOR][R] Fix R lint failure

## What changes were proposed in this pull request?

The lint failure bugged me:

```R
R/SQLContext.R:715:97: style: Trailing whitespace is superfluous.
#'        file-based streaming data source. \code{timeZone} to indicate a timezone to be used to
                                                                                                ^
tests/fulltests/test_streaming.R:239:45: style: Commas should always have a space after.
  expect_equal(times[order(times$eventTime),][1, 2], 2)
                                            ^
lintr checks failed.
```

and I actually saw it at https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.6-ubuntu-test/500/console too. If I understood correctly, there is an attempt to move to an Ubuntu one.

## How was this patch tested?

Manually tested by `./dev/lint-r`:

```
...
lintr checks passed.
```
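
The same two checks can also be reproduced with lintr directly; the snippet below is a minimal sketch, not part of this patch, and assumes lintr's default linters (which include the comma-spacing and trailing-whitespace checks):

```R
# Minimal sketch (not part of this patch): reproduce the two style checks
# with lintr on a throwaway file. Assumes lintr's default linters.
library(lintr)

tmp <- tempfile(fileext = ".R")
writeLines(c(
  "times <- data.frame(eventTime = c(2, 1), count = c(5, 3))",
  # no space after the comma, plus trailing whitespace:
  "times[order(times$eventTime),][1, 2]  "
), tmp)

lint(tmp)
# Expected output includes:
#   style: Commas should always have a space after.
#   style: Trailing whitespace is superfluous.
```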

Author: hyukjinkwon <gu...@apache.org>

Closes #20879 from HyukjinKwon/minor-r-lint.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/92e95255
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/92e95255
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/92e95255

Branch: refs/heads/master
Commit: 92e952557dbd8a170d66d615e25c6c6a8399dd43
Parents: 5fa4384
Author: hyukjinkwon <gu...@apache.org>
Authored: Fri Mar 23 21:01:07 2018 +0900
Committer: hyukjinkwon <gu...@apache.org>
Committed: Fri Mar 23 21:01:07 2018 +0900

----------------------------------------------------------------------
 R/pkg/R/SQLContext.R                   | 2 +-
 R/pkg/tests/fulltests/test_streaming.R | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/92e95255/R/pkg/R/SQLContext.R
----------------------------------------------------------------------
diff --git a/R/pkg/R/SQLContext.R b/R/pkg/R/SQLContext.R
index ebec0ce..429dd5d 100644
--- a/R/pkg/R/SQLContext.R
+++ b/R/pkg/R/SQLContext.R
@@ -712,7 +712,7 @@ read.jdbc <- function(url, tableName,
 #' @param schema The data schema defined in structType or a DDL-formatted string, this is
 #'               required for file-based streaming data source
 #' @param ... additional external data source specific named options, for instance \code{path} for
-#'        file-based streaming data source. \code{timeZone} to indicate a timezone to be used to 
+#'        file-based streaming data source. \code{timeZone} to indicate a timezone to be used to
 #'        parse timestamps in the JSON/CSV data sources or partition values; If it isn't set, it
 #'        uses the default value, session local timezone.
 #' @return SparkDataFrame

http://git-wip-us.apache.org/repos/asf/spark/blob/92e95255/R/pkg/tests/fulltests/test_streaming.R
----------------------------------------------------------------------
diff --git a/R/pkg/tests/fulltests/test_streaming.R b/R/pkg/tests/fulltests/test_streaming.R
index a354d50..bfb1a04 100644
--- a/R/pkg/tests/fulltests/test_streaming.R
+++ b/R/pkg/tests/fulltests/test_streaming.R
@@ -236,7 +236,7 @@ test_that("Watermark", {
 
   times <- collect(sql("SELECT * FROM times"))
   # looks like write timing can affect the first bucket; but it should be t
-  expect_equal(times[order(times$eventTime),][1, 2], 2)
+  expect_equal(times[order(times$eventTime), ][1, 2], 2)
 
   stopQuery(q)
   unlink(parquetPath)
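
As context for the changed test expression (illustration only, with made-up values rather than the actual test data): in base R, `times[order(times$eventTime), ]` reorders the rows by `eventTime`, and the trailing `[1, 2]` then selects the first row's second column.

```R
# Illustration only (values are made up, not from the actual test):
# times[order(times$eventTime), ] reorders the rows by eventTime, and
# [1, 2] then picks the first row's second column.
times <- data.frame(eventTime = c(3, 1, 2), count = c(30, 10, 20))
sorted <- times[order(times$eventTime), ]  # rows now in eventTime order 1, 2, 3
sorted[1, 2]                               # -> 10 (count of the earliest event)
```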


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org