Posted to reviews@spark.apache.org by felixcheung <gi...@git.apache.org> on 2018/08/21 06:10:22 UTC

[GitHub] spark pull request #22161: [SPARK-25167][SPARKR][TEST][MINOR] Minor fixes fo...

Github user felixcheung commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22161#discussion_r211487544
  
    --- Diff: R/pkg/tests/fulltests/test_sparkSQL.R ---
    @@ -3613,11 +3613,11 @@ test_that("Collect on DataFrame when NAs exists at the top of a timestamp column
     test_that("catalog APIs, currentDatabase, setCurrentDatabase, listDatabases", {
       expect_equal(currentDatabase(), "default")
       expect_error(setCurrentDatabase("default"), NA)
    -  expect_error(setCurrentDatabase("foo"),
    -               "Error in setCurrentDatabase : analysis error - Database 'foo' does not exist")
    +  expect_error(setCurrentDatabase("zxwtyswklpf"),
    +        "Error in setCurrentDatabase : analysis error - Database 'zxwtyswklpf' does not exist")
       dbs <- collect(listDatabases())
       expect_equal(names(dbs), c("name", "description", "locationUri"))
    -  expect_equal(dbs[[1]], "default")
    +  expect_equal(which(dbs[, 1] == "default"), 1)
    --- End diff --
    
    I wonder if there is a better way to ensure the default database is named "default", perhaps? This checks that exactly one database is named "default" (and that it is listed first) - I guess that's ok...
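
    A minimal sketch of one alternative, assuming dbs is the data frame collected from listDatabases() above (with the "name" column already verified), would be to assert membership rather than row position:

        # pass as long as a database named "default" exists, regardless of listing order
        expect_true("default" %in% dbs$name)
        # optionally also assert that it appears exactly once
        expect_equal(sum(dbs$name == "default"), 1)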


---
