Posted to reviews@spark.apache.org by budde <gi...@git.apache.org> on 2017/02/15 19:40:09 UTC

[GitHub] spark pull request #16942: [SPARK-19611][SQL] Introduce configurable table s...

GitHub user budde opened a pull request:

    https://github.com/apache/spark/pull/16942

    [SPARK-19611][SQL] Introduce configurable table schema inference

    Replaces #16797. See the discussion in that PR for more details and justification for this change.
    
    ## Summary of changes
    [JIRA for this change](https://issues.apache.org/jira/browse/SPARK-19611)
    
    - Add spark.sql.hive.schemaInferenceMode param to SQLConf
    - Add schemaFromTableProps field to CatalogTable (set to true when schema is
      successfully read from table props)
    - Perform schema inference in HiveMetastoreCatalog if schemaFromTableProps is
      false, depending on spark.sql.hive.schemaInferenceMode.
    - Update table metadata properties in HiveExternalCatalog.alterTable()
    - Add HiveSchemaInferenceSuite tests
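    
    As a usage sketch (hypothetical user-facing code; assumes the option name and values above land as-is), the new mode would be selected like any other SQL conf:
    
    ```scala
    // Hypothetical usage sketch: pick the inference mode before querying a
    // Hive table whose properties lack a case-sensitive schema.
    import org.apache.spark.sql.SparkSession
    
    val spark = SparkSession.builder()
      .enableHiveSupport()
      .config("spark.sql.hive.schemaInferenceMode", "INFER_AND_SAVE")
      .getOrCreate()
    
    // Reads of such a table would then trigger inference (and, in this mode,
    // write the inferred schema back to the table properties).
    spark.sql("SELECT * FROM some_case_sensitive_table").show()  // table name is illustrative
    ```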
    
    ## How was this patch tested?
    
    The tests in HiveSchemaInferenceSuite should verify that schema inference is working as expected.
    
    ## Open issues
    
    - The option values for ```spark.sql.hive.schemaInferenceMode``` (e.g. "INFER_AND_SAVE", "INFER_ONLY", "NEVER_INFER") should be made into constants or an enum, but I couldn't find a sensible object to place them in that doesn't introduce a dependency between sql/core and sql/hive (a sketch follows this list).
    - Should "INFER_AND_SAVE" be the default behavior? This restores the out-of-the-box compatibility that was present prior to 2.1.0 but changes the behavior of 2.1.0 (which is essentially "NEVER_INFER").
    - Is ```HiveExternalCatalog.alterTable()``` the appropriate place to write back the table metadata properties outside of createTable()? Should a new external catalog method like updateTableMetadata() be introduced?
    - All partition columns will still be treated as case-insensitive even after inferring. As far as I remember, this has always been the case with schema inference prior to Spark 2.1.0, and I haven't attempted to reconcile it since it doesn't cause the same problems that case-sensitive data fields do. Should we attempt to restore case sensitivity by inspecting file paths, or leave this as-is?
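    
    For the first open issue above, a minimal sketch of what such a constants object could look like (the name and placement are hypothetical, pending a home visible to both modules):
    
    ```scala
    // Hypothetical constants object for the inference modes; name and package
    // placement are illustrative only, not part of this PR.
    object HiveSchemaInferenceMode {
      val INFER_AND_SAVE = "INFER_AND_SAVE"
      val INFER_ONLY = "INFER_ONLY"
      val NEVER_INFER = "NEVER_INFER"
      val values: Set[String] = Set(INFER_AND_SAVE, INFER_ONLY, NEVER_INFER)
    }
    ```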

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/budde/spark SPARK-19611

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/16942.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #16942
    
----
commit ced9c4d8363fb4e10e027da5a793ceabed11cfb7
Author: Budde <bu...@amazon.com>
Date:   2017-02-08T04:46:48Z

    [SPARK-19611][SQL] Introduce configurable table schema inference
    
    - Add spark.sql.hive.schemaInferenceMode param to SQLConf
    - Add schemaFromTableProps field to CatalogTable (set to true when schema is
      successfully read from table props)
    - Perform schema inference in HiveMetastoreCatalog if schemaFromTableProps is
      false, depending on spark.sql.hive.schemaInferenceMode.
    - Update table metadata properties in HiveExternalCatalog.alterTable()
    - Add HiveSchemaInferenceSuite tests

----




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by budde <gi...@git.apache.org>.
Github user budde commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    Accidentally did a force-push to my branch for this issue. Looks like I'll have to open a new PR...




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by budde <gi...@git.apache.org>.
Github user budde commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    @mallman If I did close it then it was by mistake. The "Reopen and comment" button was disabled with a message about the PR being closed by a force push when I hovered over it. Afraid I'm a bit of a n00b on GitHub PRs :/




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    **[Test build #72947 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72947/testReport)** for PR 16942 at commit [`ced9c4d`](https://github.com/apache/spark/commit/ced9c4d8363fb4e10e027da5a793ceabed11cfb7).




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by budde <gi...@git.apache.org>.
Github user budde commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    Tests appear to be failing due to the following error:
    
    ```
    [info] Exception encountered when attempting to run a suite with class name: org.apache.spark.sql.streaming.FileStreamSourceSuite *** ABORTED *** (0 milliseconds)
    [info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
      org.apache.spark.sql.execution.SQLExecutionSuite$$anonfun$3.apply(SQLExecutionSuite.scala:107)
    ...
    ```
    
    I don't think anything in this PR should've changed the behavior of the core SQL tests, but I'll look into this.




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by mallman <gi...@git.apache.org>.
Github user mallman commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    Weird. I think I've seen that behavior once before. But I think the only time I force-push on a PR is to rebase. Maybe that's the only kind of force push allowed for GitHub PRs.




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    Test FAILed.
    Refer to this link for build results (access rights to CI server needed): 
    https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/72947/
    Test FAILed.




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by SparkQA <gi...@git.apache.org>.
Github user SparkQA commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    **[Test build #72947 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/72947/testReport)** for PR 16942 at commit [`ced9c4d`](https://github.com/apache/spark/commit/ced9c4d8363fb4e10e027da5a793ceabed11cfb7).
     * This patch **fails Spark unit tests**.
     * This patch merges cleanly.
     * This patch adds no public classes.




[GitHub] spark pull request #16942: [SPARK-19611][SQL] Introduce configurable table s...

Posted by budde <gi...@git.apache.org>.
Github user budde commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16942#discussion_r101366441
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -296,6 +296,17 @@ object SQLConf {
           .longConf
           .createWithDefault(250 * 1024 * 1024)
     
    +  val HIVE_SCHEMA_INFERENCE_MODE = buildConf("spark.sql.hive.schemaInferenceMode")
    +    .doc("Configures the action to take when a case-sensitive schema cannot be read from a Hive " +
    +      "table's properties. Valid options include INFER_AND_SAVE (infer the case-sensitive " +
    +      "schema from the underlying data files and write it back to the table properties), " +
    +      "INFER_ONLY (infer the schema but don't attempt to write it to the table properties) and " +
    +      "NEVER_INFER (fallback to using the case-insensitive metastore schema instead of inferring).")
    +    .stringConf
    +    .transform(_.toUpperCase())
    +    .checkValues(Set("INFER_AND_SAVE", "INFER_ONLY", "NEVER_INFER"))
    --- End diff --
    
    As mentioned in the PR, I'm looking for a good place to store these values as constants or an enum.
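    
    Once such an object exists, the check above could become something like this (hypothetical; assumes a HiveSchemaInferenceMode constants object as sketched in the PR description):
    
    ```scala
    // Hypothetical: replaces the string-literal Set above once constants exist.
    .checkValues(HiveSchemaInferenceMode.values)
    ```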




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by mallman <gi...@git.apache.org>.
Github user mallman commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    Force pushing your branch shouldn't close the PR. You didn't close it manually?




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by budde <gi...@git.apache.org>.
Github user budde commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    Pinging participants from #16797: @gatorsmile, @viirya, @ericl, @mallman and @cloud-fan 




[GitHub] spark pull request #16942: [SPARK-19611][SQL] Introduce configurable table s...

Posted by budde <gi...@git.apache.org>.
Github user budde closed the pull request at:

    https://github.com/apache/spark/pull/16942




[GitHub] spark pull request #16942: [SPARK-19611][SQL] Introduce configurable table s...

Posted by budde <gi...@git.apache.org>.
Github user budde commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16942#discussion_r101366307
  
    --- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSchemaInferenceSuite.scala ---
    @@ -0,0 +1,162 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +
    +package org.apache.spark.sql.hive
    +
    +import java.io.File
    +import java.util.concurrent.{Executors, TimeUnit}
    +
    +import org.scalatest.BeforeAndAfterEach
    +
    +import org.apache.spark.metrics.source.HiveCatalogMetrics
    +import org.apache.spark.sql.catalyst.TableIdentifier
    +import org.apache.spark.sql.catalyst.catalog._
    +import org.apache.spark.sql.execution.datasources.FileStatusCache
    +import org.apache.spark.sql.QueryTest
    +import org.apache.spark.sql.hive.client.HiveClient
    +import org.apache.spark.sql.hive.test.TestHiveSingleton
    +import org.apache.spark.sql.internal.SQLConf
    +import org.apache.spark.sql.test.SQLTestUtils
    +import org.apache.spark.sql.types._
    +
    +class HiveSchemaInferenceSuite
    +  extends QueryTest with TestHiveSingleton with SQLTestUtils with BeforeAndAfterEach {
    +
    +  import HiveSchemaInferenceSuite._
    +
    +  // Create a CatalogTable instance modeling an external Hive table in a metastore that isn't
    +  // controlled by Spark (i.e. has no Spark-specific table properties set).
    +  private def hiveExternalCatalogTable(
    +      tableName: String,
    +      location: String,
    +      schema: StructType,
    +      partitionColumns: Seq[String],
    +      properties: Map[String, String] = Map.empty): CatalogTable = {
    +    CatalogTable(
    +      identifier = TableIdentifier(table = tableName, database = Option("default")),
    +      tableType = CatalogTableType.EXTERNAL,
    +      storage = CatalogStorageFormat(
    +        locationUri = Option(location),
    +        inputFormat = Option("org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat"),
    +        outputFormat = Option("org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat"),
    +        serde = Option("org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"),
    +        compressed = false,
    +        properties = Map("serialization.format" -> "1")),
    +      schema = schema,
    +      provider = Option("hive"),
    +      partitionColumnNames = partitionColumns,
    +      properties = properties)
    +  }
    +
    +  // Creates CatalogTablePartition instances for adding partitions of data to our test table.
    +  private def hiveCatalogPartition(location: String, index: Int): CatalogTablePartition
    +    = CatalogTablePartition(
    +      spec = Map("partcol1" -> index.toString, "partcol2" -> index.toString),
    +      storage = CatalogStorageFormat(
    +        locationUri = Option(s"${location}/partCol1=$index/partCol2=$index/"),
    +        inputFormat = Option("org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat"),
    +        outputFormat = Option("org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat"),
    +        serde = Option("org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"),
    +        compressed = false,
    +        properties = Map("serialization.format" -> "1")))
    +
    +  // Creates a case-sensitive external Hive table for testing schema inference options. Table
    +  // will not have Spark-specific table properties set.
    +  private def setupCaseSensitiveTable(
    +      tableName: String,
    +      dir: File): Unit = {
    +    spark.range(NUM_RECORDS)
    +      .selectExpr("id as fieldOne", "id as partCol1", "id as partCol2")
    +      .write
    +      .partitionBy("partCol1", "partCol2")
    +      .mode("overwrite")
    +      .parquet(dir.getAbsolutePath)
    +
    +    val lowercaseSchema = StructType(Seq(
    +      StructField("fieldone", LongType),
    +      StructField("partcol1", IntegerType),
    +      StructField("partcol2", IntegerType)))
    +
    +    val client = spark.sharedState.externalCatalog.asInstanceOf[HiveExternalCatalog].client
    +
    +    val catalogTable = hiveExternalCatalogTable(
    +      tableName,
    +      dir.getAbsolutePath,
    +      lowercaseSchema,
    +      Seq("partcol1", "partcol2"))
    +    client.createTable(catalogTable, true)
    +
    +    val partitions = (0 until NUM_RECORDS).map(hiveCatalogPartition(dir.getAbsolutePath, _)).toSeq
    +    client.createPartitions("default", tableName, partitions, true)
    +  }
    +
    +  // Create a test table used for a single unit test, with data stored in the specified directory.
    +  private def withTestTable(dir: File)(f: File => Unit): Unit = {
    +    setupCaseSensitiveTable(TEST_TABLE_NAME, dir)
    +    try f(dir) finally spark.sql(s"DROP TABLE IF EXISTS $TEST_TABLE_NAME")
    +  }
    +
    +  override def beforeEach(): Unit = {
    +    super.beforeEach()
    +    FileStatusCache.resetForTesting()
    +  }
    +
    +  override def afterEach(): Unit = {
    +    super.afterEach()
    +    FileStatusCache.resetForTesting()
    +  }
    +
    +  test("Queries against case-sensitive tables with no schema in table properties should work " +
    +    "when schema inference is enabled") {
    +    withSQLConf("spark.sql.hive.schemaInferenceMode" -> "INFER_AND_SAVE") {
    --- End diff --
    
    Will change this to reference the key via the constant in SQLConf rather than the raw string ```spark.sql.hive.schemaInferenceMode```.
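    
    That is, something along these lines (a sketch; HIVE_SCHEMA_INFERENCE_MODE is the SQLConf entry added by this PR, and a ConfigEntry exposes its key via .key):
    
    ```scala
    // Sketch: derive the conf key from the ConfigEntry instead of hard-coding it.
    withSQLConf(SQLConf.HIVE_SCHEMA_INFERENCE_MODE.key -> "INFER_AND_SAVE") {
      // test body unchanged
    }
    ```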




[GitHub] spark issue #16942: [SPARK-19611][SQL] Introduce configurable table schema i...

Posted by AmplabJenkins <gi...@git.apache.org>.
Github user AmplabJenkins commented on the issue:

    https://github.com/apache/spark/pull/16942
  
    Merged build finished. Test FAILed.




[GitHub] spark pull request #16942: [SPARK-19611][SQL] Introduce configurable table s...

Posted by budde <gi...@git.apache.org>.
Github user budde commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16942#discussion_r101366583
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -296,6 +296,17 @@ object SQLConf {
           .longConf
           .createWithDefault(250 * 1024 * 1024)
     
    +  val HIVE_SCHEMA_INFERENCE_MODE = buildConf("spark.sql.hive.schemaInferenceMode")
    +    .doc("Configures the action to take when a case-sensitive schema cannot be read from a Hive " +
    +      "table's properties. Valid options include INFER_AND_SAVE (infer the case-sensitive " +
    +      "schema from the underlying data files and write it back to the table properties), " +
    +      "INFER_ONLY (infer the schema but don't attempt to write it to the table properties) and " +
    +      "NEVER_INFER (fallback to using the case-insensitive metastore schema instead of inferring).")
    +    .stringConf
    +    .transform(_.toUpperCase())
    +    .checkValues(Set("INFER_AND_SAVE", "INFER_ONLY", "NEVER_INFER"))
    +    .createWithDefault("INFER_AND_SAVE")
    --- End diff --
    
    I'm open for discussion on whether or not this should be the default behavior.

