Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/05/09 04:59:04 UTC
[jira] [Assigned] (SPARK-20667) Cleanup the cataloged metadata after completing the package of sql/core and sql/hive
[ https://issues.apache.org/jira/browse/SPARK-20667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Apache Spark reassigned SPARK-20667:
------------------------------------
Assignee: Xiao Li (was: Apache Spark)
> Cleanup the cataloged metadata after completing the package of sql/core and sql/hive
> ------------------------------------------------------------------------------------
>
> Key: SPARK-20667
> URL: https://issues.apache.org/jira/browse/SPARK-20667
> Project: Spark
> Issue Type: Test
> Components: SQL
> Affects Versions: 2.2.0
> Reporter: Xiao Li
> Assignee: Xiao Li
>
> So far, we do not drop all the cataloged tables after each test package finishes. As a result, we sometimes hit confusing test failures because a previous test suite left its tables/functions/databases behind. As a first step, we can clean up the cataloged metadata when the sql/core and sql/hive packages complete.
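>
> A rough sketch of what such a cleanup hook could look like (the trait name CatalogCleanup is a placeholder and the spark session is assumed to be supplied by the concrete suite; this is not an existing test utility):
> {code}
> import org.apache.spark.sql.SparkSession
> import org.scalatest.{BeforeAndAfterAll, Suite}
>
> // Placeholder mixin: wipe cataloged metadata once a suite finishes so that
> // leftover tables/functions/databases cannot leak into later suites.
> trait CatalogCleanup extends BeforeAndAfterAll { self: Suite =>
>
>   // Assumed to be provided by the concrete test suite (e.g. a shared session).
>   def spark: SparkSession
>
>   override def afterAll(): Unit = {
>     try {
>       // Drop every non-default database, cascading to its tables and functions.
>       spark.catalog.listDatabases().collect()
>         .map(_.name)
>         .filterNot(_ == "default")
>         .foreach(db => spark.sql(s"DROP DATABASE IF EXISTS `$db` CASCADE"))
>
>       // Drop whatever is left in the default database, temp views included.
>       spark.catalog.listTables("default").collect().foreach { t =>
>         if (t.isTemporary) {
>           spark.catalog.dropTempView(t.name)
>         } else if (t.tableType == "VIEW") {
>           spark.sql(s"DROP VIEW IF EXISTS `default`.`${t.name}`")
>         } else {
>           spark.sql(s"DROP TABLE IF EXISTS `default`.`${t.name}`")
>         }
>       }
>
>       // Clear cached data so the next suite starts cold.
>       spark.catalog.clearCache()
>     } finally {
>       super.afterAll()
>     }
>   }
> }
> {code}
> Spark's internal SessionCatalog also has a reset() helper intended for tests, which might be the simpler hook if using internal APIs is acceptable.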
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org