Posted to issues@spark.apache.org by "todd.chen (JIRA)" <ji...@apache.org> on 2016/11/14 12:46:58 UTC

[jira] [Comment Edited] (SPARK-18050) spark 2.0.1 enable hive throw AlreadyExistsException(message:Database default already exists)

    [ https://issues.apache.org/jira/browse/SPARK-18050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15663821#comment-15663821 ] 

todd.chen edited comment on SPARK-18050 at 11/14/16 12:46 PM:
--------------------------------------------------------------

16/11/14 20:38:03 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
16/11/14 20:38:03 WARN conf.HiveConf: HiveConf of name hive.mapjoin.optimized.keys does not exist
16/11/14 20:38:03 WARN conf.HiveConf: HiveConf of name hive.optimize.multigroupby.common.distincts does not exist
16/11/14 20:38:03 WARN conf.HiveConf: HiveConf of name hive.mapjoin.lazy.hashtable does not exist
16/11/14 20:38:03 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.min.worker.threads does not exist
16/11/14 20:38:03 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.max.worker.threads does not exist
16/11/14 20:38:03 WARN conf.HiveConf: HiveConf of name hive.server2.logging.operation.verbose does not exist
16/11/14 20:38:04 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
16/11/14 20:38:04 INFO metastore.ObjectStore: ObjectStore, initialize called
16/11/14 20:38:04 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
16/11/14 20:38:04 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
16/11/14 20:38:06 WARN conf.HiveConf: HiveConf of name hive.mapjoin.optimized.keys does not exist
16/11/14 20:38:06 WARN conf.HiveConf: HiveConf of name hive.optimize.multigroupby.common.distincts does not exist
16/11/14 20:38:06 WARN conf.HiveConf: HiveConf of name hive.mapjoin.lazy.hashtable does not exist
16/11/14 20:38:06 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.min.worker.threads does not exist
16/11/14 20:38:06 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.max.worker.threads does not exist
16/11/14 20:38:06 WARN conf.HiveConf: HiveConf of name hive.server2.logging.operation.verbose does not exist
16/11/14 20:38:06 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
16/11/14 20:38:09 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/11/14 20:38:09 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/11/14 20:38:11 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
16/11/14 20:38:11 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
16/11/14 20:38:11 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
16/11/14 20:38:12 INFO metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is MYSQL
16/11/14 20:38:12 INFO metastore.ObjectStore: Initialized ObjectStore
16/11/14 20:38:13 INFO metastore.HiveMetaStore: Added admin role in metastore
16/11/14 20:38:13 INFO metastore.HiveMetaStore: Added public role in metastore
16/11/14 20:38:14 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
16/11/14 20:38:14 INFO metastore.HiveMetaStore: 0: get_all_databases
16/11/14 20:38:14 INFO HiveMetaStore.audit: ugi=cjuexuan	ip=unknown-ip-addr	cmd=get_all_databases	
16/11/14 20:38:14 INFO metastore.HiveMetaStore: 0: get_functions: db=bi pat=*
16/11/14 20:38:14 INFO HiveMetaStore.audit: ugi=cjuexuan	ip=unknown-ip-addr	cmd=get_functions: db=bi pat=*	
16/11/14 20:38:14 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
16/11/14 20:38:15 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
16/11/14 20:38:15 INFO HiveMetaStore.audit: ugi=cjuexuan	ip=unknown-ip-addr	cmd=get_functions: db=default pat=*	
16/11/14 20:38:15 INFO metastore.HiveMetaStore: 0: get_functions: db=search pat=*
16/11/14 20:38:15 INFO HiveMetaStore.audit: ugi=cjuexuan	ip=unknown-ip-addr	cmd=get_functions: db=search pat=*	
16/11/14 20:38:15 INFO metastore.HiveMetaStore: 0: get_functions: db=test_randy pat=*
16/11/14 20:38:15 INFO HiveMetaStore.audit: ugi=cjuexuan	ip=unknown-ip-addr	cmd=get_functions: db=test_randy pat=*	
16/11/14 20:38:15 INFO metastore.HiveMetaStore: 0: get_functions: db=testaa pat=*
16/11/14 20:38:15 INFO HiveMetaStore.audit: ugi=cjuexuan	ip=unknown-ip-addr	cmd=get_functions: db=testaa pat=*	
16/11/14 20:38:15 INFO session.SessionState: Created local directory: /var/folders/0h/bdlvyj3j21d3t65dt8thq7500000gp/T/81f6f9b7-5e21-49ce-9dcc-48b5297e8d95_resources
16/11/14 20:38:15 INFO session.SessionState: Created HDFS directory: /tmp/hive-cjuexuan/cjuexuan/81f6f9b7-5e21-49ce-9dcc-48b5297e8d95
16/11/14 20:38:15 INFO session.SessionState: Created local directory: /tmp/cjuexuan/81f6f9b7-5e21-49ce-9dcc-48b5297e8d95
16/11/14 20:38:15 INFO session.SessionState: Created HDFS directory: /tmp/hive-cjuexuan/cjuexuan/81f6f9b7-5e21-49ce-9dcc-48b5297e8d95/_tmp_space.db
16/11/14 20:38:15 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is /user/hive/warehouse
16/11/14 20:38:15 WARN conf.HiveConf: HiveConf of name hive.mapjoin.optimized.keys does not exist
16/11/14 20:38:15 WARN conf.HiveConf: HiveConf of name hive.optimize.multigroupby.common.distincts does not exist
16/11/14 20:38:15 WARN conf.HiveConf: HiveConf of name hive.mapjoin.lazy.hashtable does not exist
16/11/14 20:38:15 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.min.worker.threads does not exist
16/11/14 20:38:15 WARN conf.HiveConf: HiveConf of name hive.server2.thrift.http.max.worker.threads does not exist
16/11/14 20:38:15 WARN conf.HiveConf: HiveConf of name hive.server2.logging.operation.verbose does not exist
16/11/14 20:38:15 INFO session.SessionState: Created local directory: /var/folders/0h/bdlvyj3j21d3t65dt8thq7500000gp/T/59a86193-b20e-4b0e-8c74-ccc43e3f5203_resources
16/11/14 20:38:15 INFO session.SessionState: Created HDFS directory: /tmp/hive-cjuexuan/cjuexuan/59a86193-b20e-4b0e-8c74-ccc43e3f5203
16/11/14 20:38:15 INFO session.SessionState: Created local directory: /tmp/cjuexuan/59a86193-b20e-4b0e-8c74-ccc43e3f5203
16/11/14 20:38:15 INFO session.SessionState: Created HDFS directory: /tmp/hive-cjuexuan/cjuexuan/59a86193-b20e-4b0e-8c74-ccc43e3f5203/_tmp_space.db
16/11/14 20:38:15 INFO client.HiveClientImpl: Warehouse location for Hive client (version 1.2.1) is /user/hive/warehouse
16/11/14 20:38:16 INFO metastore.HiveMetaStore: 0: create_database: Database(name:default, description:default database, locationUri:file:/user/hive/warehouse, parameters:{})
16/11/14 20:38:16 INFO HiveMetaStore.audit: ugi=cjuexuan	ip=unknown-ip-addr	cmd=create_database: Database(name:default, description:default database, locationUri:file:/user/hive/warehouse, parameters:{})	
16/11/14 20:38:16 ERROR metastore.RetryingHMSHandler: AlreadyExistsException(message:Database default already exists)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_database(HiveMetaStore.java:891)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
	at com.sun.proxy.$Proxy21.create_database(Unknown Source)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:644)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:156)
	at com.sun.proxy.$Proxy22.createDatabase(Unknown Source)
	at org.apache.hadoop.hive.ql.metadata.Hive.createDatabase(Hive.java:306)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply$mcV$sp(HiveClientImpl.scala:309)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply(HiveClientImpl.scala:309)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$createDatabase$1.apply(HiveClientImpl.scala:309)
	at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:280)
	at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:227)
	at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:226)
	at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:269)
	at org.apache.spark.sql.hive.client.HiveClientImpl.createDatabase(HiveClientImpl.scala:308)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply$mcV$sp(HiveExternalCatalog.scala:99)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply(HiveExternalCatalog.scala:99)
	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$createDatabase$1.apply(HiveExternalCatalog.scala:99)
	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:72)
	at org.apache.spark.sql.hive.HiveExternalCatalog.createDatabase(HiveExternalCatalog.scala:98)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.createDatabase(SessionCatalog.scala:147)
	at org.apache.spark.sql.catalyst.catalog.SessionCatalog.<init>(SessionCatalog.scala:89)
	at org.apache.spark.sql.hive.HiveSessionCatalog.<init>(HiveSessionCatalog.scala:51)
	at org.apache.spark.sql.hive.HiveSessionState.catalog$lzycompute(HiveSessionState.scala:49)
	at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:48)
	at org.apache.spark.sql.hive.HiveSessionState.catalog(HiveSessionState.scala:31)
	at org.apache.spark.sql.DataFrameReader.table(DataFrameReader.scala:471)
	at com.ximalaya.xql.engine.exec.hive.HiveDataFrameReader.load(HiveDataFrameReader.scala:25)
	at com.ximalaya.xql.engine.exec.hive.HiveDataFrameReaderSuite$$anonfun$1.apply$mcV$sp(HiveDataFrameReaderSuite.scala:19)
	at com.ximalaya.xql.engine.exec.hive.HiveDataFrameReaderSuite$$anonfun$1.apply(HiveDataFrameReaderSuite.scala:15)
	at com.ximalaya.xql.engine.exec.hive.HiveDataFrameReaderSuite$$anonfun$1.apply(HiveDataFrameReaderSuite.scala:15)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
	at org.scalatest.TestSuite$class.withFixture(TestSuite.scala:196)
	at org.scalatest.FunSuite.withFixture(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:183)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:196)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:196)
	at org.scalatest.FunSuite.runTest(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:229)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:396)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:384)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:379)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:229)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
	at org.scalatest.Suite$class.run(Suite.scala:1147)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:233)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:233)
	at com.ximalaya.xql.engine.exec.hive.HiveDataFrameReaderSuite.org$scalatest$BeforeAndAfterAll$$super$run(HiveDataFrameReaderSuite.scala:12)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:213)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
	at com.ximalaya.xql.engine.exec.hive.HiveDataFrameReaderSuite.run(HiveDataFrameReaderSuite.scala:12)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:45)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1340)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$1.apply(Runner.scala:1334)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:1334)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1011)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1010)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:1500)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1010)
	at org.scalatest.tools.Runner$.run(Runner.scala:850)
	at org.scalatest.tools.Runner.run(Runner.scala)
	at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.runScalaTest2(ScalaTestRunner.java:138)
	at org.jetbrains.plugins.scala.testingSupport.scalaTest.ScalaTestRunner.main(ScalaTestRunner.java:28)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

16/11/14 20:38:16 INFO execution.SparkSqlParser: Parsing command: track_liked
16/11/14 20:38:16 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=track_liked
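From the stack trace, the ERROR is logged on the first catalog access: `DataFrameReader.table` lazily builds `HiveSessionCatalog`, whose `SessionCatalog` constructor calls `createDatabase` for `default`, and the metastore's `RetryingHMSHandler` logs the `AlreadyExistsException` even though the call passes ignore-if-exists. A minimal sketch of the kind of code that hits this path (the table name `track_liked` and Hive client version are from the log; the builder configuration and object name are assumptions, not the reporter's actual test):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical reproduction sketch for SPARK-18050, assuming a Hive
// metastore that already contains the `default` database.
object Spark18050Repro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("spark-18050-repro")
      .enableHiveSupport() // log shows Hive metastore client 1.2.1
      .getOrCreate()

    // First catalog access triggers SessionCatalog initialization,
    // which issues create_database for `default`; the metastore side
    // logs AlreadyExistsException at ERROR level before ignoring it.
    spark.read.table("track_liked").show()

    spark.stop()
  }
}
```

The ERROR is noisy but non-fatal: the log above shows the session continuing normally (parsing `track_liked`, then `get_table`) right after the stack trace.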



was (Author: cjuexuan):
16/11/14 20:38:16 INFO HiveMetaStore.audit: ugi=cjuexuan	ip=unknown-ip-addr	cmd=get_table : db=default tbl=track_liked	
16/11/14 20:38:17 INFO parser.CatalystSqlParser: Parsing command: bigint
16/11/14 20:38:17 INFO parser.CatalystSqlParser: Parsing command: bigint
16/11/14 20:38:20 INFO memory.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 378.9 KB, free 911.9 MB)
16/11/14 20:38:20 INFO memory.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 39.0 KB, free 911.9 MB)
16/11/14 20:38:20 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.100:59459 (size: 39.0 KB, free: 912.3 MB)
16/11/14 20:38:20 INFO spark.SparkContext: Created broadcast 0 from show at HiveDataFrameReaderSuite.scala:19
16/11/14 20:38:20 INFO log.PerfLogger: <PERFLOG method=OrcGetSplits from=org.apache.hadoop.hive.ql.io.orc.ReaderImpl>
16/11/14 20:38:20 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
16/11/14 20:38:20 INFO orc.OrcInputFormat: FooterCacheHitRatio: 0/0
16/11/14 20:38:20 INFO log.PerfLogger: </PERFLOG method=OrcGetSplits start=1479127100671 end=1479127100839 duration=168 from=org.apache.hadoop.hive.ql.io.orc.ReaderImpl>
16/11/14 20:38:20 INFO spark.SparkContext: Starting job: show at HiveDataFrameReaderSuite.scala:19
16/11/14 20:38:20 INFO scheduler.DAGScheduler: Got job 0 (show at HiveDataFrameReaderSuite.scala:19) with 1 output partitions
16/11/14 20:38:20 INFO scheduler.DAGScheduler: Final stage: ResultStage 0 (show at HiveDataFrameReaderSuite.scala:19)
16/11/14 20:38:20 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/11/14 20:38:20 INFO scheduler.DAGScheduler: Missing parents: List()
16/11/14 20:38:20 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[4] at show at HiveDataFrameReaderSuite.scala:19), which has no missing parents
16/11/14 20:38:20 INFO memory.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 7.5 KB, free 911.9 MB)
16/11/14 20:38:20 INFO memory.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 4.3 KB, free 911.9 MB)
16/11/14 20:38:20 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.1.100:59459 (size: 4.3 KB, free: 912.3 MB)
16/11/14 20:38:20 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1012
16/11/14 20:38:20 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[4] at show at HiveDataFrameReaderSuite.scala:19)
16/11/14 20:38:20 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/11/14 20:38:21 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0, ANY, 5421 bytes)
16/11/14 20:38:21 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
16/11/14 20:38:21 INFO rdd.HadoopRDD: Input split: hdfs://tracker.test.lan:8020/xima-data/track/1d/liked/c/2016/02/28/part-00000:0+2144
16/11/14 20:38:21 INFO Configuration.deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/11/14 20:38:21 INFO Configuration.deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/11/14 20:38:21 INFO Configuration.deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/11/14 20:38:21 INFO Configuration.deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/11/14 20:38:21 INFO Configuration.deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/11/14 20:38:21 INFO orc.OrcRawRecordMerger: min key = null, max key = null
16/11/14 20:38:21 INFO orc.ReaderImpl: Reading ORC rows from hdfs://tracker.test.lan:8020/xima-data/track/1d/liked/c/2016/02/28/part-00000 with {include: [true, true, true], offset: 0, length: 9223372036854775807}
16/11/14 20:38:21 INFO codegen.CodeGenerator: Code generated in 289.404912 ms
16/11/14 20:38:21 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 1291 bytes result sent to driver
16/11/14 20:38:21 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 962 ms on localhost (1/1)
16/11/14 20:38:21 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/11/14 20:38:21 INFO scheduler.DAGScheduler: ResultStage 0 (show at HiveDataFrameReaderSuite.scala:19) finished in 0.980 s
16/11/14 20:38:21 INFO scheduler.DAGScheduler: Job 0 finished: show at HiveDataFrameReaderSuite.scala:19, took 1.115636 s
16/11/14 20:38:22 INFO codegen.CodeGenerator: Code generated in 17.97001 ms
+-------+-----+
|trackid|count|
+-------+-----+
|      0|    0|
| 100812|    0|
| 115500|    0|
| 117787|    0|
| 118986|    0|
| 120393|    0|
| 126868|    0|
| 143732|    0|
| 145641|    0|
| 152186|    0|
| 158172|    0|
| 162981|    0|
| 164050|    0|
| 164975|    0|
| 167101|    0|
| 167113|    0|
| 167118|    0|
| 167119|    0|
| 184950|    0|
| 187350|    0|
+-------+-----+
only showing top 20 rows

16/11/14 20:38:22 INFO server.ServerConnector: Stopped ServerConnector@a137d7a{HTTP/1.1}{0.0.0.0:4040}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7495699f{/stages/stage/kill,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@289778cd{/api,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@30501e60{/,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@11e33bac{/static,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@280e8a1a{/executors/threadDump/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@27a7ef08{/executors/threadDump,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@61f80d55{/executors/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@b965857{/executors,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@71dfcf21{/environment/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4ce14f05{/environment,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@6401d0a0{/storage/rdd/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7ea08277{/storage/rdd,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@29528a22{/storage/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@31133b6e{/storage,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@784c5ef5{/stages/pool/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@4ba380c7{/stages/pool,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@73ad7e90{/stages/stage/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@65f58c6e{/stages/stage,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@c6634d{/stages/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@3701e6e4{/stages,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@5fc930f0{/jobs/job/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@301aa982{/jobs/job,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@7ae9a33a{/jobs/json,null,UNAVAILABLE}
16/11/14 20:38:22 INFO handler.ContextHandler: Stopped o.s.j.s.ServletContextHandler@f4cfd90{/jobs,null,UNAVAILABLE}
16/11/14 20:38:22 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.1.100:4040
16/11/14 20:38:22 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/11/14 20:38:22 INFO memory.MemoryStore: MemoryStore cleared
16/11/14 20:38:22 INFO storage.BlockManager: BlockManager stopped
16/11/14 20:38:22 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/11/14 20:38:22 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/11/14 20:38:22 INFO spark.SparkContext: Successfully stopped SparkContext
16/11/14 20:38:22 INFO util.ShutdownHookManager: Shutdown hook called
16/11/14 20:38:22 INFO util.ShutdownHookManager: Deleting directory /private/var/folders/0h/bdlvyj3j21d3t65dt8thq7500000gp/T/spark-7ebdd541-e25a-4cf8-ae40-0cbd3ee68b20

Process finished with exit code 0

> spark 2.0.1 enable hive throw AlreadyExistsException(message:Database default already exists)
> ---------------------------------------------------------------------------------------------
>
>                 Key: SPARK-18050
>                 URL: https://issues.apache.org/jira/browse/SPARK-18050
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>          Environment: JDK 1.8, macOS, Spark 2.0.1
>            Reporter: todd.chen
>
> In Spark 2.0.1, I enabled Hive support, and when the SQLContext is initialized an AlreadyExistsException(message:Database default already exists) is thrown — the same issue as
> https://www.mail-archive.com/dev@spark.apache.org/msg15306.html . My code is
> {code}
>   import org.apache.hadoop.conf.Configuration
>   import org.apache.hadoop.fs.FileSystem
>   import org.apache.spark.SparkConf
>   import org.apache.spark.sql.SparkSession
>
>   private val master = "local[*]"
>   private val appName = "xqlServerSpark"
>   // FileSystem.get requires a Hadoop Configuration
>   val fileSystem = FileSystem.get(new Configuration())
>   val sparkConf = new SparkConf()
>     .setMaster(master)
>     .setAppName(appName)
>     .set("spark.sql.warehouse.dir", s"${fileSystem.getUri.toASCIIString}/user/hive/warehouse")
>   val hiveContext = SparkSession.builder().config(sparkConf).enableHiveSupport().getOrCreate().sqlContext
>     print(sparkConf.get("spark.sql.warehouse.dir"))
>     hiveContext.sql("show tables").show()
> {code}
> The result is correct, but the code above also throws the exception.
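> As a possible workaround (a minimal sketch only, not a verified fix for this exception), the same configuration can be passed through the SparkSession builder directly instead of a pre-built SparkConf, so that the warehouse directory is set before Hive support is initialized; the warehouse path below is a placeholder:
> {code}
> import org.apache.spark.sql.SparkSession
>
> // Sketch: configure everything through the builder so spark.sql.warehouse.dir
> // is in place before enableHiveSupport() triggers metastore initialization.
> // The HDFS path is a placeholder — substitute your own warehouse location.
> val spark = SparkSession.builder()
>   .master("local[*]")
>   .appName("xqlServerSpark")
>   .config("spark.sql.warehouse.dir", "hdfs://tracker.test.lan:8020/user/hive/warehouse")
>   .enableHiveSupport()
>   .getOrCreate()
>
> spark.sql("show tables").show()
> {code}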



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
