Posted to user@spark.apache.org by Xiaoyu Wang <wa...@jd.com> on 2015/07/03 05:37:13 UTC

Spark Thriftserver: INSERT SQL fails with a move error on Hadoop federation

Hi all,
My SQL statement is:
insert overwrite table test1 select * from test;
At the end of the job it fails with a file-move error.
As far as I can tell, hive-0.13.1's support for viewfs is poor and only improved in hive-1.1.0+.
How can I upgrade the Hive version that Spark uses? Or how could this bug be fixed in
"org.spark-project.hive"?

My versions:
Spark: 1.4.0
Hadoop: 2.6.0 with HDFS HA and federation
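
I noticed that Spark 1.4.0 added the spark.sql.hive.metastore.version and
spark.sql.hive.metastore.jars options, so as a first attempt I was thinking of
something like the lines below in spark-defaults.conf (the jar path is only a
placeholder for wherever a hive-1.1.0 install would live). I am not sure whether
1.4.0 accepts 1.1.0 as a metastore version, or whether these options affect the
execution-side Hive code that performs the move at all, so this is just a sketch
of the direction, not a tested fix:

    spark.sql.hive.metastore.version   1.1.0
    spark.sql.hive.metastore.jars      /opt/hive-1.1.0/lib/*:/etc/hadoop/conf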

Error log:
15/07/03 11:12:21 ERROR SparkExecuteStatementOperation: Error executing query:
org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source viewfs://nsX/tmp/hive-admin/hive_2015-07-03_11-12-19_247_3487063948514865926-1/-ext-10000 to destination viewfs://nsX/user/hive/warehouse/test1
        at org.apache.hadoop.hive.ql.metadata.Hive.renameFile(Hive.java:2250)
        at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:2378)
        at org.apache.hadoop.hive.ql.metadata.Table.replaceFiles(Table.java:673)
        at org.apache.hadoop.hive.ql.metadata.Hive.loadTable(Hive.java:1467)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadTable$1.apply$mcV$sp(ClientWrapper.scala:401)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadTable$1.apply(ClientWrapper.scala:401)
        at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$loadTable$1.apply(ClientWrapper.scala:401)
        at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:139)
        at org.apache.spark.sql.hive.client.ClientWrapper.loadTable(ClientWrapper.scala:400)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult$lzycompute(InsertIntoHiveTable.scala:241)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.sideEffectResult(InsertIntoHiveTable.scala:124)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.doExecute(InsertIntoHiveTable.scala:261)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
        at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:939)
        at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:939)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
        at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:744)
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.run(Shim13.scala:178)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:231)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:218)
        at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:233)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:344)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:55)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Renames across Mount points not supported
        at org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:444)
        at org.apache.hadoop.hive.ql.metadata.Hive.renameFile(Hive.java:2246)
        ... 35 more
15/07/03 11:12:21 WARN ThriftCLIService: Error executing statement:
org.apache.hive.service.cli.HiveSQLException: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to move source viewfs://nsX/tmp/hive-admin/hive_2015-07-03_11-12-19_247_3487063948514865926-1/-ext-10000 to destination viewfs://nsX/user/hive/warehouse/test1
        at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.run(Shim13.scala:206)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:231)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:218)
        at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:233)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:344)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:55)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
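
The root cause line "Renames across Mount points not supported" comes from
ViewFileSystem itself, so I suspect the same failure can be reproduced without
Spark or Hive involved at all. Below is a minimal sketch of the check I have in
mind; the probe paths are hypothetical files under the same two mounts
(/tmp and /user) that appear in the error, and nothing here is tested:

import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ViewFsRenameProbe {
  def main(args: Array[String]): Unit = {
    // Picks up core-site.xml, which defines the viewfs://nsX mount table
    val conf = new Configuration()
    val fs = FileSystem.get(new URI("viewfs://nsX/"), conf)

    // Hypothetical probe paths: one under the /tmp mount, one under /user
    val src = new Path("viewfs://nsX/tmp/hive-admin/rename_probe")
    val dst = new Path("viewfs://nsX/user/hive/warehouse/rename_probe")

    fs.create(src).close()
    try {
      println("rename returned " + fs.rename(src, dst))
    } catch {
      case e: java.io.IOException =>
        // Expected here: "Renames across Mount points not supported"
        // if /tmp and /user map to different namespaces in the mount table
        println("rename failed: " + e.getMessage)
    } finally {
      fs.delete(src, false)
      fs.delete(dst, false)
    }
  }
}

If the plain rename fails like this, then as far as I understand no change on the
Spark side alone can help: either the staging directory has to live under the
same mount point as the table, or Hive needs the copy fallback it got in later
versions.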


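As a workaround until the bundled Hive can be upgraded, would it be reasonable to
move the scratch directory onto the same ViewFs mount point as the warehouse, so
the final rename never crosses mount points? The default scratch dir
(/tmp/hive-<user>) matches the source path in the error, so something like the
following in hive-site.xml is what I have in mind; the exact path is only an
example and I have not verified that the Thrift server's embedded Hive honors it
for the -ext-10000 staging directory:

<property>
  <name>hive.exec.scratchdir</name>
  <value>viewfs://nsX/user/hive/tmp</value>
  <description>Keep staging output under the /user mount so the move into
  /user/hive/warehouse stays within one mount point (workaround sketch).</description>
</property>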