Posted to user@spark.apache.org by Mars Max <ma...@baidu.com> on 2014/12/30 11:19:25 UTC

Spark SQL insert overwrite table failed.

While doing a JOIN of three tables in Spark 1.1.1, I consistently get the
following error. The same operation on the same data never raised this
exception in Spark 1.1.0. Has anyone else run into this problem?
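For reference, the failing statement has roughly the following shape: an INSERT OVERWRITE into a Hive table fed by a three-way JOIN. The table and column names below are placeholders, not the actual query:

```sql
-- Hypothetical reconstruction of the kind of statement that triggers
-- the error: INSERT OVERWRITE into a partitioned Hive table from a
-- three-table JOIN. All identifiers here are illustrative only.
INSERT OVERWRITE TABLE result_table PARTITION (dt = '2014-12-30')
SELECT a.id, b.name, c.score
FROM table_a a
JOIN table_b b ON a.id = b.id
JOIN table_c c ON a.id = c.id;
```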

14/12/30 17:49:33 ERROR CliDriver: org.apache.hadoop.hive.ql.metadata.HiveException: checkPaths:
hdfs://xxxxxx.com:20632/tmp/hive-work/hive_2014-12-30_17-46-25_327_2097835982529092412-1/-ext-10000
has nested directory
hdfs://xxxxx/tmp/hive-work/hive_2014-12-30_17-46-25_327_2097835982529092412-1/-ext-10000/_temporary
        at org.apache.hadoop.hive.ql.metadata.Hive.checkPaths(Hive.java:2081)
        at org.apache.hadoop.hive.ql.metadata.Hive.copyFiles(Hive.java:2222)
        at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1224)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.result$lzycompute(InsertIntoHiveTable.scala:238)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.result(InsertIntoHiveTable.scala:173)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.execute(InsertIntoHiveTable.scala:164)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd$lzycompute(HiveContext.scala:382)
        at org.apache.spark.sql.hive.HiveContext$QueryExecution.toRdd(HiveContext.scala:382)
        at org.apache.spark.sql.SchemaRDDLike$class.$init$(SchemaRDDLike.scala:58)
        at org.apache.spark.sql.SchemaRDD.<init>(SchemaRDD.scala:103)
        at org.apache.spark.sql.hive.HiveContext.sql(HiveContext.scala:98)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:58)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:291)
        at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:226)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:329)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-SQL-insert-overwrite-table-failed-tp20903.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
For additional commands, e-mail: user-help@spark.apache.org