Posted to user@spark.apache.org by lk_spark <lk...@163.com> on 2022/03/21 06:00:01 UTC
NoSuchMethodError: org.apache.spark.sql.execution.command.CreateViewCommand.copy
Hi all,
I got a strange error:
bin/spark-shell --deploy-mode client
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
22/03/21 13:51:39 WARN util.Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
22/03/21 13:51:46 WARN util.Utils: spark.executor.instances less than spark.dynamicAllocation.minExecutors is invalid, ignoring its setting, please update your configs.
22/03/21 13:51:46 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to request executors before the AM has registered!
Spark context Web UI available at http://client-10-0-161-29:4040
Spark context available as 'sc' (master = yarn, app id = application_1644825367082_16937).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.2.1
      /_/
Using Scala version 2.12.15 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_281)
Type in expressions to have them evaluated.
Type :help for more information.
scala> val parqfile = spark.read.parquet("/tmp/datax/tmp/python/ods_io_install/ods_io_install/")
parqfile: org.apache.spark.sql.DataFrame = [spid: string, region_rule: string ... 7 more fields]
scala> parqfile.printSchema
root
|-- spid: string (nullable = true)
|-- region_rule: string (nullable = true)
|-- app_version: string (nullable = true)
|-- device_id: string (nullable = true)
|-- is_install: string (nullable = true)
|-- last_install_time: string (nullable = true)
|-- last_uninstall_time: string (nullable = true)
|-- last_use_time: string (nullable = true)
|-- pdate: integer (nullable = true)
scala> parqfile.show(2)
+-----+-------------------------------------+-----------+--------------------+----------+-----------------+-------------------+--------------------+--------+
| spid| region_rule|app_version| device_id|is_install|last_install_time|last_uninstall_time| last_use_time| pdate|
+-----+-------------------------------------+-----------+--------------------+----------+-----------------+-------------------+--------------------+--------+
|13025|北京市房屋建筑与装饰工程预算定额计...| 1.0.29.2|ea68f0cc-7038-43a...| 1| null| null|2021-06-05 11:49:...|20220320|
|13025| 山东省建筑工程消耗量定额计算规则(...| 1.0.31.0|c16e1260-5700-4a4...| 1| null| null|2022-01-08 17:55:...|20220320|
+-----+-------------------------------------+-----------+--------------------+----------+-----------------+-------------------+--------------------+--------+
only showing top 2 rows
scala> parqfile.createOrReplaceTempView("ods_io_install_temp")
22/03/21 13:54:38 WARN analysis.SimpleFunctionRegistry: The function mask replaced a previously registered function.
22/03/21 13:54:38 WARN analysis.SimpleFunctionRegistry: The function mask_hash replaced a previously registered function.
22/03/21 13:54:38 WARN analysis.SimpleFunctionRegistry: The function mask_first_n replaced a previously registered function.
22/03/21 13:54:38 WARN analysis.SimpleFunctionRegistry: The function mask_last_n replaced a previously registered function.
22/03/21 13:54:38 WARN analysis.SimpleFunctionRegistry: The function mask_show_last_n replaced a previously registered function.
22/03/21 13:54:38 WARN analysis.SimpleFunctionRegistry: The function mask_show_first_n replaced a previously registered function.
java.lang.NoSuchMethodError: org.apache.spark.sql.execution.command.CreateViewCommand.copy(Lorg/apache/spark/sql/catalyst/TableIdentifier;Lscala/collection/Seq;Lscala/Option;Lscala/collection/immutable/Map;Lscala/Option;Lorg/apache/spark/sql/catalyst/plans/logical/LogicalPlan;ZZLorg/apache/spark/sql/catalyst/analysis/ViewType;Z)Lorg/apache/spark/sql/execution/command/CreateViewCommand;
at org.apache.spark.sql.catalyst.optimizer.SubmarineRowFilterExtension.apply(SubmarineRowFilterExtension.scala:125)
at org.apache.spark.sql.catalyst.optimizer.SubmarineRowFilterExtension.apply(SubmarineRowFilterExtension.scala:41)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:211)
at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:208)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:200)
at scala.collection.immutable.List.foreach(List.scala:431)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:200)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:179)
at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:179)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:138)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:196)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:196)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:134)
at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:130)
at org.apache.spark.sql.execution.QueryExecution.assertOptimized(QueryExecution.scala:148)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executedPlan$1(QueryExecution.scala:166)
at org.apache.spark.sql.execution.QueryExecution.withCteMap(QueryExecution.scala:73)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:163)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:163)
at org.apache.spark.sql.execution.QueryExecution.simpleString(QueryExecution.scala:214)
at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:259)
at org.apache.spark.sql.execution.QueryExecution.explainString(QueryExecution.scala:228)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:98)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:110)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:106)
at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:481)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:82)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:481)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:457)
at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:106)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:93)
at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:91)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:219)
at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:91)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:88)
at org.apache.spark.sql.Dataset.withPlan(Dataset.scala:3734)
at org.apache.spark.sql.Dataset.createOrReplaceTempView(Dataset.scala:3306)
... 47 elided
I don't know what the reason is. Please help.
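[For readers hitting the same error: the top frame of the stack trace is org.apache.spark.sql.catalyst.optimizer.SubmarineRowFilterExtension, which appears to come from Apache Submarine's spark-security plugin rather than from Spark itself. A NoSuchMethodError at this point usually means that jar was compiled against a Spark release in which CreateViewCommand.copy had a different signature than it does in 3.2.1. A minimal sketch, runnable in the same spark-shell, prints which jar the class was loaded from so its build can be compared with the installed Spark; the class name is copied from the stack trace, and Class.forName throws ClassNotFoundException if the jar is not on the driver classpath:

  // Resolve the extension class named in the stack trace and print the
  // jar it came from, to compare its build against Spark 3.2.1.
  val cls = Class.forName(
    "org.apache.spark.sql.catalyst.optimizer.SubmarineRowFilterExtension")
  println(cls.getProtectionDomain.getCodeSource.getLocation)
]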
Re: NoSuchMethodError: org.apache.spark.sql.execution.command.CreateViewCommand.copy
Posted by lk_spark <lk...@163.com>.
Sorry, it was a problem with my environment.
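[For reference, a stale or mismatched extension jar on the classpath would explain exactly this: optimizer rules such as the Submarine row filter are normally injected through the spark.sql.extensions property, so checking that setting against the running Spark version is a quick sanity test. A sketch, assuming the plugin was registered that way:

  // Print the running Spark version and any registered SQL extensions;
  // spark.sql.extensions is the standard hook for injecting custom
  // optimizer rules into a SparkSession.
  println(spark.version)
  println(spark.conf.getOption("spark.sql.extensions"))
]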