Posted to issues@spark.apache.org by "miaowang (Jira)" <ji...@apache.org> on 2022/08/01 06:20:00 UTC

[jira] [Updated] (SPARK-39578) The driver cannot parse the SQL statement and the job is not executed

     [ https://issues.apache.org/jira/browse/SPARK-39578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

miaowang updated SPARK-39578:
-----------------------------
    Attachment: image-2022-08-01-14-19-38-669.png

> The driver cannot parse the SQL statement and the job is not executed
> ---------------------------------------------------------------------
>
>                 Key: SPARK-39578
>                 URL: https://issues.apache.org/jira/browse/SPARK-39578
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.4.8
>            Reporter: miaowang
>            Priority: Minor
>         Attachments: image-2022-08-01-14-19-38-669.png
>
>
> We use Spark's API to submit SQL execution jobs. Occasionally, a SQL statement is never executed and the job does not exit. Please advise on how to solve this problem.
> How the jobs are executed:
> !image-2022-06-24-18-41-50-864.png!
> The SQL script is read and its statements are submitted as jobs in sequence.
> Log output from the abnormal run:
>  
> {code:java}
> //22/06/23 20:55:36 WARN HiveConf: HiveConf of name hive.strict.checks.type.safety does not exist
> 22/06/23 20:55:36 WARN HiveConf: HiveConf of name hive.strict.checks.cartesian.product does not exist
> 22/06/23 20:55:37 INFO metastore: Trying to connect to metastore with URI thrift://bdp-datalake-hive-metastore-01-10-8-49-114:9083
> 22/06/23 20:55:37 INFO metastore: Connected to metastore.
> 22/06/23 20:55:38 INFO SessionState: Created local directory: /hdfsdata/subdata10/yarn/nmcgroup/usercache/hive/appcache/application_1643023142753_3753253/container_e16_1643023142753_3753253_01_000001/tmp/nobody
> 22/06/23 20:55:38 INFO SessionState: Created local directory: /hdfsdata/subdata10/yarn/nmcgroup/usercache/hive/appcache/application_1643023142753_3753253/container_e16_1643023142753_3753253_01_000001/tmp/944b1102-4564-4879-8305-4df390b26669_resources
> 22/06/23 20:55:38 INFO SessionState: Created HDFS directory: /tmp/hive/hive/944b1102-4564-4879-8305-4df390b26669
> 22/06/23 20:55:38 INFO SessionState: Created local directory: /hdfsdata/subdata10/yarn/nmcgroup/usercache/hive/appcache/application_1643023142753_3753253/container_e16_1643023142753_3753253_01_000001/tmp/nobody/944b1102-4564-4879-8305-4df390b26669
> 22/06/23 20:55:38 INFO SessionState: Created HDFS directory: /tmp/hive/hive/944b1102-4564-4879-8305-4df390b26669/_tmp_space.db
> 22/06/23 20:55:38 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is /user/hive/warehouse
> 22/06/23 20:55:47 INFO ContextCleaner: Cleaned accumulator 7
> 22/06/23 20:55:47 INFO ContextCleaner: Cleaned accumulator 12
> 22/06/23 20:55:47 INFO ContextCleaner: Cleaned accumulator 14
> 22/06/23 20:55:47 INFO BlockManagerInfo: Removed broadcast_0_piece0 on bdp-hdfs-36-10-8-33-65:39014 in memory (size: 31.1 KB, free: 3.0 GB)
> 22/06/23 20:55:48 INFO BlockManagerInfo: Removed broadcast_0_piece0 on bdp-hdfs-16-10-8-33-45:33394 in memory (size: 31.1 KB, free: 8.4 GB)
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 10
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 2
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 18
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 24
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 19
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 6
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 5
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 17
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 25
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 13
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 29
> 22/06/23 20:55:48 INFO BlockManagerInfo: Removed broadcast_1_piece0 on bdp-hdfs-36-10-8-33-65:39014 in memory (size: 4.0 KB, free: 3.0 GB)
> 22/06/23 20:55:48 INFO BlockManagerInfo: Removed broadcast_1_piece0 on bdp-hdfs-16-10-8-33-45:33394 in memory (size: 4.0 KB, free: 8.4 GB)
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 28
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 4
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 16
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 21
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 20
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 9
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 23
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 26
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 0
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 3
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 11
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 15
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 27
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 8
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 1
> 22/06/23 20:55:48 INFO ContextCleaner: Cleaned accumulator 22
> 22/06/23 20:55:49 INFO InMemoryFileIndex: It took 66 ms to list leaf files for 1 paths.
> 22/06/23 20:55:49 INFO InMemoryFileIndex: It took 8 ms to list leaf files for 1 paths.
> ====Executing====>drop table if exists dc.cowell_omni_channel_jingli_day03_operator_info_org_20220622
> ====Finished====>drop table if exists dc.cowell_omni_channel_jingli_day03_operator_info_org_20220622
> ====Executing====>  create table dc.cowell_omni_channel_jingli_day03_operator_info_org_20220622 as  select  zone_new  ,data_from  ,ent_name  ,business_id  ,org_name  ,org_no  ,store_id  ,chnl  ,coalesce(day_paid,0) as day_paid           {code}
>  
> Driver log output
>  
>  
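The submission pattern the report describes (read a SQL script, then execute its statements one by one, with a marker logged before and after each statement) can be sketched roughly as follows. This is an editorial illustration only, not the reporter's actual code; the function names and the naive statement splitting are assumptions.

```python
# Minimal sketch of the reported pattern: split a SQL script into
# statements and run each one sequentially through a SparkSession.
# The splitting below is naive (it does not handle ';' inside string
# literals) and is for illustration only.

def split_statements(script: str) -> list:
    """Split a SQL script on ';' into non-empty, trimmed statements."""
    return [s.strip() for s in script.split(";") if s.strip()]

def run_script(spark, script: str) -> None:
    """Execute each statement in order; spark.sql() blocks until done."""
    for stmt in split_statements(script):
        print("====Executing====>" + stmt)
        spark.sql(stmt)  # the hang reported here would occur inside this call
        print("====Finished====>" + stmt)
```

Because `spark.sql()` is called synchronously in a loop, one statement that the driver never finishes parsing or executing stalls the whole script, which matches the symptom described: the "Executing" marker for the `create table` statement is logged but no "Finished" marker ever follows.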



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org