Posted to issues@spark.apache.org by "Ajith S (JIRA)" <ji...@apache.org> on 2019/03/09 08:41:00 UTC

[jira] [Commented] (SPARK-27114) SQL Tab shows duplicate executions for some commands

    [ https://issues.apache.org/jira/browse/SPARK-27114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16788576#comment-16788576 ] 

Ajith S commented on SPARK-27114:
---------------------------------

I will be working on this.

> SQL Tab shows duplicate executions for some commands
> ----------------------------------------------------
>
>                 Key: SPARK-27114
>                 URL: https://issues.apache.org/jira/browse/SPARK-27114
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.0
>            Reporter: Ajith S
>            Priority: Major
>         Attachments: Screenshot from 2019-03-09 14-04-07.png
>
>
> Run a simple SQL command:
> {{create table abc ( a int );}}
> Open the SQL tab in the Spark UI: duplicate entries are shown for that one execution. The behaviour was reproduced with both the Thrift server and the spark-sql CLI.
> *See the attached screenshot.*
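> One way to confirm the duplication without opening the UI is to count {{SparkListenerSQLExecutionStart}} events. The listener below is only an illustrative sketch (the class name {{SqlExecutionStartCounter}} is made up, not part of Spark); it would have to be compiled onto the driver classpath and registered through {{spark.extraListeners}}.
> {code:scala}
> import org.apache.spark.scheduler.{SparkListener, SparkListenerEvent}
> import org.apache.spark.sql.execution.ui.SparkListenerSQLExecutionStart
>
> // Hypothetical helper, not part of Spark: logs every SQL execution start so
> // duplicate executions for a single statement become visible in the driver log.
> class SqlExecutionStartCounter extends SparkListener {
>   override def onOtherEvent(event: SparkListenerEvent): Unit = event match {
>     case e: SparkListenerSQLExecutionStart =>
>       // A single statement should normally produce exactly one start event;
>       // two events with different executionIds for the same statement is the bug.
>       println(s"SQL execution start: id=${e.executionId}, description=${e.description}")
>     case _ => // ignore all other events
>   }
> }
> {code}
> With {{--conf spark.extraListeners=SqlExecutionStartCounter}} set (assuming the class is on the classpath), running the statement above should log two start events with different ids if the issue is present.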
> After analysing the spark-sql case, the call stacks behind the two execution ids appear to be:
> {code:java}
> $anonfun$withNewExecutionId$1:78, SQLExecution$ (org.apache.spark.sql.execution)
> apply:-1, 2057192703 (org.apache.spark.sql.execution.SQLExecution$$$Lambda$1036)
> withSQLConfPropagated:147, SQLExecution$ (org.apache.spark.sql.execution)
> withNewExecutionId:74, SQLExecution$ (org.apache.spark.sql.execution)
> withAction:3346, Dataset (org.apache.spark.sql)
> <init>:203, Dataset (org.apache.spark.sql)
> ofRows:88, Dataset$ (org.apache.spark.sql)
> sql:656, SparkSession (org.apache.spark.sql)
> sql:685, SQLContext (org.apache.spark.sql)
> run:63, SparkSQLDriver (org.apache.spark.sql.hive.thriftserver)
> processCmd:372, SparkSQLCLIDriver (org.apache.spark.sql.hive.thriftserver)
> processLine:376, CliDriver (org.apache.hadoop.hive.cli)
> main:275, SparkSQLCLIDriver$ (org.apache.spark.sql.hive.thriftserver)
> main:-1, SparkSQLCLIDriver (org.apache.spark.sql.hive.thriftserver)
> invoke0:-1, NativeMethodAccessorImpl (sun.reflect)
> invoke:62, NativeMethodAccessorImpl (sun.reflect)
> invoke:43, DelegatingMethodAccessorImpl (sun.reflect)
> invoke:498, Method (java.lang.reflect)
> start:52, JavaMainApplication (org.apache.spark.deploy)
> org$apache$spark$deploy$SparkSubmit$$runMain:855, SparkSubmit (org.apache.spark.deploy)
> doRunMain$1:162, SparkSubmit (org.apache.spark.deploy)
> submit:185, SparkSubmit (org.apache.spark.deploy)
> doSubmit:87, SparkSubmit (org.apache.spark.deploy)
> doSubmit:934, SparkSubmit$$anon$2 (org.apache.spark.deploy)
> main:943, SparkSubmit$ (org.apache.spark.deploy)
> main:-1, SparkSubmit (org.apache.spark.deploy){code}
> {code:java}
> $anonfun$withNewExecutionId$1:78, SQLExecution$ (org.apache.spark.sql.execution)
> apply:-1, 2057192703 (org.apache.spark.sql.execution.SQLExecution$$$Lambda$1036)
> withSQLConfPropagated:147, SQLExecution$ (org.apache.spark.sql.execution)
> withNewExecutionId:74, SQLExecution$ (org.apache.spark.sql.execution)
> run:65, SparkSQLDriver (org.apache.spark.sql.hive.thriftserver)
> processCmd:372, SparkSQLCLIDriver (org.apache.spark.sql.hive.thriftserver)
> processLine:376, CliDriver (org.apache.hadoop.hive.cli)
> main:275, SparkSQLCLIDriver$ (org.apache.spark.sql.hive.thriftserver)
> main:-1, SparkSQLCLIDriver (org.apache.spark.sql.hive.thriftserver)
> invoke0:-1, NativeMethodAccessorImpl (sun.reflect)
> invoke:62, NativeMethodAccessorImpl (sun.reflect)
> invoke:43, DelegatingMethodAccessorImpl (sun.reflect)
> invoke:498, Method (java.lang.reflect)
> start:52, JavaMainApplication (org.apache.spark.deploy)
> org$apache$spark$deploy$SparkSubmit$$runMain:855, SparkSubmit (org.apache.spark.deploy)
> doRunMain$1:162, SparkSubmit (org.apache.spark.deploy)
> submit:185, SparkSubmit (org.apache.spark.deploy)
> doSubmit:87, SparkSubmit (org.apache.spark.deploy)
> doSubmit:934, SparkSubmit$$anon$2 (org.apache.spark.deploy)
> main:943, SparkSubmit$ (org.apache.spark.deploy)
> main:-1, SparkSubmit (org.apache.spark.deploy){code}
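> Comparing the two stacks, both enter {{SQLExecution.withNewExecutionId}}: the first via {{Dataset.withAction}} while {{SparkSession.sql}} builds the {{Dataset}} (called from {{SparkSQLDriver.run}} at line 63, where the command is executed eagerly), the second directly from {{SparkSQLDriver.run}} at line 65 around result collection. So the same statement appears to be given two execution ids. The sketch below only illustrates that pattern (the names {{DuplicateExecutionSketch}} and {{newExecutionScope}} are made up, not Spark source): minting a fresh execution id in both places yields two rows in the SQL tab for one statement.
> {code:scala}
> import java.util.concurrent.atomic.AtomicLong
> import scala.collection.mutable.ArrayBuffer
>
> object DuplicateExecutionSketch {
>   private val nextId = new AtomicLong(0)
>   private val sqlTabRows = ArrayBuffer.empty[Long]
>
>   // Stand-in for SQLExecution.withNewExecutionId: it unconditionally mints a
>   // new id and records it, the way a SparkListenerSQLExecutionStart event
>   // becomes one row in the SQL tab.
>   def newExecutionScope[T](body: => T): T = {
>     sqlTabRows += nextId.incrementAndGet()
>     body
>   }
>
>   def main(args: Array[String]): Unit = {
>     // First id: the command runs eagerly while the Dataset is built
>     // (run:63 -> SparkSession.sql -> Dataset -> withAction -> withNewExecutionId).
>     val built = newExecutionScope { "create table abc ( a int ) executed" }
>
>     // Second id: run:65 wraps result collection for the same statement in its
>     // own withNewExecutionId.
>     val collected = newExecutionScope { s"collected results of: $built" }
>
>     // One statement, two ids => the duplicate entries seen in the SQL tab.
>     println(s"SQL tab rows: ${sqlTabRows.mkString(", ")} ($collected)")
>   }
> }
> {code}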


