Posted to issues@spark.apache.org by "Jaime de Roque Martínez (JIRA)" <ji...@apache.org> on 2018/11/23 11:30:00 UTC

[jira] [Updated] (SPARK-26157) Asynchronous execution of stored procedure

     [ https://issues.apache.org/jira/browse/SPARK-26157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jaime de Roque Martínez updated SPARK-26157:
--------------------------------------------
    Description: 
I am executing a jar file with spark-submit.

This jar file is a Scala program that mixes Spark-related and non-Spark-related operations.

The issue arises when I execute a stored procedure from Scala using JDBC. This stored procedure lives in a Microsoft SQL Server database and, basically, performs some operations and populates a table with about 500 rows, one by one.

Then, the next step in the program reads that table and performs some additional calculations. This step always fetches fewer rows than the stored procedure created, because the two steps are not properly synchronized: the read starts executing without waiting for the stored procedure to finish.
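For reference, a minimal sketch of what the two steps look like in my program (the connection details, procedure name and table name are illustrative placeholders, and {{user}}, {{password}} and {{spark}} are assumed to be in scope):

{code:scala}
import java.sql.DriverManager

// Step 1: call the stored procedure over JDBC (illustrative names).
val url  = "jdbc:sqlserver://myhost:1433;databaseName=MyDb"
val conn = DriverManager.getConnection(url, user, password)
val stmt = conn.prepareCall("{call dbo.PopulateTable}")
stmt.execute()
stmt.close()
conn.close()

// Step 2: read the table the procedure populated and keep calculating.
val df = spark.read.format("jdbc")
  .option("url", url)
  .option("dbtable", "dbo.PopulatedTable")
  .load()
{code}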

I have tried:
 * Inserting a Thread.sleep(10000) between the two instructions: {color:#14892c}it seems to work{color}.

 * Executing the program with just one executor: {color:#d04437}it doesn't work{color}.

I would like to know why this is happening and how I can solve it without the sleep, because that is not an admissible solution.
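One possibility I have considered (an assumption on my side, not a confirmed diagnosis): with some JDBC drivers, including Microsoft's SQL Server driver, {{execute()}} can return while the procedure still has pending result sets or update counts (one per single-row insert here), and the work is only fully driven to completion once the client drains them all. A hedged sketch of a helper that drains everything before the program moves on:

{code:scala}
import java.sql.Connection

// Run a stored procedure and consume every result set and update count,
// so this call only returns once the procedure has fully completed.
def executeAndDrain(conn: Connection, call: String): Unit = {
  val stmt = conn.prepareCall(call)
  try {
    var isResultSet = stmt.execute()
    var done = false
    while (!done) {
      if (isResultSet) {
        stmt.getResultSet.close()          // discard any rows the SP returns
      } else if (stmt.getUpdateCount == -1) {
        done = true                        // no more results of either kind
      }
      if (!done) isResultSet = stmt.getMoreResults()
    }
  } finally {
    stmt.close()
  }
}
{code}

Only after {{executeAndDrain}} returns (and, if autocommit is disabled, after {{conn.commit()}}) would the table read start, which would remove the need for the sleep. (Alternatively, adding SET NOCOUNT ON at the top of the procedure suppresses the per-row update counts in the first place.)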

Thank you very much!!



> Asynchronous execution of stored procedure
> ------------------------------------------
>
>                 Key: SPARK-26157
>                 URL: https://issues.apache.org/jira/browse/SPARK-26157
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 2.3.0
>            Reporter: Jaime de Roque Martínez
>            Priority: Major


