Posted to issues@spark.apache.org by "Idan Zalzberg (JIRA)" <ji...@apache.org> on 2015/05/25 17:53:17 UTC

[jira] [Commented] (SPARK-5363) Spark 1.2 freeze without error notification

    [ https://issues.apache.org/jira/browse/SPARK-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14558349#comment-14558349 ] 

Idan Zalzberg commented on SPARK-5363:
--------------------------------------

I can't prove it is related to the same issue, but we have been experiencing hangs with BroadcastHashJoin as well, even though we use the Scala API.

I was unable to create a simple repro, but in a complicated SQL statement that joins multiple tables with BroadcastHashJoin, calling "collect" on the resulting RDD causes the Spark context to hang.

> Spark 1.2 freeze without error notification
> -------------------------------------------
>
>                 Key: SPARK-5363
>                 URL: https://issues.apache.org/jira/browse/SPARK-5363
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.2.0, 1.2.1, 1.3.0
>            Reporter: Tassilo Klein
>            Assignee: Davies Liu
>            Priority: Blocker
>             Fix For: 1.2.2, 1.3.0, 1.4.0
>
>
> After a number of calls to a map().collect() statement Spark freezes without reporting any error.  Within the map a large broadcast variable is used.
> The freezing can be avoided by setting 'spark.python.worker.reuse = false' (Spark 1.2) or by using an earlier version, at the price of lower speed.
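
For reference, the workaround quoted above can be applied like this (a config sketch only; the script name `my_job.py` is a placeholder, not from the report):

```shell
# Disable Python worker reuse for a single run via spark-submit:
spark-submit --conf spark.python.worker.reuse=false my_job.py

# Or set it for all jobs in conf/spark-defaults.conf:
#   spark.python.worker.reuse  false
```

Note that disabling worker reuse forks a fresh Python worker per task, which is what makes this workaround slower.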



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org