Posted to user@spark.apache.org by Michal Romaniuk <mi...@imperial.ac.uk> on 2013/11/22 18:34:11 UTC
Re: user Digest 21 Nov 2013 23:48:53 -0000 Issue 145
Hi Jey,
The work directory seems to be empty, and the same goes for the logs
directory. I can only see the job's Web UI, and it says that the job
failed. The Errors column shows the same error that appears in the
Python console:
org.apache.spark.SparkException (org.apache.spark.SparkException: Python
worker exited unexpectedly (crashed))
Does this mean the Python workers crash right away when they're started?
Thanks,
Michal
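
One common way to surface the Python-side traceback in a situation like this
is to wrap the mapped function so that exceptions come back as data instead
of crashing the worker. A minimal sketch (the names `with_traceback` and
`bad` are illustrative, and the commented `rdd.map` usage assumes a PySpark
RDD):

```python
import traceback

def with_traceback(fn):
    """Wrap a function passed to rdd.map so that any Python-side
    exception is returned as data instead of killing the worker."""
    def wrapped(x):
        try:
            return ("ok", fn(x))
        except Exception:
            return ("error", traceback.format_exc())
    return wrapped

# With Spark this would be used roughly as (hypothetical):
#   results = rdd.map(with_traceback(my_func)).collect()
#   errors = [t for tag, t in results if tag == "error"]

# Local demonstration without Spark:
def bad(x):
    return 1 / (x - 2)  # raises ZeroDivisionError at x == 2

results = [with_traceback(bad)(x) for x in range(4)]
errors = [t for tag, t in results if tag == "error"]
```

Collecting the "error" tuples then shows the full Python traceback on the
driver, even when the executor's own logs are empty.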
On 21/11/13 23:48, user-digest-help@spark.incubator.apache.org wrote:
> Hi Michal,
>
> Try taking a look at the stderr output of the failed executor. You can
> do this by logging into the Spark worker that had the failed executor
> and looking for the executor ID in the "work" subdirectory. You should
> see a file named "stderr" that has the error messages from the failed
> executor.
>
> Hope that helps,
> -Jey
>
> On Thu, Nov 21, 2013 at 1:01 PM, Michal Romaniuk
> <mi...@imperial.ac.uk> wrote:
>> Hi,
>>
>> I'm trying to run a Python job that keeps failing. It's a map followed
>> by collect, and it works fine if I use Python's built-in map instead of
>> Spark.
>>
>> I tried replacing the mapping function with an identity (lambda x: x),
>> and that works fine with Spark, so Spark seems to be configured
>> correctly.
>>
>> The error I get is:
>>
>> org.apache.spark.SparkException (org.apache.spark.SparkException: Python
>> worker exited unexpectedly (crashed))
>>
>> The problem is that I can't see what went wrong in the Python code. Any
>> ideas on how to debug this?
>>
>> Thanks,
>> Michal
>>
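
Jey's suggestion of checking the "work" subdirectory can also be scripted; a
minimal sketch, assuming the standalone-mode layout
work/&lt;app-id&gt;/&lt;executor-id&gt;/stderr (the function name
`find_executor_logs` is illustrative, not a Spark API):

```python
import os

def find_executor_logs(work_dir):
    """Collect the paths of all files named 'stderr' under a Spark
    worker's work directory, one per executor."""
    hits = []
    for root, _dirs, files in os.walk(work_dir):
        if "stderr" in files:
            hits.append(os.path.join(root, "stderr"))
    return sorted(hits)

# Usage (path is an assumption; adjust to your SPARK_WORKER_DIR):
#   for path in find_executor_logs("/opt/spark/work"):
#       print(path)
```

Running this on each worker node prints every executor's stderr path, which
is where the Python traceback behind "Python worker exited unexpectedly"
would normally land.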