Posted to dev@spark.apache.org by Shiva Kumar <sh...@gmail.com> on 2018/10/24 06:50:17 UTC

Not able to reproduce the issue SPARK-23207

Hi All,
I am verifying SPARK-23207
(https://issues.apache.org/jira/browse/SPARK-23207,
https://github.com/apache/spark/pull/20393).

When I execute the script below in the spark-shell,

import scala.sys.process._
import org.apache.spark.TaskContext

// Repartition twice; on the first attempt of the first two partitions of the
// second map stage, run pkill so those tasks fail and get retried.
val res = spark.range(0, 1000 * 1000, 1).repartition(200).map { x =>
  x
}.repartition(200).map { x =>
  if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) {
    // "pkill -f java".!! runs pkill through scala.sys.process
    throw new Exception("pkill -f java".!!)
  }
  x
}
res.distinct().count()
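
(Once the fix is in, the expectation is that the job survives the killed
tasks and the count still comes out to exactly 1,000,000 — a quick check
along these lines:)

// spark.range(0, 1000 * 1000, 1) yields 1,000,000 distinct values,
// so any lost or duplicated rows would change this count.
assert(res.distinct().count() == 1000L * 1000L, "repartition lost or duplicated rows")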

I get the following exception:
[image: image.png]

But this does not reproduce the issue.
The line "pkill -f java".!! kills every Java process on the machine,
including the driver, which is what throws the exception above.
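
(For what it's worth, one way to kill only the current executor's JVM rather
than every Java process — a rough sketch, assuming a Unix-like host and a
HotSpot JVM where RuntimeMXBean.getName returns "pid@hostname":)

import scala.sys.process._
import java.lang.management.ManagementFactory
import org.apache.spark.TaskContext

val res2 = spark.range(0, 1000 * 1000, 1).repartition(200).map { x =>
  if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) {
    // Kill only this JVM, i.e. the executor running the task.
    val pid = ManagementFactory.getRuntimeMXBean.getName.split("@")(0)
    Seq("kill", "-9", pid).!  // does not return if the kill succeeds
  }
  x
}
res2.distinct().count()

Note that in local mode the executor and the driver share one JVM, so this
still kills the whole shell; it only makes sense against a real cluster.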

Kindly let me know how to reproduce the issue.

Thanks and regards
Shivakumar Sondur

Re: Not able to reproduce the issue SPARK-23207

Posted by Wenchen Fan <cl...@gmail.com>.
Please refer to
https://github.com/apache/spark/pull/22112#issuecomment-418479757 for the
discussion about how to reproduce it. Long story short, it needs a large
cluster.
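
For background: repartition(n) uses round-robin partitioning, which assigns a
row to a reduce partition based only on the row's position in the
(nondeterministic) input order. A retried map task that sees its rows in a
different order therefore sends them to different partitions than the first
attempt did. A rough sketch of the mechanism, outside Spark:

// Round-robin assignment depends only on a row's position in the input.
def roundRobin(input: Seq[String], numPartitions: Int): Map[String, Int] =
  input.zipWithIndex.map { case (row, i) => (row, i % numPartitions) }.toMap

roundRobin(Seq("a", "b", "c", "d"), 2)          // Map(a -> 0, b -> 1, c -> 0, d -> 1)
roundRobin(Seq("a", "b", "c", "d").reverse, 2)  // Map(d -> 0, c -> 1, b -> 0, a -> 1)

If reducers combine outputs from a first attempt and a retry, rows can be
lost or duplicated, which is why an actual fetch failure (hence the large
cluster) is needed to observe a wrong answer.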

On Wed, Oct 24, 2018 at 2:51 PM Shiva Kumar <sh...@gmail.com> wrote:

> [...]