Posted to issues@spark.apache.org by "Kousuke Saruta (JIRA)" <ji...@apache.org> on 2016/01/06 10:25:40 UTC

[jira] [Resolved] (SPARK-12340) overstep the bounds of Int in SparkPlan.executeTake

     [ https://issues.apache.org/jira/browse/SPARK-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kousuke Saruta resolved SPARK-12340.
------------------------------------
       Resolution: Fixed
    Fix Version/s: 2.0.0

> overstep the bounds of Int in SparkPlan.executeTake
> ---------------------------------------------------
>
>                 Key: SPARK-12340
>                 URL: https://issues.apache.org/jira/browse/SPARK-12340
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>            Reporter: QiangCai
>            Assignee: QiangCai
>             Fix For: 2.0.0
>
>
> Reproduce
>   
>    sql e.g.  select * from talbe1 where c1 = 'abc' limit 2147483638
>   
>    n will be 2147483638 in SparkPlan.executeTake(n: Int). If the first partition just have one row ( buf.size will be one), the result of  cod e numPartsToTry = (1.5 * n * partsScanned / buf.size).toInt will be Int.MaxValue. Then math.min(partsScanned + numPartsToTry, totalParts) will be Int.MinValue (-2147483648) .
> Exception
> java.lang.IllegalArgumentException: Attempting to access a non-existent partition: -2147483648. Total number of partitions: 200
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitJob$2.apply(DAGScheduler.scala:531)
>         at org.apache.spark.scheduler.DAGScheduler$$anonfun$submitJob$2.apply(DAGScheduler.scala:530)
>         at scala.Option.foreach(Option.scala:236)
>         at org.apache.spark.scheduler.DAGScheduler.submitJob(DAGScheduler.scala:530)
>         at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:558)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
>         at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:215)
>         at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:207)
>         at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
>         at org.apache.spark.sql.DataFrame$$anonfun$collect$1.apply(DataFrame.scala:1386)
>         at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
>         at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:1904)
>         at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1385)
>         at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1315)
>         at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1378)
>         at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:178)
>         at org.apache.spark.sql.hbase.HBaseSQLCliDriver$.process(HBaseSQLCliDriver.scala:122)
>         at org.apache.spark.sql.hbase.HBaseSQLCliDriver$.processLine(HBaseSQLCliDriver.scala:102)
>         at org.apache.spark.sql.hbase.HBaseSQLCliDriver$.main(HBaseSQLCliDriver.scala:80)
>         at org.apache.spark.sql.hbase.HBaseSQLCliDriver.main(HBaseSQLCliDriver.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:601)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org