Posted to issues@kylin.apache.org by "Shaofeng SHI (JIRA)" <ji...@apache.org> on 2018/11/12 09:40:00 UTC

[jira] [Resolved] (KYLIN-3667) ArrayIndexOutOfBoundsException in NDCuboidBuilder

     [ https://issues.apache.org/jira/browse/KYLIN-3667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shaofeng SHI resolved KYLIN-3667.
---------------------------------
    Resolution: Cannot Reproduce

> ArrayIndexOutOfBoundsException in NDCuboidBuilder
> -------------------------------------------------
>
>                 Key: KYLIN-3667
>                 URL: https://issues.apache.org/jira/browse/KYLIN-3667
>             Project: Kylin
>          Issue Type: Bug
>          Components: Spark Engine
>    Affects Versions: v2.5.0
>         Environment: AWS EMR 
>            Reporter: Hubert STEFANI
>            Priority: Major
>         Attachments: cube.json
>
>
> The earlier errors
> https://issues.apache.org/jira/browse/KYLIN-3115
> and
> https://issues.apache.org/jira/browse/KYLIN-1768
> still occur in the SparkCubingByLayer step.
> We encounter the following error:
> java.lang.ArrayIndexOutOfBoundsException at java.lang.System.arraycopy(Native Method)
>  at org.apache.kylin.engine.mr.common.NDCuboidBuilder.buildKeyInternal(NDCuboidBuilder.java:106)
>  at org.apache.kylin.engine.mr.common.NDCuboidBuilder.buildKey2(NDCuboidBuilder.java:87)
>  at org.apache.kylin.engine.spark.SparkCubingByLayer$CuboidFlatMap.call(SparkCubingByLayer.java:425)
>  at org.apache.kylin.engine.spark.SparkCubingByLayer$CuboidFlatMap.call(SparkCubingByLayer.java:370)
>  at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:143)
>  at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:143)
>  at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
>  at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
>  at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
>  at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
>  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>  at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>  at org.apache.spark.scheduler.Task.run(Task.scala:99)
>  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  
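> A minimal sketch of the failure mode (not Kylin's actual code; the buffer
> size and column length below are illustrative): System.arraycopy throws
> ArrayIndexOutOfBoundsException when an encoded dimension value is copied
> into a rowkey buffer that is too small for it.
>
>     public class RowkeyOverflowSketch {
>         public static void main(String[] args) {
>             byte[] rowkeyBuffer = new byte[16];   // fixed-size rowkey buffer (size illustrative)
>             byte[] encodedColumn = new byte[24];  // encoded dimension value, longer than the buffer
>             // Throws java.lang.ArrayIndexOutOfBoundsException, as in the trace above,
>             // because destPos + length exceeds rowkeyBuffer.length.
>             System.arraycopy(encodedColumn, 0, rowkeyBuffer, 0, encodedColumn.length);
>         }
>     }
> 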
> Do we have to (painfully) change the dimension sizes (see the rowkey sketch below), or should this be fixed through a patch?
>  
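> If changing the dimension size is the workaround, it would be done per
> rowkey column in the cube descriptor. A hypothetical excerpt (the column
> name and length are illustrative, not taken from the attached cube.json):
>
>     "rowkey": {
>       "rowkey_columns": [
>         { "column": "LONG_TEXT_DIM", "encoding": "fixed_length:20" }
>       ]
>     }
>
> Shrinking a fixed_length encoding (or switching the column to dict
> encoding) reduces the bytes written per column into the rowkey buffer.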



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)