Posted to user@spark.apache.org by Sameer Tilak <ss...@live.com> on 2014/12/10 21:34:25 UTC

MLlib: Libsvm: Loss was due to java.lang.ArrayIndexOutOfBoundsException

Hi All,

When I am running LinearRegressionWithSGD, I get the following error. Any help on how to debug this further would be highly appreciated.
14/12/10 20:26:02 WARN TaskSetManager: Loss was due to java.lang.ArrayIndexOutOfBoundsException
java.lang.ArrayIndexOutOfBoundsException: 150323
	at breeze.linalg.operators.DenseVector_SparseVector_Ops$$anon$129.apply(SparseVectorOps.scala:231)
	at breeze.linalg.operators.DenseVector_SparseVector_Ops$$anon$129.apply(SparseVectorOps.scala:216)
	at breeze.linalg.operators.BinaryRegistry$class.apply(BinaryOp.scala:60)
	at breeze.linalg.VectorOps$$anon$178.apply(Vector.scala:391)
	at breeze.linalg.NumericOps$class.dot(NumericOps.scala:83)
	at breeze.linalg.DenseVector.dot(DenseVector.scala:47)
	at org.apache.spark.mllib.optimization.LeastSquaresGradient.compute(Gradient.scala:125)
	at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:180)
	at org.apache.spark.mllib.optimization.GradientDescent$$anonfun$runMiniBatchSGD$1$$anonfun$1.apply(GradientDescent.scala:179)
	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
	at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:144)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:144)
	at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1157)
	at scala.collection.TraversableOnce$class.aggregate(TraversableOnce.scala:201)
	at scala.collection.AbstractIterator.aggregate(Iterator.scala:1157)
	at org.apache.spark.rdd.RDD$$anonfun$21.apply(RDD.scala:838)
	at org.apache.spark.rdd.RDD$$anonfun$21.apply(RDD.scala:838)
	at org.apache.spark.SparkContext$$anonfun$23.apply(SparkContext.scala:1116)
	at org.apache.spark.SparkContext$$anonfun$23.apply(SparkContext.scala:1116)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
	at org.apache.spark.scheduler.Task.run(Task.scala:51)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
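Editor's note, not part of the original message: the trace shows the failure inside a dense-vector/sparse-vector dot product in LeastSquaresGradient.compute, and the out-of-bounds index (150323) is a feature index. A plausible cause is a libsvm example whose feature index exceeds the dimension of the weight vector, e.g. when training and test files are loaded separately and infer different feature counts. The standalone Scala sketch below (my own names; no Spark dependency) shows how one might scan a libsvm-format dataset for its maximum feature index before training:

```scala
// Hypothetical diagnostic: find the largest feature index in libsvm-format
// lines ("label idx1:val1 idx2:val2 ..."). If this index meets or exceeds
// the dimension the model was trained with, the dot product in the gradient
// computation will throw ArrayIndexOutOfBoundsException.
object LibsvmIndexCheck {
  def maxFeatureIndex(lines: Seq[String]): Int =
    lines.flatMap { line =>
      line.trim
        .split("\\s+")
        .drop(1)                         // skip the label
        .map(_.split(":")(0).toInt)      // keep only the feature index
    }.foldLeft(0)(math.max)
}
```

If that is indeed the issue, one workaround (an assumption, not confirmed in this thread) is to pass an explicit feature count when loading, e.g. MLUtils.loadLibSVMFile(sc, path, numFeatures), so that all splits agree on the vector dimension.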
Best regards,
--Sameer.