Posted to issues@systemml.apache.org by "Mike Dusenberry (JIRA)" <ji...@apache.org> on 2016/03/28 18:17:25 UTC

[jira] [Closed] (SYSTEMML-597) Random Forest Execution Fails

     [ https://issues.apache.org/jira/browse/SYSTEMML-597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Dusenberry closed SYSTEMML-597.
------------------------------------

> Random Forest Execution Fails
> -----------------------------
>
>                 Key: SYSTEMML-597
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-597
>             Project: SystemML
>          Issue Type: Bug
>    Affects Versions: SystemML 0.9
>            Reporter: Mike Dusenberry
>            Assignee: Arvind Surve
>             Fix For: SystemML 0.10
>
>
> An issue was raised with running our random forest algorithm on SystemML 0.9 via the MLContext API from Scala Spark on a cluster.  The following example runs on a local machine (against the bleeding-edge build), but fails on the given cluster.
> Notice that the error involves {{distinct_values_offset}}.  That variable should only be used if {{num_cat_features}} is greater than 0.  However, since no {{R}} file is supplied, the else branch at line 106 (https://github.com/apache/incubator-systemml/blob/master/scripts/algorithms/random-forest.dml#L106) is taken and {{num_cat_features}} is set to 0.  By the same logic, {{distinct_values_offset}} should never be initialized at all, which would explain the "Input path does not exist" failure below: the checkpoint instruction references a variable with no backing data.
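> As a minimal sketch of the missing guard, in DML ({{distinct_values_counts}} is a hypothetical stand-in for whatever source matrix the script actually accumulates; this is not the script's code): both the initialization of {{distinct_values_offset}} and every later use of it, including any compiler-generated checkpoint, should be reachable only when categorical features exist.
> {code}
> # Hypothetical DML sketch: guard both sides with the same condition.
> if (num_cat_features > 0) {
>     # offsets are only meaningful when categorical features exist
>     distinct_values_offset = cumsum (distinct_values_counts);
> }
> # ... later in the script ...
> if (num_cat_features > 0) {
>     # safe: only reached when distinct_values_offset was initialized above
>     offsets = distinct_values_offset;
> }
> {code}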
> Code:
> {code}
> // Create a SystemML context
> import org.apache.sysml.api.MLContext
> import sqlContext.implicits._  // needed for .toDF below; auto-imported in spark-shell/Zeppelin
> val ml = new MLContext(sc)
> // Generate random data
> val X = sc.parallelize(Seq((0.3, 0.1, 0.5), (0.3, 1.0, 0.6), (0.7, 0.8, 1.0), (0.3, 0.1, 0.1), (0.5, 0.8, 0.5))).toDF
> val Y = sc.parallelize(Seq((1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0))).toDF
> // Register inputs & outputs
> ml.reset()
> ml.registerInput("X", X)
> ml.registerInput("Y_bin", Y)
> ml.registerOutput("M")
> // Run the script
> val nargs = Map("X" -> "", "Y" -> "", "M" -> "")
> val outputs = ml.execute("/home/biadmin/spark-enablement/installs/SystemML/algorithms/random-forest.dml", nargs)
> val M = outputs.getDF(sqlContext, "M")
> {code}
> Output:
> {code}
> import org.apache.sysml.api.MLContext
> ml: org.apache.sysml.api.MLContext = org.apache.sysml.api.MLContext@677d8e36
> X: org.apache.spark.sql.DataFrame = [_1: double, _2: double, _3: double]
> Y: org.apache.spark.sql.DataFrame = [_1: int, _2: int, _3: int]
> nargs: scala.collection.immutable.Map[String,String] = Map(X -> "", Y -> "", M -> "")
> org.apache.sysml.runtime.DMLRuntimeException: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in while program block generated from while statement block between lines 263 and 1167 -- Error evaluating while program block.
> 	at org.apache.sysml.runtime.controlprogram.Program.execute(Program.java:153)
> 	at org.apache.sysml.api.MLContext.executeUsingSimplifiedCompilationChain(MLContext.java:1337)
> 	at org.apache.sysml.api.MLContext.compileAndExecuteScript(MLContext.java:1203)
> 	at org.apache.sysml.api.MLContext.compileAndExecuteScript(MLContext.java:1149)
> 	at org.apache.sysml.api.MLContext.execute(MLContext.java:631)
> 	at org.apache.sysml.api.MLContext.execute(MLContext.java:666)
> 	at org.apache.sysml.api.MLContext.execute(MLContext.java:679)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:79)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:84)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:86)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:88)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:90)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:92)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:94)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:96)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:98)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:100)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:102)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:104)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:106)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:108)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:110)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:112)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:114)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:116)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:118)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:120)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:122)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:124)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:126)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:128)
> 	at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:130)
> 	at $iwC$$iwC$$iwC$$iwC.<init>(<console>:132)
> 	at $iwC$$iwC$$iwC.<init>(<console>:134)
> 	at $iwC$$iwC.<init>(<console>:136)
> 	at $iwC.<init>(<console>:138)
> 	at <init>(<console>:140)
> 	at .<init>(<console>:144)
> 	at .<clinit>(<console>)
> 	at .<init>(<console>:7)
> 	at .<clinit>(<console>)
> 	at $print(<console>)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:497)
> 	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
> 	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
> 	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
> 	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
> 	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
> 	at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:646)
> 	at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:611)
> 	at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:604)
> 	at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
> 	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
> 	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:292)
> 	at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
> 	at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> 	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in while program block generated from while statement block between lines 263 and 1167 -- Error evaluating while program block.
> 	at org.apache.sysml.runtime.controlprogram.WhileProgramBlock.execute(WhileProgramBlock.java:196)
> 	at org.apache.sysml.runtime.controlprogram.Program.execute(Program.java:146)
> 	... 65 more
> Caused by: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in program block generated from statement block between lines 278 and 669 -- Error evaluating instruction: SPARK°chkpoint°distinct_values_offset·MATRIX·DOUBLE°_mVar20842·MATRIX·DOUBLE°MEMORY_AND_DISK
> 	at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeSingleInstruction(ProgramBlock.java:339)
> 	at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeInstructions(ProgramBlock.java:227)
> 	at org.apache.sysml.runtime.controlprogram.ProgramBlock.execute(ProgramBlock.java:169)
> 	at org.apache.sysml.runtime.controlprogram.WhileProgramBlock.execute(WhileProgramBlock.java:183)
> 	... 66 more
> Caused by: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://ehaascluster/user/biadmin/scratch_space/_p204381_10.136.68.22/_t0/temp11700_701
> 	at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
> 	at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
> 	at org.apache.hadoop.mapred.SequenceFileInputFormat.listStatus(SequenceFileInputFormat.java:45)
> 	at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
> 	at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
> 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
> 	at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:65)
> 	at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:47)
> 	at org.apache.sysml.runtime.instructions.spark.CheckpointSPInstruction.getNumCoalescePartitions(CheckpointSPInstruction.java:156)
> 	at org.apache.sysml.runtime.instructions.spark.CheckpointSPInstruction.processInstruction(CheckpointSPInstruction.java:101)
> 	at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeSingleInstruction(ProgramBlock.java:309)
> 	... 69 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)