Posted to dev@mahout.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2014/11/26 18:35:58 UTC

Build failed in Jenkins: Mahout-Quality #2875

See <https://builds.apache.org/job/Mahout-Quality/2875/>

------------------------------------------
[...truncated 6231 lines...]
- C = A %*% B incompatible B keys
- Spark-specific C = At %*% B , join
- C = At %*% B , join, String-keyed
- C = At %*% B , zippable, String-keyed
{
  0  =>	{0:26.0,1:35.0,2:46.0,3:51.0}
  1  =>	{0:50.0,1:69.0,2:92.0,3:105.0}
  2  =>	{0:62.0,1:86.0,2:115.0,3:132.0}
  3  =>	{0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = A %*% inCoreB
{
  0  =>	{0:26.0,1:35.0,2:46.0,3:51.0}
  1  =>	{0:50.0,1:69.0,2:92.0,3:105.0}
  2  =>	{0:62.0,1:86.0,2:115.0,3:132.0}
  3  =>	{0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = inCoreA %*%: B
- C = A.t %*% A
- C = A.t %*% A fat non-graph
- C = A.t %*% A non-int key
- C = A + B
A=
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{0:3.0,1:4.0,2:5.0}
  2  =>	{0:5.0,1:6.0,2:7.0}
}
B=
{
  0  =>	{0:0.3016352775165334,1:0.4033616890500241,2:0.24617539094020902}
  1  =>	{0:0.007969135790208748,1:0.7844429658867654,2:0.6879141705693822}
  2  =>	{0:0.2857409160055201,1:0.3340123257915044,2:0.6992849710796821}
}
C=
{
  0  =>	{0:1.3016352775165334,1:2.403361689050024,2:3.246175390940209}
  1  =>	{0:3.0079691357902085,1:4.7844429658867655,2:5.687914170569382}
  2  =>	{0:5.28574091600552,1:6.334012325791504,2:7.699284971079682}
}
- C = A + B, identically partitioned
- C = A + B side test 1
- C = A + B side test 2
- C = A + B side test 3
- Ax
- A'x
- colSums, colMeans
- rowSums, rowMeans
- A.diagv
- numNonZeroElementsPerColumn
- C = A cbind B, cogroup
- C = A cbind B, zip
- B = A + 1.0
- C = A rbind B
- C = A rbind B, with empty
- scalarOps
0 [Executor task launch worker-1] ERROR org.apache.spark.executor.Executor  - Exception in task 9.0 in stage 245.0 (TID 543)
java.io.IOException: PARSING_ERROR(2)
	at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
	at org.xerial.snappy.SnappyNative.uncompressedLength(Native Method)
	at org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:545)
	at org.xerial.snappy.SnappyInputStream.readFully(SnappyInputStream.java:125)
	at org.xerial.snappy.SnappyInputStream.readHeader(SnappyInputStream.java:88)
	at org.xerial.snappy.SnappyInputStream.<init>(SnappyInputStream.java:58)
	at org.apache.spark.io.SnappyCompressionCodec.compressedInputStream(CompressionCodec.scala:128)
	at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:232)
	at org.apache.spark.broadcast.TorrentBroadcast.readObject(TorrentBroadcast.scala:169)
	at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:969)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:159)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:662)
0 [Executor task launch worker-0] ERROR org.apache.spark.executor.Executor  - Exception in task 8.0 in stage 245.0 (TID 542)
java.io.IOException: PARSING_ERROR(2)
	[stack trace identical to task 9.0 above]
0 [Executor task launch worker-2] ERROR org.apache.spark.executor.Executor  - Exception in task 7.0 in stage 245.0 (TID 541)
java.io.IOException: PARSING_ERROR(2)
	[stack trace identical to task 9.0 above]
17 [Result resolver thread-3] ERROR org.apache.spark.scheduler.TaskSetManager  - Task 9 in stage 245.0 failed 1 times; aborting job
- C = A + B missing rows *** FAILED ***
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 245.0 failed 1 times, most recent failure: Lost task 9.0 in stage 245.0 (TID 543, localhost): java.io.IOException: PARSING_ERROR(2)
        [same Snappy PARSING_ERROR stack trace as in the executor logs above]
Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
  at scala.Option.foreach(Option.scala:236)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
  ...
- C = cbind(A, B) with missing rows
collected A = 
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{}
  2  =>	{}
  3  =>	{0:3.0,1:4.0,2:5.0}
}
collected B = 
{
  2  =>	{0:1.0,1:1.0,2:1.0}
  1  =>	{0:1.0,1:1.0,2:1.0}
  3  =>	{0:4.0,1:5.0,2:6.0}
  0  =>	{0:2.0,1:3.0,2:4.0}
}
- B = A + 1.0 missing rows
Run completed in 1 minute, 35 seconds.
Total number of tests run: 75
Suites: completed 10, aborted 0
Tests: succeeded 74, failed 1, canceled 0, ignored 1, pending 0
*** 1 TEST FAILED ***
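
The failed case, "C = A + B missing rows", drives elementwise addition of two row-keyed DRMs whose row sets differ. Below is a minimal sketch of the Samsara DSL path these RLikeDrmOpsSuite cases exercise, assuming a local Spark master; the matrix values, app name, and partition counts are illustrative, while the imports and operators are the public math-scala/spark-bindings API.

    // Hedged sketch of what the "C = A + B" style tests drive through the
    // Samsara DSL. Matrix values, app name, and master URL are illustrative
    // assumptions; the imports and operators are the public API.
    import org.apache.mahout.math.scalabindings._
    import org.apache.mahout.math.scalabindings.RLikeOps._
    import org.apache.mahout.math.drm._
    import org.apache.mahout.math.drm.RLikeDrmOps._
    import org.apache.mahout.sparkbindings._

    implicit val ctx = mahoutSparkContext(masterUrl = "local[3]", appName = "dsl-sketch")

    // in-core matrices built with the scalabindings dense() helper
    val inCoreA = dense((1, 2, 3), (3, 4, 5), (5, 6, 7))
    val inCoreB = dense((2, 3, 4), (4, 5, 6), (6, 7, 8))

    val A = drmParallelize(inCoreA, numPartitions = 2)
    val B = drmParallelize(inCoreB, numPartitions = 2)

    val C = (A + B).collect       // elementwise sum, materialized in core
    val D = (A.t %*% A).collect   // the A.t %*% A cases above use the same DSL

Note that the failure above is raised while deserializing a broadcast (TorrentBroadcast.readObject), before the DSL-level addition ever runs, which points at the Spark transport layer rather than the operator logic.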
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Mahout Build Tools ................................ SUCCESS [5.412s]
[INFO] Apache Mahout ..................................... SUCCESS [2.142s]
[INFO] Mahout Math ....................................... SUCCESS [2:17.190s]
[INFO] Mahout MapReduce Legacy ........................... SUCCESS [11:08.955s]
[INFO] Mahout Integration ................................ SUCCESS [1:36.112s]
[INFO] Mahout Examples ................................... SUCCESS [51.407s]
[INFO] Mahout Release Package ............................ SUCCESS [0.114s]
[INFO] Mahout Math Scala bindings ........................ SUCCESS [2:02.916s]
[INFO] Mahout Spark bindings ............................. FAILURE [2:18.218s]
[INFO] Mahout Spark bindings shell ....................... SKIPPED
[INFO] Mahout H2O backend ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 20:24.635s
[INFO] Finished at: Wed Nov 26 17:32:58 UTC 2014
[INFO] Final Memory: 88M/445M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0-M2:test (test) on project mahout-spark_2.10: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :mahout-spark_2.10
Build step 'Invoke top-level Maven targets' marked build as failure
[PMD] Skipping publisher since build result is FAILURE
[TASKS] Skipping publisher since build result is FAILURE
Archiving artifacts
Sending artifact delta relative to Mahout-Quality #2874
Archived 72 artifacts
Archive block size is 32768
Received 3048 blocks and 36816273 bytes
Compression is 73.1%
Took 35 sec
Recording test results
Publishing Javadoc
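
The root exception in this run, java.io.IOException: PARSING_ERROR(2), is thrown by snappy-java while decompressing a TorrentBroadcast block, i.e. inside Spark's codec path rather than in the code under test. A minimal repro sketch that swaps the block compression codec to LZF to take the Snappy native path out of the picture; spark.io.compression.codec is a standard Spark setting, while the master URL and app name are illustrative assumptions.

    // Sketch: recreate the test context with Snappy replaced by LZF, to check
    // whether PARSING_ERROR(2) follows the codec. spark.io.compression.codec
    // is standard Spark config; master/app name are illustrative assumptions.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("local[3]")                      // the log shows 3 executor worker threads
      .setAppName("mahout-spark-bindings-repro")
      .set("spark.io.compression.codec", "lzf")   // bypass the Snappy native path

    val sc = new SparkContext(conf)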

Jenkins build is back to normal : Mahout-Quality #2877

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Mahout-Quality/2877/>


Build failed in Jenkins: Mahout-Quality #2876

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Mahout-Quality/2876/>

------------------------------------------
[...truncated 6434 lines...]
pcaControl:
{
  0  =>	{0:-5.668224460908154,1:-6.133214511308873,2:0.779825278384518,3:-1.0626643480581115,4:-0.06538841105740897,5:-0.04349879929171554,6:-0.06507643146171325,7:-0.006672372115154449,8:-0.0019160303748368523,9:7.147168739488035E-4}
  1  =>	{0:14.746441059414165,1:0.1238449432155428,2:0.509150191469107,3:0.4137388528291847,4:0.4602304546866354,5:-0.030092835495961597,6:-0.056985559374213894,7:-0.005383768497924209,8:-0.0030665235947086983,9:-0.003185941529074587}
  2  =>	{0:-1.0990711930828203,1:2.112004016731812,2:0.6369936971603877,3:0.40066673113765894,4:-0.09407717241766621,5:0.16713018353526032,6:0.016204859925633024,7:0.023855732691034304,8:-0.0016098234173380218,9:9.122665238331369E-4}
  3  =>	{0:-2.343738208181828,1:1.7280000589380524,2:1.0496211749048339,3:0.3391338800120744,4:0.42118942185569147,5:-0.058002321673436046,6:-0.06279289708931955,7:-0.005985509935939571,8:-0.001577611924415433,9:6.435206497681367E-4}
  4  =>	{0:-2.8258135063306025,1:2.5456315939021112,2:-2.7761375304054297,3:0.18896801583102982,4:-0.08909300750068946,5:-0.04768178986720063,6:0.005996445601075281,7:-0.004688932648084842,8:-0.0019988695090549114,9:5.368932160626223E-4}
  5  =>	{0:14.783534397198967,1:0.29501509257269437,2:0.7769477989297868,3:-1.1500481296888652,4:-0.09462906519128594,5:0.18156858452961613,6:0.017776821093516578,7:-0.003623965976588348,8:0.007827898130456826,9:7.230575194194227E-4}
  6  =>	{0:9.171538465556496,1:-7.175558039374563,2:-3.635190529531252,3:0.26468851672819754,4:-0.10959537162291529,5:-0.029728456598475363,6:0.020325484403651273,7:-0.008415615578830245,8:0.006282408064286262,9:0.0010658814415176934}
  7  =>	{0:-3.0041815916415446,1:1.942663933478009,2:0.5119422239977843,3:0.2763405492176308,4:-0.16727903392370685,5:-0.03560463803468493,6:-0.0636262914744333,7:-0.005132709375403085,8:-0.0020968858498234227,9:6.459293841668969E-4}
  8  =>	{0:-1.4367848261801806,1:2.9643681587815434,2:-2.755553136079842,3:0.04120009370991556,4:-0.1492391527326176,5:-0.025079506195922904,6:0.006004013139264973,7:-0.004597335572305195,8:0.007223888432389089,9:7.344779171772158E-4}
  9  =>	{0:-1.7283784683760932,1:2.758066793169005,2:-2.7878134378042914,3:0.20694927779394884,4:-0.15124172553064372,5:0.17362751994048398,6:0.009930210366534378,7:-0.005660577591472821,8:-0.002278641733622213,9:7.533068782920091E-4}
}
- dspca
spectrum:{0:300.0,1:110.3638323514327,2:40.60058497098381,3:14.936120510359183,4:5.494691666620254,5:2.02138409972564,6:0.7436256529999076,7:0.2735645896663549,8:0.10063878837075356,9:0.037022941226003865,10:0.013619978928745453,11:0.005010510237073698,12:0.001843263705998463,13:0.0010,14:0.0010,15:0.0010,16:0.0010,17:0.0010,18:0.0010,19:0.0010,20:0.0010,21:0.0010,22:0.0010,23:0.0010,24:0.0010,25:0.0010,26:0.0010,27:0.0010,28:0.0010,29:0.0010,30:0.0010,31:0.0010,32:0.0010,33:0.0010,34:0.0010,35:0.0010,36:0.0010,37:0.0010,38:0.0010,39:0.0010}
Control block:
{
  0  =>	{0:0.3947476722883563,1:-0.08695028358267716,2:-1.0574297632219802}
  1  =>	{0:0.4076559804271818,1:0.013563509240543453,2:-0.6050700722864573}
  2  =>	{0:0.15935325307337903,1:0.07468219465060774,2:-0.37963073350622206}
}
ALS factorized approximation block:
{
  0  =>	{0:0.3947518179224231,1:-0.08695389395155544,2:-1.0574494478839265}
  1  =>	{0:0.4076399435035176,1:0.013566854170126305,2:-0.6050777716454986}
  2  =>	{0:0.15934618500050707,1:0.0746779194686282,2:-0.3796636071626735}
}
norm of residuals 0.009174
train iteration rmses: List(1.7992630164822846E-7, 1.308589600373296E-7, 1.1214324598457284E-7, 1.6832499777966416E-7)
- dals
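
The dspca and dals cases above validate distributed decompositions against in-core controls (the pcaControl block, and the ALS control vs. factorized-approximation blocks with the residual norm and per-iteration RMSEs). A hedged sketch of the distributed PCA entry point in org.apache.mahout.math.decompositions follows; the parameter names k, p, and q follow the Samsara documentation of this period and should be treated as assumptions.

    // Hedged sketch: distributed stochastic PCA as exercised by the dspca test.
    // k = decomposition rank, p = oversampling, q = power iterations; these
    // parameter names are assumptions based on contemporary Samsara docs.
    import org.apache.mahout.math.decompositions._
    import org.apache.mahout.math.drm._

    def pcaSketch(drmA: DrmLike[Int]) = {
      val (drmU, drmV, s) = dspca(drmA, k = 10, p = 15, q = 1)
      (drmU, drmV, s)   // row factors, column factors, singular-value spectrum
    }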
DrmLikeOpsSuite:
{
  0  =>	{0:2.0,1:3.0,2:4.0}
  1  =>	{0:3.0,1:4.0,2:5.0}
  2  =>	{0:4.0,1:5.0,2:6.0}
  3  =>	{0:5.0,1:6.0,2:7.0}
}
- mapBlock
{
  0  =>	{0:2.0,1:3.0}
  1  =>	{0:3.0,1:4.0}
  2  =>	{0:4.0,1:5.0}
  3  =>	{0:5.0,1:6.0}
}
- col range
{
  0  =>	{0:2.0,1:3.0,2:4.0}
  1  =>	{0:3.0,1:4.0,2:5.0}
}
- row range
{
  0  =>	{0:3.0,1:4.0}
  1  =>	{0:4.0,1:5.0}
}
- col, row range
- exact, min and auto ||
ItemSimilarityDriverSuite:
- ItemSimilarityDriver, non-full-spec CSV
- ItemSimilarityDriver TSV 
- ItemSimilarityDriver log-ish files
- ItemSimilarityDriver legacy supported file format
- ItemSimilarityDriver write search engine output
- ItemSimilarityDriver recursive file discovery using filename patterns
- ItemSimilarityDriver, two input paths
- ItemSimilarityDriver, two inputs of different dimensions
- ItemSimilarityDriver cross similarity two separate items spaces
- A.t %*% B after changing row cardinality of A
- Changing row cardinality of an IndexedDataset
- ItemSimilarityDriver cross similarity two separate items spaces, missing rows in B
BlasSuite:
AB' num partitions = 2.
{
  2  =>	{0:50.0,1:74.0}
  1  =>	{0:38.0,1:56.0}
  0  =>	{0:26.0,1:38.0}
}
- ABt
- A * B Hadamard
- A + B Elementwise
- A - B Elementwise
- A / B Elementwise
{
  0  =>	{0:5.0,1:8.0}
  1  =>	{0:8.0,1:13.0}
}
{
  0  =>	{0:5.0,1:8.0}
  1  =>	{0:8.0,1:13.0}
}
- AtA slim
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{0:2.0,1:3.0,2:4.0}
  2  =>	{0:3.0,1:4.0,2:5.0}
}
- At
SimilarityAnalysisSuite:
- cooccurrence [A'A], [B'A] boolean data using LLR
- cooccurrence [A'A], [B'A] double data using LLR
- cooccurrence [A'A], [B'A] integer data using LLR
- cooccurrence two matrices with different number of columns
- LLR calc
- downsampling by number per row
RLikeDrmOpsSuite:
- A.t
{
  1  =>	{0:25.0,1:39.0}
  0  =>	{0:11.0,1:17.0}
}
{
  1  =>	{0:25.0,1:39.0}
  0  =>	{0:11.0,1:17.0}
}
- C = A %*% B
{
  0  =>	{0:11.0,1:17.0}
  1  =>	{0:25.0,1:39.0}
}
{
  0  =>	{0:11.0,1:17.0}
  1  =>	{0:25.0,1:39.0}
}
Q=
{
  0  =>	{0:0.40273861426601687,1:-0.9153150324187648}
  1  =>	{0:0.9153150324227656,1:0.40273861426427493}
}
- C = A %*% B mapBlock {}
- C = A %*% B incompatible B keys
- Spark-specific C = At %*% B , join
- C = At %*% B , join, String-keyed
- C = At %*% B , zippable, String-keyed
{
  0  =>	{0:26.0,1:35.0,2:46.0,3:51.0}
  1  =>	{0:50.0,1:69.0,2:92.0,3:105.0}
  2  =>	{0:62.0,1:86.0,2:115.0,3:132.0}
  3  =>	{0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = A %*% inCoreB
{
  0  =>	{0:26.0,1:35.0,2:46.0,3:51.0}
  1  =>	{0:50.0,1:69.0,2:92.0,3:105.0}
  2  =>	{0:62.0,1:86.0,2:115.0,3:132.0}
  3  =>	{0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = inCoreA %*%: B
- C = A.t %*% A
- C = A.t %*% A fat non-graph
- C = A.t %*% A non-int key
- C = A + B
A=
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{0:3.0,1:4.0,2:5.0}
  2  =>	{0:5.0,1:6.0,2:7.0}
}
B=
{
  0  =>	{0:0.22714592856356974,1:0.49073693790718265,2:0.5600700358110763}
  1  =>	{0:0.8707860384737347,1:0.08115713235779232,2:0.8308883093866211}
  2  =>	{0:0.0760147685863507,1:0.599528176666852,2:0.9779400063973901}
}
C=
{
  0  =>	{0:1.2271459285635697,1:2.4907369379071826,2:3.5600700358110764}
  1  =>	{0:3.8707860384737347,1:4.081157132357792,2:5.830888309386621}
  2  =>	{0:5.076014768586351,1:6.599528176666852,2:7.97794000639739}
}
- C = A + B, identically partitioned
- C = A + B side test 1
- C = A + B side test 2
- C = A + B side test 3
- Ax
- A'x
- colSums, colMeans
- rowSums, rowMeans
- A.diagv
- numNonZeroElementsPerColumn
- C = A cbind B, cogroup
- C = A cbind B, zip
- B = A + 1.0
- C = A rbind B
- C = A rbind B, with empty
- scalarOps
- C = A + B missing rows
- C = cbind(A, B) with missing rows
collected A = 
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{}
  2  =>	{}
  3  =>	{0:3.0,1:4.0,2:5.0}
}
collected B = 
{
  2  =>	{0:1.0,1:1.0,2:1.0}
  1  =>	{0:1.0,1:1.0,2:1.0}
  3  =>	{0:4.0,1:5.0,2:6.0}
  0  =>	{0:2.0,1:3.0,2:4.0}
}
- B = A + 1.0 missing rows
Run completed in 1 minute, 40 seconds.
Total number of tests run: 75
Suites: completed 10, aborted 0
Tests: succeeded 74, failed 1, canceled 0, ignored 1, pending 0
*** 1 TEST FAILED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Mahout Build Tools ................................ SUCCESS [4.569s]
[INFO] Apache Mahout ..................................... SUCCESS [1.908s]
[INFO] Mahout Math ....................................... SUCCESS [2:15.380s]
[INFO] Mahout MapReduce Legacy ........................... SUCCESS [11:25.798s]
[INFO] Mahout Integration ................................ SUCCESS [1:27.220s]
[INFO] Mahout Examples ................................... SUCCESS [52.412s]
[INFO] Mahout Release Package ............................ SUCCESS [0.094s]
[INFO] Mahout Math Scala bindings ........................ SUCCESS [2:06.371s]
[INFO] Mahout Spark bindings ............................. FAILURE [2:21.877s]
[INFO] Mahout Spark bindings shell ....................... SKIPPED
[INFO] Mahout H2O backend ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 20:37.859s
[INFO] Finished at: Thu Nov 27 17:34:46 UTC 2014
[INFO] Final Memory: 87M/418M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0-M2:test (test) on project mahout-spark_2.10: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :mahout-spark_2.10
Build step 'Invoke top-level Maven targets' marked build as failure
[PMD] Skipping publisher since build result is FAILURE
[TASKS] Skipping publisher since build result is FAILURE
Archiving artifacts
Sending artifact delta relative to Mahout-Quality #2874
Archived 72 artifacts
Archive block size is 32768
Received 3536 blocks and 23506941 bytes
Compression is 83.1%
Took 30 sec
Recording test results
Publishing Javadoc