Posted to dev@mahout.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2014/12/05 18:43:38 UTC

Build failed in Jenkins: Mahout-Quality #2884

See <https://builds.apache.org/job/Mahout-Quality/2884/>

------------------------------------------
[...truncated 6176 lines...]
AB' num partitions = 2.
{
  2  =>	{0:50.0,1:74.0}
  1  =>	{0:38.0,1:56.0}
  0  =>	{0:26.0,1:38.0}
}
- ABt
- A * B Hadamard
- A + B Elementwise
- A - B Elementwise
- A / B Elementwise
{
  0  =>	{0:5.0,1:8.0}
  1  =>	{0:8.0,1:13.0}
}
{
  0  =>	{0:5.0,1:8.0}
  1  =>	{0:8.0,1:13.0}
}
- AtA slim
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{0:2.0,1:3.0,2:4.0}
  2  =>	{0:3.0,1:4.0,2:5.0}
}
- At
SimilarityAnalysisSuite:
- cooccurrence [A'A], [B'A] boolean data using LLR
- cooccurrence [A'A], [B'A] double data using LLR
- cooccurrence [A'A], [B'A] integer data using LLR
- cooccurrence two matrices with different number of columns
- LLR calc
- downsampling by number per row
RLikeDrmOpsSuite:
- A.t
{
  1  =>	{0:25.0,1:39.0}
  0  =>	{0:11.0,1:17.0}
}
{
  1  =>	{0:25.0,1:39.0}
  0  =>	{0:11.0,1:17.0}
}
- C = A %*% B
{
  0  =>	{0:11.0,1:17.0}
  1  =>	{0:25.0,1:39.0}
}
{
  0  =>	{0:11.0,1:17.0}
  1  =>	{0:25.0,1:39.0}
}
Q=
{
  0  =>	{0:0.40273861426601687,1:-0.9153150324187648}
  1  =>	{0:0.9153150324227656,1:0.40273861426427493}
}
- C = A %*% B mapBlock {}
- C = A %*% B incompatible B keys
- Spark-specific C = At %*% B , join
- C = At %*% B , join, String-keyed
- C = At %*% B , zippable, String-keyed
{
  0  =>	{0:26.0,1:35.0,2:46.0,3:51.0}
  1  =>	{0:50.0,1:69.0,2:92.0,3:105.0}
  2  =>	{0:62.0,1:86.0,2:115.0,3:132.0}
  3  =>	{0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = A %*% inCoreB
{
  0  =>	{0:26.0,1:35.0,2:46.0,3:51.0}
  1  =>	{0:50.0,1:69.0,2:92.0,3:105.0}
  2  =>	{0:62.0,1:86.0,2:115.0,3:132.0}
  3  =>	{0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = inCoreA %*%: B
- C = A.t %*% A
- C = A.t %*% A fat non-graph
- C = A.t %*% A non-int key
- C = A + B
A=
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{0:3.0,1:4.0,2:5.0}
  2  =>	{0:5.0,1:6.0,2:7.0}
}
B=
{
  0  =>	{0:0.1278752111621395,1:0.8264502482199997,2:0.5603724816652847}
  1  =>	{0:0.7285060737025262,1:0.08897655758075973,2:0.8999292045242518}
  2  =>	{0:0.16063218282922576,1:0.7869875615165453,2:0.8066031538485666}
}
C=
{
  0  =>	{0:1.1278752111621395,1:2.8264502482199996,2:3.5603724816652846}
  1  =>	{0:3.728506073702526,1:4.08897655758076,2:5.899929204524252}
  2  =>	{0:5.160632182829226,1:6.786987561516545,2:7.806603153848567}
}
- C = A + B, identically partitioned
- C = A + B side test 1
- C = A + B side test 2
- C = A + B side test 3
- Ax
- A'x
- colSums, colMeans
- rowSums, rowMeans
- A.diagv
- numNonZeroElementsPerColumn
- C = A cbind B, cogroup
- C = A cbind B, zip
- B = A + 1.0
- C = A rbind B
- C = A rbind B, with empty
- scalarOps
0 [Executor task launch worker-1] ERROR org.apache.spark.executor.Executor  - Exception in task 9.0 in stage 245.0 (TID 543)
java.io.IOException: PARSING_ERROR(2)
	at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
	at org.xerial.snappy.SnappyNative.uncompressedLength(Native Method)
	at org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:545)
	at org.xerial.snappy.SnappyInputStream.readFully(SnappyInputStream.java:125)
	at org.xerial.snappy.SnappyInputStream.readHeader(SnappyInputStream.java:88)
	at org.xerial.snappy.SnappyInputStream.<init>(SnappyInputStream.java:58)
	at org.apache.spark.io.SnappyCompressionCodec.compressedInputStream(CompressionCodec.scala:128)
	at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:232)
	at org.apache.spark.broadcast.TorrentBroadcast.readObject(TorrentBroadcast.scala:169)
	at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:969)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:159)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
	at java.lang.Thread.run(Thread.java:662)
12 [Result resolver thread-2] ERROR org.apache.spark.scheduler.TaskSetManager  - Task 9 in stage 245.0 failed 1 times; aborting job
- C = A + B missing rows *** FAILED ***
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 245.0 failed 1 times, most recent failure: Lost task 9.0 in stage 245.0 (TID 543, localhost): java.io.IOException: PARSING_ERROR(2)
        org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
        org.xerial.snappy.SnappyNative.uncompressedLength(Native Method)
        org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:545)
        org.xerial.snappy.SnappyInputStream.readFully(SnappyInputStream.java:125)
        org.xerial.snappy.SnappyInputStream.readHeader(SnappyInputStream.java:88)
        org.xerial.snappy.SnappyInputStream.<init>(SnappyInputStream.java:58)
        org.apache.spark.io.SnappyCompressionCodec.compressedInputStream(CompressionCodec.scala:128)
        org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:232)
        org.apache.spark.broadcast.TorrentBroadcast.readObject(TorrentBroadcast.scala:169)
        sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        java.lang.reflect.Method.invoke(Method.java:597)
        java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:969)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
        java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
        java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
        java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
        java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
        java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
        org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
        org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:159)
        java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
        java.lang.Thread.run(Thread.java:662)
Driver stacktrace:
  at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
  at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
  at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
  at scala.Option.foreach(Option.scala:236)
  at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
  ...
- C = cbind(A, B) with missing rows
collected A = 
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{}
  2  =>	{}
  3  =>	{0:3.0,1:4.0,2:5.0}
}
collected B = 
{
  2  =>	{0:1.0,1:1.0,2:1.0}
  1  =>	{0:1.0,1:1.0,2:1.0}
  3  =>	{0:4.0,1:5.0,2:6.0}
  0  =>	{0:2.0,1:3.0,2:4.0}
}
- B = A + 1.0 missing rows
Run completed in 1 minute, 51 seconds.
Total number of tests run: 75
Suites: completed 10, aborted 0
Tests: succeeded 74, failed 1, canceled 0, ignored 1, pending 0
*** 1 TEST FAILED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Mahout Build Tools ................................ SUCCESS [5.139s]
[INFO] Apache Mahout ..................................... SUCCESS [2.105s]
[INFO] Mahout Math ....................................... SUCCESS [2:18.887s]
[INFO] Mahout MapReduce Legacy ........................... SUCCESS [12:12.225s]
[INFO] Mahout Integration ................................ SUCCESS [1:30.689s]
[INFO] Mahout Examples ................................... SUCCESS [58.206s]
[INFO] Mahout Release Package ............................ SUCCESS [0.122s]
[INFO] Mahout Math Scala bindings ........................ SUCCESS [2:05.164s]
[INFO] Mahout Spark bindings ............................. FAILURE [2:35.899s]
[INFO] Mahout Spark bindings shell ....................... SKIPPED
[INFO] Mahout H2O backend ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 21:56.087s
[INFO] Finished at: Fri Dec 05 17:42:00 UTC 2014
[INFO] Final Memory: 81M/434M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0-M2:test (test) on project mahout-spark_2.10: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :mahout-spark_2.10
Build step 'Invoke top-level Maven targets' marked build as failure
[PMD] Skipping publisher since build result is FAILURE
[TASKS] Skipping publisher since build result is FAILURE
Archiving artifacts
Sending artifact delta relative to Mahout-Quality #2883
Archived 72 artifacts
Archive block size is 32768
Received 3511 blocks and 24325013 bytes
Compression is 82.5%
Took 30 sec
Recording test results
Publishing Javadoc

Jenkins build is back to normal : Mahout-Quality #2885

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Mahout-Quality/2885/>


Re: Build failed in Jenkins: Mahout-Quality #2884

Posted by Dmitriy Lyubimov <dl...@gmail.com>.
this snappy thing again.
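
The recurring failure is Spark's TorrentBroadcast blocks failing to decompress through the native Snappy codec (java.io.IOException: PARSING_ERROR(2) in SnappyInputStream). A minimal sketch of the usual workaround, assuming the test harness builds its own local SparkConf (illustration only, not the fix that was actually committed to Mahout):

    // Sketch: steer Spark away from the native Snappy path for broadcast data.
    // The config keys are standard Spark 1.x settings; the SparkConf wiring here
    // is assumed, not Mahout's actual test setup.
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("mahout-spark-tests")
      .set("spark.io.compression.codec", "lzf")   // use LZF instead of Snappy
      .set("spark.broadcast.compress", "false")   // or skip broadcast compression entirely
    val sc = new SparkContext(conf)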

On Fri, Dec 5, 2014 at 9:43 AM, Apache Jenkins Server <jenkins@builds.apache.org> wrote:

> See <https://builds.apache.org/job/Mahout-Quality/2884/>