Posted to dev@mahout.apache.org by Andrew Palumbo <ap...@outlook.com> on 2014/10/30 18:21:38 UTC

RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

I just built and tested with no problems.  Probably just Jenkins acting up.

> Subject: Re: Jenkins build became unstable:  mahout-nightly » Mahout Spark bindings #1728
> From: pat@occamsmachete.com
> Date: Thu, 30 Oct 2014 09:26:45 -0700
> To: dev@mahout.apache.org
> 
> At first blush this looks unrelated to the stuff I pushed to move to Spark 1.1.0.
> 
> The error is in snappy parsing during some R-like ops.
> 
> I don’t use native snappy myself. Is anyone else seeing this, or is it just cosmic rays?
> 
> 
> On Oct 29, 2014, at 4:43 PM, Apache Jenkins Server <je...@builds.apache.org> wrote:
> 
> See <https://builds.apache.org/job/mahout-nightly/org.apache.mahout$mahout-spark_2.10/1728/>
> 
> 

RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Andrew Palumbo <ap...@outlook.com>.
Built master with spark-1.1.0 compiled from source for hadoop-1.2.1, and everything tested OK.

> From: ap.dev@outlook.com
> To: dev@mahout.apache.org
> Subject: RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> Date: Fri, 31 Oct 2014 13:50:36 -0400
> 
> 
> Everything seems to be building ok now on my machine. Maybe there were some bad artifacts deployed yesterday?
> 
> using sources from:
> 
> https://github.com/pferrel/mahout/tree/hadoop-client 
> 
> [andy@localhost mahout]$ echo $SPARK_HOME
> 
> [andy@localhost mahout]$ echo $HADOOP_HOME
> /home/andy/apache_builds/hadoop_bin/hadoop-1.2.1
> [andy@localhost mahout]$ mvn clean install package -Dhadoop.version=1.2.1
> 
> {...}
> [INFO] ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Mahout Build Tools ................................ SUCCESS [19.083s]
> [INFO] Apache Mahout ..................................... SUCCESS [4.209s]
> [INFO] Mahout Math ....................................... SUCCESS [3:08.416s]
> [INFO] Mahout MapReduce Legacy ........................... SUCCESS [18:52.684s]
> [INFO] Mahout Integration ................................ SUCCESS [2:23.279s]
> [INFO] Mahout Examples ................................... SUCCESS [39.626s]
> [INFO] Mahout Release Package ............................ SUCCESS [0.333s]
> [INFO] Mahout Math Scala bindings ........................ SUCCESS [4:38.502s]
> [INFO] Mahout Spark bindings ............................. SUCCESS [8:09.620s]
> [INFO] Mahout Spark bindings shell ....................... SUCCESS [43.398s]
> [INFO] Mahout H2O backend ................................ SUCCESS [6:28.326s]
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 45:30.526s
> [INFO] Finished at: Fri Oct 31 13:38:23 EDT 2014
> [INFO] Final Memory: 55M/441M
> [INFO] ------------------------------------------------------------------------
> 
> 
> 
> > From: ap.dev@outlook.com
> > To: dev@mahout.apache.org
> > Subject: RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> > Date: Fri, 31 Oct 2014 12:59:58 -0400
> > 
> > That is, hadoop 1.2.1. No cluster, just my local machine.
> > 
> > Master seems to be building fine today.
> > 
> > I'm building and testing from Pat's hadoop-client branch now, using:
> > 
> >   $ mvn clean install package -Dhadoop.version=1.2.1  
> > 
> > With a clean maven repo and SPARK_HOME unset.
> > 
> > > From: ap.dev@outlook.com
> > > To: dev@mahout.apache.org
> > > Subject: RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> > > Date: Fri, 31 Oct 2014 12:44:49 -0400
> > > 
> > > No - hadoop 1.2.1.
> > > 
> > > > Subject: Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> > > > From: pat@occamsmachete.com
> > > > Date: Fri, 31 Oct 2014 09:41:34 -0700
> > > > To: dev@mahout.apache.org
> > > > 
> > > > Are you on hadoop 2.2?
> > > > 
> > > > On Oct 31, 2014, at 9:37 AM, Andrew Palumbo <ap...@outlook.com> wrote:
> > > > 
> > > > Yes, this is odd. To confuse things further, I cleaned out my local maven repo again this
> > > > morning, and this time built and tested without errors. I'm double-checking this again now.
> > > > 
> > > > 
> > > > > Subject: Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> > > > > From: pat@occamsmachete.com
> > > > > Date: Fri, 31 Oct 2014 09:26:56 -0700
> > > > > To: dev@mahout.apache.org
> > > > > 
> > > > > I think that’s because the Spark in the maven repos is tied to hadoop 2, and the default in master is 1.2.1.
> > > > > 
> > > > > Sounds like you are the closest to the build machines. Can you try https://github.com/pferrel/mahout/tree/hadoop-client
> > > > 
> > > > 
> > > > sure I'll try this.
> > > > 
> > > > 
> > > > > 
> > > > > This is a merge of Gokhan’s patch with master. It should default to hadoop 2 and theoretically should have all artifacts in alignment.
> > > > > 
> > > > > On Oct 30, 2014, at 8:11 PM, Andrew Palumbo <ap...@outlook.com> wrote:
> > > > > 
> > > > > 
> > > > > 
> > > > > I cleaned out my mvn repo, unset SPARK_HOME, and ran
> > > > > 
> > > > > $ mvn clean install
> > > > > 
> > > > > from the latest master. Now I am getting the failure you're talking about:
> > > > > 
> > > > > - ddsvd - naive - q=1 *** FAILED ***
> > > > > org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 28.0 failed 1 times, most recent failure: Lost task 9.0 in stage 28.0 (TID 81, localhost): java.io.IOException: PARSING_ERROR(2)
> > > > >       org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
> > > > > 
> > > > >> Subject: Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> > > > >> From: pat@occamsmachete.com
> > > > >> Date: Thu, 30 Oct 2014 19:10:19 -0700
> > > > >> To: dev@mahout.apache.org
> > > > >> 
> > > > >> I took Gokhan’s PR and merged the master with it and compiling with 
> > > > >> 
> > > > >> mvn clean install package -Dhadoop.version=1.2.1
> > > > >> 
> > > > >> I get the same build error as the nightly.
> > > > >> 
> > > > >> Changing back to the master it builds fine. The default hadoop version is 1.2.1 in master so I don’t need a profile or CLI options to build for my environment.
> > > > >> 
> > > > >> This seems like more than cosmic rays as Dmitriy guessed.
> > > > >> 
> > > > >> On Oct 30, 2014, at 12:41 PM, Dmitriy Lyubimov <dl...@gmail.com> wrote:
> > > > >> 
> > > > >> More likely a Spark thing.
> > > > >> 
> > > > >> The error is while using torrent broadcast. AFAIK that was not the default
> > > > >> choice until recently.
> > > > >> 
> > > > >> On Thu, Oct 30, 2014 at 10:27 AM, Suneel Marthi <sm...@apache.org> wrote:
> > > > >> 
> > > > >>> The nightly builds often fail due to running on an old machine, and the failure
> > > > >>> is also a function of the number of concurrent jobs that are running. If you
> > > > >>> look at the logs from the failure, it most likely would have failed due to
> > > > >>> a JVM crash (or something similar). It's the daily builds that we need to
> > > > >>> ensure are not failing.

RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Andrew Palumbo <ap...@outlook.com>.
Everything seems to be building ok now on my machine. Maybe there were some bad artifacts deployed yesterday?

using sources from:

https://github.com/pferrel/mahout/tree/hadoop-client 

[andy@localhost mahout]$ echo $SPARK_HOME

[andy@localhost mahout]$ echo $HADOOP_HOME
/home/andy/apache_builds/hadoop_bin/hadoop-1.2.1
[andy@localhost mahout]$ mvn clean install package -Dhadoop.version=1.2.1

{...}
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Mahout Build Tools ................................ SUCCESS [19.083s]
[INFO] Apache Mahout ..................................... SUCCESS [4.209s]
[INFO] Mahout Math ....................................... SUCCESS [3:08.416s]
[INFO] Mahout MapReduce Legacy ........................... SUCCESS [18:52.684s]
[INFO] Mahout Integration ................................ SUCCESS [2:23.279s]
[INFO] Mahout Examples ................................... SUCCESS [39.626s]
[INFO] Mahout Release Package ............................ SUCCESS [0.333s]
[INFO] Mahout Math Scala bindings ........................ SUCCESS [4:38.502s]
[INFO] Mahout Spark bindings ............................. SUCCESS [8:09.620s]
[INFO] Mahout Spark bindings shell ....................... SUCCESS [43.398s]
[INFO] Mahout H2O backend ................................ SUCCESS [6:28.326s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 45:30.526s
[INFO] Finished at: Fri Oct 31 13:38:23 EDT 2014
[INFO] Final Memory: 55M/441M
[INFO] ------------------------------------------------------------------------




RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Andrew Palumbo <ap...@outlook.com>.
That is, hadoop 1.2.1. No cluster, just my local machine.

Master seems to be building fine today.

I'm building and testing from Pat's hadoop-client branch now, using:

  $ mvn clean install package -Dhadoop.version=1.2.1  

With a clean maven repo and SPARK_HOME unset.
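The clean-rebuild procedure described in this thread (clean Maven repo, SPARK_HOME unset, hadoop.version pinned) can be sketched roughly as below. This is my reconstruction, not a script anyone posted; the variable names are mine, the repo-clean step is scoped to Mahout only as an illustration, and the snippet prints the commands instead of running them so they can be reviewed first:

```shell
# Rough sketch of the clean-rebuild steps described above (my reconstruction,
# not a script from the thread). Prints the commands rather than executing them.
HADOOP_VERSION=1.2.1
unset SPARK_HOME        # build against Mahout's own Spark artifacts

STEPS=(
  "rm -rf ~/.m2/repository/org/apache/mahout"   # clean local Maven repo (scoped to Mahout here)
  "mvn clean install package -Dhadoop.version=${HADOOP_VERSION}"
)
printf '%s\n' "${STEPS[@]}"
```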


RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Andrew Palumbo <ap...@outlook.com>.
No - hadoop 1.2.1.

> Subject: Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> From: pat@occamsmachete.com
> Date: Fri, 31 Oct 2014 09:41:34 -0700
> To: dev@mahout.apache.org
> 
> Are you on hadoop 2.2?
> 

Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Pat Ferrel <pa...@occamsmachete.com>.
Are you on hadoop 2.2?

On Oct 31, 2014, at 9:37 AM, Andrew Palumbo <ap...@outlook.com> wrote:

Yes this is odd.. To confuse things further, I cleaned out my local maven repo again this 
morning, and this time built and tested without errors. I'm double 
checking this again now.  




RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Andrew Palumbo <ap...@outlook.com>.
Yes, this is odd. To confuse things further, I cleaned out my local maven repo again this
morning, and this time built and tested without errors. I'm double-checking this again now.


> Subject: Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> From: pat@occamsmachete.com
> Date: Fri, 31 Oct 2014 09:26:56 -0700
> To: dev@mahout.apache.org
> 
> I think that’s because the Spark in the maven repos is tied to hadoop 2 and the default in the master is 1.2.1
> 
> Sounds like you are the closest to the build machines. Can you try https://github.com/pferrel/mahout/tree/hadoop-client


sure I'll try this.


> 
> This is a merge of Gokhan’s patch with master. It should default to hadoop 2 and theoretically should have all artifacts in alignment.
> 
> On Oct 30, 2014, at 8:11 PM, Andrew Palumbo <ap...@outlook.com> wrote:
> 
> 
> 
> > Subject: Re: Jenkins build became unstable: mahout-nightly » Mahout Spark 
> 
> I cleaned out my mvn repo, unset SPARK_HOME, and ran 
> 
>  $ mvn clean install 
> 
> from the latest master. now am getting the failure you're talking about:
> 
> - ddsvd - naive - q=1 *** FAILED ***
>  org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 28.0 failed 1 times, most recent failure: Lost task 9.0 in stage 28.0 (TID 81, localhost): java.io.IOException: PARSING_ERROR(2)
>        org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
> 
> 
> 
> 
> 
> bindings #1728
> > From: pat@occamsmachete.com
> > Date: Thu, 30 Oct 2014 19:10:19 -0700
> > To: dev@mahout.apache.org
> > 
> > I took Gokhan’s PR and merged the master with it and compiling with 
> > 
> > mvn clean install package -Dhadoop.version=1.2.1
> > 
> > I get the same build error as the nightly.
> > 
> > Changing back to the master it builds fine. The default hadoop version is 1.2.1 in master so I don’t need a profile or CLI options to build for my environment.
> > 
> > This seems like more than cosmic rays as Dmitriy guessed.
> > 
> > On Oct 30, 2014, at 12:41 PM, Dmitriy Lyubimov <dl...@gmail.com> wrote:
> > 
> > more likely spark thing .
> > 
> > the error is while using torrent broadcast. AFAIK that was not default
> > choice until recently.
> > 
> > On Thu, Oct 30, 2014 at 10:27 AM, Suneel Marthi <sm...@apache.org> wrote:
> > 
> >> The nightly builds often due to running on an old machine and the failure
> >> is also a function of the number of concurrent jobs that are running.  If u
> >> look at the logs from the failure, it most likely would have failed due to
> >> a JVM crash (or something similar).  Its the daily builds that we need to
> >> ensure are not failing.
> >> 
> >> 
> >> On Thu, Oct 30, 2014 at 1:21 PM, Andrew Palumbo <ap...@outlook.com>
> >> wrote:
> >> 
> >>> I just built and tested with no problems.  Probably just Jenkins acting
> >> up.
> >>> 
> >>>> Subject: Re: Jenkins build became unstable:  mahout-nightly » Mahout
> >>> Spark bindings #1728
> >>>> From: pat@occamsmachete.com
> >>>> Date: Thu, 30 Oct 2014 09:26:45 -0700
> >>>> To: dev@mahout.apache.org
> >>>> 
> >>>> At first blush this looks unrelated to the stuff I pushed to move to
> >>> Spark 1.1.0
> >>>> 
> >>>> The error is in snappy parsing during some R-like ops
> >>>> 
> >>>> I don’t use native snappy myself, is anyone else seeing this or is it
> >>> just  cosmic rays?
> >>>> 
> >>>> 
> >>>> On Oct 29, 2014, at 4:43 PM, Apache Jenkins Server <
> >>> jenkins@builds.apache.org> wrote:
> >>>> 
> >>>> See <
> >>> 
> >> https://builds.apache.org/job/mahout-nightly/org.apache.mahout$mahout-spark_2.10/1728/
> >>>> 
> >>>> 
> >>>> 
> >>> 
> >> 
> > 
> 		 	   		  
> 

Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Pat Ferrel <pa...@occamsmachete.com>.
I think that’s because the Spark in the Maven repos is tied to Hadoop 2, while the default in master is Hadoop 1.2.1.

Sounds like you are the closest to the build machines. Can you try https://github.com/pferrel/mahout/tree/hadoop-client

This is a merge of Gokhan’s patch with master. It should default to hadoop 2 and theoretically should have all artifacts in alignment.

On Oct 30, 2014, at 8:11 PM, Andrew Palumbo <ap...@outlook.com> wrote:



I cleaned out my mvn repo, unset SPARK_HOME, and ran

 $ mvn clean install

from the latest master. Now I am getting the failure you're talking about:

- ddsvd - naive - q=1 *** FAILED ***
 org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 28.0 failed 1 times, most recent failure: Lost task 9.0 in stage 28.0 (TID 81, localhost): java.io.IOException: PARSING_ERROR(2)
       org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)

> Subject: Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> From: pat@occamsmachete.com
> Date: Thu, 30 Oct 2014 19:10:19 -0700
> To: dev@mahout.apache.org
> 
> I took Gokhan’s PR and merged the master with it and compiling with 
> 
> mvn clean install package -Dhadoop.version=1.2.1
> 
> I get the same build error as the nightly.
> 
> Changing back to the master it builds fine. The default hadoop version is 1.2.1 in master so I don’t need a profile or CLI options to build for my environment.
> 
> This seems like more than cosmic rays as Dmitriy guessed.
> 
> On Oct 30, 2014, at 12:41 PM, Dmitriy Lyubimov <dl...@gmail.com> wrote:
> 
> more likely spark thing .
> 
> the error is while using torrent broadcast. AFAIK that was not default
> choice until recently.
> 
> On Thu, Oct 30, 2014 at 10:27 AM, Suneel Marthi <sm...@apache.org> wrote:
> 
>> The nightly builds often due to running on an old machine and the failure
>> is also a function of the number of concurrent jobs that are running.  If u
>> look at the logs from the failure, it most likely would have failed due to
>> a JVM crash (or something similar).  Its the daily builds that we need to
>> ensure are not failing.
>> 
>> 
>> On Thu, Oct 30, 2014 at 1:21 PM, Andrew Palumbo <ap...@outlook.com>
>> wrote:
>> 
>>> I just built and tested with no problems.  Probably just Jenkins acting
>> up.
>>> 
>>>> Subject: Re: Jenkins build became unstable:  mahout-nightly » Mahout
>>> Spark bindings #1728
>>>> From: pat@occamsmachete.com
>>>> Date: Thu, 30 Oct 2014 09:26:45 -0700
>>>> To: dev@mahout.apache.org
>>>> 
>>>> At first blush this looks unrelated to the stuff I pushed to move to
>>> Spark 1.1.0
>>>> 
>>>> The error is in snappy parsing during some R-like ops
>>>> 
>>>> I don’t use native snappy myself, is anyone else seeing this or is it
>>> just  cosmic rays?
>>>> 
>>>> 
>>>> On Oct 29, 2014, at 4:43 PM, Apache Jenkins Server <
>>> jenkins@builds.apache.org> wrote:
>>>> 
>>>> See <
>>> 
>> https://builds.apache.org/job/mahout-nightly/org.apache.mahout$mahout-spark_2.10/1728/
>>>> 
>>>> 
>>>> 
>>> 
>> 
> 


RE: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Andrew Palumbo <ap...@outlook.com>.

I cleaned out my mvn repo, unset SPARK_HOME, and ran

  $ mvn clean install

from the latest master. Now I am getting the failure you're talking about:

- ddsvd - naive - q=1 *** FAILED ***
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 9 in stage 28.0 failed 1 times, most recent failure: Lost task 9.0 in stage 28.0 (TID 81, localhost): java.io.IOException: PARSING_ERROR(2)
        org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)

> Subject: Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728
> From: pat@occamsmachete.com
> Date: Thu, 30 Oct 2014 19:10:19 -0700
> To: dev@mahout.apache.org
> 
> I took Gokhan’s PR and merged the master with it and compiling with 
> 
> mvn clean install package -Dhadoop.version=1.2.1
> 
> I get the same build error as the nightly.
> 
> Changing back to the master it builds fine. The default hadoop version is 1.2.1 in master so I don’t need a profile or CLI options to build for my environment.
> 
> This seems like more than cosmic rays as Dmitriy guessed.
> 
> On Oct 30, 2014, at 12:41 PM, Dmitriy Lyubimov <dl...@gmail.com> wrote:
> 
> more likely spark thing .
> 
> the error is while using torrent broadcast. AFAIK that was not default
> choice until recently.
> 
> On Thu, Oct 30, 2014 at 10:27 AM, Suneel Marthi <sm...@apache.org> wrote:
> 
> > The nightly builds often due to running on an old machine and the failure
> > is also a function of the number of concurrent jobs that are running.  If u
> > look at the logs from the failure, it most likely would have failed due to
> > a JVM crash (or something similar).  Its the daily builds that we need to
> > ensure are not failing.
> > 
> > 
> > On Thu, Oct 30, 2014 at 1:21 PM, Andrew Palumbo <ap...@outlook.com>
> > wrote:
> > 
> >> I just built and tested with no problems.  Probably just Jenkins acting
> > up.
> >> 
> >>> Subject: Re: Jenkins build became unstable:  mahout-nightly » Mahout
> >> Spark bindings #1728
> >>> From: pat@occamsmachete.com
> >>> Date: Thu, 30 Oct 2014 09:26:45 -0700
> >>> To: dev@mahout.apache.org
> >>> 
> >>> At first blush this looks unrelated to the stuff I pushed to move to
> >> Spark 1.1.0
> >>> 
> >>> The error is in snappy parsing during some R-like ops
> >>> 
> >>> I don’t use native snappy myself, is anyone else seeing this or is it
> >> just  cosmic rays?
> >>> 
> >>> 
> >>> On Oct 29, 2014, at 4:43 PM, Apache Jenkins Server <
> >> jenkins@builds.apache.org> wrote:
> >>> 
> >>> See <
> >> 
> > https://builds.apache.org/job/mahout-nightly/org.apache.mahout$mahout-spark_2.10/1728/
> >>> 
> >>> 
> >>> 
> >> 
> > 
> 

Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Pat Ferrel <pa...@occamsmachete.com>.
I took Gokhan’s PR, merged master into it, and compiled with

mvn clean install package -Dhadoop.version=1.2.1

I get the same build error as the nightly.

Changing back to master, it builds fine. The default Hadoop version is 1.2.1 in master, so I don’t need a profile or CLI options to build for my environment.

This seems like more than cosmic rays as Dmitriy guessed.

On Oct 30, 2014, at 12:41 PM, Dmitriy Lyubimov <dl...@gmail.com> wrote:

more likely spark thing .

the error is while using torrent broadcast. AFAIK that was not default
choice until recently.

On Thu, Oct 30, 2014 at 10:27 AM, Suneel Marthi <sm...@apache.org> wrote:

> The nightly builds often due to running on an old machine and the failure
> is also a function of the number of concurrent jobs that are running.  If u
> look at the logs from the failure, it most likely would have failed due to
> a JVM crash (or something similar).  Its the daily builds that we need to
> ensure are not failing.
> 
> 
> On Thu, Oct 30, 2014 at 1:21 PM, Andrew Palumbo <ap...@outlook.com>
> wrote:
> 
>> I just built and tested with no problems.  Probably just Jenkins acting
> up.
>> 
>>> Subject: Re: Jenkins build became unstable:  mahout-nightly » Mahout
>> Spark bindings #1728
>>> From: pat@occamsmachete.com
>>> Date: Thu, 30 Oct 2014 09:26:45 -0700
>>> To: dev@mahout.apache.org
>>> 
>>> At first blush this looks unrelated to the stuff I pushed to move to
>> Spark 1.1.0
>>> 
>>> The error is in snappy parsing during some R-like ops
>>> 
>>> I don’t use native snappy myself, is anyone else seeing this or is it
>> just  cosmic rays?
>>> 
>>> 
>>> On Oct 29, 2014, at 4:43 PM, Apache Jenkins Server <
>> jenkins@builds.apache.org> wrote:
>>> 
>>> See <
>> 
> https://builds.apache.org/job/mahout-nightly/org.apache.mahout$mahout-spark_2.10/1728/
>>> 
>>> 
>>> 
>> 
> 


Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Dmitriy Lyubimov <dl...@gmail.com>.
More likely a Spark thing.

The error occurs while using torrent broadcast. AFAIK that was not the
default choice until recently.
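
One way to test that theory (a sketch only — these are standard Spark 1.1.x
configuration keys, not settings anyone in this thread has confirmed as a fix)
is to force the older broadcast implementation and/or a non-Snappy codec in
conf/spark-defaults.conf before re-running the tests:

```
# conf/spark-defaults.conf — sketch of a workaround experiment (Spark 1.1.x keys)
# Fall back to the pre-torrent HTTP broadcast implementation:
spark.broadcast.factory      org.apache.spark.broadcast.HttpBroadcastFactory
# And/or take Snappy out of the picture for block compression:
spark.io.compression.codec   lzf
```

If the PARSING_ERROR goes away with either setting, that would point at the
torrent broadcast / Snappy combination rather than the Mahout changes.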

On Thu, Oct 30, 2014 at 10:27 AM, Suneel Marthi <sm...@apache.org> wrote:

> The nightly builds often due to running on an old machine and the failure
> is also a function of the number of concurrent jobs that are running.  If u
> look at the logs from the failure, it most likely would have failed due to
> a JVM crash (or something similar).  Its the daily builds that we need to
> ensure are not failing.
>
>
> On Thu, Oct 30, 2014 at 1:21 PM, Andrew Palumbo <ap...@outlook.com>
> wrote:
>
> > I just built and tested with no problems.  Probably just Jenkins acting
> up.
> >
> > > Subject: Re: Jenkins build became unstable:  mahout-nightly » Mahout
> > Spark bindings #1728
> > > From: pat@occamsmachete.com
> > > Date: Thu, 30 Oct 2014 09:26:45 -0700
> > > To: dev@mahout.apache.org
> > >
> > > At first blush this looks unrelated to the stuff I pushed to move to
> > Spark 1.1.0
> > >
> > > The error is in snappy parsing during some R-like ops
> > >
> > > I don’t use native snappy myself, is anyone else seeing this or is it
> > just  cosmic rays?
> > >
> > >
> > > On Oct 29, 2014, at 4:43 PM, Apache Jenkins Server <
> > jenkins@builds.apache.org> wrote:
> > >
> > > See <
> >
> https://builds.apache.org/job/mahout-nightly/org.apache.mahout$mahout-spark_2.10/1728/
> > >
> > >
> > >
> >
>

Re: Jenkins build became unstable: mahout-nightly » Mahout Spark bindings #1728

Posted by Suneel Marthi <sm...@apache.org>.
The nightly builds often fail because they run on an old machine, and the
failure is also a function of the number of concurrent jobs running.  If you
look at the logs from the failure, it most likely failed due to a JVM crash
(or something similar).  It's the daily builds that we need to ensure are not
failing.


On Thu, Oct 30, 2014 at 1:21 PM, Andrew Palumbo <ap...@outlook.com> wrote:

> I just built and tested with no problems.  Probably just Jenkins acting up.
>
> > Subject: Re: Jenkins build became unstable:  mahout-nightly » Mahout
> Spark bindings #1728
> > From: pat@occamsmachete.com
> > Date: Thu, 30 Oct 2014 09:26:45 -0700
> > To: dev@mahout.apache.org
> >
> > At first blush this looks unrelated to the stuff I pushed to move to
> Spark 1.1.0
> >
> > The error is in snappy parsing during some R-like ops
> >
> > I don’t use native snappy myself, is anyone else seeing this or is it
> just  cosmic rays?
> >
> >
> > On Oct 29, 2014, at 4:43 PM, Apache Jenkins Server <
> jenkins@builds.apache.org> wrote:
> >
> > See <
> https://builds.apache.org/job/mahout-nightly/org.apache.mahout$mahout-spark_2.10/1728/
> >
> >
> >
>