Posted to dev@spark.apache.org by Michael Armbrust <mi...@databricks.com> on 2017/03/30 23:09:07 UTC

[VOTE] Apache Spark 2.1.1 (RC2)

Please vote on releasing the following candidate as Apache Spark version
2.1.0. The vote is open until Sun, April 2nd, 2017 at 16:30 PST and passes
if a majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Spark 2.1.1
[ ] -1 Do not release this package because ...


To learn more about Apache Spark, please see http://spark.apache.org/

The tag to be voted on is v2.1.1-rc2
<https://github.com/apache/spark/tree/v2.1.1-rc2> (
02b165dcc2ee5245d1293a375a31660c9d4e1fa6)

List of JIRA tickets resolved can be found with this filter
<https://issues.apache.org/jira/browse/SPARK-20134?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.1>
.

The release files, including signatures, digests, etc. can be found at:
http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/pwendell.asc

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapachespark-1227/

The documentation corresponding to this release can be found at:
http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
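
For anyone verifying the binaries, the digest check amounts to recomputing the hash of the download and comparing it to the published value; a minimal sketch (the artifact name is a placeholder, and the signature itself is checked separately with gpg):

```python
# Sketch: compare a downloaded artifact against its published SHA-512 digest.
# File names are placeholders; gpg handles the signature check separately.
import hashlib

def sha512_of(path, chunk_size=1 << 20):
    """Stream the file so large release tarballs need not fit in memory."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

def digest_matches(path, published_hex):
    """True when the recomputed digest equals the published one."""
    return sha512_of(path) == published_hex.strip().lower()
```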


*FAQ*

*How can I help test this release?*

If you are a Spark user, you can help us test this release by taking an
existing Spark workload, running it on this release candidate, and
reporting any regressions.

*What should happen to JIRA tickets still targeting 2.1.1?*

Committers should look at those and triage. Extremely important bug fixes,
documentation, and API tweaks that impact compatibility should be worked on
immediately. Please retarget everything else to 2.1.2 or 2.2.0.

*But my bug isn't fixed!??!*

In order to make timely releases, we will typically not hold the release
unless the bug in question is a regression from 2.1.0.

*What happened to RC1?*

There were issues with the release packaging and, as a result, it was skipped.

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Michael Armbrust <mi...@databricks.com>.
In case it wasn't obvious by the appearance of RC3, this vote failed.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Marcelo Vanzin <va...@cloudera.com>.
On Sat, Apr 1, 2017 at 12:34 PM, Sean Owen <so...@cloudera.com> wrote:
> LocalityPlacementStrategySuite:
>  (... just hangs ...)

This test is very heavy on DNS requests... I tried to work around that
when I wrote it but couldn't; maybe I should try some more.

I wouldn't hold the release for that.

-- 
Marcelo

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscribe@spark.apache.org


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Mark Hamstra <ma...@clearstorydata.com>.
LocalityPlacementStrategySuite hangs -- I've definitely been seeing that one
for quite a while, not just with the 2.1.1 RCs; it happens on Ubuntu 16.10
but not on macOS Sierra.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Sean Owen <so...@cloudera.com>.
(Tiny nits: first line says '2.1.0', just a note for next copy/paste of the
email if needed. Also can we point people to an HTTPS URL to download
artifacts in this boilerplate?
https://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/ )

I'm testing on Ubuntu 16.10, with Java 8, with -Phive -Pyarn -Phadoop-2.7.
I am getting several intermittent failures:

- caching in memory, serialized, replicated *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before
30000 milliseconds elapsed
...

- caching on disk, replicated *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before
30000 milliseconds elapsed
...

- Unpersisting TorrentBroadcast on executors only in distributed mode ***
FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before
60000 milliseconds elapsed
...

- replicating blocks of object with class defined in repl *** FAILED ***
  isContain was true Interpreter output contained 'Exception':
...
  scala>      |      | java.util.concurrent.TimeoutException: Executors
were not up in 60 seconds
..

- using external shuffle service *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before
60000 milliseconds elapsed
...

LocalityPlacementStrategySuite:
 (... just hangs ...)


Only the last two are persistent. It might be an env issue or test issue,
so just wondering if anyone else sees these?
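
The timeouts above come from bounded waits of roughly this shape: poll until the expected number of executors has registered, and raise once the budget elapses, so a slow or overloaded environment can trip them without any product bug. A rough sketch (illustrative names, not Spark's actual test utilities):

```python
# Sketch of the bounded executor wait behind the timeouts quoted above.
# count_fn stands in for "how many executors have registered so far".
# Illustrative only; this is not Spark's actual test helper.
import time

class ExecutorTimeout(Exception):
    pass

def wait_for_executors(count_fn, expected, timeout_s, poll_s=0.01):
    """Poll count_fn until it reaches `expected`, or raise after timeout_s."""
    deadline = time.monotonic() + timeout_s
    while count_fn() < expected:
        if time.monotonic() > deadline:
            raise ExecutorTimeout(
                "Can't find %d executors before %.0f milliseconds elapsed"
                % (expected, timeout_s * 1000))
        time.sleep(poll_s)
```

On a constrained machine, executor registration can simply take longer than the 30- or 60-second budget, which is consistent with the intermittent failures being environmental.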



Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Sean Owen <so...@cloudera.com>.
This is maybe a blocker. See my suggested action about voting on the
current artifact, which I believe eliminates the possibly blocking part of
the issue in the short term.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Mridul Muralidharan <mr...@gmail.com>.
Hi,

https://issues.apache.org/jira/browse/SPARK-20202?jql=priority%20%3D%20Blocker%20AND%20affectedVersion%20%3D%20%222.1.1%22%20and%20project%3D%22spark%22


Indicates there is another blocker (SPARK-20197 should have come in
the list too, but was marked major).


Regards,
Mridul

On Tue, Apr 4, 2017 at 11:35 AM, Michael Armbrust
<mi...@databricks.com> wrote:
> Thanks for the comments everyone.  This vote fails.  Here's how I think we
> should proceed:
>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and report
> if this is a regression and if there is an easy fix that we should wait for.
>
> For all the other test failures, please take the time to look through JIRA
> and open an issue if one does not already exist so that we can triage if
> these are just environmental issues.  If I don't hear any objections I'm
> going to go ahead with RC3 tomorrow.
>
> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung <fe...@hotmail.com>
> wrote:
>>
>> -1
>> sorry, found an issue with SparkR CRAN check.
>> Opened SPARK-20197 and working on fix.
>>
>> ________________________________
>> From: holden.karau@gmail.com <ho...@gmail.com> on behalf of Holden
>> Karau <ho...@pigscanfly.ca>
>> Sent: Friday, March 31, 2017 6:25:20 PM
>> To: Xiao Li
>> Cc: Michael Armbrust; dev@spark.apache.org
>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>
>> -1 (non-binding)
>>
> >> Python packaging doesn't seem to have quite worked out (looking at
> >> PKG-INFO, the description is "Description: !!!!! missing pandoc do not upload
> >> to PyPI !!!!"); ideally it would be nice to have this be a version we
> >> can upload to PyPI.
>> Building this on my own machine results in a longer description.
>>
>> My guess is that whichever machine was used to package this is missing the
>> pandoc executable (or possibly pypandoc library).
>>
>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <ga...@gmail.com> wrote:
>>>
>>> +1
>>>
>>> Xiao
>>>
>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust <mi...@databricks.com>:
>>>>
>>>> Please vote on releasing the following candidate as Apache Spark version
>>>> 2.1.0. The vote is open until Sun, April 2nd, 2018 at 16:30 PST and passes
>>>> if a majority of at least 3 +1 PMC votes are cast.
>>>>
>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>> [ ] -1 Do not release this package because ...
>>>>
>>>>
>>>> To learn more about Apache Spark, please see http://spark.apache.org/
>>>>
>>>> The tag to be voted on is v2.1.1-rc2
>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>
>>>> List of JIRA tickets resolved can be found with this filter.
>>>>
>>>> The release files, including signatures, digests, etc. can be found at:
>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>
>>>> Release artifacts are signed with the following key:
>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>
>>>> The staging repository for this release can be found at:
>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>
>>>> The documentation corresponding to this release can be found at:
>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>>
>>>>
>>>> FAQ
>>>>
>>>> How can I help test this release?
>>>>
>>>> If you are a Spark user, you can help us test this release by taking an
>>>> existing Spark workload and running on this release candidate, then
>>>> reporting any regressions.
>>>>
>>>> What should happen to JIRA tickets still targeting 2.1.1?
>>>>
>>>> Committers should look at those and triage. Extremely important bug
>>>> fixes, documentation, and API tweaks that impact compatibility should be
>>>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>>>
>>>> But my bug isn't fixed!??!
>>>>
>>>> In order to make timely releases, we will typically not hold the release
>>>> unless the bug in question is a regression from 2.1.0.
>>>>
>>>> What happened to RC1?
>>>>
>>>> There were issues with the release packaging and as a result was
>>>> skipped.
>>>
>>>
>>
>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>
>

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscribe@spark.apache.org
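
Holden's diagnosis above matches a common setup.py pattern: convert README.md to reStructuredText with pypandoc for the PyPI long description, and substitute a loud placeholder when pandoc is unavailable. A minimal sketch of that pattern (illustrative, not Spark's actual setup.py; `convert` stands in for pypandoc):

```python
# Sketch of the pypandoc-fallback pattern behind the placeholder description.
# `convert` stands in for pypandoc.convert_text; passing None simulates a
# packaging machine with no pandoc installed. Not Spark's actual setup.py.

FALLBACK = "!!!!! missing pandoc do not upload to PyPI !!!!"

def long_description(readme_md, convert=None):
    """Return the PyPI long description, or the placeholder without pandoc."""
    if convert is None:
        return FALLBACK
    try:
        return convert(readme_md, "rst", format="md")
    except OSError:  # pypandoc imports, but the pandoc binary is missing
        return FALLBACK
```

A packaging machine without pandoc takes the fallback path, producing exactly the string found in PKG-INFO; installing pandoc (and pypandoc) on that machine removes it.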


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
I think this is Java 8 vs. Java 7: if you look at the previous build you see
a lot of the same missing classes, but tagged as "warning" rather than
"error". All in all, I think it makes sense to stick to JDK 7 for the
legacy builds, which have been built with it previously.

If there is consensus on that I'm happy to update the env variables for the
RC3 build to set a JDK7 JAVA_HOME (but I'd want to double check with
someone about which jobs need to be updated to make sure I don't miss any).


-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Sean Owen <so...@cloudera.com>.
I don't think this is an example of Java 8 javadoc being more strict; it is
not finding classes, not complaining about syntax.
(Hyukjin cleaned up all of the javadoc 8 errors in master, and they're
different and much more extensive!)

It wouldn't necessarily break anything to build with Java 8 because it'll
still emit Java 7 bytecode, etc.

That said, it may very well be that it is somehow due to Java 7 vs 8, and
is probably best to stick to 1.7 in the release build.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Ryan Blue <rb...@netflix.com.INVALID>.
I've hit this before, where Javadoc for 1.8 is much more strict than 1.7.

I think we should definitely use Java 1.7 for the release if we used it for
the previous releases in the 2.1 line. We don't want to break java 1.7
users in a patch release.

rb

On Fri, Apr 14, 2017 at 5:21 PM, Holden Karau <ho...@pigscanfly.ca> wrote:

> Ok and with a bit more digging between RC2 and RC3 we apparently switched
> which JVM we are building the docs with.
>
> The relevant side by side diff of the build logs (
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/consoleFull
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/59/consoleFull ):
>
> HEAD is now at 2ed19cf... Preparing Spark release v2.1.1-rc3  | HEAD is now at 02b165d... Preparing Spark release v2.1.1-rc2
> Checked out Spark git hash 2ed19cf                            | Checked out Spark git hash 02b165d
> Building Spark docs                                             Building Spark docs
> Configuration file: /home/jenkins/workspace/spark-release-doc   Configuration file: /home/jenkins/workspace/spark-release-doc
> Moving to project root and building API docs.                   Moving to project root and building API docs.
> Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /   Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /
> Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.             | Using /usr/java/jdk1.7.0_79 as default JAVA_HOME.
> Note, this will be overridden by -java-home if it is set.       Note, this will be overridden by -java-home if it is set.
>
> There have been some known issues with building the docs with JDK8 and I
> believe those fixes are in mainline, and we could cherry pick these changes
> in -- but I think it might be more reasonable to just build the 2.1 docs
> with JDK7.
>
> What do people think?
>
>
> On Fri, Apr 14, 2017 at 4:53 PM, Holden Karau <ho...@pigscanfly.ca>
> wrote:
>
>> At first glance the error seems similar to one Pedro Rodriguez ran into
> >> during 2.0, so I'm looping Pedro in if they happen to have any insight into
>> what was the cause last time.
>>
>> On Fri, Apr 14, 2017 at 4:40 PM, Holden Karau <ho...@pigscanfly.ca>
>> wrote:
>>
>>> Sure, let me dig into it :)
>>>
>>> On Fri, Apr 14, 2017 at 4:21 PM, Michael Armbrust <
>>> michael@databricks.com> wrote:
>>>
>>>> Have time to figure out why the doc build failed?
> >>>> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/console
>>>>
>>>> On Thu, Apr 13, 2017 at 9:39 PM, Holden Karau <ho...@pigscanfly.ca>
>>>> wrote:
>>>>
>>>>> If it would help I'd be more than happy to look at kicking off the
> >>>>> packaging for RC3 since I've been poking around in Jenkins a bit (for SPARK-20216
>>>>> & friends) (I'd still probably need some guidance from a previous release
>>>>> coordinator so I understand if that's not actually faster).
>>>>>
>>>>> On Mon, Apr 10, 2017 at 6:39 PM, DB Tsai <db...@dbtsai.com> wrote:
>>>>>
>>>>>> I backported the fix into both branch-2.1 and branch-2.0. Thanks.
>>>>>>
>>>>>> Sincerely,
>>>>>>
>>>>>> DB Tsai
>>>>>> ----------------------------------------------------------
>>>>>> Web: https://www.dbtsai.com
>>>>>> PGP Key ID: 0x5CED8B896A6BDFA0
>>>>>>
>>>>>>
>>>>>> On Mon, Apr 10, 2017 at 4:20 PM, Ryan Blue <rb...@netflix.com> wrote:
>>>>>> > DB,
>>>>>> >
> >>>>>> > This vote already failed and there isn't an RC3 vote yet. If you
> >>>>>> > backport the changes to branch-2.1 they will make it into the next RC.
>>>>>> >
>>>>>> > rb
>>>>>> >
>>>>>> > On Mon, Apr 10, 2017 at 3:55 PM, DB Tsai <db...@dbtsai.com> wrote:
>>>>>> >>
>>>>>> >> -1
>>>>>> >>
> >>>>>> >> I think that back-porting SPARK-20270 and SPARK-18555 is very
> >>>>>> >> important, since there is a critical bug where na.fill will mess up
> >>>>>> >> the data in Long columns even when the data isn't null.
>>>>>> >>
>>>>>> >> Thanks.
>>>>>> >>
>>>>>> >>
>>>>>> >> Sincerely,
>>>>>> >>
>>>>>> >> DB Tsai
>>>>>> >> ----------------------------------------------------------
>>>>>> >> Web: https://www.dbtsai.com
>>>>>> >> PGP Key ID: 0x5CED8B896A6BDFA0
>>>>>> >>
> >>>>>> >> On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <holden@pigscanfly.ca>
> >>>>>> >> wrote:
>>>>>> >>>
>>>>>> >>> Following up, the issues with missing pypandoc/pandoc on the
>>>>>> >>> packaging machine have been resolved.
>>>>>> >>>
>>>>>> >>> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <
>>>>>> holden@pigscanfly.ca>
>>>>>> >>> wrote:
>>>>>> >>>>
>>>>>> >>>> See SPARK-20216, if Michael can let me know which machine is
>>>>>> being used
>>>>>> >>>> for packaging I can see if I can install pandoc on it (should be
>>>>>> simple but
>>>>>> >>>> I know the Jenkins cluster is a bit on the older side).
>>>>>> >>>>
>>>>>> >>>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <
>>>>>> holden@pigscanfly.ca>
>>>>>> >>>> wrote:
>>>>>> >>>>>
>>>>>> >>>>> So the fix is installing pandoc on whichever machine is used for
>>>>>> >>>>> packaging. I thought that was generally done on the machine of
>>>>>> the person
>>>>>> >>>>> rolling the release so I wasn't sure it made sense as a JIRA,
>>>>>> but from
>>>>>> >>>>> chatting with Josh it sounds like that part might be on one of
>>>>>> >>>>> the Jenkins workers - is there a fixed one that is used?
>>>>>> >>>>>
>>>>>> >>>>> Regardless I'll file a JIRA for this when I get back in front
>>>>>> of my
>>>>>> >>>>> desktop (~1 hour or so).
>>>>>> >>>>>
>>>>>> >>>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust
>>>>>> >>>>> <mi...@databricks.com> wrote:
>>>>>> >>>>>>
>>>>>> >>>>>> Thanks for the comments everyone.  This vote fails.  Here's
>>>>>> how I
>>>>>> >>>>>> think we should proceed:
>>>>>> >>>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>>>>> >>>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a
>>>>>> JIRA and
>>>>>> >>>>>> report if this is a regression and if there is an easy fix
>>>>>> that we should
>>>>>> >>>>>> wait for.
>>>>>> >>>>>>
>>>>>> >>>>>> For all the other test failures, please take the time to look
>>>>>> through
>>>>>> >>>>>> JIRA and open an issue if one does not already exist so that
>>>>>> we can triage
>>>>>> >>>>>> if these are just environmental issues.  If I don't hear any
>>>>>> objections I'm
>>>>>> >>>>>> going to go ahead with RC3 tomorrow.
>>>>>> >>>>>>
>>>>>> >>>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung
>>>>>> >>>>>> <fe...@hotmail.com> wrote:
>>>>>> >>>>>>>
>>>>>> >>>>>>> -1
>>>>>> >>>>>>> sorry, found an issue with SparkR CRAN check.
>>>>>> >>>>>>> Opened SPARK-20197 and working on fix.
>>>>>> >>>>>>>
>>>>>> >>>>>>> ________________________________
>>>>>> >>>>>>> From: holden.karau@gmail.com <ho...@gmail.com> on
>>>>>> behalf of
>>>>>> >>>>>>> Holden Karau <ho...@pigscanfly.ca>
>>>>>> >>>>>>> Sent: Friday, March 31, 2017 6:25:20 PM
>>>>>> >>>>>>> To: Xiao Li
>>>>>> >>>>>>> Cc: Michael Armbrust; dev@spark.apache.org
>>>>>> >>>>>>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>>>>> >>>>>>>
>>>>>> >>>>>>> -1 (non-binding)
>>>>>> >>>>>>>
>>>>>> >>>>>>> Python packaging doesn't seem to have quite worked out
>>>>>> (looking at
>>>>>> >>>>>>> PKG-INFO the description is "Description: !!!!! missing
>>>>>> pandoc do not upload
>>>>>> to PyPI !!!!"), ideally it would be nice to have this as a
>>>>>> version we upload to PyPI.
>>>>>> >>>>>>> Building this on my own machine results in a longer
>>>>>> description.
>>>>>> >>>>>>>
>>>>>> >>>>>>> My guess is that whichever machine was used to package this is
>>>>>> >>>>>>> missing the pandoc executable (or possibly pypandoc library).
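[Editor's note on why PKG-INFO ends up with that warning string: pyspark's setup tries to convert the Markdown README with pypandoc for the PyPI long description, and when pandoc/pypandoc is unavailable it falls back to a deliberately loud placeholder so a broken artifact is easy to spot. A hedged sketch of that guard — the function and fallback wording mirror the symptom quoted above but are illustrative, not the exact setup.py code:

```python
def long_description(readme_md, convert=None):
    # Illustrative guard: use a pandoc-backed converter when one is
    # available; otherwise return a loud placeholder that makes an
    # unconverted artifact obvious in PKG-INFO.
    if convert is None:
        # In the real setup this branch is hit when `import pypandoc`
        # fails or the pandoc binary is missing on the build machine.
        return "!!!!! missing pandoc do not upload to PyPI !!!!"
    return convert(readme_md)

assert "missing pandoc" in long_description("# Apache Spark")
```

The eventual fix, per the thread, was simply installing pandoc/pypandoc on whichever machine rolls the packages, so the real converter branch runs.]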
>>>>>> >>>>>>>
>>>>>> >>>>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <
>>>>>> gatorsmile@gmail.com>
>>>>>> >>>>>>> wrote:
>>>>>> >>>>>>>>
>>>>>> >>>>>>>> +1
>>>>>> >>>>>>>>
>>>>>> >>>>>>>> Xiao
>>>>>> >>>>>>>>
>>>>>> >>>>>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust
>>>>>> >>>>>>>> <mi...@databricks.com>:
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> Please vote on releasing the following candidate as Apache
>>>>>> Spark
>>>>>> >>>>>>>>> version 2.1.1. The vote is open until Sun, April 2nd, 2017
>>>>>> >>>>>>>>> at 16:30 PST and
>>>>>> >>>>>>>>> passes if a majority of at least 3 +1 PMC votes are cast.
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>>>> >>>>>>>>> [ ] -1 Do not release this package because ...
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> To learn more about Apache Spark, please see
>>>>>> >>>>>>>>> http://spark.apache.org/
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> The tag to be voted on is v2.1.1-rc2
>>>>>> >>>>>>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> List of JIRA tickets resolved can be found with this filter.
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> The release files, including signatures, digests, etc. can
>>>>>> be found
>>>>>> >>>>>>>>> at:
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> Release artifacts are signed with the following key:
>>>>>> >>>>>>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> The staging repository for this release can be found at:
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> The documentation corresponding to this release can be
>>>>>> found at:
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> FAQ
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> How can I help test this release?
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> If you are a Spark user, you can help us test this release
>>>>>> by
>>>>>> >>>>>>>>> taking an existing Spark workload and running on this
>>>>>> release candidate,
>>>>>> >>>>>>>>> then reporting any regressions.
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> What should happen to JIRA tickets still targeting 2.1.1?
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> Committers should look at those and triage. Extremely
>>>>>> important bug
>>>>>> >>>>>>>>> fixes, documentation, and API tweaks that impact
>>>>>> compatibility should be
>>>>>> >>>>>>>>> worked on immediately. Everything else please retarget to
>>>>>> 2.1.2 or 2.2.0.
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> But my bug isn't fixed!??!
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> In order to make timely releases, we will typically not
>>>>>> hold the
>>>>>> >>>>>>>>> release unless the bug in question is a regression from
>>>>>> 2.1.0.
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> What happened to RC1?
>>>>>> >>>>>>>>>
>>>>>> >>>>>>>>> There were issues with the release packaging and as a
>>>>>> >>>>>>>>> result it was skipped.
>>>>>> >>>>>>>>
>>>>>> >>>>>>>>
>>>>>> >>>>>>>
>>>>>> >>>>>>>
>>>>>> >>>>>>>
>>>>>> >>>>>>> --
>>>>>> >>>>>>> Cell : 425-233-8271
>>>>>> >>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>> >>>>>>
>>>>>> >>>>>>
>>>>>> >>>>> --
>>>>>> >>>>> Cell : 425-233-8271
>>>>>> >>>>> Twitter: https://twitter.com/holdenkarau
>>>>>> >>>>
>>>>>> >>>>
>>>>>> >>>>
>>>>>> >>>>
>>>>>> >>>> --
>>>>>> >>>> Cell : 425-233-8271
>>>>>> >>>> Twitter: https://twitter.com/holdenkarau
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>
>>>>>> >>> --
>>>>>> >>> Cell : 425-233-8271
>>>>>> >>> Twitter: https://twitter.com/holdenkarau
>>>>>> >>
>>>>>> >>
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > --
>>>>>> > Ryan Blue
>>>>>> > Software Engineer
>>>>>> > Netflix
>>>>>>
>>>>>> ---------------------------------------------------------------------
>>>>>> To unsubscribe e-mail: dev-unsubscribe@spark.apache.org
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Cell : 425-233-8271
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Cell : 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>
>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>
>
>
> --
> Cell : 425-233-8271
> Twitter: https://twitter.com/holdenkarau
>



-- 
Ryan Blue
Software Engineer
Netflix

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
OK, and with a bit more digging: between RC2 and RC3 we apparently switched
which JVM we are building the docs with.

The relevant side-by-side diff of the build logs (
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/consoleFull
vs.
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/59/consoleFull
):

HEAD is now at 2ed19cf... Preparing Spark release v2.1.1-rc3    | HEAD is now at 02b165d... Preparing Spark release v2.1.1-rc2
Checked out Spark git hash 2ed19cf                              | Checked out Spark git hash 02b165d
Building Spark docs                                               Building Spark docs
Configuration file: /home/jenkins/workspace/spark-release-doc     Configuration file: /home/jenkins/workspace/spark-release-doc
Moving to project root and building API docs.                     Moving to project root and building API docs.
Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /     Running 'build/sbt -Pkinesis-asl clean compile unidoc' from /
Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.               | Using /usr/java/jdk1.7.0_79 as default JAVA_HOME.
Note, this will be overridden by -java-home if it is set.         Note, this will be overridden by -java-home if it is set.
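[Editor's note: the switch is easy to confirm mechanically by pulling the "Using ... as default JAVA_HOME." line out of each console log and comparing versions. A small illustrative helper; the two log lines are the ones quoted above:

```python
import re

def jdk_version(log_text):
    # Extract the JDK version from sbt's "Using ... as default JAVA_HOME." line.
    m = re.search(r"Using /usr/java/jdk([\d._]+) as default JAVA_HOME", log_text)
    return m.group(1) if m else None

rc3_log = "Using /usr/java/jdk1.8.0_60 as default JAVA_HOME."  # docs/60 build (RC3)
rc2_log = "Using /usr/java/jdk1.7.0_79 as default JAVA_HOME."  # docs/59 build (RC2)

assert jdk_version(rc3_log) == "1.8.0_60"
assert jdk_version(rc2_log) == "1.7.0_79"
assert jdk_version(rc3_log) != jdk_version(rc2_log)  # the builds used different JDKs
```
]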

There have been some known issues with building the docs with JDK8. I
believe the fixes for those are in mainline and we could cherry-pick them
in, but I think it might be more reasonable to just build the 2.1 docs
with JDK7.

What do people think?


On Fri, Apr 14, 2017 at 4:53 PM, Holden Karau <ho...@pigscanfly.ca> wrote:

> At first glance the error seems similar to one Pedro Rodriguez ran into
> during 2.0, so I'm looping Pedro in if they happen to have any insight into
> what was the cause last time.
>
> On Fri, Apr 14, 2017 at 4:40 PM, Holden Karau <ho...@pigscanfly.ca>
> wrote:
>
>> Sure, let me dig into it :)
>>
>> On Fri, Apr 14, 2017 at 4:21 PM, Michael Armbrust <michael@databricks.com
>> > wrote:
>>
>>> Have time to figure out why the doc build failed?
>>> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/console
>>>



-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
At first glance the error seems similar to one Pedro Rodriguez ran into
during 2.0, so I'm looping Pedro in, in case they have any insight into
what the cause was last time.

On Fri, Apr 14, 2017 at 4:40 PM, Holden Karau <ho...@pigscanfly.ca> wrote:

> Sure, let me dig into it :)
>
> On Fri, Apr 14, 2017 at 4:21 PM, Michael Armbrust <mi...@databricks.com>
> wrote:
>
>> Have time to figure out why the doc build failed?
>> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/console
>>



-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
Sure, let me dig into it :)

On Fri, Apr 14, 2017 at 4:21 PM, Michael Armbrust <mi...@databricks.com>
wrote:

> Have time to figure out why the doc build failed?
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/console
>
> On Thu, Apr 13, 2017 at 9:39 PM, Holden Karau <ho...@pigscanfly.ca>
> wrote:
>
>> If it would help I'd be more than happy to look at kicking off the
>> packaging for RC3 since I'v been poking around in Jenkins a bit (for SPARK-20216
>> & friends) (I'd still probably need some guidance from a previous release
>> coordinator so I understand if that's not actually faster).
>>
>> On Mon, Apr 10, 2017 at 6:39 PM, DB Tsai <db...@dbtsai.com> wrote:
>>
>>> I backported the fix into both branch-2.1 and branch-2.0. Thanks.
>>>
>>> Sincerely,
>>>
>>> DB Tsai
>>> ----------------------------------------------------------
>>> Web: https://www.dbtsai.com
>>> PGP Key ID: 0x5CED8B896A6BDFA0
>>>
>>>
>>> On Mon, Apr 10, 2017 at 4:20 PM, Ryan Blue <rb...@netflix.com> wrote:
>>> > DB,
>>> >
>>> > This vote already failed and there isn't a RC3 vote yet. If you
>>> backport the
>>> > changes to branch-2.1 they will make it into the next RC.
>>> >
>>> > rb
>>> >
>>> > On Mon, Apr 10, 2017 at 3:55 PM, DB Tsai <db...@dbtsai.com> wrote:
>>> >>
>>> >> -1
>>> >>
>>> >> I think that back-porting SPARK-20270 and SPARK-18555 are very
>>> important
>>> >> since it's a critical bug that na.fill will mess up the data in Long
>>> even
>>> >> the data isn't null.
>>> >>
>>> >> Thanks.
>>> >>
>>> >>
>>> >> Sincerely,
>>> >>
>>> >> DB Tsai
>>> >> ----------------------------------------------------------
>>> >> Web: https://www.dbtsai.com
>>> >> PGP Key ID: 0x5CED8B896A6BDFA0
>>> >>
>>> >> On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <ho...@pigscanfly.ca>
>>> >> wrote:
>>> >>>
>>> >>> Following up, the issues with missing pypandoc/pandoc on the
>>> packaging
>>> >>> machine has been resolved.
>>> >>>
>>> >>> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <ho...@pigscanfly.ca>
>>> >>> wrote:
>>> >>>>
>>> >>>> See SPARK-20216, if Michael can let me know which machine is being
>>> used
>>> >>>> for packaging I can see if I can install pandoc on it (should be
>>> simple but
>>> >>>> I know the Jenkins cluster is a bit on the older side).
>>> >>>>
>>> >>>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <ho...@pigscanfly.ca>
>>> >>>> wrote:
>>> >>>>>
>>> >>>>> So the fix is installing pandoc on whichever machine is used for
>>> >>>>> packaging. I thought that was generally done on the machine of the
>>> person
>>> >>>>> rolling the release so I wasn't sure it made sense as a JIRA, but
>>> from
>>> >>>>> chatting with Josh it sounds like that part might be on of the
>>> Jenkins
>>> >>>>> workers - is there a fixed one that is used?
>>> >>>>>
>>> >>>>> Regardless I'll file a JIRA for this when I get back in front of my
>>> >>>>> desktop (~1 hour or so).
>>> >>>>>
>>> >>>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust
>>> >>>>> <mi...@databricks.com> wrote:
>>> >>>>>>
>>> >>>>>> Thanks for the comments everyone.  This vote fails.  Here's how I
>>> >>>>>> think we should proceed:
>>> >>>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>> >>>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA
>>> and
>>> >>>>>> report if this is a regression and if there is an easy fix that
>>> we should
>>> >>>>>> wait for.
>>> >>>>>>
>>> >>>>>> For all the other test failures, please take the time to look
>>> through
>>> >>>>>> JIRA and open an issue if one does not already exist so that we
>>> can triage
>>> >>>>>> if these are just environmental issues.  If I don't hear any
>>> objections I'm
>>> >>>>>> going to go ahead with RC3 tomorrow.
>>> >>>>>>
>>> >>>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung
>>> >>>>>> <fe...@hotmail.com> wrote:
>>> >>>>>>>
>>> >>>>>>> -1
>>> >>>>>>> sorry, found an issue with SparkR CRAN check.
>>> >>>>>>> Opened SPARK-20197 and working on fix.
>>> >>>>>>>
>>> >>>>>>> ________________________________
>>> >>>>>>> From: holden.karau@gmail.com <ho...@gmail.com> on behalf
>>> of
>>> >>>>>>> Holden Karau <ho...@pigscanfly.ca>
>>> >>>>>>> Sent: Friday, March 31, 2017 6:25:20 PM
>>> >>>>>>> To: Xiao Li
>>> >>>>>>> Cc: Michael Armbrust; dev@spark.apache.org
>>> >>>>>>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>> >>>>>>>
>>> >>>>>>> -1 (non-binding)
>>> >>>>>>>
>>> >>>>>>> Python packaging doesn't seem to have quite worked out (looking
>>> at
>>> >>>>>>> PKG-INFO the description is "Description: !!!!! missing pandoc
>>> do not upload
>>> >>>>>>> to PyPI !!!!"), ideally it would be nice to have this as a
>>> version we
>>> >>>>>>> upload to PyPI.
>>> >>>>>>> Building this on my own machine results in a longer description.
>>> >>>>>>>
>>> >>>>>>> My guess is that whichever machine was used to package this is
>>> >>>>>>> missing the pandoc executable (or possibly pypandoc library).
>>> >>>>>>>
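The sentinel description Holden quotes is exactly the kind of guard a setup.py can emit when conversion tooling is absent. A hypothetical sketch of that pattern (the function name and structure here are illustrative, not pyspark's actual setup code):

```python
# Hypothetical sketch of a setup.py guard: convert the markdown README to
# reST for PyPI via pypandoc, and fall back to a loud sentinel when either
# the pypandoc library or the pandoc binary is missing.
SENTINEL = "!!!!! missing pandoc do not upload to PyPI !!!!"

def long_description(markdown_text: str) -> str:
    try:
        import pypandoc  # also needs the pandoc binary installed on the machine
        return pypandoc.convert_text(markdown_text, "rst", format="md")
    except (ImportError, OSError):  # OSError: pypandoc present, pandoc absent
        return SENTINEL

description = long_description("# Apache Spark")
```

A release script could then refuse to upload while the description still equals the sentinel, rather than relying on someone spotting it in PKG-INFO.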
>>> >>>>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <ga...@gmail.com>
>>> >>>>>>> wrote:
>>> >>>>>>>>
>>> >>>>>>>> +1
>>> >>>>>>>>
>>> >>>>>>>> Xiao
>>> >>>>>>>>
>>> >>>>>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust
>>> >>>>>>>> <mi...@databricks.com>:
>>> >>>>>>>>>
>>> >>>>>>>>> Please vote on releasing the following candidate as Apache
>>> Spark
>>> >>>>>>>>> version 2.1.1. The vote is open until Sun, April 2nd, 2017 at
>>> 16:30 PST and
>>> >>>>>>>>> passes if a majority of at least 3 +1 PMC votes are cast.
>>> >>>>>>>>>
>>> >>>>>>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>> >>>>>>>>> [ ] -1 Do not release this package because ...
>>> >>>>>>>>>
>>> >>>>>>>>>
>>> >>>>>>>>> To learn more about Apache Spark, please see
>>> >>>>>>>>> http://spark.apache.org/
>>> >>>>>>>>>
>>> >>>>>>>>> The tag to be voted on is v2.1.1-rc2
>>> >>>>>>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>> >>>>>>>>>
>>> >>>>>>>>> List of JIRA tickets resolved can be found with this filter.
>>> >>>>>>>>>
>>> >>>>>>>>> The release files, including signatures, digests, etc. can be
>>> found
>>> >>>>>>>>> at:
>>> >>>>>>>>>
>>> >>>>>>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>> >>>>>>>>>
>>> >>>>>>>>> Release artifacts are signed with the following key:
>>> >>>>>>>>> https://people.apache.org/keys/committer/pwendell.asc
>>> >>>>>>>>>
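For anyone verifying the staged artifacts before voting, the digest side of the check can be sketched as below (contents and names are illustrative stand-ins; the signature side would use `gpg --import pwendell.asc` followed by `gpg --verify` on the `.asc` files, which has no stdlib equivalent):

```python
# Sketch of digest verification for a downloaded release artifact: recompute
# the SHA-512 locally and compare it with the published value. The payload
# here is a stand-in so the example is runnable anywhere.
import hashlib

artifact = b"stand-in artifact bytes"
published_digest = hashlib.sha512(artifact).hexdigest()  # from the .sha file

def digest_matches(data: bytes, expected_hex: str) -> bool:
    """Recompute SHA-512 over the downloaded bytes and compare."""
    return hashlib.sha512(data).hexdigest() == expected_hex

assert digest_matches(artifact, published_digest)
assert not digest_matches(artifact + b"tampered", published_digest)
```

The layout of the published `.sha` files may differ from tool to tool, so comparing the hex digests by eye after recomputing them works just as well.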
>>> >>>>>>>>> The staging repository for this release can be found at:
>>> >>>>>>>>>
>>> >>>>>>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>> >>>>>>>>>
>>> >>>>>>>>> The documentation corresponding to this release can be found
>>> at:
>>> >>>>>>>>>
>>> >>>>>>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>> >>>>>>>>>
>>> >>>>>>>>>
>>> >>>>>>>>> FAQ
>>> >>>>>>>>>
>>> >>>>>>>>> How can I help test this release?
>>> >>>>>>>>>
>>> >>>>>>>>> If you are a Spark user, you can help us test this release by
>>> >>>>>>>>> taking an existing Spark workload and running on this release
>>> candidate,
>>> >>>>>>>>> then reporting any regressions.
>>> >>>>>>>>>
>>> >>>>>>>>> What should happen to JIRA tickets still targeting 2.1.1?
>>> >>>>>>>>>
>>> >>>>>>>>> Committers should look at those and triage. Extremely
>>> important bug
>>> >>>>>>>>> fixes, documentation, and API tweaks that impact compatibility
>>> should be
>>> >>>>>>>>> worked on immediately. Everything else please retarget to
>>> 2.1.2 or 2.2.0.
>>> >>>>>>>>>
>>> >>>>>>>>> But my bug isn't fixed!??!
>>> >>>>>>>>>
>>> >>>>>>>>> In order to make timely releases, we will typically not hold
>>> the
>>> >>>>>>>>> release unless the bug in question is a regression from 2.1.0.
>>> >>>>>>>>>
>>> >>>>>>>>> What happened to RC1?
>>> >>>>>>>>>
>>> >>>>>>>>> There were issues with the release packaging and as a result
>>> it was
>>> >>>>>>>>> skipped.
>>> >>>>>>>>
>>> >>>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> --
>>> >>>>>>> Cell : 425-233-8271
>>> >>>>>>> Twitter: https://twitter.com/holdenkarau
>>> >>>>>>
>>> >>>>>>
>>> >>>>> --
>>> >>>>> Cell : 425-233-8271
>>> >>>>> Twitter: https://twitter.com/holdenkarau
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>>
>>> >>>> --
>>> >>>> Cell : 425-233-8271
>>> >>>> Twitter: https://twitter.com/holdenkarau
>>> >>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> Cell : 425-233-8271
>>> >>> Twitter: https://twitter.com/holdenkarau
>>> >>
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > Ryan Blue
>>> > Software Engineer
>>> > Netflix
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe e-mail: dev-unsubscribe@spark.apache.org
>>>
>>>
>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>
>


-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Michael Armbrust <mi...@databricks.com>.
Have time to figure out why the doc build failed?
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/job/spark-release-docs/60/console

On Thu, Apr 13, 2017 at 9:39 PM, Holden Karau <ho...@pigscanfly.ca> wrote:

> If it would help I'd be more than happy to look at kicking off the
> packaging for RC3 since I've been poking around in Jenkins a bit (for SPARK-20216
> & friends) (I'd still probably need some guidance from a previous release
> coordinator so I understand if that's not actually faster).
>
> On Mon, Apr 10, 2017 at 6:39 PM, DB Tsai <db...@dbtsai.com> wrote:
>
>> I backported the fix into both branch-2.1 and branch-2.0. Thanks.
>>
>> Sincerely,
>>
>> DB Tsai
>> ----------------------------------------------------------
>> Web: https://www.dbtsai.com
>> PGP Key ID: 0x5CED8B896A6BDFA0

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Maciej Bryński <ma...@brynski.pl>.
https://issues.apache.org/jira/browse/SPARK-12717

This bug has been in Spark since 1.6.0.
Any chance of getting this fixed?

M.

2017-04-14 6:39 GMT+02:00 Holden Karau <ho...@pigscanfly.ca>:
> If it would help I'd be more than happy to look at kicking off the packaging
> for RC3 since I've been poking around in Jenkins a bit (for SPARK-20216 &
> friends) (I'd still probably need some guidance from a previous release
> coordinator so I understand if that's not actually faster).



-- 
Maciek Bryński



Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
If it would help I'd be more than happy to look at kicking off the
packaging for RC3 since I've been poking around in Jenkins a bit (for
SPARK-20216
& friends) (I'd still probably need some guidance from a previous release
coordinator so I understand if that's not actually faster).

On Mon, Apr 10, 2017 at 6:39 PM, DB Tsai <db...@dbtsai.com> wrote:

> I backported the fix into both branch-2.1 and branch-2.0. Thanks.


-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by DB Tsai <db...@dbtsai.com>.
I backported the fix into both branch-2.1 and branch-2.0. Thanks.

Sincerely,

DB Tsai
----------------------------------------------------------
Web: https://www.dbtsai.com
PGP Key ID: 0x5CED8B896A6BDFA0


On Mon, Apr 10, 2017 at 4:20 PM, Ryan Blue <rb...@netflix.com> wrote:
> DB,
>
> This vote already failed and there isn't an RC3 vote yet. If you backport the
> changes to branch-2.1 they will make it into the next RC.
>
> rb
>
> On Mon, Apr 10, 2017 at 3:55 PM, DB Tsai <db...@dbtsai.com> wrote:
>>
>> -1
>>
>> I think that back-porting SPARK-20270 and SPARK-18555 are very important
>> since it's a critical bug that na.fill will mess up the data in Long even
>> the data isn't null.
>>
>> Thanks.
>>
>>
>> Sincerely,
>>
>> DB Tsai
>> ----------------------------------------------------------
>> Web: https://www.dbtsai.com
>> PGP Key ID: 0x5CED8B896A6BDFA0
>>
>> On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <ho...@pigscanfly.ca>
>> wrote:
>>>
>>> Following up, the issues with missing pypandoc/pandoc on the packaging
>>> machine has been resolved.
>>>
>>> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <ho...@pigscanfly.ca>
>>> wrote:
>>>>
>>>> See SPARK-20216, if Michael can let me know which machine is being used
>>>> for packaging I can see if I can install pandoc on it (should be simple but
>>>> I know the Jenkins cluster is a bit on the older side).
>>>>
>>>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <ho...@pigscanfly.ca>
>>>> wrote:
>>>>>
>>>>> So the fix is installing pandoc on whichever machine is used for
>>>>> packaging. I thought that was generally done on the machine of the person
>>>>> rolling the release so I wasn't sure it made sense as a JIRA, but from
>>>>> chatting with Josh it sounds like that part might be on one of the Jenkins
>>>>> workers - is there a fixed one that is used?
>>>>>
>>>>> Regardless I'll file a JIRA for this when I get back in front of my
>>>>> desktop (~1 hour or so).
>>>>>
>>>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust
>>>>> <mi...@databricks.com> wrote:
>>>>>>
>>>>>> Thanks for the comments everyone.  This vote fails.  Here's how I
>>>>>> think we should proceed:
>>>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
>>>>>> report if this is a regression and if there is an easy fix that we should
>>>>>> wait for.
>>>>>>
>>>>>> For all the other test failures, please take the time to look through
>>>>>> JIRA and open an issue if one does not already exist so that we can triage
>>>>>> if these are just environmental issues.  If I don't hear any objections I'm
>>>>>> going to go ahead with RC3 tomorrow.
>>>>>>
>>>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung
>>>>>> <fe...@hotmail.com> wrote:
>>>>>>>
>>>>>>> -1
>>>>>>> sorry, found an issue with SparkR CRAN check.
>>>>>>> Opened SPARK-20197 and working on fix.
>>>>>>>
>>>>>>> ________________________________
>>>>>>> From: holden.karau@gmail.com <ho...@gmail.com> on behalf of
>>>>>>> Holden Karau <ho...@pigscanfly.ca>
>>>>>>> Sent: Friday, March 31, 2017 6:25:20 PM
>>>>>>> To: Xiao Li
>>>>>>> Cc: Michael Armbrust; dev@spark.apache.org
>>>>>>> Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>>>>>>
>>>>>>> -1 (non-binding)
>>>>>>>
>>>>>>> Python packaging doesn't seem to have quite worked out (looking at
>>>>>>> PKG-INFO, the description is "Description: !!!!! missing pandoc do not upload
>>>>>>> to PyPI !!!!"); ideally it would be nice to have this as a version we
>>>>>>> upload to PyPI.
>>>>>>> Building this on my own machine results in a longer description.
>>>>>>>
>>>>>>> My guess is that whichever machine was used to package this is
>>>>>>> missing the pandoc executable (or possibly pypandoc library).
>>>>>>>
>>>>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <ga...@gmail.com>
>>>>>>> wrote:
>>>>>>>>
>>>>>>>> +1
>>>>>>>>
>>>>>>>> Xiao
>>>>>>>>
>>>>>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust
>>>>>>>> <mi...@databricks.com>:
>>>>>>>>>
>>>>>>>>> Please vote on releasing the following candidate as Apache Spark
>>>>>>>>> version 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30 PST and
>>>>>>>>> passes if a majority of at least 3 +1 PMC votes are cast.
>>>>>>>>>
>>>>>>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>>>>>>> [ ] -1 Do not release this package because ...
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> To learn more about Apache Spark, please see
>>>>>>>>> http://spark.apache.org/
>>>>>>>>>
>>>>>>>>> The tag to be voted on is v2.1.1-rc2
>>>>>>>>> (02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>>>>>>
>>>>>>>>> List of JIRA tickets resolved can be found with this filter.
>>>>>>>>>
>>>>>>>>> The release files, including signatures, digests, etc. can be found
>>>>>>>>> at:
>>>>>>>>>
>>>>>>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>>>>>>
>>>>>>>>> Release artifacts are signed with the following key:
>>>>>>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>>>>>>
>>>>>>>>> The staging repository for this release can be found at:
>>>>>>>>>
>>>>>>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>>>>>>
>>>>>>>>> The documentation corresponding to this release can be found at:
>>>>>>>>>
>>>>>>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> FAQ
>>>>>>>>>
>>>>>>>>> How can I help test this release?
>>>>>>>>>
>>>>>>>>> If you are a Spark user, you can help us test this release by
>>>>>>>>> taking an existing Spark workload and running it on this release candidate,
>>>>>>>>> then reporting any regressions.
>>>>>>>>>
>>>>>>>>> What should happen to JIRA tickets still targeting 2.1.1?
>>>>>>>>>
>>>>>>>>> Committers should look at those and triage. Extremely important bug
>>>>>>>>> fixes, documentation, and API tweaks that impact compatibility should be
>>>>>>>>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>>>>>>>>
>>>>>>>>> But my bug isn't fixed!??!
>>>>>>>>>
>>>>>>>>> In order to make timely releases, we will typically not hold the
>>>>>>>>> release unless the bug in question is a regression from 2.1.0.
>>>>>>>>>
>>>>>>>>> What happened to RC1?
>>>>>>>>>
>>>>>>>>> There were issues with the release packaging and as a result it was
>>>>>>>>> skipped.
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Cell : 425-233-8271
>>>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>>
>>>>>>
>>>>> --
>>>>> Cell : 425-233-8271
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Cell : 425-233-8271
>>>> Twitter: https://twitter.com/holdenkarau
>>>
>>>
>>>
>>>
>>> --
>>> Cell : 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>
>>
>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscribe@spark.apache.org


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Ryan Blue <rb...@netflix.com.INVALID>.
DB,

This vote already failed and there isn't an RC3 vote yet. If you backport
the changes to branch-2.1, they will make it into the next RC.

rb

On Mon, Apr 10, 2017 at 3:55 PM, DB Tsai <db...@dbtsai.com> wrote:

> -1
>
> I think that back-porting SPARK-20270
> <https://github.com/apache/spark/pull/17577> and SPARK-18555
> <https://github.com/apache/spark/pull/15994> is very important, since they
> fix a critical bug where na.fill corrupts data in Long columns even when
> the data isn't null.
>
> Thanks.
>
>
> Sincerely,
>
> DB Tsai
> ----------------------------------------------------------
> Web: https://www.dbtsai.com
> PGP Key ID: 0x5CED8B896A6BDFA0
>
> On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <ho...@pigscanfly.ca>
> wrote:
>
>> Following up: the issue with missing pypandoc/pandoc on the packaging
>> machine has been resolved.
>>
>> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <ho...@pigscanfly.ca>
>> wrote:
>>
>>> See SPARK-20216; if Michael can let me know which machine is being used
>>> for packaging, I can see if I can install pandoc on it (should be simple,
>>> but I know the Jenkins cluster is a bit on the older side).
>>>
>>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <ho...@pigscanfly.ca>
>>> wrote:
>>>
>>>> So the fix is installing pandoc on whichever machine is used for
>>>> packaging. I thought that was generally done on the machine of the person
>>>> rolling the release so I wasn't sure it made sense as a JIRA, but from
>>>> chatting with Josh it sounds like that part might be on one of the Jenkins
>>>> workers - is there a fixed one that is used?
>>>>
>>>> Regardless I'll file a JIRA for this when I get back in front of my
>>>> desktop (~1 hour or so).
>>>>
>>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust <mi...@databricks.com>
>>>> wrote:
>>>>
>>>>> Thanks for the comments everyone.  This vote fails.  Here's how I
>>>>> think we should proceed:
>>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
>>>>> report if this is a regression and if there is an easy fix that we should
>>>>> wait for.
>>>>>
>>>>> For all the other test failures, please take the time to look through
>>>>> JIRA and open an issue if one does not already exist so that we can triage
>>>>> if these are just environmental issues.  If I don't hear any objections I'm
>>>>> going to go ahead with RC3 tomorrow.
>>>>>
>>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung <
>>>>> felixcheung_m@hotmail.com> wrote:
>>>>>
>>>>> -1
>>>>> sorry, found an issue with SparkR CRAN check.
>>>>> Opened SPARK-20197 and working on fix.
>>>>>
>>>>> ------------------------------
>>>>> *From:* holden.karau@gmail.com <ho...@gmail.com> on behalf of
>>>>> Holden Karau <ho...@pigscanfly.ca>
>>>>> *Sent:* Friday, March 31, 2017 6:25:20 PM
>>>>> *To:* Xiao Li
>>>>> *Cc:* Michael Armbrust; dev@spark.apache.org
>>>>> *Subject:* Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>>>>
>>>>> -1 (non-binding)
>>>>>
>>>>> Python packaging doesn't seem to have quite worked out (looking
>>>>> at PKG-INFO, the description is "Description: !!!!! missing pandoc do not
>>>>> upload to PyPI !!!!"); ideally it would be nice to have this as a version
>>>>> we upload to PyPI.
>>>>> Building this on my own machine results in a longer description.
>>>>>
>>>>> My guess is that whichever machine was used to package this is missing
>>>>> the pandoc executable (or possibly pypandoc library).
>>>>>
>>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <ga...@gmail.com> wrote:
>>>>>
>>>>> +1
>>>>>
>>>>> Xiao
>>>>>
>>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust <mi...@databricks.com>:
>>>>>
>>>>> Please vote on releasing the following candidate as Apache Spark
>>>>> version 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30
>>>>> PST and passes if a majority of at least 3 +1 PMC votes are cast.
>>>>>
>>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>>> [ ] -1 Do not release this package because ...
>>>>>
>>>>>
>>>>> To learn more about Apache Spark, please see http://spark.apache.org/
>>>>>
>>>>> The tag to be voted on is v2.1.1-rc2
>>>>> <https://github.com/apache/spark/tree/v2.1.1-rc2> (
>>>>> 02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>>
>>>>> List of JIRA tickets resolved can be found with this filter
>>>>> <https://issues.apache.org/jira/browse/SPARK-20134?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.1>
>>>>> .
>>>>>
>>>>> The release files, including signatures, digests, etc. can be found at:
>>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>>
>>>>> Release artifacts are signed with the following key:
>>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>>
>>>>> The staging repository for this release can be found at:
>>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>>
>>>>> The documentation corresponding to this release can be found at:
>>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>>>
>>>>>
>>>>> *FAQ*
>>>>>
>>>>> *How can I help test this release?*
>>>>>
>>>>> If you are a Spark user, you can help us test this release by taking
>>>>> an existing Spark workload and running it on this release candidate, then
>>>>> reporting any regressions.
>>>>>
>>>>> *What should happen to JIRA tickets still targeting 2.1.1?*
>>>>>
>>>>> Committers should look at those and triage. Extremely important bug
>>>>> fixes, documentation, and API tweaks that impact compatibility should be
>>>>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>>>>
>>>>> *But my bug isn't fixed!??!*
>>>>>
>>>>> In order to make timely releases, we will typically not hold the
>>>>> release unless the bug in question is a regression from 2.1.0.
>>>>>
>>>>> *What happened to RC1?*
>>>>>
>>>>> There were issues with the release packaging and as a result it was
>>>>> skipped.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Cell : 425-233-8271
>>>>> Twitter: https://twitter.com/holdenkarau
>>>>>
>>>>>
>>>>> --
>>>> Cell : 425-233-8271
>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>
>>>
>>>
>>> --
>>> Cell : 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>
>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>
>


-- 
Ryan Blue
Software Engineer
Netflix

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by DB Tsai <db...@dbtsai.com>.
-1

I think that back-porting SPARK-20270
<https://github.com/apache/spark/pull/17577> and SPARK-18555
<https://github.com/apache/spark/pull/15994> is very important, since they
fix a critical bug where na.fill corrupts data in Long columns even when
the data isn't null.
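For readers unfamiliar with the failure mode: per the linked pull requests, the fill path effectively routed long columns through a 64-bit double, whose 53-bit mantissa cannot represent every long exactly. A minimal pure-Python sketch of why that corrupts non-null values (an illustration of the arithmetic only, not Spark code):

```python
# IEEE-754 doubles carry only 53 bits of mantissa, so a
# long -> double -> long round trip silently alters large values.
big = (1 << 62) + 1              # a perfectly valid, non-null Long value
round_tripped = int(float(big))  # what a fill path via double yields
assert round_tripped != big      # the non-null value has changed
print(big, round_tripped)
```

Any fill implementation that keeps integral columns out of the double detour avoids this class of corruption.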

Thanks.


Sincerely,

DB Tsai
----------------------------------------------------------
Web: https://www.dbtsai.com
PGP Key ID: 0x5CED8B896A6BDFA0

On Wed, Apr 5, 2017 at 11:12 AM, Holden Karau <ho...@pigscanfly.ca> wrote:

> Following up: the issue with missing pypandoc/pandoc on the packaging
> machine has been resolved.
>
> On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <ho...@pigscanfly.ca> wrote:
>
>> See SPARK-20216; if Michael can let me know which machine is being used
>> for packaging, I can see if I can install pandoc on it (should be simple,
>> but I know the Jenkins cluster is a bit on the older side).
>>
>> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <ho...@pigscanfly.ca>
>> wrote:
>>
>>> So the fix is installing pandoc on whichever machine is used for
>>> packaging. I thought that was generally done on the machine of the person
>>> rolling the release so I wasn't sure it made sense as a JIRA, but from
>>> chatting with Josh it sounds like that part might be on one of the Jenkins
>>> workers - is there a fixed one that is used?
>>>
>>> Regardless I'll file a JIRA for this when I get back in front of my
>>> desktop (~1 hour or so).
>>>
>>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust <mi...@databricks.com>
>>> wrote:
>>>
>>>> Thanks for the comments everyone.  This vote fails.  Here's how I think
>>>> we should proceed:
>>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
>>>> report if this is a regression and if there is an easy fix that we should
>>>> wait for.
>>>>
>>>> For all the other test failures, please take the time to look through
>>>> JIRA and open an issue if one does not already exist so that we can triage
>>>> if these are just environmental issues.  If I don't hear any objections I'm
>>>> going to go ahead with RC3 tomorrow.
>>>>
>>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung <felixcheung_m@hotmail.com
>>>> > wrote:
>>>>
>>>> -1
>>>> sorry, found an issue with SparkR CRAN check.
>>>> Opened SPARK-20197 and working on fix.
>>>>
>>>> ------------------------------
>>>> *From:* holden.karau@gmail.com <ho...@gmail.com> on behalf of
>>>> Holden Karau <ho...@pigscanfly.ca>
>>>> *Sent:* Friday, March 31, 2017 6:25:20 PM
>>>> *To:* Xiao Li
>>>> *Cc:* Michael Armbrust; dev@spark.apache.org
>>>> *Subject:* Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>>>
>>>> -1 (non-binding)
>>>>
>>>> Python packaging doesn't seem to have quite worked out (looking
>>>> at PKG-INFO, the description is "Description: !!!!! missing pandoc do not
>>>> upload to PyPI !!!!"); ideally it would be nice to have this as a version
>>>> we upload to PyPI.
>>>> Building this on my own machine results in a longer description.
>>>>
>>>> My guess is that whichever machine was used to package this is missing
>>>> the pandoc executable (or possibly pypandoc library).
>>>>
>>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <ga...@gmail.com> wrote:
>>>>
>>>> +1
>>>>
>>>> Xiao
>>>>
>>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust <mi...@databricks.com>:
>>>>
>>>> Please vote on releasing the following candidate as Apache Spark
>>>> version 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30
>>>> PST and passes if a majority of at least 3 +1 PMC votes are cast.
>>>>
>>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>>> [ ] -1 Do not release this package because ...
>>>>
>>>>
>>>> To learn more about Apache Spark, please see http://spark.apache.org/
>>>>
>>>> The tag to be voted on is v2.1.1-rc2
>>>> <https://github.com/apache/spark/tree/v2.1.1-rc2> (
>>>> 02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>>
>>>> List of JIRA tickets resolved can be found with this filter
>>>> <https://issues.apache.org/jira/browse/SPARK-20134?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.1>
>>>> .
>>>>
>>>> The release files, including signatures, digests, etc. can be found at:
>>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>>
>>>> Release artifacts are signed with the following key:
>>>> https://people.apache.org/keys/committer/pwendell.asc
>>>>
>>>> The staging repository for this release can be found at:
>>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>>
>>>> The documentation corresponding to this release can be found at:
>>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>>
>>>>
>>>> *FAQ*
>>>>
>>>> *How can I help test this release?*
>>>>
>>>> If you are a Spark user, you can help us test this release by taking an
>>>> existing Spark workload and running it on this release candidate, then
>>>> reporting any regressions.
>>>>
>>>> *What should happen to JIRA tickets still targeting 2.1.1?*
>>>>
>>>> Committers should look at those and triage. Extremely important bug
>>>> fixes, documentation, and API tweaks that impact compatibility should be
>>>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>>>
>>>> *But my bug isn't fixed!??!*
>>>>
>>>> In order to make timely releases, we will typically not hold the
>>>> release unless the bug in question is a regression from 2.1.0.
>>>>
>>>> *What happened to RC1?*
>>>>
>>>> There were issues with the release packaging and as a result it was
>>>> skipped.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Cell : 425-233-8271
>>>> Twitter: https://twitter.com/holdenkarau
>>>>
>>>>
>>>> --
>>> Cell : 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>
>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>
>
>
> --
> Cell : 425-233-8271
> Twitter: https://twitter.com/holdenkarau
>

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
Following up: the issue with missing pypandoc/pandoc on the packaging
machine has been resolved.

On Tue, Apr 4, 2017 at 3:54 PM, Holden Karau <ho...@pigscanfly.ca> wrote:

> See SPARK-20216; if Michael can let me know which machine is being used
> for packaging, I can see if I can install pandoc on it (should be simple,
> but I know the Jenkins cluster is a bit on the older side).
>
> On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <ho...@pigscanfly.ca> wrote:
>
>> So the fix is installing pandoc on whichever machine is used for
>> packaging. I thought that was generally done on the machine of the person
>> rolling the release so I wasn't sure it made sense as a JIRA, but from
>> chatting with Josh it sounds like that part might be on one of the Jenkins
>> workers - is there a fixed one that is used?
>>
>> Regardless I'll file a JIRA for this when I get back in front of my
>> desktop (~1 hour or so).
>>
>> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust <mi...@databricks.com>
>> wrote:
>>
>>> Thanks for the comments everyone.  This vote fails.  Here's how I think
>>> we should proceed:
>>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
>>> report if this is a regression and if there is an easy fix that we should
>>> wait for.
>>>
>>> For all the other test failures, please take the time to look through
>>> JIRA and open an issue if one does not already exist so that we can triage
>>> if these are just environmental issues.  If I don't hear any objections I'm
>>> going to go ahead with RC3 tomorrow.
>>>
>>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung <fe...@hotmail.com>
>>> wrote:
>>>
>>> -1
>>> sorry, found an issue with SparkR CRAN check.
>>> Opened SPARK-20197 and working on fix.
>>>
>>> ------------------------------
>>> *From:* holden.karau@gmail.com <ho...@gmail.com> on behalf of
>>> Holden Karau <ho...@pigscanfly.ca>
>>> *Sent:* Friday, March 31, 2017 6:25:20 PM
>>> *To:* Xiao Li
>>> *Cc:* Michael Armbrust; dev@spark.apache.org
>>> *Subject:* Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>>
>>> -1 (non-binding)
>>>
>>> Python packaging doesn't seem to have quite worked out (looking
>>> at PKG-INFO, the description is "Description: !!!!! missing pandoc do not
>>> upload to PyPI !!!!"); ideally it would be nice to have this as a version
>>> we upload to PyPI.
>>> Building this on my own machine results in a longer description.
>>>
>>> My guess is that whichever machine was used to package this is missing
>>> the pandoc executable (or possibly pypandoc library).
>>>
>>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <ga...@gmail.com> wrote:
>>>
>>> +1
>>>
>>> Xiao
>>>
>>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust <mi...@databricks.com>:
>>>
>>> Please vote on releasing the following candidate as Apache Spark
>>> version 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30 PST
>>> and passes if a majority of at least 3 +1 PMC votes are cast.
>>>
>>> [ ] +1 Release this package as Apache Spark 2.1.1
>>> [ ] -1 Do not release this package because ...
>>>
>>>
>>> To learn more about Apache Spark, please see http://spark.apache.org/
>>>
>>> The tag to be voted on is v2.1.1-rc2
>>> <https://github.com/apache/spark/tree/v2.1.1-rc2> (
>>> 02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>>
>>> List of JIRA tickets resolved can be found with this filter
>>> <https://issues.apache.org/jira/browse/SPARK-20134?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.1>
>>> .
>>>
>>> The release files, including signatures, digests, etc. can be found at:
>>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>>
>>> Release artifacts are signed with the following key:
>>> https://people.apache.org/keys/committer/pwendell.asc
>>>
>>> The staging repository for this release can be found at:
>>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>>
>>> The documentation corresponding to this release can be found at:
>>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>>
>>>
>>> *FAQ*
>>>
>>> *How can I help test this release?*
>>>
>>> If you are a Spark user, you can help us test this release by taking an
>>> existing Spark workload and running it on this release candidate, then
>>> reporting any regressions.
>>>
>>> *What should happen to JIRA tickets still targeting 2.1.1?*
>>>
>>> Committers should look at those and triage. Extremely important bug
>>> fixes, documentation, and API tweaks that impact compatibility should be
>>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>>
>>> *But my bug isn't fixed!??!*
>>>
>>> In order to make timely releases, we will typically not hold the release
>>> unless the bug in question is a regression from 2.1.0.
>>>
>>> *What happened to RC1?*
>>>
>>> There were issues with the release packaging and as a result it was skipped.
>>>
>>>
>>>
>>>
>>>
>>> --
>>> Cell : 425-233-8271
>>> Twitter: https://twitter.com/holdenkarau
>>>
>>>
>>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>
>
>
> --
> Cell : 425-233-8271
> Twitter: https://twitter.com/holdenkarau
>



-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
See SPARK-20216; if Michael can let me know which machine is being used for
packaging, I can see if I can install pandoc on it (should be simple, but I
know the Jenkins cluster is a bit on the older side).

On Tue, Apr 4, 2017 at 3:06 PM, Holden Karau <ho...@pigscanfly.ca> wrote:

> So the fix is installing pandoc on whichever machine is used for
> packaging. I thought that was generally done on the machine of the person
> rolling the release so I wasn't sure it made sense as a JIRA, but from
> chatting with Josh it sounds like that part might be on one of the Jenkins
> workers - is there a fixed one that is used?
>
> Regardless I'll file a JIRA for this when I get back in front of my
> desktop (~1 hour or so).
>
> On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust <mi...@databricks.com>
> wrote:
>
>> Thanks for the comments everyone.  This vote fails.  Here's how I think
>> we should proceed:
>>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
>> report if this is a regression and if there is an easy fix that we should
>> wait for.
>>
>> For all the other test failures, please take the time to look through
>> JIRA and open an issue if one does not already exist so that we can triage
>> if these are just environmental issues.  If I don't hear any objections I'm
>> going to go ahead with RC3 tomorrow.
>>
>> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung <fe...@hotmail.com>
>> wrote:
>>
>> -1
>> sorry, found an issue with SparkR CRAN check.
>> Opened SPARK-20197 and working on fix.
>>
>> ------------------------------
>> *From:* holden.karau@gmail.com <ho...@gmail.com> on behalf of
>> Holden Karau <ho...@pigscanfly.ca>
>> *Sent:* Friday, March 31, 2017 6:25:20 PM
>> *To:* Xiao Li
>> *Cc:* Michael Armbrust; dev@spark.apache.org
>> *Subject:* Re: [VOTE] Apache Spark 2.1.1 (RC2)
>>
>> -1 (non-binding)
>>
>> Python packaging doesn't seem to have quite worked out (looking
>> at PKG-INFO, the description is "Description: !!!!! missing pandoc do not
>> upload to PyPI !!!!"); ideally it would be nice to have this as a version
>> we upload to PyPI.
>> Building this on my own machine results in a longer description.
>>
>> My guess is that whichever machine was used to package this is missing
>> the pandoc executable (or possibly pypandoc library).
>>
>> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <ga...@gmail.com> wrote:
>>
>> +1
>>
>> Xiao
>>
>> 2017-03-30 16:09 GMT-07:00 Michael Armbrust <mi...@databricks.com>:
>>
>> Please vote on releasing the following candidate as Apache Spark version
>> 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30 PST and
>> passes if a majority of at least 3 +1 PMC votes are cast.
>>
>> [ ] +1 Release this package as Apache Spark 2.1.1
>> [ ] -1 Do not release this package because ...
>>
>>
>> To learn more about Apache Spark, please see http://spark.apache.org/
>>
>> The tag to be voted on is v2.1.1-rc2
>> <https://github.com/apache/spark/tree/v2.1.1-rc2> (
>> 02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>>
>> List of JIRA tickets resolved can be found with this filter
>> <https://issues.apache.org/jira/browse/SPARK-20134?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.1>
>> .
>>
>> The release files, including signatures, digests, etc. can be found at:
>> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>>
>> Release artifacts are signed with the following key:
>> https://people.apache.org/keys/committer/pwendell.asc
>>
>> The staging repository for this release can be found at:
>> https://repository.apache.org/content/repositories/orgapachespark-1227/
>>
>> The documentation corresponding to this release can be found at:
>> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>>
>>
>> *FAQ*
>>
>> *How can I help test this release?*
>>
>> If you are a Spark user, you can help us test this release by taking an
>> existing Spark workload and running it on this release candidate, then
>> reporting any regressions.
>>
>> *What should happen to JIRA tickets still targeting 2.1.1?*
>>
>> Committers should look at those and triage. Extremely important bug
>> fixes, documentation, and API tweaks that impact compatibility should be
>> worked on immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>>
>> *But my bug isn't fixed!??!*
>>
>> In order to make timely releases, we will typically not hold the release
>> unless the bug in question is a regression from 2.1.0.
>>
>> *What happened to RC1?*
>>
>> There were issues with the release packaging and as a result it was skipped.
>>
>>
>>
>>
>>
>> --
>> Cell : 425-233-8271
>> Twitter: https://twitter.com/holdenkarau
>>
>>
>> --
> Cell : 425-233-8271
> Twitter: https://twitter.com/holdenkarau
>



-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
So the fix is installing pandoc on whichever machine is used for packaging.
I thought that was generally done on the machine of the person rolling the
release so I wasn't sure it made sense as a JIRA, but from chatting with
Josh it sounds like that part might be on one of the Jenkins workers - is there
a fixed one that is used?

Regardless I'll file a JIRA for this when I get back in front of my desktop
(~1 hour or so).
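For context, the placeholder description quoted earlier in the thread comes from the common setup.py pattern of converting the README with pypandoc and falling back to a warning string when pandoc is unavailable. A hedged sketch of that pattern (illustrative of the failure mode, not Spark's actual setup.py):

```python
# Build a long_description via pypandoc when pandoc is available;
# otherwise fall back to a placeholder that should never be published.
try:
    import pypandoc
    long_description = pypandoc.convert_text(
        "# PySpark\nA demo README.", "rst", format="md")
except (ImportError, OSError, RuntimeError):
    # pypandoc not installed, or the pandoc binary itself is missing
    long_description = "!!!!! missing pandoc do not upload to PyPI !!!!"

print(long_description)
```

Installing both the pandoc executable and the pypandoc wrapper on the packaging machine makes the try branch succeed, which is the fix described above.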

On Tue, Apr 4, 2017 at 2:35 PM Michael Armbrust <mi...@databricks.com>
wrote:

> Thanks for the comments everyone.  This vote fails.  Here's how I think we
> should proceed:
>  - [SPARK-20197] - SparkR CRAN - appears to be resolved
>  - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and
> report if this is a regression and if there is an easy fix that we should
> wait for.
>
> For all the other test failures, please take the time to look through JIRA
> and open an issue if one does not already exist so that we can triage if
> these are just environmental issues.  If I don't hear any objections I'm
> going to go ahead with RC3 tomorrow.
>
> On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung <fe...@hotmail.com>
> wrote:
>
> -1
> sorry, found an issue with SparkR CRAN check.
> Opened SPARK-20197 and working on fix.
>
> ------------------------------
> *From:* holden.karau@gmail.com <ho...@gmail.com> on behalf of
> Holden Karau <ho...@pigscanfly.ca>
> *Sent:* Friday, March 31, 2017 6:25:20 PM
> *To:* Xiao Li
> *Cc:* Michael Armbrust; dev@spark.apache.org
> *Subject:* Re: [VOTE] Apache Spark 2.1.1 (RC2)
>
> -1 (non-binding)
>
> Python packaging doesn't seem to have quite worked out (looking
> at PKG-INFO, the description is "Description: !!!!! missing pandoc do not
> upload to PyPI !!!!"); ideally it would be nice to have this as a version
> we upload to PyPI.
> Building this on my own machine results in a longer description.
>
> My guess is that whichever machine was used to package this is missing the
> pandoc executable (or possibly pypandoc library).
>
> On Fri, Mar 31, 2017 at 3:40 PM, Xiao Li <ga...@gmail.com> wrote:
>
> +1
>
> Xiao
>
> 2017-03-30 16:09 GMT-07:00 Michael Armbrust <mi...@databricks.com>:
>
> Please vote on releasing the following candidate as Apache Spark version
> 2.1.1. The vote is open until Sun, April 2nd, 2017 at 16:30 PST and
> passes if a majority of at least 3 +1 PMC votes are cast.
>
> [ ] +1 Release this package as Apache Spark 2.1.1
> [ ] -1 Do not release this package because ...
>
>
> To learn more about Apache Spark, please see http://spark.apache.org/
>
> The tag to be voted on is v2.1.1-rc2
> <https://github.com/apache/spark/tree/v2.1.1-rc2> (
> 02b165dcc2ee5245d1293a375a31660c9d4e1fa6)
>
> List of JIRA tickets resolved can be found with this filter
> <https://issues.apache.org/jira/browse/SPARK-20134?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.1>
> .
>
> The release files, including signatures, digests, etc. can be found at:
> http://home.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-bin/
>
> Release artifacts are signed with the following key:
> https://people.apache.org/keys/committer/pwendell.asc
>
> The staging repository for this release can be found at:
> https://repository.apache.org/content/repositories/orgapachespark-1227/
>
> The documentation corresponding to this release can be found at:
> http://people.apache.org/~pwendell/spark-releases/spark-2.1.1-rc2-docs/
>
>
> *FAQ*
>
> *How can I help test this release?*
>
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running it on this release candidate, then
> reporting any regressions.
>
> *What should happen to JIRA tickets still targeting 2.1.1?*
>
> Committers should look at those and triage. Extremely important bug fixes,
> documentation, and API tweaks that impact compatibility should be worked on
> immediately. Everything else please retarget to 2.1.2 or 2.2.0.
>
> *But my bug isn't fixed!??!*
>
> In order to make timely releases, we will typically not hold the release
> unless the bug in question is a regression from 2.1.0.
>
> *What happened to RC1?*
>
> There were issues with the release packaging, and as a result it was skipped.
>
>
>
>
>
> --
> Cell : 425-233-8271
> Twitter: https://twitter.com/holdenkarau
>
>
> --
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Michael Armbrust <mi...@databricks.com>.
Thanks for the comments, everyone. This vote fails. Here's how I think we
should proceed:
 - [SPARK-20197] - SparkR CRAN - appears to be resolved
 - [SPARK-XXXX] - Python packaging - Holden, please file a JIRA and report
whether this is a regression and whether there is an easy fix we should wait for.

For all the other test failures, please take the time to look through JIRA
and open an issue if one does not already exist, so that we can triage
whether these are just environmental issues. If I don't hear any objections,
I'm going to go ahead with RC3 tomorrow.
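For anyone doing that triage pass, the remaining 2.1.1-targeted tickets can be pulled from JIRA's REST search endpoint. A small sketch (the endpoint path follows JIRA's documented REST API; the "Target Version/s" field name is an assumption based on Spark's JIRA setup, and `open_targets_url` is a hypothetical helper):

```python
from urllib.parse import urlencode

JIRA = "https://issues.apache.org/jira"

def open_targets_url(version="2.1.1", project="SPARK"):
    """Build a JIRA REST search URL for unresolved issues still
    targeting `version`. "Target Version/s" is the custom field the
    Spark project uses; adjust the name for other JIRA instances."""
    jql = ('project = {p} AND resolution = Unresolved '
           'AND "Target Version/s" = {v}').format(p=project, v=version)
    return "{0}/rest/api/2/search?{1}".format(JIRA, urlencode({"jql": jql}))
```

Fetching that URL returns the JSON issue list committers would retarget to 2.1.2 or 2.2.0.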

On Sun, Apr 2, 2017 at 1:16 PM, Felix Cheung <fe...@hotmail.com>
wrote:

> -1
> sorry, found an issue with SparkR CRAN check.
> Opened SPARK-20197 and working on fix.

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Felix Cheung <fe...@hotmail.com>.
-1
Sorry, found an issue with the SparkR CRAN check.
Opened SPARK-20197 and working on a fix.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Holden Karau <ho...@pigscanfly.ca>.
-1 (non-binding)

Python packaging doesn't seem to have quite worked out (looking at PKG-INFO,
the description is "Description: !!!!! missing pandoc do not upload to PyPI
!!!!"); ideally it would be nice to have this as a version we upload to
PyPI.
Building this on my own machine results in a longer description.

My guess is that whichever machine was used to package this is missing the
pandoc executable (or possibly the pypandoc library).
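That placeholder is the tell-tale of the usual pypandoc fallback pattern in setup.py files. A rough, illustrative sketch of the pattern (the actual PySpark setup.py may differ; `build_long_description` is a hypothetical name):

```python
def build_long_description(readme="README.md"):
    """Convert the README to reST for PKG-INFO; when pandoc (or the
    pypandoc library) is unavailable, fall back to a loud placeholder
    so the problem is visible before an upload -- which matches what
    the RC's PKG-INFO shows."""
    try:
        import pypandoc  # needs both the library and the pandoc binary
        return pypandoc.convert_file(readme, "rst")
    except Exception:  # ImportError, missing pandoc, unreadable README, ...
        return "!!!!! missing pandoc do not upload to PyPI !!!!"
```

Running the packaging step on a machine without pandoc installed would then bake exactly that description into PKG-INFO.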



-- 
Cell : 425-233-8271
Twitter: https://twitter.com/holdenkarau

Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Xiao Li <ga...@gmail.com>.
+1

Xiao


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Kazuaki Ishizaki <IS...@jp.ibm.com>.
Thank you. Yes, it is not a regression; 2.1.0 would have this failure, too.

Regards,
Kazuaki Ishizaki



From:    Sean Owen <so...@cloudera.com>
To:      Kazuaki Ishizaki/Japan/IBM@IBMJP, Michael Armbrust <mi...@databricks.com>
Cc:      "dev@spark.apache.org" <de...@spark.apache.org>
Date:    2017/04/02 18:18
Subject: Re: [VOTE] Apache Spark 2.1.1 (RC2)



That backport is fine, for another RC even in my opinion, but it's not a 
regression. It's a JDK bug really. 2.1.0 would have failed too.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Sean Owen <so...@cloudera.com>.
That backport is fine, for another RC even in my opinion, but it's not a
regression. It's a JDK bug really. 2.1.0 would have failed too.


Re: [VOTE] Apache Spark 2.1.1 (RC2)

Posted by Kazuaki Ishizaki <IS...@jp.ibm.com>.
-1 (non-binding)

I tested it on Ubuntu 16.04 with OpenJDK 8 on ppc64le and got several errors.
I expect that this backport (https://github.com/apache/spark/pull/17509)
will be integrated into Spark 2.1.1.


$ java -version
openjdk version "1.8.0_111"
OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-2ubuntu0.16.04.2-b14)
OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
$ build/mvn -DskipTests -Phive -Phive-thriftserver -Pyarn -Phadoop-2.7 package install
$ build/mvn -Phive -Phive-thriftserver -Pyarn -Phadoop-2.7 test -pl core
...
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
Running org.apache.spark.memory.TaskMemoryManagerSuite
Tests run: 6, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.445 sec <<< FAILURE! - in org.apache.spark.memory.TaskMemoryManagerSuite
encodePageNumberAndOffsetOffHeap(org.apache.spark.memory.TaskMemoryManagerSuite)  Time elapsed: 0.007 sec  <<< ERROR!
java.lang.IllegalArgumentException: requirement failed: No support for unaligned Unsafe. Set spark.memory.offHeap.enabled to false.
        at org.apache.spark.memory.TaskMemoryManagerSuite.encodePageNumberAndOffsetOffHeap(TaskMemoryManagerSuite.java:48)

offHeapConfigurationBackwardsCompatibility(org.apache.spark.memory.TaskMemoryManagerSuite)  Time elapsed: 0.013 sec  <<< ERROR!
java.lang.IllegalArgumentException: requirement failed: No support for unaligned Unsafe. Set spark.memory.offHeap.enabled to false.
        at org.apache.spark.memory.TaskMemoryManagerSuite.offHeapConfigurationBackwardsCompatibility(TaskMemoryManagerSuite.java:138)

Running org.apache.spark.io.NioBufferedFileInputStreamSuite
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.029 sec - in org.apache.spark.io.NioBufferedFileInputStreamSuite
Running org.apache.spark.unsafe.map.BytesToBytesMapOnHeapSuite
Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.708 sec <<< FAILURE! - in org.apache.spark.unsafe.map.BytesToBytesMapOnHeapSuite
testPeakMemoryUsed(org.apache.spark.unsafe.map.BytesToBytesMapOnHeapSuite)  Time elapsed: 0.006 sec  <<< FAILURE!
java.lang.AssertionError: expected:<16648> but was:<16912>

Running org.apache.spark.unsafe.map.BytesToBytesMapOffHeapSuite
Tests run: 13, Failures: 0, Errors: 13, Skipped: 0, Time elapsed: 0.043 sec <<< FAILURE! - in org.apache.spark.unsafe.map.BytesToBytesMapOffHeapSuite
failureToGrow(org.apache.spark.unsafe.map.BytesToBytesMapOffHeapSuite)  Time elapsed: 0.002 sec  <<< ERROR!
java.lang.IllegalArgumentException: requirement failed: No support for unaligned Unsafe. Set spark.memory.offHeap.enabled to false.
...
Tests run: 207, Failures: 7, Errors: 16, Skipped: 0
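For context on the repeated "No support for unaligned Unsafe" errors: Spark's Platform class probes java.nio.Bits.unaligned() reflectively and, when that probe fails, falls back to a whitelist of architectures known to handle unaligned access; the linked PR extends that whitelist to ppc64/ppc64le. A rough Python transcription of the fallback logic (the real code is Java in common/unsafe; the regex here is my reading of the patched code and should be treated as an approximation):

```python
import re

# Architectures assumed to handle unaligned access when the reflective
# java.nio.Bits.unaligned() call cannot be consulted. Before the
# backport the pattern lacked the ppc64(le)? alternative, so ppc64le
# fell through to "unsupported" and the off-heap suites failed as above.
_ARCH_WHITELIST = re.compile(r"^(i[3-6]86|x86(_64)?|x64|amd64|ppc64(le)?)$")

def unaligned_supported(os_arch, bits_unaligned=None):
    """Mirror of the Java fallback: trust Bits.unaligned() when the
    reflective call works, otherwise consult the whitelist."""
    if bits_unaligned is not None:
        return bits_unaligned
    return bool(_ARCH_WHITELIST.match(os_arch))
```

With the backport, a ppc64le JVM whose Bits.unaligned() probe throws would now match the whitelist instead of disabling off-heap memory.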

Kazuaki Ishizaki


