Posted to dev@spark.apache.org by Reynold Xin <rx...@databricks.com> on 2016/12/08 08:39:52 UTC

[VOTE] Apache Spark 2.1.0 (RC2)

Please vote on releasing the following candidate as Apache Spark version
2.1.0. The vote is open until Sun, December 11, 2016 at 1:00 PT and passes
if a majority of at least 3 +1 PMC votes are cast.

[ ] +1 Release this package as Apache Spark 2.1.0
[ ] -1 Do not release this package because ...


To learn more about Apache Spark, please see http://spark.apache.org/

The tag to be voted on is v2.1.0-rc2
(080717497365b83bc202ab16812ced93eb1ea7bd)

The list of JIRA tickets resolved can be found at:
https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.0

The release files, including signatures, digests, etc. can be found at:
http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc2-bin/

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/pwendell.asc

The staging repository for this release can be found at:
https://repository.apache.org/content/repositories/orgapachespark-1217

The documentation corresponding to this release can be found at:
http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc2-docs/


(Note that the docs and staging repo are still being uploaded and will be
available soon)


=======================================
How can I help test this release?
=======================================
If you are a Spark user, you can help us test this release by taking an
existing Spark workload, running it on this release candidate, and
reporting any regressions.
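
One way to do that is to point an existing build at the staging repository
listed above and compile and run your workload against the RC artifacts. A
minimal sbt sketch (the resolver URL is the staging repo from this email;
swap in the Spark modules your workload actually uses):

// build.sbt -- a sketch for resolving the RC artifacts from staging
resolvers += "Spark 2.1.0 RC2 staging" at
  "https://repository.apache.org/content/repositories/orgapachespark-1217/"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.1.0",
  "org.apache.spark" %% "spark-sql"  % "2.1.0"
)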

===============================================================
What should happen to JIRA tickets still targeting 2.1.0?
===============================================================
Committers should look at those and triage. Extremely important bug fixes,
documentation, and API tweaks that impact compatibility should be worked on
immediately. Please retarget everything else to 2.1.1 or 2.2.0.

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Cody Koeninger <co...@koeninger.org>.
Agreed that frequent topic deletion is not a very Kafka-esque thing to do.

On Fri, Dec 9, 2016 at 12:09 PM, Shixiong(Ryan) Zhu
<sh...@databricks.com> wrote:
> [snip]



Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Sean Owen <so...@cloudera.com>.
Sure, it's only an issue insofar as it may be a flaky test. If it's fixable
or disable-able for a possible next RC, that could be helpful.

On Sat, Dec 10, 2016 at 2:09 AM Shixiong(Ryan) Zhu <sh...@databricks.com>
wrote:

> [snip]

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by "Shixiong(Ryan) Zhu" <sh...@databricks.com>.
Sean, "stress test for failOnDataLoss=false" is because Kafka consumer may
be thrown NPE when a topic is deleted. I added some logic to retry on such
failure, however, it may still fail when topic deletion is too frequent
(the stress test). Just reopened
https://issues.apache.org/jira/browse/SPARK-18588.

Anyway, this is just a best effort to deal with Kafka issue, and in
practice, people won't delete topic frequently, so this is not a release
blocker.
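
A minimal sketch of the retry idea (an illustration only; the real logic
lives in Spark's Kafka source for Structured Streaming and is more
involved):

object RetryOnNpe {
  // Retry an operation that may throw an NPE from the Kafka consumer when
  // the underlying topic is deleted mid-operation; give up after the given
  // number of attempts. A sketch of the idea, not the actual Spark code.
  def withRetries[T](attemptsLeft: Int)(op: => T): T =
    try op
    catch {
      case _: NullPointerException if attemptsLeft > 1 =>
        Thread.sleep(100) // brief pause before retrying
        withRetries(attemptsLeft - 1)(op)
    }
}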

On Fri, Dec 9, 2016 at 2:55 AM, Sean Owen <so...@cloudera.com> wrote:

> [snip]

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Sean Owen <so...@cloudera.com>.
As usual, the sigs / hashes are fine and licenses look fine.

I am still seeing some test failures. A few I've seen over time and aren't
repeatable, but a few seem persistent. Anyone else observed these? I'm on
Ubuntu 16 / Java 8, building with -Pyarn -Phadoop-2.7 -Phive.
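
(Those profiles correspond to a standard build invocation along the lines
of ./build/mvn -Pyarn -Phadoop-2.7 -Phive -DskipTests clean package, then
running the tests with the same profiles -- an illustration of the
documented profile usage, not necessarily the exact command used here.)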

If anyone can confirm, I'll investigate the cause if I can. I'd hesitate to
support the release yet unless the build is definitely passing for others.


udf3Test(test.org.apache.spark.sql.JavaUDFSuite)  Time elapsed: 0.281 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.JavaTypeInference$.inferDataType(Lcom/google/common/reflect/TypeToken;)Lscala/Tuple2;
  at test.org.apache.spark.sql.JavaUDFSuite.udf3Test(JavaUDFSuite.java:107)


- caching on disk *** FAILED ***
  java.util.concurrent.TimeoutException: Can't find 2 executors before 30000 milliseconds elapsed
  at org.apache.spark.ui.jobs.JobProgressListener.waitUntilExecutorsUp(JobProgressListener.scala:584)
  at org.apache.spark.DistributedSuite.org$apache$spark$DistributedSuite$$testCaching(DistributedSuite.scala:154)
  at org.apache.spark.DistributedSuite$$anonfun$32$$anonfun$apply$1.apply$mcV$sp(DistributedSuite.scala:191)
  at org.apache.spark.DistributedSuite$$anonfun$32$$anonfun$apply$1.apply(DistributedSuite.scala:191)
  at org.apache.spark.DistributedSuite$$anonfun$32$$anonfun$apply$1.apply(DistributedSuite.scala:191)
  at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
  at org.scalatest.Transformer.apply(Transformer.scala:22)
  at org.scalatest.Transformer.apply(Transformer.scala:20)
  ...


- stress test for failOnDataLoss=false *** FAILED ***
  org.apache.spark.sql.streaming.StreamingQueryException: Query [id = 3b191b78-7f30-46d3-93f8-5fbeecce94a2, runId = 0cab93b6-19d8-47a7-88ad-d296bea72405] terminated with exception: null
  at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runBatches(StreamExecution.scala:262)
  at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:160)
  ...
  Cause: java.lang.NullPointerException:
  ...



On Thu, Dec 8, 2016 at 4:40 PM Reynold Xin <rx...@databricks.com> wrote:

> [snip]

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Adam Roberts <AR...@uk.ibm.com>.
I've never seen the ReplSuite test OOMing with IBM's latest SDK for Java,
but I have always noticed this particular test failing with the following
instead:

java.lang.AssertionError: assertion failed: deviation too large: 
0.8506807397223823, first size: 180392, second size: 333848

This particular test could be improved, and I don't think it should hold up
releases. I commented on [SPARK-14558] a while back, and the discussion
ended with:

"A better check would be to run with and without the closure cleaner change
-> Yea, this is what I did locally, but how to write a test for it?"

It will fail in this particular way reliably with Open/Oracle JDK as well 
if you were to use Kryo.
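
For context, the shape of that assertion is roughly the following (a
reconstruction from the error message above, not the actual ReplSuite
code; the threshold value here is invented):

object DeviationCheck {
  // Compare two serialized sizes and fail when their relative deviation
  // exceeds a threshold. A reconstruction of the assertion's shape, not
  // the actual test code; the default threshold is invented.
  def assertSizesClose(first: Long, second: Long, threshold: Double = 0.2): Unit = {
    val deviation = math.abs(first - second).toDouble / math.max(first, second)
    assert(deviation <= threshold,
      s"deviation too large: $deviation, first size: $first, second size: $second")
  }
}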

We don't see this test failing (either the OOM or the above failure) with
OpenJDK 8 in our test farm; this is with OpenJDK 1.8.0_51-b16, running
with -Xmx4g -Xss2048k -Dspark.buffer.pageSize=1048576.

All other Spark unit tests pass (we see a grand total of 11980 tests)
except for the Kafka stress test already mentioned, across various
platforms/operating systems including big-endian.

I've never seen the NoSuchMethodError mentioned in JavaUDFSuite, and I
haven't seen the failure Alan mentions below either.

I also have performance data to share (HiBench and SparkSqlPerf with
TPC-DS queries) comparing this release to Spark 2.0.2; I'll wait until the
next RC before commenting (it is positive). It looks like we'll have
another RC, as this RC2 vote should have closed by now, and RC3 would also
include the [SPARK-18091] fix to prevent a test's generated code from
exceeding the 64k constant pool size limit.




From:   akchin <ak...@us.ibm.com>
To:     dev@spark.apache.org
Date:   13/12/2016 19:51
Subject:        Re: [VOTE] Apache Spark 2.1.0 (RC2)



[snip]



Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
741598. 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by akchin <ak...@us.ibm.com>.
Hello, 

I am seeing this error as well, except during "define case class and create
Dataset together with paste mode *** FAILED ***". It starts throwing OOM
and GC errors after running for several minutes.





-----
Alan Chin 
IBM Spark Technology Center 



Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Mark Hamstra <ma...@clearstorydata.com>.
Yes, I see the same.

On Mon, Dec 12, 2016 at 5:52 PM, Marcelo Vanzin <va...@cloudera.com> wrote:

> [snip]

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Marcelo Vanzin <va...@cloudera.com>.
Another failing test is "ReplSuite: should clone and clean line object
in ClosureCleaner". It never passes for me; it just keeps spinning until
the JVM eventually starts throwing OOM errors. Anyone seeing that?
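
(To iterate on just that suite, something like build/sbt "repl/testOnly
*ReplSuite*" -- the per-suite invocation pattern from the Spark developer
docs -- should reproduce it without running the full test suite.)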

On Thu, Dec 8, 2016 at 12:39 AM, Reynold Xin <rx...@databricks.com> wrote:
> [snip]



-- 
Marcelo



Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Reynold Xin <rx...@databricks.com>.
I uploaded a new one:
https://repository.apache.org/content/repositories/orgapachespark-1219/



On Thu, Dec 8, 2016 at 11:42 PM, Prashant Sharma <sc...@gmail.com>
wrote:

> [snip]

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Prashant Sharma <sc...@gmail.com>.
I am getting a 404 for the link
https://repository.apache.org/content/repositories/orgapachespark-1217.

--Prashant


On Fri, Dec 9, 2016 at 10:43 AM, Michael Allman <mi...@videoamp.com>
wrote:

> [snip]

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Michael Allman <mi...@videoamp.com>.
I believe https://github.com/apache/spark/pull/16122 needs to be included
in Spark 2.1. It's a simple bug fix to some functionality that is
introduced in 2.1. Unfortunately, it's been manually verified only. There's
no unit test that covers it, and building one is far from trivial.

Michael



> On Dec 8, 2016, at 12:39 AM, Reynold Xin <rx...@databricks.com> wrote:
> [snip]


Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Yin Huai <yh...@databricks.com>.
-1

I hit https://issues.apache.org/jira/browse/SPARK-18816, which prevents
the executor page from showing the log links if an application does not
have executors initially.

On Mon, Dec 12, 2016 at 3:02 PM, Marcelo Vanzin <va...@cloudera.com> wrote:

> [snip]
>

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Marcelo Vanzin <va...@cloudera.com>.
Actually this is not a simple pom change. The code in
UDFRegistration.scala calls this method:

          if (returnType == null) {
            returnType = JavaTypeInference.inferDataType(TypeToken.of(udfReturnType))._1
          }

Because we shade Guava, it's generally not very safe to call methods
in different modules that expose shaded APIs. Can this code be
modified to call the variant that just takes a java.lang.Class instead
of Guava's TypeToken? It seems like that would work, since that method
basically just wraps the argument with "TypeToken.of".
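
A sketch of that suggested change (assuming, per the above, that an
inferDataType overload taking a plain java.lang.Class exists and does the
TypeToken.of wrapping on the catalyst side of the shading):

          if (returnType == null) {
            // Pass the raw java.lang.Class across the module boundary so
            // no shaded Guava type appears in the call signature; the
            // overload wraps it in TypeToken.of internally.
            returnType = JavaTypeInference.inferDataType(udfReturnType)._1
          }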



On Mon, Dec 12, 2016 at 2:03 PM, Marcelo Vanzin <va...@cloudera.com> wrote:
> [snip]



-- 
Marcelo



Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Marcelo Vanzin <va...@cloudera.com>.
I'm running into this when building / testing on JDK 1.7 (haven't tried 1.8):

udf3Test(test.org.apache.spark.sql.JavaUDFSuite)  Time elapsed: 0.079 sec  <<< ERROR!
java.lang.NoSuchMethodError: org.apache.spark.sql.catalyst.JavaTypeInference$.inferDataType(Lcom/google/common/reflect/TypeToken;)Lscala/Tuple2;
       at test.org.apache.spark.sql.JavaUDFSuite.udf3Test(JavaUDFSuite.java:107)


Results :

Tests in error:
 JavaUDFSuite.udf3Test:107 » NoSuchMethod org.apache.spark.sql.catalyst.JavaTyp...


Given the error, I'm mostly sure it's something easily fixable by
adding Guava explicitly in the pom, so it probably shouldn't block
anything.


On Thu, Dec 8, 2016 at 12:39 AM, Reynold Xin <rx...@databricks.com> wrote:
> [snip]



-- 
Marcelo



Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Reynold Xin <rx...@databricks.com>.
I'm going to -1 this myself: https://issues.apache.org/jira/browse/SPARK-18856


On Thu, Dec 8, 2016 at 12:39 AM, Reynold Xin <rx...@databricks.com> wrote:

> [snip]

Re: [VOTE] Apache Spark 2.1.0 (RC2)

Posted by Shivaram Venkataraman <sh...@eecs.berkeley.edu>.
+0

I am not sure how much of a problem this is, but the pip packaging seems
to have changed the size of the hadoop-2.7 artifact. As you can see at
http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc2-bin/,
the Hadoop 2.7 build is 359M, almost double the size of the other
Hadoop versions.

This comes from the fact that we build our pip package using the
Hadoop 2.7 profile [1], and the pip package is contained inside this
tarball. The fix for this is to exclude the pip package from the
distribution in [2].

Thanks
Shivaram

[1] https://github.com/apache/spark/blob/202fcd21ce01393fa6dfaa1c2126e18e9b85ee96/dev/create-release/release-build.sh#L242
[2] https://github.com/apache/spark/blob/202fcd21ce01393fa6dfaa1c2126e18e9b85ee96/dev/make-distribution.sh#L240

On Thu, Dec 8, 2016 at 12:39 AM, Reynold Xin <rx...@databricks.com> wrote:
> [snip]
