Posted to dev@spark.apache.org by Patrick Wendell <pw...@gmail.com> on 2014/05/30 12:12:39 UTC

Announcing Spark 1.0.0

I'm thrilled to announce the availability of Spark 1.0.0! Spark 1.0.0
is a milestone release as the first in the 1.0 line of releases,
providing API stability for Spark's core interfaces.

Spark 1.0.0 is Spark's largest release ever, with contributions from
117 developers. I'd like to thank everyone involved in this release -
it was truly a community effort with fixes, features, and
optimizations contributed from dozens of organizations.

This release expands Spark's standard libraries, introducing a new SQL
package (SparkSQL) which lets users integrate SQL queries into
existing Spark workflows. MLlib, Spark's machine learning library, is
expanded with sparse vector support and several new algorithms. The
GraphX and Streaming libraries also introduce new features and
optimizations. Spark's core engine adds support for secured YARN
clusters, a unified tool for submitting Spark applications, and
several performance and stability improvements. Finally, Spark adds
support for Java 8 lambda syntax and improves coverage of the Java and
Python APIs.

Those features only scratch the surface - check out the release notes here:
http://spark.apache.org/releases/spark-release-1-0-0.html

Note that since release artifacts were posted recently, certain
mirrors may not have working downloads for a few hours.

- Patrick

Re: possible typos in spark 1.0 documentation

Posted by Yadid Ayzenberg <ya...@media.mit.edu>.
Yep, I just issued a pull request.

Yadid


On 5/31/14, 1:25 PM, Patrick Wendell wrote:
>> 1. ctx is an instance of JavaSQLContext, but the textFile method is called as
>> a member of ctx.
>> According to the API, JavaSQLContext does not have such a member, so I'm
>> guessing this should be sc instead.
> Yeah, I think you are correct.
>
>> 2. In that same code example the object sqlCtx is referenced, but it is
>> never instantiated in the code.
>> Should this be ctx?
> Also correct.
>
> I think it would be good to be consistent and always have "ctx" refer
> to a JavaSparkContext and have "sqlCtx" refer to a JavaSQLContext.
>
> Any interest in creating a pull request for this? We'd be happy to
> accept the change.
>
> - Patrick


Re: possible typos in spark 1.0 documentation

Posted by Patrick Wendell <pw...@gmail.com>.
> 1. ctx is an instance of JavaSQLContext, but the textFile method is called as
> a member of ctx.
> According to the API, JavaSQLContext does not have such a member, so I'm
> guessing this should be sc instead.

Yeah, I think you are correct.

> 2. In that same code example the object sqlCtx is referenced, but it is
> never instantiated in the code.
> Should this be ctx?

Also correct.

I think it would be good to be consistent and always have "ctx" refer
to a JavaSparkContext and have "sqlCtx" refer to a JavaSQLContext.

Any interest in creating a pull request for this? We'd be happy to
accept the change.

- Patrick
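
Applying that convention, the guide's Java example would presumably read along
the lines of the sketch below. This is reconstructed from the discussion, not
copied from the guide; the Person bean, app name, and file path are assumed
from the usual Spark example:

```java
// Sketch of the corrected guide snippet (Spark 1.0-era Java API; details assumed).
// Per the convention above: "ctx" is the JavaSparkContext, "sqlCtx" the JavaSQLContext.
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.api.java.JavaSQLContext;
import org.apache.spark.sql.api.java.JavaSchemaRDD;

JavaSparkContext ctx = new JavaSparkContext("local", "SQLExample");
JavaSQLContext sqlCtx = new JavaSQLContext(ctx);   // sqlCtx is now actually instantiated

// textFile is a method of JavaSparkContext (ctx), not of JavaSQLContext.
JavaRDD<Person> people = ctx.textFile("examples/src/main/resources/people.txt")
    .map(line -> {
        String[] parts = line.split(",");
        Person person = new Person();
        person.setName(parts[0]);
        person.setAge(Integer.parseInt(parts[1].trim()));
        return person;
    });

// Apply the schema and query through the SQL context.
JavaSchemaRDD schemaPeople = sqlCtx.applySchema(people, Person.class);
schemaPeople.registerAsTable("people");
JavaSchemaRDD teenagers =
    sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");
```

This is a fragment for illustration only; it needs a surrounding class, a
Person JavaBean, and the Spark 1.0 dependencies to compile.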

possible typos in spark 1.0 documentation

Posted by Yadid Ayzenberg <ya...@media.mit.edu>.
Congrats on the new 1.0 release. Amazing work!

It looks like there may be some typos in the latest 
http://spark.apache.org/docs/latest/sql-programming-guide.html

in the "Running SQL on RDDs" section, when choosing the Java example:

1. ctx is an instance of JavaSQLContext, but the textFile method is 
called as a member of ctx.
According to the API, JavaSQLContext does not have such a member, so I'm 
guessing this should be sc instead.

2. In that same code example the object sqlCtx is referenced, but it is 
never instantiated in the code.
Should this be ctx?

Cheers,

Yadid


Re: Announcing Spark 1.0.0

Posted by Chanwit Kaewkasi <ch...@gmail.com>.
Congratulations!!

-chanwit

--
Chanwit Kaewkasi
linkedin.com/in/chanwit



Re: Announcing Spark 1.0.0

Posted by Patrick Wendell <pw...@gmail.com>.
Hey Rahul,

The v1.0.0 tag is correct. When we release Spark we create multiple
release candidates, and one of the candidates is promoted to the full
release. So rc11 is the same as the official v1.0.0 release.
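
For anyone who wants to check this locally in a clone of the Spark repo,
comparing what the two tags point to is a quick sketch (tag names as discussed
in this thread):

```shell
# If v1.0.0 and v1.0.0-rc11 name the same release commit,
# these two commands print the same hash.
git rev-list -n 1 v1.0.0
git rev-list -n 1 v1.0.0-rc11
```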

- Patrick

On Wed, Jun 4, 2014 at 8:29 PM, Rahul Singhal <Ra...@guavus.com> wrote:
> Could someone please clarify my confusion, or is this not an issue that we
> should be concerned about?
>
> Thanks,
> Rahul Singhal
>
>
>
>
>
> On 30/05/14 5:28 PM, "Rahul Singhal" <Ra...@guavus.com> wrote:
>
>>Is it intentional/ok that the tag v1.0.0 is behind tag v1.0.0-rc11?
>>
>>
>>Thanks,
>>Rahul Singhal

Re: Announcing Spark 1.0.0

Posted by Rahul Singhal <Ra...@guavus.com>.
Could someone please clarify my confusion, or is this not an issue that we
should be concerned about?

Thanks,
Rahul Singhal





On 30/05/14 5:28 PM, "Rahul Singhal" <Ra...@guavus.com> wrote:

>Is it intentional/ok that the tag v1.0.0 is behind tag v1.0.0-rc11?
>
>
>Thanks,
>Rahul Singhal


Re: Announcing Spark 1.0.0

Posted by Rahul Singhal <Ra...@guavus.com>.
Is it intentional/ok that the tag v1.0.0 is behind tag v1.0.0-rc11?


Thanks,
Rahul Singhal







Re: Announcing Spark 1.0.0

Posted by Dean Wampler <de...@gmail.com>.
Congratulations!!





-- 
Dean Wampler, Ph.D.
Typesafe
@deanwampler
http://typesafe.com
http://polyglotprogramming.com

Re: Announcing Spark 1.0.0

Posted by Ognen Duzlevski <og...@gmail.com>.
How exciting! Congratulations! :-)
Ognen



Re: Announcing Spark 1.0.0

Posted by jose farfan <jo...@gmail.com>.
Awesome work




Re: Announcing Spark 1.0.0

Posted by Margusja <ma...@roo.ee>.
Now I can download. Thanks.

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"

On 30/05/14 13:48, Patrick Wendell wrote:
> It is updated - try holding "Shift + refresh" in your browser, you are
> probably caching the page.
>
> On Fri, May 30, 2014 at 3:46 AM, prabeesh k <pr...@gmail.com> wrote:
>> Please update the http://spark.apache.org/docs/latest/  link
>>
>>
>> On Fri, May 30, 2014 at 4:03 PM, Margusja <ma...@roo.ee> wrote:
>>> Is it possible to download a pre-built package?
>>>
>>> http://mirror.symnds.com/software/Apache/incubator/spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz
>>> - gives me 404
>>>
>>> Best regards, Margus (Margusja) Roo
>>> +372 51 48 780
>>> http://margus.roo.ee
>>> http://ee.linkedin.com/in/margusroo
>>> skype: margusja
>>> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>>>
>>>
>>> On 30/05/14 13:18, Christopher Nguyen wrote:
>>>> Awesome work, Pat et al.!
>>>>
>>>> --
>>>> Christopher T. Nguyen
>>>> Co-founder & CEO, Adatao <http://adatao.com>
>>>> linkedin.com/in/ctnguyen <http://linkedin.com/in/ctnguyen>
>>>>


Re: Announcing Spark 1.0.0

Posted by John Omernik <jo...@omernik.com>.
By the way:

This is great work. I am new to the Spark world and have been like a kid
in a candy store learning all it can do.

Is there a good list of build variables? What I mean is something like the
SPARK_HIVE variable described on the Spark SQL page. I'd like to include
that, but once I found it I wondered whether there were other options I
should consider before building.
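
For what it's worth, the sbt build of that era read such options from
environment variables. A hedged sketch of an invocation follows; the variable
names are taken from the docs as I understand them, so verify them against the
building guide before relying on this:

```shell
# Illustrative only: build an assembly with Hive support for a given
# Hadoop version. SPARK_HIVE, SPARK_HADOOP_VERSION and SPARK_YARN are
# read by project/SparkBuild.scala; this is not an exhaustive list.
SPARK_HIVE=true SPARK_HADOOP_VERSION=2.2.0 SPARK_YARN=true sbt/sbt assembly
```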

Thanks!



On Fri, May 30, 2014 at 6:52 AM, John Omernik <jo...@omernik.com> wrote:

> All:
>
> In the pom.xml file I see the MapR repository, but it's not included in
> the ./project/SparkBuild.scala file. Is this expected? I know that to
> build I have to add it there; otherwise sbt hates me with evil red
> messages and such.
>
> John

Re: Announcing Spark 1.0.0

Posted by John Omernik <jo...@omernik.com>.
All:

In the pom.xml file I see the MapR repository, but it's not included in
the ./project/SparkBuild.scala file. Is this expected? I know that to
build I have to add it there; otherwise sbt hates me with evil red
messages and such.
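
For anyone applying the same workaround, the change amounts to one extra
resolver in project/SparkBuild.scala, roughly as below. This is a sketch;
check the repository URL against the entry in pom.xml:

```scala
// In project/SparkBuild.scala, alongside the build's existing resolvers:
resolvers ++= Seq(
  "MapR Repository" at "http://repository.mapr.com/maven/"
)
```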

John


On Fri, May 30, 2014 at 6:24 AM, Kousuke Saruta <sa...@oss.nttdata.co.jp>
wrote:

> Hi all
>
>
>
> In <https://spark.apache.org/downloads.html>, the URL for the 1.0.0
> release notes seems to be wrong.
>
> The URL should be
> https://spark.apache.org/releases/spark-release-1-0-0.html but links to
> https://spark.apache.org/releases/spark-release-1.0.0.html
>
>
>
> Best Regards,
>
> Kousuke

RE: Announcing Spark 1.0.0

Posted by Kousuke Saruta <sa...@oss.nttdata.co.jp>.
Hi all,

In <https://spark.apache.org/downloads.html>, the URL for the 1.0.0
release notes seems to be wrong.

The URL should be https://spark.apache.org/releases/spark-release-1-0-0.html,
but it currently links to https://spark.apache.org/releases/spark-release-1.0.0.html.

Best Regards,

Kousuke

From: prabeesh k [mailto:prabsmails@gmail.com]
Sent: Friday, May 30, 2014 8:18 PM
To: user@spark.apache.org
Subject: Re: Announcing Spark 1.0.0

I forgot to hard refresh.

thanks



Re: Announcing Spark 1.0.0

Posted by prabeesh k <pr...@gmail.com>.
I forgot to hard refresh.
thanks



On Fri, May 30, 2014 at 4:18 PM, Patrick Wendell <pw...@gmail.com> wrote:

> It is updated - try holding "Shift + refresh" in your browser, you are
> probably caching the page.
>
> On Fri, May 30, 2014 at 3:46 AM, prabeesh k <pr...@gmail.com> wrote:
> > Please update the http://spark.apache.org/docs/latest/  link
>

Re: Announcing Spark 1.0.0

Posted by Patrick Wendell <pw...@gmail.com>.
It is updated - try holding "Shift + refresh" in your browser, you are
probably caching the page.

On Fri, May 30, 2014 at 3:46 AM, prabeesh k <pr...@gmail.com> wrote:
> Please update the http://spark.apache.org/docs/latest/  link
>
>
> On Fri, May 30, 2014 at 4:03 PM, Margusja <ma...@roo.ee> wrote:
>>
>> Is it possible to download pre build package?
>>
>> http://mirror.symnds.com/software/Apache/incubator/spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz
>> - gives me 404
>>
>> Best regards, Margus (Margusja) Roo
>> +372 51 48 780
>> http://margus.roo.ee
>> http://ee.linkedin.com/in/margusroo
>> skype: margusja
>> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>>
>>
>> On 30/05/14 13:18, Christopher Nguyen wrote:
>>>
>>> Awesome work, Pat et al.!
>>>
>>> --
>>> Christopher T. Nguyen
>>> Co-founder & CEO, Adatao <http://adatao.com>
>>> linkedin.com/in/ctnguyen <http://linkedin.com/in/ctnguyen>
>>>
>>>
>>>
>>>
>>> On Fri, May 30, 2014 at 3:12 AM, Patrick Wendell <pwendell@gmail.com
>>> <ma...@gmail.com>> wrote:
>>>
>>>     I'm thrilled to announce the availability of Spark 1.0.0! Spark 1.0.0
>>>     is a milestone release as the first in the 1.0 line of releases,
>>>     providing API stability for Spark's core interfaces.
>>>
>>>     Spark 1.0.0 is Spark's largest release ever, with contributions from
>>>     117 developers. I'd like to thank everyone involved in this release -
>>>     it was truly a community effort with fixes, features, and
>>>     optimizations contributed from dozens of organizations.
>>>
>>>     This release expands Spark's standard libraries, introducing a new
>>> SQL
>>>     package (SparkSQL) which lets users integrate SQL queries into
>>>     existing Spark workflows. MLlib, Spark's machine learning library, is
>>>     expanded with sparse vector support and several new algorithms. The
>>>     GraphX and Streaming libraries also introduce new features and
>>>     optimizations. Spark's core engine adds support for secured YARN
>>>     clusters, a unified tool for submitting Spark applications, and
>>>     several performance and stability improvements. Finally, Spark adds
>>>     support for Java 8 lambda syntax and improves coverage of the Java
>>> and
>>>     Python API's.
>>>
>>>     Those features only scratch the surface - check out the release
>>>     notes here:
>>>     http://spark.apache.org/releases/spark-release-1-0-0.html
>>>
>>>     Note that since release artifacts were posted recently, certain
>>>     mirrors may not have working downloads for a few hours.
>>>
>>>     - Patrick
>>>
>>>
>>
>

Re: Announcing Spark 1.0.0

Posted by prabeesh k <pr...@gmail.com>.
Please update the http://spark.apache.org/docs/latest/  link


On Fri, May 30, 2014 at 4:03 PM, Margusja <ma...@roo.ee> wrote:

> Is it possible to download pre build package?
> http://mirror.symnds.com/software/Apache/incubator/spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz
> - gives me 404
>
> Best regards, Margus (Margusja) Roo
> +372 51 48 780
> http://margus.roo.ee
> http://ee.linkedin.com/in/margusroo
> skype: margusja
> ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
>
>
> On 30/05/14 13:18, Christopher Nguyen wrote:
>
>> Awesome work, Pat et al.!
>>
>> --
>> Christopher T. Nguyen
>> Co-founder & CEO, Adatao <http://adatao.com>
>> linkedin.com/in/ctnguyen <http://linkedin.com/in/ctnguyen>
>>
>>
>>
>>
>> On Fri, May 30, 2014 at 3:12 AM, Patrick Wendell <pwendell@gmail.com
>> <ma...@gmail.com>> wrote:
>>
>>     I'm thrilled to announce the availability of Spark 1.0.0! Spark 1.0.0
>>     is a milestone release as the first in the 1.0 line of releases,
>>     providing API stability for Spark's core interfaces.
>>
>>     Spark 1.0.0 is Spark's largest release ever, with contributions from
>>     117 developers. I'd like to thank everyone involved in this release -
>>     it was truly a community effort with fixes, features, and
>>     optimizations contributed from dozens of organizations.
>>
>>     This release expands Spark's standard libraries, introducing a new SQL
>>     package (SparkSQL) which lets users integrate SQL queries into
>>     existing Spark workflows. MLlib, Spark's machine learning library, is
>>     expanded with sparse vector support and several new algorithms. The
>>     GraphX and Streaming libraries also introduce new features and
>>     optimizations. Spark's core engine adds support for secured YARN
>>     clusters, a unified tool for submitting Spark applications, and
>>     several performance and stability improvements. Finally, Spark adds
>>     support for Java 8 lambda syntax and improves coverage of the Java and
>>     Python API's.
>>
>>     Those features only scratch the surface - check out the release
>>     notes here:
>>     http://spark.apache.org/releases/spark-release-1-0-0.html
>>
>>     Note that since release artifacts were posted recently, certain
>>     mirrors may not have working downloads for a few hours.
>>
>>     - Patrick
>>
>>
>>
>

Re: Announcing Spark 1.0.0

Posted by Margusja <ma...@roo.ee>.
Is it possible to download a pre-built package?
http://mirror.symnds.com/software/Apache/incubator/spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz 
- gives me 404

Best regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"

On 30/05/14 13:18, Christopher Nguyen wrote:
> Awesome work, Pat et al.!
>
> --
> Christopher T. Nguyen
> Co-founder & CEO, Adatao <http://adatao.com>
> linkedin.com/in/ctnguyen <http://linkedin.com/in/ctnguyen>
>
>
>
> On Fri, May 30, 2014 at 3:12 AM, Patrick Wendell <pwendell@gmail.com 
> <ma...@gmail.com>> wrote:
>
>     I'm thrilled to announce the availability of Spark 1.0.0! Spark 1.0.0
>     is a milestone release as the first in the 1.0 line of releases,
>     providing API stability for Spark's core interfaces.
>
>     Spark 1.0.0 is Spark's largest release ever, with contributions from
>     117 developers. I'd like to thank everyone involved in this release -
>     it was truly a community effort with fixes, features, and
>     optimizations contributed from dozens of organizations.
>
>     This release expands Spark's standard libraries, introducing a new SQL
>     package (SparkSQL) which lets users integrate SQL queries into
>     existing Spark workflows. MLlib, Spark's machine learning library, is
>     expanded with sparse vector support and several new algorithms. The
>     GraphX and Streaming libraries also introduce new features and
>     optimizations. Spark's core engine adds support for secured YARN
>     clusters, a unified tool for submitting Spark applications, and
>     several performance and stability improvements. Finally, Spark adds
>     support for Java 8 lambda syntax and improves coverage of the Java and
>     Python API's.
>
>     Those features only scratch the surface - check out the release
>     notes here:
>     http://spark.apache.org/releases/spark-release-1-0-0.html
>
>     Note that since release artifacts were posted recently, certain
>     mirrors may not have working downloads for a few hours.
>
>     - Patrick
>
>


Re: Announcing Spark 1.0.0

Posted by Christopher Nguyen <ct...@adatao.com>.
Awesome work, Pat et al.!

--
Christopher T. Nguyen
Co-founder & CEO, Adatao <http://adatao.com>
linkedin.com/in/ctnguyen



On Fri, May 30, 2014 at 3:12 AM, Patrick Wendell <pw...@gmail.com> wrote:

> I'm thrilled to announce the availability of Spark 1.0.0! Spark 1.0.0
> is a milestone release as the first in the 1.0 line of releases,
> providing API stability for Spark's core interfaces.
>
> Spark 1.0.0 is Spark's largest release ever, with contributions from
> 117 developers. I'd like to thank everyone involved in this release -
> it was truly a community effort with fixes, features, and
> optimizations contributed from dozens of organizations.
>
> This release expands Spark's standard libraries, introducing a new SQL
> package (SparkSQL) which lets users integrate SQL queries into
> existing Spark workflows. MLlib, Spark's machine learning library, is
> expanded with sparse vector support and several new algorithms. The
> GraphX and Streaming libraries also introduce new features and
> optimizations. Spark's core engine adds support for secured YARN
> clusters, a unified tool for submitting Spark applications, and
> several performance and stability improvements. Finally, Spark adds
> support for Java 8 lambda syntax and improves coverage of the Java and
> Python API's.
>
> Those features only scratch the surface - check out the release notes here:
> http://spark.apache.org/releases/spark-release-1-0-0.html
>
> Note that since release artifacts were posted recently, certain
> mirrors may not have working downloads for a few hours.
>
> - Patrick
>
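
The announcement above notes that Spark 1.0 adds support for Java 8 lambda syntax in its Java API. The lambda syntax itself is standard Java 8, so it can be sketched without a Spark cluster; the class and method names below are illustrative, not from the Spark codebase:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Lambdas {
    // Filters words longer than four characters using a Java 8 lambda --
    // the same concise syntax Spark 1.0's Java API now accepts in place
    // of anonymous Function classes.
    static List<String> longWords(List<String> words) {
        return words.stream()
                    .filter(w -> w.length() > 4)   // lambda instead of an anonymous class
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(longWords(Arrays.asList("spark", "is", "fast")));
    }
}
```

Against Spark itself the analogous call would be something like `rdd.filter(w -> w.length() > 4)` on a `JavaRDD<String>`; the snippet uses plain collections so it runs standalone.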
