Posted to dev@spark.apache.org by Nicholas Chammas <ni...@gmail.com> on 2016/04/13 03:05:09 UTC

Re: Spark 1.6.1 packages on S3 corrupt?

Yes, this is a known issue. The core devs are already aware of it. [CC dev]

FWIW, I believe the Spark 1.6.1 / Hadoop 2.6 package on S3 is not corrupt.
It may be the only 1.6.1 package that is not corrupt, though. :/
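
If you want to sanity-check a download before unpacking it, something like
the following should catch the problem early. This is just a sketch, but
"gzip: stdin: not in gzip format" almost always means the saved file is an
error page or truncated content rather than a real gzip archive (and the
later "mv: missing destination file operand" looks like fallout from that:
tar never produced a directory for the script to rename).

    PKG=spark-1.6.1-bin-hadoop1.tgz

    # A bad upload is often an XML/HTML error page saved under the .tgz
    # name, so check the file type and first bytes before extracting.
    file "$PKG"          # should report "gzip compressed data"
    head -c 200 "$PKG"   # readable XML/HTML here means you got an error page

    # Test the whole gzip stream without unpacking anything.
    gzip -t "$PKG" && echo "gzip stream OK"

You can also compare the file against the checksum files Apache publishes
alongside each release under archive.apache.org/dist/spark/, though I'm not
sure the S3 bucket mirrors those.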

Nick


On Tue, Apr 12, 2016 at 9:00 PM Augustus Hong <au...@branchmetrics.io>
wrote:

> Hi all,
>
> I'm trying to launch a cluster with the spark-ec2 script but seeing the
> error below.  Are the packages on S3 corrupted / not in the correct format?
>
> Initializing spark
>
> --2016-04-13 00:25:39--
> http://s3.amazonaws.com/spark-related-packages/spark-1.6.1-bin-hadoop1.tgz
>
> Resolving s3.amazonaws.com (s3.amazonaws.com)... 54.231.11.67
>
> Connecting to s3.amazonaws.com (s3.amazonaws.com)|54.231.11.67|:80...
> connected.
>
> HTTP request sent, awaiting response... 200 OK
>
> Length: 277258240 (264M) [application/x-compressed]
>
> Saving to: ‘spark-1.6.1-bin-hadoop1.tgz’
>
> 100%[==================================================================================================================>]
> 277,258,240 37.6MB/s   in 9.2s
>
> 2016-04-13 00:25:49 (28.8 MB/s) - ‘spark-1.6.1-bin-hadoop1.tgz’ saved
> [277258240/277258240]
>
> Unpacking Spark
>
>
> gzip: stdin: not in gzip format
>
> tar: Child returned status 1
>
> tar: Error is not recoverable: exiting now
>
> mv: missing destination file operand after `spark'
>
> Try `mv --help' for more information.
>
> --
> Augustus Hong
> Software Engineer
>
>