Posted to user@spark.apache.org by redocpot <ju...@gmail.com> on 2014/07/28 17:15:24 UTC

sbt directory missed

Hi,

I have started an EC2 cluster using Spark by running the spark-ec2 script.

Just a little confused: I cannot find the sbt/ directory under spark/.

I have checked the Spark version; it's 1.0.0 (the default). When I was
working with 0.9.x, sbt/ was there.

Has the script changed in 1.0.x? I cannot find any changelog entry on this,
or maybe I am missing something.

Certainly, I can download sbt and make things work. I just want to make
things clear.
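
(For the record, a rough sketch of doing just that; the launcher URL and
version below are illustrative, not something I have verified on this AMI:)

    # Sketch: fetch a standalone sbt launcher and wrap it in a script.
    # URL and version are illustrative; adjust to a current sbt release.
    cd ~
    wget http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.13.5/sbt-launch.jar
    mkdir -p ~/bin
    printf '#!/bin/sh\njava -Xmx1g -jar %s/sbt-launch.jar "$@"\n' "$HOME" > ~/bin/sbt
    chmod +x ~/bin/sbt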

Thank you.

Here is the file list of spark/

root@ip-10-81-154-223:~# ls -l spark
total 384
drwxrwxr-x 10 1000 1000   4096 Jul 28 14:58 .
drwxr-xr-x 20 root root   4096 Jul 28 14:58 ..
drwxrwxr-x  2 1000 1000   4096 Jul 28 13:34 bin
-rw-rw-r--  1 1000 1000 281471 May 26 07:02 CHANGES.txt
drwxrwxr-x  2 1000 1000   4096 Jul 28 08:22 conf
drwxrwxr-x  4 1000 1000   4096 May 26 07:02 ec2
drwxrwxr-x  3 1000 1000   4096 May 26 07:02 examples
drwxrwxr-x  2 1000 1000   4096 May 26 07:02 lib
-rw-rw-r--  1 1000 1000  29983 May 26 07:02 LICENSE
drwxr-xr-x  2 root root   4096 Jul 28 14:42 logs
-rw-rw-r--  1 1000 1000  22559 May 26 07:02 NOTICE
drwxrwxr-x  6 1000 1000   4096 May 26 07:02 python
-rw-rw-r--  1 1000 1000   4221 May 26 07:02 README.md
-rw-rw-r--  1 1000 1000     35 May 26 07:02 RELEASE
drwxrwxr-x  2 1000 1000   4096 May 26 07:02 sbin

Re: sbt directory missed

Posted by Hao REN <ju...@gmail.com>.
What is confusing is that:

spark-0.9.2-bin-hadoop1.tgz
<http://s3.amazonaws.com/spark-related-packages/spark-0.9.2-bin-hadoop1.tgz>
=> contains the source code and sbt

spark-1.0.1-bin-hadoop1.tgz
<http://s3.amazonaws.com/spark-related-packages/spark-1.0.1-bin-hadoop1.tgz>
=> does not

According to their names, they are both binary packages.

Every time I need a cluster with the source code, I have to give the git
commit hash to the script.
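
(For reference, that launch looks roughly like the sketch below. It relies
on the spark-ec2 options of this era, where --spark-version also accepts a
git hash and --spark-git-repo names the repo to build from; the key pair,
commit hash, and cluster name are placeholders:)

    # Sketch: launch a cluster built from source at a specific commit.
    ./spark-ec2 -k my-key -i my-key.pem \
      --spark-git-repo=https://github.com/apache/spark \
      --spark-version=<commit-hash> \
      launch my-cluster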

Is this intentional?

Thank you.



2014-07-28 23:30 GMT+02:00 redocpot <ju...@gmail.com>:

> Thank you for your reply.
>
> I need sbt to package my project and then submit it.
>
> Could you tell me how to run a Spark project on the 1.0 AMI without sbt?
>
> I don't understand why 1.0 only contains the prebuilt packages. I don't
> think it makes sense, since sbt is essential.
>
> Users have to download sbt or clone the GitHub repo, whereas on the 0.9
> AMI, sbt was pre-installed.
>
> A command like:
> $ sbt/sbt package run
> could do the job.
>
> Thanks. =)



-- 
REN Hao

Data Engineer @ ClaraVista

Paris, France

Tel:  +33 06 14 54 57 24

Re: sbt directory missed

Posted by redocpot <ju...@gmail.com>.
Thank you for your reply.

I need sbt to package my project and then submit it.

Could you tell me how to run a Spark project on the 1.0 AMI without sbt?

I don't understand why 1.0 only contains the prebuilt packages. I don't
think it makes sense, since sbt is essential.

Users have to download sbt or clone the GitHub repo, whereas on the 0.9
AMI, sbt was pre-installed.

A command like:
$ sbt/sbt package run
could do the job.
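
(For what it's worth, a sketch of that workflow with a standalone sbt plus
the prebuilt package's spark-submit; the project path, main class, and jar
name below are made up:)

    # Sketch: package the app with a standalone sbt, then submit it using
    # the prebuilt Spark. Paths, class, and jar names are hypothetical.
    cd ~/my-spark-app
    sbt package
    ~/spark/bin/spark-submit \
      --class com.example.Main \
      target/scala-2.10/my-spark-app_2.10-0.1.jar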

Thanks. =)

Re: sbt directory missed

Posted by Shivaram Venkataraman <sh...@eecs.berkeley.edu>.
I think the 1.0 AMI only contains the prebuilt packages (i.e., just the
binaries) of Spark and not the source code. If you want to build Spark on
EC2, you can clone the GitHub repo and then use sbt.
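
(A minimal sketch; the tag below is illustrative:)

    # Sketch: build Spark from source on the node; the repo ships its own
    # sbt launcher script under sbt/.
    git clone https://github.com/apache/spark.git
    cd spark
    git checkout v1.0.1   # or whichever tag/commit you need
    sbt/sbt assembly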

Thanks
Shivaram


On Mon, Jul 28, 2014 at 8:49 AM, redocpot <ju...@gmail.com> wrote:

> update:
>
> Just checked the Python launch script; when retrieving Spark, it refers
> to this script:
> https://github.com/mesos/spark-ec2/blob/v3/spark/init.sh
>
> where each version number is mapped to a tar file:
>
>     0.9.2)
>       if [[ "$HADOOP_MAJOR_VERSION" == "1" ]]; then
>         wget http://s3.amazonaws.com/spark-related-packages/spark-0.9.2-bin-hadoop1.tgz
>       else
>         wget http://s3.amazonaws.com/spark-related-packages/spark-0.9.2-bin-cdh4.tgz
>       fi
>       ;;
>     1.0.0)
>       if [[ "$HADOOP_MAJOR_VERSION" == "1" ]]; then
>         wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.0-bin-hadoop1.tgz
>       else
>         wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.0-bin-cdh4.tgz
>       fi
>       ;;
>     1.0.1)
>       if [[ "$HADOOP_MAJOR_VERSION" == "1" ]]; then
>         wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.1-bin-hadoop1.tgz
>       else
>         wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.1-bin-cdh4.tgz
>       fi
>       ;;
>
> I just checked the last three tar files. I found the sbt/ directory and
> many other directories like bagel, mllib, etc. in the 0.9.2 tar file.
> However, they are not in the 1.0.0 and 1.0.1 tar files.
>
> I am not sure the 1.0.x versions are mapped to the correct tar files.

Re: sbt directory missed

Posted by redocpot <ju...@gmail.com>.
update:

Just checked the Python launch script; when retrieving Spark, it refers to
this script:
https://github.com/mesos/spark-ec2/blob/v3/spark/init.sh

where each version number is mapped to a tar file:

    0.9.2)
      if [[ "$HADOOP_MAJOR_VERSION" == "1" ]]; then
        wget http://s3.amazonaws.com/spark-related-packages/spark-0.9.2-bin-hadoop1.tgz
      else
        wget http://s3.amazonaws.com/spark-related-packages/spark-0.9.2-bin-cdh4.tgz
      fi
      ;;
    1.0.0)
      if [[ "$HADOOP_MAJOR_VERSION" == "1" ]]; then
        wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.0-bin-hadoop1.tgz
      else
        wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.0-bin-cdh4.tgz
      fi
      ;;
    1.0.1)
      if [[ "$HADOOP_MAJOR_VERSION" == "1" ]]; then
        wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.1-bin-hadoop1.tgz
      else
        wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.1-bin-cdh4.tgz
      fi
      ;;

I just checked the last three tar files. I found the sbt/ directory and many
other directories like bagel, mllib, etc. in the 0.9.2 tar file. However,
they are not in the 1.0.0 and 1.0.1 tar files.

I am not sure the 1.0.x versions are mapped to the correct tar files.
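
(In case anyone wants to reproduce the check, a quick sketch that lists a
tarball's top-level entries without extracting it:)

    # Sketch: list the top-level entries of a release tarball to see
    # whether sbt/ is included.
    wget http://s3.amazonaws.com/spark-related-packages/spark-1.0.1-bin-hadoop1.tgz
    tar -tzf spark-1.0.1-bin-hadoop1.tgz | awk -F/ '$2 != "" {print $2}' | sort -u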



