Posted to user@whirr.apache.org by a b <au...@yahoo.com> on 2013/08/06 00:42:56 UTC
cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
I get a Whirr error:
Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
When I browse to http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
I can see the file is named hadoop-1.2.1.tar.gz.mds
How do I tell Whirr to use a different suffix?
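[Editor's note: one workaround for the suffix mismatch - an assumption, though the property appears commented out in the hadoop.properties shown later in this thread - is to override the tarball source so Whirr fetches hadoop-1.2.1 from a different host than the osuosl mirror:]

```properties
# Hypothetical workaround: point Whirr at the Apache archive instead of
# the osuosl mirror, sidestepping that mirror's checksum-file naming.
whirr.hadoop.version=1.2.1
whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
```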
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
OK - I found it - thanks for helping me.
You can log into instances using the following ssh commands:
[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@184.73.74.224
[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.184.237
To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
ab@ubuntu12-64:~$ ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@184.73.74.224
Warning: Permanently added '184.73.74.224' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-48-virtual x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Tue Aug 6 00:32:54 UTC 2013
System load: 0.46 Processes: 58
Usage of /: 13.9% of 7.87GB Users logged in: 0
Memory usage: 33% IP address for eth0: 10.179.7.81
Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Use Juju to deploy your cloud instances and workloads:
https://juju.ubuntu.com/#cloud-precise
42 packages can be updated.
24 updates are security updates.
Last login: Tue Aug 6 00:32:19 2013 from 50.0.160.200
ab@ip-10-179-7-81:~$ which hadoop
/usr/local/hadoop-1.2.1/bin/hadoop
ab@ip-10-179-7-81:~$
________________________________
From: Andrew Bayer <an...@gmail.com>
To: a b <au...@yahoo.com>
Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
Sent: Monday, August 5, 2013 4:48 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
A.
On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]^M
>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]^M
>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]^M
>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]^M
>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]^M
>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]^M
>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]^M
>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]^M
>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]^M
>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]^M
>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]^M
>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]^M
>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]^M
>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]^M
>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]^M
>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]^M
>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]^M
>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]^M
>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>Reading package lists...^M
>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.^M
>, error=^M
>gzip: stdin: not in gzip format^M
>tar: Child returned status 1^M
>tar: Error is not recoverable: exiting now^M
>mv: cannot stat `jdk1*': No such file or directory^M
>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: user@whirr.apache.org; a b <au...@yahoo.com>
>Sent: Monday, August 5, 2013 4:22 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
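[Editor's note: the non-blocking behavior Andrew describes can be sketched in shell. This is an illustration of the pattern, not Whirr's actual install script; `try_fetch` is a hypothetical name.]

```shell
# Illustration of a non-blocking fetch: if the command fails, log a
# warning and carry on instead of aborting the whole bootstrap.
try_fetch() {
  if ! "$@"; then
    echo "Could not download $*. Continuing." >&2
  fi
  return 0   # never propagate the failure
}
```

Called as `try_fetch curl -fsSO <url>`, a missing .md5 file would produce a "Continuing." warning like the one in the logs below rather than stopping the install.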
>
>
>A.
>
>
>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>
>OK - I'm not sure what you are asking.
>>
>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>
>>
>>
>>where these are the properties I think I changed or added from the original recipe:
>>
>>
>>whirr.cluster-name=hadoop-ec2
>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>whirr.hardware-id=t1.micro
>>whirr.image-id=us-east-1/ami-25d9a94c
>>whirr.hadoop.version=1.2.1
>>whirr.provider=aws-ec2
>>whirr.identity=${env:AWS_ACCESS_KEY}
>>whirr.credential=${env:AWS_SECRET_KEY}
>>whirr.location-id=us-east-1
>>whirr.java.install-function=install_oracle_jdk7
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>Sent: Monday, August 5, 2013 4:11 PM
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>
>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>
>>>A.
>>>
>>>
>>>
>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
It looks like the install_openjdk function only tries to install openjdk-6-jdk:
function install_openjdk_deb() {
  retry_apt_get update
  retry_apt_get -y install openjdk-6-jdk
  # Try to set JAVA_HOME in a number of commonly used locations
  # Lifting JAVA_HOME detection from jclouds
but I am running an Ubuntu server which is missing the X11 libraries (libxt); I guess I need openjdk-6-jre-headless.
It doesn't look like I can override this from the properties file - what do you think?
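[Editor's note: a sketch of the change being described - hypothetical, assuming one edits install_openjdk in a Whirr checkout and rebuilds, and assuming that installing the headless JRE explicitly resolves the dependency problem reported below. `retry_apt_get` is stubbed so the sketch runs standalone.]

```shell
# Stub standing in for Whirr's retry_apt_get helper so this sketch runs
# outside a cluster node; the real helper wraps apt-get with retries.
retry_apt_get() { echo "apt-get $*"; }

# Hypothetical variant of install_openjdk_deb for headless Ubuntu servers:
# install the headless JRE explicitly so openjdk-6-jdk's JRE dependency
# resolves without pulling in the X11 (libxt) stack.
function install_openjdk_deb() {
  retry_apt_get update
  retry_apt_get -y install openjdk-6-jre-headless
  retry_apt_get -y install openjdk-6-jdk
}
```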
________________________________
From: a b <au...@yahoo.com>
To: "user@whirr.apache.org" <us...@whirr.apache.org>
Sent: Tuesday, August 6, 2013 10:06 AM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
I also took your advice and switched to OpenJDK:
ab@ubuntu12-64:~$ tail ~/whirr/recipes/hadoop.properties
#whirr.hadoop.version=1.0.4
whirr.hadoop.version=1.2.1
#whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY}
whirr.credential=${env:AWS_SECRET_KEY}
whirr.location-id=us-east-1
#whirr.java.install-function=install_oracle_jdk7
whirr.java.install-function=install_openjdk
but when I launched:
1575 whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
1576 tail ~/whirr/recipes/hadoop.properties
1577 history
I got this error message on the console:
Get:36 http://archive.ubuntu.com precise-updates/multiverse Translation-en [7834 B]
Get:37 http://archive.ubuntu.com precise-updates/restricted Translation-en [2432 B]
Get:38 http://archive.ubuntu.com precise-updates/universe Translation-en [124 kB]
Fetched 7548 kB in 9s (828 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
openjdk-6-jdk : Depends: openjdk-6-jre (>= 6b27-1.12.6-1ubuntu0.12.04.2) but it is not going to be installed
Recommends: libxt-dev but it is not going to be installed
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
openjdk-6-jdk : Depends: openjdk-6-jre (>= 6b27-1.12.6-1ubuntu0.12.04.2) but it is not going to be installed
Recommends: libxt-dev but it is not going to be installed
I checked to make sure it was accurate:
ab@ubuntu12-64:~$ ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.132.62
Warning: Permanently added '54.227.132.62' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-48-virtual x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Tue Aug 6 16:55:22 UTC 2013
System load: 0.06 Processes: 58
Usage of /: 9.4% of 7.87GB Users logged in: 0
Memory usage: 15% IP address for eth0: 10.164.33.99
Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Use Juju to deploy your cloud instances and workloads:
https://juju.ubuntu.com/#cloud-precise
24 packages can be updated.
24 updates are security updates.
Last login: Tue Aug 6 16:53:36 2013 from 50.0.160.200
ab@ip-10-164-33-99:~$ java -version
The program 'java' can be found in the following packages:
* default-jre
* gcj-4.6-jre-headless
* openjdk-6-jre-headless
* gcj-4.5-jre-headless
* openjdk-7-jre-headless
Ask your administrator to install one of them
ab@ip-10-164-33-99:~$ ls -l /usr/bin/j*
-rwxr-xr-x 1 root root 39440 Nov 19 2012 /usr/bin/join
-rwxr-xr-x 1 root root 3918 Mar 18 19:34 /usr/bin/json_pp
ab@ip-10-164-33-99:~$ exit
I need coaching on this as well. I'm on a bad-luck streak - I'm not buying any lotto tickets this week.
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Monday, August 5, 2013 5:54 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Try not building as root - that can throw things off.
A.
On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
As for the Java 7 problem - I found this suggestion:
>
>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
>
>I tried to download Whirr, as suggested here:
>
>
>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
>
>o I did: git clone ...
>
>o I modified: core/src/main/resources/functions/...
>o I did: mvn eclipse:eclipse ...
>o I skipped: eclipse import
>o I ran: mvn install
>
>
>It fails in "mvn install" during the test phase:
>
>
>Tests in error:
>
testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>[..]
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>
>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
>[INFO] ------------------------------------------------------------------------
>[INFO] Reactor Summary:
>[INFO]
>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>[INFO] Whirr ............................................. SUCCESS [4.488s]
>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>
>
>
>Is there a better way to try this suggestion?
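[Editor's note: the failures above are test-phase assertions (they insist the build not run as root), so besides building as a non-root user, the standard Maven flag that skips the test phase is another option. Shown via a variable and echo so the sketch runs without a Maven checkout; not Whirr-specific advice.]

```shell
# -DskipTests is a standard Maven option: compile, package, and install
# without executing the (here root-intolerant) unit tests.
MVN_CMD="mvn install -DskipTests"
echo "$MVN_CMD"
```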
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
That's just odd. I don't know why apt-get install openjdk-6-jdk isn't
sufficient on its own. Has anyone else seen this?
A.
On Tue, Aug 6, 2013 at 10:06 AM, a b <au...@yahoo.com> wrote:
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
i also took your advice and switched to the open jdk:
ab@ubuntu12-64:~$ tail ~/whirr/recipes/hadoop.properties
#whirr.hadoop.version=1.0.4
whirr.hadoop.version=1.2.1
#whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY}
whirr.credential=${env:AWS_SECRET_KEY}
whirr.location-id=us-east-1
#whirr.java.install-function=install_oracle_jdk7
whirr.java.install-function=install_openjdk
but when i launched:
1575 whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
1576 tail ~/whirr/recipes/hadoop.properties
1577 history
i got this error message on the console:
Get:36 http://archive.ubuntu.com precise-updates/multiverse Translation-en [7834 B]
Get:37 http://archive.ubuntu.com precise-updates/restricted Translation-en [2432 B]
Get:38 http://archive.ubuntu.com precise-updates/universe Translation-en [124 kB]
Fetched 7548 kB in 9s (828 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
openjdk-6-jdk : Depends: openjdk-6-jre (>= 6b27-1.12.6-1ubuntu0.12.04.2) but it is not going to be installed
Recommends: libxt-dev but it is not going to be installed
Reading package lists...
Building dependency tree...
Reading state information...
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
openjdk-6-jdk : Depends: openjdk-6-jre (>= 6b27-1.12.6-1ubuntu0.12.04.2) but it is not going to be installed
Recommends: libxt-dev but it is not going to be installed
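[Editor's note] When apt says a dependency "is not going to be installed", a simulated install usually surfaces the underlying conflict. A sketch assuming an Ubuntu/Debian host (`-s` is apt-get's simulate flag, so nothing on the system changes; the guard degrades to a note on hosts without apt):

```shell
# Dry-run the install to expose the dependency chain behind
# "openjdk-6-jre ... is not going to be installed".
if command -v apt-get >/dev/null 2>&1; then
  apt_status="simulated install attempted"
  apt-get -s install openjdk-6-jdk openjdk-6-jre || true
  apt-cache policy openjdk-6-jre || true   # shows which candidate version apt sees
else
  apt_status="apt-get not available on this host"
fi
echo "$apt_status"
```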
i checked to make sure it was accurate:
ab@ubuntu12-64:~$ ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.132.62
Warning: Permanently added '54.227.132.62' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-48-virtual x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Tue Aug 6 16:55:22 UTC 2013
System load: 0.06 Processes: 58
Usage of /: 9.4% of 7.87GB Users logged in: 0
Memory usage: 15% IP address for eth0: 10.164.33.99
Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Use Juju to deploy your cloud instances and workloads:
https://juju.ubuntu.com/#cloud-precise
24 packages can be updated.
24 updates are security updates.
Last login: Tue Aug 6 16:53:36 2013 from 50.0.160.200
ab@ip-10-164-33-99:~$ java -version
The program 'java' can be found in the following packages:
* default-jre
* gcj-4.6-jre-headless
* openjdk-6-jre-headless
* gcj-4.5-jre-headless
* openjdk-7-jre-headless
Ask your administrator to install one of them
ab@ip-10-164-33-99:~$ ls -l /usr/bin/j*
-rwxr-xr-x 1 root root 39440 Nov 19 2012 /usr/bin/join
-rwxr-xr-x 1 root root 3918 Mar 18 19:34 /usr/bin/json_pp
ab@ip-10-164-33-99:~$ exit
i need coaching on this - as well. i'm on a bad luck streak - i'm not buying any lotto tickets this week.
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Monday, August 5, 2013 5:54 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Try not building as root - that can throw things off.
A.
On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
as for the java 7 problem - i found this suggestion:
>
>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
>
>i tried to download whirr - as suggested here:
>
>
>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
>
>o i did: git clone ...
>
>o i modified: core/src/main/resources/functions/...
>o i did: mvn eclipse:eclipse ...
>o i skipped: eclipse import
>o i ran: mvn install
>
>
>it fails in the "mvn install" during test
>
>
>Tests in error:
>
testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>[..]
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>
>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
>[INFO] ------------------------------------------------------------------------
>[INFO] Reactor Summary:
>[INFO]
>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>[INFO] Whirr ............................................. SUCCESS [4.488s]
>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>
>
>
>is there a better way to try this suggestion?
>
>
>
>
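[Editor's note] Since every failing test reports "cluster-user != root or do not run as root", the suite is refusing to run under root rather than hitting a real bug. A sketch of the two ways around it; the echoed command is illustrative and the mvn invocations themselves are not run here (`-DskipTests` is Maven's standard flag for compiling tests without executing them):

```shell
# The Whirr unit tests refuse to run when the build user is root, so either
# build as an unprivileged user or skip the test phase entirely.
if [ "$(id -u)" -eq 0 ]; then
  build_cmd="mvn install -DskipTests"   # root: compile but skip the root-hostile tests
else
  build_cmd="mvn install"               # regular user: run the full suite
fi
echo "suggested build command: $build_cmd"
```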
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>
>To: a b <au...@yahoo.com>
>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>Sent: Monday, August 5, 2013 4:48 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>
>
>A.
>
>
>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]^M
>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]^M
>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]^M
>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]^M
>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]^M
>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]^M
>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]^M
>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]^M
>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]^M
>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]^M
>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]^M
>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]^M
>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]^M
>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]^M
>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]^M
>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]^M
>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]^M
>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]^M
>>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>>Reading package lists...^M
>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.^M
>>, error=^M
>>gzip: stdin: not in gzip format^M
>>tar: Child returned status 1^M
>>tar: Error is not recoverable: exiting now^M
>>mv: cannot stat `jdk1*': No such file or directory^M
>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>>
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>Sent: Monday, August 5, 2013 4:22 PM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>
>>ok - i'm not sure what you are asking.
>>>
>>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>
>>>
>>>
>>>where these are the properties i think i changed or added from the original recipe:
>>>
>>>
>>>whirr.cluster-name=hadoop-ec2
>>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>whirr.hardware-id=t1.micro
>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>whirr.hadoop.version=1.2.1
>>>whirr.provider=aws-ec2
>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>whirr.location-id=us-east-1
>>>whirr.java.install-function=install_oracle_jdk7
>>>
>>>
>>>
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
the 2nd instance has everything:
ab@ubuntu12-64:~$ ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
Warning: Permanently added '54.227.189.132' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-48-virtual x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Tue Aug 6 18:15:37 UTC 2013
System load: 0.05 Processes: 60
Usage of /: 17.3% of 7.87GB Users logged in: 0
Memory usage: 54% IP address for eth0: 10.10.141.106
Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Use Juju to deploy your cloud instances and workloads:
https://juju.ubuntu.com/#cloud-precise
Last login: Tue Aug 6 17:53:26 2013 from 50.0.160.200
ab@ip-10-10-141-106:~$ which java
/usr/lib/jvm/java-1.6.0-openjdk-amd64/bin/java
ab@ip-10-10-141-106:~$ which hadoop
/usr/local/hadoop-1.2.1/bin/hadoop
ab@ip-10-10-141-106:~$ ps -ef | grep hadoop
hadoop 8748 1 0 17:53 ? 00:00:11 /usr/lib/jvm/java-1.6.0-openjdk-amd64/bin/java -Dproc_namenode ......... org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop 9252 1 0 17:53 ? 00:00:10 /usr/lib/jvm/java-1.6.0-openjdk-amd64/bin/java -Dproc_jobtracker ....... org.apache.hadoop.mapred.JobTracker
ab 10973 10745 0 18:18 pts/0 00:00:00 grep hadoop
ab@ip-10-10-141-106:~$
should i send you the whirr.log file?
________________________________
From: a b <au...@yahoo.com>
To: Andrew Bayer <an...@gmail.com>; "user@whirr.apache.org" <us...@whirr.apache.org>
Sent: Tuesday, August 6, 2013 11:04 AM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
i can't find java or hadoop - i'll look at the whirr.log for more info:
You can log into instances using the following ssh commands:
[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
ab@ubuntu12-64:~$ ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
Warning: Permanently added '54.224.175.65' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.2.0-48-virtual x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Tue Aug 6 17:59:19 UTC 2013
System load: 0.08 Processes: 57
Usage of /: 10.2% of 7.87GB Users logged in: 0
Memory usage: 26% IP address for eth0: 10.179.37.185
Swap usage: 0%
Graph this data and manage this system at https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Use Juju to deploy your cloud instances and workloads:
https://juju.ubuntu.com/#cloud-precise
24 packages can be updated.
24 updates are security updates.
Last login: Tue Aug 6 17:53:09 2013 from 50.0.160.200
ab@ip-10-179-37-185:~$ which java
ab@ip-10-179-37-185:~$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 17:49 ? 00:00:00 /sbin/init
root 2 0 0 17:49 ? 00:00:00 [kthreadd]
root 3 2 0 17:49 ? 00:00:00 [ksoftirqd/0]
root 4 2 0 17:49 ? 00:00:00 [kworker/0:0]
root 5 2 0 17:49 ? 00:00:00 [kworker/u:0]
root 6 2 0 17:49 ? 00:00:00 [migration/0]
root 7 2 0 17:49 ? 00:00:00 [watchdog/0]
root 8 2 0 17:49 ? 00:00:00 [cpuset]
root 9 2 0 17:49 ? 00:00:00 [khelper]
root 10 2 0 17:49 ? 00:00:00 [kdevtmpfs]
root 11 2 0 17:49 ? 00:00:00 [netns]
root 12 2 0 17:49 ? 00:00:00 [xenwatch]
root 13 2 0 17:49 ? 00:00:01 [xenbus]
root 14 2 0 17:49 ? 00:00:00 [sync_supers]
root 15 2 0 17:49 ? 00:00:00 [bdi-default]
root 16 2 0 17:49 ? 00:00:00 [kintegrityd]
root 17 2 0 17:49 ? 00:00:00 [kblockd]
root 18 2 0 17:49 ? 00:00:00 [ata_sff]
root 19 2 0 17:49 ? 00:00:00 [khubd]
root 20 2 0 17:49 ? 00:00:00 [md]
root 21 2 0 17:49 ? 00:00:00 [kworker/0:1]
root 23 2 0 17:49 ? 00:00:00 [kworker/u:1]
root 24 2 0 17:49 ? 00:00:00 [khungtaskd]
root 25 2 0 17:49 ? 00:00:00 [kswapd0]
root 26 2 0 17:49 ? 00:00:00 [ksmd]
root 27 2 0 17:49 ? 00:00:00 [fsnotify_mark]
root 28 2 0 17:49 ? 00:00:00 [ecryptfs-kthrea]
root 29 2 0 17:49 ? 00:00:00 [crypto]
root 37 2 0 17:49 ? 00:00:00 [kthrotld]
root 38 2 0 17:49 ? 00:00:00 [khvcd]
root 57 2 0 17:49 ? 00:00:00 [devfreq_wq]
root 158 2 0 17:49 ? 00:00:00 [jbd2/xvda1-8]
root 159 2 0 17:49 ? 00:00:00 [ext4-dio-unwrit]
root 270 1 0 17:49 ? 00:00:00 upstart-udev-bridge --daemon
root 272 1 0 17:49 ? 00:00:00 /sbin/udevd --daemon
root 319 272 0 17:49 ? 00:00:00 /sbin/udevd --daemon
root 320 272 0 17:49 ? 00:00:00 /sbin/udevd --daemon
root 413 1 0 17:49 ? 00:00:00 upstart-socket-bridge --daemon
root 468 1 0 17:49 ? 00:00:00 dhclient3 -e IF_METRIC=100 -pf /var/run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases -1
root 615 1 0 17:49 ? 00:00:00 /usr/sbin/sshd -D
syslog 631 1 0 17:49 ? 00:00:00 rsyslogd -c5
102 651 1 0 17:49 ? 00:00:00 dbus-daemon --system --fork --activation=upstart
root 700 1 0 17:49 tty4 00:00:00 /sbin/getty -8 38400 tty4
root 707 1 0 17:49 tty5 00:00:00 /sbin/getty -8 38400 tty5
root 713 1 0 17:49 tty2 00:00:00 /sbin/getty -8 38400 tty2
root 714 1 0 17:49 tty3 00:00:00 /sbin/getty -8 38400 tty3
root 717 1 0 17:49 tty6 00:00:00 /sbin/getty -8 38400 tty6
daemon 730 1 0 17:49 ? 00:00:00 atd
root 731 1 0 17:49 ? 00:00:00 cron
root 732 1 0 17:49 ? 00:00:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
root 764 1 0 17:49 tty1 00:00:00 /sbin/getty -8 38400 tty1
whoopsie 768 1 0 17:49 ? 00:00:00 whoopsie
root 1523 615 0 17:59 ? 00:00:00 sshd: ab [priv]
ab 1620 1523 0 17:59 ? 00:00:00 sshd: ab@pts/0
ab 1621 1620 1 17:59 pts/0 00:00:00 -bash
root 1670 2 0 17:59 ? 00:00:00 [flush-202:1]
ab 1672 1621 0 17:59 pts/0 00:00:00 ps -ef
ab@ip-10-179-37-185:~$ which hadoop
ab@ip-10-179-37-185:~$
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 10:35 AM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Do you have the whirr.log from that attempt?
A.
On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>
>
>
> 1530 git clone git://git.apache.org/whirr.git
> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1532 cd whirr/
> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1534 mvn install
> 1535 cd
> 1536 rm whirr.log
> 1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
> 1538 export MAVEN_OPTS=-Xmx200m
> 1539 cd ~/git/whirr/
> 1540 mvn install
> 1541 export MAVEN_OPTS=-Xmx1G
> 1542 mvn install
> 1543 cd
> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
> 1545 history
>
>
>
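[Editor's note] The history above explains the memory issue: the first mvn install (step 1540) ran under the earlier MAVEN_OPTS=-Xmx200m from step 1538, which starves the Whirr reactor build; the re-export at step 1541 is the fix. As a one-line sketch:

```shell
# 200m starved the earlier build; give the Maven JVM a 1 GB heap instead,
# then re-run mvn install in the same shell.
export MAVEN_OPTS=-Xmx1G
echo "MAVEN_OPTS=$MAVEN_OPTS"   # prints "MAVEN_OPTS=-Xmx1G"
```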
>this is the console at the end of "mvn install"
>
>
>[INFO] ------------------------------------------------------------------------
>[INFO] Reactor Summary:
>[INFO]
>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>[INFO] Whirr ............................................. SUCCESS [4.662s]
>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
>[INFO] ------------------------------------------------------------------------
>[INFO] BUILD SUCCESS
>[INFO] ------------------------------------------------------------------------
>[INFO] Total time: 3:43.355s
>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>[INFO] Final Memory: 109M/262M
>[INFO] ------------------------------------------------------------------------
>ab@ubuntu12-64:~/git/whirr$ cd
>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
> at org.apache.whirr.cli.Main.run(Main.java:69)
> at org.apache.whirr.cli.Main.main(Main.java:102)
>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
> at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
> ... 4 more
>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
> ... 7 more
>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
>
>
>
>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>
>
>i need some more coaching.
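[Editor's note] Since "0 successfully started instances while 0 instances failed" says little on its own, the per-node detail lives in whirr.log in the directory the launch ran from (as the `rm whirr.log` at step 1536 suggests). A sketch of pulling the last errors out; the log content below is a stand-in so the example is self-contained:

```shell
# Stand-in whirr.log; a real launch writes this file in the working
# directory of the whirr command.
printf 'INFO bootstrapping node\nERROR node failed: script exited non-zero\n' > whirr.log

# The root cause is usually near the last ERROR entries.
grep -n 'ERROR' whirr.log | tail -5   # prints "2:ERROR node failed: script exited non-zero"
```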
root 700 1 0 17:49 tty4 00:00:00 /sbin/getty -8 38400 tty4
root 707 1 0 17:49 tty5 00:00:00 /sbin/getty -8 38400 tty5
root 713 1 0 17:49 tty2 00:00:00 /sbin/getty -8 38400 tty2
root 714 1 0 17:49 tty3 00:00:00 /sbin/getty -8 38400 tty3
root 717 1 0 17:49 tty6 00:00:00 /sbin/getty -8 38400 tty6
daemon 730 1 0 17:49 ? 00:00:00 atd
root 731 1 0 17:49 ? 00:00:00 cron
root 732 1 0 17:49 ? 00:00:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket
root 764 1 0 17:49 tty1 00:00:00 /sbin/getty -8 38400 tty1
whoopsie 768 1 0 17:49 ? 00:00:00 whoopsie
root 1523 615 0 17:59 ? 00:00:00 sshd: ab [priv]
ab 1620 1523 0 17:59 ? 00:00:00 sshd: ab@pts/0
ab 1621 1620 1 17:59 pts/0 00:00:00 -bash
root 1670 2 0 17:59 ? 00:00:00 [flush-202:1]
ab 1672 1621 0 17:59 pts/0 00:00:00 ps -ef
ab@ip-10-179-37-185:~$ which hadoop
ab@ip-10-179-37-185:~$
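Since `which java` and `which hadoop` both came back empty above, a quick sanity script can confirm in one pass what the bootstrap phase actually put on PATH (a generic sketch; the exact command list depends on your recipe):

```shell
#!/bin/sh
# check_cmds: report which of the given commands are on PATH.
# Returns non-zero if any are missing, so it can gate further debugging.
check_cmds() {
    missing=0
    for cmd in "$@"; do
        if path=$(command -v "$cmd" 2>/dev/null); then
            echo "found: $cmd -> $path"
        else
            echo "MISSING: $cmd"
            missing=1
        fi
    done
    return "$missing"
}

# On a freshly bootstrapped Hadoop node you would run, e.g.:
# check_cmds java hadoop
```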
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 10:35 AM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Do you have the whirr.log from that attempt?
A.
On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>
>
>
> 1530 git clone git://git.apache.org/whirr.git
> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1532 cd whirr/
> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1534 mvn install
> 1535 cd
> 1536 rm whirr.log
> 1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
> 1538 export MAVEN_OPTS=-Xmx200m
> 1539 cd ~/git/whirr/
> 1540 mvn install
> 1541 export MAVEN_OPTS=-Xmx1G
> 1542 mvn install
> 1543 cd
> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
> 1545 history
>
>
>
>this is the console at the end of "mvn install"
>
>
>[INFO] ------------------------------------------------------------------------
>[INFO] Reactor Summary:
>[INFO]
>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>[INFO] Whirr ............................................. SUCCESS [4.662s]
>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
>[INFO] ------------------------------------------------------------------------
>[INFO] BUILD SUCCESS
>[INFO] ------------------------------------------------------------------------
>[INFO] Total time: 3:43.355s
>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>[INFO] Final Memory: 109M/262M
>[INFO] ------------------------------------------------------------------------
>ab@ubuntu12-64:~/git/whirr$ cd
>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
> at org.apache.whirr.cli.Main.run(Main.java:69)
> at org.apache.whirr.cli.Main.main(Main.java:102)
>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
> at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
> ... 4 more
>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
> ... 7 more
>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
>
>
>
>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>
>
>i need some more coaching.
>
>
>
>
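The shell history above shows the build first failing with a 200 MB heap and then succeeding once MAVEN_OPTS was raised; setting it explicitly before `mvn install` avoids the silent out-of-memory failure (the value is the one that worked in this transcript, not a general recommendation):

```shell
#!/bin/sh
# Maven reads its JVM flags from MAVEN_OPTS; too small a heap made the
# earlier `mvn install` fail. 1 GB is what worked in this thread.
export MAVEN_OPTS="-Xmx1g"
echo "building with MAVEN_OPTS=$MAVEN_OPTS"
# then: cd ~/git/whirr && mvn install
```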
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: user@whirr.apache.org; a b <au...@yahoo.com>
>Sent: Monday, August 5, 2013 5:54 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Try not building as root - that can throw things off.
>
>
>A.
>
>
>On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>
>as for the java 7 problem - i found this suggestion:
>>
>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>
>>
>>i tried to download whirr - as suggested here:
>>
>>
>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>
>>
>>o i did: git clone ...
>>
>>o i modified: core/src/main/resources/functions/...
>>o i did: mvn eclipse:eclipse ...
>>o i skipped: eclipse import
>>o i ran: mvn install
>>
>>
>>it fails in the "mvn install" during test
>>
>>
>>Tests in error:
>> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>[..]
>> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>>
>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>
>>[INFO] ------------------------------------------------------------------------
>>[INFO] Reactor Summary:
>>[INFO]
>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>
>>
>>
>>is there a better way to try this suggestion?
>>
>>
>>
>>
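The repeated `cluster-user != root or do not run as root` errors mean the test suite refuses to run under uid 0, which matches the advice elsewhere in this thread about not building as root; a small guard before the build makes the failure mode obvious (a sketch only; `mvn install -DskipTests` is the standard Maven escape hatch if you must stay root):

```shell
#!/bin/sh
# Whirr's unit tests abort with "cluster-user != root or do not run as
# root" when the build runs as uid 0, so check before invoking Maven.
build_user_ok() {
    # $1: numeric uid of the user about to build
    [ "$1" -ne 0 ]
}

if build_user_ok "$(id -u)"; then
    echo "ok to run: mvn install"
else
    echo "switch to an unprivileged user (or, as a last resort: mvn install -DskipTests)"
fi
```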
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>
>>To: a b <au...@yahoo.com>
>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>Sent: Monday, August 5, 2013 4:48 PM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>>
>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]^M
>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]^M
>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]^M
>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]^M
>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]^M
>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]^M
>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]^M
>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]^M
>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]^M
>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]^M
>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]^M
>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]^M
>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]^M
>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]^M
>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]^M
>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]^M
>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]^M
>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]^M
>>>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>>>Reading package lists...^M
>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.^M
>>>, error=^M
>>>gzip: stdin: not in gzip format^M
>>>tar: Child returned status 1^M
>>>tar: Error is not recoverable: exiting now^M
>>>mv: cannot stat `jdk1*': No such file or directory^M
>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>>>
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <an...@gmail.com>
>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>Sent: Monday, August 5, 2013 4:22 PM
>>>
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>
>>>
>>>
>>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
>>>
>>>
>>>A.
>>>
>>>
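Andrew's point that the missing .md5 is non-blocking can be illustrated with a rough sketch of that kind of optional-checksum step (this is illustrative shell, not Whirr's actual install function):

```shell
#!/bin/sh
# Sketch of a non-blocking checksum step: verify a tarball against a
# .md5 file if one is present, but only warn (not fail) if it is absent.
# Mirrors the behavior described above, not Whirr's real script.
verify_optional_md5() {
    tarball=$1
    md5file="$tarball.md5"
    if [ ! -f "$md5file" ]; then
        echo "Could not find $md5file. Continuing." >&2
        return 0
    fi
    computed=$(md5sum "$tarball" | awk '{print $1}')
    expected=$(awk '{print $1}' "$md5file")
    [ "$computed" = "$expected" ]
}
```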
>>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>>
>>>ok - i'm not sure what you are asking.
>>>>
>>>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>
>>>>
>>>>
>>>>where these are the properties i think i changed or added from the original recipe:
>>>>
>>>>
>>>>whirr.cluster-name=hadoop-ec2
>>>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>>whirr.hardware-id=t1.micro
>>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>>whirr.hadoop.version=1.2.1
>>>>whirr.provider=aws-ec2
>>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>>whirr.location-id=us-east-1
>>>>whirr.java.install-function=install_oracle_jdk7
>>>>
>>>>
>>>>
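Given the JDK7 404 discussed above, the smallest change to the recipe just shown is to swap the install function (the `install_openjdk` name is assumed from the function scripts under core/src/main/resources/functions/ in a Whirr checkout — verify it exists in yours):

```properties
# Replace the failing Oracle JDK download with the OpenJDK installer.
# (Function name assumed from Whirr's bundled function scripts.)
whirr.java.install-function=install_openjdk
```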
>>>>________________________________
>>>> From: Andrew Bayer <an...@gmail.com>
>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>Sent: Monday, August 5, 2013 4:11 PM
>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>
>>>>
>>>>
>>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>>
>>>>
>>>>A.
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>>>
>>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>>
>>>>>A.
>>>>>
>>>>>
>>>>>
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
i ran on c1.xlarge, i'll send the properties and log directly to you
david@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXXX
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
at org.apache.whirr.cli.Main.run(Main.java:69)
at org.apache.whirr.cli.Main.main(Main.java:102)
Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
... 4 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
at java.util.concurrent.FutureTask.get(FutureTask.java:111)
at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
... 7 more
Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
^Cdavid@ubuntu12-64:~$
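Since every reply in this thread starts with "do you have the whirr.log", a tiny helper that pulls the likely-relevant lines out of it can shortcut the back-and-forth (the keyword list is a guess at what matters, not a Whirr log format spec):

```shell
#!/bin/sh
# summarize_whirr_log: print the first lines of a whirr.log that look
# like errors. Keyword list is heuristic, not tied to Whirr's format.
summarize_whirr_log() {
    logfile=$1
    [ -f "$logfile" ] || { echo "no such log: $logfile" >&2; return 1; }
    grep -inE 'error|exception|fail' "$logfile" | head -n 40
}

# Usage after a failed launch:
# summarize_whirr_log ~/whirr.log
```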
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 12:11 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
On the no private key issue - that's odd. Looks like a race condition of some sort, but I'm not sure how it's happening. That does look like a bug of some sort in jclouds, I think. I'll dig into it when I get a chance. But it doesn't seem to be actually blocking anything - the instances are still getting created, and the initial login to the instances is working fine.
The really weird thing is that one instance is behaving differently than the other - apt-get update is getting repo URLs at archive.ubuntu.com on the bad one, and us-east-1.ec2.archive.ubuntu.com on the good one. That's just strange. Mind trying with a larger size than t1.micro, just in case that's being weird for some reason?
A.
On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
can you help me move forward?
>
>o i can't use the 0.8 version - it doesn't install a headless version on a server.
>o i can't modify the current version - unmodified, it gets a: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: no private key configured
>
>
>
>o maybe with some coaching - i'm a git beginner, i can check out a 0.8 version and modify that? or is there a better path?
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: a b <au...@yahoo.com>
>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>Sent: Tuesday, August 6, 2013 11:00 AM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Random weird luck is always real. =) If you'd like to open a JIRA for the Oracle JDK download issues, that'd be appreciated - I'll see what I can do with it.
>
>
>A.
>
>
>On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>
>so now i feel like an idiot, it is running now:
>>
>>ab@ubuntu12-64:~$ rm whirr.log
>>ab@ubuntu12-64:~$ !1544
>>
>>~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
>>Started cluster of 2 instances
>>Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f, name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04,
description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185], publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode, hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b, nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2,
imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106], publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>>
>>
>>You can log into instances using the following ssh commands:
>>[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
>>[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
>>
>>To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
>>ab@ubuntu12-64:~$
>>
>>
>>as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
>>
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>
>>Sent: Tuesday, August 6, 2013 10:35 AM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Do you have the whirr.log from that attempt?
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>>
>>i built whirr in my own directory - i didn't change it (yet) - i just check it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>>>
>>>
>>>
>>> 1530 git clone git://git.apache.org/whirr.git
>>> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>> 1532 cd whirr/
>>> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>> 1534 mvn install
>>> 1535 cd
>>> 1536 rm whirr.log
>>> 1537 ~/git/whirr/bin/whirr launch-cluster
--config ~/whirr/recipes/hadoop.properties
>>> 1538 export MAVEN_OPTS=-Xmx200m
>>> 1539 cd ~/git/whirr/
>>> 1540 mvn install
>>> 1541 export MAVEN_OPTS=-Xmx1G
>>> 1542 mvn install
>>> 1543 cd
>>> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>> 1545 history
>>>
>>>
>>>
>>>this is the console at the end of "mvn install"
>>>
>>>
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] Reactor Summary:
>>>[INFO]
>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>>>[INFO] Whirr ............................................. SUCCESS [4.662s]
>>>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>>>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>>>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>>>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>>>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>>>[INFO] Apache Whirr HBase
................................ SUCCESS [5.196s]
>>>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>>>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>>>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>>>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>>>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>>>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>>>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>>>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>>>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>>>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>>>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>>>[INFO] Apache Whirr Examples
............................. SUCCESS [4.494s]
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] BUILD SUCCESS
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] Total time: 3:43.355s
>>>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>>>[INFO] Final Memory: 109M/262M
>>>[INFO] ------------------------------------------------------------------------
>>>ab@ubuntu12-64:~/git/whirr$ cd
>>>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>>>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at
org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>>> at org.apache.whirr.cli.Main.run(Main.java:69)
>>> at org.apache.whirr.cli.Main.main(Main.java:102)
>>>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>>> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>>> at
org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>>> ... 4 more
>>>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>>> ... 7 more
>>>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at
org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:724)
>>>
>>>
>>>
>>>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>>>
>>>
>>>i need some more coaching.
>>>
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <an...@gmail.com>
>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>Sent: Monday, August 5, 2013 5:54 PM
>>>
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>
>>>
>>>
>>>Try not building as root - that can throw things off.
>>>
>>>
>>>A.
>>>
>>>
>>>On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>>>
>>>as for the java 7 problem - i found this suggestion:
>>>>
>>>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>>>
>>>>
>>>>i tried to download whirr - as suggested here:
>>>>
>>>>
>>>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>>>
>>>>
>>>>o i did: git clone ...
>>>>
>>>>o i modified: core/src/main/resources/functions/...
>>>>o i did: mvn eclipse:eclipse ...
>>>>o i skipped: eclipse import
>>>>o i ran: mvn install
>>>>
>>>>
>>>>it fails in the "mvn install" during test
>>>>
>>>>
>>>>Tests in error:
>>>>
testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>[..]
>>>> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>>>>
>>>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>>>
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Reactor Summary:
>>>>[INFO]
>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>>>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>>>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>>>
>>>>
>>>>
>>>>is there a better way to try this suggestion?
>>>>
>>>>
>>>>
>>>>
>>>>________________________________
>>>> From: Andrew Bayer <an...@gmail.com>
>>>>
>>>>To: a b <au...@yahoo.com>
>>>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>>>Sent: Monday, August 5, 2013 4:48 PM
>>>>
>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>
>>>>
>>>>
>>>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>>>>
>>>>
>>>>A.
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>>>>
>>>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]^M
>>>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]^M
>>>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]^M
>>>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]^M
>>>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]^M
>>>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]^M
>>>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]^M
>>>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]^M
>>>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]^M
>>>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]^M
>>>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]^M
>>>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]^M
>>>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]^M
>>>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]^M
>>>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]^M
>>>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]^M
>>>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]^M
>>>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]^M
>>>>>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>>>>>Reading package lists...^M
>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.^M
>>>>>, error=^M
>>>>>gzip: stdin: not in gzip format^M
>>>>>tar: Child returned status 1^M
>>>>>tar: Error is not recoverable: exiting now^M
>>>>>mv: cannot stat `jdk1*': No such file or directory^M
>>>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>________________________________
>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>Sent: Monday, August 5, 2013 4:22 PM
>>>>>
>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>
>>>>>
>>>>>
>>>>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
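[Editorial sketch: the non-fatal checksum behavior described above, in shell. This is a simplified illustration, not Whirr's actual install function; the function name and demo paths are made up.]

```shell
#!/bin/sh
# Verify a tarball against FILE.md5 only when the checksum file exists;
# a missing checksum file prints a warning and lets the install continue.
verify_optional_md5() {
  tarball="$1"
  if [ -f "$tarball.md5" ]; then
    md5sum -c "$tarball.md5"            # a real mismatch still fails
  else
    echo "Could not find $tarball.md5. Continuing." >&2
    return 0                            # non-blocking, as Whirr behaves
  fi
}

# demo: no .md5 alongside the tarball, so the install proceeds anyway
dir=$(mktemp -d)
printf 'data\n' > "$dir/hadoop-1.2.1.tar.gz"
verify_optional_md5 "$dir/hadoop-1.2.1.tar.gz" && echo "proceeding with install"
```

In other words, the missing-.mds-vs-.md5 warning alone should not stop the bootstrap; the real failure is whatever comes after it in the log.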
>>>>>
>>>>>
>>>>>A.
>>>>>
>>>>>
>>>>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>>>>
>>>>>ok - i'm not sure what you are asking.
>>>>>>
>>>>>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>>>
>>>>>>
>>>>>>
>>>>>>where these are the properties i think i changed or added from the original recipe:
>>>>>>
>>>>>>
>>>>>>whirr.cluster-name=hadoop-ec2
>>>>>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>>>>whirr.hardware-id=t1.micro
>>>>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>>>>whirr.hadoop.version=1.2.1
>>>>>>whirr.provider=aws-ec2
>>>>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>>>>whirr.location-id=us-east-1
>>>>>>whirr.java.install-function=install_oracle_jdk7
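[Editorial note: given the Oracle JDK download failure discussed in this thread, the install-function line above is the one property to swap. A minimal change, using the stock install_openjdk function mentioned later in the thread:]

```
# switch from the failing Oracle JDK download to the distro OpenJDK
#whirr.java.install-function=install_oracle_jdk7
whirr.java.install-function=install_openjdk
```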
>>>>>>
>>>>>>
>>>>>>
>>>>>>________________________________
>>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>>Sent: Monday, August 5, 2013 4:11 PM
>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>
>>>>>>
>>>>>>
>>>>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>>>>
>>>>>>
>>>>>>A.
>>>>>>
>>>>>>
>>>>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>>>>>
>>>>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>>>>
>>>>>>>A.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>>>>>>>
>>>>>>>i get a whirr error:
>>>>>>>>
>>>>>>>>
>>>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>>>>>>>>
>>>>>>>>
>>>>>>>>when i browse over to: http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
>>>>>>>>i can see the file is named: hadoop-1.2.1.tar.gz.mds
>>>>>>>>
>>>>>>>>
>>>>>>>>how do i tell whirr to use a different suffix?
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
no problem, i can do that - give me a little while and i'll get back.
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 12:11 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
On the no private key issue - that's odd. Looks like a race condition of some sort, but I'm not sure how it's happening. That does look like a bug of some sort in jclouds, I think. I'll dig into it when I get a chance. But it doesn't seem to be actually blocking anything - the instances are still getting created, and the initial login to the instances is working fine.
The really weird thing is that one instance is behaving differently than the other - apt-get update is getting repo URLs at archive.ubuntu.com on the bad one, and us-east-1.ec2.archive.ubuntu.com on the good one. That's just strange. Mind trying with a larger size than t1.micro, just in case that's being weird for some reason?
A.
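[Editorial note: Andrew's t1.micro suggestion maps to a single property in the recipe; m1.small here is just an illustrative larger type, not a recommendation from the thread:]

```
# try a larger instance type than t1.micro
whirr.hardware-id=m1.small
```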
On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
can you help me move forward?
>
>o i can't use the 0.8 version - it doesn't install a headless version on a server.
>o i can't modify the current version - unmodified, it gets a: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: no private key configured
>
>
>
>o maybe with some coaching - i'm a git beginner, i can check out a 0.8 version and modify that? or is there a better path?
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: a b <au...@yahoo.com>
>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>Sent: Tuesday, August 6, 2013 11:00 AM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Random weird luck is always real. =) If you'd like to open a JIRA for the Oracle JDK download issues, that'd be appreciated - I'll see what I can do with it.
>
>
>A.
>
>
>On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>
>so now i feel like an idiot, it is running now:
>>
>>ab@ubuntu12-64:~$ rm whirr.log
>>ab@ubuntu12-64:~$ !1544
>>
>>~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
>>Started cluster of 2 instances
>>Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f, name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185], publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode, hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b, nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106], publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>>
>>
>>You can log into instances using the following ssh commands:
>>[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
>>[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
>>
>>To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
>>ab@ubuntu12-64:~$
>>
>>
>>as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
>>
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>
>>Sent: Tuesday, August 6, 2013 10:35 AM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Do you have the whirr.log from that attempt?
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>>
>>i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>>>
>>>
>>>
>>> 1530 git clone git://git.apache.org/whirr.git
>>> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>> 1532 cd whirr/
>>> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>> 1534 mvn install
>>> 1535 cd
>>> 1536 rm whirr.log
>>> 1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>> 1538 export MAVEN_OPTS=-Xmx200m
>>> 1539 cd ~/git/whirr/
>>> 1540 mvn install
>>> 1541 export MAVEN_OPTS=-Xmx1G
>>> 1542 mvn install
>>> 1543 cd
>>> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>> 1545 history
>>>
>>>
>>>
>>>this is the console at the end of "mvn install"
>>>
>>>
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] Reactor Summary:
>>>[INFO]
>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>>>[INFO] Whirr ............................................. SUCCESS [4.662s]
>>>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>>>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>>>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>>>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>>>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>>>[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
>>>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>>>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>>>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>>>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>>>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>>>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>>>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>>>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>>>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>>>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>>>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>>>[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] BUILD SUCCESS
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] Total time: 3:43.355s
>>>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>>>[INFO] Final Memory: 109M/262M
>>>[INFO] ------------------------------------------------------------------------
>>>ab@ubuntu12-64:~/git/whirr$ cd
>>>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>>>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>>> at org.apache.whirr.cli.Main.run(Main.java:69)
>>> at org.apache.whirr.cli.Main.main(Main.java:102)
>>>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>>> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>>> at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>>> ... 4 more
>>>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>>> ... 7 more
>>>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:724)
>>>
>>>
>>>
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
i found some suggestions for private ec2 amis and added the following lines to my hadoop properties file:
whirr.bootstrap-user=ubuntu
whirr.private-key-file=/home/ab/whirr/ab-ubuntu-12-64.pem
whirr.public-key-file=/home/ab/whirr/ab-ubuntu-12-64.pem.pub
this booted and started running if i used the openjdk. i then tried to switch to the oracle_jdk7 which is already installed on the private ami:
whirr.java.install-function=install_oracle_jdk7
#whirr.java.install-function=install_openjdk
thinking the script would notice that it was already installed and skip it - but hadoop did not start on either node. i think i'll comment out the java install line altogether and see what happens.
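[Editorial sketch of that properties change; this assumes Whirr simply skips the Java install step when no install function is set, which the thread does not confirm:]

```
# java 7 is already baked into the private ami - skip the install step
#whirr.java.install-function=install_oracle_jdk7
#whirr.java.install-function=install_openjdk
```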
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 4:22 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
That suggests you need to tweak the SSH settings on the image.
I'll try to make a run at testing your config tomorrow.
A.
On Tue, Aug 6, 2013 at 4:20 PM, a b <au...@yahoo.com> wrote:
so i had an inspiration - i booted an ubuntu server, installed oracle java 7 by hand, and then saved it as a new private ec2 ami. i put the id for the ami in the properties file and tried to launch - for both the 0.8.2 version and the current version i got a packet size error - and both hung in some kind of a slow loop, repeating 7 tries to get the right packet size. should launching from my own ami work?
>
>
>
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: user@whirr.apache.org; a b <au...@yahoo.com>
>Sent: Tuesday, August 6, 2013 12:11 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>On the no private key issue - that's odd. Looks like a race condition of some sort, but I'm not sure how it's happening. That does look like a bug of some sort in jclouds, I think. I'll dig into it when I get a chance. But it doesn't seem to be actually blocking anything - the instances are still getting created, and the initial login to the instances is working fine.
>
>
>The really weird thing is that one instance is behaving differently than the other - apt-get update is getting repo URLS at archive.ubuntu.com on the bad one, and us-east-1.ec2.archive.ubuntu.com. That's just strange. Mind trying with a larger size than t1.micro, just in case that's being weird for some reason?
>
>
>A.
>
>
>
>On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
>
>can you help me move forward?
>>
>>o i can't use the 0.8 version - it doesn't install a headless version on a server.
>>o i can't modify the current version - unmodified, it gets a: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: no private key configured
>>
>>
>>
>>o maybe with some coaching - i'm a git beginner, i can check out a 0.8 version and modify that? or is there a better path?
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: a b <au...@yahoo.com>
>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>Sent: Tuesday, August 6, 2013 11:00 AM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Random weird luck is always real. =) If you'd like to open a JIRA for the Oracle JDK download issues, that'd be appreciated - I'll see what I can do with it.
>>
>>
>>A.
>>
>>
>>On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>>
>>so now i feel like an idiot, it is running now:
>>>
>>>ab@ubuntu12-64:~$ rm whirr.log
>>>ab@ubuntu12-64:~$ !1544
>>>
>>>~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
>>>Started cluster of 2 instances
>>>Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f, name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04,
description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185], publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode, hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b, nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2,
imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106], publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>>>
>>>
>>>You can log into instances using the following ssh commands:
>>>[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
>>>[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
>>>
>>>To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
>>>ab@ubuntu12-64:~$
>>>
>>>
>>>as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
>>>
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <an...@gmail.com>
>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>
>>>Sent: Tuesday, August 6, 2013 10:35 AM
>>>
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>
>>>
>>>
>>>Do you have the whirr.log from that attempt?
>>>
>>>
>>>A.
>>>
>>>
>>>On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>>>
>>>i built whirr in my own directory - i didn't change it (yet) - i just check it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>>>>
>>>>
>>>>
>>>> 1530 git clone git://git.apache.org/whirr.git
>>>> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>>> 1532 cd whirr/
>>>> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>>> 1534 mvn install
>>>> 1535 cd
>>>> 1536 rm whirr.log
>>>> 1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>> 1538 export MAVEN_OPTS=-Xmx200m
>>>> 1539 cd ~/git/whirr/
>>>> 1540 mvn install
>>>> 1541 export MAVEN_OPTS=-Xmx1G
>>>> 1542 mvn install
>>>> 1543 cd
>>>> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>> 1545 history
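The history above records the memory fix: the first `mvn install` ran with `MAVEN_OPTS=-Xmx200m` and only succeeded after the heap was raised. A minimal sketch of the working setting (the `mvn` step is left commented out, since it assumes a Whirr checkout at the path from this thread):

```shell
# Raise Maven's heap before rebuilding Whirr; 200m was too small here.
export MAVEN_OPTS=-Xmx1G
echo "$MAVEN_OPTS"    # prints "-Xmx1G"
# cd ~/git/whirr && mvn install
```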
>>>>
>>>>
>>>>
>>>>this is the console at the end of "mvn install"
>>>>
>>>>
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Reactor Summary:
>>>>[INFO]
>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>>>>[INFO] Whirr ............................................. SUCCESS [4.662s]
>>>>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>>>>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>>>>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>>>>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>>>>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>>>>[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
>>>>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>>>>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>>>>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>>>>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>>>>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>>>>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>>>>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>>>>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>>>>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>>>>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>>>>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>>>>[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] BUILD SUCCESS
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Total time: 3:43.355s
>>>>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>>>>[INFO] Final Memory: 109M/262M
>>>>[INFO] ------------------------------------------------------------------------
>>>>ab@ubuntu12-64:~/git/whirr$ cd
>>>>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>>>>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>>>> at org.apache.whirr.cli.Main.run(Main.java:69)
>>>> at org.apache.whirr.cli.Main.main(Main.java:102)
>>>>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>>>> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>>>> at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>>>> ... 4 more
>>>>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>>>> ... 7 more
>>>>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>>>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>> at java.lang.Thread.run(Thread.java:724)
>>>>
>>>>
>>>>
>>>>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>>>>
>>>>
>>>>i need some more coaching.
>>>>
>>>>
>>>>
>>>>
>>>>________________________________
>>>> From: Andrew Bayer <an...@gmail.com>
>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>Sent: Monday, August 5, 2013 5:54 PM
>>>>
>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>
>>>>
>>>>
>>>>Try not building as root - that can throw things off.
>>>>
>>>>
>>>>A.
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>>>>
>>>>as for the java 7 problem - i found this suggestion:
>>>>>
>>>>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>>>>
>>>>>
>>>>>i tried to download whirr - as suggested here:
>>>>>
>>>>>
>>>>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>>>>
>>>>>
>>>>>o i did: git clone ...
>>>>>
>>>>>o i modified: core/src/main/resources/functions/...
>>>>>o i did: mvn eclipse:eclipse ...
>>>>>o i skipped: eclipse import
>>>>>o i ran: mvn install
>>>>>
>>>>>
>>>>>it fails in the "mvn install" during test
>>>>>
>>>>>
>>>>>Tests in error:
>>>>> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>>[..]
>>>>> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>>>>>
>>>>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>>>>
>>>>>[INFO] ------------------------------------------------------------------------
>>>>>[INFO] Reactor Summary:
>>>>>[INFO]
>>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>>>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>>>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>>>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>>>>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>>>>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>>>>
>>>>>
>>>>>
>>>>>is there a better way to try this suggestion?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>________________________________
>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>
>>>>>To: a b <au...@yahoo.com>
>>>>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>>>>Sent: Monday, August 5, 2013 4:48 PM
>>>>>
>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>
>>>>>
>>>>>
>>>>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>>>>>
>>>>>
>>>>>A.
>>>>>
>>>>>
>>>>>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>>>>>
>>>>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]^M
>>>>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]^M
>>>>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]^M
>>>>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]^M
>>>>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]^M
>>>>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]^M
>>>>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]^M
>>>>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]^M
>>>>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]^M
>>>>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]^M
>>>>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]^M
>>>>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]^M
>>>>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]^M
>>>>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]^M
>>>>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]^M
>>>>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]^M
>>>>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]^M
>>>>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]^M
>>>>>>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>>>>>>Reading package lists...^M
>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.^M
>>>>>>, error=^M
>>>>>>gzip: stdin: not in gzip format^M
>>>>>>tar: Child returned status 1^M
>>>>>>tar: Error is not recoverable: exiting now^M
>>>>>>mv: cannot stat `jdk1*': No such file or directory^M
>>>>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>________________________________
>>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>>Sent: Monday, August 5, 2013 4:22 PM
>>>>>>
>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>
>>>>>>
>>>>>>
>>>>>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
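Since a missing checksum file is non-blocking, one option is to verify the tarball by hand with `md5sum`. A minimal sketch (the `echo` line creates a stand-in file purely for illustration; in practice you would point `md5sum` at the real downloaded hadoop-1.2.1.tar.gz):

```shell
# Create a stand-in tarball purely for illustration.
echo "hello" > hadoop-1.2.1.tar.gz
# Produce a checksum file in the format md5sum understands...
md5sum hadoop-1.2.1.tar.gz > hadoop-1.2.1.tar.gz.md5
# ...and verify the file against it. Prints "hadoop-1.2.1.tar.gz: OK".
md5sum -c hadoop-1.2.1.tar.gz.md5
```

Note that the mirror's `.mds` file is a multi-digest listing, not plain `md5sum` output, so it can't be fed to `md5sum -c` as-is; compare its MD5 line by eye instead.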
>>>>>>
>>>>>>
>>>>>>A.
>>>>>>
>>>>>>
>>>>>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>>>>>
>>>>>>ok - i'm not sure what you are asking.
>>>>>>>
>>>>>>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>where these are the properties i think i changed or added from the original recipe:
>>>>>>>
>>>>>>>
>>>>>>>whirr.cluster-name=hadoop-ec2
>>>>>>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>>>>>whirr.hardware-id=t1.micro
>>>>>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>>>>>whirr.hadoop.version=1.2.1
>>>>>>>whirr.provider=aws-ec2
>>>>>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>>>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>>>>>whirr.location-id=us-east-1
>>>>>>>whirr.java.install-function=install_oracle_jdk7
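Given the Oracle JDK 7 download trouble later in the thread, one variant worth trying is to point the install function at OpenJDK instead — a sketch, assuming the `install_openjdk` script that ships under `core/src/main/resources/functions/` in the Whirr source:

```properties
# Replace the Oracle line above with the OpenJDK install function.
# (install_openjdk is assumed to match a script under
# core/src/main/resources/functions/ in your Whirr checkout.)
whirr.java.install-function=install_openjdk
```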
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>________________________________
>>>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>>>Sent: Monday, August 5, 2013 4:11 PM
>>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>>>>>
>>>>>>>
>>>>>>>A.
>>>>>>>
>>>>>>>
>>>>>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>>>>>>
>>>>>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>>>>>
>>>>>>>>A.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>>>>>>>>
>>>>>>>>i get a whirr error:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>when i browse over to: http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
>>>>>>>>>i can see the file is named: hadoop-1.2.1.tar.gz.mds
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>how do i tell whirr to use a different suffix?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
For what it's worth, an easy way to get a Hadoop cluster running with all
the bells and whistles is whirr-cm (https://github.com/cloudera/whirr-cm),
which uses Cloudera Manager to deploy/configure a cluster with Hadoop,
HBase, etc. I've tested it on Ubuntu and CentOS pretty thoroughly.
A.
On Wed, Aug 7, 2013 at 3:32 PM, a b <au...@yahoo.com> wrote:
> i've come to a conclusion and i am moving on:
>
> o i discovered no way to run oracle-jdk7 with ubuntu
> - i couldn't modify the existing install file and then rebuild using
> the current version from git, the rebuild of the current git code failed
> during the launch
> - i couldn't preinstall oracle-jdk7 and then launch that ami, the
> running cluster did not have hadoop running, i'm not sure why - although i
> know the hadoop home directory is missing, it is also missing in a
> successful launch
>
> o i discovered only 1 way to run openjdk-6 with ubuntu
> - it will not run with a public ubuntu image, the launch fails during
> the openjdk install which is missing (at least) libxt from x11 which is
> required for headless java install
> - if openjdk-6 is preinstalled, then the launch will run, the openjdk-6
> install appears to notice openjdk is already there and skips a second
> install, and then launches hadoop successfully
>
> i am going to try to run with openjdk-6 and hadoop 1.2.1. thanks andrew
> for helping me and giving me the encouragement needed to get to some
> conclusion. now, i can move on and actually try to run hadoop.
>
>
> ------------------------------
> *From:* a b <au...@yahoo.com>
> *To:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Tuesday, August 6, 2013 4:46 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> ok, then - i'll give up for today - i'd tweak, but i've no idea what to
> do. thanks so much for helping me - sorry, i'm having such a struggle
> getting out of the nest.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Tuesday, August 6, 2013 4:22 PM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> That suggests you need to tweak the SSH settings on the image.
>
> I'll try to make a run at testing your config tomorrow.
>
> A.
>
> On Tue, Aug 6, 2013 at 4:20 PM, a b <au...@yahoo.com> wrote:
>
> so i had an inspiration - i booted an ubuntu server, installed oracle java
> 7 by hand, and then saved it as a new private ec2 ami. i put the id for ami
> in the properties file and tried to launch - for both the 0.8.2 version and
> the current version a got a packet size error - and both hung in some kind
> of a slow loop, repeating 7 tries to get the right packet size. should
> launching from my own ami work?
>
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Tuesday, August 6, 2013 12:11 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> On the no private key issue - that's odd. Looks like a race condition of
> some sort, but I'm not sure how it's happening. That does look like a bug
> of some sort in jclouds, I think. I'll dig into it when I get a chance. But
> it doesn't seem to be actually blocking anything - the instances are still
> getting created, and the initial login to the instances is working fine.
>
> The really weird thing is that one instance is behaving differently than
> the other - apt-get update is getting repo URLs from archive.ubuntu.com on
> the bad one, and from us-east-1.ec2.archive.ubuntu.com on the good one.
> That's just strange.
> Mind trying with a larger size than t1.micro, just in case that's being
> weird for some reason?
>
> A.
>
> On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
>
> can you help me move forward?
>
> o i can't use the 0.8 version - it doesn't install a headless version on a
> server.
> o i can't modify the current version - unmodified, it gets a:
> java.util.concurrent.ExecutionException:
> java.lang.IllegalArgumentException: no private key configured
>
> o maybe with some coaching - i'm a git beginner, i can check out a 0.8
> version and modify that? or is there a better path?
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* a b <au...@yahoo.com>
> *Cc:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Tuesday, August 6, 2013 11:00 AM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Random weird luck is always real. =) If you'd like to open a JIRA for the
> Oracle JDK download issues, that'd be appreciated - I'll see what I can do
> with it.
>
> A.
>
> On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>
> so now i feel like an idiot, it is running now:
>
> ab@ubuntu12-64:~$ rm whirr.log
> ab@ubuntu12-64:~$ !1544
>
> ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
> Started cluster of 2 instances
> Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker],
> publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f,
> nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f,
> name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d,
> description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]},
> group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
> arch=paravirtual, version=12.04,
> description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
> is64Bit=true}, status=RUNNING[running], loginPort=22,
> hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185],
> publicAddresses=[54.224.175.65], hardware={id=t1.micro,
> providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630,
> volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true,
> durable=true}], hypervisor=xen,
> supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
> loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}},
> Instance{roles=[hadoop-namenode, hadoop-jobtracker],
> publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b,
> nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b,
> name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d,
> description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]},
> group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
> arch=paravirtual, version=12.04,
> description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
> is64Bit=true}, status=RUNNING[running], loginPort=22,
> hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106],
> publicAddresses=[54.227.189.132], hardware={id=t1.micro,
> providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630,
> volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true,
> durable=true}], hypervisor=xen,
> supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
> loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>
>
> You can log into instances using the following ssh commands:
> [hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o
> "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no
> ab@54.224.175.65
> [hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o
> "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no
> ab@54.227.189.132
>
> To destroy cluster, run 'whirr destroy-cluster' with the same options used
> to launch it.
> ab@ubuntu12-64:~$
>
> as you know, i did change the properties file this morning to download the
> openjdk - i don't know if that is a difference or not. let me check if java
> is installed and hadoop is running - i think i should have gotten an error,
> since i didn't update the oracle java 7 script.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Tuesday, August 6, 2013 10:35 AM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Do you have the whirr.log from that attempt?
>
> A.
>
> On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>
> i built whirr in my own directory - i didn't change it (yet) - i just
> check it out and tried to compile - you can see i had some memory issues
> that i didn't notice right away:
>
> 1530 git clone git://git.apache.org/whirr.git
> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1532 cd whirr/
> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1534 mvn install
> 1535 cd
> 1536 rm whirr.log
> 1537 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1538 export MAVEN_OPTS=-Xmx200m
> 1539 cd ~/git/whirr/
> 1540 mvn install
> 1541 export MAVEN_OPTS=-Xmx1G
> 1542 mvn install
> 1543 cd
> 1544 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1545 history
>
> this is the console at the end of "mvn install"
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
> [INFO] Whirr ............................................. SUCCESS [4.662s]
> [INFO] Apache Whirr Core ................................. SUCCESS
> [1:43.913s]
> [INFO] Apache Whirr Cassandra ............................ SUCCESS
> [10.705s]
> [INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
> [INFO] Apache Whirr Hadoop ............................... SUCCESS
> [11.294s]
> [INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
> [INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
> [INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
> [INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
> [INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
> [INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
> [INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
> [INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
> [INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
> [INFO] Apache Whirr Chef ................................. SUCCESS
> [12.153s]
> [INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
> [INFO] Apache Whirr Kerberos ............................. SUCCESS
> [12.207s]
> [INFO] Apache Whirr CLI .................................. SUCCESS
> [14.940s]
> [INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 3:43.355s
> [INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
> [INFO] Final Memory: 109M/262M
> [INFO]
> ------------------------------------------------------------------------
> ab@ubuntu12-64:~/git/whirr$ cd
> ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
> Exception in thread "main" java.lang.RuntimeException:
> java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: Too many instance failed while bootstrapping! 0
> successfully started instances while 0 instances failed
> at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
> at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
> at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
> at org.apache.whirr.cli.Main.run(Main.java:69)
> at org.apache.whirr.cli.Main.main(Main.java:102)
> Caused by: java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: Too many instance failed while bootstrapping! 0
> successfully started instances while 0 instances failed
> at
> org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
> at
> org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
> at
> org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
> at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
> ... 4 more
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException:
> Too many instance failed while bootstrapping! 0 successfully started
> instances while 0 instances failed
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> at
> org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
> ... 7 more
> Caused by: java.io.IOException: Too many instance failed while
> bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
>
> 2 instances were started - like i asked for - the 1st instance was
> terminated almost immediately, the 2nd was left running for a while -
> eventually, i hit ^c on the launch and terminated the 2nd instance from the
> aws console.
>
> i need some more coaching.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 5:54 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Try not building as root - that can throw things off.
>
> A.
>
> On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>
> as for the java 7 problem - i found this suggestion:
>
>
> http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
> i tried to download whirr - as suggested here:
>
> https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
> o i did: git clone ...
> o i modified: core/src/main/resources/functions/...
> o i did: mvn eclipse:eclipse ...
> o i skipped: eclipse import
> o i ran: mvn install
>
> it fails in the "mvn install" during test
>
> Tests in error:
>
> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
> [..]
>
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest):
> cluster-user != root or do not run as root
>
> Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
> [INFO] Whirr ............................................. SUCCESS [4.488s]
> [INFO] Apache Whirr Core ................................. FAILURE
> [17.113s]
> [INFO] Apache Whirr Cassandra ............................ SKIPPED
> [INFO] Apache Whirr Ganglia .............................. SKIPPED
> [INFO] Apache Whirr Hadoop ............................... SKIPPED
>
> is there a better way to try this suggestion?
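A possible note on the failures quoted above: they come from Whirr's guard against building as root, not from real breakage in the code. A sketch of two ways around it, assuming a standard Maven setup ('builduser' below is a placeholder account name):

```shell
# build as an unprivileged user so the "do not run as root" guard passes
sudo -u builduser mvn install

# or build the artifacts without running the test phase at all
mvn install -DskipTests
```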
>
> ------------------------------
> From: Andrew Bayer <an...@gmail.com>
>
> To: a b <au...@yahoo.com>
> Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
> Sent: Monday, August 5, 2013 4:48 PM
>
> Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Looks like it's the Oracle JDK7 download that's failing -
> http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz
> isn't actually there. I don't know if they actually have a consistent link
> to get the JDK7 tarball regardless of version. I'd just use OpenJDK for
> now, if you can.
>
> A.
>
> On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
> Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64
> Packages [1,273 kB]
> Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64
> Packages [4,786 kB]
> Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages
> [1,274 kB]
> Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386
> Packages [4,796 kB]
> Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main
> TranslationIndex [3,706 B]
> Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> TranslationIndex [2,922 B]
> Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Sources [412 kB]
> Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Sources [93.1 kB]
> Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64
> Packages [672 kB]
> Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> amd64 Packages [210 kB]
> Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386
> Packages [692 kB]
> Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> i386 Packages [214 kB]
> Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> TranslationIndex [3,564 B]
> Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> TranslationIndex [2,850 B]
> Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main
> Translation-en [726 kB]
> Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> Translation-en [3,341 kB]
> Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Translation-en [298 kB]
> Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Translation-en [123 kB]
> Fetched 26.1 MB in 21s (1,241 kB/s)
> Reading package lists...
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5.
> Continuing.
> , error=
> gzip: stdin: not in gzip format
> tar: Child returned status 1
> tar: Error is not recoverable: exiting now
> mv: cannot stat `jdk1*': No such file or directory
> update-alternatives: error: alternative path
> /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.
>
> ------------------------------
> From: Andrew Bayer <an...@gmail.com>
> To: user@whirr.apache.org; a b <au...@yahoo.com>
> Sent: Monday, August 5, 2013 4:22 PM
>
> Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, Whirr does try to download the md5 file, but if it fails to find it,
> that's not a blocking error - it'll keep going anyway. What's after that in
> the logs?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>
> ok - i'm not sure what you are asking.
>
> whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>
> where these are the properties i think i changed or added from the
> original recipe:
>
> whirr.cluster-name=hadoop-ec2
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1
> hadoop-datanode+hadoop-tasktracker
> whirr.hardware-id=t1.micro
> whirr.image-id=us-east-1/ami-25d9a94c
> whirr.hadoop.version=1.2.1
> whirr.provider=aws-ec2
> whirr.identity=${env:AWS_ACCESS_KEY}
> whirr.credential=${env:AWS_SECRET_KEY}
> whirr.location-id=us-east-1
> whirr.java.install-function=install_oracle_jdk7
>
> ------------------------------
> From: Andrew Bayer <an...@gmail.com>
> To: user@whirr.apache.org; a b <au...@yahoo.com>
> Sent: Monday, August 5, 2013 4:11 PM
> Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, it looks like they've actually been doing .mds for a while. Where are
> you seeing this error?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com>wrote:
>
> That looks like they broke the hadoop-1.2.1 release - the file should be
> .md5. I'd bug the Hadoop project about that.
>
> A.
>
>
> On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>
> i get a whirr error:
>
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.
> md5
>
> when i browse over to:
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
> i can see the file is named: hadoop-1.2.1.tar.gz.mds
>
> how do i tell whirr to use a different suffix?
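As the replies note, the missing .md5 is non-blocking, so the suffix itself does not need to change. If a mirror's checksum layout keeps getting in the way, the download can instead be redirected to an explicit tarball; a sketch, assuming the `whirr.hadoop.tarball.url` override is available in your Whirr version:

```properties
# sketch only: fetch a specific tarball from the apache archive
# instead of relying on the default mirror layout
whirr.hadoop.version=1.2.1
whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
```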
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
i've come to a conclusion and i am moving on:
o i found no way to run oracle-jdk7 with ubuntu:
- i couldn't modify the existing install file and rebuild from the current git version; the rebuilt code failed during the launch
- i couldn't preinstall oracle-jdk7 and launch that ami; the running cluster did not have hadoop running, and i'm not sure why - the hadoop home directory is missing, but it is also missing in a successful launch
o i found only one way to run openjdk-6 with ubuntu:
- it will not run with a public ubuntu image; the launch fails during the openjdk install, which is missing (at least) libxt from x11, required for a headless java install
- if openjdk-6 is preinstalled, the launch runs; the openjdk-6 install notices openjdk is already there, skips a second install, and then launches hadoop successfully
i am going to try to run with openjdk-6 and hadoop 1.2.1. thanks andrew for helping me and giving me the encouragement needed to reach a conclusion. now i can move on and actually try to run hadoop.
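The working combination described above can be written down as a recipe. Here is a minimal sketch, with the caveats that `install_openjdk` is assumed to be the install-function name shipped with Whirr, and the image id is a placeholder standing in for the poster's private ami with openjdk-6 preinstalled:

```properties
# sketch only: openjdk-6 preinstalled on the image, hadoop 1.2.1 via whirr
whirr.cluster-name=hadoop-ec2
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
whirr.provider=aws-ec2
whirr.hadoop.version=1.2.1
# assumed function name; the thread's failing setting was install_oracle_jdk7
whirr.java.install-function=install_openjdk
# placeholder: a private ami with openjdk-6 already installed
whirr.image-id=us-east-1/ami-XXXXXXXX
```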
________________________________
From: a b <au...@yahoo.com>
To: "user@whirr.apache.org" <us...@whirr.apache.org>
Sent: Tuesday, August 6, 2013 4:46 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
ok, then - i'll give up for today - i'd tweak, but i've no idea what to do. thanks so much for helping me - sorry, i'm having such a struggle getting out of the nest.
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 4:22 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
That suggests you need to tweak the SSH settings on the image.
I'll try to make a run at testing your config tomorrow.
A.
On Tue, Aug 6, 2013 at 4:20 PM, a b <au...@yahoo.com> wrote:
so i had an inspiration - i booted an ubuntu server, installed oracle java 7 by hand, and then saved it as a new private ec2 ami. i put the ami id in the properties file and tried to launch - for both the 0.8.2 version and the current version i got a packet size error - and both hung in some kind of slow loop, repeating 7 tries to get the right packet size. should launching from my own ami work?
>
>
>
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: user@whirr.apache.org; a b <au...@yahoo.com>
>Sent: Tuesday, August 6, 2013 12:11 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>On the no private key issue - that's odd. Looks like a race condition of some sort, but I'm not sure how it's happening. That does look like a bug of some sort in jclouds, I think. I'll dig into it when I get a chance. But it doesn't seem to be actually blocking anything - the instances are still getting created, and the initial login to the instances is working fine.
>
>
>The really weird thing is that one instance is behaving differently than the other - apt-get update is getting repo URLs at archive.ubuntu.com on the bad one, and us-east-1.ec2.archive.ubuntu.com on the other. That's just strange. Mind trying with a larger size than t1.micro, just in case that's being weird for some reason?
>
>
>A.
>
>
>
>On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
>
>can you help me move forward?
>>
>>o i can't use the 0.8 version - it doesn't install a headless version on a server.
>>o i can't modify the current version - unmodified, it gets a: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: no private key configured
>>
>>
>>
>>o maybe with some coaching - i'm a git beginner, i can check out a 0.8 version and modify that? or is there a better path?
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: a b <au...@yahoo.com>
>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>Sent: Tuesday, August 6, 2013 11:00 AM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Random weird luck is always real. =) If you'd like to open a JIRA for the Oracle JDK download issues, that'd be appreciated - I'll see what I can do with it.
>>
>>
>>A.
>>
>>
>>On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>>
>>so now i feel like an idiot, it is running now:
>>>
>>>ab@ubuntu12-64:~$ rm whirr.log
>>>ab@ubuntu12-64:~$ !1544
>>>
>>>~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
>>>Started cluster of 2 instances
>>>Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f, name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04,
description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185], publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode, hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b, nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2,
imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106], publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>>>
>>>
>>>You can log into instances using the following ssh commands:
>>>[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
>>>[hadoop-namenode+hadoop-jobtracker]: ssh -i
/home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
>>>
>>>To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
>>>ab@ubuntu12-64:~$
>>>
>>>
>>>as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
>>>
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <an...@gmail.com>
>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>
>>>Sent: Tuesday, August 6, 2013 10:35 AM
>>>
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>
>>>
>>>
>>>Do you have the whirr.log from that attempt?
>>>
>>>
>>>A.
>>>
>>>
>>>On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>>>
>>>i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>>>>
>>>>
>>>>
>>>> 1530 git clone git://git.apache.org/whirr.git
>>>> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>>> 1532 cd whirr/
>>>> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>>> 1534 mvn install
>>>> 1535 cd
>>>> 1536 rm whirr.log
>>>> 1537 ~/git/whirr/bin/whirr launch-cluster
--config ~/whirr/recipes/hadoop.properties
>>>> 1538 export MAVEN_OPTS=-Xmx200m
>>>> 1539 cd ~/git/whirr/
>>>> 1540 mvn install
>>>> 1541 export MAVEN_OPTS=-Xmx1G
>>>> 1542 mvn install
>>>> 1543 cd
>>>> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>> 1545 history
>>>>
>>>>
>>>>
>>>>this is the console at the end of "mvn install"
>>>>
>>>>
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Reactor Summary:
>>>>[INFO]
>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>>>>[INFO] Whirr ............................................. SUCCESS [4.662s]
>>>>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>>>>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>>>>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>>>>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>>>>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>>>>[INFO] Apache Whirr HBase
................................ SUCCESS [5.196s]
>>>>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>>>>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>>>>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>>>>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>>>>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>>>>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>>>>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>>>>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>>>>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>>>>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>>>>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>>>>[INFO] Apache Whirr Examples
............................. SUCCESS [4.494s]
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] BUILD SUCCESS
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Total time: 3:43.355s
>>>>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>>>>[INFO] Final Memory: 109M/262M
>>>>[INFO] ------------------------------------------------------------------------
>>>>ab@ubuntu12-64:~/git/whirr$ cd
>>>>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>>>>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at
org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>>>> at org.apache.whirr.cli.Main.run(Main.java:69)
>>>> at org.apache.whirr.cli.Main.main(Main.java:102)
>>>>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>>>> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>>>> at
org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>>>> ... 4 more
>>>>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>>>> ... 7 more
>>>>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at
org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>>>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>> at java.lang.Thread.run(Thread.java:724)
>>>>
>>>>
>>>>
>>>>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>>>>
>>>>
>>>>i need some more coaching.
>>>>
>>>>
>>>>
>>>>
>>>>________________________________
>>>> From: Andrew Bayer <an...@gmail.com>
>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>Sent: Monday, August 5, 2013 5:54 PM
>>>>
>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>
>>>>
>>>>
>>>>Try not building as root - that can throw things off.
>>>>
>>>>
>>>>A.
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>>>>
>>>>as for the java 7 problem - i found this suggestion:
>>>>>
>>>>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>>>>
>>>>>
>>>>>i tried to download whirr - as suggested here:
>>>>>
>>>>>
>>>>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>>>>
>>>>>
>>>>>o i did: git clone ...
>>>>>
>>>>>o i modified: core/src/main/resources/functions/...
>>>>>o i did: mvn eclipse:eclipse ...
>>>>>o i skipped: eclipse import
>>>>>o i ran: mvn install
>>>>>
>>>>>
>>>>>it fails in "mvn install" during the test phase
>>>>>
>>>>>
>>>>>Tests in error:
>>>>>
testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>>[..]
>>>>> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>>>>>
>>>>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>>>>
>>>>>[INFO] ------------------------------------------------------------------------
>>>>>[INFO] Reactor Summary:
>>>>>[INFO]
>>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>>>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>>>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>>>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>>>>[INFO]
Apache Whirr Ganglia .............................. SKIPPED
>>>>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>>>>
>>>>>
>>>>>
>>>>>is there a better way to try this suggestion?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>________________________________
>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>
>>>>>To: a b <au...@yahoo.com>
>>>>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>>>>Sent: Monday, August 5, 2013 4:48 PM
>>>>>
>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>
>>>>>
>>>>>
>>>>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>>>>>
>>>>>
>>>>>A.
>>>>>
>>>>>
>>>>>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>>>>>
>>>>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
>>>>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]
>>>>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]
>>>>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]
>>>>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]
>>>>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]
>>>>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]
>>>>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]
>>>>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]
>>>>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]
>>>>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]
>>>>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]
>>>>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
>>>>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
>>>>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]
>>>>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]
>>>>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]
>>>>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]
>>>>>>Fetched 26.1 MB in 21s (1,241 kB/s)
>>>>>>Reading package lists...
>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.
>>>>>>, error=
>>>>>>gzip: stdin: not in gzip format
>>>>>>tar: Child returned status 1
>>>>>>tar: Error is not recoverable: exiting now
>>>>>>mv: cannot stat `jdk1*': No such file or directory
>>>>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>________________________________
>>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>>Sent: Monday, August 5, 2013 4:22 PM
>>>>>>
>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>
>>>>>>
>>>>>>
>>>>>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
>>>>>>
>>>>>>
>>>>>>A.
>>>>>>
>>>>>>
>>>>>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>>>>>
>>>>>>ok - i'm not sure what you are asking.
>>>>>>>
>>>>>>>whirr launch-cluster --config
~/whirr/recipes/hadoop.properties
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>where these are the properties i think i changed or added from the original recipe:
>>>>>>>
>>>>>>>
>>>>>>>whirr.cluster-name=hadoop-ec2
>>>>>>>whirr.instance-templates=1
hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>>>>>whirr.hardware-id=t1.micro
>>>>>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>>>>>whirr.hadoop.version=1.2.1
>>>>>>>whirr.provider=aws-ec2
>>>>>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>>>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>>>>>whirr.location-id=us-east-1
>>>>>>>whirr.java.install-function=install_oracle_jdk7
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>________________________________
>>>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>>>Sent: Monday, August 5, 2013 4:11 PM
>>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>>>>>
>>>>>>>
>>>>>>>A.
>>>>>>>
>>>>>>>
>>>>>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>>>>>>
>>>>>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>>>>>
>>>>>>>>A.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>>>>>>>>
>>>>>>>>i get a whirr error:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>when i browse over to: http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
>>>>>>>>>i can see the file is named: hadoop-1.2.1.tar.gz.mds
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>how do i tell whirr to use a different suffix?
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
ok, then - i'll give up for today - i'd tweak, but i've no idea what to do. thanks so much for helping me - sorry, i'm having such a struggle getting out of the nest.
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 4:22 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
That suggests you need to tweak the SSH settings on the image.
I'll try to make a run at testing your config tomorrow.
A.
On Tue, Aug 6, 2013 at 4:20 PM, a b <au...@yahoo.com> wrote:
so i had an inspiration - i booted an ubuntu server, installed oracle java 7 by hand, and then saved it as a new private ec2 ami. i put the ami id in the properties file and tried to launch - for both the 0.8.2 version and the current version i got a packet size error - and both hung in some kind of slow loop, repeating 7 tries to get the right packet size. should launching from my own ami work?
>
>
>
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: user@whirr.apache.org; a b <au...@yahoo.com>
>Sent: Tuesday, August 6, 2013 12:11 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>On the no private key issue - that's odd. Looks like a race condition of some sort, but I'm not sure how it's happening. That does look like a bug of some sort in jclouds, I think. I'll dig into it when I get a chance. But it doesn't seem to be actually blocking anything - the instances are still getting created, and the initial login to the instances is working fine.
>
>
>The really weird thing is that one instance is behaving differently than the other - apt-get update is getting repo URLs at archive.ubuntu.com on the bad one, and us-east-1.ec2.archive.ubuntu.com on the other. That's just strange. Mind trying with a larger size than t1.micro, just in case that's being weird for some reason?
>
>
>A.
>
>
>
>On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
>
>can you help me move forward?
>>
>>o i can't use the 0.8 version - it doesn't install a headless version on a server.
>>o i can't modify the current version - unmodified, it gets a: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: no private key configured
>>
>>
>>
>>o maybe with some coaching - i'm a git beginner, i can check out a 0.8 version and modify that? or is there a better path?
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: a b <au...@yahoo.com>
>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>Sent: Tuesday, August 6, 2013 11:00 AM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Random weird luck is always real. =) If you'd like to open a JIRA for the Oracle JDK download issues, that'd be appreciated - I'll see what I can do with it.
>>
>>
>>A.
>>
>>
>>On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>>
>>so now i feel like an idiot, it is running now:
>>>
>>>ab@ubuntu12-64:~$ rm whirr.log
>>>ab@ubuntu12-64:~$ !1544
>>>
>>>~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
>>>Started cluster of 2 instances
>>>Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f, name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04,
description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185], publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode, hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b, nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2,
imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106], publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>>>
>>>
>>>You can log into instances using the following ssh commands:
>>>[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
>>>[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
>>>
>>>To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
>>>ab@ubuntu12-64:~$
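[Editor's note] The launcher output above prints ready-made ssh command lines. As a minimal sketch (my own helper, not Whirr's actual implementation), the same command can be assembled programmatically; host-key checking is disabled because a relaunched cloud instance presents a brand-new host key:

```python
def ssh_command(key_path, user, host):
    # Mirrors the options Whirr prints: ignore the known-hosts file and
    # skip strict host-key checking, since fresh instances always have
    # never-before-seen host keys.
    return [
        "ssh", "-i", key_path,
        "-o", "UserKnownHostsFile /dev/null",
        "-o", "StrictHostKeyChecking=no",
        "%s@%s" % (user, host),
    ]

print(" ".join(ssh_command("/home/ab/.ssh/id_rsa", "ab", "54.224.175.65")))
```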
>>>
>>>
>>>as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
>>>
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <an...@gmail.com>
>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>
>>>Sent: Tuesday, August 6, 2013 10:35 AM
>>>
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>
>>>
>>>
>>>Do you have the whirr.log from that attempt?
>>>
>>>
>>>A.
>>>
>>>
>>>On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>>>
>>>i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>>>>
>>>>
>>>>
>>>> 1530 git clone git://git.apache.org/whirr.git
>>>> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>>> 1532 cd whirr/
>>>> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>>> 1534 mvn install
>>>> 1535 cd
>>>> 1536 rm whirr.log
>>>> 1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>> 1538 export MAVEN_OPTS=-Xmx200m
>>>> 1539 cd ~/git/whirr/
>>>> 1540 mvn install
>>>> 1541 export MAVEN_OPTS=-Xmx1G
>>>> 1542 mvn install
>>>> 1543 cd
>>>> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>> 1545 history
>>>>
>>>>
>>>>
>>>>this is the console at the end of "mvn install"
>>>>
>>>>
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Reactor Summary:
>>>>[INFO]
>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>>>>[INFO] Whirr ............................................. SUCCESS [4.662s]
>>>>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>>>>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>>>>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>>>>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>>>>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>>>>[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
>>>>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>>>>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>>>>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>>>>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>>>>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>>>>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>>>>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>>>>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>>>>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>>>>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>>>>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>>>>[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] BUILD SUCCESS
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Total time: 3:43.355s
>>>>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>>>>[INFO] Final Memory: 109M/262M
>>>>[INFO] ------------------------------------------------------------------------
>>>>ab@ubuntu12-64:~/git/whirr$ cd
>>>>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>>>>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>>>> at org.apache.whirr.cli.Main.run(Main.java:69)
>>>> at org.apache.whirr.cli.Main.main(Main.java:102)
>>>>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>>>> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>>>> at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>>>> ... 4 more
>>>>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>>> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>>>> ... 7 more
>>>>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>>>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>> at java.lang.Thread.run(Thread.java:724)
>>>>
>>>>
>>>>
>>>>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>>>>
>>>>
>>>>i need some more coaching.
>>>>
>>>>
>>>>
>>>>
>>>>________________________________
>>>> From: Andrew Bayer <an...@gmail.com>
>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>Sent: Monday, August 5, 2013 5:54 PM
>>>>
>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>
>>>>
>>>>
>>>>Try not building as root - that can throw things off.
>>>>
>>>>
>>>>A.
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>>>>
>>>>as for the java 7 problem - i found this suggestion:
>>>>>
>>>>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>>>>
>>>>>
>>>>>i tried to download whirr - as suggested here:
>>>>>
>>>>>
>>>>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>>>>
>>>>>
>>>>>o i did: git clone ...
>>>>>
>>>>>o i modified: core/src/main/resources/functions/...
>>>>>o i did: mvn eclipse:eclipse ...
>>>>>o i skipped: eclipse import
>>>>>o i ran: mvn install
>>>>>
>>>>>
>>>>>it fails in the "mvn install" during test
>>>>>
>>>>>
>>>>>Tests in error:
>>>>> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>>[..]
>>>>> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>>> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>>>>>
>>>>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>>>>
>>>>>[INFO] ------------------------------------------------------------------------
>>>>>[INFO] Reactor Summary:
>>>>>[INFO]
>>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>>>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>>>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>>>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>>>>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>>>>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>>>>
>>>>>
>>>>>
>>>>>is there a better way to try this suggestion?
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>________________________________
>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>
>>>>>To: a b <au...@yahoo.com>
>>>>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>>>>Sent: Monday, August 5, 2013 4:48 PM
>>>>>
>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>
>>>>>
>>>>>
>>>>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>>>>>
>>>>>
>>>>>A.
>>>>>
>>>>>
>>>>>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>>>>>
>>>>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]^M
>>>>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]^M
>>>>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]^M
>>>>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]^M
>>>>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]^M
>>>>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]^M
>>>>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]^M
>>>>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]^M
>>>>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]^M
>>>>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]^M
>>>>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]^M
>>>>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]^M
>>>>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]^M
>>>>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]^M
>>>>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]^M
>>>>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]^M
>>>>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]^M
>>>>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]^M
>>>>>>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>>>>>>Reading package lists...^M
>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.^M
>>>>>>, error=^M
>>>>>>gzip: stdin: not in gzip format^M
>>>>>>tar: Child returned status 1^M
>>>>>>tar: Error is not recoverable: exiting now^M
>>>>>>mv: cannot stat `jdk1*': No such file or directory^M
>>>>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>________________________________
>>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>>Sent: Monday, August 5, 2013 4:22 PM
>>>>>>
>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>
>>>>>>
>>>>>>
>>>>>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
>>>>>>
>>>>>>
>>>>>>A.
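[Editor's note] Since Hadoop mirrors have shipped both `.md5` and `.mds` checksum files for releases, a tolerant downloader can simply try both suffixes and, as Whirr does, treat a total miss as non-fatal. This is a hypothetical sketch; the function name and fallback order are my assumptions, not Whirr's code:

```python
def checksum_candidates(tarball_url):
    # Try the suffix Whirr expects first, then the one the mirror
    # actually serves for hadoop-1.2.1; the caller fetches the first
    # URL that exists and continues even if neither does.
    return [tarball_url + suffix for suffix in (".md5", ".mds")]
```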
>>>>>>
>>>>>>
>>>>>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>>>>>
>>>>>>ok - i'm not sure what you are asking.
>>>>>>>
>>>>>>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>where these are the properties i think i changed or added from the original recipe:
>>>>>>>
>>>>>>>
>>>>>>>whirr.cluster-name=hadoop-ec2
>>>>>>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>>>>>whirr.hardware-id=t1.micro
>>>>>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>>>>>whirr.hadoop.version=1.2.1
>>>>>>>whirr.provider=aws-ec2
>>>>>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>>>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>>>>>whirr.location-id=us-east-1
>>>>>>>whirr.java.install-function=install_oracle_jdk7
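[Editor's note] The `whirr.instance-templates` value above packs "count role+role" groups into a single comma-separated string. A hypothetical parser (names are mine, not Whirr's internals) shows how that value decomposes into the 2-node layout the user is launching:

```python
def parse_instance_templates(spec):
    # "1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker"
    # decomposes into a list of (instance_count, [roles]) pairs.
    templates = []
    for group in spec.split(","):
        count, roles = group.strip().split(" ", 1)
        templates.append((int(count), roles.split("+")))
    return templates
```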
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>________________________________
>>>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>>>Sent: Monday, August 5, 2013 4:11 PM
>>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>>>>>
>>>>>>>
>>>>>>>A.
>>>>>>>
>>>>>>>
>>>>>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>>>>>>
>>>>>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>>>>>
>>>>>>>>A.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>>>>>>>>
>>>>>>>>i get a whirr error:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>when i browse over to: http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
>>>>>>>>>i can see the file is named: hadoop-1.2.1.tar.gz.mds
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>how do i tell whirr to use a different suffix?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
That suggests you need to tweak the SSH settings on the image.
I'll try to make a run at testing your config tomorrow.
A.
On Tue, Aug 6, 2013 at 4:20 PM, a b <au...@yahoo.com> wrote:
> so i had an inspiration - i booted an ubuntu server, installed oracle java
> 7 by hand, and then saved it as a new private ec2 ami. i put the id for the
> ami in the properties file and tried to launch - for both the 0.8.2 version
> and the current version i got a packet size error - and both hung in some kind
> of a slow loop, repeating 7 tries to get the right packet size. should
> launching from my own ami work?
>
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Tuesday, August 6, 2013 12:11 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> On the no private key issue - that's odd. Looks like a race condition of
> some sort, but I'm not sure how it's happening. That does look like a bug
> of some sort in jclouds, I think. I'll dig into it when I get a chance. But
> it doesn't seem to be actually blocking anything - the instances are still
> getting created, and the initial login to the instances is working fine.
>
> The really weird thing is that one instance is behaving differently than
> the other - apt-get update is getting repo URLs at archive.ubuntu.com on
> the bad one, and us-east-1.ec2.archive.ubuntu.com on the good one. That's
> just strange.
> Mind trying with a larger size than t1.micro, just in case that's being
> weird for some reason?
>
> A.
>
> On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
>
> can you help me move forward?
>
> o i can't use the 0.8 version - it doesn't install a headless version on a
> server.
> o i can't modify the current version - unmodified, it gets a:
> java.util.concurrent.ExecutionException:
> java.lang.IllegalArgumentException: no private key configured
>
> o maybe with some coaching - i'm a git beginner, i can check out a 0.8
> version and modify that? or is there a better path?
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* a b <au...@yahoo.com>
> *Cc:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Tuesday, August 6, 2013 11:00 AM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Random weird luck is always real. =) If you'd like to open a JIRA for the
> Oracle JDK download issues, that'd be appreciated - I'll see what I can do
> with it.
>
> A.
>
> On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>
> so now i feel like an idiot, it is running now:
>
> ab@ubuntu12-64:~$ rm whirr.log
> ab@ubuntu12-64:~$ !1544
>
> ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
> Started cluster of 2 instances
> Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker],
> publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f,
> nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f,
> name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d,
> description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]},
> group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
> arch=paravirtual, version=12.04,
> description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
> is64Bit=true}, status=RUNNING[running], loginPort=22,
> hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185],
> publicAddresses=[54.224.175.65], hardware={id=t1.micro,
> providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630,
> volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true,
> durable=true}], hypervisor=xen,
> supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
> loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}},
> Instance{roles=[hadoop-namenode, hadoop-jobtracker],
> publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b,
> nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b,
> name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d,
> description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]},
> group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
> arch=paravirtual, version=12.04,
> description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
> is64Bit=true}, status=RUNNING[running], loginPort=22,
> hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106],
> publicAddresses=[54.227.189.132], hardware={id=t1.micro,
> providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630,
> volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true,
> durable=true}], hypervisor=xen,
> supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
> loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>
>
> You can log into instances using the following ssh commands:
> [hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o
> "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no
> ab@54.224.175.65
> [hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o
> "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no
> ab@54.227.189.132
>
> To destroy cluster, run 'whirr destroy-cluster' with the same options used
> to launch it.
> ab@ubuntu12-64:~$
>
> as you know, i did change the properties file this morning to download the
> openjdk - i don't know if that is a difference or not. let me check if java
> is installed and hadoop is running - i think i should have gotten an error,
> since i didn't update the oracle java 7 script.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Tuesday, August 6, 2013 10:35 AM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Do you have the whirr.log from that attempt?
>
> A.
>
> On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>
> i built whirr in my own directory - i didn't change it (yet) - i just
> checked it out and tried to compile - you can see i had some memory issues
> that i didn't notice right away:
>
> 1530 git clone git://git.apache.org/whirr.git
> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1532 cd whirr/
> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1534 mvn install
> 1535 cd
> 1536 rm whirr.log
> 1537 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1538 export MAVEN_OPTS=-Xmx200m
> 1539 cd ~/git/whirr/
> 1540 mvn install
> 1541 export MAVEN_OPTS=-Xmx1G
> 1542 mvn install
> 1543 cd
> 1544 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1545 history
>
> this is the console at the end of "mvn install"
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
> [INFO] Whirr ............................................. SUCCESS [4.662s]
> [INFO] Apache Whirr Core ................................. SUCCESS
> [1:43.913s]
> [INFO] Apache Whirr Cassandra ............................ SUCCESS
> [10.705s]
> [INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
> [INFO] Apache Whirr Hadoop ............................... SUCCESS
> [11.294s]
> [INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
> [INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
> [INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
> [INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
> [INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
> [INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
> [INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
> [INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
> [INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
> [INFO] Apache Whirr Chef ................................. SUCCESS
> [12.153s]
> [INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
> [INFO] Apache Whirr Kerberos ............................. SUCCESS
> [12.207s]
> [INFO] Apache Whirr CLI .................................. SUCCESS
> [14.940s]
> [INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 3:43.355s
> [INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
> [INFO] Final Memory: 109M/262M
> [INFO]
> ------------------------------------------------------------------------
> ab@ubuntu12-64:~/git/whirr$ cd
> ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
> Exception in thread "main" java.lang.RuntimeException:
> java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: Too many instance failed while bootstrapping! 0
> successfully started instances while 0 instances failed
> at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
> at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
> at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
> at org.apache.whirr.cli.Main.run(Main.java:69)
> at org.apache.whirr.cli.Main.main(Main.java:102)
> Caused by: java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: Too many instance failed while bootstrapping! 0
> successfully started instances while 0 instances failed
> at
> org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
> at
> org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
> at
> org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
> at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
> ... 4 more
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException:
> Too many instance failed while bootstrapping! 0 successfully started
> instances while 0 instances failed
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> at
> org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
> ... 7 more
> Caused by: java.io.IOException: Too many instance failed while
> bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
>
> 2 instances were started - like i asked for - the 1st instance was
> terminated almost immediately, the 2nd was left running for a while -
> eventually, i hit ^c on the launch and terminated the 2nd instance from the
> aws console.
>
> i need some more coaching.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 5:54 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Try not building as root - that can throw things off.
>
> A.
>
> On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>
> as for the java 7 problem - i found this suggestion:
>
>
> http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
> i tried to download whirr - as suggested here:
>
> https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
> o i did: git clone ...
> o i modified: core/src/main/resources/functions/...
> o i did: mvn eclipse:eclipse ...
> o i skipped: eclipse import
> o i ran: mvn install
>
> it fails in the "mvn install" during test
>
> Tests in error:
>
> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
> [..]
>
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest):
> cluster-user != root or do not run as root
>
> Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
> [INFO] Whirr ............................................. SUCCESS [4.488s]
> [INFO] Apache Whirr Core ................................. FAILURE
> [17.113s]
> [INFO] Apache Whirr Cassandra ............................ SKIPPED
> [INFO] Apache Whirr Ganglia .............................. SKIPPED
> [INFO] Apache Whirr Hadoop ............................... SKIPPED
>
> is there a better way to try this suggestion?
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
>
> *To:* a b <au...@yahoo.com>
> *Cc:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Monday, August 5, 2013 4:48 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Looks like it's the Oracle JDK7 download that's failing -
> http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't
> actually there. I don't know if they actually have a consistent link
> to get the JDK7 tarball regardless of version. I'd just use OpenJDK for
> now, if you can.
>
> A.
>
> On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
> Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
> Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]
> Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]
> Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]
> Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]
> Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]
> Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]
> Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]
> Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]
> Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]
> Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]
> Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]
> Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
> Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
> Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]
> Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]
> Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]
> Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]
> Fetched 26.1 MB in 21s (1,241 kB/s)
> Reading package lists...
> Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.
> , error=
> gzip: stdin: not in gzip format
> tar: Child returned status 1
> tar: Error is not recoverable: exiting now
> mv: cannot stat `jdk1*': No such file or directory
> update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:22 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, Whirr does try to download the md5 file, but if it fails to find it,
> that's not a blocking error - it'll keep going anyway. What's after that in
> the logs?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>
> ok - i'm not sure what you are asking.
>
> whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>
> where these are the properties i think i changed or added from the
> original recipe:
>
> whirr.cluster-name=hadoop-ec2
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
> whirr.hardware-id=t1.micro
> whirr.image-id=us-east-1/ami-25d9a94c
> whirr.hadoop.version=1.2.1
> whirr.provider=aws-ec2
> whirr.identity=${env:AWS_ACCESS_KEY}
> whirr.credential=${env:AWS_SECRET_KEY}
> whirr.location-id=us-east-1
> whirr.java.install-function=install_oracle_jdk7
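Following Andrew's suggestion to fall back to OpenJDK, only the last line of the recipe above needs to change. A sketch, assuming the stock `install_openjdk` function that ships with Whirr's bundled scripts (check `core/src/main/resources/functions/` in your checkout to confirm the name):

```properties
# Swap the Oracle JDK7 installer for the bundled OpenJDK one:
whirr.java.install-function=install_openjdk
```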
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:11 PM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, it looks like they've actually been doing .mds for a while. Where are
> you seeing this error?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>
> That looks like they broke the hadoop-1.2.1 release - the file should be
> .md5. I'd bug the Hadoop project about that.
>
> A.
>
>
> On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>
> i get a whirr error:
>
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>
> when i browse over to:
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
> i can see the file is named: hadoop-1.2.1.tar.gz.mds
>
> how do i tell whirr to use a different suffix?
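For what it's worth, the `.mds` files Apache publishes bundle several digests (MD5, SHA1, SHA256, ...) in one file, unlike a bare `.md5`. A manual spot-check of a downloaded tarball could look like this (the tarball here is a stand-in, not the real Hadoop release):

```shell
# Create a stand-in file in place of the real hadoop-1.2.1.tar.gz download:
printf 'example tarball bytes' > hadoop-1.2.1.tar.gz

# Compute the local MD5 and uppercase it, matching the style of the
# "MD5 = ..." line inside hadoop-1.2.1.tar.gz.mds:
md5sum hadoop-1.2.1.tar.gz | awk '{print toupper($1)}'
```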
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
so i had an inspiration - i booted an ubuntu server, installed oracle java 7 by hand, and then saved it as a new private ec2 ami. i put the id for the ami in the properties file and tried to launch - for both the 0.8.2 version and the current version i got a packet size error - and both hung in some kind of a slow loop, repeating 7 tries to get the right packet size. should launching from my own ami work?
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 12:11 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
On the no private key issue - that's odd. Looks like a race condition of some sort, but I'm not sure how it's happening. That does look like a bug of some sort in jclouds, I think. I'll dig into it when I get a chance. But it doesn't seem to be actually blocking anything - the instances are still getting created, and the initial login to the instances is working fine.
The really weird thing is that one instance is behaving differently than the other - apt-get update is getting repo URLs at archive.ubuntu.com on the bad one, and us-east-1.ec2.archive.ubuntu.com on the good one. That's just strange. Mind trying with a larger size than t1.micro, just in case that's being weird for some reason?
A.
On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
can you help me move forward?
>
>o i can't use the 0.8 version - it doesn't install a headless version on a server.
>o i can't modify the current version - unmodified, it gets a: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: no private key configured
>
>
>
>o maybe with some coaching - i'm a git beginner, i can check out a 0.8 version and modify that? or is there a better path?
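For the git-beginner route, checking out an 0.8 line from a clone of the Whirr repository would look roughly like this; the branch name `branch-0.8` is an assumption based on common Apache conventions, so list the remote branches first to confirm what actually exists:

```shell
cd ~/git/whirr           # an existing clone of git://git.apache.org/whirr.git
git branch -r            # list remote-tracking branches to find the 0.8 line
git checkout -b my-0.8 origin/branch-0.8   # assumed branch name
```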
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: a b <au...@yahoo.com>
>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>Sent: Tuesday, August 6, 2013 11:00 AM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Random weird luck is always real. =) If you'd like to open a JIRA for the Oracle JDK download issues, that'd be appreciated - I'll see what I can do with it.
>
>
>A.
>
>
>On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>
>so now i feel like an idiot, it is running now:
>>
>>ab@ubuntu12-64:~$ rm whirr.log
>>ab@ubuntu12-64:~$ !1544
>>
>>~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
>>Started cluster of 2 instances
>>Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f, name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04,
description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185], publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode, hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b, nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2,
imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106], publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>>
>>
>>You can log into instances using the following ssh commands:
>>[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
>>[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
>>
>>To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
>>ab@ubuntu12-64:~$
>>
>>
>>as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
>>
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>
>>Sent: Tuesday, August 6, 2013 10:35 AM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Do you have the whirr.log from that attempt?
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>>
>>i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>>>
>>>
>>>
>>> 1530 git clone git://git.apache.org/whirr.git
>>> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>> 1532 cd whirr/
>>> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>>> 1534 mvn install
>>> 1535 cd
>>> 1536 rm whirr.log
>>> 1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>> 1538 export MAVEN_OPTS=-Xmx200m
>>> 1539 cd ~/git/whirr/
>>> 1540 mvn install
>>> 1541 export MAVEN_OPTS=-Xmx1G
>>> 1542 mvn install
>>> 1543 cd
>>> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>> 1545 history
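The memory issue visible in the history above is the 200 MB heap from step 1538; step 1541 raised it and the build went through. The fix, before re-running the build:

```shell
# Maven reads JVM options from MAVEN_OPTS; 200m was too small for the
# Whirr build, so give it a 1 GB heap instead:
export MAVEN_OPTS=-Xmx1G
```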
>>>
>>>
>>>
>>>this is the console at the end of "mvn install"
>>>
>>>
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] Reactor Summary:
>>>[INFO]
>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>>>[INFO] Whirr ............................................. SUCCESS [4.662s]
>>>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>>>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>>>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>>>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>>>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>>>[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
>>>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>>>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>>>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>>>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>>>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>>>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>>>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>>>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>>>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>>>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>>>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>>>[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] BUILD SUCCESS
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] Total time: 3:43.355s
>>>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>>>[INFO] Final Memory: 109M/262M
>>>[INFO] ------------------------------------------------------------------------
>>>ab@ubuntu12-64:~/git/whirr$ cd
>>>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>>>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>>> at org.apache.whirr.cli.Main.run(Main.java:69)
>>> at org.apache.whirr.cli.Main.main(Main.java:102)
>>>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>>> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>>> at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>>> ... 4 more
>>>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>>> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>>> ... 7 more
>>>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:724)
>>>
>>>
>>>
>>>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>>>
>>>
>>>i need some more coaching.
>>>
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <an...@gmail.com>
>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>Sent: Monday, August 5, 2013 5:54 PM
>>>
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>
>>>
>>>
>>>Try not building as root - that can throw things off.
>>>
>>>
>>>A.
>>>
>>>
>>>On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>>>
>>>as for the java 7 problem - i found this suggestion:
>>>>
>>>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>>>
>>>>
>>>>i tried to download whirr - as suggested here:
>>>>
>>>>
>>>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>>>
>>>>
>>>>o i did: git clone ...
>>>>
>>>>o i modified: core/src/main/resources/functions/...
>>>>o i did: mvn eclipse:eclipse ...
>>>>o i skipped: eclipse import
>>>>o i ran: mvn install
>>>>
>>>>
>>>>it fails in the "mvn install" during test
>>>>
>>>>
>>>>Tests in error:
>>>> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>>[..]
>>>> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>>> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>>>>
>>>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>>>
>>>>[INFO] ------------------------------------------------------------------------
>>>>[INFO] Reactor Summary:
>>>>[INFO]
>>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>>>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>>>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>>>
>>>>
>>>>
>>>>is there a better way to try this suggestion?
>>>>
>>>>
>>>>
>>>>
>>>>________________________________
>>>> From: Andrew Bayer <an...@gmail.com>
>>>>
>>>>To: a b <au...@yahoo.com>
>>>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>>>Sent: Monday, August 5, 2013 4:48 PM
>>>>
>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>
>>>>
>>>>
>>>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>>>>
>>>>
>>>>A.
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>>>>
>>>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
>>>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]
>>>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]
>>>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]
>>>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]
>>>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]
>>>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]
>>>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]
>>>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]
>>>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]
>>>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]
>>>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]
>>>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
>>>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
>>>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]
>>>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]
>>>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]
>>>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]
>>>>>Fetched 26.1 MB in 21s (1,241 kB/s)
>>>>>Reading package lists...
>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.
>>>>>, error=
>>>>>gzip: stdin: not in gzip format
>>>>>tar: Child returned status 1
>>>>>tar: Error is not recoverable: exiting now
>>>>>mv: cannot stat `jdk1*': No such file or directory
>>>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>________________________________
>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>Sent: Monday, August 5, 2013 4:22 PM
>>>>>
>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>
>>>>>
>>>>>
>>>>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
>>>>>
>>>>>
>>>>>A.
>>>>>
>>>>>
>>>>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>>>>
>>>>>ok - i'm not sure what you are asking.
>>>>>>
>>>>>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>>>>
>>>>>>
>>>>>>
>>>>>>where these are the properties i think i changed or added from the original recipe:
>>>>>>
>>>>>>
>>>>>>whirr.cluster-name=hadoop-ec2
>>>>>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>>>>whirr.hardware-id=t1.micro
>>>>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>>>>whirr.hadoop.version=1.2.1
>>>>>>whirr.provider=aws-ec2
>>>>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>>>>whirr.location-id=us-east-1
>>>>>>whirr.java.install-function=install_oracle_jdk7
>>>>>>
>>>>>>
>>>>>>
>>>>>>________________________________
>>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>>Sent: Monday, August 5, 2013 4:11 PM
>>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>>
>>>>>>
>>>>>>
>>>>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>>>>
>>>>>>
>>>>>>A.
>>>>>>
>>>>>>
>>>>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>>>>>
>>>>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>>>>
>>>>>>>A.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>>>>>>>
>>>>>>>i get a whirr error:
>>>>>>>>
>>>>>>>>
>>>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>>>>>>>>
>>>>>>>>
>>>>>>>>when i browse over to: http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
>>>>>>>>i can see the file is named: hadoop-1.2.1.tar.gz.mds
>>>>>>>>
>>>>>>>>
>>>>>>>>how do i tell whirr to use a different suffix?
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
On the no private key issue - that's odd. Looks like a race condition of
some sort, but I'm not sure how it's happening. That does look like a bug
of some sort in jclouds, I think. I'll dig into it when I get a chance. But
it doesn't seem to be actually blocking anything - the instances are still
getting created, and the initial login to the instances is working fine.
The really weird thing is that one instance is behaving differently than
the other - apt-get update is getting repo URLs at archive.ubuntu.com on
the bad one, and us-east-1.ec2.archive.ubuntu.com on the good one. That's just
strange.
Mind trying with a larger size than t1.micro, just in case that's being
weird for some reason?
A.
On Tue, Aug 6, 2013 at 11:35 AM, a b <au...@yahoo.com> wrote:
> can you help me move forward?
>
> o i can't use the 0.8 version - it doesn't install a headless version on a
> server.
> o i can't modify the current version - unmodified, it gets a:
> java.util.concurrent.ExecutionException:
> java.lang.IllegalArgumentException: no private key configured
>
> o maybe with some coaching - i'm a git beginner, i can check out a 0.8
> version and modify that? or is there a better path?
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* a b <au...@yahoo.com>
> *Cc:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Tuesday, August 6, 2013 11:00 AM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Random weird luck is always real. =) If you'd like to open a JIRA for the
> Oracle JDK download issues, that'd be appreciated - I'll see what I can do
> with it.
>
> A.
>
> On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
>
> so now i feel like an idiot, it is running now:
>
> ab@ubuntu12-64:~$ rm whirr.log
> ab@ubuntu12-64:~$ !1544
>
> ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
> Started cluster of 2 instances
> Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker],
> publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f,
> nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f,
> name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d,
> description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]},
> group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
> arch=paravirtual, version=12.04,
> description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
> is64Bit=true}, status=RUNNING[running], loginPort=22,
> hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185],
> publicAddresses=[54.224.175.65], hardware={id=t1.micro,
> providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630,
> volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true,
> durable=true}], hypervisor=xen,
> supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
> loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}},
> Instance{roles=[hadoop-namenode, hadoop-jobtracker],
> publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b,
> nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b,
> name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d,
> description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]},
> group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
> arch=paravirtual, version=12.04,
> description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
> is64Bit=true}, status=RUNNING[running], loginPort=22,
> hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106],
> publicAddresses=[54.227.189.132], hardware={id=t1.micro,
> providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630,
> volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true,
> durable=true}], hypervisor=xen,
> supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
> loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>
>
> You can log into instances using the following ssh commands:
> [hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o
> "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no
> ab@54.224.175.65
> [hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o
> "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no
> ab@54.227.189.132
>
> To destroy cluster, run 'whirr destroy-cluster' with the same options used
> to launch it.
> ab@ubuntu12-64:~$
>
> as you know, i did change the properties file this morning to download the
> openjdk - i don't know if that is a difference or not. let me check if java
> is installed and hadoop is running - i think i should have gotten an error,
> since i didn't update the oracle java 7 script.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Tuesday, August 6, 2013 10:35 AM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Do you have the whirr.log from that attempt?
>
> A.
>
> On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>
> i built whirr in my own directory - i didn't change it (yet) - i just
> checked it out and tried to compile - you can see i had some memory issues
> that i didn't notice right away:
>
> 1530 git clone git://git.apache.org/whirr.git
> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1532 cd whirr/
> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1534 mvn install
> 1535 cd
> 1536 rm whirr.log
> 1537 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1538 export MAVEN_OPTS=-Xmx200m
> 1539 cd ~/git/whirr/
> 1540 mvn install
> 1541 export MAVEN_OPTS=-Xmx1G
> 1542 mvn install
> 1543 cd
> 1544 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1545 history
>
> this is the console at the end of "mvn install"
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
> [INFO] Whirr ............................................. SUCCESS [4.662s]
> [INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
> [INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
> [INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
> [INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
> [INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
> [INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
> [INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
> [INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
> [INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
> [INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
> [INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
> [INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
> [INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
> [INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
> [INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
> [INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
> [INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
> [INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 3:43.355s
> [INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
> [INFO] Final Memory: 109M/262M
> [INFO]
> ------------------------------------------------------------------------
> ab@ubuntu12-64:~/git/whirr$ cd
> ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
> Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>     at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>     at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>     at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>     at org.apache.whirr.cli.Main.run(Main.java:69)
>     at org.apache.whirr.cli.Main.main(Main.java:102)
> Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>     at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>     at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>     at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>     at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>     ... 4 more
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>     at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>     at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>     at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>     ... 7 more
> Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>     at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>     at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>     at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:724)
>
> 2 instances were started - like i asked for - the 1st instance was
> terminated almost immediately, the 2nd was left running for a while -
> eventually, i hit ^c on the launch and terminated the 2nd instance from the
> aws console.
>
> i need some more coaching.
>
> ------------------------------
> From: Andrew Bayer <an...@gmail.com>
> To: user@whirr.apache.org; a b <au...@yahoo.com>
> Sent: Monday, August 5, 2013 5:54 PM
>
> Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Try not building as root - that can throw things off.
>
> A.
>
> On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>
> as for the java 7 problem - i found this suggestion:
>
>
> http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
> i tried to download whirr - as suggested here:
>
> https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
> o i did: git clone ...
> o i modified: core/src/main/resources/functions/...
> o i did: mvn eclipse:eclipse ...
> o i skipped: eclipse import
> o i ran: mvn install
>
> it fails in the "mvn install" during test
>
> Tests in error:
>   testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>   testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>   testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>   testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>   testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> [..]
>   testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>   testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>   testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>   testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>
> Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
> [INFO] ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
> [INFO] Whirr ............................................. SUCCESS [4.488s]
> [INFO] Apache Whirr Core ................................. FAILURE [17.113s]
> [INFO] Apache Whirr Cassandra ............................ SKIPPED
> [INFO] Apache Whirr Ganglia .............................. SKIPPED
> [INFO] Apache Whirr Hadoop ............................... SKIPPED
>
> is there a better way to try this suggestion?
>
> ------------------------------
> From: Andrew Bayer <an...@gmail.com>
>
> To: a b <au...@yahoo.com>
> Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
> Sent: Monday, August 5, 2013 4:48 PM
>
> Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Looks like it's the Oracle JDK7 download that's failing -
> http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't
> actually there. I don't know if they actually have a consistent link
> to get the JDK7 tarball regardless of version. I'd just use OpenJDK for
> now, if you can.
>
> A.
>
> On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
> Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
> Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]
> Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]
> Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]
> Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]
> Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]
> Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]
> Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]
> Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]
> Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]
> Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]
> Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]
> Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
> Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
> Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]
> Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]
> Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]
> Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]
> Fetched 26.1 MB in 21s (1,241 kB/s)
> Reading package lists...
> Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.
> , error=
> gzip: stdin: not in gzip format
> tar: Child returned status 1
> tar: Error is not recoverable: exiting now
> mv: cannot stat `jdk1*': No such file or directory
> update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.
>
> ------------------------------
> From: Andrew Bayer <an...@gmail.com>
> To: user@whirr.apache.org; a b <au...@yahoo.com>
> Sent: Monday, August 5, 2013 4:22 PM
>
> Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, Whirr does try to download the md5 file, but if it fails to find it,
> that's not a blocking error - it'll keep going anyway. What's after that in
> the logs?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>
> ok - i'm not sure what you are asking.
>
> whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>
> where these are the properties i think i changed or added from the
> original recipe:
>
> whirr.cluster-name=hadoop-ec2
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
> whirr.hardware-id=t1.micro
> whirr.image-id=us-east-1/ami-25d9a94c
> whirr.hadoop.version=1.2.1
> whirr.provider=aws-ec2
> whirr.identity=${env:AWS_ACCESS_KEY}
> whirr.credential=${env:AWS_SECRET_KEY}
> whirr.location-id=us-east-1
> whirr.java.install-function=install_oracle_jdk7
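
[Editor's note] The workaround that eventually unblocks this thread amounts to swapping the install function in this same recipe. A hedged fragment of what that change looks like (whether the stock function name is install_openjdk should be verified against the scripts under core/src/main/resources/functions/):

```properties
# use the bundled OpenJDK install script instead of the Oracle JDK 7 one
whirr.java.install-function=install_openjdk
```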
>
> ------------------------------
> From: Andrew Bayer <an...@gmail.com>
> To: user@whirr.apache.org; a b <au...@yahoo.com>
> Sent: Monday, August 5, 2013 4:11 PM
> Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, it looks like they've actually been doing .mds for a while. Where are
> you seeing this error?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>
> That looks like they broke the hadoop-1.2.1 release - the file should be
> .md5. I'd bug the Hadoop project about that.
>
> A.
>
>
> On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>
> i get a whirr error:
>
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>
> when i browse over to:
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
> i can see the file is named: hadoop-1.2.1.tar.gz.mds
>
> how do i tell whirr to use a different suffix?
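
[Editor's note] The .mds files on the Apache mirrors are multi-digest text files (several hashes recorded in one file), not the bare hash that a .md5 file carries, so they cannot be compared byte-for-byte the way Whirr compares a .md5. A minimal sketch of the kind of check involved, using a simplified one-line "NAME: MD5 = hex" layout (the real .mds layout wraps and uppercases the hex, so parsing a real mirror file needs more work than this):

```shell
# Build a tiny tarball stand-in and an .mds-style digest file for it,
# then verify the recorded MD5 against a freshly computed one.
printf 'hello\n' > sample.tar.gz
printf 'sample.tar.gz: MD5 = %s\n' "$(md5sum sample.tar.gz | awk '{print $1}')" > sample.tar.gz.mds

# extract the recorded digest and recompute it from the tarball
expected=$(sed -n 's/^sample\.tar\.gz: MD5 = //p' sample.tar.gz.mds)
actual=$(md5sum sample.tar.gz | awk '{print $1}')
[ "$expected" = "$actual" ] && echo "checksum OK"
```

Running it prints `checksum OK` when the digests agree.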
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
can you help me move forward?
o i can't use the 0.8 version - it doesn't install a headless version on a server.
o i can't modify the current version - unmodified, it gets a: java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: no private key configured
o maybe with some coaching - i'm a git beginner, i can check out a 0.8 version and modify that? or is there a better path?
________________________________
From: Andrew Bayer <an...@gmail.com>
To: a b <au...@yahoo.com>
Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
Sent: Tuesday, August 6, 2013 11:00 AM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Random weird luck is always real. =) If you'd like to open a JIRA for the Oracle JDK download issues, that'd be appreciated - I'll see what I can do with it.
A.
On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
so now i feel like an idiot, it is running now:
>
>ab@ubuntu12-64:~$ rm whirr.log
>ab@ubuntu12-64:~$ !1544
>
>~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
>Started cluster of 2 instances
>Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f, name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04,
description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185], publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode, hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b, nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2,
imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106], publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>
>
>You can log into instances using the following ssh commands:
>[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
>[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
>
>To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
>ab@ubuntu12-64:~$
>
>
>as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
>
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: user@whirr.apache.org; a b <au...@yahoo.com>
>
>Sent: Tuesday, August 6, 2013 10:35 AM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Do you have the whirr.log from that attempt?
>
>
>A.
>
>
>On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>
>i built whirr in my own directory - i didn't change it (yet) - i just check it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>>
>>
>>
>> 1530 git clone git://git.apache.org/whirr.git
>> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>> 1532 cd whirr/
>> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
>> 1534 mvn install
>> 1535 cd
>> 1536 rm whirr.log
>> 1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>> 1538 export MAVEN_OPTS=-Xmx200m
>> 1539 cd ~/git/whirr/
>> 1540 mvn install
>> 1541 export MAVEN_OPTS=-Xmx1G
>> 1542 mvn install
>> 1543 cd
>> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>> 1545 history
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
will this work for a jira report?
it looks like the install_openjdk:
function install_openjdk_deb() {
retry_apt_get update
retry_apt_get -y install openjdk-6-jdk
# Try to set JAVA_HOME in a number of commonly used locations
# Lifting JAVA_HOME detection from jclouds
only tries to install: openjdk-6-jdk
but i am running an ubuntu server which is missing the x11 libraries (libxt), i guess i need: openjdk-6-jre-headless
where do i file the report?
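
[Editor's note] The one-line change being proposed above can be sketched as a patched install function. This is an assumption about the fix, not committed Whirr code; retry_apt_get is normally supplied by Whirr's function library under core/src/main/resources/functions/, so a plain apt-get fallback keeps the sketch self-contained:

```shell
# Define a stand-in only when Whirr's retry_apt_get helper is absent,
# so this sketch can be sourced outside a Whirr bootstrap script.
if ! type retry_apt_get >/dev/null 2>&1; then
  function retry_apt_get() { apt-get "$@"; }
fi

# Hypothetical patched installer: request the headless JRE explicitly so a
# server image without the X11 libraries (libxt) still gets a working Java.
function install_openjdk_deb() {
  retry_apt_get update
  retry_apt_get -y install openjdk-6-jre-headless openjdk-6-jdk
}
```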
>>[INFO] Reactor Summary:
>>[INFO]
>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>>[INFO] Whirr ............................................. SUCCESS [4.662s]
>>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>>[INFO] Apache Whirr HBase
................................ SUCCESS [5.196s]
>>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>>[INFO] Apache Whirr Examples
............................. SUCCESS [4.494s]
>>[INFO] ------------------------------------------------------------------------
>>[INFO] BUILD SUCCESS
>>[INFO] ------------------------------------------------------------------------
>>[INFO] Total time: 3:43.355s
>>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>>[INFO] Final Memory: 109M/262M
>>[INFO] ------------------------------------------------------------------------
>>ab@ubuntu12-64:~/git/whirr$ cd
>>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>> at
org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
>> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
>> at org.apache.whirr.cli.Main.run(Main.java:69)
>> at org.apache.whirr.cli.Main.main(Main.java:102)
>>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
>> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
>> at
org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
>> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
>> ... 4 more
>>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
>> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
>> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
>> ... 7 more
>>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
>> at
org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
>> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
>> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> at java.lang.Thread.run(Thread.java:724)
>>
>>
>>
>>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>>
>>
>>i need some more coaching.
>>
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>Sent: Monday, August 5, 2013 5:54 PM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Try not building as root - that can throw things off.
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>>
>>as for the java 7 problem - i found this suggestion:
>>>
>>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>>
>>>
>>>i tried to download whirr - as suggested here:
>>>
>>>
>>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>>
>>>
>>>o i did: git clone ...
>>>
>>>o i modified: core/src/main/resources/functions/...
>>>o i did: mvn eclipse:eclipse ...
>>>o i skipped: eclipse import
>>>o i ran: mvn install
>>>
>>>
>>>it fails in the "mvn install" during test
>>>
>>>
>>>Tests in error:
>>>
testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>>[..]
>>> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>>> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>>>
>>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>>
>>>[INFO] ------------------------------------------------------------------------
>>>[INFO] Reactor Summary:
>>>[INFO]
>>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>>[INFO]
Apache Whirr Ganglia .............................. SKIPPED
>>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>>
>>>
>>>
>>>is there a better way to try this suggestion?
>>>
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <an...@gmail.com>
>>>
>>>To: a b <au...@yahoo.com>
>>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>>Sent: Monday, August 5, 2013 4:48 PM
>>>
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>
>>>
>>>
>>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>>>
>>>
>>>A.
>>>
>>>
>>>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>>>
>>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]^M
>>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]^M
>>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]^M
>>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]^M
>>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]^M
>>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]^M
>>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]^M
>>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]^M
>>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]^M
>>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]^M
>>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]^M
>>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]^M
>>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]^M
>>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]^M
>>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]^M
>>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]^M
>>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]^M
>>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]^M
>>>>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>>>>Reading package lists...^M
>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.^M
>>>>, error=^M
>>>>gzip: stdin: not in gzip format^M
>>>>tar: Child returned status 1^M
>>>>tar: Error is not recoverable: exiting now^M
>>>>mv: cannot stat `jdk1*': No such file or directory^M
>>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>>>>
>>>>
>>>>
>>>>
>>>>________________________________
>>>> From: Andrew Bayer <an...@gmail.com>
>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>Sent: Monday, August 5, 2013 4:22 PM
>>>>
>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>
>>>>
>>>>
>>>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
>>>>
>>>>
>>>>A.
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>>>
>>>>ok - i'm not sure what you are asking.
>>>>>
>>>>>whirr launch-cluster --config
~/whirr/recipes/hadoop.properties
>>>>>
>>>>>
>>>>>
>>>>>where these are the properties i think i changed or added from the original recipe:
>>>>>
>>>>>
>>>>>whirr.cluster-name=hadoop-ec2
>>>>>whirr.instance-templates=1
hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>>>whirr.hardware-id=t1.micro
>>>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>>>whirr.hadoop.version=1.2.1
>>>>>whirr.provider=aws-ec2
>>>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>>>whirr.location-id=us-east-1
>>>>>whirr.java.install-function=install_oracle_jdk7
>>>>>
>>>>>
>>>>>
>>>>>________________________________
>>>>> From: Andrew Bayer <an...@gmail.com>
>>>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>>>Sent: Monday, August 5, 2013 4:11 PM
>>>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>>>
>>>>>
>>>>>
>>>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>>>
>>>>>
>>>>>A.
>>>>>
>>>>>
>>>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>>>>
>>>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>>>
>>>>>>A.
>>>>>>
>>>>>>
>>>>>>
>>>>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>>>>>>
>>>>>>i get a whirr error:
>>>>>>>
>>>>>>>
>>>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>>>>>>>
>>>>>>>
>>>>>>>when i browse over to: http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
>>>>>>>i can see the file is named: hadoop-1.2.1.tar.gz.mds
>>>>>>>
>>>>>>>
>>>>>>>how do i tell whirr to use a different suffix?
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
Random weird luck is always real. =) If you'd like to open a JIRA for the
Oracle JDK download issues, that'd be appreciated - I'll see what I can do
with it.
A.
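[Editor's note: the fix the thread converges on is switching the Java install function away from the broken Oracle JDK 7 download. A minimal sketch of the property change, under the assumption that `install_openjdk` is the stock alternative shipped in Whirr's core functions (verify against core/src/main/resources/functions/); the other property names are the ones quoted later in the thread:]

```shell
# Write a minimal hadoop.properties fragment that selects OpenJDK
# instead of the broken Oracle JDK 7 download path.
cat > /tmp/hadoop.properties <<'EOF'
whirr.cluster-name=hadoop-ec2
whirr.hadoop.version=1.2.1
# swap install_oracle_jdk7 for the OpenJDK installer
whirr.java.install-function=install_openjdk
EOF
grep '^whirr.java.install-function' /tmp/hadoop.properties
```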
On Tue, Aug 6, 2013 at 10:59 AM, a b <au...@yahoo.com> wrote:
> so now i feel like an idiot, it is running now:
>
> ab@ubuntu12-64:~$ rm whirr.log
> ab@ubuntu12-64:~$ !1544
>
> ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
> Started cluster of 2 instances
> Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker],
> publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f,
> nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f,
> name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d,
> description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]},
> group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
> arch=paravirtual, version=12.04,
> description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
> is64Bit=true}, status=RUNNING[running], loginPort=22,
> hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185],
> publicAddresses=[54.224.175.65], hardware={id=t1.micro,
> providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630,
> volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true,
> durable=true}], hypervisor=xen,
> supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
> loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}},
> Instance{roles=[hadoop-namenode, hadoop-jobtracker],
> publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b,
> nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b,
> name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d,
> description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]},
> group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu,
> arch=paravirtual, version=12.04,
> description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624,
> is64Bit=true}, status=RUNNING[running], loginPort=22,
> hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106],
> publicAddresses=[54.227.189.132], hardware={id=t1.micro,
> providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630,
> volumes=[{id=vol-b95f76f9, type=SAN, device=/dev/sda1, bootDevice=true,
> durable=true}], hypervisor=xen,
> supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)},
> loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
>
>
> You can log into instances using the following ssh commands:
> [hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o
> "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no
> ab@54.224.175.65
> [hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o
> "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no
> ab@54.227.189.132
>
> To destroy cluster, run 'whirr destroy-cluster' with the same options used
> to launch it.
> ab@ubuntu12-64:~$
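[Editor's note: the ssh options Whirr prints above are worth decoding: they discard the ephemeral cloud node's host key instead of writing it to known_hosts, and skip the interactive unknown-key prompt. A sketch reassembling the same command, with the key path and address copied from the output above:]

```shell
key=/home/ab/.ssh/id_rsa   # key path and host copied from Whirr's output
host=54.227.189.132
# UserKnownHostsFile=/dev/null: forget the throwaway node's host key
# StrictHostKeyChecking=no:     don't prompt when the key is unknown
cmd="ssh -i $key -o \"UserKnownHostsFile /dev/null\" -o StrictHostKeyChecking=no ab@$host"
echo "$cmd"
```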
>
> as you know, i did change the properties file this morning to download the
> openjdk - i don't know if that is a difference or not. let me check if java
> is installed and hadoop is running - i think i should have gotten an error,
> since i didn't update the oracle java 7 script.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Tuesday, August 6, 2013 10:35 AM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Do you have the whirr.log from that attempt?
>
> A.
>
> On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>
> i built whirr in my own directory - i didn't change it (yet) - i just
> checked it out and tried to compile - you can see i had some memory issues
> that i didn't notice right away:

>
> 1530 git clone git://git.apache.org/whirr.git
> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1532 cd whirr/
> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1534 mvn install
> 1535 cd
> 1536 rm whirr.log
> 1537 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1538 export MAVEN_OPTS=-Xmx200m
> 1539 cd ~/git/whirr/
> 1540 mvn install
> 1541 export MAVEN_OPTS=-Xmx1G
> 1542 mvn install
> 1543 cd
> 1544 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1545 history
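[Editor's note: history entries 1538-1542 above are the memory fix: the first build ran with a 200 MB Maven heap and only succeeded after raising it. A minimal sketch, where 1G is simply the value used in the thread:]

```shell
# Give the Maven JVM enough heap before rebuilding Whirr; the earlier
# MAVEN_OPTS=-Xmx200m setting was too small for the multi-module build.
export MAVEN_OPTS=-Xmx1G
echo "MAVEN_OPTS=$MAVEN_OPTS"
# mvn install   # re-run the build with the larger heap
```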
>
> this is the console at the end of "mvn install"
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
> [INFO] Whirr ............................................. SUCCESS [4.662s]
> [INFO] Apache Whirr Core ................................. SUCCESS
> [1:43.913s]
> [INFO] Apache Whirr Cassandra ............................ SUCCESS
> [10.705s]
> [INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
> [INFO] Apache Whirr Hadoop ............................... SUCCESS
> [11.294s]
> [INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
> [INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
> [INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
> [INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
> [INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
> [INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
> [INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
> [INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
> [INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
> [INFO] Apache Whirr Chef ................................. SUCCESS
> [12.153s]
> [INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
> [INFO] Apache Whirr Kerberos ............................. SUCCESS
> [12.207s]
> [INFO] Apache Whirr CLI .................................. SUCCESS
> [14.940s]
> [INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 3:43.355s
> [INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
> [INFO] Final Memory: 109M/262M
> [INFO]
> ------------------------------------------------------------------------
> ab@ubuntu12-64:~/git/whirr$ cd
> ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
> Exception in thread "main" java.lang.RuntimeException:
> java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: Too many instance failed while bootstrapping! 0
> successfully started instances while 0 instances failed
> at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
> at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
> at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
> at org.apache.whirr.cli.Main.run(Main.java:69)
> at org.apache.whirr.cli.Main.main(Main.java:102)
> Caused by: java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: Too many instance failed while bootstrapping! 0
> successfully started instances while 0 instances failed
> at
> org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
> at
> org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
> at
> org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
> at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
> ... 4 more
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException:
> Too many instance failed while bootstrapping! 0 successfully started
> instances while 0 instances failed
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> at
> org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
> ... 7 more
> Caused by: java.io.IOException: Too many instance failed while
> bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
>
> 2 instances were started - like i asked for - the 1st instance was
> terminated almost immediately, the 2nd was left running for a while -
> eventually, i hit ^c on the launch and terminated the 2nd instance from the
> aws console.
>
> i need some more coaching.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 5:54 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Try not building as root - that can throw things off.
>
> A.
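[Editor's note: the test failures quoted in this thread all carry the same precondition string, "cluster-user != root or do not run as root", so the guard can be expressed directly. In this sketch `build_user` is a hypothetical stand-in; in practice you would use `$(whoami)`:]

```shell
# Mirror Whirr's test precondition: refuse the build when the
# effective user would be root.
build_user=ab   # hypothetical; in practice use "$(whoami)"
if [ "$build_user" = root ]; then
  msg="refusing: run mvn install as a non-root user"
else
  msg="ok to build as $build_user"
fi
echo "$msg"
```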
>
> On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>
> as for the java 7 problem - i found this suggestion:
>
>
> http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
> i tried to download whirr - as suggested here:
>
> https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
> o i did: git clone ...
> o i modified: core/src/main/resources/functions/...
> o i did: mvn eclipse:eclipse ...
> o i skipped: eclipse import
> o i ran: mvn install
>
> it fails in the "mvn install" during test
>
> Tests in error:
>
> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
> [..]
>
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest):
> cluster-user != root or do not run as root
>
> Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
> [INFO] Whirr ............................................. SUCCESS [4.488s]
> [INFO] Apache Whirr Core ................................. FAILURE
> [17.113s]
> [INFO] Apache Whirr Cassandra ............................ SKIPPED
> [INFO] Apache Whirr Ganglia .............................. SKIPPED
> [INFO] Apache Whirr Hadoop ............................... SKIPPED
>
> is there a better way to try this suggestion?
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
>
> *To:* a b <au...@yahoo.com>
> *Cc:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Monday, August 5, 2013 4:48 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Looks like it's the Oracle JDK7 download that's failing -
> http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz
> isn't actually there. I don't know if they actually have a consistent
> link to get the JDK7 tarball regardless of version. I'd just use OpenJDK
> for now, if you can.
>
> A.
>
> On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
> Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64
> Packages [1,273 kB]^M
> Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64
> Packages [4,786 kB]^M
> Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages
> [1,274 kB]^M
> Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386
> Packages [4,796 kB]^M
> Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main
> TranslationIndex [3,706 B]^M
> Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> TranslationIndex [2,922 B]^M
> Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Sources [412 kB]^M
> Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Sources [93.1 kB]^M
> Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64
> Packages [672 kB]^M
> Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> amd64 Packages [210 kB]^M
> Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386
> Packages [692 kB]^M
> Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> i386 Packages [214 kB]^M
> Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> TranslationIndex [3,564 B]^M
> Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> TranslationIndex [2,850 B]^M
> Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main
> Translation-en [726 kB]^M
> Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> Translation-en [3,341 kB]^M
> Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Translation-en [298 kB]^M
> Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Translation-en [123 kB]^M
> Fetched 26.1 MB in 21s (1,241 kB/s)^M
> Reading package lists...^M
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5.
> Continuing.^M
> , error=^M
> gzip: stdin: not in gzip format^M
> tar: Child returned status 1^M
> tar: Error is not recoverable: exiting now^M
> mv: cannot stat `jdk1*': No such file or directory^M
> update-alternatives: error: alternative path
> /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:22 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, Whirr does try to download the md5 file, but if it fails to find it,
> that's not a blocking error - it'll keep going anyway. What's after that in
> the logs?
>
> A.
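[Editor's note: the tolerant behavior described above - try the .md5, keep going if it's missing - can be sketched like this. It is an illustration over a local mirror directory, not Whirr's actual download script:]

```shell
# Look for a tarball checksum under either suffix the mirrors have
# used; warn and continue (exit 0) when neither exists, as Whirr does.
find_checksum() {
  for suffix in md5 mds; do
    if [ -f "$1.$suffix" ]; then
      cat "$1.$suffix"
      return 0
    fi
  done
  echo "no checksum for $1, continuing" >&2
  return 0
}
```

Called against a mirror that only ships hadoop-1.2.1.tar.gz.mds, this falls through to the second suffix instead of failing, which is the behavior the original question was after.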
>
> On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>
> ok - i'm not sure what you are asking.
>
> whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>
> where these are the properties i think i changed or added from the
> original recipe:
>
> whirr.cluster-name=hadoop-ec2
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1
> hadoop-datanode+hadoop-tasktracker
> whirr.hardware-id=t1.micro
> whirr.image-id=us-east-1/ami-25d9a94c
> whirr.hadoop.version=1.2.1
> whirr.provider=aws-ec2
> whirr.identity=${env:AWS_ACCESS_KEY}
> whirr.credential=${env:AWS_SECRET_KEY}
> whirr.location-id=us-east-1
> whirr.java.install-function=install_oracle_jdk7
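[Editor's note: since whirr.identity and whirr.credential above are ${env:...} references, the credentials come from the launching shell's environment. A quick pre-flight check that they are actually exported can save a failed launch; `check_env` is a hypothetical helper, not part of Whirr:]

```shell
# Report which of the variables the properties file interpolates
# are missing from the environment.
check_env() {
  for var in "$@"; do
    if [ -z "$(printenv "$var")" ]; then
      echo "missing $var"
    fi
  done
}
check_env AWS_ACCESS_KEY AWS_SECRET_KEY
```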
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:11 PM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, it looks like they've actually been doing .mds for a while. Where are
> you seeing this error?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>
> That looks like they broke the hadoop-1.2.1 release - the file should be
> .md5. I'd bug the Hadoop project about that.
>
> A.
>
>
> On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>
> i get a whirr error:
>
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>
> when i browse over to:
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
> i can see the file is named: hadoop-1.2.1.tar.gz.mds
>
> how do i tell whirr to use a different suffix?
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
so now i feel like an idiot, it is running now:
ab@ubuntu12-64:~$ rm whirr.log
ab@ubuntu12-64:~$ !1544
~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXX
Started cluster of 2 instances
Cluster{instances=[Instance{roles=[hadoop-datanode, hadoop-tasktracker], publicIp=54.224.175.65, privateIp=10.179.37.185, id=us-east-1/i-5fc0083f, nodeMetadata={id=us-east-1/i-5fc0083f, providerId=i-5fc0083f, name=hadoop-ec2-5fc0083f, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-179-37-185, privateAddresses=[10.179.37.185], publicAddresses=[54.224.175.65], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b85f76f8, type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen,
supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-5fc0083f}}}, Instance{roles=[hadoop-namenode, hadoop-jobtracker], publicIp=54.227.189.132, privateIp=10.10.141.106, id=us-east-1/i-69eede0b, nodeMetadata={id=us-east-1/i-69eede0b, providerId=i-69eede0b, name=hadoop-ec2-69eede0b, location={scope=ZONE, id=us-east-1d, description=us-east-1d, parent=us-east-1, iso3166Codes=[US-VA]}, group=hadoop-ec2, imageId=us-east-1/ami-23d9a94a, os={family=ubuntu, arch=paravirtual, version=12.04, description=099720109477/ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20130624, is64Bit=true}, status=RUNNING[running], loginPort=22, hostname=ip-10-10-141-106, privateAddresses=[10.10.141.106], publicAddresses=[54.227.189.132], hardware={id=t1.micro, providerId=t1.micro, processors=[{cores=1.0, speed=1.0}], ram=630, volumes=[{id=vol-b95f76f9,
type=SAN, device=/dev/sda1, bootDevice=true, durable=true}], hypervisor=xen, supportsImage=And(requiresRootDeviceType(ebs),Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,ALWAYS_TRUE)}, loginUser=ubuntu, userMetadata={Name=hadoop-ec2-69eede0b}}}]}
You can log into instances using the following ssh commands:
[hadoop-datanode+hadoop-tasktracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.224.175.65
[hadoop-namenode+hadoop-jobtracker]: ssh -i /home/ab/.ssh/id_rsa -o "UserKnownHostsFile /dev/null" -o StrictHostKeyChecking=no ab@54.227.189.132
To destroy cluster, run 'whirr destroy-cluster' with the same options used to launch it.
ab@ubuntu12-64:~$
as you know, i did change the properties file this morning to download the openjdk - i don't know if that is a difference or not. let me check if java is installed and hadoop is running - i think i should have gotten an error, since i didn't update the oracle java 7 script.
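A minimal sketch of that kind of check, assuming only a POSIX shell on the node - run it after ssh'ing in:

```shell
# report whether java and hadoop resolve on the PATH, and where
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: $(command -v "$1")"
  else
    echo "$1: not found"
  fi
}
check_cmd java
check_cmd hadoop
```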
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Tuesday, August 6, 2013 10:35 AM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Do you have the whirr.log from that attempt?
A.
On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
>
>
>
> 1530 git clone git://git.apache.org/whirr.git
> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1532 cd whirr/
> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1534 mvn install
> 1535 cd
> 1536 rm whirr.log
> 1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
> 1538 export MAVEN_OPTS=-Xmx200m
> 1539 cd ~/git/whirr/
> 1540 mvn install
> 1541 export MAVEN_OPTS=-Xmx1G
> 1542 mvn install
> 1543 cd
> 1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
> 1545 history
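The memory fix buried in that history, pulled out on its own - Maven reads its JVM heap settings from MAVEN_OPTS, and the 200m value was too small for this build:

```shell
# Maven picks up JVM flags from MAVEN_OPTS; -Xmx200m was too small
# for the Whirr build, -Xmx1G let "mvn install" finish above
export MAVEN_OPTS=-Xmx1G
echo "MAVEN_OPTS is $MAVEN_OPTS"
# then: cd ~/git/whirr && mvn install
```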
>
>
>
>this is the console at the end of "mvn install"
>
>
>[INFO] ------------------------------------------------------------------------
>[INFO] Reactor Summary:
>[INFO]
>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
>[INFO] Whirr ............................................. SUCCESS [4.662s]
>[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
>[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
>[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
>[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
>[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
>[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
>[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
>[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
>[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
>[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
>[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
>[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
>[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
>[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
>[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
>[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
>[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
>[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
>[INFO] ------------------------------------------------------------------------
>[INFO] BUILD SUCCESS
>[INFO] ------------------------------------------------------------------------
>[INFO] Total time: 3:43.355s
>[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
>[INFO] Final Memory: 109M/262M
>[INFO] ------------------------------------------------------------------------
>ab@ubuntu12-64:~/git/whirr$ cd
>ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
>Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
> at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
> at org.apache.whirr.cli.Main.run(Main.java:69)
> at org.apache.whirr.cli.Main.main(Main.java:102)
>Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
> at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
> at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
> at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
> ... 4 more
>Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
> ... 7 more
>Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
>
>
>
>2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
>
>
>i need some more coaching.
>
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>To: user@whirr.apache.org; a b <au...@yahoo.com>
>Sent: Monday, August 5, 2013 5:54 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Try not building as root - that can throw things off.
>
>
>A.
>
>
>On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>
>as for the java 7 problem - i found this suggestion:
>>
>>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>>
>>
>>i tried to download whirr - as suggested here:
>>
>>
>>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>>
>>
>>o i did: git clone ...
>>
>>o i modified: core/src/main/resources/functions/...
>>o i did: mvn eclipse:eclipse ...
>>o i skipped: eclipse import
>>o i ran: mvn install
>>
>>
>>it fails in the "mvn install" during test
>>
>>
>>Tests in error:
>>
testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>>[..]
>> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
>> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>>
>>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>>
>>[INFO] ------------------------------------------------------------------------
>>[INFO] Reactor Summary:
>>[INFO]
>>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>>[INFO] Whirr ............................................. SUCCESS [4.488s]
>>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>>
>>
>>
>>is there a better way to try this suggestion?
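The failing tests are deliberately guarded against running as root ("cluster-user != root or do not run as root"). A sketch of the two ways past that: build as a regular user, or skip the tests with Maven's standard -DskipTests flag:

```shell
# the Whirr core tests refuse to run as root - check the current
# uid before building and suggest the appropriate mvn invocation
root_hint() {
  if [ "$(id -u)" -eq 0 ]; then
    echo "running as root: rebuild as a normal user, or run: mvn install -DskipTests"
  else
    echo "not root: mvn install should run the tests"
  fi
}
root_hint
```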
>>
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>
>>To: a b <au...@yahoo.com>
>>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>>Sent: Monday, August 5, 2013 4:48 PM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>>
>>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
>>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]
>>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]
>>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]
>>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]
>>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]
>>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]
>>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]
>>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]
>>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]
>>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]
>>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]
>>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]
>>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]
>>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]
>>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]
>>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]
>>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]
>>>Fetched 26.1 MB in 21s (1,241 kB/s)
>>>Reading package lists...
>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.
>>>, error=
>>>gzip: stdin: not in gzip format
>>>tar: Child returned status 1
>>>tar: Error is not recoverable: exiting now
>>>mv: cannot stat `jdk1*': No such file or directory
>>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.
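The `gzip: stdin: not in gzip format` line is the real clue: whatever the Oracle URL returned was not a gzip stream at all (most likely an HTML page). A sketch of a pre-extraction sanity check - gzip data always begins with the magic bytes 1f 8b; the file path below is made up for the demo:

```shell
# return 0 if the file starts with the gzip magic bytes (1f 8b)
is_gzip() {
  magic=$(od -An -tx1 -N2 "$1" | tr -d ' ')
  [ "$magic" = "1f8b" ]
}

# demo with a fake "tarball" that is actually text, like the
# failed Oracle download
printf '<html>license page</html>' > /tmp/jdk-7-linux-x64.tar.gz
if is_gzip /tmp/jdk-7-linux-x64.tar.gz; then
  echo "looks like gzip - safe to extract"
else
  echo "not gzip - refusing to extract"
fi
```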
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
i sent the whirr.log - it has an exception.
________________________________
From: Andrew Bayer <an...@gmail.com>
To: a b <au...@yahoo.com>
Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
Sent: Tuesday, August 6, 2013 10:51 AM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
You can send it to me directly if you want.
On the openjdk thing - I just don't get why openjdk-6-jdk isn't going to install its dependencies. That's normal apt-get behavior.
A.
On Tue, Aug 6, 2013 at 10:42 AM, a b <au...@yahoo.com> wrote:
no - but i can recreate it - it is probably too large to post - will it be obvious what to extract?
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
You can send it to me directly if you want.
On the openjdk thing - I just don't get why openjdk-6-jdk isn't going to
install its dependencies. That's normal apt-get behavior.
A.
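(For readers landing here from a search: Andrew's earlier advice to fall back to OpenJDK is a one-line change to the recipe. The function name below is taken from Whirr's bundled install functions — confirm it exists under core/src/main/resources/functions/ in your Whirr checkout before relying on it:)

```
whirr.java.install-function=install_openjdk
```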
On Tue, Aug 6, 2013 at 10:42 AM, a b <au...@yahoo.com> wrote:
> no - but i can recreate it - it is probably too large to post - will it be
> obvious what to extract?
>
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Tuesday, August 6, 2013 10:35 AM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Do you have the whirr.log from that attempt?
>
> A.
>
> On Mon, Aug 5, 2013 at 8:34 PM, a b <au...@yahoo.com> wrote:
>
> i built whirr in my own directory - i didn't change it (yet) - i just
> check it out and tried to compile - you can see i had some memory issues
> that i didn't notice right away:
>
> 1530 git clone git://git.apache.org/whirr.git
> 1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1532 cd whirr/
> 1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
> 1534 mvn install
> 1535 cd
> 1536 rm whirr.log
> 1537 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1538 export MAVEN_OPTS=-Xmx200m
> 1539 cd ~/git/whirr/
> 1540 mvn install
> 1541 export MAVEN_OPTS=-Xmx1G
> 1542 mvn install
> 1543 cd
> 1544 ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> 1545 history
>
> this is the console at the end of "mvn install"
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
> [INFO] Whirr ............................................. SUCCESS [4.662s]
> [INFO] Apache Whirr Core ................................. SUCCESS
> [1:43.913s]
> [INFO] Apache Whirr Cassandra ............................ SUCCESS
> [10.705s]
> [INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
> [INFO] Apache Whirr Hadoop ............................... SUCCESS
> [11.294s]
> [INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
> [INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
> [INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
> [INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
> [INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
> [INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
> [INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
> [INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
> [INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
> [INFO] Apache Whirr Chef ................................. SUCCESS
> [12.153s]
> [INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
> [INFO] Apache Whirr Kerberos ............................. SUCCESS
> [12.207s]
> [INFO] Apache Whirr CLI .................................. SUCCESS
> [14.940s]
> [INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
> [INFO]
> ------------------------------------------------------------------------
> [INFO] BUILD SUCCESS
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Total time: 3:43.355s
> [INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
> [INFO] Final Memory: 109M/262M
> [INFO]
> ------------------------------------------------------------------------
> ab@ubuntu12-64:~/git/whirr$ cd
> ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config
> ~/whirr/recipes/hadoop.properties
> Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
> Exception in thread "main" java.lang.RuntimeException:
> java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: Too many instance failed while bootstrapping! 0
> successfully started instances while 0 instances failed
> at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
> at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
> at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
> at org.apache.whirr.cli.Main.run(Main.java:69)
> at org.apache.whirr.cli.Main.main(Main.java:102)
> Caused by: java.io.IOException: java.util.concurrent.ExecutionException:
> java.io.IOException: Too many instance failed while bootstrapping! 0
> successfully started instances while 0 instances failed
> at
> org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
> at
> org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
> at
> org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
> at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
> ... 4 more
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException:
> Too many instance failed while bootstrapping! 0 successfully started
> instances while 0 instances failed
> at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
> at java.util.concurrent.FutureTask.get(FutureTask.java:111)
> at
> org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
> ... 7 more
> Caused by: java.io.IOException: Too many instance failed while
> bootstrapping! 0 successfully started instances while 0 instances failed
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
> at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
> at java.util.concurrent.FutureTask.run(FutureTask.java:166)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:724)
>
> 2 instances were started - like i asked for - the 1st instance was
> terminated almost immediately, the 2nd was left running for a while -
> eventually, i hit ^c on the launch and terminated the 2nd instance from the
> aws console.
>
> i need some more coaching.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 5:54 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Try not building as root - that can throw things off.
>
> A.
>
> On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>
> as for the java 7 problem - i found this suggestion:
>
>
> http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
> i tried to download whirr - as suggested here:
>
> https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
> o i did: git clone ...
> o i modified: core/src/main/resources/functions/...
> o i did: mvn eclipse:eclipse ...
> o i skipped: eclipse import
> o i ran: mvn install
>
> it fails in the "mvn install" during test
>
> Tests in error:
>
> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
> [..]
>
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest):
> cluster-user != root or do not run as root
>
> Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
> [INFO] Whirr ............................................. SUCCESS [4.488s]
> [INFO] Apache Whirr Core ................................. FAILURE
> [17.113s]
> [INFO] Apache Whirr Cassandra ............................ SKIPPED
> [INFO] Apache Whirr Ganglia .............................. SKIPPED
> [INFO] Apache Whirr Hadoop ............................... SKIPPED
>
> is there a better way to try this suggestion?
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
>
> *To:* a b <au...@yahoo.com>
> *Cc:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Monday, August 5, 2013 4:48 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Looks like it's the Oracle JDK7 download that's failing -
> http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't
> actually there. I don't know if they actually have a consistent link to get
> the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if
> you can.
>
> A.
>
> On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
> Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64
> Packages [1,273 kB]^M
> Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64
> Packages [4,786 kB]^M
> Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages
> [1,274 kB]^M
> Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386
> Packages [4,796 kB]^M
> Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main
> TranslationIndex [3,706 B]^M
> Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> TranslationIndex [2,922 B]^M
> Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Sources [412 kB]^M
> Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Sources [93.1 kB]^M
> Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64
> Packages [672 kB]^M
> Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> amd64 Packages [210 kB]^M
> Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386
> Packages [692 kB]^M
> Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> i386 Packages [214 kB]^M
> Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> TranslationIndex [3,564 B]^M
> Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> TranslationIndex [2,850 B]^M
> Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main
> Translation-en [726 kB]^M
> Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> Translation-en [3,341 kB]^M
> Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Translation-en [298 kB]^M
> Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Translation-en [123 kB]^M
> Fetched 26.1 MB in 21s (1,241 kB/s)^M
> Reading package lists...^M
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5.
> Continuing.^M
> , error=^M
> gzip: stdin: not in gzip format^M
> tar: Child returned status 1^M
> tar: Error is not recoverable: exiting now^M
> mv: cannot stat `jdk1*': No such file or directory^M
> update-alternatives: error: alternative path
> /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:22 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, Whirr does try to download the md5 file, but if it fails to find it,
> that's not a blocking error - it'll keep going anyway. What's after that in
> the logs?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>
> ok - i'm not sure what you are asking.
>
> whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>
> where these are the properties i think i changed or added from the
> original recipe:
>
> whirr.cluster-name=hadoop-ec2
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1
> hadoop-datanode+hadoop-tasktracker
> whirr.hardware-id=t1.micro
> whirr.image-id=us-east-1/ami-25d9a94c
> whirr.hadoop.version=1.2.1
> whirr.provider=aws-ec2
> whirr.identity=${env:AWS_ACCESS_KEY}
> whirr.credential=${env:AWS_SECRET_KEY}
> whirr.location-id=us-east-1
> whirr.java.install-function=install_oracle_jdk7
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:11 PM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, it looks like they've actually been doing .mds for a while. Where are
> you seeing this error?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>
> That looks like they broke the hadoop-1.2.1 release - the file should be
> .md5. I'd bug the Hadoop project about that.
>
> A.
>
>
> On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>
> i get a whirr error:
>
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>
> when i browse over to:
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
> i can see the file is named: hadoop-1.2.1.tar.gz.mds
>
> how do i tell whirr to use a different suffix?
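On the original question: the Apache mirrors publish a multi-digest .mds file rather than a bare .md5, but the MD5 value is still recoverable from it. A minimal sketch (the digest shown is made up; the line layout follows the usual one-labeled-digest-per-line .mds format):

```shell
# Hypothetical excerpt of an Apache .mds multi-digest file.
cat > sample.mds <<'EOF'
hadoop-1.2.1.tar.gz:    MD5 = 12 34 56 78 9A BC DE F0  12 34 56 78 9A BC DE F0
hadoop-1.2.1.tar.gz:   SHA1 = 0F 1E 2D 3C 4B 5A 69 78  87 96 A5 B4 C3 D2 E1 F0
EOF

# Keep the MD5 line, drop everything through '=', squeeze out the spaces,
# and lowercase -- yielding the bare hex string a plain .md5 file holds.
grep 'MD5' sample.mds | sed 's/.*= *//' | tr -d ' ' | tr 'A-F' 'a-f'
```

The normalized string can then be compared against the md5sum of the downloaded tarball.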
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
no - but i can recreate it - it is probably too large to post - will it be obvious what to extract?
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
Do you have the whirr.log from that attempt?
A.
> at java.lang.Thread.run(Thread.java:724)
>
> 2 instances were started - like i asked for - the 1st instance was
> terminated almost immediately, the 2nd was left running for a while -
> eventually, i hit ^c on the launch and terminated the 2nd instance from the
> aws console.
>
> i need some more coaching.
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 5:54 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Try not building as root - that can throw things off.
>
> A.
>
> On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
>
> as for the java 7 problem - i found this suggestion:
>
>
> http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
> i tried to download whirr - as suggested here:
>
> https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
> o i did: git clone ...
> o i modified: core/src/main/resources/functions/...
> o i did: mvn eclipse:eclipse ...
> o i skipped: eclipse import
> o i ran: mvn install
>
> it fails in the "mvn install" during test
>
> Tests in error:
>
> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
> [..]
>
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest):
> cluster-user != root or do not run as root
>
> Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
> [INFO] Whirr ............................................. SUCCESS [4.488s]
> [INFO] Apache Whirr Core ................................. FAILURE [17.113s]
> [INFO] Apache Whirr Cassandra ............................ SKIPPED
> [INFO] Apache Whirr Ganglia .............................. SKIPPED
> [INFO] Apache Whirr Hadoop ............................... SKIPPED
>
> is there a better way to try this suggestion?
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
>
> *To:* a b <au...@yahoo.com>
> *Cc:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Monday, August 5, 2013 4:48 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Looks like it's the Oracle JDK7 download that's failing -
> http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link
> to get the JDK7 tarball regardless of version. I'd just use OpenJDK for
> now, if you can.
>
> A.
>
> On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
> Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64
> Packages [1,273 kB]^M
> Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64
> Packages [4,786 kB]^M
> Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages
> [1,274 kB]^M
> Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386
> Packages [4,796 kB]^M
> Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main
> TranslationIndex [3,706 B]^M
> Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> TranslationIndex [2,922 B]^M
> Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Sources [412 kB]^M
> Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Sources [93.1 kB]^M
> Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64
> Packages [672 kB]^M
> Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> amd64 Packages [210 kB]^M
> Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386
> Packages [692 kB]^M
> Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> i386 Packages [214 kB]^M
> Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> TranslationIndex [3,564 B]^M
> Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> TranslationIndex [2,850 B]^M
> Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main
> Translation-en [726 kB]^M
> Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> Translation-en [3,341 kB]^M
> Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Translation-en [298 kB]^M
> Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Translation-en [123 kB]^M
> Fetched 26.1 MB in 21s (1,241 kB/s)^M
> Reading package lists...^M
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5.
> Continuing.^M
> , error=^M
> gzip: stdin: not in gzip format^M
> tar: Child returned status 1^M
> tar: Error is not recoverable: exiting now^M
> mv: cannot stat `jdk1*': No such file or directory^M
> update-alternatives: error: alternative path
> /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:22 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, Whirr does try to download the md5 file, but if it fails to find it,
> that's not a blocking error - it'll keep going anyway. What's after that in
> the logs?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>
> ok - i'm not sure what you are asking.
>
> *whirr launch-cluster --config ~/whirr/recipes/hadoop.properties*
>
> where these are the properties i think i changed or added from the
> original recipe:
>
> *whirr.cluster-name=hadoop-ec2*
> *whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1
> hadoop-datanode+hadoop-tasktracker*
> *whirr.hardware-id=t1.micro*
> *whirr.image-id=us-east-1/ami-25d9a94c*
> *whirr.hadoop.version=1.2.1*
> *whirr.provider=aws-ec2*
> *whirr.identity=${env:AWS_ACCESS_KEY}*
> *whirr.credential=${env:AWS_SECRET_KEY}*
> *whirr.location-id=us-east-1*
> *whirr.java.install-function=install_oracle_jdk7*
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:11 PM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, it looks like they've actually been doing .mds for a while. Where are
> you seeing this error?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>
> That looks like they broke the hadoop-1.2.1 release - the file should be
> .md5. I'd bug the Hadoop project about that.
>
> A.
>
>
> On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>
> i get a whirr error:
>
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.
> md5
>
> when i browse over to:
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
> i can see the file is named: hadoop-1.2.1.tar.gz.mds
>
> how do i tell whirr to use a different suffix?
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
i built whirr in my own directory - i didn't change it (yet) - i just checked it out and tried to compile - you can see i had some memory issues that i didn't notice right away:
1530 git clone git://git.apache.org/whirr.git
1531 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
1532 cd whirr/
1533 mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs
1534 mvn install
1535 cd
1536 rm whirr.log
1537 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
1538 export MAVEN_OPTS=-Xmx200m
1539 cd ~/git/whirr/
1540 mvn install
1541 export MAVEN_OPTS=-Xmx1G
1542 mvn install
1543 cd
1544 ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
1545 history
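The memory fix buried in that history can be isolated as follows. This is a sketch: MAVEN_OPTS sets the options for the JVM that runs Maven itself, and -Xmx1G is simply the value that worked here, not a recommendation.

```shell
# MAVEN_OPTS is read by the mvn launcher and passed to Maven's own JVM,
# so an undersized heap (e.g. the earlier -Xmx200m) can fail the build.
export MAVEN_OPTS=-Xmx1G
echo "$MAVEN_OPTS"   # prints -Xmx1G
# then re-run the build from the whirr checkout: mvn install
```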
this is the console at the end of "mvn install"
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.519s]
[INFO] Whirr ............................................. SUCCESS [4.662s]
[INFO] Apache Whirr Core ................................. SUCCESS [1:43.913s]
[INFO] Apache Whirr Cassandra ............................ SUCCESS [10.705s]
[INFO] Apache Whirr Ganglia .............................. SUCCESS [3.683s]
[INFO] Apache Whirr Hadoop ............................... SUCCESS [11.294s]
[INFO] Apache Whirr ZooKeeper ............................ SUCCESS [4.477s]
[INFO] Apache Whirr HBase ................................ SUCCESS [5.196s]
[INFO] Apache Whirr YARN ................................. SUCCESS [4.867s]
[INFO] Apache Whirr CDH .................................. SUCCESS [5.552s]
[INFO] Apache Whirr Mahout ............................... SUCCESS [2.452s]
[INFO] Apache Whirr Pig .................................. SUCCESS [2.393s]
[INFO] Apache Whirr ElasticSearch ........................ SUCCESS [4.145s]
[INFO] Apache Whirr Hama ................................. SUCCESS [2.841s]
[INFO] Apache Whirr Puppet ............................... SUCCESS [3.242s]
[INFO] Apache Whirr Chef ................................. SUCCESS [12.153s]
[INFO] Apache Whirr Solr ................................. SUCCESS [4.997s]
[INFO] Apache Whirr Kerberos ............................. SUCCESS [12.207s]
[INFO] Apache Whirr CLI .................................. SUCCESS [14.940s]
[INFO] Apache Whirr Examples ............................. SUCCESS [4.494s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3:43.355s
[INFO] Finished at: Mon Aug 05 20:11:58 PDT 2013
[INFO] Final Memory: 109M/262M
[INFO] ------------------------------------------------------------------------
ab@ubuntu12-64:~/git/whirr$ cd
ab@ubuntu12-64:~$ ~/git/whirr/bin/whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
Running on provider aws-ec2 using identity XXXXXXXXXXXXXXXXXXX
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:128)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:69)
at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:59)
at org.apache.whirr.cli.Main.run(Main.java:69)
at org.apache.whirr.cli.Main.main(Main.java:102)
Caused by: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:125)
at org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:131)
at org.apache.whirr.ClusterController.bootstrapCluster(ClusterController.java:137)
at org.apache.whirr.ClusterController.launchCluster(ClusterController.java:113)
... 4 more
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:252)
at java.util.concurrent.FutureTask.get(FutureTask.java:111)
at org.apache.whirr.actions.BootstrapClusterAction.doAction(BootstrapClusterAction.java:120)
... 7 more
Caused by: java.io.IOException: Too many instance failed while bootstrapping! 0 successfully started instances while 0 instances failed
at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:93)
at org.apache.whirr.compute.StartupProcess.call(StartupProcess.java:41)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
2 instances were started - like i asked for - the 1st instance was terminated almost immediately, the 2nd was left running for a while - eventually, i hit ^c on the launch and terminated the 2nd instance from the aws console.
i need some more coaching.
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Monday, August 5, 2013 5:54 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Try not building as root - that can throw things off.
A.
On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
as for the java 7 problem - i found this suggestion:
>
>http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
>
>i tried to download whirr - as suggested here:
>
>
>https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
>
>o i did: git clone ...
>
>o i modified: core/src/main/resources/functions/...
>o i did: mvn eclipse:eclipse ...
>o i skipped: eclipse import
>o i ran: mvn install
>
>
>it fails in the "mvn install" during test
>
>
>Tests in error:
>
> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
>[..]
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
>
>Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
>[INFO] ------------------------------------------------------------------------
>[INFO] Reactor Summary:
>[INFO]
>[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
>[INFO] Whirr ............................................. SUCCESS [4.488s]
>[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
>[INFO] Apache Whirr Cassandra ............................ SKIPPED
>[INFO] Apache Whirr Ganglia .............................. SKIPPED
>[INFO] Apache Whirr Hadoop ............................... SKIPPED
>
>
>
>is there a better way to try this suggestion?
>
>
>
>
>________________________________
> From: Andrew Bayer <an...@gmail.com>
>
>To: a b <au...@yahoo.com>
>Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
>Sent: Monday, August 5, 2013 4:48 PM
>
>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
>
>
>Looks like it's the Oracle JDK7 download that's failing - http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link to get the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if you can.
>
>
>A.
>
>
>On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
>Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]^M
>>Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB]^M
>>Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB]^M
>>Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386 Packages [4,796 kB]^M
>>Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B]^M
>>Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B]^M
>>Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [412 kB]^M
>>Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [93.1 kB]^M
>>Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [672 kB]^M
>>Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [210 kB]^M
>>Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [692 kB]^M
>>Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [214 kB]^M
>>Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B]^M
>>Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B]^M
>>Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB]^M
>>Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB]^M
>>Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [298 kB]^M
>>Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]^M
>>Fetched 26.1 MB in 21s (1,241 kB/s)^M
>>Reading package lists...^M
>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.^M
>>, error=^M
>>gzip: stdin: not in gzip format^M
>>tar: Child returned status 1^M
>>tar: Error is not recoverable: exiting now^M
>>mv: cannot stat `jdk1*': No such file or directory^M
>>update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>>
>>
>>
>>
>>________________________________
>> From: Andrew Bayer <an...@gmail.com>
>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>Sent: Monday, August 5, 2013 4:22 PM
>>
>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>
>>
>>
>>Ok, Whirr does try to download the md5 file, but if it fails to find it, that's not a blocking error - it'll keep going anyway. What's after that in the logs?
>>
>>
>>A.
>>
>>
>>On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>>
>>ok - i'm not sure what you are asking.
>>>
>>>whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
>>>
>>>
>>>
>>>where these are the properties i think i changed or added from the original recipe:
>>>
>>>
>>>whirr.cluster-name=hadoop-ec2
>>>whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
>>>whirr.hardware-id=t1.micro
>>>whirr.image-id=us-east-1/ami-25d9a94c
>>>whirr.hadoop.version=1.2.1
>>>whirr.provider=aws-ec2
>>>whirr.identity=${env:AWS_ACCESS_KEY}
>>>whirr.credential=${env:AWS_SECRET_KEY}
>>>whirr.location-id=us-east-1
>>>whirr.java.install-function=install_oracle_jdk7
>>>
>>>
>>>
>>>________________________________
>>> From: Andrew Bayer <an...@gmail.com>
>>>To: user@whirr.apache.org; a b <au...@yahoo.com>
>>>Sent: Monday, August 5, 2013 4:11 PM
>>>Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>>>
>>>
>>>
>>>Ok, it looks like they've actually been doing .mds for a while. Where are you seeing this error?
>>>
>>>
>>>A.
>>>
>>>
>>>On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>>>
>>>That looks like they broke the hadoop-1.2.1 release - the file should be .md5. I'd bug the Hadoop project about that.
>>>>
>>>>A.
>>>>
>>>>
>>>>
>>>>On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>>>>
>>>>i get a whirr error:
>>>>>
>>>>>
>>>>>Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5
>>>>>
>>>>>
>>>>>when i browse over to: http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
>>>>>i can see the file is named: hadoop-1.2.1.tar.gz.mds
>>>>>
>>>>>
>>>>>how do i tell whirr to use a different suffix?
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
Try not building as root - that can throw things off.
A.
On Mon, Aug 5, 2013 at 5:44 PM, a b <au...@yahoo.com> wrote:
> as for the java 7 problem - i found this suggestion:
>
>
> http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
>
> i tried to download whirr - as suggested here:
>
> https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
>
> o i did: git clone ...
> o i modified: core/src/main/resources/functions/...
> o i did: mvn eclipse:eclipse ...
> o i skipped: eclipse import
> o i ran: mvn install
>
> it fails in the "mvn install" during test
>
> Tests in error:
>
> testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
>
> testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest):
> cluster-user != root or do not run as root
> [..]
>
> testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
>
> testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest):
> cluster-user != root or do not run as root
> testOverrides(org.apache.whirr.command.AbstractClusterCommandTest):
> cluster-user != root or do not run as root
>
> Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
>
> [INFO]
> ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO]
> [INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
> [INFO] Whirr ............................................. SUCCESS [4.488s]
> [INFO] Apache Whirr Core ................................. FAILURE [17.113s]
> [INFO] Apache Whirr Cassandra ............................ SKIPPED
> [INFO] Apache Whirr Ganglia .............................. SKIPPED
> [INFO] Apache Whirr Hadoop ............................... SKIPPED
>
> is there a better way to try this suggestion?
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
>
> *To:* a b <au...@yahoo.com>
> *Cc:* "user@whirr.apache.org" <us...@whirr.apache.org>
> *Sent:* Monday, August 5, 2013 4:48 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Looks like it's the Oracle JDK7 download that's failing -
> http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't actually there. I don't know if they actually have a consistent link
> to get the JDK7 tarball regardless of version. I'd just use OpenJDK for
> now, if you can.
>
> A.
>
> On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
>
> Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64
> Packages [1,273 kB]^M
> Get:20 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64
> Packages [4,786 kB]^M
> Get:21 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages
> [1,274 kB]^M
> Get:22 http://us-east-1.ec2.archive.ubuntu.com precise/universe i386
> Packages [4,796 kB]^M
> Get:23 http://us-east-1.ec2.archive.ubuntu.com precise/main
> TranslationIndex [3,706 B]^M
> Get:24 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> TranslationIndex [2,922 B]^M
> Get:25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Sources [412 kB]^M
> Get:26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Sources [93.1 kB]^M
> Get:27 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64
> Packages [672 kB]^M
> Get:28 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> amd64 Packages [210 kB]^M
> Get:29 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386
> Packages [692 kB]^M
> Get:30 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> i386 Packages [214 kB]^M
> Get:31 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> TranslationIndex [3,564 B]^M
> Get:32 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> TranslationIndex [2,850 B]^M
> Get:33 http://us-east-1.ec2.archive.ubuntu.com precise/main
> Translation-en [726 kB]^M
> Get:34 http://us-east-1.ec2.archive.ubuntu.com precise/universe
> Translation-en [3,341 kB]^M
> Get:35 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main
> Translation-en [298 kB]^M
> Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe
> Translation-en [123 kB]^M
> Fetched 26.1 MB in 21s (1,241 kB/s)^M
> Reading package lists...^M
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5.
> Continuing.^M
> , error=^M
> gzip: stdin: not in gzip format^M
> tar: Child returned status 1^M
> tar: Error is not recoverable: exiting now^M
> mv: cannot stat `jdk1*': No such file or directory^M
> update-alternatives: error: alternative path
> /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.^M
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:22 PM
>
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, Whirr does try to download the md5 file, but if it fails to find it,
> that's not a blocking error - it'll keep going anyway. What's after that in
> the logs?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
>
> ok - i'm not sure what you are asking.
>
> *whirr launch-cluster --config ~/whirr/recipes/hadoop.properties*
>
> where these are the properties i think i changed or added from the
> original recipe:
>
> *whirr.cluster-name=hadoop-ec2*
> *whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1
> hadoop-datanode+hadoop-tasktracker*
> *whirr.hardware-id=t1.micro*
> *whirr.image-id=us-east-1/ami-25d9a94c*
> *whirr.hadoop.version=1.2.1*
> *whirr.provider=aws-ec2*
> *whirr.identity=${env:AWS_ACCESS_KEY}*
> *whirr.credential=${env:AWS_SECRET_KEY}*
> *whirr.location-id=us-east-1*
> *whirr.java.install-function=install_oracle_jdk7*
>
> ------------------------------
> *From:* Andrew Bayer <an...@gmail.com>
> *To:* user@whirr.apache.org; a b <au...@yahoo.com>
> *Sent:* Monday, August 5, 2013 4:11 PM
> *Subject:* Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
>
> Ok, it looks like they've actually been doing .mds for a while. Where are
> you seeing this error?
>
> A.
>
> On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
>
> That looks like they broke the hadoop-1.2.1 release - the file should be
> .md5. I'd bug the Hadoop project about that.
>
> A.
>
>
> On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
>
> i get a whirr error:
>
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.
> md5
>
> when i browse over to:
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
> i can see the file is named: hadoop-1.2.1.tar.gz.mds
>
> how do i tell whirr to use a different suffix?
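On the original .md5 vs .mds question: an Apache .mds file bundles several digests (MD5, SHA-1, and others) in one file, while a bare .md5 file holds the single-digest format that `md5sum -c` reads directly. A local sketch of that single-digest format, using a stand-in file rather than the real Hadoop mirror:

```shell
# Stand-in tarball; only the digest plumbing matters here, not the contents.
printf 'example payload\n' > hadoop-1.2.1.tar.gz
# A bare .md5 file is just "DIGEST  FILENAME", which md5sum -c can verify:
md5sum hadoop-1.2.1.tar.gz > hadoop-1.2.1.tar.gz.md5
md5sum -c hadoop-1.2.1.tar.gz.md5   # prints "hadoop-1.2.1.tar.gz: OK"
```

Checking against an .mds file would instead mean extracting its MD5 line by hand before comparing.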
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
as for the java 7 problem - i found this suggestion:
http://stackoverflow.com/questions/13225149/how-to-install-jdk-7-on-ec2-cluster-via-whirr
i tried to download whirr - as suggested here:
https://cwiki.apache.org/confluence/display/WHIRR/How+To+Contribute
o i did: git clone ...
o i modified: core/src/main/resources/functions/...
o i did: mvn eclipse:eclipse ...
o i skipped: eclipse import
o i ran: mvn install
it fails in the "mvn install" during test
Tests in error:
testActionIsExecutedOnAllRelevantNodes(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
testFilterScriptExecutionByRole(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
testFilterScriptExecutionByInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
testFilterScriptExecutionByRoleAndInstanceId(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
testNoScriptExecutionsForNoop(org.apache.whirr.actions.StopServicesActionTest): cluster-user != root or do not run as root
[..]
testExecuteOnlyBootstrapForNoop(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
testExecuteOnlyBootstrapForNoopWithListener(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
testNoInitScriptsAfterConfigurationStartedAndNoConfigScriptsAfterDestroy(org.apache.whirr.service.DryRunModuleTest): cluster-user != root or do not run as root
testOverrides(org.apache.whirr.command.AbstractClusterCommandTest): cluster-user != root or do not run as root
Tests run: 99, Failures: 0, Errors: 49, Skipped: 0
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Whirr Build Tools .......................... SUCCESS [2.612s]
[INFO] Whirr ............................................. SUCCESS [4.488s]
[INFO] Apache Whirr Core ................................. FAILURE [17.113s]
[INFO] Apache Whirr Cassandra ............................ SKIPPED
[INFO] Apache Whirr Ganglia .............................. SKIPPED
[INFO] Apache Whirr Hadoop ............................... SKIPPED
is there a better way to try this suggestion?
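The "cluster-user != root or do not run as root" failures above come from a guard in Whirr's unit tests: they refuse to run when the build user is root. A minimal sketch of that situation (the guard logic here is illustrative, not Whirr's actual test code; -DskipTests is the standard Maven flag for skipping tests during a build):

```shell
# Illustrative check: Whirr's tests bail out under root, so either build
# as a regular user or skip the tests entirely with -DskipTests.
if [ "$(id -u)" = "0" ]; then
  advice="running as root: build as a non-root user, or use: mvn clean install -DskipTests"
else
  advice="non-root uid $(id -u): mvn clean install can run the tests"
fi
echo "$advice"
```

Either way, "mvn clean install -DskipTests" produces the jars without exercising the failing tests.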
________________________________
From: Andrew Bayer <an...@gmail.com>
To: a b <au...@yahoo.com>
Cc: "user@whirr.apache.org" <us...@whirr.apache.org>
Sent: Monday, August 5, 2013 4:48 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
[..]
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
Looks like it's the Oracle JDK7 download that's failing -
http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.tar.gz isn't
actually there. I don't know if they actually have a consistent link to get
the JDK7 tarball regardless of version. I'd just use OpenJDK for now, if
you can.
A.
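The "gzip: stdin: not in gzip format" line in the pasted log is the typical symptom: the failed download leaves an HTML error page on disk instead of a tarball. A quick sanity check before extracting (the file name is a stand-in; the broken download is simulated locally):

```shell
# Simulate a broken download: the "tarball" is really an HTML error page.
printf '<html>404 Not Found</html>\n' > jdk-7-linux-x64.tar.gz

# gzip -t tests integrity without extracting; it fails on non-gzip input.
if gzip -t jdk-7-linux-x64.tar.gz 2>/dev/null; then
  verdict="valid gzip archive"
else
  verdict="not a gzip archive - likely an HTML error page from a failed download"
fi
echo "$verdict"
```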
On Mon, Aug 5, 2013 at 4:33 PM, a b <au...@yahoo.com> wrote:
> [..]
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
Get:19 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB]
[..]
Get:36 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [123 kB]
Fetched 26.1 MB in 21s (1,241 kB/s)
Reading package lists...
Could not download http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5. Continuing.
, error=
gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
mv: cannot stat `jdk1*': No such file or directory
update-alternatives: error: alternative path /usr/lib/jvm/java-7-oracle/bin/java doesn't exist.
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Monday, August 5, 2013 4:22 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
[..]
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
Ok, Whirr does try to download the md5 file, but if it fails to find it,
that's not a blocking error - it'll keep going anyway. What's after that in
the logs?
A.
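That non-blocking behavior can be sketched like this (a simplified stand-in, not Whirr's actual install script; the `download` function here fakes a failed HTTP fetch):

```shell
# Stand-in for an HTTP fetch that fails (e.g. a 404 on the .md5 file).
download() { return 1; }

url="http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.md5"
if ! download "$url"; then
  # Warn and keep going instead of aborting the whole install.
  echo "Could not download $url. Continuing."
fi
status="installation proceeds"
echo "$status"
```

That is why the checksum warning alone never stops a launch; the real failure is whatever comes after it in the log.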
On Mon, Aug 5, 2013 at 4:19 PM, a b <au...@yahoo.com> wrote:
> [..]
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by a b <au...@yahoo.com>.
ok - i'm not sure what you are asking.
whirr launch-cluster --config ~/whirr/recipes/hadoop.properties
where these are the properties i think i changed or added from the original recipe:
whirr.cluster-name=hadoop-ec2
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker
whirr.hardware-id=t1.micro
whirr.image-id=us-east-1/ami-25d9a94c
whirr.hadoop.version=1.2.1
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY}
whirr.credential=${env:AWS_SECRET_KEY}
whirr.location-id=us-east-1
whirr.java.install-function=install_oracle_jdk7
________________________________
From: Andrew Bayer <an...@gmail.com>
To: user@whirr.apache.org; a b <au...@yahoo.com>
Sent: Monday, August 5, 2013 4:11 PM
Subject: Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
[..]
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
Ok, it looks like they've actually been doing .mds for a while. Where are
you seeing this error?
A.
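For context: a .mds file bundles several digests (MD5, SHA1, and others) in one file, so the MD5 value is still published, just under a different suffix. An illustrative comparison against a locally computed md5sum (the file names and the .mds line layout here are simplified stand-ins, not the exact format Apache publishes):

```shell
# Stand-in artifact plus a simplified .mds file containing its MD5 line.
printf 'hello\n' > hadoop-1.2.1.tar.gz
local_md5=$(md5sum hadoop-1.2.1.tar.gz | awk '{print $1}')
printf 'hadoop-1.2.1.tar.gz: MD5 = %s\n' "$local_md5" > hadoop-1.2.1.tar.gz.mds

# Pull the MD5 value out of the .mds file and compare it to the local digest.
published=$(awk -F' = ' '/MD5/ {print $2}' hadoop-1.2.1.tar.gz.mds)
if [ "$local_md5" = "$published" ]; then
  check="checksum OK"
else
  check="checksum MISMATCH"
fi
echo "$check"
```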
On Mon, Aug 5, 2013 at 4:09 PM, Andrew Bayer <an...@gmail.com> wrote:
> [..]
Re: cannot find hadoop-1.2.1.tar.gz.md5 - file suffix is mds
Posted by Andrew Bayer <an...@gmail.com>.
That looks like they broke the hadoop-1.2.1 release - the file should be
.md5. I'd bug the Hadoop project about that.
A.
On Mon, Aug 5, 2013 at 3:42 PM, a b <au...@yahoo.com> wrote:
> i get a whirr error:
>
> Could not download
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz.
> md5
>
> when i browse over to:
> http://apache.osuosl.org/hadoop/common/hadoop-1.2.1/
> i can see the file is named: hadoop-1.2.1.tar.gz.mds
>
> how do i tell whirr to use a different suffix?