Posted to user@whirr.apache.org by Chris Schilling <ch...@thecleversense.com> on 2011/10/12 00:13:09 UTC

authentication trouble

Hello,

I'm new to whirr and having trouble *running whirr from an ec2 instance* (authentication fails when setting up the other machines).

First, here is my configuration:
whirr.cluster-name=hadoop
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1 hadoop-datanode+hadoop-tasktracker

# For EC2 set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY}

# The size of the instance to use. See http://aws.amazon.com/ec2/instance-types/
whirr.hardware-id=m1.large
whirr.image-id=us-east-1/ami-da0cf8b3
whirr.location-id=us-east-1
# By default use the user system SSH keys. Override them here.
whirr.private-key-file=${sys:user.home}/.ssh/id_rsa_whirr
whirr.public-key-file=${whirr.private-key-file}.pub



I export the credentials, then create the key:
 ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa_whirr
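
For reference, the export step is just the following (key values elided),
so that the ${env:...} references in the properties file can resolve:

 export AWS_ACCESS_KEY_ID=...
 export AWS_SECRET_ACCESS_KEY=...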

Then I launch the cluster:
whirr launch-cluster --config hadoop-ec2.properties --private-key-file ~/.ssh/id_rsa_whirr

The nodes start (costs me $!), but then I get authentication errors all over the place, along with Preconditions failures.  Here are some samples of the errors:
java.lang.NullPointerException: architecture
        at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
        at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
        at org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
        at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:604)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1759)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2915)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:625)
        at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:812)
        at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:741)
        at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
......

Then the authentication errors begin:
<<authenticated>> woke to: net.schmizz.sshj.userauth.UserAuthException: publickey auth failed
<< (ubuntu@184.72.177.130:22) error acquiring SSHClient(ubuntu@184.72.177.130:22): Exhausted available authentication methods
net.schmizz.sshj.userauth.UserAuthException: Exhausted available authentication methods
        at net.schmizz.sshj.userauth.UserAuthImpl.authenticate(UserAuthImpl.java:114)
        at net.schmizz.sshj.SSHClient.auth(SSHClient.java:204)
        at net.schmizz.sshj.SSHClient.authPublickey(SSHClient.java:304)
        at net.schmizz.sshj.SSHClient.authPublickey(SSHClient.java:323)
        at org.jclouds.sshj.SshjSshClient$1.create(SshjSshClient.java:183)
        at org.jclouds.sshj.SshjSshClient$1.create(SshjSshClient.java:155)
        at org.jclouds.sshj.SshjSshClient.acquire(SshjSshClient.java:204)
        at org.jclouds.sshj.SshjSshClient.connect(SshjSshClient.java:229)
        at org.jclouds.compute.callables.RunScriptOnNodeAsInitScriptUsingSsh.call(RunScriptOnNodeAsInitScriptUsingSsh.java:107)
        at org.jclouds.compute.strategy.RunScriptOnNodeAndAddToGoodMapOrPutExceptionIntoBadMap.call(RunScriptOnNodeAndAddToGoodMapOrPutExceptionIntoBadMap.java:69)
        at org.jclouds.compute.strategy.RunScriptOnNodeAndAddToGoodMapOrPutExceptionIntoBadMap.call(RunScriptOnNodeAndAddToGoodMapOrPutExceptionIntoBadMap.java:44)
......

Please advise!


Chris Schilling
Sr. Data Mining Engineer
Clever Sense, Inc.
"Curating the World Around You"
--------------------------------------------------------------
Winner of the 2011 Fortune Brainstorm Start-up Idol

Wanna join the Clever Team? We're hiring!
--------------------------------------------------------------


Re: authentication trouble

Posted by Paul Baclace <pa...@gmail.com>.
Your first error is a configuration failure: "NullPointerException:
architecture".  It might be caused by a missing attribute in the ami query.

I have the following:

whirr.hardware-id=c1.medium
jclouds.ec2.ami-query=owner-id=999999999999;state=available;image-type=machine;root-device-type=instance-store;architecture=x86_32

I think the architecture attribute could be inferred at one point, but
not any more.

In any case, the more constraints you put on jclouds.ec2.ami-query the
better, since they speed up the query.
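
For an m1.large like yours (a 64-bit instance type), a query along these
lines should work.  Note that the owner-id below is a placeholder (put the
actual AMI owner's account id there), and architecture=x86_64 is my
assumption for a 64-bit image:

whirr.hardware-id=m1.large
jclouds.ec2.ami-query=owner-id=999999999999;state=available;image-type=machine;root-device-type=instance-store;architecture=x86_64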


Paul

On 20111011 15:52, Chris Schilling wrote:
> Customized the java install script?  I downloaded the latest release and
> just unpacked it and tried to run:
> curl -O http://www.apache.org/dist/incubator/whirr/whirr-0.6.0-incubating/whirr-0.6.0-incubating.tar.gz
>
> I can wait until morning.  Thanks for taking a look...
> Chris
>
>
> On Tue, Oct 11, 2011 at 3:47 PM, Andrei Savu <savu.andrei@gmail.com> wrote:
>
>     I see that you've customised the java install script. Are you
>     building on top of the 0.6.0 release or on top of trunk?
>
>     I see nothing strange in the log file. I will be able to say more
>     tomorrow as I try to replicate the same behaviour.
>
>     @Adrian I know that you've tried to launch a cluster from within
>     the Amazon cloud. Any feedback on this? Thanks!
>
>
>     On Tue, Oct 11, 2011 at 11:36 PM, Chris Schilling
>     <chris@thecleversense.com> wrote:
>
>         Okay, no, the ssh keypair does not need a password.  I
>         installed whirr on a separate ec2 instance, so this may be an
>         internal communication issue within ec2.  Here is the full
>         whirr.log:
>
>         2011-10-11 22:08:47,807 DEBUG
>         [org.apache.whirr.service.ComputeCache] (main) creating new
>         ComputeServiceContext
>         org.apache.whirr.service.ComputeCache$Key@1a689880
>         2011-10-11 22:09:50,094 DEBUG
>         [org.apache.whirr.service.ComputeCache] (main) creating new
>         ComputeServiceContext
>         org.apache.whirr.service.ComputeCache$Key@1a689880
>         2011-10-11 22:09:56,433 DEBUG
>         [org.apache.whirr.service.ComputeCache] (main) created new
>         ComputeServiceContext  [id=aws-ec2,
>         endpoint=https://ec2.us-east-1.amazonaws.com,
>         apiVersion=2010-06-15, identity=1FTR7NCN01CEAR6FK2G2,
>         iso3166Codes=[US-VA, US-CA, IE, SG, JP-13]]
>         2011-10-11 22:09:56,454 INFO 
>         [org.apache.whirr.actions.BootstrapClusterAction] (main)
>         Bootstrapping cluster
>         2011-10-11 22:09:56,455 INFO 
>         [org.apache.whirr.actions.BootstrapClusterAction] (main)
>         Configuring template
>         2011-10-11 22:09:56,473 DEBUG
>         [org.apache.whirr.actions.BootstrapClusterAction] (main)
>         Running script:
>         #!/bin/bash
>         set +u
>         shopt -s xpg_echo
>         shopt -s expand_aliases
>         unset PATH JAVA_HOME LD_LIBRARY_PATH
>         function abort {
>            echo "aborting: $@" 1>&2
>            exit 1
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function configure_hostnames() {
>           local OPTIND
>           local OPTARG
>
>           CLOUD_PROVIDER=
>           while getopts "c:" OPTION; do
>             case $OPTION in
>             c)
>               CLOUD_PROVIDER="$OPTARG"
>               shift $((OPTIND-1)); OPTIND=1
>               ;;
>             esac
>           done
>
>           case $CLOUD_PROVIDER in
>             cloudservers | cloudservers-uk | cloudservers-us )
>               if which dpkg &> /dev/null; then
>                 PRIVATE_IP=`/sbin/ifconfig eth0 | grep 'inet addr:' |
>         cut -d: -f2 | awk '{ print $1}'`
>                 HOSTNAME=`echo $PRIVATE_IP | tr .
>         -`.static.cloud-ips.com
>                 echo $HOSTNAME > /etc/hostname
>                 sed -i -e "s/$PRIVATE_IP.*/$PRIVATE_IP $HOSTNAME/"
>         /etc/hosts
>                 set +e
>                 /etc/init.d/hostname restart
>                 set -e
>                 sleep 2
>                 hostname
>               fi
>               ;;
>           esac
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function install_java_deb() {
>           # Enable multiverse
>           # TODO: check that it is not already enabled
>           sed -i -e 's/universe$/universe multiverse/'
>         /etc/apt/sources.list
>
>           DISTRO=`lsb_release -s -c`
>           cat > /etc/apt/sources.list.d/canonical.com.list <<EOF
>         deb http://archive.canonical.com/ubuntu $DISTRO partner
>         deb-src http://archive.canonical.com/ubuntu $DISTRO partner
>         EOF
>
>           apt-get update
>
>           echo 'sun-java6-bin   shared/accepted-sun-dlj-v1-1   
>         boolean true
>         sun-java6-jdk   shared/accepted-sun-dlj-v1-1    boolean true
>         sun-java6-jre   shared/accepted-sun-dlj-v1-1    boolean true
>         sun-java6-jre   sun-java6-jre/stopthread        boolean true
>         sun-java6-jre   sun-java6-jre/jcepolicy note
>         sun-java6-bin   shared/present-sun-dlj-v1-1     note
>         sun-java6-jdk   shared/present-sun-dlj-v1-1     note
>         sun-java6-jre   shared/present-sun-dlj-v1-1     note
>         ' | debconf-set-selections
>
>           apt-get -y install sun-java6-jdk
>
>           echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> /etc/profile
>           export JAVA_HOME=/usr/lib/jvm/java-6-sun
>           java -version
>
>         }
>
>         function install_java_rpm() {
>           MACHINE_TYPE=`uname -m`
>           if [ ${MACHINE_TYPE} == 'x86_64' ]; then
>             JDK_PACKAGE=jdk-6u21-linux-x64-rpm.bin
>           else
>             JDK_PACKAGE=jdk-6u21-linux-i586-rpm.bin
>           fi
>           JDK_INSTALL_PATH=/usr/java
>           mkdir -p $JDK_INSTALL_PATH
>           cd $JDK_INSTALL_PATH
>           wget http://whirr-third-party.s3.amazonaws.com/$JDK_PACKAGE
>           chmod +x $JDK_PACKAGE
>           mv /bin/more /bin/more.no
>           yes | ./$JDK_PACKAGE -noregister
>           mv /bin/more.no /bin/more
>           rm -f *.rpm $JDK_PACKAGE
>
>           export JAVA_HOME=$(ls -d $JDK_INSTALL_PATH/jdk*)
>           echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
>           alternatives --install /usr/bin/java java
>         $JAVA_HOME/bin/java 17000
>           alternatives --set java $JAVA_HOME/bin/java
>           java -version
>         }
>
>         function install_java() {
>           if which dpkg &> /dev/null; then
>             install_java_deb
>           elif which rpm &> /dev/null; then
>             install_java_rpm
>           fi
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function install_tarball() {
>           if [[ "$1" != "" ]]; then
>             # Download a .tar.gz file and extract to target dir
>
>             local tar_url=$1
>             local tar_file=`basename $tar_url`
>             local tar_file_md5=`basename $tar_url.md5`
>
>             local target=${2:-/usr/local/}
>             mkdir -p $target
>
>             local curl="curl -L --silent --show-error --fail
>         --connect-timeout 10 --max-time 600 --retry 5"
>             # any download should take less than 10 minutes
>
>             for retry_count in `seq 1 3`;
>             do
>               $curl -O $tar_url || true
>               $curl -O $tar_url.md5 || true
>
>               if [ ! -e $tar_file_md5 ]; then
>                 echo "Could not download  $tar_url.md5. Continuing."
>                 break;
>               elif md5sum -c $tar_file_md5; then
>                 break;
>               else
>                 # workaround for cassandra broken .md5 files
>                 if [ `md5sum $tar_file | awk '{print $1}'` = `cat
>         $tar_file_md5` ]; then
>                   break;
>                 fi
>
>                 rm -f $tar_file $tar_file_md5
>               fi
>
>               if [ ! $retry_count -eq "3" ]; then
>                 sleep 10
>               fi
>             done
>
>             if [ ! -e $tar_file ]; then
>               echo "Failed to download $tar_file. Aborting."
>               exit 1
>             fi
>
>             tar xzf $tar_file -C $target
>             rm -f $tar_file $tar_file_md5
>           fi
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function update_repo() {
>           if which dpkg &> /dev/null; then
>             sudo apt-get update
>           elif which rpm &> /dev/null; then
>             yum update -y yum
>           fi
>         }
>
>         function install_hadoop() {
>           local OPTIND
>           local OPTARG
>
>           CLOUD_PROVIDER=
>           HADOOP_TAR_URL=
>           while getopts "c:u:" OPTION; do
>             case $OPTION in
>             c)
>               CLOUD_PROVIDER="$OPTARG"
>               ;;
>             u)
>               HADOOP_TAR_URL="$OPTARG"
>               ;;
>             esac
>           done
>
>           HADOOP_HOME=/usr/local/$(basename $HADOOP_TAR_URL .tar.gz)
>
>           update_repo
>
>           if ! id hadoop &> /dev/null; then
>             useradd hadoop
>           fi
>
>           install_tarball $HADOOP_TAR_URL
>           ln -s $HADOOP_HOME /usr/local/hadoop
>
>           echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
>           echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >>
>         ~root/.bashrc
>         }
>
>         export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
>         configure_hostnames -c aws-ec2 || exit 1
>         install_java || exit 1
>         install_tarball || exit 1
>         install_hadoop -c aws-ec2 -u
>         http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
>         || exit 1
>         exit 0
>
>         2011-10-11 22:09:56,490 DEBUG [jclouds.compute] (main) >>
>         searching params([biggest=false, fastest=false,
>         imageName=null, imageDescription=null,
>         imageId=us-east-1/ami-da0cf8b3, imagePredicate=null,
>         imageVersion=null, location=[id=us-east-1, scope=REGION,
>         description=us-east-1, parent=aws-ec2, iso3166Codes=[US-VA],
>         metadata={}], minCores=0.0, minRam=0, osFamily=null,
>         osName=null, osDescription=null, osVersion=null, osArch=null,
>         os64Bit=null, hardwareId=m1.large])
>         2011-10-11 22:09:56,491 DEBUG [jclouds.compute] (user thread
>         0) >> providing images
>         2011-10-11 22:09:56,497 DEBUG [jclouds.compute] (user thread
>         1) >> providing images
>         2011-10-11 22:09:58,046 DEBUG [jclouds.compute] (user thread
>         1) << images(32)
>         2011-10-11 22:10:01,457 DEBUG [jclouds.compute] (user thread
>         0) << images(3123)
>         2011-10-11 22:10:02,183 DEBUG [jclouds.compute] (main) <<  
>         matched hardware(m1.large)
>         2011-10-11 22:10:02,184 DEBUG [jclouds.compute] (main) <<  
>         matched image(us-east-1/ami-da0cf8b3)
>         2011-10-11 22:10:02,194 INFO 
>         [org.apache.whirr.actions.BootstrapClusterAction] (main)
>         Configuring template
>         2011-10-11 22:10:02,196 INFO 
>         [org.apache.whirr.actions.NodeStarter] (pool-3-thread-2)
>         Starting 1 node(s) with roles [hadoop-datanode,
>         hadoop-tasktracker]
>         2011-10-11 22:10:02,196 DEBUG [jclouds.compute]
>         (pool-3-thread-2) >> running 1 node group(hadoop)
>         location(us-east-1) image(us-east-1/ami-da0cf8b3)
>         hardwareProfile(m1.large) options([groupIds=[], keyPair=null,
>         noKeyPair=false, monitoringEnabled=false, placementGroup=null,
>         noPlacementGroup=false, subnetId=null, userData=null,
>         blockDeviceMappings=[], spotPrice=null,
>         spotOptions=[formParameters={}]])
>         2011-10-11 22:10:02,199 DEBUG [jclouds.compute]
>         (pool-3-thread-2) >> searching params([biggest=false,
>         fastest=false, imageName=null,
>         imageDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
>         imageId=null, imagePredicate=null, imageVersion=20101020,
>         location=[id=us-east-1, scope=REGION, description=us-east-1,
>         parent=aws-ec2, iso3166Codes=[US-VA], metadata={}],
>         minCores=2.0, minRam=7680, osFamily=ubuntu, osName=null,
>         osDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
>         osVersion=10.04, osArch=paravirtual, os64Bit=true,
>         hardwareId=null])
>         2011-10-11 22:10:02,199 DEBUG
>         [org.apache.whirr.actions.BootstrapClusterAction] (main)
>         Running script:
>         #!/bin/bash
>         set +u
>         shopt -s xpg_echo
>         shopt -s expand_aliases
>         unset PATH JAVA_HOME LD_LIBRARY_PATH
>         function abort {
>            echo "aborting: $@" 1>&2
>            exit 1
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function configure_hostnames() {
>           local OPTIND
>           local OPTARG
>
>           CLOUD_PROVIDER=
>           while getopts "c:" OPTION; do
>             case $OPTION in
>             c)
>               CLOUD_PROVIDER="$OPTARG"
>               shift $((OPTIND-1)); OPTIND=1
>               ;;
>             esac
>           done
>
>           case $CLOUD_PROVIDER in
>             cloudservers | cloudservers-uk | cloudservers-us )
>               if which dpkg &> /dev/null; then
>                 PRIVATE_IP=`/sbin/ifconfig eth0 | grep 'inet addr:' |
>         cut -d: -f2 | awk '{ print $1}'`
>                 HOSTNAME=`echo $PRIVATE_IP | tr .
>         -`.static.cloud-ips.com
>                 echo $HOSTNAME > /etc/hostname
>                 sed -i -e "s/$PRIVATE_IP.*/$PRIVATE_IP $HOSTNAME/"
>         /etc/hosts
>                 set +e
>                 /etc/init.d/hostname restart
>                 set -e
>                 sleep 2
>                 hostname
>               fi
>               ;;
>           esac
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function install_java_deb() {
>           # Enable multiverse
>           # TODO: check that it is not already enabled
>           sed -i -e 's/universe$/universe multiverse/'
>         /etc/apt/sources.list
>
>           DISTRO=`lsb_release -s -c`
>           cat > /etc/apt/sources.list.d/canonical.com.list <<EOF
>         deb http://archive.canonical.com/ubuntu $DISTRO partner
>         deb-src http://archive.canonical.com/ubuntu $DISTRO partner
>         EOF
>
>           apt-get update
>
>           echo 'sun-java6-bin   shared/accepted-sun-dlj-v1-1   
>         boolean true
>         sun-java6-jdk   shared/accepted-sun-dlj-v1-1    boolean true
>         sun-java6-jre   shared/accepted-sun-dlj-v1-1    boolean true
>         sun-java6-jre   sun-java6-jre/stopthread        boolean true
>         sun-java6-jre   sun-java6-jre/jcepolicy note
>         sun-java6-bin   shared/present-sun-dlj-v1-1     note
>         sun-java6-jdk   shared/present-sun-dlj-v1-1     note
>         sun-java6-jre   shared/present-sun-dlj-v1-1     note
>         ' | debconf-set-selections
>
>           apt-get -y install sun-java6-jdk
>
>           echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> /etc/profile
>           export JAVA_HOME=/usr/lib/jvm/java-6-sun
>           java -version
>
>         }
>
>         function install_java_rpm() {
>           MACHINE_TYPE=`uname -m`
>           if [ ${MACHINE_TYPE} == 'x86_64' ]; then
>             JDK_PACKAGE=jdk-6u21-linux-x64-rpm.bin
>           else
>             JDK_PACKAGE=jdk-6u21-linux-i586-rpm.bin
>           fi
>           JDK_INSTALL_PATH=/usr/java
>           mkdir -p $JDK_INSTALL_PATH
>           cd $JDK_INSTALL_PATH
>           wget http://whirr-third-party.s3.amazonaws.com/$JDK_PACKAGE
>           chmod +x $JDK_PACKAGE
>           mv /bin/more /bin/more.no
>           yes | ./$JDK_PACKAGE -noregister
>           mv /bin/more.no /bin/more
>           rm -f *.rpm $JDK_PACKAGE
>
>           export JAVA_HOME=$(ls -d $JDK_INSTALL_PATH/jdk*)
>           echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
>           alternatives --install /usr/bin/java java
>         $JAVA_HOME/bin/java 17000
>           alternatives --set java $JAVA_HOME/bin/java
>           java -version
>         }
>
>         function install_java() {
>           if which dpkg &> /dev/null; then
>             install_java_deb
>           elif which rpm &> /dev/null; then
>             install_java_rpm
>           fi
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function install_tarball() {
>           if [[ "$1" != "" ]]; then
>             # Download a .tar.gz file and extract to target dir
>
>             local tar_url=$1
>             local tar_file=`basename $tar_url`
>             local tar_file_md5=`basename $tar_url.md5`
>
>             local target=${2:-/usr/local/}
>             mkdir -p $target
>
>             local curl="curl -L --silent --show-error --fail
>         --connect-timeout 10 --max-time 600 --retry 5"
>             # any download should take less than 10 minutes
>
>             for retry_count in `seq 1 3`;
>             do
>               $curl -O $tar_url || true
>               $curl -O $tar_url.md5 || true
>
>               if [ ! -e $tar_file_md5 ]; then
>                 echo "Could not download  $tar_url.md5. Continuing."
>                 break;
>               elif md5sum -c $tar_file_md5; then
>                 break;
>               else
>                 # workaround for cassandra broken .md5 files
>                 if [ `md5sum $tar_file | awk '{print $1}'` = `cat
>         $tar_file_md5` ]; then
>                   break;
>                 fi
>
>                 rm -f $tar_file $tar_file_md5
>               fi
>
>               if [ ! $retry_count -eq "3" ]; then
>                 sleep 10
>               fi
>             done
>
>             if [ ! -e $tar_file ]; then
>               echo "Failed to download $tar_file. Aborting."
>               exit 1
>             fi
>
>             tar xzf $tar_file -C $target
>             rm -f $tar_file $tar_file_md5
>           fi
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function update_repo() {
>           if which dpkg &> /dev/null; then
>             sudo apt-get update
>           elif which rpm &> /dev/null; then
>             yum update -y yum
>           fi
>         }
>
>         function install_hadoop() {
>           local OPTIND
>           local OPTARG
>
>           CLOUD_PROVIDER=
>           HADOOP_TAR_URL=
>           while getopts "c:u:" OPTION; do
>             case $OPTION in
>             c)
>               CLOUD_PROVIDER="$OPTARG"
>               ;;
>             u)
>               HADOOP_TAR_URL="$OPTARG"
>               ;;
>             esac
>           done
>
>           HADOOP_HOME=/usr/local/$(basename $HADOOP_TAR_URL .tar.gz)
>
>           update_repo
>
>           if ! id hadoop &> /dev/null; then
>             useradd hadoop
>           fi
>
>           install_tarball $HADOOP_TAR_URL
>           ln -s $HADOOP_HOME /usr/local/hadoop
>
>           echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
>           echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >>
>         ~root/.bashrc
>         }
>
>         export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
>         configure_hostnames -c aws-ec2 || exit 1
>         install_java || exit 1
>         install_tarball || exit 1
>         install_hadoop -c aws-ec2 -u
>         http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
>         || exit 1
>         exit 0
>
>         2011-10-11 22:10:02,200 DEBUG [jclouds.compute] (main) >>
>         searching params([biggest=false, fastest=false,
>         imageName=null, imageDescription=null,
>         imageId=us-east-1/ami-da0cf8b3, imagePredicate=null,
>         imageVersion=null, location=[id=us-east-1, scope=REGION,
>         description=us-east-1, parent=aws-ec2, iso3166Codes=[US-VA],
>         metadata={}], minCores=0.0, minRam=0, osFamily=null,
>         osName=null, osDescription=null, osVersion=null, osArch=null,
>         os64Bit=null, hardwareId=m1.large])
>         2011-10-11 22:10:02,203 DEBUG [jclouds.compute] (main) <<  
>         matched hardware(m1.large)
>         2011-10-11 22:10:02,203 DEBUG [jclouds.compute] (main) <<  
>         matched image(us-east-1/ami-da0cf8b3)
>         2011-10-11 22:10:02,204 INFO 
>         [org.apache.whirr.actions.NodeStarter] (pool-3-thread-4)
>         Starting 1 node(s) with roles [hadoop-namenode, hadoop-jobtracker]
>         2011-10-11 22:10:02,205 DEBUG [jclouds.compute]
>         (pool-3-thread-4) >> running 1 node group(hadoop)
>         location(us-east-1) image(us-east-1/ami-da0cf8b3)
>         hardwareProfile(m1.large) options([groupIds=[], keyPair=null,
>         noKeyPair=false, monitoringEnabled=false, placementGroup=null,
>         noPlacementGroup=false, subnetId=null, userData=null,
>         blockDeviceMappings=[], spotPrice=null,
>         spotOptions=[formParameters={}]])
>         2011-10-11 22:10:02,205 DEBUG [jclouds.compute]
>         (pool-3-thread-4) >> searching params([biggest=false,
>         fastest=false, imageName=null,
>         imageDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
>         imageId=null, imagePredicate=null, imageVersion=20101020,
>         location=[id=us-east-1, scope=REGION, description=us-east-1,
>         parent=aws-ec2, iso3166Codes=[US-VA], metadata={}],
>         minCores=2.0, minRam=7680, osFamily=ubuntu, osName=null,
>         osDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
>         osVersion=10.04, osArch=paravirtual, os64Bit=true,
>         hardwareId=null])
>         2011-10-11 22:10:02,321 DEBUG [jclouds.compute]
>         (pool-3-thread-4) <<   matched hardware(m1.large)
>         2011-10-11 22:10:02,325 DEBUG [jclouds.compute]
>         (pool-3-thread-4) <<   matched image(us-east-1/ami-da0cf8b3)
>         2011-10-11 22:10:02,327 DEBUG [jclouds.compute]
>         (pool-3-thread-4) >> creating keyPair region(us-east-1)
>         group(hadoop)
>         2011-10-11 22:10:02,342 DEBUG [jclouds.compute]
>         (pool-3-thread-2) <<   matched hardware(m1.large)
>         2011-10-11 22:10:02,366 DEBUG [jclouds.compute]
>         (pool-3-thread-2) <<   matched image(us-east-1/ami-da0cf8b3)
>         2011-10-11 22:10:02,367 DEBUG [jclouds.compute]
>         (pool-3-thread-2) >> creating keyPair region(us-east-1)
>         group(hadoop)
>         2011-10-11 22:10:03,645 DEBUG [jclouds.compute]
>         (pool-3-thread-2) << created keyPair(jclouds#hadoop#us-east-1#2)
>         2011-10-11 22:10:03,646 DEBUG [jclouds.compute]
>         (pool-3-thread-2) >> creating securityGroup region(us-east-1)
>         name(jclouds#hadoop#us-east-1)
>         2011-10-11 22:10:03,776 DEBUG [jclouds.compute]
>         (pool-3-thread-2) << reused
>         securityGroup(jclouds#hadoop#us-east-1)
>         2011-10-11 22:10:03,776 DEBUG [jclouds.compute]
>         (pool-3-thread-2) >> running 1 instance region(us-east-1)
>         zone(null) ami(ami-da0cf8b3) params({InstanceType=[m1.large],
>         SecurityGroup.1=[jclouds#hadoop#us-east-1],
>         KeyName=[jclouds#hadoop#us-east-1#2]})
>         2011-10-11 22:10:04,765 DEBUG [jclouds.compute]
>         (pool-3-thread-4) << created keyPair(jclouds#hadoop#us-east-1#0)
>         2011-10-11 22:10:04,765 DEBUG [jclouds.compute]
>         (pool-3-thread-4) >> running 1 instance region(us-east-1)
>         zone(null) ami(ami-da0cf8b3) params({InstanceType=[m1.large],
>         SecurityGroup.1=[jclouds#hadoop#us-east-1],
>         KeyName=[jclouds#hadoop#us-east-1#0]})
>         2011-10-11 22:10:05,067 DEBUG [jclouds.compute]
>         (pool-3-thread-2) << started instances([region=us-east-1,
>         name=i-08153d68])
>         2011-10-11 22:10:05,128 DEBUG [jclouds.compute]
>         (pool-3-thread-2) << present instances([region=us-east-1,
>         name=i-08153d68])
>         2011-10-11 22:10:05,186 DEBUG [jclouds.compute]
>         (pool-3-thread-4) << started instances([region=us-east-1,
>         name=i-12153d72])
>         2011-10-11 22:10:05,249 DEBUG [jclouds.compute]
>         (pool-3-thread-4) << present instances([region=us-east-1,
>         name=i-12153d72])
>         2011-10-11 22:10:38,407 DEBUG [jclouds.compute] (user thread
>         0) >> blocking on socket [address=184.72.177.130, port=22]
>         for 600000 seconds
>         2011-10-11 22:10:43,449 DEBUG [jclouds.compute] (user thread
>         0) << socket [address=184.72.177.130,
>         port=22] opened
>         2011-10-11 22:10:44,681 DEBUG [jclouds.compute] (user thread
>         7) >> blocking on socket [address=50.19.59.109, port=22] for
>         600000 seconds
>         2011-10-11 22:10:46,462 DEBUG [jclouds.compute] (user thread
>         0) >> running [sudo ./setup-ubuntu init] as
>         ubuntu@184.72.177.130
>         2011-10-11 22:10:46,534 DEBUG [jclouds.compute] (user thread
>         0) << init(0)
>         2011-10-11 22:10:46,535 DEBUG [jclouds.compute] (user thread
>         0) >> running [sudo ./setup-ubuntu start] as
>         ubuntu@184.72.177.130
>         2011-10-11 22:10:47,653 DEBUG [jclouds.compute] (user thread
>         0) << start(0)
>         2011-10-11 22:10:56,729 DEBUG [jclouds.compute] (user thread
>         7) << socket [address=50.19.59.109, port=22] opened
>         2011-10-11 22:11:00,695 DEBUG [jclouds.compute] (user thread
>         7) >> running [sudo ./setup-ubuntu init] as
>         ubuntu@50.19.59.109
>         2011-10-11 22:11:00,954 DEBUG [jclouds.compute] (user thread
>         7) << init(0)
>         2011-10-11 22:11:00,954 DEBUG [jclouds.compute] (user thread
>         7) >> running [sudo ./setup-ubuntu start] as
>         ubuntu@50.19.59.109
>         2011-10-11 22:11:02,078 DEBUG [jclouds.compute] (user thread
>         7) << start(0)
>         2011-10-11 22:11:36,157 DEBUG [jclouds.compute] (user thread
>         0) << complete(true)
>         2011-10-11 22:11:36,235 DEBUG [jclouds.compute] (user thread
>         0) << stdout from setup-ubuntu as ubuntu@184.72.177.130
>         Hit http://security.ubuntu.com lucid-security/main Packages
>         Hit http://archive.canonical.com lucid/partner Packages
>         Hit http://security.ubuntu.com lucid-security/universe Packages
>         Hit http://security.ubuntu.com lucid-security/multiverse Packages
>         Hit http://security.ubuntu.com lucid-security/main Sources
>         Hit http://security.ubuntu.com lucid-security/universe Sources
>         Hit http://security.ubuntu.com lucid-security/multiverse Sources
>         Hit http://archive.canonical.com lucid/partner Sources
>         Reading package lists...
>         hadoop-0.20.2.tar.gz: OK
>
>         2011-10-11 22:11:36,291 DEBUG [jclouds.compute] (user thread
>         0) << stderr from setup-ubuntu as ubuntu@184.72.177.130
>         update-alternatives: using
>         /usr/lib/jvm/java-6-sun/bin/native2ascii to provide
>         /usr/bin/native2ascii (native2ascii) in auto mode.
>         update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to
>         provide /usr/bin/rmic (rmic) in auto mode.
>         update-alternatives: using
>         /usr/lib/jvm/java-6-sun/bin/schemagen to provide
>         /usr/bin/schemagen (schemagen) in auto mode.
>         update-alternatives: using
>         /usr/lib/jvm/java-6-sun/bin/serialver to provide
>         /usr/bin/serialver (serialver) in auto mode.
>         update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen
>         to provide /usr/bin/wsgen (wsgen) in auto mode.
>         update-alternatives: using
>         /usr/lib/jvm/java-6-sun/bin/wsimport to provide
>         /usr/bin/wsimport (wsimport) in auto mode.
>         update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to
>         provide /usr/bin/xjc (xjc) in auto mode.
>         java version "1.6.0_26"
>         Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
>         Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
>
>         2011-10-11 22:11:36,293 DEBUG [jclouds.compute] (user thread
>         0) << options applied node(us-east-1/i-08153d68)
>         2011-10-11 22:11:36,296 INFO 
>         [org.apache.whirr.actions.NodeStarter] (pool-3-thread-2) Nodes
>         started: [[id=us-east-1/i-08153d68, providerId=i-08153d68,
>         group=hadoop, name=null, location=[id=us-east-1b, scope=ZONE,
>         description=us-east-1b, parent=us-east-1,
>         iso3166Codes=[US-VA], metadata={}], uri=null,
>         imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu,
>         version=10.04, arch=paravirtual, is64Bit=true,
>         description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>         state=RUNNING, loginPort=22, hostname=domU-12-31-39-16-E5-E8,
>         privateAddresses=[10.96.230.22],
>         publicAddresses=[184.72.177.130],
>         hardware=[id=m1.large, providerId=m1.large, name=null,
>         processors=[[cores=2.0, speed=2.0]], ram=7680,
>         volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1,
>         durable=false, isBootDevice=true], [id=null, type=LOCAL,
>         size=420.0, device=/dev/sdb, durable=false,
>         isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>         device=/dev/sdc, durable=false, isBootDevice=false]],
>         supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>         tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
>         2011-10-11 22:11:47,206 DEBUG [jclouds.compute] (user thread
>         7) << complete(true)
>         2011-10-11 22:11:47,282 DEBUG [jclouds.compute] (user thread
>         7) << stdout from setup-ubuntu as ubuntu@50.19.59.109
>         Hit http://security.ubuntu.com lucid-security/main Packages
>         Hit http://archive.canonical.com lucid/partner Packages
>         Hit http://security.ubuntu.com lucid-security/universe Packages
>         Hit http://security.ubuntu.com lucid-security/multiverse Packages
>         Hit http://security.ubuntu.com lucid-security/main Sources
>         Hit http://security.ubuntu.com lucid-security/universe Sources
>         Hit http://security.ubuntu.com lucid-security/multiverse Sources
>         Hit http://archive.canonical.com lucid/partner Sources
>         Reading package lists...
>         hadoop-0.20.2.tar.gz: OK
>
>         2011-10-11 22:11:47,338 DEBUG [jclouds.compute] (user thread
>         7) << stderr from setup-ubuntu as ubuntu@50.19.59.109
>         update-alternatives: using
>         /usr/lib/jvm/java-6-sun/bin/native2ascii to provide
>         /usr/bin/native2ascii (native2ascii) in auto mode.
>         update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to
>         provide /usr/bin/rmic (rmic) in auto mode.
>         update-alternatives: using
>         /usr/lib/jvm/java-6-sun/bin/schemagen to provide
>         /usr/bin/schemagen (schemagen) in auto mode.
>         update-alternatives: using
>         /usr/lib/jvm/java-6-sun/bin/serialver to provide
>         /usr/bin/serialver (serialver) in auto mode.
>         update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen
>         to provide /usr/bin/wsgen (wsgen) in auto mode.
>         update-alternatives: using
>         /usr/lib/jvm/java-6-sun/bin/wsimport to provide
>         /usr/bin/wsimport (wsimport) in auto mode.
>         update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to
>         provide /usr/bin/xjc (xjc) in auto mode.
>         java version "1.6.0_26"
>         Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
>         Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
>
>         2011-10-11 22:11:47,339 DEBUG [jclouds.compute] (user thread
>         7) << options applied node(us-east-1/i-12153d72)
>         2011-10-11 22:11:47,340 INFO 
>         [org.apache.whirr.actions.NodeStarter] (pool-3-thread-4) Nodes
>         started: [[id=us-east-1/i-12153d72, providerId=i-12153d72,
>         group=hadoop, name=null, location=[id=us-east-1b, scope=ZONE,
>         description=us-east-1b, parent=us-east-1,
>         iso3166Codes=[US-VA], metadata={}], uri=null,
>         imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu,
>         version=10.04, arch=paravirtual, is64Bit=true,
>         description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>         state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
>         privateAddresses=[10.97.51.240],
>         publicAddresses=[50.19.59.109], hardware=[id=m1.large,
>         providerId=m1.large, name=null, processors=[[cores=2.0,
>         speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL,
>         size=10.0, device=/dev/sda1, durable=false,
>         isBootDevice=true], [id=null, type=LOCAL, size=420.0,
>         device=/dev/sdb, durable=false, isBootDevice=false], [id=null,
>         type=LOCAL, size=420.0, device=/dev/sdc, durable=false,
>         isBootDevice=false]],
>         supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>         tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
>         2011-10-11 22:11:47,464 INFO 
>         [org.apache.whirr.service.FirewallManager] (main) Authorizing
>         firewall ingress to [Instance{roles=[hadoop-namenode,
>         hadoop-jobtracker], publicIp=50.19.59.109,
>         privateIp=10.97.51.240, id=us-east-1/i-12153d72,
>         nodeMetadata=[id=us-east-1/i-12153d72, providerId=i-12153d72,
>         group=hadoop, name=null, location=[id=us-east-1b, scope=ZONE,
>         description=us-east-1b, parent=us-east-1,
>         iso3166Codes=[US-VA], metadata={}], uri=null,
>         imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu,
>         version=10.04, arch=paravirtual, is64Bit=true,
>         description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>         state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
>         privateAddresses=[10.97.51.240],
>         publicAddresses=[50.19.59.109], hardware=[id=m1.large,
>         providerId=m1.large, name=null, processors=[[cores=2.0,
>         speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL,
>         size=10.0, device=/dev/sda1, durable=false,
>         isBootDevice=true], [id=null, type=LOCAL, size=420.0,
>         device=/dev/sdb, durable=false, isBootDevice=false], [id=null,
>         type=LOCAL, size=420.0, device=/dev/sdc, durable=false,
>         isBootDevice=false]],
>         supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>         tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]}] on
>         ports [50070, 50030] for [75.101.253.125/32]
>         2011-10-11 22:11:47,529 WARN 
>         [org.apache.whirr.service.jclouds.FirewallSettings] (main) The
>         permission '75.101.253.125/32-1-50070-50070' has already been
>         authorized on the specified group
>         2011-10-11 22:11:47,574 WARN 
>         [org.apache.whirr.service.jclouds.FirewallSettings] (main) The
>         permission '75.101.253.125/32-1-50030-50030' has already been
>         authorized on the specified group
>         2011-10-11 22:11:47,575 INFO 
>         [org.apache.whirr.service.FirewallManager] (main) Authorizing
>         firewall ingress to [Instance{roles=[hadoop-namenode,
>         hadoop-jobtracker], publicIp=50.19.59.109,
>         privateIp=10.97.51.240, id=us-east-1/i-12153d72,
>         nodeMetadata=[id=us-east-1/i-12153d72, providerId=i-12153d72,
>         group=hadoop, name=null, location=[id=us-east-1b, scope=ZONE,
>         description=us-east-1b, parent=us-east-1,
>         iso3166Codes=[US-VA], metadata={}], uri=null,
>         imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu,
>         version=10.04, arch=paravirtual, is64Bit=true,
>         description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>         state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
>         privateAddresses=[10.97.51.240],
>         publicAddresses=[50.19.59.109], hardware=[id=m1.large,
>         providerId=m1.large, name=null, processors=[[cores=2.0,
>         speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL,
>         size=10.0, device=/dev/sda1, durable=false,
>         isBootDevice=true], [id=null, type=LOCAL, size=420.0,
>         device=/dev/sdb, durable=false, isBootDevice=false], [id=null,
>         type=LOCAL, size=420.0, device=/dev/sdc, durable=false,
>         isBootDevice=false]],
>         supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>         tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]}] on
>         ports [8020, 8021] for [50.19.59.109/32]
>         2011-10-11 22:11:47,806 DEBUG [jclouds.compute] (main) >>
>         listing node details matching(withIds([us-east-1/i-08153d68,
>         us-east-1/i-12153d72]))
>         2011-10-11 22:11:50,315 DEBUG [jclouds.compute] (main) << list(2)
>         2011-10-11 22:11:50,315 DEBUG
>         [org.apache.whirr.actions.ConfigureClusterAction] (main) Nodes
>         in cluster: [[id=us-east-1/i-08153d68, providerId=i-08153d68,
>         group=hadoop, name=null, location=[id=us-east-1b, scope=ZONE,
>         description=us-east-1b, parent=us-east-1,
>         iso3166Codes=[US-VA], metadata={}], uri=null,
>         imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu,
>         version=10.04, arch=paravirtual, is64Bit=true,
>         description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>         state=RUNNING, loginPort=22, hostname=domU-12-31-39-16-E5-E8,
>         privateAddresses=[10.96.230.22],
>         publicAddresses=[184.72.177.130],
>         hardware=[id=m1.large, providerId=m1.large, name=null,
>         processors=[[cores=2.0, speed=2.0]], ram=7680,
>         volumes=[[id=null, type=LOCAL, size=10.0, device=/dev/sda1,
>         durable=false, isBootDevice=true], [id=null, type=LOCAL,
>         size=420.0, device=/dev/sdb, durable=false,
>         isBootDevice=false], [id=null, type=LOCAL, size=420.0,
>         device=/dev/sdc, durable=false, isBootDevice=false]],
>         supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>         tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]],
>         [id=us-east-1/i-12153d72, providerId=i-12153d72, group=hadoop,
>         name=null, location=[id=us-east-1b, scope=ZONE,
>         description=us-east-1b, parent=us-east-1,
>         iso3166Codes=[US-VA], metadata={}], uri=null,
>         imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu,
>         version=10.04, arch=paravirtual, is64Bit=true,
>         description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>         state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
>         privateAddresses=[10.97.51.240],
>         publicAddresses=[50.19.59.109], hardware=[id=m1.large,
>         providerId=m1.large, name=null, processors=[[cores=2.0,
>         speed=2.0]], ram=7680, volumes=[[id=null, type=LOCAL,
>         size=10.0, device=/dev/sda1, durable=false,
>         isBootDevice=true], [id=null, type=LOCAL, size=420.0,
>         device=/dev/sdb, durable=false, isBootDevice=false], [id=null,
>         type=LOCAL, size=420.0, device=/dev/sdc, durable=false,
>         isBootDevice=false]],
>         supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>         tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
>         2011-10-11 22:11:50,316 INFO 
>         [org.apache.whirr.actions.ConfigureClusterAction] (main)
>         Running configuration script on nodes: [us-east-1/i-08153d68]
>         2011-10-11 22:11:50,318 DEBUG
>         [org.apache.whirr.actions.ConfigureClusterAction] (main) script:
>         #!/bin/bash
>         set +u
>         shopt -s xpg_echo
>         shopt -s expand_aliases
>         unset PATH JAVA_HOME LD_LIBRARY_PATH
>         function abort {
>            echo "aborting: $@" 1>&2
>            exit 1
>         }
>         #
>         # Licensed to the Apache Software Foundation (ASF) under one
>         or more
>         # contributor license agreements.  See the NOTICE file
>         distributed with
>         # this work for additional information regarding copyright
>         ownership.
>         # The ASF licenses this file to You under the Apache License,
>         Version 2.0
>         # (the "License"); you may not use this file except in
>         compliance with
>         # the License.  You may obtain a copy of the License at
>         #
>         # http://www.apache.org/licenses/LICENSE-2.0
>         #
>         # Unless required by applicable law or agreed to in writing,
>         software
>         # distributed under the License is distributed on an "AS IS"
>         BASIS,
>         # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
>         or implied.
>         # See the License for the specific language governing
>         permissions and
>         # limitations under the License.
>         #
>         function configure_hadoop() {
>           local OPTIND
>           local OPTARG
>
>           ROLES=$1
>           shift
>
>           CLOUD_PROVIDER=
>           while getopts "c:" OPTION; do
>             case $OPTION in
>             c)
>               CLOUD_PROVIDER="$OPTARG"
>               ;;
>             esac
>           done
>
>           case $CLOUD_PROVIDER in
>             ec2 | aws-ec2 )
>               # Alias /mnt as /data
>               ln -s /mnt /data
>               ;;
>             *)
>               ;;
>           esac
>
>           HADOOP_HOME=/usr/local/hadoop
>           HADOOP_CONF_DIR=$HADOOP_HOME/conf
>
>           mkdir -p /data/hadoop
>           chown hadoop:hadoop /data/hadoop
>           if [ ! -e /data/tmp ]; then
>             mkdir /data/tmp
>             chmod a+rwxt /data/tmp
>           fi
>           mkdir /etc/hadoop
>           ln -s $HADOOP_CONF_DIR /etc/hadoop/conf
>
>           # Copy generated configuration files in place
>           cp /tmp/{core,hdfs,mapred}-site.xml $HADOOP_CONF_DIR
>
>           # Keep PID files in a non-temporary directory
>           sed -i -e "s|# export HADOOP_PID_DIR=.*|export
>         HADOOP_PID_DIR=/var/run/hadoop|" \
>             $HADOOP_CONF_DIR/hadoop-env.sh
>           mkdir -p /var/run/hadoop
>           chown -R hadoop:hadoop /var/run/hadoop
>
>           # Set SSH options within the cluster
>           sed -i -e 's|# export HADOOP_SSH_OPTS=.*|export
>         HADOOP_SSH_OPTS="-o StrictHostKeyChecking=no"|' \
>             $HADOOP_CONF_DIR/hadoop-env.sh
>
>           # Disable IPv6
>           sed -i -e 's|# export HADOOP_OPTS=.*|export
>         HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"|' \
>             $HADOOP_CONF_DIR/hadoop-env.sh
>
>           # Hadoop logs should be on the /data partition
>           sed -i -e 's|# export HADOOP_LOG_DIR=.*|export
>         HADOOP_LOG_DIR=/var/log/hadoop/logs|' \
>             $HADOOP_CONF_DIR/hadoop-env.sh
>           rm -rf /var/log/hadoop
>           mkdir /data/hadoop/logs
>           chown hadoop:hadoop /data/hadoop/logs
>           ln -s /data/hadoop/logs /var/log/hadoop
>           chown -R hadoop:hadoop /var/log/hadoop
>
>           for role in $(echo "$ROLES" | tr "," "\n"); do
>             case $role in
>             hadoop-namenode)
>               start_namenode
>               ;;
>             hadoop-secondarynamenode)
>               start_hadoop_daemon secondarynamenode
>               ;;
>             hadoop-jobtracker)
>               start_hadoop_daemon jobtracker
>               ;;
>             hadoop-datanode)
>               start_hadoop_daemon datanode
>               ;;
>             hadoop-tasktracker)
>               start_hadoop_daemon tasktracker
>               ;;
>             esac
>           done
>
>         }
>
>         function start_namenode() {
>           if which dpkg &> /dev/null; then
>             AS_HADOOP="su -s /bin/bash - hadoop -c"
>           elif which rpm &> /dev/null; then
>             AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
>           fi
>
>           # Format HDFS
>           [ ! -e /data/hadoop/hdfs ] && $AS_HADOOP "$HADOOP_HOME/bin/hadoop namenode -format"
>
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start namenode"
>
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop dfsadmin -safemode wait"
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /user"
>           # The following is questionable, as it allows a user to delete another user
>           # It's needed to allow users to create their own user directories
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /user"
>
>           # Create temporary directory for Pig and Hive in HDFS
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /tmp"
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /tmp"
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir
>         /user/hive/warehouse"
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w
>         /user/hive/warehouse"
>
>         }
>
>         function start_hadoop_daemon() {
>           if which dpkg &> /dev/null; then
>             AS_HADOOP="su -s /bin/bash - hadoop -c"
>           elif which rpm &> /dev/null; then
>             AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
>           fi
>           $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start $1"
>         }
>
>         export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
>         cat >> /tmp/core-site.xml <<'END_OF_FILE'
>         <configuration>
>         <property>
>         <name>hadoop.tmp.dir</name>
>         <value>/data/tmp/hadoop-${user.name}</value>
>         </property>
>         <property>
>         <name>io.file.buffer.size</name>
>         <value>65536</value>
>         </property>
>         <property>
>         <name>hadoop.rpc.socket.factory.class.default</name>
>         <value>org.apache.hadoop.net.StandardSocketFactory</value>
>         <final>true</final>
>         </property>
>         <property>
>         <name>hadoop.rpc.socket.factory.class.ClientProtocol</name>
>         <value></value>
>         </property>
>         <property>
>         <name>hadoop.rpc.socket.factory.class.JobSubmissionProtocol</name>
>         <value></value>
>         </property>
>         <property>
>         <name>fs.trash.interval</name>
>         <value>1440</value>
>         </property>
>         <property>
>         <name>fs.default.name</name>
>         <value>hdfs://ec2-50-19-59-109.compute-1.amazonaws.com:8020/</value>
>         </configuration>
>         END_OF_FILE
>         cat >> /tmp/hdfs-site.xml <<'END_OF_FILE'
>         <configuration>
>         <property>
>         <name>dfs.block.size</name>
>         <value>134217728</value>
>         </property>
>         <property>
>         <name>dfs.data.dir</name>
>         <value>/data/hadoop/hdfs/data</value>
>         </property>
>         <property>
>         <name>dfs.datanode.du.reserved</name>
>         <value>1073741824</value>
>         </property>
>         <property>
>         <name>dfs.name.dir</name>
>         <value>/data/hadoop/hdfs/name</value>
>         </property>
>         <property>
>         <name>fs.checkpoint.dir</name>
>         <value>/data/hadoop/hdfs/secondary</value>
>         </property>
>         </configuration>
>         END_OF_FILE
>         cat >> /tmp/mapred-site.xml <<'END_OF_FILE'
>         <configuration>
>         <property>
>         <name>mapred.local.dir</name>
>         <value>/data/hadoop/mapred/local</value>
>         </property>
>         <property>
>         <name>mapred.map.tasks.speculative.execution</name>
>         <value>true</value>
>         </property>
>         <property>
>         <name>mapred.reduce.tasks.speculative.execution</name>
>         <value>false</value>
>         </property>
>         <property>
>         <name>mapred.system.dir</name>
>         <value>/hadoop/system/mapred</value>
>         </property>
>         <property>
>         <name>mapreduce.jobtracker.staging.root.dir</name>
>         <value>/user</value>
>         </property>
>         <property>
>         <name>mapred.compress.map.output</name>
>         <value>true</value>
>         </property>
>         <property>
>         <name>mapred.output.compression.type</name>
>         <value>BLOCK</value>
>         </property>
>         <property>
>         <name>mapred.child.java.opts</name>
>         <value>-Xmx550m</value>
>         </property>
>         <property>
>         <name>mapred.child.ulimit</name>
>         <value>1126400</value>
>         </property>
>         <property>
>         <name>mapred.tasktracker.map.tasks.maximum</name>
>         <value>2</value>
>         </property>
>         <property>
>         <name>mapred.tasktracker.reduce.tasks.maximum</name>
>         <value>2</value>
>         </property>
>         <property>
>         <name>mapred.reduce.tasks</name>
>         <value>2</value>
>         </property>
>         <property>
>         <name>mapred.job.tracker</name>
>         <value>ec2-50-19-59-109.compute-1.amazonaws.com:8021</value>
>         </property>
>         </configuration>
>         END_OF_FILE
>         configure_hadoop hadoop-datanode,hadoop-tasktracker -c aws-ec2 || exit 1
>         exit 0
>
>         2011-10-11 22:11:50,970 DEBUG [jclouds.compute] (user thread 7) >> blocking on socket [address=184.72.177.130, port=22] for 600000 seconds
>         2011-10-11 22:11:53,992 DEBUG [jclouds.compute] (user thread 7) << socket [address=184.72.177.130, port=22] opened
>         2011-10-11 22:12:57,316 DEBUG [org.apache.whirr.service.ComputeCache] (Thread-1) closing ComputeServiceContext  [id=aws-ec2, endpoint=https://ec2.us-east-1.amazonaws.com, apiVersion=2010-06-15, identity=1FTR7NCN01CEAR6FK2G2, iso3166Codes=[US-VA, US-CA, IE, SG, JP-13]]
>
>
>         On Tue, Oct 11, 2011 at 3:31 PM, Andrei Savu
>         <savu.andrei@gmail.com> wrote:
>
>             Chris -
>
>             We've seen this issue in the past. I will take a closer
>             look in the morning (in ~10 hours). Can you upload the
>             full log somewhere? Also make sure that the SSH keypair
>             does not need a password.
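>
>             (A quick way to check, assuming the key is the one at
>             ~/.ssh/id_rsa_whirr:
>
>             ssh-keygen -y -f ~/.ssh/id_rsa_whirr
>
>             prints the public key immediately, and prompts for a
>             passphrase only if the key has one.)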
>
>             Cheers,
>
>             -- Andrei Savu / andreisavu.ro
>
>
>
> -- 
> Chris Schilling
> Sr. Data Fiend
> Clever Sense, Inc.
> "Curating the World Around You!"
> --------------------------------------------------------------
> Winner of the 2011 Fortune Brainstorm Start-up Idol 
> <http://tech.fortune.cnn.com/2011/07/20/startup-idol-brainstorm-clever-sense/>
>
> Wanna join the Clever Team? We're hiring! 
> <http://www.thecleversense.com/jobs.html>
> --------------------------------------------------------------
>


Re: authentication trouble

Posted by Chris Schilling <ch...@thecleversense.com>.
Customized the Java install script? I downloaded the latest release and just
unpacked it and tried to run it:
curl -O http://www.apache.org/dist/incubator/whirr/whirr-0.6.0-incubating/whirr-0.6.0-incubating.tar.gz
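
Concretely, that was roughly the following (a sketch, assuming the standard
layout of the 0.6.0-incubating tarball; bin/whirr version should print the
build if the CLI itself is intact):

tar xzf whirr-0.6.0-incubating.tar.gz
cd whirr-0.6.0-incubating
bin/whirr version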

I can wait until morning.  Thanks for taking a look...
Chris


On Tue, Oct 11, 2011 at 3:47 PM, Andrei Savu <sa...@gmail.com> wrote:

> I see that you've customised the java install script. Are you building on
> top of the 0.6.0 release or on top of trunk?
>
> I see nothing strange in the log file. I will be able to say more tomorrow
> as I try to replicate the same behaviour.
>
> @Adrian I know that you've tried to launch a cluster from within the Amazon
> cloud. Any feedback on this? Thanks!
>
>
> On Tue, Oct 11, 2011 at 11:36 PM, Chris Schilling <
> chris@thecleversense.com> wrote:
>
>> Okay, no, the SSH keypair does not need a password.  I installed Whirr on a
>> separate EC2 instance, so this may be a communication issue between EC2
>> instances.
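>>
>> As a quick manual check from the launcher instance (a sketch, assuming the
>> same key and the node's public address from the log below):
>>
>> ssh -i ~/.ssh/id_rsa_whirr -o StrictHostKeyChecking=no ubuntu@184.72.177.130 true
>>
>> That is the same login the bootstrap uses, so if it fails the same way the
>> problem is the keypair or the security group rather than Whirr itself.
>>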
>> Here is the full whirr.log:
>>
>> 2011-10-11 22:08:47,807 DEBUG [org.apache.whirr.service.ComputeCache]
>> (main) creating new ComputeServiceContext
>> org.apache.whirr.service.ComputeCache$Key@1a689880
>> 2011-10-11 22:09:50,094 DEBUG [org.apache.whirr.service.ComputeCache]
>> (main) creating new ComputeServiceContext
>> org.apache.whirr.service.ComputeCache$Key@1a689880
>> 2011-10-11 22:09:56,433 DEBUG [org.apache.whirr.service.ComputeCache]
>> (main) created new ComputeServiceContext  [id=aws-ec2, endpoint=
>> https://ec2.us-east-1.amazonaws.com, apiVersion=2010-06-15,
>> identity=1FTR7NCN01CEAR6FK2G2, iso3166Codes=[US-VA, US-CA, IE, SG, JP-13]]
>> 2011-10-11 22:09:56,454 INFO
>> [org.apache.whirr.actions.BootstrapClusterAction] (main) Bootstrapping
>> cluster
>> 2011-10-11 22:09:56,455 INFO
>> [org.apache.whirr.actions.BootstrapClusterAction] (main) Configuring
>> template
>> 2011-10-11 22:09:56,473 DEBUG
>> [org.apache.whirr.actions.BootstrapClusterAction] (main) Running script:
>> #!/bin/bash
>> set +u
>> shopt -s xpg_echo
>> shopt -s expand_aliases
>> unset PATH JAVA_HOME LD_LIBRARY_PATH
>> function abort {
>>    echo "aborting: $@" 1>&2
>>    exit 1
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function configure_hostnames() {
>>   local OPTIND
>>   local OPTARG
>>
>>   CLOUD_PROVIDER=
>>   while getopts "c:" OPTION; do
>>     case $OPTION in
>>     c)
>>       CLOUD_PROVIDER="$OPTARG"
>>       shift $((OPTIND-1)); OPTIND=1
>>       ;;
>>     esac
>>   done
>>
>>   case $CLOUD_PROVIDER in
>>     cloudservers | cloudservers-uk | cloudservers-us )
>>       if which dpkg &> /dev/null; then
>>         PRIVATE_IP=`/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`
>>         HOSTNAME=`echo $PRIVATE_IP | tr . -`.static.cloud-ips.com
>>         echo $HOSTNAME > /etc/hostname
>>         sed -i -e "s/$PRIVATE_IP.*/$PRIVATE_IP $HOSTNAME/" /etc/hosts
>>         set +e
>>         /etc/init.d/hostname restart
>>         set -e
>>         sleep 2
>>         hostname
>>       fi
>>       ;;
>>   esac
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function install_java_deb() {
>>   # Enable multiverse
>>   # TODO: check that it is not already enabled
>>   sed -i -e 's/universe$/universe multiverse/' /etc/apt/sources.list
>>
>>   DISTRO=`lsb_release -s -c`
>>   cat > /etc/apt/sources.list.d/canonical.com.list <<EOF
>> deb http://archive.canonical.com/ubuntu $DISTRO partner
>> deb-src http://archive.canonical.com/ubuntu $DISTRO partner
>> EOF
>>
>>   apt-get update
>>
>>   echo 'sun-java6-bin   shared/accepted-sun-dlj-v1-1    boolean true
>> sun-java6-jdk   shared/accepted-sun-dlj-v1-1    boolean true
>> sun-java6-jre   shared/accepted-sun-dlj-v1-1    boolean true
>> sun-java6-jre   sun-java6-jre/stopthread        boolean true
>> sun-java6-jre   sun-java6-jre/jcepolicy note
>> sun-java6-bin   shared/present-sun-dlj-v1-1     note
>> sun-java6-jdk   shared/present-sun-dlj-v1-1     note
>> sun-java6-jre   shared/present-sun-dlj-v1-1     note
>> ' | debconf-set-selections
>>
>>   apt-get -y install sun-java6-jdk
>>
>>   echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> /etc/profile
>>   export JAVA_HOME=/usr/lib/jvm/java-6-sun
>>   java -version
>>
>> }
>>
>> function install_java_rpm() {
>>   MACHINE_TYPE=`uname -m`
>>   if [ ${MACHINE_TYPE} == 'x86_64' ]; then
>>     JDK_PACKAGE=jdk-6u21-linux-x64-rpm.bin
>>   else
>>     JDK_PACKAGE=jdk-6u21-linux-i586-rpm.bin
>>   fi
>>   JDK_INSTALL_PATH=/usr/java
>>   mkdir -p $JDK_INSTALL_PATH
>>   cd $JDK_INSTALL_PATH
>>   wget http://whirr-third-party.s3.amazonaws.com/$JDK_PACKAGE
>>   chmod +x $JDK_PACKAGE
>>   mv /bin/more /bin/more.no
>>   yes | ./$JDK_PACKAGE -noregister
>>   mv /bin/more.no /bin/more
>>   rm -f *.rpm $JDK_PACKAGE
>>
>>   export JAVA_HOME=$(ls -d $JDK_INSTALL_PATH/jdk*)
>>   echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
>>   alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 17000
>>   alternatives --set java $JAVA_HOME/bin/java
>>   java -version
>> }
>>
>> function install_java() {
>>   if which dpkg &> /dev/null; then
>>     install_java_deb
>>   elif which rpm &> /dev/null; then
>>     install_java_rpm
>>   fi
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function install_tarball() {
>>   if [[ "$1" != "" ]]; then
>>     # Download a .tar.gz file and extract to target dir
>>
>>     local tar_url=$1
>>     local tar_file=`basename $tar_url`
>>     local tar_file_md5=`basename $tar_url.md5`
>>
>>     local target=${2:-/usr/local/}
>>     mkdir -p $target
>>
>>     local curl="curl -L --silent --show-error --fail --connect-timeout 10
>> --max-time 600 --retry 5"
>>     # any download should take less than 10 minutes
>>
>>     for retry_count in `seq 1 3`;
>>     do
>>       $curl -O $tar_url || true
>>       $curl -O $tar_url.md5 || true
>>
>>       if [ ! -e $tar_file_md5 ]; then
>>         echo "Could not download  $tar_url.md5. Continuing."
>>         break;
>>       elif md5sum -c $tar_file_md5; then
>>         break;
>>       else
>>         # workaround for cassandra broken .md5 files
>>         if [ `md5sum $tar_file | awk '{print $1}'` = `cat $tar_file_md5` ]; then
>>           break;
>>         fi
>>
>>         rm -f $tar_file $tar_file_md5
>>       fi
>>
>>       if [ ! $retry_count -eq "3" ]; then
>>         sleep 10
>>       fi
>>     done
>>
>>     if [ ! -e $tar_file ]; then
>>       echo "Failed to download $tar_file. Aborting."
>>       exit 1
>>     fi
>>
>>     tar xzf $tar_file -C $target
>>     rm -f $tar_file $tar_file_md5
>>   fi
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function update_repo() {
>>   if which dpkg &> /dev/null; then
>>     sudo apt-get update
>>   elif which rpm &> /dev/null; then
>>     yum update -y yum
>>   fi
>> }
>>
>> function install_hadoop() {
>>   local OPTIND
>>   local OPTARG
>>
>>   CLOUD_PROVIDER=
>>   HADOOP_TAR_URL=
>>   while getopts "c:u:" OPTION; do
>>     case $OPTION in
>>     c)
>>       CLOUD_PROVIDER="$OPTARG"
>>       ;;
>>     u)
>>       HADOOP_TAR_URL="$OPTARG"
>>       ;;
>>     esac
>>   done
>>
>>   HADOOP_HOME=/usr/local/$(basename $HADOOP_TAR_URL .tar.gz)
>>
>>   update_repo
>>
>>   if ! id hadoop &> /dev/null; then
>>     useradd hadoop
>>   fi
>>
>>   install_tarball $HADOOP_TAR_URL
>>   ln -s $HADOOP_HOME /usr/local/hadoop
>>
>>   echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
>>   echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >> ~root/.bashrc
>> }
>>
>> export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
>> configure_hostnames -c aws-ec2 || exit 1
>> install_java || exit 1
>> install_tarball || exit 1
>> install_hadoop -c aws-ec2 -u http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz || exit 1
>> exit 0
>>
>> 2011-10-11 22:09:56,490 DEBUG [jclouds.compute] (main) >> searching
>> params([biggest=false, fastest=false, imageName=null, imageDescription=null,
>> imageId=us-east-1/ami-da0cf8b3, imagePredicate=null, imageVersion=null,
>> location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
>> iso3166Codes=[US-VA], metadata={}], minCores=0.0, minRam=0, osFamily=null,
>> osName=null, osDescription=null, osVersion=null, osArch=null, os64Bit=null,
>> hardwareId=m1.large])
>> 2011-10-11 22:09:56,491 DEBUG [jclouds.compute] (user thread 0) >>
>> providing images
>> 2011-10-11 22:09:56,497 DEBUG [jclouds.compute] (user thread 1) >>
>> providing images
>> 2011-10-11 22:09:58,046 DEBUG [jclouds.compute] (user thread 1) <<
>> images(32)
>> 2011-10-11 22:10:01,457 DEBUG [jclouds.compute] (user thread 0) <<
>> images(3123)
>> 2011-10-11 22:10:02,183 DEBUG [jclouds.compute] (main) <<   matched
>> hardware(m1.large)
>> 2011-10-11 22:10:02,184 DEBUG [jclouds.compute] (main) <<   matched
>> image(us-east-1/ami-da0cf8b3)
>> 2011-10-11 22:10:02,194 INFO
>> [org.apache.whirr.actions.BootstrapClusterAction] (main) Configuring
>> template
>> 2011-10-11 22:10:02,196 INFO  [org.apache.whirr.actions.NodeStarter]
>> (pool-3-thread-2) Starting 1 node(s) with roles [hadoop-datanode,
>> hadoop-tasktracker]
>> 2011-10-11 22:10:02,196 DEBUG [jclouds.compute] (pool-3-thread-2) >>
>> running 1 node group(hadoop) location(us-east-1)
>> image(us-east-1/ami-da0cf8b3) hardwareProfile(m1.large)
>> options([groupIds=[], keyPair=null, noKeyPair=false,
>> monitoringEnabled=false, placementGroup=null, noPlacementGroup=false,
>> subnetId=null, userData=null, blockDeviceMappings=[], spotPrice=null,
>> spotOptions=[formParameters={}]])
>> 2011-10-11 22:10:02,199 DEBUG [jclouds.compute] (pool-3-thread-2) >>
>> searching params([biggest=false, fastest=false, imageName=null,
>> imageDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
>> imageId=null, imagePredicate=null, imageVersion=20101020,
>> location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
>> iso3166Codes=[US-VA], metadata={}], minCores=2.0, minRam=7680,
>> osFamily=ubuntu, osName=null,
>> osDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
>> osVersion=10.04, osArch=paravirtual, os64Bit=true, hardwareId=null])
>> 2011-10-11 22:10:02,199 DEBUG
>> [org.apache.whirr.actions.BootstrapClusterAction] (main) Running script:
>> #!/bin/bash
>> set +u
>> shopt -s xpg_echo
>> shopt -s expand_aliases
>> unset PATH JAVA_HOME LD_LIBRARY_PATH
>> function abort {
>>    echo "aborting: $@" 1>&2
>>    exit 1
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function configure_hostnames() {
>>   local OPTIND
>>   local OPTARG
>>
>>   CLOUD_PROVIDER=
>>   while getopts "c:" OPTION; do
>>     case $OPTION in
>>     c)
>>       CLOUD_PROVIDER="$OPTARG"
>>       shift $((OPTIND-1)); OPTIND=1
>>       ;;
>>     esac
>>   done
>>
>>   case $CLOUD_PROVIDER in
>>     cloudservers | cloudservers-uk | cloudservers-us )
>>       if which dpkg &> /dev/null; then
>>         PRIVATE_IP=`/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`
>>         HOSTNAME=`echo $PRIVATE_IP | tr . -`.static.cloud-ips.com
>>         echo $HOSTNAME > /etc/hostname
>>         sed -i -e "s/$PRIVATE_IP.*/$PRIVATE_IP $HOSTNAME/" /etc/hosts
>>         set +e
>>         /etc/init.d/hostname restart
>>         set -e
>>         sleep 2
>>         hostname
>>       fi
>>       ;;
>>   esac
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function install_java_deb() {
>>   # Enable multiverse
>>   # TODO: check that it is not already enabled
>>   sed -i -e 's/universe$/universe multiverse/' /etc/apt/sources.list
>>
>>   DISTRO=`lsb_release -s -c`
>>   cat > /etc/apt/sources.list.d/canonical.com.list <<EOF
>> deb http://archive.canonical.com/ubuntu $DISTRO partner
>> deb-src http://archive.canonical.com/ubuntu $DISTRO partner
>> EOF
>>
>>   apt-get update
>>
>>   echo 'sun-java6-bin   shared/accepted-sun-dlj-v1-1    boolean true
>> sun-java6-jdk   shared/accepted-sun-dlj-v1-1    boolean true
>> sun-java6-jre   shared/accepted-sun-dlj-v1-1    boolean true
>> sun-java6-jre   sun-java6-jre/stopthread        boolean true
>> sun-java6-jre   sun-java6-jre/jcepolicy note
>> sun-java6-bin   shared/present-sun-dlj-v1-1     note
>> sun-java6-jdk   shared/present-sun-dlj-v1-1     note
>> sun-java6-jre   shared/present-sun-dlj-v1-1     note
>> ' | debconf-set-selections
>>
>>   apt-get -y install sun-java6-jdk
>>
>>   echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> /etc/profile
>>   export JAVA_HOME=/usr/lib/jvm/java-6-sun
>>   java -version
>>
>> }
>>
>> function install_java_rpm() {
>>   MACHINE_TYPE=`uname -m`
>>   if [ ${MACHINE_TYPE} == 'x86_64' ]; then
>>     JDK_PACKAGE=jdk-6u21-linux-x64-rpm.bin
>>   else
>>     JDK_PACKAGE=jdk-6u21-linux-i586-rpm.bin
>>   fi
>>   JDK_INSTALL_PATH=/usr/java
>>   mkdir -p $JDK_INSTALL_PATH
>>   cd $JDK_INSTALL_PATH
>>   wget http://whirr-third-party.s3.amazonaws.com/$JDK_PACKAGE
>>   chmod +x $JDK_PACKAGE
>>   mv /bin/more /bin/more.no
>>   yes | ./$JDK_PACKAGE -noregister
>>   mv /bin/more.no /bin/more
>>   rm -f *.rpm $JDK_PACKAGE
>>
>>   export JAVA_HOME=$(ls -d $JDK_INSTALL_PATH/jdk*)
>>   echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
>>   alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 17000
>>   alternatives --set java $JAVA_HOME/bin/java
>>   java -version
>> }
>>
>> function install_java() {
>>   if which dpkg &> /dev/null; then
>>     install_java_deb
>>   elif which rpm &> /dev/null; then
>>     install_java_rpm
>>   fi
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function install_tarball() {
>>   if [[ "$1" != "" ]]; then
>>     # Download a .tar.gz file and extract to target dir
>>
>>     local tar_url=$1
>>     local tar_file=`basename $tar_url`
>>     local tar_file_md5=`basename $tar_url.md5`
>>
>>     local target=${2:-/usr/local/}
>>     mkdir -p $target
>>
>>     local curl="curl -L --silent --show-error --fail --connect-timeout 10
>> --max-time 600 --retry 5"
>>     # any download should take less than 10 minutes
>>
>>     for retry_count in `seq 1 3`;
>>     do
>>       $curl -O $tar_url || true
>>       $curl -O $tar_url.md5 || true
>>
>>       if [ ! -e $tar_file_md5 ]; then
>>         echo "Could not download  $tar_url.md5. Continuing."
>>         break;
>>       elif md5sum -c $tar_file_md5; then
>>         break;
>>       else
>>         # workaround for cassandra broken .md5 files
>>         if [ `md5sum $tar_file | awk '{print $1}'` = `cat $tar_file_md5` ]; then
>>           break;
>>         fi
>>
>>         rm -f $tar_file $tar_file_md5
>>       fi
>>
>>       if [ ! $retry_count -eq "3" ]; then
>>         sleep 10
>>       fi
>>     done
>>
>>     if [ ! -e $tar_file ]; then
>>       echo "Failed to download $tar_file. Aborting."
>>       exit 1
>>     fi
>>
>>     tar xzf $tar_file -C $target
>>     rm -f $tar_file $tar_file_md5
>>   fi
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function update_repo() {
>>   if which dpkg &> /dev/null; then
>>     sudo apt-get update
>>   elif which rpm &> /dev/null; then
>>     yum update -y yum
>>   fi
>> }
>>
>> function install_hadoop() {
>>   local OPTIND
>>   local OPTARG
>>
>>   CLOUD_PROVIDER=
>>   HADOOP_TAR_URL=
>>   while getopts "c:u:" OPTION; do
>>     case $OPTION in
>>     c)
>>       CLOUD_PROVIDER="$OPTARG"
>>       ;;
>>     u)
>>       HADOOP_TAR_URL="$OPTARG"
>>       ;;
>>     esac
>>   done
>>
>>   HADOOP_HOME=/usr/local/$(basename $HADOOP_TAR_URL .tar.gz)
>>
>>   update_repo
>>
>>   if ! id hadoop &> /dev/null; then
>>     useradd hadoop
>>   fi
>>
>>   install_tarball $HADOOP_TAR_URL
>>   ln -s $HADOOP_HOME /usr/local/hadoop
>>
>>   echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
>>   echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >> ~root/.bashrc
>> }
>>
>> export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
>> configure_hostnames -c aws-ec2 || exit 1
>> install_java || exit 1
>> install_tarball || exit 1
>> install_hadoop -c aws-ec2 -u http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz || exit 1
>> exit 0
>>
>> 2011-10-11 22:10:02,200 DEBUG [jclouds.compute] (main) >> searching
>> params([biggest=false, fastest=false, imageName=null, imageDescription=null,
>> imageId=us-east-1/ami-da0cf8b3, imagePredicate=null, imageVersion=null,
>> location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
>> iso3166Codes=[US-VA], metadata={}], minCores=0.0, minRam=0, osFamily=null,
>> osName=null, osDescription=null, osVersion=null, osArch=null, os64Bit=null,
>> hardwareId=m1.large])
>> 2011-10-11 22:10:02,203 DEBUG [jclouds.compute] (main) <<   matched
>> hardware(m1.large)
>> 2011-10-11 22:10:02,203 DEBUG [jclouds.compute] (main) <<   matched
>> image(us-east-1/ami-da0cf8b3)
>> 2011-10-11 22:10:02,204 INFO  [org.apache.whirr.actions.NodeStarter]
>> (pool-3-thread-4) Starting 1 node(s) with roles [hadoop-namenode,
>> hadoop-jobtracker]
>> 2011-10-11 22:10:02,205 DEBUG [jclouds.compute] (pool-3-thread-4) >>
>> running 1 node group(hadoop) location(us-east-1)
>> image(us-east-1/ami-da0cf8b3) hardwareProfile(m1.large)
>> options([groupIds=[], keyPair=null, noKeyPair=false,
>> monitoringEnabled=false, placementGroup=null, noPlacementGroup=false,
>> subnetId=null, userData=null, blockDeviceMappings=[], spotPrice=null,
>> spotOptions=[formParameters={}]])
>> 2011-10-11 22:10:02,205 DEBUG [jclouds.compute] (pool-3-thread-4) >>
>> searching params([biggest=false, fastest=false, imageName=null,
>> imageDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
>> imageId=null, imagePredicate=null, imageVersion=20101020,
>> location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
>> iso3166Codes=[US-VA], metadata={}], minCores=2.0, minRam=7680,
>> osFamily=ubuntu, osName=null,
>> osDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
>> osVersion=10.04, osArch=paravirtual, os64Bit=true, hardwareId=null])
>> 2011-10-11 22:10:02,321 DEBUG [jclouds.compute] (pool-3-thread-4) <<
>> matched hardware(m1.large)
>> 2011-10-11 22:10:02,325 DEBUG [jclouds.compute] (pool-3-thread-4) <<
>> matched image(us-east-1/ami-da0cf8b3)
>> 2011-10-11 22:10:02,327 DEBUG [jclouds.compute] (pool-3-thread-4) >>
>> creating keyPair region(us-east-1) group(hadoop)
>> 2011-10-11 22:10:02,342 DEBUG [jclouds.compute] (pool-3-thread-2) <<
>> matched hardware(m1.large)
>> 2011-10-11 22:10:02,366 DEBUG [jclouds.compute] (pool-3-thread-2) <<
>> matched image(us-east-1/ami-da0cf8b3)
>> 2011-10-11 22:10:02,367 DEBUG [jclouds.compute] (pool-3-thread-2) >>
>> creating keyPair region(us-east-1) group(hadoop)
>> 2011-10-11 22:10:03,645 DEBUG [jclouds.compute] (pool-3-thread-2) <<
>> created keyPair(jclouds#hadoop#us-east-1#2)
>> 2011-10-11 22:10:03,646 DEBUG [jclouds.compute] (pool-3-thread-2) >>
>> creating securityGroup region(us-east-1) name(jclouds#hadoop#us-east-1)
>> 2011-10-11 22:10:03,776 DEBUG [jclouds.compute] (pool-3-thread-2) <<
>> reused securityGroup(jclouds#hadoop#us-east-1)
>> 2011-10-11 22:10:03,776 DEBUG [jclouds.compute] (pool-3-thread-2) >>
>> running 1 instance region(us-east-1) zone(null) ami(ami-da0cf8b3)
>> params({InstanceType=[m1.large], SecurityGroup.1=[jclouds#hadoop#us-east-1],
>> KeyName=[jclouds#hadoop#us-east-1#2]})
>> 2011-10-11 22:10:04,765 DEBUG [jclouds.compute] (pool-3-thread-4) <<
>> created keyPair(jclouds#hadoop#us-east-1#0)
>> 2011-10-11 22:10:04,765 DEBUG [jclouds.compute] (pool-3-thread-4) >>
>> running 1 instance region(us-east-1) zone(null) ami(ami-da0cf8b3)
>> params({InstanceType=[m1.large], SecurityGroup.1=[jclouds#hadoop#us-east-1],
>> KeyName=[jclouds#hadoop#us-east-1#0]})
>> 2011-10-11 22:10:05,067 DEBUG [jclouds.compute] (pool-3-thread-2) <<
>> started instances([region=us-east-1, name=i-08153d68])
>> 2011-10-11 22:10:05,128 DEBUG [jclouds.compute] (pool-3-thread-2) <<
>> present instances([region=us-east-1, name=i-08153d68])
>> 2011-10-11 22:10:05,186 DEBUG [jclouds.compute] (pool-3-thread-4) <<
>> started instances([region=us-east-1, name=i-12153d72])
>> 2011-10-11 22:10:05,249 DEBUG [jclouds.compute] (pool-3-thread-4) <<
>> present instances([region=us-east-1, name=i-12153d72])
>> 2011-10-11 22:10:38,407 DEBUG [jclouds.compute] (user thread 0) >>
>> blocking on socket [address=184.72.177.130, port=22] for 600000 seconds
>> 2011-10-11 22:10:43,449 DEBUG [jclouds.compute] (user thread 0) << socket
>> [address=184.72.177.130, port=22] opened
>> 2011-10-11 22:10:44,681 DEBUG [jclouds.compute] (user thread 7) >>
>> blocking on socket [address=50.19.59.109, port=22] for 600000 seconds
>> 2011-10-11 22:10:46,462 DEBUG [jclouds.compute] (user thread 0) >> running
>> [sudo ./setup-ubuntu init] as ubuntu@184.72.177.130
>> 2011-10-11 22:10:46,534 DEBUG [jclouds.compute] (user thread 0) << init(0)
>> 2011-10-11 22:10:46,535 DEBUG [jclouds.compute] (user thread 0) >> running
>> [sudo ./setup-ubuntu start] as ubuntu@184.72.177.130
>> 2011-10-11 22:10:47,653 DEBUG [jclouds.compute] (user thread 0) <<
>> start(0)
>> 2011-10-11 22:10:56,729 DEBUG [jclouds.compute] (user thread 7) << socket
>> [address=50.19.59.109, port=22] opened
>> 2011-10-11 22:11:00,695 DEBUG [jclouds.compute] (user thread 7) >> running
>> [sudo ./setup-ubuntu init] as ubuntu@50.19.59.109
>> 2011-10-11 22:11:00,954 DEBUG [jclouds.compute] (user thread 7) << init(0)
>> 2011-10-11 22:11:00,954 DEBUG [jclouds.compute] (user thread 7) >> running
>> [sudo ./setup-ubuntu start] as ubuntu@50.19.59.109
>> 2011-10-11 22:11:02,078 DEBUG [jclouds.compute] (user thread 7) <<
>> start(0)
>> 2011-10-11 22:11:36,157 DEBUG [jclouds.compute] (user thread 0) <<
>> complete(true)
>> 2011-10-11 22:11:36,235 DEBUG [jclouds.compute] (user thread 0) << stdout
>> from setup-ubuntu as ubuntu@184.72.177.130
>> Hit http://security.ubuntu.com lucid-security/main Packages
>> Hit http://archive.canonical.com lucid/partner Packages
>> Hit http://security.ubuntu.com lucid-security/universe Packages
>> Hit http://security.ubuntu.com lucid-security/multiverse Packages
>> Hit http://security.ubuntu.com lucid-security/main Sources
>> Hit http://security.ubuntu.com lucid-security/universe Sources
>> Hit http://security.ubuntu.com lucid-security/multiverse Sources
>> Hit http://archive.canonical.com lucid/partner Sources
>> Reading package lists...
>> hadoop-0.20.2.tar.gz: OK
>>
>> 2011-10-11 22:11:36,291 DEBUG [jclouds.compute] (user thread 0) << stderr
>> from setup-ubuntu as ubuntu@184.72.177.130
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
>> provide /usr/bin/native2ascii (native2ascii) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
>> /usr/bin/rmic (rmic) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to
>> provide /usr/bin/schemagen (schemagen) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to
>> provide /usr/bin/serialver (serialver) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to provide
>> /usr/bin/wsgen (wsgen) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to provide
>> /usr/bin/wsimport (wsimport) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
>> /usr/bin/xjc (xjc) in auto mode.
>> java version "1.6.0_26"
>> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
>> Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
>>
>> 2011-10-11 22:11:36,293 DEBUG [jclouds.compute] (user thread 0) << options
>> applied node(us-east-1/i-08153d68)
>> 2011-10-11 22:11:36,296 INFO  [org.apache.whirr.actions.NodeStarter]
>> (pool-3-thread-2) Nodes started: [[id=us-east-1/i-08153d68,
>> providerId=i-08153d68, group=hadoop, name=null, location=[id=us-east-1b,
>> scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
>> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
>> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-16-E5-E8,
>> privateAddresses=[10.96.230.22], publicAddresses=[184.72.177.130],
>> hardware=[id=m1.large, providerId=m1.large, name=null,
>> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
>> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
>> durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
>> 2011-10-11 22:11:47,206 DEBUG [jclouds.compute] (user thread 7) <<
>> complete(true)
>> 2011-10-11 22:11:47,282 DEBUG [jclouds.compute] (user thread 7) << stdout
>> from setup-ubuntu as ubuntu@50.19.59.109
>> Hit http://security.ubuntu.com lucid-security/main Packages
>> Hit http://archive.canonical.com lucid/partner Packages
>> Hit http://security.ubuntu.com lucid-security/universe Packages
>> Hit http://security.ubuntu.com lucid-security/multiverse Packages
>> Hit http://security.ubuntu.com lucid-security/main Sources
>> Hit http://security.ubuntu.com lucid-security/universe Sources
>> Hit http://security.ubuntu.com lucid-security/multiverse Sources
>> Hit http://archive.canonical.com lucid/partner Sources
>> Reading package lists...
>> hadoop-0.20.2.tar.gz: OK
>>
>> 2011-10-11 22:11:47,338 DEBUG [jclouds.compute] (user thread 7) << stderr
>> from setup-ubuntu as ubuntu@50.19.59.109
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
>> provide /usr/bin/native2ascii (native2ascii) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
>> /usr/bin/rmic (rmic) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to
>> provide /usr/bin/schemagen (schemagen) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to
>> provide /usr/bin/serialver (serialver) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to provide
>> /usr/bin/wsgen (wsgen) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to provide
>> /usr/bin/wsimport (wsimport) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
>> /usr/bin/xjc (xjc) in auto mode.
>> java version "1.6.0_26"
>> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
>> Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
>>
>> 2011-10-11 22:11:47,339 DEBUG [jclouds.compute] (user thread 7) << options
>> applied node(us-east-1/i-12153d72)
>> 2011-10-11 22:11:47,340 INFO  [org.apache.whirr.actions.NodeStarter]
>> (pool-3-thread-4) Nodes started: [[id=us-east-1/i-12153d72,
>> providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
>> scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
>> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
>> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
>> privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
>> hardware=[id=m1.large, providerId=m1.large, name=null,
>> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
>> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
>> durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
>> 2011-10-11 22:11:47,464 INFO  [org.apache.whirr.service.FirewallManager]
>> (main) Authorizing firewall ingress to [Instance{roles=[hadoop-namenode,
>> hadoop-jobtracker], publicIp=50.19.59.109, privateIp=10.97.51.240,
>> id=us-east-1/i-12153d72, nodeMetadata=[id=us-east-1/i-12153d72,
>> providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
>> scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
>> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
>> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
>> privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
>> hardware=[id=m1.large, providerId=m1.large, name=null,
>> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
>> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
>> durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]}] on ports [50070,
>> 50030] for [75.101.253.125/32]
>> 2011-10-11 22:11:47,529 WARN
>> [org.apache.whirr.service.jclouds.FirewallSettings] (main) The permission '
>> 75.101.253.125/32-1-50070-50070' has already been authorized on the
>> specified group
>> 2011-10-11 22:11:47,574 WARN
>> [org.apache.whirr.service.jclouds.FirewallSettings] (main) The permission '
>> 75.101.253.125/32-1-50030-50030' has already been authorized on the
>> specified group
>> 2011-10-11 22:11:47,575 INFO  [org.apache.whirr.service.FirewallManager]
>> (main) Authorizing firewall ingress to [Instance{roles=[hadoop-namenode,
>> hadoop-jobtracker], publicIp=50.19.59.109, privateIp=10.97.51.240,
>> id=us-east-1/i-12153d72, nodeMetadata=[id=us-east-1/i-12153d72,
>> providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
>> scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
>> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
>> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
>> privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
>> hardware=[id=m1.large, providerId=m1.large, name=null,
>> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
>> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
>> durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]}] on ports [8020,
>> 8021] for [50.19.59.109/32]
>> 2011-10-11 22:11:47,806 DEBUG [jclouds.compute] (main) >> listing node
>> details matching(withIds([us-east-1/i-08153d68, us-east-1/i-12153d72]))
>> 2011-10-11 22:11:50,315 DEBUG [jclouds.compute] (main) << list(2)
>> 2011-10-11 22:11:50,315 DEBUG
>> [org.apache.whirr.actions.ConfigureClusterAction] (main) Nodes in cluster:
>> [[id=us-east-1/i-08153d68, providerId=i-08153d68, group=hadoop, name=null,
>> location=[id=us-east-1b, scope=ZONE, description=us-east-1b,
>> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
>> imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04,
>> arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-16-E5-E8,
>> privateAddresses=[10.96.230.22], publicAddresses=[184.72.177.130],
>> hardware=[id=m1.large, providerId=m1.large, name=null,
>> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
>> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
>> durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]],
>> [id=us-east-1/i-12153d72, providerId=i-12153d72, group=hadoop, name=null,
>> location=[id=us-east-1b, scope=ZONE, description=us-east-1b,
>> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
>> imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04,
>> arch=paravirtual, is64Bit=true,
>> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
>> state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
>> privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
>> hardware=[id=m1.large, providerId=m1.large, name=null,
>> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
>> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
>> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
>> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
>> durable=false, isBootDevice=false]],
>> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
>> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
>> 2011-10-11 22:11:50,316 INFO
>> [org.apache.whirr.actions.ConfigureClusterAction] (main) Running
>> configuration script on nodes: [us-east-1/i-08153d68]
>> 2011-10-11 22:11:50,318 DEBUG
>> [org.apache.whirr.actions.ConfigureClusterAction] (main) script:
>> #!/bin/bash
>> set +u
>> shopt -s xpg_echo
>> shopt -s expand_aliases
>> unset PATH JAVA_HOME LD_LIBRARY_PATH
>> function abort {
>>    echo "aborting: $@" 1>&2
>>    exit 1
>> }
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #     http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>> function configure_hadoop() {
>>   local OPTIND
>>   local OPTARG
>>
>>   ROLES=$1
>>   shift
>>
>>   CLOUD_PROVIDER=
>>   while getopts "c:" OPTION; do
>>     case $OPTION in
>>     c)
>>       CLOUD_PROVIDER="$OPTARG"
>>       ;;
>>     esac
>>   done
>>
>>   case $CLOUD_PROVIDER in
>>     ec2 | aws-ec2 )
>>       # Alias /mnt as /data
>>       ln -s /mnt /data
>>       ;;
>>     *)
>>       ;;
>>   esac
>>
>>   HADOOP_HOME=/usr/local/hadoop
>>   HADOOP_CONF_DIR=$HADOOP_HOME/conf
>>
>>   mkdir -p /data/hadoop
>>   chown hadoop:hadoop /data/hadoop
>>   if [ ! -e /data/tmp ]; then
>>     mkdir /data/tmp
>>     chmod a+rwxt /data/tmp
>>   fi
>>   mkdir /etc/hadoop
>>   ln -s $HADOOP_CONF_DIR /etc/hadoop/conf
>>
>>   # Copy generated configuration files in place
>>   cp /tmp/{core,hdfs,mapred}-site.xml $HADOOP_CONF_DIR
>>
>>   # Keep PID files in a non-temporary directory
>>   sed -i -e "s|# export HADOOP_PID_DIR=.*|export
>> HADOOP_PID_DIR=/var/run/hadoop|" \
>>     $HADOOP_CONF_DIR/hadoop-env.sh
>>   mkdir -p /var/run/hadoop
>>   chown -R hadoop:hadoop /var/run/hadoop
>>
>>   # Set SSH options within the cluster
>>   sed -i -e 's|# export HADOOP_SSH_OPTS=.*|export HADOOP_SSH_OPTS="-o StrictHostKeyChecking=no"|' \
>>     $HADOOP_CONF_DIR/hadoop-env.sh
>>
>>   # Disable IPv6
>>   sed -i -e 's|# export HADOOP_OPTS=.*|export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"|' \
>>     $HADOOP_CONF_DIR/hadoop-env.sh
>>
>>   # Hadoop logs should be on the /data partition
>>   sed -i -e 's|# export HADOOP_LOG_DIR=.*|export HADOOP_LOG_DIR=/var/log/hadoop/logs|' \
>>     $HADOOP_CONF_DIR/hadoop-env.sh
>>   rm -rf /var/log/hadoop
>>   mkdir /data/hadoop/logs
>>   chown hadoop:hadoop /data/hadoop/logs
>>   ln -s /data/hadoop/logs /var/log/hadoop
>>   chown -R hadoop:hadoop /var/log/hadoop
>>
>>   for role in $(echo "$ROLES" | tr "," "\n"); do
>>     case $role in
>>     hadoop-namenode)
>>       start_namenode
>>       ;;
>>     hadoop-secondarynamenode)
>>       start_hadoop_daemon secondarynamenode
>>       ;;
>>     hadoop-jobtracker)
>>       start_hadoop_daemon jobtracker
>>       ;;
>>     hadoop-datanode)
>>       start_hadoop_daemon datanode
>>       ;;
>>     hadoop-tasktracker)
>>       start_hadoop_daemon tasktracker
>>       ;;
>>     esac
>>   done
>>
>> }
>>
>> function start_namenode() {
>>   if which dpkg &> /dev/null; then
>>     AS_HADOOP="su -s /bin/bash - hadoop -c"
>>   elif which rpm &> /dev/null; then
>>     AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
>>   fi
>>
>>   # Format HDFS
>>   [ ! -e /data/hadoop/hdfs ] && $AS_HADOOP "$HADOOP_HOME/bin/hadoop namenode -format"
>>
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start namenode"
>>
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop dfsadmin -safemode wait"
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /user"
>>   # The following is questionable, as it allows a user to delete another user
>>   # It's needed to allow users to create their own user directories
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /user"
>>
>>   # Create temporary directory for Pig and Hive in HDFS
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /tmp"
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /tmp"
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse"
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /user/hive/warehouse"
>>
>> }
>>
>> function start_hadoop_daemon() {
>>   if which dpkg &> /dev/null; then
>>     AS_HADOOP="su -s /bin/bash - hadoop -c"
>>   elif which rpm &> /dev/null; then
>>     AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
>>   fi
>>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start $1"
>> }
>>
>> export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
>> cat >> /tmp/core-site.xml <<'END_OF_FILE'
>> <configuration>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/data/tmp/hadoop-${user.name}</value>
>>   </property>
>>   <property>
>>     <name>io.file.buffer.size</name>
>>     <value>65536</value>
>>   </property>
>>   <property>
>>     <name>hadoop.rpc.socket.factory.class.default</name>
>>     <value>org.apache.hadoop.net.StandardSocketFactory</value>
>>     <final>true</final>
>>   </property>
>>   <property>
>>     <name>hadoop.rpc.socket.factory.class.ClientProtocol</name>
>>     <value></value>
>>   </property>
>>   <property>
>>     <name>hadoop.rpc.socket.factory.class.JobSubmissionProtocol</name>
>>     <value></value>
>>   </property>
>>   <property>
>>     <name>fs.trash.interval</name>
>>     <value>1440</value>
>>   </property>
>>   <property>
>>     <name>fs.default.name</name>
>>     <value>hdfs://ec2-50-19-59-109.compute-1.amazonaws.com:8020/</value>
>>   </property>
>> </configuration>
>> END_OF_FILE
>> cat >> /tmp/hdfs-site.xml <<'END_OF_FILE'
>> <configuration>
>>   <property>
>>     <name>dfs.block.size</name>
>>     <value>134217728</value>
>>   </property>
>>   <property>
>>     <name>dfs.data.dir</name>
>>     <value>/data/hadoop/hdfs/data</value>
>>   </property>
>>   <property>
>>     <name>dfs.datanode.du.reserved</name>
>>     <value>1073741824</value>
>>   </property>
>>   <property>
>>     <name>dfs.name.dir</name>
>>     <value>/data/hadoop/hdfs/name</value>
>>   </property>
>>   <property>
>>     <name>fs.checkpoint.dir</name>
>>     <value>/data/hadoop/hdfs/secondary</value>
>>   </property>
>> </configuration>
>> END_OF_FILE
>> cat >> /tmp/mapred-site.xml <<'END_OF_FILE'
>> <configuration>
>>   <property>
>>     <name>mapred.local.dir</name>
>>     <value>/data/hadoop/mapred/local</value>
>>   </property>
>>   <property>
>>     <name>mapred.map.tasks.speculative.execution</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>mapred.reduce.tasks.speculative.execution</name>
>>     <value>false</value>
>>   </property>
>>   <property>
>>     <name>mapred.system.dir</name>
>>     <value>/hadoop/system/mapred</value>
>>   </property>
>>   <property>
>>     <name>mapreduce.jobtracker.staging.root.dir</name>
>>     <value>/user</value>
>>   </property>
>>   <property>
>>     <name>mapred.compress.map.output</name>
>>     <value>true</value>
>>   </property>
>>   <property>
>>     <name>mapred.output.compression.type</name>
>>     <value>BLOCK</value>
>>   </property>
>>   <property>
>>     <name>mapred.child.java.opts</name>
>>     <value>-Xmx550m</value>
>>   </property>
>>   <property>
>>     <name>mapred.child.ulimit</name>
>>     <value>1126400</value>
>>   </property>
>>   <property>
>>     <name>mapred.tasktracker.map.tasks.maximum</name>
>>     <value>2</value>
>>   </property>
>>   <property>
>>     <name>mapred.tasktracker.reduce.tasks.maximum</name>
>>     <value>2</value>
>>   </property>
>>   <property>
>>     <name>mapred.reduce.tasks</name>
>>     <value>2</value>
>>   </property>
>>   <property>
>>     <name>mapred.job.tracker</name>
>>     <value>ec2-50-19-59-109.compute-1.amazonaws.com:8021</value>
>>   </property>
>> </configuration>
>> END_OF_FILE
>> configure_hadoop hadoop-datanode,hadoop-tasktracker -c aws-ec2 || exit 1
>> exit 0
>>
>> 2011-10-11 22:11:50,970 DEBUG [jclouds.compute] (user thread 7) >>
>> blocking on socket [address=184.72.177.130, port=22] for 600000 seconds
>> 2011-10-11 22:11:53,992 DEBUG [jclouds.compute] (user thread 7) << socket
>> [address=184.72.177.130, port=22] opened
>> 2011-10-11 22:12:57,316 DEBUG [org.apache.whirr.service.ComputeCache]
>> (Thread-1) closing ComputeServiceContext  [id=aws-ec2, endpoint=
>> https://ec2.us-east-1.amazonaws.com, apiVersion=2010-06-15,
>> identity=1FTR7NCN01CEAR6FK2G2, iso3166Codes=[US-VA, US-CA, IE, SG, JP-13]]
>>
>>
>> On Tue, Oct 11, 2011 at 3:31 PM, Andrei Savu <sa...@gmail.com> wrote:
>>
>>> Chris -
>>>
>>> We've seen this issue in the past. I will take a closer look in the
>>> morning (in ~10 hours). Can you upload the full log somewhere? Also make
>>> sure that the SSH keypair does not need a password.
>>>
>>> Cheers,
>>>
>>> -- Andrei Savu / andreisavu.ro


-- 
Chris Schilling
Sr. Data Fiend
Clever Sense, Inc.
"Curating the World Around You!"
--------------------------------------------------------------
Winner of the 2011 Fortune Brainstorm Start-up Idol <http://tech.fortune.cnn.com/2011/07/20/startup-idol-brainstorm-clever-sense/>

Wanna join the Clever Team? We're hiring! <http://www.thecleversense.com/jobs.html>
--------------------------------------------------------------

Re: authentication trouble

Posted by Chris Schilling <ch...@thecleversense.com>.
Hello Andrei,

I was just curious whether there was any more word on this authentication trouble. I actually had the CDH hadoop+whirr setup running from EC2; however, it seemed to be an older version, since it was only using the deprecated hadoop-site.xml to establish the cluster.
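
For reference: since Hadoop 0.20, the single deprecated hadoop-site.xml is split into core-site.xml, hdfs-site.xml, and mapred-site.xml, the three files the configure script in the log above writes. A quick way to tell which layout a node ended up with, assuming the tarball install under /usr/local/hadoop that these scripts create:

  ls /usr/local/hadoop/conf/*-site.xml
  # core-site.xml hdfs-site.xml mapred-site.xml  -> current split layout
  # hadoop-site.xml only                         -> deprecated single-file layout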

Thanks!
Chris


Re: authentication trouble

Posted by Andrei Savu <sa...@gmail.com>.
I see that you've customised the java install script. Are you building on
top of the 0.6.0 release or on top of trunk?
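
A quick way to confirm which build the CLI is (assuming the whirr launcher script is on your PATH):

  whirr version
  # a release build should print its version, e.g. Apache Whirr 0.6.0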

I see nothing strange in the log file. I should be able to say more tomorrow,
once I try to replicate the same behaviour.

@Adrian I know that you've tried to launch a cluster from within the Amazon
cloud. Any feedback on this? Thanks!
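
In the meantime, one way to double-check that the keypair really has no passphrase (a minimal check, assuming standard OpenSSH tooling and the id_rsa_whirr path from the earlier config):

  # prints the derived public key and succeeds only if no passphrase is needed
  ssh-keygen -y -P '' -f ~/.ssh/id_rsa_whirr >/dev/null && echo "no passphrase"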


On Tue, Oct 11, 2011 at 11:36 PM, Chris Schilling
<ch...@thecleversense.com> wrote:

> Okay, no, the SSH keypair does not need a password. I installed Whirr on a
> separate EC2 instance, so this may be an internal communication issue
> between EC2 instances. Here is the full whirr.log:
>
> 2011-10-11 22:08:47,807 DEBUG [org.apache.whirr.service.ComputeCache]
> (main) creating new ComputeServiceContext
> org.apache.whirr.service.ComputeCache$Key@1a689880
> 2011-10-11 22:09:50,094 DEBUG [org.apache.whirr.service.ComputeCache]
> (main) creating new ComputeServiceContext
> org.apache.whirr.service.ComputeCache$Key@1a689880
> 2011-10-11 22:09:56,433 DEBUG [org.apache.whirr.service.ComputeCache]
> (main) created new ComputeServiceContext  [id=aws-ec2, endpoint=
> https://ec2.us-east-1.amazonaws.com, apiVersion=2010-06-15,
> identity=1FTR7NCN01CEAR6FK2G2, iso3166Codes=[US-VA, US-CA, IE, SG, JP-13]]
> 2011-10-11 22:09:56,454 INFO
> [org.apache.whirr.actions.BootstrapClusterAction] (main) Bootstrapping
> cluster
> 2011-10-11 22:09:56,455 INFO
> [org.apache.whirr.actions.BootstrapClusterAction] (main) Configuring
> template
> 2011-10-11 22:09:56,473 DEBUG
> [org.apache.whirr.actions.BootstrapClusterAction] (main) Running script:
> #!/bin/bash
> set +u
> shopt -s xpg_echo
> shopt -s expand_aliases
> unset PATH JAVA_HOME LD_LIBRARY_PATH
> function abort {
>    echo "aborting: $@" 1>&2
>    exit 1
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function configure_hostnames() {
>   local OPTIND
>   local OPTARG
>
>   CLOUD_PROVIDER=
>   while getopts "c:" OPTION; do
>     case $OPTION in
>     c)
>       CLOUD_PROVIDER="$OPTARG"
>       shift $((OPTIND-1)); OPTIND=1
>       ;;
>     esac
>   done
>
>   case $CLOUD_PROVIDER in
>     cloudservers | cloudservers-uk | cloudservers-us )
>       if which dpkg &> /dev/null; then
>         PRIVATE_IP=`/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`
>         HOSTNAME=`echo $PRIVATE_IP | tr . -`.static.cloud-ips.com
>         echo $HOSTNAME > /etc/hostname
>         sed -i -e "s/$PRIVATE_IP.*/$PRIVATE_IP $HOSTNAME/" /etc/hosts
>         set +e
>         /etc/init.d/hostname restart
>         set -e
>         sleep 2
>         hostname
>       fi
>       ;;
>   esac
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function install_java_deb() {
>   # Enable multiverse
>   # TODO: check that it is not already enabled
>   sed -i -e 's/universe$/universe multiverse/' /etc/apt/sources.list
>
>   DISTRO=`lsb_release -s -c`
>   cat > /etc/apt/sources.list.d/canonical.com.list <<EOF
> deb http://archive.canonical.com/ubuntu $DISTRO partner
> deb-src http://archive.canonical.com/ubuntu $DISTRO partner
> EOF
>
>   apt-get update
>
>   echo 'sun-java6-bin   shared/accepted-sun-dlj-v1-1    boolean true
> sun-java6-jdk   shared/accepted-sun-dlj-v1-1    boolean true
> sun-java6-jre   shared/accepted-sun-dlj-v1-1    boolean true
> sun-java6-jre   sun-java6-jre/stopthread        boolean true
> sun-java6-jre   sun-java6-jre/jcepolicy note
> sun-java6-bin   shared/present-sun-dlj-v1-1     note
> sun-java6-jdk   shared/present-sun-dlj-v1-1     note
> sun-java6-jre   shared/present-sun-dlj-v1-1     note
> ' | debconf-set-selections
>
>   apt-get -y install sun-java6-jdk
>
>   echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> /etc/profile
>   export JAVA_HOME=/usr/lib/jvm/java-6-sun
>   java -version
>
> }
>
> function install_java_rpm() {
>   MACHINE_TYPE=`uname -m`
>   if [ ${MACHINE_TYPE} == 'x86_64' ]; then
>     JDK_PACKAGE=jdk-6u21-linux-x64-rpm.bin
>   else
>     JDK_PACKAGE=jdk-6u21-linux-i586-rpm.bin
>   fi
>   JDK_INSTALL_PATH=/usr/java
>   mkdir -p $JDK_INSTALL_PATH
>   cd $JDK_INSTALL_PATH
>   wget http://whirr-third-party.s3.amazonaws.com/$JDK_PACKAGE
>   chmod +x $JDK_PACKAGE
>   mv /bin/more /bin/more.no
>   yes | ./$JDK_PACKAGE -noregister
>   mv /bin/more.no /bin/more
>   rm -f *.rpm $JDK_PACKAGE
>
>   export JAVA_HOME=$(ls -d $JDK_INSTALL_PATH/jdk*)
>   echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
>   alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 17000
>   alternatives --set java $JAVA_HOME/bin/java
>   java -version
> }
>
> function install_java() {
>   if which dpkg &> /dev/null; then
>     install_java_deb
>   elif which rpm &> /dev/null; then
>     install_java_rpm
>   fi
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function install_tarball() {
>   if [[ "$1" != "" ]]; then
>     # Download a .tar.gz file and extract to target dir
>
>     local tar_url=$1
>     local tar_file=`basename $tar_url`
>     local tar_file_md5=`basename $tar_url.md5`
>
>     local target=${2:-/usr/local/}
>     mkdir -p $target
>
>     local curl="curl -L --silent --show-error --fail --connect-timeout 10
> --max-time 600 --retry 5"
>     # any download should take less than 10 minutes
>
>     for retry_count in `seq 1 3`;
>     do
>       $curl -O $tar_url || true
>       $curl -O $tar_url.md5 || true
>
>       if [ ! -e $tar_file_md5 ]; then
>         echo "Could not download  $tar_url.md5. Continuing."
>         break;
>       elif md5sum -c $tar_file_md5; then
>         break;
>       else
>         # workaround for cassandra broken .md5 files
>         if [ `md5sum $tar_file | awk '{print $1}'` = `cat $tar_file_md5` ]; then
>           break;
>         fi
>
>         rm -f $tar_file $tar_file_md5
>       fi
>
>       if [ ! $retry_count -eq "3" ]; then
>         sleep 10
>       fi
>     done
>
>     if [ ! -e $tar_file ]; then
>       echo "Failed to download $tar_file. Aborting."
>       exit 1
>     fi
>
>     tar xzf $tar_file -C $target
>     rm -f $tar_file $tar_file_md5
>   fi
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function update_repo() {
>   if which dpkg &> /dev/null; then
>     sudo apt-get update
>   elif which rpm &> /dev/null; then
>     yum update -y yum
>   fi
> }
>
> function install_hadoop() {
>   local OPTIND
>   local OPTARG
>
>   CLOUD_PROVIDER=
>   HADOOP_TAR_URL=
>   while getopts "c:u:" OPTION; do
>     case $OPTION in
>     c)
>       CLOUD_PROVIDER="$OPTARG"
>       ;;
>     u)
>       HADOOP_TAR_URL="$OPTARG"
>       ;;
>     esac
>   done
>
>   HADOOP_HOME=/usr/local/$(basename $HADOOP_TAR_URL .tar.gz)
>
>   update_repo
>
>   if ! id hadoop &> /dev/null; then
>     useradd hadoop
>   fi
>
>   install_tarball $HADOOP_TAR_URL
>   ln -s $HADOOP_HOME /usr/local/hadoop
>
>   echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
>   echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >> ~root/.bashrc
> }
>
> export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
> configure_hostnames -c aws-ec2 || exit 1
> install_java || exit 1
> install_tarball || exit 1
> install_hadoop -c aws-ec2 -u http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz || exit 1
> exit 0
>
> 2011-10-11 22:09:56,490 DEBUG [jclouds.compute] (main) >> searching
> params([biggest=false, fastest=false, imageName=null, imageDescription=null,
> imageId=us-east-1/ami-da0cf8b3, imagePredicate=null, imageVersion=null,
> location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
> iso3166Codes=[US-VA], metadata={}], minCores=0.0, minRam=0, osFamily=null,
> osName=null, osDescription=null, osVersion=null, osArch=null, os64Bit=null,
> hardwareId=m1.large])
> 2011-10-11 22:09:56,491 DEBUG [jclouds.compute] (user thread 0) >>
> providing images
> 2011-10-11 22:09:56,497 DEBUG [jclouds.compute] (user thread 1) >>
> providing images
> 2011-10-11 22:09:58,046 DEBUG [jclouds.compute] (user thread 1) <<
> images(32)
> 2011-10-11 22:10:01,457 DEBUG [jclouds.compute] (user thread 0) <<
> images(3123)
> 2011-10-11 22:10:02,183 DEBUG [jclouds.compute] (main) <<   matched
> hardware(m1.large)
> 2011-10-11 22:10:02,184 DEBUG [jclouds.compute] (main) <<   matched
> image(us-east-1/ami-da0cf8b3)
> 2011-10-11 22:10:02,194 INFO
> [org.apache.whirr.actions.BootstrapClusterAction] (main) Configuring
> template
> 2011-10-11 22:10:02,196 INFO  [org.apache.whirr.actions.NodeStarter]
> (pool-3-thread-2) Starting 1 node(s) with roles [hadoop-datanode,
> hadoop-tasktracker]
> 2011-10-11 22:10:02,196 DEBUG [jclouds.compute] (pool-3-thread-2) >>
> running 1 node group(hadoop) location(us-east-1)
> image(us-east-1/ami-da0cf8b3) hardwareProfile(m1.large)
> options([groupIds=[], keyPair=null, noKeyPair=false,
> monitoringEnabled=false, placementGroup=null, noPlacementGroup=false,
> subnetId=null, userData=null, blockDeviceMappings=[], spotPrice=null,
> spotOptions=[formParameters={}]])
> 2011-10-11 22:10:02,199 DEBUG [jclouds.compute] (pool-3-thread-2) >>
> searching params([biggest=false, fastest=false, imageName=null,
> imageDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
> imageId=null, imagePredicate=null, imageVersion=20101020,
> location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
> iso3166Codes=[US-VA], metadata={}], minCores=2.0, minRam=7680,
> osFamily=ubuntu, osName=null,
> osDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
> osVersion=10.04, osArch=paravirtual, os64Bit=true, hardwareId=null])
> 2011-10-11 22:10:02,199 DEBUG
> [org.apache.whirr.actions.BootstrapClusterAction] (main) Running script:
> #!/bin/bash
> set +u
> shopt -s xpg_echo
> shopt -s expand_aliases
> unset PATH JAVA_HOME LD_LIBRARY_PATH
> function abort {
>    echo "aborting: $@" 1>&2
>    exit 1
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function configure_hostnames() {
>   local OPTIND
>   local OPTARG
>
>   CLOUD_PROVIDER=
>   while getopts "c:" OPTION; do
>     case $OPTION in
>     c)
>       CLOUD_PROVIDER="$OPTARG"
>       shift $((OPTIND-1)); OPTIND=1
>       ;;
>     esac
>   done
>
>   case $CLOUD_PROVIDER in
>     cloudservers | cloudservers-uk | cloudservers-us )
>       if which dpkg &> /dev/null; then
>         PRIVATE_IP=`/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`
>         HOSTNAME=`echo $PRIVATE_IP | tr . -`.static.cloud-ips.com
>         echo $HOSTNAME > /etc/hostname
>         sed -i -e "s/$PRIVATE_IP.*/$PRIVATE_IP $HOSTNAME/" /etc/hosts
>         set +e
>         /etc/init.d/hostname restart
>         set -e
>         sleep 2
>         hostname
>       fi
>       ;;
>   esac
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function install_java_deb() {
>   # Enable multiverse
>   # TODO: check that it is not already enabled
>   sed -i -e 's/universe$/universe multiverse/' /etc/apt/sources.list
>
>   DISTRO=`lsb_release -s -c`
>   cat > /etc/apt/sources.list.d/canonical.com.list <<EOF
> deb http://archive.canonical.com/ubuntu $DISTRO partner
> deb-src http://archive.canonical.com/ubuntu $DISTRO partner
> EOF
>
>   apt-get update
>
>   echo 'sun-java6-bin   shared/accepted-sun-dlj-v1-1    boolean true
> sun-java6-jdk   shared/accepted-sun-dlj-v1-1    boolean true
> sun-java6-jre   shared/accepted-sun-dlj-v1-1    boolean true
> sun-java6-jre   sun-java6-jre/stopthread        boolean true
> sun-java6-jre   sun-java6-jre/jcepolicy note
> sun-java6-bin   shared/present-sun-dlj-v1-1     note
> sun-java6-jdk   shared/present-sun-dlj-v1-1     note
> sun-java6-jre   shared/present-sun-dlj-v1-1     note
> ' | debconf-set-selections
>
>   apt-get -y install sun-java6-jdk
>
>   echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> /etc/profile
>   export JAVA_HOME=/usr/lib/jvm/java-6-sun
>   java -version
>
> }
>
> function install_java_rpm() {
>   MACHINE_TYPE=`uname -m`
>   if [ ${MACHINE_TYPE} == 'x86_64' ]; then
>     JDK_PACKAGE=jdk-6u21-linux-x64-rpm.bin
>   else
>     JDK_PACKAGE=jdk-6u21-linux-i586-rpm.bin
>   fi
>   JDK_INSTALL_PATH=/usr/java
>   mkdir -p $JDK_INSTALL_PATH
>   cd $JDK_INSTALL_PATH
>   wget http://whirr-third-party.s3.amazonaws.com/$JDK_PACKAGE
>   chmod +x $JDK_PACKAGE
>   mv /bin/more /bin/more.no
>   yes | ./$JDK_PACKAGE -noregister
>   mv /bin/more.no /bin/more
>   rm -f *.rpm $JDK_PACKAGE
>
>   export JAVA_HOME=$(ls -d $JDK_INSTALL_PATH/jdk*)
>   echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
>   alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 17000
>   alternatives --set java $JAVA_HOME/bin/java
>   java -version
> }
>
> function install_java() {
>   if which dpkg &> /dev/null; then
>     install_java_deb
>   elif which rpm &> /dev/null; then
>     install_java_rpm
>   fi
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function install_tarball() {
>   if [[ "$1" != "" ]]; then
>     # Download a .tar.gz file and extract to target dir
>
>     local tar_url=$1
>     local tar_file=`basename $tar_url`
>     local tar_file_md5=`basename $tar_url.md5`
>
>     local target=${2:-/usr/local/}
>     mkdir -p $target
>
>     local curl="curl -L --silent --show-error --fail --connect-timeout 10
> --max-time 600 --retry 5"
>     # any download should take less than 10 minutes
>
>     for retry_count in `seq 1 3`;
>     do
>       $curl -O $tar_url || true
>       $curl -O $tar_url.md5 || true
>
>       if [ ! -e $tar_file_md5 ]; then
>         echo "Could not download  $tar_url.md5. Continuing."
>         break;
>       elif md5sum -c $tar_file_md5; then
>         break;
>       else
>         # workaround for cassandra broken .md5 files
>         if [ `md5sum $tar_file | awk '{print $1}'` = `cat $tar_file_md5` ]; then
>           break;
>         fi
>
>         rm -f $tar_file $tar_file_md5
>       fi
>
>       if [ ! $retry_count -eq "3" ]; then
>         sleep 10
>       fi
>     done
>
>     if [ ! -e $tar_file ]; then
>       echo "Failed to download $tar_file. Aborting."
>       exit 1
>     fi
>
>     tar xzf $tar_file -C $target
>     rm -f $tar_file $tar_file_md5
>   fi
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function update_repo() {
>   if which dpkg &> /dev/null; then
>     sudo apt-get update
>   elif which rpm &> /dev/null; then
>     yum update -y yum
>   fi
> }
>
> function install_hadoop() {
>   local OPTIND
>   local OPTARG
>
>   CLOUD_PROVIDER=
>   HADOOP_TAR_URL=
>   while getopts "c:u:" OPTION; do
>     case $OPTION in
>     c)
>       CLOUD_PROVIDER="$OPTARG"
>       ;;
>     u)
>       HADOOP_TAR_URL="$OPTARG"
>       ;;
>     esac
>   done
>
>   HADOOP_HOME=/usr/local/$(basename $HADOOP_TAR_URL .tar.gz)
>
>   update_repo
>
>   if ! id hadoop &> /dev/null; then
>     useradd hadoop
>   fi
>
>   install_tarball $HADOOP_TAR_URL
>   ln -s $HADOOP_HOME /usr/local/hadoop
>
>   echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
>   echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >> ~root/.bashrc
> }
>
> export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
> configure_hostnames -c aws-ec2 || exit 1
> install_java || exit 1
> install_tarball || exit 1
> install_hadoop -c aws-ec2 -u http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz || exit 1
> exit 0
>
> 2011-10-11 22:10:02,200 DEBUG [jclouds.compute] (main) >> searching
> params([biggest=false, fastest=false, imageName=null, imageDescription=null,
> imageId=us-east-1/ami-da0cf8b3, imagePredicate=null, imageVersion=null,
> location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
> iso3166Codes=[US-VA], metadata={}], minCores=0.0, minRam=0, osFamily=null,
> osName=null, osDescription=null, osVersion=null, osArch=null, os64Bit=null,
> hardwareId=m1.large])
> 2011-10-11 22:10:02,203 DEBUG [jclouds.compute] (main) <<   matched
> hardware(m1.large)
> 2011-10-11 22:10:02,203 DEBUG [jclouds.compute] (main) <<   matched
> image(us-east-1/ami-da0cf8b3)
> 2011-10-11 22:10:02,204 INFO  [org.apache.whirr.actions.NodeStarter]
> (pool-3-thread-4) Starting 1 node(s) with roles [hadoop-namenode,
> hadoop-jobtracker]
> 2011-10-11 22:10:02,205 DEBUG [jclouds.compute] (pool-3-thread-4) >>
> running 1 node group(hadoop) location(us-east-1)
> image(us-east-1/ami-da0cf8b3) hardwareProfile(m1.large)
> options([groupIds=[], keyPair=null, noKeyPair=false,
> monitoringEnabled=false, placementGroup=null, noPlacementGroup=false,
> subnetId=null, userData=null, blockDeviceMappings=[], spotPrice=null,
> spotOptions=[formParameters={}]])
> 2011-10-11 22:10:02,205 DEBUG [jclouds.compute] (pool-3-thread-4) >>
> searching params([biggest=false, fastest=false, imageName=null,
> imageDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
> imageId=null, imagePredicate=null, imageVersion=20101020,
> location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
> iso3166Codes=[US-VA], metadata={}], minCores=2.0, minRam=7680,
> osFamily=ubuntu, osName=null,
> osDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
> osVersion=10.04, osArch=paravirtual, os64Bit=true, hardwareId=null])
> 2011-10-11 22:10:02,321 DEBUG [jclouds.compute] (pool-3-thread-4) <<
> matched hardware(m1.large)
> 2011-10-11 22:10:02,325 DEBUG [jclouds.compute] (pool-3-thread-4) <<
> matched image(us-east-1/ami-da0cf8b3)
> 2011-10-11 22:10:02,327 DEBUG [jclouds.compute] (pool-3-thread-4) >>
> creating keyPair region(us-east-1) group(hadoop)
> 2011-10-11 22:10:02,342 DEBUG [jclouds.compute] (pool-3-thread-2) <<
> matched hardware(m1.large)
> 2011-10-11 22:10:02,366 DEBUG [jclouds.compute] (pool-3-thread-2) <<
> matched image(us-east-1/ami-da0cf8b3)
> 2011-10-11 22:10:02,367 DEBUG [jclouds.compute] (pool-3-thread-2) >>
> creating keyPair region(us-east-1) group(hadoop)
> 2011-10-11 22:10:03,645 DEBUG [jclouds.compute] (pool-3-thread-2) <<
> created keyPair(jclouds#hadoop#us-east-1#2)
> 2011-10-11 22:10:03,646 DEBUG [jclouds.compute] (pool-3-thread-2) >>
> creating securityGroup region(us-east-1) name(jclouds#hadoop#us-east-1)
> 2011-10-11 22:10:03,776 DEBUG [jclouds.compute] (pool-3-thread-2) << reused
> securityGroup(jclouds#hadoop#us-east-1)
> 2011-10-11 22:10:03,776 DEBUG [jclouds.compute] (pool-3-thread-2) >>
> running 1 instance region(us-east-1) zone(null) ami(ami-da0cf8b3)
> params({InstanceType=[m1.large], SecurityGroup.1=[jclouds#hadoop#us-east-1],
> KeyName=[jclouds#hadoop#us-east-1#2]})
> 2011-10-11 22:10:04,765 DEBUG [jclouds.compute] (pool-3-thread-4) <<
> created keyPair(jclouds#hadoop#us-east-1#0)
> 2011-10-11 22:10:04,765 DEBUG [jclouds.compute] (pool-3-thread-4) >>
> running 1 instance region(us-east-1) zone(null) ami(ami-da0cf8b3)
> params({InstanceType=[m1.large], SecurityGroup.1=[jclouds#hadoop#us-east-1],
> KeyName=[jclouds#hadoop#us-east-1#0]})
> 2011-10-11 22:10:05,067 DEBUG [jclouds.compute] (pool-3-thread-2) <<
> started instances([region=us-east-1, name=i-08153d68])
> 2011-10-11 22:10:05,128 DEBUG [jclouds.compute] (pool-3-thread-2) <<
> present instances([region=us-east-1, name=i-08153d68])
> 2011-10-11 22:10:05,186 DEBUG [jclouds.compute] (pool-3-thread-4) <<
> started instances([region=us-east-1, name=i-12153d72])
> 2011-10-11 22:10:05,249 DEBUG [jclouds.compute] (pool-3-thread-4) <<
> present instances([region=us-east-1, name=i-12153d72])
> 2011-10-11 22:10:38,407 DEBUG [jclouds.compute] (user thread 0) >> blocking
> on socket [address=184.72.177.130, port=22] for 600000 seconds
> 2011-10-11 22:10:43,449 DEBUG [jclouds.compute] (user thread 0) << socket
> [address=184.72.177.130, port=22] opened
> 2011-10-11 22:10:44,681 DEBUG [jclouds.compute] (user thread 7) >> blocking
> on socket [address=50.19.59.109, port=22] for 600000 seconds
> 2011-10-11 22:10:46,462 DEBUG [jclouds.compute] (user thread 0) >> running
> [sudo ./setup-ubuntu init] as ubuntu@184.72.177.130
> 2011-10-11 22:10:46,534 DEBUG [jclouds.compute] (user thread 0) << init(0)
> 2011-10-11 22:10:46,535 DEBUG [jclouds.compute] (user thread 0) >> running
> [sudo ./setup-ubuntu start] as ubuntu@184.72.177.130
> 2011-10-11 22:10:47,653 DEBUG [jclouds.compute] (user thread 0) << start(0)
> 2011-10-11 22:10:56,729 DEBUG [jclouds.compute] (user thread 7) << socket
> [address=50.19.59.109, port=22] opened
> 2011-10-11 22:11:00,695 DEBUG [jclouds.compute] (user thread 7) >> running
> [sudo ./setup-ubuntu init] as ubuntu@50.19.59.109
> 2011-10-11 22:11:00,954 DEBUG [jclouds.compute] (user thread 7) << init(0)
> 2011-10-11 22:11:00,954 DEBUG [jclouds.compute] (user thread 7) >> running
> [sudo ./setup-ubuntu start] as ubuntu@50.19.59.109
> 2011-10-11 22:11:02,078 DEBUG [jclouds.compute] (user thread 7) << start(0)
> 2011-10-11 22:11:36,157 DEBUG [jclouds.compute] (user thread 0) <<
> complete(true)
> 2011-10-11 22:11:36,235 DEBUG [jclouds.compute] (user thread 0) << stdout
> from setup-ubuntu as ubuntu@184.72.177.130
> Hit http://security.ubuntu.com lucid-security/main Packages
> Hit http://archive.canonical.com lucid/partner Packages
> Hit http://security.ubuntu.com lucid-security/universe Packages
> Hit http://security.ubuntu.com lucid-security/multiverse Packages
> Hit http://security.ubuntu.com lucid-security/main Sources
> Hit http://security.ubuntu.com lucid-security/universe Sources
> Hit http://security.ubuntu.com lucid-security/multiverse Sources
> Hit http://archive.canonical.com lucid/partner Sources
> Reading package lists...
> hadoop-0.20.2.tar.gz: OK
>
> 2011-10-11 22:11:36,291 DEBUG [jclouds.compute] (user thread 0) << stderr
> from setup-ubuntu as ubuntu@184.72.177.130
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
> provide /usr/bin/native2ascii (native2ascii) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
> /usr/bin/rmic (rmic) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to provide
> /usr/bin/schemagen (schemagen) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to provide
> /usr/bin/serialver (serialver) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to provide
> /usr/bin/wsgen (wsgen) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to provide
> /usr/bin/wsimport (wsimport) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
> /usr/bin/xjc (xjc) in auto mode.
> java version "1.6.0_26"
> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
>
> 2011-10-11 22:11:36,293 DEBUG [jclouds.compute] (user thread 0) << options
> applied node(us-east-1/i-08153d68)
> 2011-10-11 22:11:36,296 INFO  [org.apache.whirr.actions.NodeStarter]
> (pool-3-thread-2) Nodes started: [[id=us-east-1/i-08153d68,
> providerId=i-08153d68, group=hadoop, name=null, location=[id=us-east-1b,
> scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
> state=RUNNING, loginPort=22, hostname=domU-12-31-39-16-E5-E8,
> privateAddresses=[10.96.230.22], publicAddresses=[184.72.177.130],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
> durable=false, isBootDevice=false]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
> 2011-10-11 22:11:47,206 DEBUG [jclouds.compute] (user thread 7) <<
> complete(true)
> 2011-10-11 22:11:47,282 DEBUG [jclouds.compute] (user thread 7) << stdout
> from setup-ubuntu as ubuntu@50.19.59.109
> Hit http://security.ubuntu.com lucid-security/main Packages
> Hit http://archive.canonical.com lucid/partner Packages
> Hit http://security.ubuntu.com lucid-security/universe Packages
> Hit http://security.ubuntu.com lucid-security/multiverse Packages
> Hit http://security.ubuntu.com lucid-security/main Sources
> Hit http://security.ubuntu.com lucid-security/universe Sources
> Hit http://security.ubuntu.com lucid-security/multiverse Sources
> Hit http://archive.canonical.com lucid/partner Sources
> Reading package lists...
> hadoop-0.20.2.tar.gz: OK
>
> 2011-10-11 22:11:47,338 DEBUG [jclouds.compute] (user thread 7) << stderr
> from setup-ubuntu as ubuntu@50.19.59.109
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
> provide /usr/bin/native2ascii (native2ascii) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
> /usr/bin/rmic (rmic) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to provide
> /usr/bin/schemagen (schemagen) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to provide
> /usr/bin/serialver (serialver) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to provide
> /usr/bin/wsgen (wsgen) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to provide
> /usr/bin/wsimport (wsimport) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
> /usr/bin/xjc (xjc) in auto mode.
> java version "1.6.0_26"
> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
>
> 2011-10-11 22:11:47,339 DEBUG [jclouds.compute] (user thread 7) << options
> applied node(us-east-1/i-12153d72)
> 2011-10-11 22:11:47,340 INFO  [org.apache.whirr.actions.NodeStarter]
> (pool-3-thread-4) Nodes started: [[id=us-east-1/i-12153d72,
> providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
> scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
> state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
> privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
> durable=false, isBootDevice=false]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
> 2011-10-11 22:11:47,464 INFO  [org.apache.whirr.service.FirewallManager]
> (main) Authorizing firewall ingress to [Instance{roles=[hadoop-namenode,
> hadoop-jobtracker], publicIp=50.19.59.109, privateIp=10.97.51.240,
> id=us-east-1/i-12153d72, nodeMetadata=[id=us-east-1/i-12153d72,
> providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
> scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
> state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
> privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
> durable=false, isBootDevice=false]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]}] on ports [50070,
> 50030] for [75.101.253.125/32]
> 2011-10-11 22:11:47,529 WARN
> [org.apache.whirr.service.jclouds.FirewallSettings] (main) The permission '
> 75.101.253.125/32-1-50070-50070' has already been authorized on the
> specified group
> 2011-10-11 22:11:47,574 WARN
> [org.apache.whirr.service.jclouds.FirewallSettings] (main) The permission '
> 75.101.253.125/32-1-50030-50030' has already been authorized on the
> specified group
> 2011-10-11 22:11:47,575 INFO  [org.apache.whirr.service.FirewallManager]
> (main) Authorizing firewall ingress to [Instance{roles=[hadoop-namenode,
> hadoop-jobtracker], publicIp=50.19.59.109, privateIp=10.97.51.240,
> id=us-east-1/i-12153d72, nodeMetadata=[id=us-east-1/i-12153d72,
> providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
> scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
> metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
> family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
> state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
> privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
> durable=false, isBootDevice=false]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]}] on ports [8020,
> 8021] for [50.19.59.109/32]
> 2011-10-11 22:11:47,806 DEBUG [jclouds.compute] (main) >> listing node
> details matching(withIds([us-east-1/i-08153d68, us-east-1/i-12153d72]))
> 2011-10-11 22:11:50,315 DEBUG [jclouds.compute] (main) << list(2)
> 2011-10-11 22:11:50,315 DEBUG
> [org.apache.whirr.actions.ConfigureClusterAction] (main) Nodes in cluster:
> [[id=us-east-1/i-08153d68, providerId=i-08153d68, group=hadoop, name=null,
> location=[id=us-east-1b, scope=ZONE, description=us-east-1b,
> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
> imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04,
> arch=paravirtual, is64Bit=true,
> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
> state=RUNNING, loginPort=22, hostname=domU-12-31-39-16-E5-E8,
> privateAddresses=[10.96.230.22], publicAddresses=[184.72.177.130],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
> durable=false, isBootDevice=false]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]],
> [id=us-east-1/i-12153d72, providerId=i-12153d72, group=hadoop, name=null,
> location=[id=us-east-1b, scope=ZONE, description=us-east-1b,
> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
> imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04,
> arch=paravirtual, is64Bit=true,
> description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
> state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
> privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
> isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
> durable=false, isBootDevice=false]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
> 2011-10-11 22:11:50,316 INFO
> [org.apache.whirr.actions.ConfigureClusterAction] (main) Running
> configuration script on nodes: [us-east-1/i-08153d68]
> 2011-10-11 22:11:50,318 DEBUG
> [org.apache.whirr.actions.ConfigureClusterAction] (main) script:
> #!/bin/bash
> set +u
> shopt -s xpg_echo
> shopt -s expand_aliases
> unset PATH JAVA_HOME LD_LIBRARY_PATH
> function abort {
>    echo "aborting: $@" 1>&2
>    exit 1
> }
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #     http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
> function configure_hadoop() {
>   local OPTIND
>   local OPTARG
>
>   ROLES=$1
>   shift
>
>   CLOUD_PROVIDER=
>   while getopts "c:" OPTION; do
>     case $OPTION in
>     c)
>       CLOUD_PROVIDER="$OPTARG"
>       ;;
>     esac
>   done
>
>   case $CLOUD_PROVIDER in
>     ec2 | aws-ec2 )
>       # Alias /mnt as /data
>       ln -s /mnt /data
>       ;;
>     *)
>       ;;
>   esac
>
>   HADOOP_HOME=/usr/local/hadoop
>   HADOOP_CONF_DIR=$HADOOP_HOME/conf
>
>   mkdir -p /data/hadoop
>   chown hadoop:hadoop /data/hadoop
>   if [ ! -e /data/tmp ]; then
>     mkdir /data/tmp
>     chmod a+rwxt /data/tmp
>   fi
>   mkdir /etc/hadoop
>   ln -s $HADOOP_CONF_DIR /etc/hadoop/conf
>
>   # Copy generated configuration files in place
>   cp /tmp/{core,hdfs,mapred}-site.xml $HADOOP_CONF_DIR
>
>   # Keep PID files in a non-temporary directory
>   sed -i -e "s|# export HADOOP_PID_DIR=.*|export
> HADOOP_PID_DIR=/var/run/hadoop|" \
>     $HADOOP_CONF_DIR/hadoop-env.sh
>   mkdir -p /var/run/hadoop
>   chown -R hadoop:hadoop /var/run/hadoop
>
>   # Set SSH options within the cluster
>   sed -i -e 's|# export HADOOP_SSH_OPTS=.*|export HADOOP_SSH_OPTS="-o StrictHostKeyChecking=no"|' \
>     $HADOOP_CONF_DIR/hadoop-env.sh
>
>   # Disable IPv6
>   sed -i -e 's|# export HADOOP_OPTS=.*|export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"|' \
>     $HADOOP_CONF_DIR/hadoop-env.sh
>
>   # Hadoop logs should be on the /data partition
>   sed -i -e 's|# export HADOOP_LOG_DIR=.*|export HADOOP_LOG_DIR=/var/log/hadoop/logs|' \
>     $HADOOP_CONF_DIR/hadoop-env.sh
>   rm -rf /var/log/hadoop
>   mkdir /data/hadoop/logs
>   chown hadoop:hadoop /data/hadoop/logs
>   ln -s /data/hadoop/logs /var/log/hadoop
>   chown -R hadoop:hadoop /var/log/hadoop
>
>   for role in $(echo "$ROLES" | tr "," "\n"); do
>     case $role in
>     hadoop-namenode)
>       start_namenode
>       ;;
>     hadoop-secondarynamenode)
>       start_hadoop_daemon secondarynamenode
>       ;;
>     hadoop-jobtracker)
>       start_hadoop_daemon jobtracker
>       ;;
>     hadoop-datanode)
>       start_hadoop_daemon datanode
>       ;;
>     hadoop-tasktracker)
>       start_hadoop_daemon tasktracker
>       ;;
>     esac
>   done
>
> }
>
> function start_namenode() {
>   if which dpkg &> /dev/null; then
>     AS_HADOOP="su -s /bin/bash - hadoop -c"
>   elif which rpm &> /dev/null; then
>     AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
>   fi
>
>   # Format HDFS
>   [ ! -e /data/hadoop/hdfs ] && $AS_HADOOP "$HADOOP_HOME/bin/hadoop namenode -format"
>
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start namenode"
>
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop dfsadmin -safemode wait"
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /user"
>   # The following is questionable, as it allows a user to delete another user
>   # It's needed to allow users to create their own user directories
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /user"
>
>   # Create temporary directory for Pig and Hive in HDFS
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /tmp"
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /tmp"
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse"
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /user/hive/warehouse"
>
> }
>
> function start_hadoop_daemon() {
>   if which dpkg &> /dev/null; then
>     AS_HADOOP="su -s /bin/bash - hadoop -c"
>   elif which rpm &> /dev/null; then
>     AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
>   fi
>   $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start $1"
> }
>
> export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
> cat >> /tmp/core-site.xml <<'END_OF_FILE'
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/data/tmp/hadoop-${user.name}</value>
>   </property>
>   <property>
>     <name>io.file.buffer.size</name>
>     <value>65536</value>
>   </property>
>   <property>
>     <name>hadoop.rpc.socket.factory.class.default</name>
>     <value>org.apache.hadoop.net.StandardSocketFactory</value>
>     <final>true</final>
>   </property>
>   <property>
>     <name>hadoop.rpc.socket.factory.class.ClientProtocol</name>
>     <value></value>
>   </property>
>   <property>
>     <name>hadoop.rpc.socket.factory.class.JobSubmissionProtocol</name>
>     <value></value>
>   </property>
>   <property>
>     <name>fs.trash.interval</name>
>     <value>1440</value>
>   </property>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://ec2-50-19-59-109.compute-1.amazonaws.com:8020/</value>
>   </property>
> </configuration>
> END_OF_FILE
> cat >> /tmp/hdfs-site.xml <<'END_OF_FILE'
> <configuration>
>   <property>
>     <name>dfs.block.size</name>
>     <value>134217728</value>
>   </property>
>   <property>
>     <name>dfs.data.dir</name>
>     <value>/data/hadoop/hdfs/data</value>
>   </property>
>   <property>
>     <name>dfs.datanode.du.reserved</name>
>     <value>1073741824</value>
>   </property>
>   <property>
>     <name>dfs.name.dir</name>
>     <value>/data/hadoop/hdfs/name</value>
>   </property>
>   <property>
>     <name>fs.checkpoint.dir</name>
>     <value>/data/hadoop/hdfs/secondary</value>
>   </property>
> </configuration>
> END_OF_FILE
> cat >> /tmp/mapred-site.xml <<'END_OF_FILE'
> <configuration>
>   <property>
>     <name>mapred.local.dir</name>
>     <value>/data/hadoop/mapred/local</value>
>   </property>
>   <property>
>     <name>mapred.map.tasks.speculative.execution</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>mapred.reduce.tasks.speculative.execution</name>
>     <value>false</value>
>   </property>
>   <property>
>     <name>mapred.system.dir</name>
>     <value>/hadoop/system/mapred</value>
>   </property>
>   <property>
>     <name>mapreduce.jobtracker.staging.root.dir</name>
>     <value>/user</value>
>   </property>
>   <property>
>     <name>mapred.compress.map.output</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>mapred.output.compression.type</name>
>     <value>BLOCK</value>
>   </property>
>   <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx550m</value>
>   </property>
>   <property>
>     <name>mapred.child.ulimit</name>
>     <value>1126400</value>
>   </property>
>   <property>
>     <name>mapred.tasktracker.map.tasks.maximum</name>
>     <value>2</value>
>   </property>
>   <property>
>     <name>mapred.tasktracker.reduce.tasks.maximum</name>
>     <value>2</value>
>   </property>
>   <property>
>     <name>mapred.reduce.tasks</name>
>     <value>2</value>
>   </property>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>ec2-50-19-59-109.compute-1.amazonaws.com:8021</value>
>   </property>
> </configuration>
> END_OF_FILE
> configure_hadoop hadoop-datanode,hadoop-tasktracker -c aws-ec2 || exit 1
> exit 0
>
> 2011-10-11 22:11:50,970 DEBUG [jclouds.compute] (user thread 7) >> blocking
> on socket [address=184.72.177.130, port=22] for 600000 seconds
> 2011-10-11 22:11:53,992 DEBUG [jclouds.compute] (user thread 7) << socket
> [address=184.72.177.130, port=22] opened
> 2011-10-11 22:12:57,316 DEBUG [org.apache.whirr.service.ComputeCache]
> (Thread-1) closing ComputeServiceContext  [id=aws-ec2, endpoint=
> https://ec2.us-east-1.amazonaws.com, apiVersion=2010-06-15,
> identity=1FTR7NCN01CEAR6FK2G2, iso3166Codes=[US-VA, US-CA, IE, SG, JP-13]]
>
>
> On Tue, Oct 11, 2011 at 3:31 PM, Andrei Savu <sa...@gmail.com>wrote:
>
>> Chris -
>>
>> We've seen this issue in the past. I will take a closer look in the
>> morning (in ~10 hours). Can you upload the full log somewhere? Also make
>> sure that the SSH keypair does not need a password.
>>
>> Cheers,
>>
>> -- Andrei Savu / andreisavu.ro
>>
>>
>> On Tue, Oct 11, 2011 at 11:13 PM, Chris Schilling <
>> chris@thecleversense.com> wrote:
>>
>>> Hello,
>>>
>>> New to whirr, having trouble *running whirr from an ec2 instance*
>>> (authentication when setting up other machines)
>>>
>>> First, here is my configuration:
>>> whirr.cluster-name=hadoop
>>> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,1
>>> hadoop-datanode+hadoop-tasktracker
>>>
>>> # For EC2 set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment
>>> variables.
>>> whirr.provider=aws-ec2
>>> whirr.identity=${env:AWS_ACCESS_KEY_ID}
>>> whirr.credential=${env:AWS_SECRET_ACCESS_KEY}
>>>
>>> # The size of the instance to use. See
>>> http://aws.amazon.com/ec2/instance-types/
>>> whirr.hardware-id=m1.large
>>> whirr.image-id=us-east-1/ami-da0cf8b3
>>> whirr.location-id=us-east-1
>>> # By default use the user system SSH keys. Override them here.
>>> whirr.private-key-file=${sys:user.home}/.ssh/id_rsa_whirr
>>> whirr.public-key-file=${whirr.private-key-file}.pub
>>>
>>>
>>>
>>> I export the credentials, then create the key:
>>>  ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa_whirr
>>>
>>> Then I launch the cluster:
>>> whirr launch-cluster --config hadoop-ec2.properties --private-key-file
>>> ~/.ssh/id_rsa_whirr
>>>
>>> The nodes start (costs me $!), but then authentication errors all over
>>> the place, along with Preconditions failures.  Here are some samples of the
>>>
>>> java.lang.NullPointerException: architecture
>>>         at
>>> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
>>>         at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
>>>         at
>>> org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
>>>         at
>>> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:604)
>>>         at
>>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1759)
>>>         at
>>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2915)
>>>         at
>>> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:625)
>>>         at
>>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:488)
>>>         at
>>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:812)
>>>         at
>>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:741)
>>>         at
>>> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
>>> ......
>>>
>>> Then the authentication errors begin:
>>> <<authenticated>> woke to: net.schmizz.sshj.userauth.UserAuthException:
>>> publickey auth failed
>>> << (ubuntu@184.72.177.130:22) error acquiring SSHClient(ubuntu@
>>> 184.72.177.130:22): Exhausted available authentication methods
>>> net.schmizz.sshj.userauth.UserAuthException: Exhausted available
>>> authentication methods
>>>         at
>>> net.schmizz.sshj.userauth.UserAuthImpl.authenticate(UserAuthImpl.java:114)
>>>         at net.schmizz.sshj.SSHClient.auth(SSHClient.java:204)
>>>         at net.schmizz.sshj.SSHClient.authPublickey(SSHClient.java:304)
>>>         at net.schmizz.sshj.SSHClient.authPublickey(SSHClient.java:323)
>>>         at
>>> org.jclouds.sshj.SshjSshClient$1.create(SshjSshClient.java:183)
>>>         at
>>> org.jclouds.sshj.SshjSshClient$1.create(SshjSshClient.java:155)
>>>         at org.jclouds.sshj.SshjSshClient.acquire(SshjSshClient.java:204)
>>>         at org.jclouds.sshj.SshjSshClient.connect(SshjSshClient.java:229)
>>>         at
>>> org.jclouds.compute.callables.RunScriptOnNodeAsInitScriptUsingSsh.call(RunScriptOnNodeAsInitScriptUsingSsh.java:107)
>>>         at
>>> org.jclouds.compute.strategy.RunScriptOnNodeAndAddToGoodMapOrPutExceptionIntoBadMap.call(RunScriptOnNodeAndAddToGoodMapOrPutExceptionIntoBadMap.java:69)
>>>         at
>>> org.jclouds.compute.strategy.RunScriptOnNodeAndAddToGoodMapOrPutExceptionIntoBadMap.call(RunScriptOnNodeAndAddToGoodMapOrPutExceptionIntoBadMap.java:44)
>>> ......
>>>
>>> Please advise!
>>>
>>>
>>> Chris Schilling
>>> Sr. Data Mining Engineer
>>> Clever Sense, Inc.
>>> "Curating the World Around You"
>>> --------------------------------------------------------------
>>> Winner of the 2011 Fortune Brainstorm Start-up Idol<http://tech.fortune.cnn.com/2011/07/20/startup-idol-brainstorm-clever-sense/>
>>>
>>> Wanna join the Clever Team? We're hiring!<http://www.thecleversense.com/jobs.html>
>>> --------------------------------------------------------------
>>>
>>>
>>
>
>
> --
> Chris Schilling
> Sr. Data Fiend
> Clever Sense, Inc.
> "Curating the World Around You!"
> --------------------------------------------------------------
> Winner of the 2011 Fortune Brainstorm Start-up Idol<http://tech.fortune.cnn.com/2011/07/20/startup-idol-brainstorm-clever-sense/>
>
> Wanna join the Clever Team? We're hiring!<http://www.thecleversense.com/jobs.html>
> --------------------------------------------------------------
>
>

Re: authentication trouble

Posted by Chris Schilling <ch...@thecleversense.com>.
Okay, no, the SSH keypair does not need a password. I installed Whirr on a
separate EC2 instance, so this may be an internal communication issue
between EC2 instances. The full whirr.log is below.
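
For reference, the manual equivalent of the login sshj is attempting (just
a sketch; the user and IP are taken from the log below, and the node has to
still be running) would be something like:

  ssh -i ~/.ssh/id_rsa_whirr -o StrictHostKeyChecking=no ubuntu@184.72.177.130 true

If that also fails publickey auth, the key material is the problem rather
than EC2-internal networking.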

2011-10-11 22:08:47,807 DEBUG [org.apache.whirr.service.ComputeCache] (main)
creating new ComputeServiceContext
org.apache.whirr.service.ComputeCache$Key@1a689880
2011-10-11 22:09:50,094 DEBUG [org.apache.whirr.service.ComputeCache] (main)
creating new ComputeServiceContext
org.apache.whirr.service.ComputeCache$Key@1a689880
2011-10-11 22:09:56,433 DEBUG [org.apache.whirr.service.ComputeCache] (main)
created new ComputeServiceContext  [id=aws-ec2, endpoint=
https://ec2.us-east-1.amazonaws.com, apiVersion=2010-06-15,
identity=1FTR7NCN01CEAR6FK2G2, iso3166Codes=[US-VA, US-CA, IE, SG, JP-13]]
2011-10-11 22:09:56,454 INFO
[org.apache.whirr.actions.BootstrapClusterAction] (main) Bootstrapping
cluster
2011-10-11 22:09:56,455 INFO
[org.apache.whirr.actions.BootstrapClusterAction] (main) Configuring
template
2011-10-11 22:09:56,473 DEBUG
[org.apache.whirr.actions.BootstrapClusterAction] (main) Running script:
#!/bin/bash
set +u
shopt -s xpg_echo
shopt -s expand_aliases
unset PATH JAVA_HOME LD_LIBRARY_PATH
function abort {
   echo "aborting: $@" 1>&2
   exit 1
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function configure_hostnames() {
  local OPTIND
  local OPTARG

  CLOUD_PROVIDER=
  while getopts "c:" OPTION; do
    case $OPTION in
    c)
      CLOUD_PROVIDER="$OPTARG"
      shift $((OPTIND-1)); OPTIND=1
      ;;
    esac
  done

  case $CLOUD_PROVIDER in
    cloudservers | cloudservers-uk | cloudservers-us )
      if which dpkg &> /dev/null; then
        PRIVATE_IP=`/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`
        HOSTNAME=`echo $PRIVATE_IP | tr . -`.static.cloud-ips.com
        echo $HOSTNAME > /etc/hostname
        sed -i -e "s/$PRIVATE_IP.*/$PRIVATE_IP $HOSTNAME/" /etc/hosts
        set +e
        /etc/init.d/hostname restart
        set -e
        sleep 2
        hostname
      fi
      ;;
  esac
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function install_java_deb() {
  # Enable multiverse
  # TODO: check that it is not already enabled
  sed -i -e 's/universe$/universe multiverse/' /etc/apt/sources.list

  DISTRO=`lsb_release -s -c`
  cat > /etc/apt/sources.list.d/canonical.com.list <<EOF
deb http://archive.canonical.com/ubuntu $DISTRO partner
deb-src http://archive.canonical.com/ubuntu $DISTRO partner
EOF

  apt-get update

  echo 'sun-java6-bin   shared/accepted-sun-dlj-v1-1    boolean true
sun-java6-jdk   shared/accepted-sun-dlj-v1-1    boolean true
sun-java6-jre   shared/accepted-sun-dlj-v1-1    boolean true
sun-java6-jre   sun-java6-jre/stopthread        boolean true
sun-java6-jre   sun-java6-jre/jcepolicy note
sun-java6-bin   shared/present-sun-dlj-v1-1     note
sun-java6-jdk   shared/present-sun-dlj-v1-1     note
sun-java6-jre   shared/present-sun-dlj-v1-1     note
' | debconf-set-selections

  apt-get -y install sun-java6-jdk

  echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> /etc/profile
  export JAVA_HOME=/usr/lib/jvm/java-6-sun
  java -version

}

function install_java_rpm() {
  MACHINE_TYPE=`uname -m`
  if [ ${MACHINE_TYPE} == 'x86_64' ]; then
    JDK_PACKAGE=jdk-6u21-linux-x64-rpm.bin
  else
    JDK_PACKAGE=jdk-6u21-linux-i586-rpm.bin
  fi
  JDK_INSTALL_PATH=/usr/java
  mkdir -p $JDK_INSTALL_PATH
  cd $JDK_INSTALL_PATH
  wget http://whirr-third-party.s3.amazonaws.com/$JDK_PACKAGE
  chmod +x $JDK_PACKAGE
  mv /bin/more /bin/more.no
  yes | ./$JDK_PACKAGE -noregister
  mv /bin/more.no /bin/more
  rm -f *.rpm $JDK_PACKAGE

  export JAVA_HOME=$(ls -d $JDK_INSTALL_PATH/jdk*)
  echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
  alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 17000
  alternatives --set java $JAVA_HOME/bin/java
  java -version
}

function install_java() {
  if which dpkg &> /dev/null; then
    install_java_deb
  elif which rpm &> /dev/null; then
    install_java_rpm
  fi
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function install_tarball() {
  if [[ "$1" != "" ]]; then
    # Download a .tar.gz file and extract to target dir

    local tar_url=$1
    local tar_file=`basename $tar_url`
    local tar_file_md5=`basename $tar_url.md5`

    local target=${2:-/usr/local/}
    mkdir -p $target

    local curl="curl -L --silent --show-error --fail --connect-timeout 10
--max-time 600 --retry 5"
    # any download should take less than 10 minutes

    for retry_count in `seq 1 3`;
    do
      $curl -O $tar_url || true
      $curl -O $tar_url.md5 || true

      if [ ! -e $tar_file_md5 ]; then
        echo "Could not download  $tar_url.md5. Continuing."
        break;
      elif md5sum -c $tar_file_md5; then
        break;
      else
        # workaround for cassandra broken .md5 files
        if [ `md5sum $tar_file | awk '{print $1}'` = `cat $tar_file_md5` ]; then
          break;
        fi

        rm -f $tar_file $tar_file_md5
      fi

      if [ ! $retry_count -eq "3" ]; then
        sleep 10
      fi
    done

    if [ ! -e $tar_file ]; then
      echo "Failed to download $tar_file. Aborting."
      exit 1
    fi

    tar xzf $tar_file -C $target
    rm -f $tar_file $tar_file_md5
  fi
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function update_repo() {
  if which dpkg &> /dev/null; then
    sudo apt-get update
  elif which rpm &> /dev/null; then
    yum update -y yum
  fi
}

function install_hadoop() {
  local OPTIND
  local OPTARG

  CLOUD_PROVIDER=
  HADOOP_TAR_URL=
  while getopts "c:u:" OPTION; do
    case $OPTION in
    c)
      CLOUD_PROVIDER="$OPTARG"
      ;;
    u)
      HADOOP_TAR_URL="$OPTARG"
      ;;
    esac
  done

  HADOOP_HOME=/usr/local/$(basename $HADOOP_TAR_URL .tar.gz)

  update_repo

  if ! id hadoop &> /dev/null; then
    useradd hadoop
  fi

  install_tarball $HADOOP_TAR_URL
  ln -s $HADOOP_HOME /usr/local/hadoop

  echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
  echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >> ~root/.bashrc
}

export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
configure_hostnames -c aws-ec2 || exit 1
install_java || exit 1
install_tarball || exit 1
install_hadoop -c aws-ec2 -u http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz || exit 1
exit 0

2011-10-11 22:09:56,490 DEBUG [jclouds.compute] (main) >> searching
params([biggest=false, fastest=false, imageName=null, imageDescription=null,
imageId=us-east-1/ami-da0cf8b3, imagePredicate=null, imageVersion=null,
location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
iso3166Codes=[US-VA], metadata={}], minCores=0.0, minRam=0, osFamily=null,
osName=null, osDescription=null, osVersion=null, osArch=null, os64Bit=null,
hardwareId=m1.large])
2011-10-11 22:09:56,491 DEBUG [jclouds.compute] (user thread 0) >> providing
images
2011-10-11 22:09:56,497 DEBUG [jclouds.compute] (user thread 1) >> providing
images
2011-10-11 22:09:58,046 DEBUG [jclouds.compute] (user thread 1) <<
images(32)
2011-10-11 22:10:01,457 DEBUG [jclouds.compute] (user thread 0) <<
images(3123)
2011-10-11 22:10:02,183 DEBUG [jclouds.compute] (main) <<   matched
hardware(m1.large)
2011-10-11 22:10:02,184 DEBUG [jclouds.compute] (main) <<   matched
image(us-east-1/ami-da0cf8b3)
2011-10-11 22:10:02,194 INFO
[org.apache.whirr.actions.BootstrapClusterAction] (main) Configuring
template
2011-10-11 22:10:02,196 INFO  [org.apache.whirr.actions.NodeStarter]
(pool-3-thread-2) Starting 1 node(s) with roles [hadoop-datanode,
hadoop-tasktracker]
2011-10-11 22:10:02,196 DEBUG [jclouds.compute] (pool-3-thread-2) >> running
1 node group(hadoop) location(us-east-1) image(us-east-1/ami-da0cf8b3)
hardwareProfile(m1.large) options([groupIds=[], keyPair=null,
noKeyPair=false, monitoringEnabled=false, placementGroup=null,
noPlacementGroup=false, subnetId=null, userData=null,
blockDeviceMappings=[], spotPrice=null, spotOptions=[formParameters={}]])
2011-10-11 22:10:02,199 DEBUG [jclouds.compute] (pool-3-thread-2) >>
searching params([biggest=false, fastest=false, imageName=null,
imageDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
imageId=null, imagePredicate=null, imageVersion=20101020,
location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
iso3166Codes=[US-VA], metadata={}], minCores=2.0, minRam=7680,
osFamily=ubuntu, osName=null,
osDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
osVersion=10.04, osArch=paravirtual, os64Bit=true, hardwareId=null])
2011-10-11 22:10:02,199 DEBUG
[org.apache.whirr.actions.BootstrapClusterAction] (main) Running script:
#!/bin/bash
set +u
shopt -s xpg_echo
shopt -s expand_aliases
unset PATH JAVA_HOME LD_LIBRARY_PATH
function abort {
   echo "aborting: $@" 1>&2
   exit 1
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function configure_hostnames() {
  local OPTIND
  local OPTARG

  CLOUD_PROVIDER=
  while getopts "c:" OPTION; do
    case $OPTION in
    c)
      CLOUD_PROVIDER="$OPTARG"
      shift $((OPTIND-1)); OPTIND=1
      ;;
    esac
  done

  case $CLOUD_PROVIDER in
    cloudservers | cloudservers-uk | cloudservers-us )
      if which dpkg &> /dev/null; then
        PRIVATE_IP=`/sbin/ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}'`
        HOSTNAME=`echo $PRIVATE_IP | tr . -`.static.cloud-ips.com
        echo $HOSTNAME > /etc/hostname
        sed -i -e "s/$PRIVATE_IP.*/$PRIVATE_IP $HOSTNAME/" /etc/hosts
        set +e
        /etc/init.d/hostname restart
        set -e
        sleep 2
        hostname
      fi
      ;;
  esac
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function install_java_deb() {
  # Enable multiverse
  # TODO: check that it is not already enabled
  sed -i -e 's/universe$/universe multiverse/' /etc/apt/sources.list

  DISTRO=`lsb_release -s -c`
  cat > /etc/apt/sources.list.d/canonical.com.list <<EOF
deb http://archive.canonical.com/ubuntu $DISTRO partner
deb-src http://archive.canonical.com/ubuntu $DISTRO partner
EOF

  apt-get update

  echo 'sun-java6-bin   shared/accepted-sun-dlj-v1-1    boolean true
sun-java6-jdk   shared/accepted-sun-dlj-v1-1    boolean true
sun-java6-jre   shared/accepted-sun-dlj-v1-1    boolean true
sun-java6-jre   sun-java6-jre/stopthread        boolean true
sun-java6-jre   sun-java6-jre/jcepolicy note
sun-java6-bin   shared/present-sun-dlj-v1-1     note
sun-java6-jdk   shared/present-sun-dlj-v1-1     note
sun-java6-jre   shared/present-sun-dlj-v1-1     note
' | debconf-set-selections

  apt-get -y install sun-java6-jdk

  echo "export JAVA_HOME=/usr/lib/jvm/java-6-sun" >> /etc/profile
  export JAVA_HOME=/usr/lib/jvm/java-6-sun
  java -version

}

function install_java_rpm() {
  MACHINE_TYPE=`uname -m`
  if [ ${MACHINE_TYPE} == 'x86_64' ]; then
    JDK_PACKAGE=jdk-6u21-linux-x64-rpm.bin
  else
    JDK_PACKAGE=jdk-6u21-linux-i586-rpm.bin
  fi
  JDK_INSTALL_PATH=/usr/java
  mkdir -p $JDK_INSTALL_PATH
  cd $JDK_INSTALL_PATH
  wget http://whirr-third-party.s3.amazonaws.com/$JDK_PACKAGE
  chmod +x $JDK_PACKAGE
  mv /bin/more /bin/more.no
  yes | ./$JDK_PACKAGE -noregister
  mv /bin/more.no /bin/more
  rm -f *.rpm $JDK_PACKAGE

  export JAVA_HOME=$(ls -d $JDK_INSTALL_PATH/jdk*)
  echo "export JAVA_HOME=$JAVA_HOME" >> /etc/profile
  alternatives --install /usr/bin/java java $JAVA_HOME/bin/java 17000
  alternatives --set java $JAVA_HOME/bin/java
  java -version
}

function install_java() {
  if which dpkg &> /dev/null; then
    install_java_deb
  elif which rpm &> /dev/null; then
    install_java_rpm
  fi
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function install_tarball() {
  if [[ "$1" != "" ]]; then
    # Download a .tar.gz file and extract to target dir

    local tar_url=$1
    local tar_file=`basename $tar_url`
    local tar_file_md5=`basename $tar_url.md5`

    local target=${2:-/usr/local/}
    mkdir -p $target

    local curl="curl -L --silent --show-error --fail --connect-timeout 10
--max-time 600 --retry 5"
    # any download should take less than 10 minutes

    for retry_count in `seq 1 3`;
    do
      $curl -O $tar_url || true
      $curl -O $tar_url.md5 || true

      if [ ! -e $tar_file_md5 ]; then
        echo "Could not download  $tar_url.md5. Continuing."
        break;
      elif md5sum -c $tar_file_md5; then
        break;
      else
        # workaround for cassandra broken .md5 files
        if [ `md5sum $tar_file | awk '{print $1}'` = `cat $tar_file_md5` ]; then
          break;
        fi

        rm -f $tar_file $tar_file_md5
      fi

      if [ ! $retry_count -eq "3" ]; then
        sleep 10
      fi
    done

    if [ ! -e $tar_file ]; then
      echo "Failed to download $tar_file. Aborting."
      exit 1
    fi

    tar xzf $tar_file -C $target
    rm -f $tar_file $tar_file_md5
  fi
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function update_repo() {
  if which dpkg &> /dev/null; then
    sudo apt-get update
  elif which rpm &> /dev/null; then
    yum update -y yum
  fi
}

function install_hadoop() {
  local OPTIND
  local OPTARG

  CLOUD_PROVIDER=
  HADOOP_TAR_URL=
  while getopts "c:u:" OPTION; do
    case $OPTION in
    c)
      CLOUD_PROVIDER="$OPTARG"
      ;;
    u)
      HADOOP_TAR_URL="$OPTARG"
      ;;
    esac
  done

  HADOOP_HOME=/usr/local/$(basename $HADOOP_TAR_URL .tar.gz)

  update_repo

  if ! id hadoop &> /dev/null; then
    useradd hadoop
  fi

  install_tarball $HADOOP_TAR_URL
  ln -s $HADOOP_HOME /usr/local/hadoop

  echo "export HADOOP_HOME=$HADOOP_HOME" >> ~root/.bashrc
  echo 'export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH' >> ~root/.bashrc
}

export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
configure_hostnames -c aws-ec2 || exit 1
install_java || exit 1
install_tarball || exit 1
install_hadoop -c aws-ec2 -u http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz || exit 1
exit 0

2011-10-11 22:10:02,200 DEBUG [jclouds.compute] (main) >> searching
params([biggest=false, fastest=false, imageName=null, imageDescription=null,
imageId=us-east-1/ami-da0cf8b3, imagePredicate=null, imageVersion=null,
location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
iso3166Codes=[US-VA], metadata={}], minCores=0.0, minRam=0, osFamily=null,
osName=null, osDescription=null, osVersion=null, osArch=null, os64Bit=null,
hardwareId=m1.large])
2011-10-11 22:10:02,203 DEBUG [jclouds.compute] (main) <<   matched
hardware(m1.large)
2011-10-11 22:10:02,203 DEBUG [jclouds.compute] (main) <<   matched
image(us-east-1/ami-da0cf8b3)
2011-10-11 22:10:02,204 INFO  [org.apache.whirr.actions.NodeStarter]
(pool-3-thread-4) Starting 1 node(s) with roles [hadoop-namenode,
hadoop-jobtracker]
2011-10-11 22:10:02,205 DEBUG [jclouds.compute] (pool-3-thread-4) >> running
1 node group(hadoop) location(us-east-1) image(us-east-1/ami-da0cf8b3)
hardwareProfile(m1.large) options([groupIds=[], keyPair=null,
noKeyPair=false, monitoringEnabled=false, placementGroup=null,
noPlacementGroup=false, subnetId=null, userData=null,
blockDeviceMappings=[], spotPrice=null, spotOptions=[formParameters={}]])
2011-10-11 22:10:02,205 DEBUG [jclouds.compute] (pool-3-thread-4) >>
searching params([biggest=false, fastest=false, imageName=null,
imageDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
imageId=null, imagePredicate=null, imageVersion=20101020,
location=[id=us-east-1, scope=REGION, description=us-east-1, parent=aws-ec2,
iso3166Codes=[US-VA], metadata={}], minCores=2.0, minRam=7680,
osFamily=ubuntu, osName=null,
osDescription=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml,
osVersion=10.04, osArch=paravirtual, os64Bit=true, hardwareId=null])
2011-10-11 22:10:02,321 DEBUG [jclouds.compute] (pool-3-thread-4) <<
matched hardware(m1.large)
2011-10-11 22:10:02,325 DEBUG [jclouds.compute] (pool-3-thread-4) <<
matched image(us-east-1/ami-da0cf8b3)
2011-10-11 22:10:02,327 DEBUG [jclouds.compute] (pool-3-thread-4) >>
creating keyPair region(us-east-1) group(hadoop)
2011-10-11 22:10:02,342 DEBUG [jclouds.compute] (pool-3-thread-2) <<
matched hardware(m1.large)
2011-10-11 22:10:02,366 DEBUG [jclouds.compute] (pool-3-thread-2) <<
matched image(us-east-1/ami-da0cf8b3)
2011-10-11 22:10:02,367 DEBUG [jclouds.compute] (pool-3-thread-2) >>
creating keyPair region(us-east-1) group(hadoop)
2011-10-11 22:10:03,645 DEBUG [jclouds.compute] (pool-3-thread-2) << created
keyPair(jclouds#hadoop#us-east-1#2)
2011-10-11 22:10:03,646 DEBUG [jclouds.compute] (pool-3-thread-2) >>
creating securityGroup region(us-east-1) name(jclouds#hadoop#us-east-1)
2011-10-11 22:10:03,776 DEBUG [jclouds.compute] (pool-3-thread-2) << reused
securityGroup(jclouds#hadoop#us-east-1)
2011-10-11 22:10:03,776 DEBUG [jclouds.compute] (pool-3-thread-2) >> running
1 instance region(us-east-1) zone(null) ami(ami-da0cf8b3)
params({InstanceType=[m1.large], SecurityGroup.1=[jclouds#hadoop#us-east-1],
KeyName=[jclouds#hadoop#us-east-1#2]})
2011-10-11 22:10:04,765 DEBUG [jclouds.compute] (pool-3-thread-4) << created
keyPair(jclouds#hadoop#us-east-1#0)
2011-10-11 22:10:04,765 DEBUG [jclouds.compute] (pool-3-thread-4) >> running
1 instance region(us-east-1) zone(null) ami(ami-da0cf8b3)
params({InstanceType=[m1.large], SecurityGroup.1=[jclouds#hadoop#us-east-1],
KeyName=[jclouds#hadoop#us-east-1#0]})
2011-10-11 22:10:05,067 DEBUG [jclouds.compute] (pool-3-thread-2) << started
instances([region=us-east-1, name=i-08153d68])
2011-10-11 22:10:05,128 DEBUG [jclouds.compute] (pool-3-thread-2) << present
instances([region=us-east-1, name=i-08153d68])
2011-10-11 22:10:05,186 DEBUG [jclouds.compute] (pool-3-thread-4) << started
instances([region=us-east-1, name=i-12153d72])
2011-10-11 22:10:05,249 DEBUG [jclouds.compute] (pool-3-thread-4) << present
instances([region=us-east-1, name=i-12153d72])
2011-10-11 22:10:38,407 DEBUG [jclouds.compute] (user thread 0) >> blocking
on socket [address=184.72.177.130, port=22] for 600000 seconds
2011-10-11 22:10:43,449 DEBUG [jclouds.compute] (user thread 0) << socket
[address=184.72.177.130, port=22] opened
2011-10-11 22:10:44,681 DEBUG [jclouds.compute] (user thread 7) >> blocking
on socket [address=50.19.59.109, port=22] for 600000 seconds
2011-10-11 22:10:46,462 DEBUG [jclouds.compute] (user thread 0) >> running
[sudo ./setup-ubuntu init] as ubuntu@184.72.177.130
2011-10-11 22:10:46,534 DEBUG [jclouds.compute] (user thread 0) << init(0)
2011-10-11 22:10:46,535 DEBUG [jclouds.compute] (user thread 0) >> running
[sudo ./setup-ubuntu start] as ubuntu@184.72.177.130
2011-10-11 22:10:47,653 DEBUG [jclouds.compute] (user thread 0) << start(0)
2011-10-11 22:10:56,729 DEBUG [jclouds.compute] (user thread 7) << socket
[address=50.19.59.109, port=22] opened
2011-10-11 22:11:00,695 DEBUG [jclouds.compute] (user thread 7) >> running
[sudo ./setup-ubuntu init] as ubuntu@50.19.59.109
2011-10-11 22:11:00,954 DEBUG [jclouds.compute] (user thread 7) << init(0)
2011-10-11 22:11:00,954 DEBUG [jclouds.compute] (user thread 7) >> running
[sudo ./setup-ubuntu start] as ubuntu@50.19.59.109
2011-10-11 22:11:02,078 DEBUG [jclouds.compute] (user thread 7) << start(0)
2011-10-11 22:11:36,157 DEBUG [jclouds.compute] (user thread 0) <<
complete(true)
2011-10-11 22:11:36,235 DEBUG [jclouds.compute] (user thread 0) << stdout
from setup-ubuntu as ubuntu@184.72.177.130
Hit http://security.ubuntu.com lucid-security/main Packages
Hit http://archive.canonical.com lucid/partner Packages
Hit http://security.ubuntu.com lucid-security/universe Packages
Hit http://security.ubuntu.com lucid-security/multiverse Packages
Hit http://security.ubuntu.com lucid-security/main Sources
Hit http://security.ubuntu.com lucid-security/universe Sources
Hit http://security.ubuntu.com lucid-security/multiverse Sources
Hit http://archive.canonical.com lucid/partner Sources
Reading package lists...
hadoop-0.20.2.tar.gz: OK

2011-10-11 22:11:36,291 DEBUG [jclouds.compute] (user thread 0) << stderr
from setup-ubuntu as ubuntu@184.72.177.130
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
provide /usr/bin/native2ascii (native2ascii) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
/usr/bin/rmic (rmic) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to provide
/usr/bin/schemagen (schemagen) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to provide
/usr/bin/serialver (serialver) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to provide
/usr/bin/wsgen (wsgen) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to provide
/usr/bin/wsimport (wsimport) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
/usr/bin/xjc (xjc) in auto mode.
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

2011-10-11 22:11:36,293 DEBUG [jclouds.compute] (user thread 0) << options
applied node(us-east-1/i-08153d68)
2011-10-11 22:11:36,296 INFO  [org.apache.whirr.actions.NodeStarter]
(pool-3-thread-2) Nodes started: [[id=us-east-1/i-08153d68,
providerId=i-08153d68, group=hadoop, name=null, location=[id=us-east-1b,
scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
state=RUNNING, loginPort=22, hostname=domU-12-31-39-16-E5-E8,
privateAddresses=[10.96.230.22], publicAddresses=[184.72.177.130],
hardware=[id=m1.large, providerId=m1.large, name=null,
processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
[id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
durable=false, isBootDevice=false]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
2011-10-11 22:11:47,206 DEBUG [jclouds.compute] (user thread 7) <<
complete(true)
2011-10-11 22:11:47,282 DEBUG [jclouds.compute] (user thread 7) << stdout
from setup-ubuntu as ubuntu@50.19.59.109
Hit http://security.ubuntu.com lucid-security/main Packages
Hit http://archive.canonical.com lucid/partner Packages
Hit http://security.ubuntu.com lucid-security/universe Packages
Hit http://security.ubuntu.com lucid-security/multiverse Packages
Hit http://security.ubuntu.com lucid-security/main Sources
Hit http://security.ubuntu.com lucid-security/universe Sources
Hit http://security.ubuntu.com lucid-security/multiverse Sources
Hit http://archive.canonical.com lucid/partner Sources
Reading package lists...
hadoop-0.20.2.tar.gz: OK

2011-10-11 22:11:47,338 DEBUG [jclouds.compute] (user thread 7) << stderr
from setup-ubuntu as ubuntu@50.19.59.109
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
provide /usr/bin/native2ascii (native2ascii) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
/usr/bin/rmic (rmic) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to provide
/usr/bin/schemagen (schemagen) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to provide
/usr/bin/serialver (serialver) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to provide
/usr/bin/wsgen (wsgen) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to provide
/usr/bin/wsimport (wsimport) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
/usr/bin/xjc (xjc) in auto mode.
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

2011-10-11 22:11:47,339 DEBUG [jclouds.compute] (user thread 7) << options
applied node(us-east-1/i-12153d72)
2011-10-11 22:11:47,340 INFO  [org.apache.whirr.actions.NodeStarter]
(pool-3-thread-4) Nodes started: [[id=us-east-1/i-12153d72,
providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
hardware=[id=m1.large, providerId=m1.large, name=null,
processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
[id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
durable=false, isBootDevice=false]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
2011-10-11 22:11:47,464 INFO  [org.apache.whirr.service.FirewallManager]
(main) Authorizing firewall ingress to [Instance{roles=[hadoop-namenode,
hadoop-jobtracker], publicIp=50.19.59.109, privateIp=10.97.51.240,
id=us-east-1/i-12153d72, nodeMetadata=[id=us-east-1/i-12153d72,
providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
hardware=[id=m1.large, providerId=m1.large, name=null,
processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
[id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
durable=false, isBootDevice=false]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]}] on ports [50070,
50030] for [75.101.253.125/32]
2011-10-11 22:11:47,529 WARN
[org.apache.whirr.service.jclouds.FirewallSettings] (main) The permission '
75.101.253.125/32-1-50070-50070' has already been authorized on the
specified group
2011-10-11 22:11:47,574 WARN
[org.apache.whirr.service.jclouds.FirewallSettings] (main) The permission '
75.101.253.125/32-1-50030-50030' has already been authorized on the
specified group
2011-10-11 22:11:47,575 INFO  [org.apache.whirr.service.FirewallManager]
(main) Authorizing firewall ingress to [Instance{roles=[hadoop-namenode,
hadoop-jobtracker], publicIp=50.19.59.109, privateIp=10.97.51.240,
id=us-east-1/i-12153d72, nodeMetadata=[id=us-east-1/i-12153d72,
providerId=i-12153d72, group=hadoop, name=null, location=[id=us-east-1b,
scope=ZONE, description=us-east-1b, parent=us-east-1, iso3166Codes=[US-VA],
metadata={}], uri=null, imageId=us-east-1/ami-da0cf8b3, os=[name=null,
family=ubuntu, version=10.04, arch=paravirtual, is64Bit=true,
description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
hardware=[id=m1.large, providerId=m1.large, name=null,
processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
[id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
durable=false, isBootDevice=false]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]}] on ports [8020,
8021] for [50.19.59.109/32]
2011-10-11 22:11:47,806 DEBUG [jclouds.compute] (main) >> listing node
details matching(withIds([us-east-1/i-08153d68, us-east-1/i-12153d72]))
2011-10-11 22:11:50,315 DEBUG [jclouds.compute] (main) << list(2)
2011-10-11 22:11:50,315 DEBUG
[org.apache.whirr.actions.ConfigureClusterAction] (main) Nodes in cluster:
[[id=us-east-1/i-08153d68, providerId=i-08153d68, group=hadoop, name=null,
location=[id=us-east-1b, scope=ZONE, description=us-east-1b,
parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04,
arch=paravirtual, is64Bit=true,
description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
state=RUNNING, loginPort=22, hostname=domU-12-31-39-16-E5-E8,
privateAddresses=[10.96.230.22], publicAddresses=[184.72.177.130],
hardware=[id=m1.large, providerId=m1.large, name=null,
processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
[id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
durable=false, isBootDevice=false]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]],
[id=us-east-1/i-12153d72, providerId=i-12153d72, group=hadoop, name=null,
location=[id=us-east-1b, scope=ZONE, description=us-east-1b,
parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
imageId=us-east-1/ami-da0cf8b3, os=[name=null, family=ubuntu, version=10.04,
arch=paravirtual, is64Bit=true,
description=ubuntu-images-us/ubuntu-lucid-10.04-amd64-server-20101020.manifest.xml],
state=RUNNING, loginPort=22, hostname=domU-12-31-39-17-30-02,
privateAddresses=[10.97.51.240], publicAddresses=[50.19.59.109],
hardware=[id=m1.large, providerId=m1.large, name=null,
processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
type=LOCAL, size=10.0, device=/dev/sda1, durable=false, isBootDevice=true],
[id=null, type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
durable=false, isBootDevice=false]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
tags=[]], loginUser=ubuntu, userMetadata={}, tags=[]]]
2011-10-11 22:11:50,316 INFO
[org.apache.whirr.actions.ConfigureClusterAction] (main) Running
configuration script on nodes: [us-east-1/i-08153d68]
2011-10-11 22:11:50,318 DEBUG
[org.apache.whirr.actions.ConfigureClusterAction] (main) script:
#!/bin/bash
set +u
shopt -s xpg_echo
shopt -s expand_aliases
unset PATH JAVA_HOME LD_LIBRARY_PATH
function abort {
   echo "aborting: $@" 1>&2
   exit 1
}
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
function configure_hadoop() {
  local OPTIND
  local OPTARG

  ROLES=$1
  shift

  CLOUD_PROVIDER=
  while getopts "c:" OPTION; do
    case $OPTION in
    c)
      CLOUD_PROVIDER="$OPTARG"
      ;;
    esac
  done

  case $CLOUD_PROVIDER in
    ec2 | aws-ec2 )
      # Alias /mnt as /data
      ln -s /mnt /data
      ;;
    *)
      ;;
  esac

  HADOOP_HOME=/usr/local/hadoop
  HADOOP_CONF_DIR=$HADOOP_HOME/conf

  mkdir -p /data/hadoop
  chown hadoop:hadoop /data/hadoop
  if [ ! -e /data/tmp ]; then
    mkdir /data/tmp
    chmod a+rwxt /data/tmp
  fi
  mkdir /etc/hadoop
  ln -s $HADOOP_CONF_DIR /etc/hadoop/conf

  # Copy generated configuration files in place
  cp /tmp/{core,hdfs,mapred}-site.xml $HADOOP_CONF_DIR

  # Keep PID files in a non-temporary directory
  sed -i -e "s|# export HADOOP_PID_DIR=.*|export
HADOOP_PID_DIR=/var/run/hadoop|" \
    $HADOOP_CONF_DIR/hadoop-env.sh
  mkdir -p /var/run/hadoop
  chown -R hadoop:hadoop /var/run/hadoop

  # Set SSH options within the cluster
  sed -i -e 's|# export HADOOP_SSH_OPTS=.*|export HADOOP_SSH_OPTS="-o StrictHostKeyChecking=no"|' \
    $HADOOP_CONF_DIR/hadoop-env.sh

  # Disable IPv6
  sed -i -e 's|# export HADOOP_OPTS=.*|export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true"|' \
    $HADOOP_CONF_DIR/hadoop-env.sh

  # Hadoop logs should be on the /data partition
  sed -i -e 's|# export HADOOP_LOG_DIR=.*|export HADOOP_LOG_DIR=/var/log/hadoop/logs|' \
    $HADOOP_CONF_DIR/hadoop-env.sh
  rm -rf /var/log/hadoop
  mkdir /data/hadoop/logs
  chown hadoop:hadoop /data/hadoop/logs
  ln -s /data/hadoop/logs /var/log/hadoop
  chown -R hadoop:hadoop /var/log/hadoop

  for role in $(echo "$ROLES" | tr "," "\n"); do
    case $role in
    hadoop-namenode)
      start_namenode
      ;;
    hadoop-secondarynamenode)
      start_hadoop_daemon secondarynamenode
      ;;
    hadoop-jobtracker)
      start_hadoop_daemon jobtracker
      ;;
    hadoop-datanode)
      start_hadoop_daemon datanode
      ;;
    hadoop-tasktracker)
      start_hadoop_daemon tasktracker
      ;;
    esac
  done

}

function start_namenode() {
  if which dpkg &> /dev/null; then
    AS_HADOOP="su -s /bin/bash - hadoop -c"
  elif which rpm &> /dev/null; then
    AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
  fi

  # Format HDFS
  [ ! -e /data/hadoop/hdfs ] && $AS_HADOOP "$HADOOP_HOME/bin/hadoop namenode -format"

  $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start namenode"

  $AS_HADOOP "$HADOOP_HOME/bin/hadoop dfsadmin -safemode wait"
  $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /user"
  # The following is questionable, as it allows a user to delete another user
  # It's needed to allow users to create their own user directories
  $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /user"

  # Create temporary directory for Pig and Hive in HDFS
  $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /tmp"
  $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /tmp"
  $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse"
  $AS_HADOOP "$HADOOP_HOME/bin/hadoop fs -chmod +w /user/hive/warehouse"

}

function start_hadoop_daemon() {
  if which dpkg &> /dev/null; then
    AS_HADOOP="su -s /bin/bash - hadoop -c"
  elif which rpm &> /dev/null; then
    AS_HADOOP="/sbin/runuser -s /bin/bash - hadoop -c"
  fi
  $AS_HADOOP "$HADOOP_HOME/bin/hadoop-daemon.sh start $1"
}

export PATH=/usr/ucb/bin:/bin:/sbin:/usr/bin:/usr/sbin
cat >> /tmp/core-site.xml <<'END_OF_FILE'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/tmp/hadoop-${user.name}</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>65536</value>
  </property>
  <property>
    <name>hadoop.rpc.socket.factory.class.default</name>
    <value>org.apache.hadoop.net.StandardSocketFactory</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.rpc.socket.factory.class.ClientProtocol</name>
    <value></value>
  </property>
  <property>
    <name>hadoop.rpc.socket.factory.class.JobSubmissionProtocol</name>
    <value></value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ec2-50-19-59-109.compute-1.amazonaws.com:8020/</value>
  </property>
</configuration>
END_OF_FILE
cat >> /tmp/hdfs-site.xml <<'END_OF_FILE'
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.datanode.du.reserved</name>
    <value>1073741824</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/data/hadoop/hdfs/secondary</value>
  </property>
</configuration>
END_OF_FILE
cat >> /tmp/mapred-site.xml <<'END_OF_FILE'
<configuration>
  <property>
    <name>mapred.local.dir</name>
    <value>/data/hadoop/mapred/local</value>
  </property>
  <property>
    <name>mapred.map.tasks.speculative.execution</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.reduce.tasks.speculative.execution</name>
    <value>false</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/hadoop/system/mapred</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.staging.root.dir</name>
    <value>/user</value>
  </property>
  <property>
    <name>mapred.compress.map.output</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.output.compression.type</name>
    <value>BLOCK</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx550m</value>
  </property>
  <property>
    <name>mapred.child.ulimit</name>
    <value>1126400</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>2</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>ec2-50-19-59-109.compute-1.amazonaws.com:8021</value>
  </property>
</configuration>
END_OF_FILE
configure_hadoop hadoop-datanode,hadoop-tasktracker -c aws-ec2 || exit 1
exit 0

2011-10-11 22:11:50,970 DEBUG [jclouds.compute] (user thread 7) >> blocking
on socket [address=184.72.177.130, port=22] for 600000 seconds
2011-10-11 22:11:53,992 DEBUG [jclouds.compute] (user thread 7) << socket
[address=184.72.177.130, port=22] opened
2011-10-11 22:12:57,316 DEBUG [org.apache.whirr.service.ComputeCache]
(Thread-1) closing ComputeServiceContext  [id=aws-ec2, endpoint=
https://ec2.us-east-1.amazonaws.com, apiVersion=2010-06-15,
identity=1FTR7NCN01CEAR6FK2G2, iso3166Codes=[US-VA, US-CA, IE, SG, JP-13]]

On Tue, Oct 11, 2011 at 3:31 PM, Andrei Savu <sa...@gmail.com> wrote:

> Chris -
>
> We've seen this issue in the past. I will take a closer look in the morning
> (in ~10 hours). Can you upload the full log somewhere? Also make sure
> that the SSH keypair does not need a password.
>
> Cheers,
>
> -- Andrei Savu / andreisavu.ro


-- 
Chris Schilling
Sr. Data Fiend
Clever Sense, Inc.
"Curating the World Around You!"
--------------------------------------------------------------
Winner of the 2011 Fortune Brainstorm Start-up Idol<http://tech.fortune.cnn.com/2011/07/20/startup-idol-brainstorm-clever-sense/>

Wanna join the Clever Team? We're hiring!<http://www.thecleversense.com/jobs.html>
--------------------------------------------------------------

Re: authentication trouble

Posted by Andrei Savu <sa...@gmail.com>.
Chris -

We've seen this issue in the past. I will take a closer look in the morning
(in ~10 hours). Can you upload the full log somewhere? Also make sure
that the SSH keypair does not need a password.

Cheers,

-- Andrei Savu / andreisavu.ro

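A quick way to check Andrei's last point, that the keypair carries no passphrase: ssh-keygen -y reads a private key and prints the matching public key, prompting for a passphrase only if the key is encrypted. A sketch, assuming the key path from the configuration above:

# Prints the public key immediately if the private key is unencrypted;
# a passphrase-protected key makes ssh-keygen prompt instead.
ssh-keygen -y -f ~/.ssh/id_rsa_whirr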