Posted to user@whirr.apache.org by Deb Ghosh <dg...@gmail.com> on 2012/05/05 06:44:53 UTC

whirr 0.7.1 installation error on amazon ec2

Hello,

I am getting the following error while launching a cluster:
===========================================

Any help is appreciated.


Unable to find service hadoop, using default.
Bootstrapping cluster
Configuring template
Configuring template
Starting 1 node(s) with roles [hadoop-datanode, hadoop-tasktracker]
Starting 1 node(s) with roles [hadoop-jobtracker, hadoop-namenode]
Nodes started: [[id=us-east-1/i-57926d31, providerId=i-57926d31,
group=myhadoopcluster, name=myhadoopcluster-57926d31,
location=[id=us-east-1d, scope=ZONE, description=us-east-1d,
parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
imageId=us-east-1/ami-ab36fbc2, os=[name=null, family=ubuntu,
version=10.04, arch=paravirtual, is64Bit=false,
description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-i386-server-20110930],
state=RUNNING, loginPort=22, hostname=domU-12-31-39-03-41-26,
privateAddresses=[10.249.66.212], publicAddresses=[23.20.165.230],
hardware=[id=m1.small, providerId=m1.small, name=null,
processors=[[cores=1.0, speed=1.0]], ram=1740, volumes=[[id=null,
type=LOCAL, size=150.0, device=/dev/sda2, durable=false,
isBootDevice=false], [id=vol-d2d385bd, type=SAN, size=null,
device=/dev/sda1, durable=true, isBootDevice=true]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,Not(is64Bit())),
tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-57926d31},
tags=[]]]
Nodes started: [[id=us-east-1/i-49926d2f, providerId=i-49926d2f,
group=myhadoopcluster, name=myhadoopcluster-49926d2f,
location=[id=us-east-1d, scope=ZONE, description=us-east-1d,
parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
imageId=us-east-1/ami-ab36fbc2, os=[name=null, family=ubuntu,
version=10.04, arch=paravirtual, is64Bit=false,
description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-i386-server-20110930],
state=RUNNING, loginPort=22, hostname=domU-12-31-39-02-6A-15,
privateAddresses=[10.248.109.223], publicAddresses=[23.22.55.18],
hardware=[id=m1.small, providerId=m1.small, name=null,
processors=[[cores=1.0, speed=1.0]], ram=1740, volumes=[[id=null,
type=LOCAL, size=150.0, device=/dev/sda2, durable=false,
isBootDevice=false], [id=vol-d0d385bf, type=SAN, size=null,
device=/dev/sda1, durable=true, isBootDevice=true]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,Not(is64Bit())),
tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-49926d2f},
tags=[]]]
Unable to start the cluster. Terminating all nodes.
org.apache.whirr.net.DnsException: java.net.ConnectException: Connection
refused
        at
org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
        at
org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)

-Debashis
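[Editor's note] The stack trace shows Whirr's FastDnsResolver failing with a TCP "Connection refused" while doing a lookup through dnsjava (note org.xbill.DNS.TCPClient in the fuller traces below), which usually means the machine running Whirr cannot reach a DNS server over TCP port 53. A quick way to check that, sketched here as a standalone script (not part of Whirr; the server addresses are just examples):

```python
import socket

def can_reach_dns(server, port=53, timeout=3):
    """Return True if a TCP connection to the given resolver succeeds.

    dnsjava's TCPClient uses TCP to the resolver, so plain UDP
    reachability is not enough to rule out this failure mode.
    """
    try:
        with socket.create_connection((server, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False

if __name__ == "__main__":
    # Check the public resolvers suggested later in this thread.
    for server in ("8.8.8.8", "8.8.4.4"):
        print(server, can_reach_dns(server))
```

If this prints False for every resolver your system is configured to use, the problem is local DNS configuration rather than Whirr itself.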

Re: whirr 0.7.1 installation error on amazon ec2

Posted by Marco Didonna <m....@gmail.com>.
On 7 May 2012 03:30, Deb Ghosh <dg...@gmail.com> wrote:
> Hi
>
> After execution of the whirr 0.7.1 launch-cluster command I am getting error
> on amazon-ec2 , I don't know how to resolve it ,
>
> the following error occurred
>
>
> Unable to find service hadoop, using default.
> Bootstrapping cluster
> Configuring template
> Configuring template
> Starting 1 node(s) with roles [hadoop-datanode, hadoop-tasktracker]
> Starting 1 node(s) with roles [hadoop-jobtracker, hadoop-namenode]
> Nodes started: [[id=us-east-1/i-b167a5d7, providerId=i-b167a5d7,
> group=myhadoopcluster, name=myhadoopcluster-b167a5d7,
> location=[id=us-east-1a, scope=ZONE, description=us-east-1a,
> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
> imageId=us-east-1/ami-ad36fbc4, os=[name=null, family=ubuntu, version=10.04,
> arch=paravirtual, is64Bit=true,
> description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-amd64-server-20110930],
> state=RUNNING, loginPort=22, hostname=ip-10-77-18-166,
> privateAddresses=[10.77.18.166], publicAddresses=[23.20.149.118],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false,
> isBootDevice=false], [id=vol-209ac44f, type=SAN, size=null,
> device=/dev/sda1, durable=true, isBootDevice=true]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-b167a5d7},
> tags=[]]]
> Nodes started: [[id=us-east-1/i-b367a5d5, providerId=i-b367a5d5,
> group=myhadoopcluster, name=myhadoopcluster-b367a5d5,
> location=[id=us-east-1a, scope=ZONE, description=us-east-1a,
> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
> imageId=us-east-1/ami-ad36fbc4, os=[name=null, family=ubuntu, version=10.04,
> arch=paravirtual, is64Bit=true,
> description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-amd64-server-20110930],
> state=RUNNING, loginPort=22, hostname=ip-10-202-45-93,
> privateAddresses=[10.202.45.93], publicAddresses=[23.20.230.14],
> hardware=[id=m1.large, providerId=m1.large, name=null,
> processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
> type=LOCAL, size=420.0, device=/dev/sdb, durable=false, isBootDevice=false],
> [id=null, type=LOCAL, size=420.0, device=/dev/sdc, durable=false,
> isBootDevice=false], [id=vol-249ac44b, type=SAN, size=null,
> device=/dev/sda1, durable=true, isBootDevice=true]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
> tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-b367a5d5},
> tags=[]]]
>
> Unable to start the cluster. Terminating all nodes.
> org.apache.whirr.net.DnsException: java.net.ConnectException: Connection
> refused
>     at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
>     at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
>     at org.apache.whirr.Cluster$Instance.getPublicHostName(Cluster.java:112)
>     at org.apache.whirr.Cluster$Instance.getPublicAddress(Cluster.java:94)
>     at
> org.apache.whirr.service.hadoop.HadoopCluster.getNamenodePublicAddress(HadoopCluster.java:35)
>     at
> org.apache.whirr.service.hadoop.HadoopJobTrackerClusterActionHandler.doBeforeConfigure(HadoopJobTrackerClusterActionHandler.java:51)
>     at
> org.apache.whirr.service.hadoop.HadoopClusterActionHandler.beforeConfigure(HadoopClusterActionHandler.java:87)
>     at
> org.apache.whirr.service.ClusterActionHandlerSupport.beforeAction(ClusterActionHandlerSupport.java:53)
>     at
> org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:100)
>     at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:109)
>     at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
>     at org.apache.whirr.cli.Main.run(Main.java:64)
>     at org.apache.whirr.cli.Main.main(Main.java:97)
> Caused by: java.net.ConnectException: Connection refused
>     at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>     at
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
>     at org.xbill.DNS.TCPClient.connect(TCPClient.java:30)
>     at org.xbill.DNS.TCPClient.sendrecv(TCPClient.java:118)
>     at org.xbill.DNS.SimpleResolver.send(SimpleResolver.java:254)
>     at
> org.xbill.DNS.ExtendedResolver$Resolution.start(ExtendedResolver.java:95)
>     at org.xbill.DNS.ExtendedResolver.send(ExtendedResolver.java:358)
>     at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:69)
>     ... 12 more
> Unable to load cluster state, assuming it has no running nodes.
> java.io.FileNotFoundException:
> /home/debashisg/.whirr/myhadoopcluster/instances (No such file or directory)
>     at java.io.FileInputStream.open(Native Method)
>     at java.io.FileInputStream.<init>(FileInputStream.java:120)
>     at com.google.common.io.Files$1.getInput(Files.java:100)
>     at com.google.common.io.Files$1.getInput(Files.java:97)
>     at com.google.common.io.CharStreams$2.getInput(CharStreams.java:91)
>     at com.google.common.io.CharStreams$2.getInput(CharStreams.java:88)
>     at com.google.common.io.CharStreams.readLines(CharStreams.java:306)
>     at com.google.common.io.Files.readLines(Files.java:580)
>     at
> org.apache.whirr.state.FileClusterStateStore.load(FileClusterStateStore.java:54)
>     at
> org.apache.whirr.state.ClusterStateStore.tryLoadOrEmpty(ClusterStateStore.java:58)
>     at
> org.apache.whirr.ClusterController.destroyCluster(ClusterController.java:143)
>     at
> org.apache.whirr.ClusterController.launchCluster(ClusterController.java:118)
>     at
> org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
>     at org.apache.whirr.cli.Main.run(Main.java:64)
>     at org.apache.whirr.cli.Main.main(Main.java:97)
> Starting to run scripts on cluster for phase destroyinstances:
> Starting to run scripts on cluster for phase destroyinstances:
> Finished running destroy phase scripts on all cluster instances
> Destroying myhadoopcluster cluster
>
>
> Thanks
> Debashis


Again, have a look at this earlier thread: http://mail-archives.apache.org/mod_mbox/whirr-user/201204.mbox/browser

Re: whirr 0.7.1 installation error on amazon ec2

Posted by Deb Ghosh <dg...@gmail.com>.
Hi,

After executing the whirr 0.7.1 launch-cluster command I am getting an
error on amazon-ec2, and I don't know how to resolve it.

The following error occurred:

Unable to find service hadoop, using default.
Bootstrapping cluster
Configuring template
Configuring template
Starting 1 node(s) with roles [hadoop-datanode, hadoop-tasktracker]
Starting 1 node(s) with roles [hadoop-jobtracker, hadoop-namenode]
Nodes started: [[id=us-east-1/i-b167a5d7, providerId=i-b167a5d7,
group=myhadoopcluster, name=myhadoopcluster-b167a5d7,
location=[id=us-east-1a, scope=ZONE, description=us-east-1a,
parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
imageId=us-east-1/ami-ad36fbc4, os=[name=null, family=ubuntu,
version=10.04, arch=paravirtual, is64Bit=true,
description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-amd64-server-20110930],
state=RUNNING, loginPort=22, hostname=ip-10-77-18-166,
privateAddresses=[10.77.18.166], publicAddresses=[23.20.149.118],
hardware=[id=m1.large, providerId=m1.large, name=null,
processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
durable=false, isBootDevice=false], [id=vol-209ac44f, type=SAN, size=null,
device=/dev/sda1, durable=true, isBootDevice=true]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-b167a5d7},
tags=[]]]
Nodes started: [[id=us-east-1/i-b367a5d5, providerId=i-b367a5d5,
group=myhadoopcluster, name=myhadoopcluster-b367a5d5,
location=[id=us-east-1a, scope=ZONE, description=us-east-1a,
parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
imageId=us-east-1/ami-ad36fbc4, os=[name=null, family=ubuntu,
version=10.04, arch=paravirtual, is64Bit=true,
description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-amd64-server-20110930],
state=RUNNING, loginPort=22, hostname=ip-10-202-45-93,
privateAddresses=[10.202.45.93], publicAddresses=[23.20.230.14],
hardware=[id=m1.large, providerId=m1.large, name=null,
processors=[[cores=2.0, speed=2.0]], ram=7680, volumes=[[id=null,
type=LOCAL, size=420.0, device=/dev/sdb, durable=false,
isBootDevice=false], [id=null, type=LOCAL, size=420.0, device=/dev/sdc,
durable=false, isBootDevice=false], [id=vol-249ac44b, type=SAN, size=null,
device=/dev/sda1, durable=true, isBootDevice=true]],
supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,is64Bit()),
tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-b367a5d5},
tags=[]]]
Unable to start the cluster. Terminating all nodes.
org.apache.whirr.net.DnsException: java.net.ConnectException: Connection
refused
    at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
    at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
    at org.apache.whirr.Cluster$Instance.getPublicHostName(Cluster.java:112)
    at org.apache.whirr.Cluster$Instance.getPublicAddress(Cluster.java:94)
    at
org.apache.whirr.service.hadoop.HadoopCluster.getNamenodePublicAddress(HadoopCluster.java:35)
    at
org.apache.whirr.service.hadoop.HadoopJobTrackerClusterActionHandler.doBeforeConfigure(HadoopJobTrackerClusterActionHandler.java:51)
    at
org.apache.whirr.service.hadoop.HadoopClusterActionHandler.beforeConfigure(HadoopClusterActionHandler.java:87)
    at
org.apache.whirr.service.ClusterActionHandlerSupport.beforeAction(ClusterActionHandlerSupport.java:53)
    at
org.apache.whirr.actions.ScriptBasedClusterAction.execute(ScriptBasedClusterAction.java:100)
    at
org.apache.whirr.ClusterController.launchCluster(ClusterController.java:109)
    at
org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
    at org.apache.whirr.cli.Main.run(Main.java:64)
    at org.apache.whirr.cli.Main.main(Main.java:97)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
    at org.xbill.DNS.TCPClient.connect(TCPClient.java:30)
    at org.xbill.DNS.TCPClient.sendrecv(TCPClient.java:118)
    at org.xbill.DNS.SimpleResolver.send(SimpleResolver.java:254)
    at
org.xbill.DNS.ExtendedResolver$Resolution.start(ExtendedResolver.java:95)
    at org.xbill.DNS.ExtendedResolver.send(ExtendedResolver.java:358)
    at org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:69)
    ... 12 more
Unable to load cluster state, assuming it has no running nodes.
java.io.FileNotFoundException:
/home/debashisg/.whirr/myhadoopcluster/instances (No such file or directory)
    at java.io.FileInputStream.open(Native Method)
    at java.io.FileInputStream.<init>(FileInputStream.java:120)
    at com.google.common.io.Files$1.getInput(Files.java:100)
    at com.google.common.io.Files$1.getInput(Files.java:97)
    at com.google.common.io.CharStreams$2.getInput(CharStreams.java:91)
    at com.google.common.io.CharStreams$2.getInput(CharStreams.java:88)
    at com.google.common.io.CharStreams.readLines(CharStreams.java:306)
    at com.google.common.io.Files.readLines(Files.java:580)
    at
org.apache.whirr.state.FileClusterStateStore.load(FileClusterStateStore.java:54)
    at
org.apache.whirr.state.ClusterStateStore.tryLoadOrEmpty(ClusterStateStore.java:58)
    at
org.apache.whirr.ClusterController.destroyCluster(ClusterController.java:143)
    at
org.apache.whirr.ClusterController.launchCluster(ClusterController.java:118)
    at
org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:63)
    at org.apache.whirr.cli.Main.run(Main.java:64)
    at org.apache.whirr.cli.Main.main(Main.java:97)
Starting to run scripts on cluster for phase destroyinstances:
Starting to run scripts on cluster for phase destroyinstances:
Finished running destroy phase scripts on all cluster instances
Destroying myhadoopcluster cluster


Thanks
Debashis

Re: whirr 0.7.1 installation error on amazon ec2

Posted by Marco Didonna <m....@gmail.com>.
On 5 May 2012 06:44, Deb Ghosh <dg...@gmail.com> wrote:
>
> Hello ,
>
> I am having the following error while launching cluster
> ===========================================
>
> any help is appreciated...
>
>
> Unable to find service hadoop, using default.
> Bootstrapping cluster
> Configuring template
> Configuring template
> Starting 1 node(s) with roles [hadoop-datanode, hadoop-tasktracker]
> Starting 1 node(s) with roles [hadoop-jobtracker, hadoop-namenode]
> Nodes started: [[id=us-east-1/i-57926d31, providerId=i-57926d31,
> group=myhadoopcluster, name=myhadoopcluster-57926d31,
> location=[id=us-east-1d, scope=ZONE, description=us-east-1d,
> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
> imageId=us-east-1/ami-ab36fbc2, os=[name=null, family=ubuntu, version=10.04,
> arch=paravirtual, is64Bit=false,
> description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-i386-server-20110930],
> state=RUNNING, loginPort=22, hostname=domU-12-31-39-03-41-26,
> privateAddresses=[10.249.66.212], publicAddresses=[23.20.165.230],
> hardware=[id=m1.small, providerId=m1.small, name=null,
> processors=[[cores=1.0, speed=1.0]], ram=1740, volumes=[[id=null,
> type=LOCAL, size=150.0, device=/dev/sda2, durable=false,
> isBootDevice=false], [id=vol-d2d385bd, type=SAN, size=null,
> device=/dev/sda1, durable=true, isBootDevice=true]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,Not(is64Bit())),
> tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-57926d31},
> tags=[]]]
> Nodes started: [[id=us-east-1/i-49926d2f, providerId=i-49926d2f,
> group=myhadoopcluster, name=myhadoopcluster-49926d2f,
> location=[id=us-east-1d, scope=ZONE, description=us-east-1d,
> parent=us-east-1, iso3166Codes=[US-VA], metadata={}], uri=null,
> imageId=us-east-1/ami-ab36fbc2, os=[name=null, family=ubuntu, version=10.04,
> arch=paravirtual, is64Bit=false,
> description=099720109477/ebs/ubuntu-images/ubuntu-lucid-10.04-i386-server-20110930],
> state=RUNNING, loginPort=22, hostname=domU-12-31-39-02-6A-15,
> privateAddresses=[10.248.109.223], publicAddresses=[23.22.55.18],
> hardware=[id=m1.small, providerId=m1.small, name=null,
> processors=[[cores=1.0, speed=1.0]], ram=1740, volumes=[[id=null,
> type=LOCAL, size=150.0, device=/dev/sda2, durable=false,
> isBootDevice=false], [id=vol-d0d385bf, type=SAN, size=null,
> device=/dev/sda1, durable=true, isBootDevice=true]],
> supportsImage=And(ALWAYS_TRUE,Or(isWindows(),requiresVirtualizationType(paravirtual)),ALWAYS_TRUE,Not(is64Bit())),
> tags=[]], loginUser=ubuntu, userMetadata={Name=myhadoopcluster-49926d2f},
> tags=[]]]
> Unable to start the cluster. Terminating all nodes.
> org.apache.whirr.net.DnsException: java.net.ConnectException: Connection
> refused
>         at
> org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:83)
>         at
> org.apache.whirr.net.FastDnsResolver.apply(FastDnsResolver.java:40)
>
> -Debashis
>

Another user had a problem similar to yours. Try using the Google DNS
servers 8.8.8.8 and 8.8.4.4.

Marco
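[Editor's note] Marco's suggestion can be applied on the client machine running Whirr (not on the cluster nodes) roughly as follows. This is a sketch, and the details vary: on systems where /etc/resolv.conf is managed by resolvconf or NetworkManager, the nameserver entries belong in that tool's configuration instead of the file itself.

```shell
# Run as root on the machine that launches Whirr.
# Back up the current resolver configuration, then point it at
# Google's public DNS servers.
cp /etc/resolv.conf /etc/resolv.conf.bak
cat > /etc/resolv.conf <<'EOF'
nameserver 8.8.8.8
nameserver 8.8.4.4
EOF

# Verify the resolver answers over TCP, which is what dnsjava's
# TCPClient in the stack trace requires (-T forces TCP if your
# `host` binary supports it).
host -T ec2.amazonaws.com 8.8.8.8
```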