Posted to user@whirr.apache.org by Sebastien Goasguen <ru...@gmail.com> on 2013/05/21 21:46:43 UTC

CloudStack support

Hi,

I installed Whirr 0.8.1 and I am using it against a CloudStack endpoint.
Instances get launched, and I am trying to set up CDH.

I believe I am running into a DNS issue, as I see lots of errors of this type:

13/05/21 21:21:28 WARN net.DNS: Unable to determine local hostname -falling back to "localhost"
java.net.UnknownHostException: hadoop-3d5: hadoop-3d5

If I log in to the name node and try to use hadoop I get things like:

$ hadoop fs -mkdir /toto
-mkdir: java.net.UnknownHostException: hadoop-3d5

my hadoop-site.xml looks like:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
 <property>
   <name>dfs.client.use.legacy.blockreader</name>
   <value>true</value>
 </property>
 <property>
   <name>fs.default.name</name>
   <value>hdfs://hadoop-3d5:8020/</value>
 </property>
 <property>
   <name>mapred.job.tracker</name>
   <value>hadoop-3d5:8021</value>
 </property>
 <property>
   <name>hadoop.job.ugi</name>
   <value>root,root</value>
 </property>
 <property>
   <name>hadoop.rpc.socket.factory.class.default</name>
   <value>org.apache.hadoop.net.SocksSocketFactory</value>
 </property>
 <property>
   <name>hadoop.socks.server</name>
   <value>localhost:6666</value>
 </property>
</configuration>
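A side note on the last two properties: hadoop.socks.server=localhost:6666 means the client expects a local SOCKS proxy tunneled to the cluster. Whirr normally generates a hadoop-proxy.sh for this; the underlying SSH command is roughly the following sketch (the key path, user, and host are placeholders, not values from this thread):

```shell
# Hedged sketch only: the SSH command behind a SOCKS proxy on localhost:6666.
# Key path, user, and host below are placeholders, not values from this cluster.
PROXY_CMD='ssh -i ~/.ssh/id_rsa -D 6666 -N whirr@namenode-public-ip'
echo "$PROXY_CMD"
```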

My ~/.whirr/hadoop/instances file has all the right IP addresses, but I don't think the security group rules got created.

Any thoughts?
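One quick check worth running on the name node (a generic sketch, not Whirr-specific): see whether the machine's own hostname resolves through /etc/hosts or DNS at all.

```shell
# Generic diagnostic: does this machine's own hostname resolve?
# 'getent hosts' consults /etc/hosts and DNS via the normal NSS lookup order.
H="$(hostname)"
if getent hosts "$H" > /dev/null 2>&1; then
  echo "$H resolves"
else
  echo "$H does not resolve"
fi
```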

thanks,

-sebastien


Re: CloudStack support

Posted by Sebastien Goasguen <ru...@gmail.com>.
On May 22, 2013, at 8:28 AM, Andrei Savu <sa...@gmail.com> wrote:

> 
> On Wed, May 22, 2013 at 2:22 PM, Sebastien Goasguen <ru...@gmail.com> wrote:
> I am now running into cdh/hdfs issues, maybe linked to the security group rules:
> 
> 13/05/22 12:57:31 ERROR hdfs.DFSClient: Failed to close file /user/sebastiengoasguen/input/toto._COPYING_
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/sebastiengoasguen/input/toto._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
> 
> Probably Whirr is not authorizing the security group to allow unrestricted communication between machines. 

Whirr is using jclouds 1.5.8 to create the security groups, correct?

Maybe that's the issue, as I am using a CloudStack 4.0.2 cloud.

> 
> -- Andrei Savu


Re: CloudStack support

Posted by Andrei Savu <sa...@gmail.com>.
On Wed, May 22, 2013 at 2:22 PM, Sebastien Goasguen <ru...@gmail.com> wrote:

> I am now running into cdh/hdfs issues, maybe linked to the security group
> rules:
>
> 13/05/22 12:57:31 ERROR hdfs.DFSClient: Failed to close file
> /user/sebastiengoasguen/input/toto._COPYING_
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> /user/sebastiengoasguen/input/toto._COPYING_ could only be replicated to 0
> nodes instead of minReplication (=1).  There are 1 datanode(s) running and
> 1 node(s) are excluded in this operation.
>

Probably Whirr is not authorizing the security group to allow unrestricted
communication between machines.
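If missing ingress rules are indeed the cause, a manual workaround can be sketched with the CloudStack API call authorizeSecurityGroupIngress. Everything below (group name, port range, CIDR, and the cloudmonkey phrasing) is an assumption to adapt, not something taken from this thread:

```shell
# Hedged sketch: authorize intra-cluster TCP traffic by hand via cloudmonkey.
# Group name, CIDR, and port range are placeholders; verify against your cloud.
CMD='cloudmonkey authorize securitygroupingress securitygroupname=jclouds-hadoop protocol=TCP startport=1 endport=65535 cidrlist=10.0.0.0/8'
echo "$CMD"
```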

-- Andrei Savu

Re: CloudStack support

Posted by Sebastien Goasguen <ru...@gmail.com>.
On May 21, 2013, at 5:02 PM, Sebastien Goasguen <ru...@gmail.com> wrote:

> 
> 
> On 21 May 2013, at 22:52, Andrei Savu <sa...@gmail.com> wrote:
> 
>> […]
>> That's strange. What version of Whirr are you running?
>> 
>> See the following page for all configuration options:
>> http://whirr.apache.org/docs/0.8.2/configuration-guide.html
> 
> I compiled 0.8.1 from source

I installed 0.8.2 this morning and it worked; the /etc/hosts file was updated.

I am now running into CDH/HDFS issues, maybe linked to the security group rules:

13/05/22 12:57:31 ERROR hdfs.DFSClient: Failed to close file /user/sebastiengoasguen/input/toto._COPYING_
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/sebastiengoasguen/input/toto._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

-Sebastien




Re: CloudStack support

Posted by Sebastien Goasguen <ru...@gmail.com>.

On 21 May 2013, at 22:52, Andrei Savu <sa...@gmail.com> wrote:

> 
> […]
> 
> That's strange. What version of Whirr are you running?
> 
> See the following page for all configuration options:
> http://whirr.apache.org/docs/0.8.2/configuration-guide.html

I compiled 0.8.1 from source


> -- Andrei

Re: CloudStack support

Posted by Andrei Savu <sa...@gmail.com>.
On Tue, May 21, 2013 at 11:46 PM, Sebastien Goasguen <ru...@gmail.com> wrote:

> […]

That's strange. What version of Whirr are you running?

See the following page for all configuration options:
http://whirr.apache.org/docs/0.8.2/configuration-guide.html

-- Andrei

Re: CloudStack support

Posted by Sebastien Goasguen <ru...@gmail.com>.
On May 21, 2013, at 4:14 PM, Andrew Bayer <an...@gmail.com> wrote:

> And actually, if you set "whirr.store-cluster-in-etc-hosts=true" in your properties file, Whirr should set up /etc/hosts on the instances for you.
> 

It does not seem to work; I tried putting it in the properties file and it did not do anything.
On the command line:

whirr launch-cluster --store-cluster-in-etc-hosts --config ~/Desktop/hadoop.properties
Exception in thread "main" joptsimple.UnrecognizedOptionException: 'store-cluster-in-etc-hosts' is not a recognized option
	at joptsimple.OptionException.unrecognizedOption(OptionException.java:88)
	at joptsimple.OptionParser.handleLongOptionToken(OptionParser.java:403)
	at joptsimple.OptionParserState$2.handleArgument(OptionParserState.java:54)
	at joptsimple.OptionParser.parse(OptionParser.java:379)
	at org.apache.whirr.cli.command.LaunchClusterCommand.run(LaunchClusterCommand.java:49)
	at org.apache.whirr.cli.Main.run(Main.java:69)
	at org.apache.whirr.cli.Main.main(Main.java:102)



Re: CloudStack support

Posted by Andrew Bayer <an...@gmail.com>.
And actually, if you set "whirr.store-cluster-in-etc-hosts=true" in your
properties file, Whirr should set up /etc/hosts on the instances for you.
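For reference, a minimal properties sketch (the cluster name and instance templates below are placeholder values, not taken from this thread; only the last line is the setting in question):

```
whirr.cluster-name=hadoop
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,3 hadoop-datanode+hadoop-tasktracker
whirr.store-cluster-in-etc-hosts=true
```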

A.

On Tue, May 21, 2013 at 1:09 PM, Andrei Savu <sa...@gmail.com> wrote:

> Yes, you should be able to make that work.
>
> -- Andrei Savu

Re: CloudStack support

Posted by Andrei Savu <sa...@gmail.com>.
Yes, you should be able to make that work.

-- Andrei Savu

On Tue, May 21, 2013 at 11:04 PM, Sebastien Goasguen <ru...@gmail.com> wrote:

>
> On May 21, 2013, at 4:00 PM, Andrei Savu <sa...@gmail.com> wrote:
>
> You need sane dns settings (forward and reverse for each machine to make
> this work).
>
>
> Can I try to hack configure_hostname.sh in:
>
> services/cdh/target/classes/functions
>
> Adding some entry in /etc/hosts
>
> Will that be enough ?

Re: CloudStack support

Posted by Sebastien Goasguen <ru...@gmail.com>.
On May 21, 2013, at 4:00 PM, Andrei Savu <sa...@gmail.com> wrote:

> You need sane dns settings (forward and reverse for each machine to make this work). 
> 

Can I try to hack configure_hostname.sh in:

services/cdh/target/classes/functions

adding some entries to /etc/hosts?

Will that be enough?
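A rough sketch of that idea: derive /etc/hosts lines from the instances file. The field layout assumed below (role, id, external IP, internal IP, hostname) is a guess, not Whirr's documented format, so check your actual file first.

```shell
# Hypothetical sketch: turn instance records into "/etc/hosts"-style lines.
# Assumed record layout: role id external-ip internal-ip hostname (a guess).
to_hosts_entries() {
  while read -r role id ext_ip int_ip host; do
    printf '%s %s\n' "$int_ip" "$host"
  done
}

# Demo with made-up records; against a real cluster this might be:
#   to_hosts_entries < ~/.whirr/hadoop/instances | sudo tee -a /etc/hosts
printf 'nn 1 203.0.113.10 10.0.0.10 hadoop-3d5\ndn 2 203.0.113.11 10.0.0.11 hadoop-a1b\n' | to_hosts_entries
```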




Re: CloudStack support

Posted by Andrei Savu <sa...@gmail.com>.
You need sane DNS settings (forward and reverse for each machine) to make this work.

-- Andrei Savu

On Tue, May 21, 2013 at 10:57 PM, Sebastien Goasguen <ru...@gmail.com> wrote:

>
> On May 21, 2013, at 3:48 PM, Andrew Bayer <an...@gmail.com> wrote:
>
> Yeah, DNS is a giant pain. If at all possible, you need to get the
> hostnames resolvable from wherever you're spinning the instances up, as
> well as on the instances themselves. The DNS that CloudStack's DHCP assigns
> should do the trick for that.
>
>
> argh…
>
> These instances have public IPs but not DNS entries.
>
> @andrei the hadoop-3d5 and other names are setup as the name of the
> instances. They are used for local 'hostname'. so no not resolvable.
>

Re: CloudStack support

Posted by Sebastien Goasguen <ru...@gmail.com>.
On May 21, 2013, at 3:48 PM, Andrew Bayer <an...@gmail.com> wrote:

> Yeah, DNS is a giant pain. If at all possible, you need to get the hostnames resolvable from wherever you're spinning the instances up, as well as on the instances themselves. The DNS that CloudStack's DHCP assigns should do the trick for that.

argh…

These instances have public IPs but not DNS entries.

@andrei: hadoop-3d5 and the other names are set as the names of the instances and used for the local 'hostname', so no, not resolvable.





Re: CloudStack support

Posted by Andrew Bayer <an...@gmail.com>.
Yeah, DNS is a giant pain. If at all possible, you need to get the
hostnames resolvable from wherever you're spinning the instances up, as
well as on the instances themselves. The DNS that CloudStack's DHCP assigns
should do the trick for that.

A.


Re: CloudStack support

Posted by Andrei Savu <sa...@gmail.com>.
On Tue, May 21, 2013 at 10:46 PM, Sebastien Goasguen <ru...@gmail.com> wrote:

> I believe I am running into a DNS issue as I am running into lots of
> issues of this type:
>
> 13/05/21 21:21:28 WARN net.DNS: Unable to determine local hostname
> -falling back to "localhost"
> java.net.UnknownHostException: hadoop-3d5: hadoop-3d5
>


Are you able to resolve hadoop-3d5 to an IP address from any machine inside
the cluster?

-- Andrei Savu