Posted to user@whirr.apache.org by Paolo Castagna <ca...@googlemail.com> on 2011/10/28 17:32:32 UTC

Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Hi,
it's me again, I am trying to use Apache Whirr 0.6.0-incubating
to start a 20 nodes Hadoop cluster on Amazon EC2.

Here is my recipe:

----
whirr.cluster-name=hadoop
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,20 hadoop-datanode+hadoop-tasktracker
whirr.instance-templates-max-percent-failures=100 hadoop-namenode+hadoop-jobtracker,50 hadoop-datanode+hadoop-tasktracker
whirr.max-startup-retries=1
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
whirr.hardware-id=m1.large
whirr.image-id=eu-west-1/ami-ee0e3c9a
whirr.location-id=eu-west-1
whirr.private-key-file=${sys:user.home}/.ssh/whirr
whirr.public-key-file=${whirr.private-key-file}.pub
whirr.hadoop.version=0.20.204.0
whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
----

I see a lot of these errors:

org.jclouds.aws.AWSResponseException: request POST
https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
error: AWSError{requestId='b361f3f6-73f1-4348-964a-31265ec70eeb',
requestToken='null', code='RequestLimitExceeded', message='Request
limit exceeded.', context='{Response=, Errors=}'}
	at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:74)
	at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:71)
	at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:200)
	at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:165)
	at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:134)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)


I have more than 20 slots available on this Amazon account.

Is it Whirr sending requests too fast to Amazon?

How can I solve this problem?

Regards,
Paolo

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
Paolo -

I am so glad you are part of the community. Thanks for sharing your work
with us!

I am dreaming that by the time we get to 1.0.0 Apache Whirr will be a
drop-in replacement for Amazon EMR for common scenarios and, at the same time,
a strong foundation for building more complicated workflows that involve
multiple services (including custom ones) and work in a similar fashion on
different clouds.


> But, the cluster size issue can be a big problem in terms of adoption and
> in my opinion it should be addressed (if at all possible).
>
>
I agree that this is an important requirement. There is also some work
happening in jclouds for this:
http://www.jclouds.org/documentation/reference/pool-design

> I hope we are going to be able to get this in for 0.8.0.
>
> Ack.
>

It's now on the roadmap for 0.7.0: http://s.apache.org/whirr-0.7.0-roadmap

BTW, it would be great if you could help with some of the remaining
issues.


>
> Indeed, I was thinking to do the opposite: use twice or more m1.small.
> My MapReduce jobs are typically very simple and do not require a lot of
> RAM.
> I am aware that this might not be the right thing to do... but I am
> curious and
> I want to experience it myself. IO might be poor ... I know.
>
>
You need to run tests to answer these questions. AFAIK for Hadoop, commodity
hardware means systems with two processors, each with 4 cores, plenty of RAM,
fast disks and fast networking.


> So, far I've used Whirr for Hadoop clusters only, but I am really happy to
> see
> that there is support for Cassandra, HBase, ElasticSearch and ZooKeeper.
> I might use these as well in a not too distant future.
>
>
Great!


> Paolo
>
>  [1] https://github.com/castagna/tdbloader3
>

Cheers,

Andrei

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paolo Castagna <ca...@googlemail.com>.
Hi Andrei,
thanks for your prompt replies and your help. I appreciate it.
My comments and opinions are inline.

On 31 October 2011 14:32, Andrei Savu <sa...@gmail.com> wrote:
> Answers inline.
>>
>> I was trying to start an Hadoop cluster of 20 datanodes|tasktrackers.
>>
>> What is the current upper bound?
>
> We haven't done any testing to find out, but it seems that when starting a
> cluster with ~20 nodes jclouds makes too many requests to AWS. We should be
> able to overcome this limitation by changing settings.

That would be wonderful.

I am not aiming at launching clusters of hundreds of nodes using Whirr, but
being able to launch clusters in the range of tens of nodes seems very reasonable to me.

Too low a limit on the number of nodes Whirr can provision would, in my
humble opinion, significantly harm the utility and potential of the project.
It's a great and useful project, very easy to use, and when it works, it works
beautifully. Kudos to all of you, thanks.

But the cluster size issue can be a big problem in terms of adoption and,
in my opinion, it should be addressed (if at all possible).

>>> I have created a new JIRA issue so that we can add this automatically
>>> when the image-id is known:
>>> https://issues.apache.org/jira/browse/WHIRR-416
>>
>> I am looking forward to see if this will fix my problem and increase the
>> number of nodes of Hadoop clusters one can use via Whirr.
>
> I hope we are going to be able to get this in for 0.8.0.

Ack.

>>> What if you start a smaller size cluster but with more powerful machines?
>>
>> An option, but not a good one in the context of MapReduce, isn't it? :-)
>> m1.large are powerful (and expensive) enough for what I want to do.
>>
>
> How about m1.xlarge? (twice as powerful - and *only* twice as expensive).

I might try that as well.

Indeed, I was thinking of doing the opposite: using twice as many (or more) m1.small instances.
My MapReduce jobs are typically very simple and do not require a lot of RAM.
I am aware that this might not be the right thing to do... but I am curious and
I want to experience it myself. I/O might be poor... I know.

> How are you using Apache Whirr? What's the end result?

We use MapReduce mainly in our ingestion pipeline; currently we use
Amazon EMR jobs. We get some data in RDF format and we build Lucene
indexes, TDB indexes [1], etc. Others use MapReduce jobs to gather
stats or analytics over their datasets (again, mainly via EMR jobs currently).

We deal mainly with RDF data, and often it is "human curated" data. Dataset
sizes follow a sort of power-law distribution where you have many datasets
of small-to-medium size and just a few large or huge ones. In the RDF world,
large or huge is in the order of billions of triples/quads (i.e. "records"/lines).

I find Apache Whirr very good for testing. I prefer to have full control over
my software stack, and I achieve this using open source software. This way,
I can choose to be on the bleeding edge, upgrade when I need to, etc. When
I have a problem, I can go as deep as I need to find out what went wrong.
Support, in my experience, is faster, better quality and more transparent for
open source (and Apache) projects such as Hadoop and Whirr.

I am not sure whether with Amazon EMR it is possible to launch a job on an
already running cluster; probably it is. It is certainly possible to do so
using Whirr, and this is very useful when testing/developing, considering you
pay for machines on EC2 by the hour. It is also useful in production when you
have many short-running jobs.

I did not find a way with Amazon EMR jobs to actually browse HDFS via the
Namenode UI as you can do with Hadoop provisioned by Whirr. With Amazon
EMR jobs you can connect to the Namenode UI but the browsing does not
work out-of-the-box for me.

For testing while I develop, I use MiniDFSCluster and MiniMRCluster; they help,
even if they can be slow. However, you are never 100% sure until you test with a
real cluster. I find it very useful, once I am almost there, to have a small cluster
running and to iterate quickly to fix small issues. At that point Whirr
is what I use.

Personally, I prefer to invest my time in things which do not lock me
into a particular cloud provider (Whirr has this property).

> Your feedback is extremely important for our future roadmap.

So far, I've used Whirr for Hadoop clusters only, but I am really happy to see
that there is support for Cassandra, HBase, ElasticSearch and ZooKeeper.
I might use these as well in the not-too-distant future.

Paolo

 [1] https://github.com/castagna/tdbloader3

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
Answers inline.


> I was trying to start an Hadoop cluster of 20 datanodes|tasktrackers.
>
> What is the current upper bound?
>
>
We haven't done any testing to find out, but it seems that when starting a
cluster with ~20 nodes jclouds makes too many requests to AWS. We should be
able to overcome this limitation by changing settings.
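
For example, since jclouds properties set in the recipe are forwarded to the
provider (as the jclouds.ec2.ami-query example elsewhere in this thread
suggests), something along these lines could make the EC2 calls less bursty
and retry them more. This is a sketch only: the property names and values are
assumptions to verify against the jclouds version bundled with Whirr 0.6.0.

----
# Illustrative values, not tested; check the property names in the jclouds docs.
# Retry each failed EC2 request a few more times before giving up:
jclouds.max-retries=10
# Use fewer concurrent caller threads so requests arrive in smaller bursts:
jclouds.user-threads=5
----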


>
>  I have created a new JIRA issue so that we can add this automatically
>> when the image-id is known:
>> https://issues.apache.org/**jira/browse/WHIRR-416<https://issues.apache.org/jira/browse/WHIRR-416>
>>
>
> I am looking forward to see if this will fix my problem and increase the
> number of nodes of Hadoop clusters one can use via Whirr.


I hope we are going to be able to get this in for 0.8.0.


>
>
>  What if you start a smaller size cluster but with more powerful machines?
>>
>
> An option, but not a good one in the context of MapReduce, isn't it? :-)
> m1.large are powerful (and expensive) enough for what I want to do.
>
>
How about m1.xlarge? (twice as powerful - and *only* twice as expensive).

How are you using Apache Whirr? What's the end result?

Your feedback is extremely important for our future roadmap.

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paolo Castagna <ca...@googlemail.com>.
Hi Andrei

Andrei Savu wrote:
> Paul -
> 
> I think you are hitting an upper bound on the size of the clusters that 
> can be started with Whirr right now. 

I was trying to start an Hadoop cluster of 20 datanodes|tasktrackers.

What is the current upper bound?

> One possible workaround you can try is to enable lazy image fetching in 
> jclouds:
> http://www.jclouds.org/documentation/userguide/using-ec2

It's not clear to me how I can do that with Whirr, and I am not even sure whether
that is the root cause of the problem.

> I have created a new JIRA issue so that we can add this automatically 
> when the image-id is known:
> https://issues.apache.org/jira/browse/WHIRR-416

I am looking forward to seeing if this will fix my problem and increase the
number of nodes one can use for Hadoop clusters via Whirr.

> What if you start a smaller size cluster but with more powerful machines?

An option, but not a good one in the context of MapReduce, is it? :-)
m1.large instances are powerful (and expensive) enough for what I want to do.

Paolo

> 
> Cheers,
> 
> -- Andrei Savu
> 
> On Fri, Oct 28, 2011 at 6:32 PM, Paolo Castagna 
> <castagna.lists@googlemail.com <ma...@googlemail.com>> 
> wrote:
> 
>     Hi,
>     it's me again, I am trying to use Apache Whirr 0.6.0-incubating
>     to start a 20 nodes Hadoop cluster on Amazon EC2.
> 
>     Here is my recipe:
> 
>     ----
>     whirr.cluster-name=hadoop
>     whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,20
>     hadoop-datanode+hadoop-tasktracker
>     whirr.instance-templates-max-percent-failures=100
>     hadoop-namenode+hadoop-jobtracker,50
>     hadoop-datanode+hadoop-tasktracker
>     whirr.max-startup-retries=1
>     whirr.provider=aws-ec2
>     whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
>     whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
>     whirr.hardware-id=m1.large
>     whirr.image-id=eu-west-1/ami-ee0e3c9a
>     whirr.location-id=eu-west-1
>     whirr.private-key-file=${sys:user.home}/.ssh/whirr
>     whirr.public-key-file=${whirr.private-key-file}.pub
>     whirr.hadoop.version=0.20.204.0
>     whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
>     ----
> 
>     I see a lot of these errors:
> 
>     org.jclouds.aws.AWSResponseException: request POST
>     https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
>     error: AWSError{requestId='b361f3f6-73f1-4348-964a-31265ec70eeb',
>     requestToken='null', code='RequestLimitExceeded', message='Request
>     limit exceeded.', context='{Response=, Errors=}'}
>            at
>     org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:74)
>            at
>     org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:71)
>            at
>     org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:200)
>            at
>     org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:165)
>            at
>     org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:134)
>            at
>     java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>            at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>            at
>     java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>            at
>     java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>            at java.lang.Thread.run(Thread.java:662)
> 
> 
>     I have more than 20 slots available on this Amazon account.
> 
>     Is it Whirr sending requests too fast to Amazon?
> 
>     How can I solve this problem?
> 
>     Regards,
>     Paolo
> 
> 


Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
Paul, can you share some of the jclouds-specific options you are using?

https://issues.apache.org/jira/browse/WHIRR-420

On Thu, Nov 3, 2011 at 7:19 AM, Paul Baclace <pa...@gmail.com> wrote:

>  I thought "NullPointerException: architecture" is a missing attribute;
> this problem is encountered when using a private image (IIRC).  I found it
> necessary to add this property with attr. architecture=:
>
>
> jclouds.ec2.ami-query=owner-id=$AWS_ACCOUNT_NUMBER;state=available;image-type=machine;root-device-type=instance-store;architecture=x86_32
>
> The more constraints on finding the image, the faster the lookup (even if
> some seem rather obvious).
>
>
> Paul
>
>
> On 20111102 8:11 , Andrei Savu wrote:
>
> The jclouds issue is here:
> http://code.google.com/p/jclouds/issues/detail?id=744
>
> On Wed, Nov 2, 2011 at 5:05 PM, Andrei Savu <sa...@gmail.com> wrote:
>
>> So maybe the cluster is still the underlying problem ...
>>
>>  I will log that exception as an issue on the jclouds issue tracker.
>>
>>  Thanks,
>>
>> -- Andrei Savu
>>
>> On Wed, Nov 2, 2011 at 4:59 PM, Paolo Castagna <
>> castagna.lists@googlemail.com> wrote:
>>
>>> Hi Andrei,
>>> I've just tried again, the only difference in the recipe:
>>> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
>>> hadoop-datanode+hadoop-tasktracker
>>>
>>> I saw the same exception, but now I can connect to the web UIs as usual.
>>>
>>> Paolo
>>>
>>> On 2 November 2011 14:54, Andrei Savu <sa...@gmail.com> wrote:
>>> > Maybe - I am not sure but I think the AMI metadata is incomplete.
>>> > Are you able to actually use the cluster? Does it happen every time?
>>> >
>>> > Thanks,
>>> > -- Andrei Savu
>>> >
>>> > On Wed, Nov 2, 2011 at 4:35 PM, Paolo Castagna
>>> > <ca...@googlemail.com> wrote:
>>> >>
>>> >> Hi Andrei
>>> >>
>>> >> On 29 October 2011 13:37, Andrei Savu <sa...@gmail.com> wrote:
>>> >> > What if you start a smaller size cluster but with more powerful
>>> >> > machines?
>>> >>
>>> >> I've tried that... using this recipe with c1.xlarge:
>>> >>
>>> >> -------
>>> >> whirr.cluster-name=hadoop
>>> >> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,18
>>> >> hadoop-datanode+hadoop-tasktracker
>>> >> whirr.instance-templates-max-percent-failures=100
>>> >> hadoop-namenode+hadoop-jobtracker,70
>>> >> hadoop-datanode+hadoop-tasktracker
>>>
>>> >> whirr.max-startup-retries=1
>>> >> whirr.provider=aws-ec2
>>> >> whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
>>> >> whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
>>> >> whirr.hardware-id=c1.xlarge
>>> >> whirr.location-id=us-east-1
>>> >> whirr.image-id=us-east-1/ami-1136fb78
>>> >> whirr.private-key-file=${sys:user.home}/.ssh/whirr
>>> >> whirr.public-key-file=${whirr.private-key-file}.pub
>>> >> whirr.hadoop.version=0.20.204.0
>>> >>
>>> >> whirr.hadoop.tarball.url=
>>> http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
>>> >> -------
>>> >>
>>> >> The cluster started up this time, however I see this exception in the
>>> >> Whirr log:
>>> >>
>>> >> malformed image: null
>>> >> java.lang.NullPointerException: architecture
>>> >>        at
>>> >>
>>> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
>>> >>        at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
>>> >>        at
>>> >>
>>> org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:601)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2938)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
>>> >>        at
>>> >>
>>> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
>>> >>        at
>>> org.jclouds.http.functions.ParseSax.doParse(ParseSax.java:125)
>>> >>        at org.jclouds.http.functions.ParseSax.parse(ParseSax.java:114)
>>> >>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:78)
>>> >>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:51)
>>> >>        at
>>> >> com.google.common.util.concurrent.Futures$4.apply(Futures.java:439)
>>> >>        at
>>> >> com.google.common.util.concurrent.Futures$4.apply(Futures.java:437)
>>> >>        at
>>> >>
>>> com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:713)
>>> >>        at
>>> >>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>> >>        at
>>> >>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>> >>        at java.lang.Thread.run(Thread.java:662)
>>> >>
>>> >> I run the proxy as usual and try to connect to the Namenode UI or
>>> >> Jobtracker UI.
>>> >> It connects but I see an empty page... it usually works fine.
>>> >>
>>> >> Am I hitting another problem?
>>> >>
>>> >> Thanks,
>>> >> Paolo
>>> >
>>> >
>>>
>>
>>
>
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paul Baclace <pa...@gmail.com>.
I thought "NullPointerException: architecture" is a missing attribute; 
this problem is encountered when using a private image (IIRC).  I found 
it necessary to add this property with attr. architecture=:

jclouds.ec2.ami-query=owner-id=$AWS_ACCOUNT_NUMBER;state=available;image-type=machine;root-device-type=instance-store;architecture=x86_32

The more constraints on finding the image, the faster the lookup (even 
if some seem rather obvious).
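
To make the context concrete, here is a sketch of how this sits in a Whirr
recipe next to the image settings; the image id and owner id below are
placeholders for your own private AMI and AWS account number:

----
whirr.provider=aws-ec2
whirr.location-id=eu-west-1
# Placeholder image id: a private AMI owned by the account referenced below.
whirr.image-id=eu-west-1/ami-00000000
# Placeholder owner id: constrain the query so it only matches your own AMIs.
jclouds.ec2.ami-query=owner-id=123456789012;state=available;image-type=machine;root-device-type=instance-store;architecture=x86_32
----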


Paul

On 20111102 8:11 , Andrei Savu wrote:
> The jclouds issue is here: 
> http://code.google.com/p/jclouds/issues/detail?id=744
>
> On Wed, Nov 2, 2011 at 5:05 PM, Andrei Savu <savu.andrei@gmail.com 
> <ma...@gmail.com>> wrote:
>
>     So maybe the cluster is still the underlying problem ...
>
>     I will log that exception as an issue on the jclouds issue tracker.
>
>     Thanks,
>
>     -- Andrei Savu
>
>     On Wed, Nov 2, 2011 at 4:59 PM, Paolo Castagna
>     <castagna.lists@googlemail.com
>     <ma...@googlemail.com>> wrote:
>
>         Hi Andrei,
>         I've just tried again, the only difference in the recipe:
>         whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
>         hadoop-datanode+hadoop-tasktracker
>         I saw the same exception, but now I can connect to the web UIs
>         as usual.
>
>         Paolo
>
>         On 2 November 2011 14:54, Andrei Savu <savu.andrei@gmail.com
>         <ma...@gmail.com>> wrote:
>         > Maybe - I am not sure but I think the AMI metadata is
>         incomplete.
>         > Are you able to actually use the cluster? Does it happen
>         every time?
>         >
>         > Thanks,
>         > -- Andrei Savu
>         >
>         > On Wed, Nov 2, 2011 at 4:35 PM, Paolo Castagna
>         > <castagna.lists@googlemail.com
>         <ma...@googlemail.com>> wrote:
>         >>
>         >> Hi Andrei
>         >>
>         >> On 29 October 2011 13:37, Andrei Savu
>         <savu.andrei@gmail.com <ma...@gmail.com>> wrote:
>         >> > What if you start a smaller size cluster but with more
>         powerful
>         >> > machines?
>         >>
>         >> I've tried that... using this recipe with c1.xlarge:
>         >>
>         >> -------
>         >> whirr.cluster-name=hadoop
>         >> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,18
>         >> hadoop-datanode+hadoop-tasktracker
>         >> whirr.instance-templates-max-percent-failures=100
>         >> hadoop-namenode+hadoop-jobtracker,70
>         >> hadoop-datanode+hadoop-tasktracker
>         >> whirr.max-startup-retries=1
>         >> whirr.provider=aws-ec2
>         >> whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
>         >> whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
>         >> whirr.hardware-id=c1.xlarge
>         >> whirr.location-id=us-east-1
>         >> whirr.image-id=us-east-1/ami-1136fb78
>         >> whirr.private-key-file=${sys:user.home}/.ssh/whirr
>         >> whirr.public-key-file=${whirr.private-key-file}.pub
>         >> whirr.hadoop.version=0.20.204.0
>         >>
>         >>
>         whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
>         >> -------
>         >>
>         >> The cluster started up this time, however I see this
>         exception in the
>         >> Whirr log:
>         >>
>         >> malformed image: null
>         >> java.lang.NullPointerException: architecture
>         >>        at
>         >>
>         com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
>         >>        at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
>         >>        at
>         >>
>         org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:601)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2938)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
>         >>        at
>         >>
>         com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
>         >>        at
>         org.jclouds.http.functions.ParseSax.doParse(ParseSax.java:125)
>         >>        at
>         org.jclouds.http.functions.ParseSax.parse(ParseSax.java:114)
>         >>        at
>         org.jclouds.http.functions.ParseSax.apply(ParseSax.java:78)
>         >>        at
>         org.jclouds.http.functions.ParseSax.apply(ParseSax.java:51)
>         >>        at
>         >>
>         com.google.common.util.concurrent.Futures$4.apply(Futures.java:439)
>         >>        at
>         >>
>         com.google.common.util.concurrent.Futures$4.apply(Futures.java:437)
>         >>        at
>         >>
>         com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:713)
>         >>        at
>         >>
>         java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         >>        at
>         >>
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         >>        at java.lang.Thread.run(Thread.java:662)
>         >>
>         >> I run the proxy as usual and try to connect to the Namenode
>         UI or
>         >> Jobtracker UI.
>         >> It connects but I see an empty page... it usually works fine.
>         >>
>         >> Am I hitting another problem?
>         >>
>         >> Thanks,
>         >> Paolo
>         >
>         >
>
>
>


Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
The jclouds issue is here:
http://code.google.com/p/jclouds/issues/detail?id=744

On Wed, Nov 2, 2011 at 5:05 PM, Andrei Savu <sa...@gmail.com> wrote:

> So maybe the cluster is still the underlying problem ...
>
> I will log that exception as an issue on the jclouds issue tracker.
>
> Thanks,
>
> -- Andrei Savu
>
> On Wed, Nov 2, 2011 at 4:59 PM, Paolo Castagna <
> castagna.lists@googlemail.com> wrote:
>
>> Hi Andrei,
>> I've just tried again, the only difference in the recipe:
>> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
>> hadoop-datanode+hadoop-tasktracker
>> I saw the same exception, but now I can connect to the web UIs as usual.
>>
>> Paolo
>>
>> On 2 November 2011 14:54, Andrei Savu <sa...@gmail.com> wrote:
>> > Maybe - I am not sure but I think the AMI metadata is incomplete.
>> > Are you able to actually use the cluster? Does it happen every time?
>> >
>> > Thanks,
>> > -- Andrei Savu
>> >
>> > On Wed, Nov 2, 2011 at 4:35 PM, Paolo Castagna
>> > <ca...@googlemail.com> wrote:
>> >>
>> >> Hi Andrei
>> >>
>> >> On 29 October 2011 13:37, Andrei Savu <sa...@gmail.com> wrote:
>> >> > What if you start a smaller size cluster but with more powerful
>> >> > machines?
>> >>
>> >> I've tried that... using this recipe with c1.xlarge:
>> >>
>> >> -------
>> >> whirr.cluster-name=hadoop
>> >> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,18
>> >> hadoop-datanode+hadoop-tasktracker
>> >> whirr.instance-templates-max-percent-failures=100
>> >> hadoop-namenode+hadoop-jobtracker,70
>> >> hadoop-datanode+hadoop-tasktracker
>> >> whirr.max-startup-retries=1
>> >> whirr.provider=aws-ec2
>> >> whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
>> >> whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
>> >> whirr.hardware-id=c1.xlarge
>> >> whirr.location-id=us-east-1
>> >> whirr.image-id=us-east-1/ami-1136fb78
>> >> whirr.private-key-file=${sys:user.home}/.ssh/whirr
>> >> whirr.public-key-file=${whirr.private-key-file}.pub
>> >> whirr.hadoop.version=0.20.204.0
>> >>
>> >> whirr.hadoop.tarball.url=
>> http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
>> >> -------
>> >>
>> >> The cluster started up this time, however I see this exception in the
>> >> Whirr log:
>> >>
>> >> malformed image: null
>> >> java.lang.NullPointerException: architecture
>> >>        at
>> >>
>> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
>> >>        at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
>> >>        at
>> >>
>> org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:601)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2938)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
>> >>        at
>> >>
>> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
>> >>        at
>> org.jclouds.http.functions.ParseSax.doParse(ParseSax.java:125)
>> >>        at org.jclouds.http.functions.ParseSax.parse(ParseSax.java:114)
>> >>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:78)
>> >>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:51)
>> >>        at
>> >> com.google.common.util.concurrent.Futures$4.apply(Futures.java:439)
>> >>        at
>> >> com.google.common.util.concurrent.Futures$4.apply(Futures.java:437)
>> >>        at
>> >>
>> com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:713)
>> >>        at
>> >>
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> >>        at
>> >>
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> >>        at java.lang.Thread.run(Thread.java:662)
>> >>
>> >> I run the proxy as usual and try to connect to the Namenode UI or
>> >> Jobtracker UI.
>> >> It connects but I see an empty page... it usually works fine.
>> >>
>> >> Am I hitting another problem?
>> >>
>> >> Thanks,
>> >> Paolo
>> >
>> >
>>
>
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
On Wed, Nov 2, 2011 at 5:05 PM, Andrei Savu <sa...@gmail.com> wrote:

> So maybe the cluster is still the underlying problem ...


So maybe the cluster *size* is still the underlying problem ....

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
So maybe the cluster is still the underlying problem ...

I will log that exception as an issue on the jclouds issue tracker.

Thanks,

-- Andrei Savu

On Wed, Nov 2, 2011 at 4:59 PM, Paolo Castagna <
castagna.lists@googlemail.com> wrote:

> Hi Andrei,
> I've just tried again, the only difference in the recipe:
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
> hadoop-datanode+hadoop-tasktracker
> I saw the same exception, but now I can connect to the web UIs as usual.
>
> Paolo
>
> On 2 November 2011 14:54, Andrei Savu <sa...@gmail.com> wrote:
> > Maybe - I am not sure but I think the AMI metadata is incomplete.
> > Are you able to actually use the cluster? Does it happen every time?
> >
> > Thanks,
> > -- Andrei Savu
> >
> > On Wed, Nov 2, 2011 at 4:35 PM, Paolo Castagna
> > <ca...@googlemail.com> wrote:
> >>
> >> Hi Andrei
> >>
> >> On 29 October 2011 13:37, Andrei Savu <sa...@gmail.com> wrote:
> >> > What if you start a smaller size cluster but with more powerful
> >> > machines?
> >>
> >> I've tried that... using this recipe with c1.xlarge:
> >>
> >> -------
> >> whirr.cluster-name=hadoop
> >> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,18
> >> hadoop-datanode+hadoop-tasktracker
> >> whirr.instance-templates-max-percent-failures=100
> >> hadoop-namenode+hadoop-jobtracker,70
> >> hadoop-datanode+hadoop-tasktracker
> >> whirr.max-startup-retries=1
> >> whirr.provider=aws-ec2
> >> whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
> >> whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
> >> whirr.hardware-id=c1.xlarge
> >> whirr.location-id=us-east-1
> >> whirr.image-id=us-east-1/ami-1136fb78
> >> whirr.private-key-file=${sys:user.home}/.ssh/whirr
> >> whirr.public-key-file=${whirr.private-key-file}.pub
> >> whirr.hadoop.version=0.20.204.0
> >>
> >> whirr.hadoop.tarball.url=
> http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
> >> -------
> >>
> >> The cluster started up this time, however I see this exception in the
> >> Whirr log:
> >>
> >> malformed image: null
> >> java.lang.NullPointerException: architecture
> >>        at
> >>
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
> >>        at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
> >>        at
> >>
> org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:601)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2938)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
> >>        at
> >>
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
> >>        at org.jclouds.http.functions.ParseSax.doParse(ParseSax.java:125)
> >>        at org.jclouds.http.functions.ParseSax.parse(ParseSax.java:114)
> >>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:78)
> >>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:51)
> >>        at
> >> com.google.common.util.concurrent.Futures$4.apply(Futures.java:439)
> >>        at
> >> com.google.common.util.concurrent.Futures$4.apply(Futures.java:437)
> >>        at
> >>
> com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:713)
> >>        at
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >>        at
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >>        at java.lang.Thread.run(Thread.java:662)
> >>
> >> I run the proxy as usual and try to connect to the Namenode UI or
> >> Jobtracker UI.
> >> It connects but I see an empty page... it usually works fine.
> >>
> >> Am I hitting another problem?
> >>
> >> Thanks,
> >> Paolo
> >
> >
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
Paolo -

I have created the following issue so that we can track progress on
improving this:
https://issues.apache.org/jira/browse/WHIRR-425

It would be great if you could add your feedback on this.

Thanks,

-- Andrei Savu

On Wed, Nov 2, 2011 at 5:52 PM, Paolo Castagna <
castagna.lists@googlemail.com> wrote:

> Hi Andrei,
> I connected to one of the instance which is not listed by the
> NameNode, but it is running.
> There are no Java processes running on that machine.
>
> This is what I see in /tmp/logs/stderr.log:
>
> dpkg-preconfigure: unable to re-open stdin:
> sun-dlj-v1-1 license has already been accepted
> sun-dlj-v1-1 license has already been accepted
> sun-dlj-v1-1 license has already been accepted
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/HtmlConverter
> to provide /usr/bin/HtmlConverter (HtmlConverter) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/appletviewer to
> provide /usr/bin/appletviewer (appletviewer) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/apt to provide
> /usr/bin/apt (apt) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/extcheck to
> provide /usr/bin/extcheck (extcheck) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/idlj to provide
> /usr/bin/idlj (idlj) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jar to provide
> /usr/bin/jar (jar) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jarsigner to
> provide /usr/bin/jarsigner (jarsigner) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javac to
> provide /usr/bin/javac (javac) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javadoc to
> provide /usr/bin/javadoc (javadoc) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javah to
> provide /usr/bin/javah (javah) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javap to
> provide /usr/bin/javap (javap) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jconsole to
> provide /usr/bin/jconsole (jconsole) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jdb to provide
> /usr/bin/jdb (jdb) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jhat to provide
> /usr/bin/jhat (jhat) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jinfo to
> provide /usr/bin/jinfo (jinfo) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jmap to provide
> /usr/bin/jmap (jmap) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jps to provide
> /usr/bin/jps (jps) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jrunscript to
> provide /usr/bin/jrunscript (jrunscript) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jsadebugd to
> provide /usr/bin/jsadebugd (jsadebugd) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstack to
> provide /usr/bin/jstack (jstack) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstat to
> provide /usr/bin/jstat (jstat) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstatd to
> provide /usr/bin/jstatd (jstatd) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
> provide /usr/bin/native2ascii (native2ascii) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
> /usr/bin/rmic (rmic) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to
> provide /usr/bin/schemagen (schemagen) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to
> provide /usr/bin/serialver (serialver) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to
> provide /usr/bin/wsgen (wsgen) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to
> provide /usr/bin/wsimport (wsimport) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
> /usr/bin/xjc (xjc) in auto mode.
> java version "1.6.0_26"
> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
> curl: (18) transfer closed with 22890056 bytes remaining to read
> curl: (22) The requested URL returned error: 404
>
> gzip: stdin: unexpected end of file
> tar: Unexpected EOF in archive
> tar: Unexpected EOF in archive
> tar: Error is not recoverable: exiting now
>
> Is it in /usr/local/hadoop/bin that I should find the shell script to
> start datanode and tasktracker daemons?
>
> That directory on this instance is empty.
>
> Paolo
>
>
> On 2 November 2011 15:38, Andrei Savu <sa...@gmail.com> wrote:
> > Try restarting the daemons. Are they running? Are there errors in the log
> > files in /tmp?
> >
> > On Wed, Nov 2, 2011 at 5:34 PM, Paolo Castagna
> > <ca...@googlemail.com> wrote:
> >>
> >> Hi Andrei,
> >> this cluster is still running, I am running a distcp job to copy my
> >> data from S3 to HDFS.
> >>
> >> The NameNode (via the Web UI) is still reporting:
> >>
> >> Live Nodes      :       8
> >> Dead Nodes      :       0
> >> Decommissioning Nodes   :       0
> >>
> >> I do not see errors in the logs.
> >>
> >> I can try to connect to one of the machines which did not join the
> >> cluster,
> >> but I am not sure what to do to make it join the cluster once I am
> >> connected
> >> to it.
> >>
> >> Paolo
> >>
> >> On 2 November 2011 15:29, Andrei Savu <sa...@gmail.com> wrote:
> >> > Are you seeing any errors in the logs? Can you check one of the
> machines
> >> > that failed to join the cluster?
> >> > Are you sure they've tried to join the rest of the cluster? Maybe you
> >> > have
> >> > to wait a bit more.
> >> >
> >> > -- Andrei Savu
> >> >
> >> > On Wed, Nov 2, 2011 at 5:25 PM, Paolo Castagna
> >> > <ca...@googlemail.com> wrote:
> >> >>
> >> >> Hi
> >> >>
> >> >> On 2 November 2011 14:59, Paolo Castagna
> >> >> <ca...@googlemail.com>
> >> >> wrote:
> >> >> > Hi Andrei,
> >> >> > I've just tried again, the only difference in the recipe:
> >> >> > whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
> >> >> > hadoop-datanode+hadoop-tasktracker
> >> >> > I saw the same exception, but now I can connect to the web UIs as
> >> >> > usual.
> >> >>
> >> >> Well, I spoke too soon.
> >> >>
> >> >> The very same cluster had 17 instances, I can see all of them running
> >> >> via the Amazon console (i.e. I am paying for them), however the
> >> >> NameNode and the JobTracker see only 8 nodes. :-(
> >> >>
> >> >> Paolo
> >> >
> >> >
> >
> >
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
On Wed, Nov 2, 2011 at 6:04 PM, Paolo Castagna <
castagna.lists@googlemail.com> wrote:

> I am not sure which retries you are referring to.
> But, I have this in my recipe: whirr.max-startup-retries=1
>

It should retry a few times if the tarball download fails on the remote
machine.

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paolo Castagna <ca...@googlemail.com>.
On 2 November 2011 15:56, Andrei Savu <sa...@gmail.com> wrote:
>> curl: (18) transfer closed with 22890056 bytes remaining to read
> I think this means that it failed to download the files from apache.org -
> maybe
> because we are trying 18 downloads at the same time.
>
> We should consider a different strategy for distributing artefacts
> when starting
> larger clusters. Have you seen this problem before with m1.large?

No, I've never seen this issue when using m1.large.
Every time, the number of instances running was the same as the live
nodes reported by the NameNode.

> I don't really understand why we don't see a few retries.

I am not sure which retries you are referring to.
But, I have this in my recipe: whirr.max-startup-retries=1

Paolo

> -- Andrei Savu
>
> On Wed, Nov 2, 2011 at 5:52 PM, Paolo Castagna
> <ca...@googlemail.com> wrote:
>>
>> Hi Andrei,
>> I connected to one of the instance which is not listed by the
>> NameNode, but it is running.
>> There are no Java processes running on that machine.
>>
>> This is what I see in /tmp/logs/stderr.log:
>>
>> dpkg-preconfigure: unable to re-open stdin:
>> sun-dlj-v1-1 license has already been accepted
>> sun-dlj-v1-1 license has already been accepted
>> sun-dlj-v1-1 license has already been accepted
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/HtmlConverter
>> to provide /usr/bin/HtmlConverter (HtmlConverter) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/appletviewer to
>> provide /usr/bin/appletviewer (appletviewer) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/apt to provide
>> /usr/bin/apt (apt) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/extcheck to
>> provide /usr/bin/extcheck (extcheck) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/idlj to provide
>> /usr/bin/idlj (idlj) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jar to provide
>> /usr/bin/jar (jar) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jarsigner to
>> provide /usr/bin/jarsigner (jarsigner) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javac to
>> provide /usr/bin/javac (javac) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javadoc to
>> provide /usr/bin/javadoc (javadoc) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javah to
>> provide /usr/bin/javah (javah) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javap to
>> provide /usr/bin/javap (javap) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jconsole to
>> provide /usr/bin/jconsole (jconsole) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jdb to provide
>> /usr/bin/jdb (jdb) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jhat to provide
>> /usr/bin/jhat (jhat) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jinfo to
>> provide /usr/bin/jinfo (jinfo) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jmap to provide
>> /usr/bin/jmap (jmap) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jps to provide
>> /usr/bin/jps (jps) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jrunscript to
>> provide /usr/bin/jrunscript (jrunscript) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jsadebugd to
>> provide /usr/bin/jsadebugd (jsadebugd) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstack to
>> provide /usr/bin/jstack (jstack) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstat to
>> provide /usr/bin/jstat (jstat) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstatd to
>> provide /usr/bin/jstatd (jstatd) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
>> provide /usr/bin/native2ascii (native2ascii) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
>> /usr/bin/rmic (rmic) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to
>> provide /usr/bin/schemagen (schemagen) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to
>> provide /usr/bin/serialver (serialver) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to
>> provide /usr/bin/wsgen (wsgen) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to
>> provide /usr/bin/wsimport (wsimport) in auto mode.
>> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
>> /usr/bin/xjc (xjc) in auto mode.
>> java version "1.6.0_26"
>> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
>> Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
>> curl: (18) transfer closed with 22890056 bytes remaining to read
>> curl: (22) The requested URL returned error: 404
>>
>> gzip: stdin: unexpected end of file
>> tar: Unexpected EOF in archive
>> tar: Unexpected EOF in archive
>> tar: Error is not recoverable: exiting now
>>
>> Is it in /usr/local/hadoop/bin that I should find the shell script to
>> start datanode and tasktracker daemons?
>>
>> That directory on this instance is empty.
>>
>> Paolo
>>
>>
>> On 2 November 2011 15:38, Andrei Savu <sa...@gmail.com> wrote:
>> > Try restarting the daemons. Are they running? Are there errors in the
>> > log
>> > files in /tmp?
>> >
>> > On Wed, Nov 2, 2011 at 5:34 PM, Paolo Castagna
>> > <ca...@googlemail.com> wrote:
>> >>
>> >> Hi Andrei,
>> >> this cluster is still running, I am running a distcp job to copy my
>> >> data from S3 to HDFS.
>> >>
> >> >> The NameNode (via the Web UI) is still reporting:
>> >>
>> >> Live Nodes      :       8
>> >> Dead Nodes      :       0
>> >> Decommissioning Nodes   :       0
>> >>
>> >> I do not see errors in the logs.
>> >>
>> >> I can try to connect to one of the machines which did not join the
>> >> cluster,
>> >> but I am not sure what to do to make it join the cluster once I am
>> >> connected
>> >> to it.
>> >>
>> >> Paolo
>> >>
>> >> On 2 November 2011 15:29, Andrei Savu <sa...@gmail.com> wrote:
>> >> > Are you seeing any errors in the logs? Can you check one of the
>> >> > machines
>> >> > that failed to join the cluster?
>> >> > Are you sure they've tried to join the rest of the cluster? Maybe you
>> >> > have
>> >> > to wait a bit more.
>> >> >
>> >> > -- Andrei Savu
>> >> >
>> >> > On Wed, Nov 2, 2011 at 5:25 PM, Paolo Castagna
>> >> > <ca...@googlemail.com> wrote:
>> >> >>
>> >> >> Hi
>> >> >>
>> >> >> On 2 November 2011 14:59, Paolo Castagna
>> >> >> <ca...@googlemail.com>
>> >> >> wrote:
>> >> >> > Hi Andrei,
>> >> >> > I've just tried again, the only difference in the recipe:
>> >> >> > whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
>> >> >> > hadoop-datanode+hadoop-tasktracker
>> >> >> > I saw the same exception, but now I can connect to the web UIs as
>> >> >> > usual.
>> >> >>
> >> >> >> Well, I spoke too soon.
>> >> >>
>> >> >> The very same cluster had 17 instances, I can see all of them
>> >> >> running
>> >> >> via the Amazon console (i.e. I am paying for them), however the
>> >> >> NameNode and the JobTracker see only 8 nodes. :-(
>> >> >>
>> >> >> Paolo
>> >> >
>> >> >
>> >
>> >
>
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
> curl: (18) transfer closed with 22890056 bytes remaining to read

I think this means that it failed to download the files from apache.org -
maybe because we are trying 18 downloads at the same time.

We should consider a different strategy for distributing artefacts when
starting larger clusters. Have you seen this problem before with m1.large?
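
One sketch of such a strategy, assuming you upload the tarball yourself to an
S3 bucket in the same region as the cluster (the bucket name below is a
placeholder), is to point the recipe at that mirror instead of apache.org:

----
# Hypothetical bucket; upload hadoop-0.20.204.0.tar.gz there beforehand.
whirr.hadoop.tarball.url=https://s3.amazonaws.com/my-hadoop-mirror/hadoop-0.20.204.0.tar.gz
----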

I don't really understand why we don't see a few retries.

-- Andrei Savu

On Wed, Nov 2, 2011 at 5:52 PM, Paolo Castagna <
castagna.lists@googlemail.com> wrote:

> Hi Andrei,
> I connected to one of the instance which is not listed by the
> NameNode, but it is running.
> There are no Java processes running on that machine.
>
> This is what I see in /tmp/logs/stderr.log:
>
> dpkg-preconfigure: unable to re-open stdin:
> sun-dlj-v1-1 license has already been accepted
> sun-dlj-v1-1 license has already been accepted
> sun-dlj-v1-1 license has already been accepted
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/HtmlConverter
> to provide /usr/bin/HtmlConverter (HtmlConverter) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/appletviewer to
> provide /usr/bin/appletviewer (appletviewer) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/apt to provide
> /usr/bin/apt (apt) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/extcheck to
> provide /usr/bin/extcheck (extcheck) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/idlj to provide
> /usr/bin/idlj (idlj) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jar to provide
> /usr/bin/jar (jar) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jarsigner to
> provide /usr/bin/jarsigner (jarsigner) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javac to
> provide /usr/bin/javac (javac) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javadoc to
> provide /usr/bin/javadoc (javadoc) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javah to
> provide /usr/bin/javah (javah) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javap to
> provide /usr/bin/javap (javap) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jconsole to
> provide /usr/bin/jconsole (jconsole) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jdb to provide
> /usr/bin/jdb (jdb) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jhat to provide
> /usr/bin/jhat (jhat) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jinfo to
> provide /usr/bin/jinfo (jinfo) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jmap to provide
> /usr/bin/jmap (jmap) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jps to provide
> /usr/bin/jps (jps) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jrunscript to
> provide /usr/bin/jrunscript (jrunscript) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jsadebugd to
> provide /usr/bin/jsadebugd (jsadebugd) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstack to
> provide /usr/bin/jstack (jstack) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstat to
> provide /usr/bin/jstat (jstat) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstatd to
> provide /usr/bin/jstatd (jstatd) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
> provide /usr/bin/native2ascii (native2ascii) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
> /usr/bin/rmic (rmic) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to
> provide /usr/bin/schemagen (schemagen) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to
> provide /usr/bin/serialver (serialver) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to
> provide /usr/bin/wsgen (wsgen) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to
> provide /usr/bin/wsimport (wsimport) in auto mode.
> update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
> /usr/bin/xjc (xjc) in auto mode.
> java version "1.6.0_26"
> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
> Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
> curl: (18) transfer closed with 22890056 bytes remaining to read
> curl: (22) The requested URL returned error: 404
>
> gzip: stdin: unexpected end of file
> tar: Unexpected EOF in archive
> tar: Unexpected EOF in archive
> tar: Error is not recoverable: exiting now
>
> Is /usr/local/hadoop/bin where I should find the shell scripts to
> start the datanode and tasktracker daemons?
>
> That directory on this instance is empty.
>
> Paolo
>
>
> On 2 November 2011 15:38, Andrei Savu <sa...@gmail.com> wrote:
> > Try restarting the daemons. Are they running? Are there errors in the log
> > files in /tmp?
> >
> > On Wed, Nov 2, 2011 at 5:34 PM, Paolo Castagna
> > <ca...@googlemail.com> wrote:
> >>
> >> Hi Andrei,
> >> this cluster is still running; I am running a distcp job to copy my
> >> data from S3 to HDFS.
> >>
> >> The NameNode (via the Web UI) is still reporting:
> >>
> >> Live Nodes      :       8
> >> Dead Nodes      :       0
> >> Decommissioning Nodes   :       0
> >>
> >> I do not see errors in the logs.
> >>
> >> I can try to connect to one of the machines which did not join the
> >> cluster,
> >> but I am not sure what to do to make it join the cluster once I am
> >> connected
> >> to it.
> >>
> >> Paolo
> >>
> >> On 2 November 2011 15:29, Andrei Savu <sa...@gmail.com> wrote:
> >> > Are you seeing any errors in the logs? Can you check one of the
> machines
> >> > that failed to join the cluster?
> >> > Are you sure they've tried to join the rest of the cluster? Maybe you
> >> > have
> >> > to wait a bit more.
> >> >
> >> > -- Andrei Savu
> >> >
> >> > On Wed, Nov 2, 2011 at 5:25 PM, Paolo Castagna
> >> > <ca...@googlemail.com> wrote:
> >> >>
> >> >> Hi
> >> >>
> >> >> On 2 November 2011 14:59, Paolo Castagna
> >> >> <ca...@googlemail.com>
> >> >> wrote:
> >> >> > Hi Andrei,
> >> >> > I've just tried again, the only difference in the recipe:
> >> >> > whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
> >> >> > hadoop-datanode+hadoop-tasktracker
> >> >> > I saw the same exception, but now I can connect to the web UIs as
> >> >> > usual.
> >> >>
> >> >> Well, I spoke too soon.
> >> >>
> >> >> The very same cluster had 17 instances, I can see all of them running
> >> >> via the Amazon console (i.e. I am paying for them), however the
> >> >> NameNode and the JobTracker see only 8 nodes. :-(
> >> >>
> >> >> Paolo
> >> >
> >> >
> >
> >
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paolo Castagna <ca...@googlemail.com>.
Hi Andrei,
I connected to one of the instances which is not listed by the
NameNode, but it is running.
There are no Java processes running on that machine.

This is what I see in /tmp/logs/stderr.log:

dpkg-preconfigure: unable to re-open stdin:
sun-dlj-v1-1 license has already been accepted
sun-dlj-v1-1 license has already been accepted
sun-dlj-v1-1 license has already been accepted
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/HtmlConverter
to provide /usr/bin/HtmlConverter (HtmlConverter) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/appletviewer to
provide /usr/bin/appletviewer (appletviewer) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/apt to provide
/usr/bin/apt (apt) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/extcheck to
provide /usr/bin/extcheck (extcheck) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/idlj to provide
/usr/bin/idlj (idlj) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jar to provide
/usr/bin/jar (jar) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jarsigner to
provide /usr/bin/jarsigner (jarsigner) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javac to
provide /usr/bin/javac (javac) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javadoc to
provide /usr/bin/javadoc (javadoc) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javah to
provide /usr/bin/javah (javah) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/javap to
provide /usr/bin/javap (javap) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jconsole to
provide /usr/bin/jconsole (jconsole) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jdb to provide
/usr/bin/jdb (jdb) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jhat to provide
/usr/bin/jhat (jhat) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jinfo to
provide /usr/bin/jinfo (jinfo) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jmap to provide
/usr/bin/jmap (jmap) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jps to provide
/usr/bin/jps (jps) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jrunscript to
provide /usr/bin/jrunscript (jrunscript) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jsadebugd to
provide /usr/bin/jsadebugd (jsadebugd) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstack to
provide /usr/bin/jstack (jstack) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstat to
provide /usr/bin/jstat (jstat) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/jstatd to
provide /usr/bin/jstatd (jstatd) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/native2ascii to
provide /usr/bin/native2ascii (native2ascii) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/rmic to provide
/usr/bin/rmic (rmic) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/schemagen to
provide /usr/bin/schemagen (schemagen) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/serialver to
provide /usr/bin/serialver (serialver) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsgen to
provide /usr/bin/wsgen (wsgen) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/wsimport to
provide /usr/bin/wsimport (wsimport) in auto mode.
update-alternatives: using /usr/lib/jvm/java-6-sun/bin/xjc to provide
/usr/bin/xjc (xjc) in auto mode.
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)
curl: (18) transfer closed with 22890056 bytes remaining to read
curl: (22) The requested URL returned error: 404

gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

Is /usr/local/hadoop/bin where I should find the shell scripts to
start the datanode and tasktracker daemons?

That directory on this instance is empty.
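
In case it is useful to anyone else, I suppose the manual fix on a node like
this would be roughly the following (untested; the URL comes from
whirr.hadoop.tarball.url in my recipe, and I am only guessing that the
bootstrap script unpacks straight into /usr/local/hadoop):

----
# re-fetch the tarball that the bootstrap script failed to download
curl -fL -o /tmp/hadoop-0.20.204.0.tar.gz \
  http://archive.apache.org/dist/hadoop/core/hadoop-0.20.204.0/hadoop-0.20.204.0.tar.gz

# unpack it into the (currently empty) /usr/local/hadoop
sudo tar -xzf /tmp/hadoop-0.20.204.0.tar.gz -C /usr/local/hadoop --strip-components=1
----

and then start the datanode and tasktracker by hand, as Andrei suggested.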

Paolo


On 2 November 2011 15:38, Andrei Savu <sa...@gmail.com> wrote:
> Try restarting the daemons. Are they running? Are there errors in the log
> files in /tmp?
>
> On Wed, Nov 2, 2011 at 5:34 PM, Paolo Castagna
> <ca...@googlemail.com> wrote:
>>
>> Hi Andrei,
>> this cluster is still running; I am running a distcp job to copy my
>> data from S3 to HDFS.
>>
>> The NameNode (via the Web UI) is still reporting:
>>
>> Live Nodes      :       8
>> Dead Nodes      :       0
>> Decommissioning Nodes   :       0
>>
>> I do not see errors in the logs.
>>
>> I can try to connect to one of the machines which did not join the
>> cluster,
>> but I am not sure what to do to make it join the cluster once I am
>> connected
>> to it.
>>
>> Paolo
>>
>> On 2 November 2011 15:29, Andrei Savu <sa...@gmail.com> wrote:
>> > Are you seeing any errors in the logs? Can you check one of the machines
>> > that failed to join the cluster?
>> > Are you sure they've tried to join the rest of the cluster? Maybe you
>> > have
>> > to wait a bit more.
>> >
>> > -- Andrei Savu
>> >
>> > On Wed, Nov 2, 2011 at 5:25 PM, Paolo Castagna
>> > <ca...@googlemail.com> wrote:
>> >>
>> >> Hi
>> >>
>> >> On 2 November 2011 14:59, Paolo Castagna
>> >> <ca...@googlemail.com>
>> >> wrote:
>> >> > Hi Andrei,
>> >> > I've just tried again, the only difference in the recipe:
>> >> > whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
>> >> > hadoop-datanode+hadoop-tasktracker
>> >> > I saw the same exception, but now I can connect to the web UIs as
>> >> > usual.
>> >>
>> >> Well, I spoke too soon.
>> >>
>> >> The very same cluster had 17 instances, I can see all of them running
>> >> via the Amazon console (i.e. I am paying for them), however the
>> >> NameNode and the JobTracker see only 8 nodes. :-(
>> >>
>> >> Paolo
>> >
>> >
>
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
Try restarting the daemons. Are they running? Are there errors in the log
files in /tmp?
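
Roughly something like this, run on one of the slave nodes, should tell you
(jps, the hadoop-daemon.sh script, the 'hadoop' user and the
/usr/local/hadoop prefix are my assumptions about the standard Whirr layout
-- adjust if your image differs):

----
# any Hadoop daemons running at all?
jps

# the bootstrap / startup logs Whirr leaves behind
tail -n 50 /tmp/logs/stderr.log
ls /tmp/*.log /tmp/logs/ 2>/dev/null

# try to (re)start the worker daemons by hand
# (may need HADOOP_CONF_DIR / JAVA_HOME set in the environment)
sudo -u hadoop /usr/local/hadoop/bin/hadoop-daemon.sh start datanode
sudo -u hadoop /usr/local/hadoop/bin/hadoop-daemon.sh start tasktracker
----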

On Wed, Nov 2, 2011 at 5:34 PM, Paolo Castagna <
castagna.lists@googlemail.com> wrote:

> Hi Andrei,
> this cluster is still running; I am running a distcp job to copy my
> data from S3 to HDFS.
>
> The NameNode (via the Web UI) is still reporting:
>
> Live Nodes      :       8
> Dead Nodes      :       0
> Decommissioning Nodes   :       0
>
> I do not see errors in the logs.
>
> I can try to connect to one of the machines which did not join the cluster,
> but I am not sure what to do to make it join the cluster once I am
> connected
> to it.
>
> Paolo
>
> On 2 November 2011 15:29, Andrei Savu <sa...@gmail.com> wrote:
> > Are you seeing any errors in the logs? Can you check one of the machines
> > that failed to join the cluster?
> > Are you sure they've tried to join the rest of the cluster? Maybe you
> have
> > to wait a bit more.
> >
> > -- Andrei Savu
> >
> > On Wed, Nov 2, 2011 at 5:25 PM, Paolo Castagna
> > <ca...@googlemail.com> wrote:
> >>
> >> Hi
> >>
> >> On 2 November 2011 14:59, Paolo Castagna <castagna.lists@googlemail.com
> >
> >> wrote:
> >> > Hi Andrei,
> >> > I've just tried again, the only difference in the recipe:
> >> > whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
> >> > hadoop-datanode+hadoop-tasktracker
> >> > I saw the same exception, but now I can connect to the web UIs as
> usual.
> >>
> >> Well, I spoke too soon.
> >>
> >> The very same cluster had 17 instances, I can see all of them running
> >> via the Amazon console (i.e. I am paying for them), however the
> >> NameNode and the JobTracker see only 8 nodes. :-(
> >>
> >> Paolo
> >
> >
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paolo Castagna <ca...@googlemail.com>.
Hi Andrei,
this cluster is still running; I am running a distcp job to copy my
data from S3 to HDFS.

The NameNode (via the Web UI) is still reporting:

Live Nodes 	:	8
Dead Nodes 	:	0
Decommissioning Nodes 	:	0

I do not see errors in the logs.

I can try to connect to one of the machines which did not join the cluster,
but I am not sure what to do to make it join the cluster once I am connected
to it.

Paolo

On 2 November 2011 15:29, Andrei Savu <sa...@gmail.com> wrote:
> Are you seeing any errors in the logs? Can you check one of the machines
> that failed to join the cluster?
> Are you sure they've tried to join the rest of the cluster? Maybe you have
> to wait a bit more.
>
> -- Andrei Savu
>
> On Wed, Nov 2, 2011 at 5:25 PM, Paolo Castagna
> <ca...@googlemail.com> wrote:
>>
>> Hi
>>
>> On 2 November 2011 14:59, Paolo Castagna <ca...@googlemail.com>
>> wrote:
>> > Hi Andrei,
>> > I've just tried again, the only difference in the recipe:
>> > whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
>> > hadoop-datanode+hadoop-tasktracker
>> > I saw the same exception, but now I can connect to the web UIs as usual.
>>
>> Well, I spoke too soon.
>>
>> The very same cluster had 17 instances, I can see all of them running
>> via the Amazon console (i.e. I am paying for them), however the
>> NameNode and the JobTracker see only 8 nodes. :-(
>>
>> Paolo
>
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
Are you seeing any errors in the logs? Can you check one of the machines
that failed to join the cluster?

Are you sure they've tried to join the rest of the cluster? Maybe you have
to wait a bit more.

-- Andrei Savu

On Wed, Nov 2, 2011 at 5:25 PM, Paolo Castagna <
castagna.lists@googlemail.com> wrote:

> Hi
>
> On 2 November 2011 14:59, Paolo Castagna <ca...@googlemail.com>
> wrote:
> > Hi Andrei,
> > I've just tried again, the only difference in the recipe:
> > whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
> > hadoop-datanode+hadoop-tasktracker
> > I saw the same exception, but now I can connect to the web UIs as usual.
>
> Well, I spoke too soon.
>
> The very same cluster had 17 instances, I can see all of them running
> via the Amazon console (i.e. I am paying for them), however the
> NameNode and the JobTracker see only 8 nodes. :-(
>
> Paolo
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paolo Castagna <ca...@googlemail.com>.
Hi

On 2 November 2011 14:59, Paolo Castagna <ca...@googlemail.com> wrote:
> Hi Andrei,
> I've just tried again, the only difference in the recipe:
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
> hadoop-datanode+hadoop-tasktracker
> I saw the same exception, but now I can connect to the web UIs as usual.

Well, I spoke too soon.

The very same cluster had 17 instances, I can see all of them running
via the Amazon console (i.e. I am paying for them), however the
NameNode and the JobTracker see only 8 nodes. :-(

Paolo

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paolo Castagna <ca...@googlemail.com>.
Hi Andrei,
I've just tried again, the only difference in the recipe:
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,16
hadoop-datanode+hadoop-tasktracker
I saw the same exception, but now I can connect to the web UIs as usual.

Paolo

On 2 November 2011 14:54, Andrei Savu <sa...@gmail.com> wrote:
> Maybe - I am not sure but I think the AMI metadata is incomplete.
> Are you able to actually use the cluster? Does it happen every time?
>
> Thanks,
> -- Andrei Savu
>
> On Wed, Nov 2, 2011 at 4:35 PM, Paolo Castagna
> <ca...@googlemail.com> wrote:
>>
>> Hi Andrei
>>
>> On 29 October 2011 13:37, Andrei Savu <sa...@gmail.com> wrote:
> > What if you start a smaller cluster but with more powerful machines?
>>
>> I've tried that... using this recipe with c1.xlarge:
>>
>> -------
>> whirr.cluster-name=hadoop
>> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,18
>> hadoop-datanode+hadoop-tasktracker
>> whirr.instance-templates-max-percent-failures=100
>> hadoop-namenode+hadoop-jobtracker,70
>> hadoop-datanode+hadoop-tasktracker
>> whirr.max-startup-retries=1
>> whirr.provider=aws-ec2
>> whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
>> whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
>> whirr.hardware-id=c1.xlarge
>> whirr.location-id=us-east-1
>> whirr.image-id=us-east-1/ami-1136fb78
>> whirr.private-key-file=${sys:user.home}/.ssh/whirr
>> whirr.public-key-file=${whirr.private-key-file}.pub
>> whirr.hadoop.version=0.20.204.0
>>
>> whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
>> -------
>>
> >> The cluster started up this time; however, I see this exception in the
> >> Whirr log:
>>
>> malformed image: null
>> java.lang.NullPointerException: architecture
>>        at
>> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
>>        at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
>>        at
>> org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
>>        at
>> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:601)
>>        at
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
>>        at
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2938)
>>        at
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
>>        at
>> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
>>        at
>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
>>        at
>> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
>>        at
>> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
>>        at
>> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
>>        at
>> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
>>        at org.jclouds.http.functions.ParseSax.doParse(ParseSax.java:125)
>>        at org.jclouds.http.functions.ParseSax.parse(ParseSax.java:114)
>>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:78)
>>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:51)
>>        at
>> com.google.common.util.concurrent.Futures$4.apply(Futures.java:439)
>>        at
>> com.google.common.util.concurrent.Futures$4.apply(Futures.java:437)
>>        at
>> com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:713)
>>        at
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>        at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>        at java.lang.Thread.run(Thread.java:662)
>>
>> I run the proxy as usual and try to connect to the Namenode UI or
>> Jobtracker UI.
>> It connects but I see an empty page... it usually works fine.
>>
>> Am I hitting another problem?
>>
>> Thanks,
>> Paolo
>
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
Maybe - I am not sure but I think the AMI metadata is incomplete.

Are you able to actually use the cluster? Does it happen every time?

Thanks,

-- Andrei Savu

On Wed, Nov 2, 2011 at 4:35 PM, Paolo Castagna <
castagna.lists@googlemail.com> wrote:

> Hi Andrei
>
> On 29 October 2011 13:37, Andrei Savu <sa...@gmail.com> wrote:
> > What if you start a smaller cluster but with more powerful machines?
>
> I've tried that... using this recipe with c1.xlarge:
>
> -------
> whirr.cluster-name=hadoop
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,18
> hadoop-datanode+hadoop-tasktracker
> whirr.instance-templates-max-percent-failures=100
> hadoop-namenode+hadoop-jobtracker,70
> hadoop-datanode+hadoop-tasktracker
> whirr.max-startup-retries=1
> whirr.provider=aws-ec2
> whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
> whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
> whirr.hardware-id=c1.xlarge
> whirr.location-id=us-east-1
> whirr.image-id=us-east-1/ami-1136fb78
> whirr.private-key-file=${sys:user.home}/.ssh/whirr
> whirr.public-key-file=${whirr.private-key-file}.pub
> whirr.hadoop.version=0.20.204.0
> whirr.hadoop.tarball.url=
> http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
> -------
>
> The cluster started up this time; however, I see this exception in the
> Whirr log:
>
> malformed image: null
> java.lang.NullPointerException: architecture
>        at
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
>        at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
>        at
> org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
>        at
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:601)
>        at
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
>        at
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2938)
>        at
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
>        at
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
>        at
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
>        at
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
>        at
> com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
>        at
> com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
>        at
> com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
>        at org.jclouds.http.functions.ParseSax.doParse(ParseSax.java:125)
>        at org.jclouds.http.functions.ParseSax.parse(ParseSax.java:114)
>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:78)
>        at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:51)
>        at
> com.google.common.util.concurrent.Futures$4.apply(Futures.java:439)
>        at
> com.google.common.util.concurrent.Futures$4.apply(Futures.java:437)
>        at
> com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:713)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
>
> I run the proxy as usual and try to connect to the Namenode UI or
> Jobtracker UI.
> It connects but I see an empty page... it usually works fine.
>
> Am I hitting another problem?
>
> Thanks,
> Paolo
>

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Paolo Castagna <ca...@googlemail.com>.
Hi Andrei

On 29 October 2011 13:37, Andrei Savu <sa...@gmail.com> wrote:
> What if you start a smaller cluster but with more powerful machines?

I've tried that... using this recipe with c1.xlarge:

-------
whirr.cluster-name=hadoop
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,18
hadoop-datanode+hadoop-tasktracker
whirr.instance-templates-max-percent-failures=100
hadoop-namenode+hadoop-jobtracker,70
hadoop-datanode+hadoop-tasktracker
whirr.max-startup-retries=1
whirr.provider=aws-ec2
whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
whirr.hardware-id=c1.xlarge
whirr.location-id=us-east-1
whirr.image-id=us-east-1/ami-1136fb78
whirr.private-key-file=${sys:user.home}/.ssh/whirr
whirr.public-key-file=${whirr.private-key-file}.pub
whirr.hadoop.version=0.20.204.0
whirr.hadoop.tarball.url=http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
-------

The cluster started up this time; however, I see this exception in the Whirr log:

malformed image: null
java.lang.NullPointerException: architecture
	at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
	at org.jclouds.ec2.domain.Image.<init>(Image.java:81)
	at org.jclouds.ec2.xml.DescribeImagesResponseHandler.endElement(DescribeImagesResponseHandler.java:169)
	at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:601)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2938)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
	at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
	at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
	at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
	at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
	at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
	at org.jclouds.http.functions.ParseSax.doParse(ParseSax.java:125)
	at org.jclouds.http.functions.ParseSax.parse(ParseSax.java:114)
	at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:78)
	at org.jclouds.http.functions.ParseSax.apply(ParseSax.java:51)
	at com.google.common.util.concurrent.Futures$4.apply(Futures.java:439)
	at com.google.common.util.concurrent.Futures$4.apply(Futures.java:437)
	at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:713)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)

I run the proxy as usual and try to connect to the Namenode UI or
Jobtracker UI.
It connects but I see an empty page... it usually works fine.

Am I hitting another problem?

Thanks,
Paolo

Re: Amazon EC2 and HTTP 503 errors: RequestLimitExceeded

Posted by Andrei Savu <sa...@gmail.com>.
Paolo -

I think you are hitting an upper bound on the size of the clusters that can
be started with Whirr right now.

One possible workaround you can try is to enable lazy image fetching in
jclouds:
http://www.jclouds.org/documentation/userguide/using-ec2
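
Assuming Whirr forwards jclouds.* properties from the recipe to the compute
context (I believe it does, but please correct me), it should be enough to
add something along these lines to your properties file -- I am quoting the
property names from memory, so double-check them against the guide above:

----
# skip the eager DescribeImages scan so the image is only fetched lazily by id
jclouds.ec2.ami-query=
jclouds.ec2.cc-ami-query=
----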

I have created a new JIRA issue so that we can add this automatically when
the image-id is known:
https://issues.apache.org/jira/browse/WHIRR-416

What if you start a smaller cluster but with more powerful machines?
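
For example, keeping everything else in your recipe the same and changing
only these two lines (the instance count here is just an illustration):

----
whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,10 hadoop-datanode+hadoop-tasktracker
whirr.hardware-id=c1.xlarge
----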

Cheers,

-- Andrei Savu

On Fri, Oct 28, 2011 at 6:32 PM, Paolo Castagna <
castagna.lists@googlemail.com> wrote:

> Hi,
> it's me again, I am trying to use Apache Whirr 0.6.0-incubating
> to start a 20 nodes Hadoop cluster on Amazon EC2.
>
> Here is my recipe:
>
> ----
> whirr.cluster-name=hadoop
> whirr.instance-templates=1 hadoop-namenode+hadoop-jobtracker,20
> hadoop-datanode+hadoop-tasktracker
> whirr.instance-templates-max-percent-failures=100
> hadoop-namenode+hadoop-jobtracker,50
> hadoop-datanode+hadoop-tasktracker
> whirr.max-startup-retries=1
> whirr.provider=aws-ec2
> whirr.identity=${env:AWS_ACCESS_KEY_ID_LIVE}
> whirr.credential=${env:AWS_SECRET_ACCESS_KEY_LIVE}
> whirr.hardware-id=m1.large
> whirr.image-id=eu-west-1/ami-ee0e3c9a
> whirr.location-id=eu-west-1
> whirr.private-key-file=${sys:user.home}/.ssh/whirr
> whirr.public-key-file=${whirr.private-key-file}.pub
> whirr.hadoop.version=0.20.204.0
> whirr.hadoop.tarball.url=
> http://archive.apache.org/dist/hadoop/core/hadoop-${whirr.hadoop.version}/hadoop-${whirr.hadoop.version}.tar.gz
> ----
>
> I see a lot of these errors:
>
> org.jclouds.aws.AWSResponseException: request POST
> https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
> error: AWSError{requestId='b361f3f6-73f1-4348-964a-31265ec70eeb',
> requestToken='null', code='RequestLimitExceeded', message='Request
> limit exceeded.', context='{Response=, Errors=}'}
>        at
> org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:74)
>        at
> org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:71)
>        at
> org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:200)
>        at
> org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:165)
>        at
> org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:134)
>        at
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>        at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:662)
>
>
> I have more than 20 slots available on this Amazon account.
>
> Is it Whirr sending requests too fast to Amazon?
>
> How can I solve this problem?
>
> Regards,
> Paolo
>