Posted to user@whirr.apache.org by Paolo Castagna <ca...@googlemail.com> on 2011/01/28 21:59:28 UTC

Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Hi,
I am a Whirr newbie and have only started using it recently.
Also, I am not sure if the problem I am experiencing is related to
Whirr, jclouds or Amazon.

This is my Whirr properties file:

------
whirr.service-name=hadoop
whirr.cluster-name=myhadoopcluster
whirr.location-id=eu-west-1
whirr.instance-templates=1 jt+nn, 10 dn+tt
whirr.provider=ec2
whirr.identity=********************
whirr.credential=****************************************
whirr.private-key-file=${sys:user.home}/.ssh/castagna
whirr.public-key-file=${sys:user.home}/.ssh/castagna.pub
whirr.hadoop-install-runurl=cloudera/cdh/install
whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
------

This is what I do (i.e. I am trying to start up a Hadoop cluster
with 10 datanodes/tasktrackers):

$ svn co https://svn.apache.org/repos/asf/incubator/whirr/trunk/ whirr
$ cd whirr
$ mvn package -Ppackage
$ ./bin/whirr version
Apache Whirr 0.4.0-incubating-SNAPSHOT
$ ./bin/whirr launch-cluster --config
/home/castagna/Desktop/hadoop-whirr.properties

These are the errors I see on the console and in the whirr.log file:

------
Cannot retry after server error, command has exceeded retry limit 5:
[request=POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1]
<< problem applying options to node(eu-west-1/i-8709ecf1):
org.jclouds.aws.AWSResponseException: request POST
https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
error: AWSError{requestId='6f0236b0-12c0-49b7-b21b-aab969b9be26',
requestToken='null', code='RequestLimitExceeded', message='Request
limit exceeded.', context='{Response=, Errors=}'}
	at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:80)
	at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:72)
	at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
	at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
	at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:619)
------

The same happens when connecting to Amazon via Elasticfox while Whirr is running.

Is anyone else experiencing a similar problem?
Is it my Amazon account, or are Whirr or jclouds being too aggressive?

I have already tried different regions (e.g.
whirr.location-id=us-west-1 or us-east-1), but I experience the same
problem.
If I try with whirr.instance-templates=1 jt+nn, 4 dn+tt, everything is
fine: no errors.

Thanks in advance for your help (and thanks for sharing Whirr with the world),
Paolo

Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Posted by Paolo Castagna <ca...@googlemail.com>.
praveen.peddi@nokia.com wrote:
> I am having similar issue when I try to create 22 server hadoop cluster. 

Similar, but not exactly the same... yours is an
HTTP/1.1 413 Request Entity Too Large error (an overLimit rate-limit
response from Rackspace, with a retryAfter timestamp), whereas mine is a
503 RequestLimitExceeded from EC2.

Paolo

> Here is the stack trace:
> Bootstrapping cluster
> Configuring template
> Starting 22 node(s) with roles [tt, dn]
> Configuring template
> Starting 1 node(s) with roles [jt, nn]
> starting nodes, completed: 0/22, errors: 1, rate: 630ms/op
> java.util.concurrent.ExecutionException: org.jclouds.http.HttpResponseException: command: POST https://servers.api.rackspacecloud.com/v1.0/541520/servers?format=json&now=1296521767871 HTTP/1.1 failed with response: HTTP/1.1 413 Request Entity Too Large; content: [{"overLimit":{"message":"Too many requests...","code":413,"retryAfter":"2011-01-31T18:56:11.091-06:00"}}]
>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>         at org.jclouds.concurrent.FutureIterables$1.run(FutureIterables.java:121)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: org.jclouds.http.HttpResponseException: command: POST https://servers.api.rackspacecloud.com/v1.0/541520/servers?format=json&now=1296521767871 HTTP/1.1 failed with response: HTTP/1.1 413 Request Entity Too Large; content: [{"overLimit":{"message":"Too many requests...","code":413,"retryAfter":"2011-01-31T18:56:11.091-06:00"}}]
>         at org.jclouds.rackspace.cloudservers.handlers.ParseCloudServersErrorFromHttpResponse.handleError(ParseCloudServersErrorFromHttpResponse.java:76)
>         at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:70)
>         at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
>         at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
>         at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         ... 3 more
> 
> ________________________________________
> From: ext Paolo Castagna [castagna.lists@googlemail.com]
> Sent: Monday, January 31, 2011 3:04 PM
> To: whirr-user@incubator.apache.org
> Subject: Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt
> 
> Lars George wrote:
>> Hi Paolo,
>>
>> I had that same error a few times on a very slow connection and using The default AMIs. Could you try what I am using here
>>
>> https://github.com/larsgeorge/whirr/blob/WHIRR-201/services/hbase/src/test/resources/whirr-hbase-test.properties#L27
>>
>> Plus the medium or even large instance size. Just to confirm if that works or fails.
> 
> Ok, I can confirm I do not have problems using:
> 
> whirr.hardware-id=m1.large
> whirr.image-id=us-east-1/ami-da0cf8b3
> whirr.location-id=us-east-1
> 
> I do not know if it is m1.large or east-1/ami-da0cf8b3 which actually
> makes the difference.
> 
> ... also, to have the necessary jars copied in cli/target/lib I needed
> to run mvn install package -Ppackage instead on simply mvn package
> -Ppackage.
> 
> I will probably make more experiments tomorrow.
> 
> Thank you all,
> Paolo
> 
>> Lars
>>
>> On Jan 28, 2011, at 21:59, Paolo Castagna <ca...@googlemail.com> wrote:
>>
>>> Hi,
>>> I am a Whirr newbie and I've tried to use Whirr only recently.
>>> Also, I am not sure if the problem I am experiencing is related to
>>> Whirr, jclouds or Amazon.
>>>
>>> This is my Whirr properties file:
>>>
>>> ------
>>> whirr.service-name=hadoop
>>> whirr.cluster-name=myhadoopcluster
>>> whirr.location-id=eu-west-1
>>> whirr.instance-templates=1 jt+nn, 10 dn+tt
>>> whirr.provider=ec2
>>> whirr.identity=********************
>>> whirr.credential=****************************************
>>> whirr.private-key-file=${sys:user.home}/.ssh/castagna
>>> whirr.public-key-file=${sys:user.home}/.ssh/castagna.pub
>>> whirr.hadoop-install-runurl=cloudera/cdh/install
>>> whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
>>> ------
>>>
>>> This is what I do (i.e. I am trying to start-up a 10
>>> datanodes/tasktrackers Hadoop cluster):
>>>
>>> $ svn co https://svn.apache.org/repos/asf/incubator/whirr/trunk/ whirr
>>> $ cd whirr
>>> $ mvn package -Ppackage
>>> $ ./bin/whirr version
>>> Apache Whirr 0.4.0-incubating-SNAPSHOT
>>> $ ./bin/whirr launch-cluster --config
>>> /home/castagna/Desktop/hadoop-whirr.properties
>>>
>>> These are the errors I see on the console and on the whirr.log file:
>>>
>>> ------
>>> Cannot retry after server error, command has exceeded retry limit 5:
>>> [request=POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1]
>>> << problem applying options to node(eu-west-1/i-8709ecf1):
>>> org.jclouds.aws.AWSResponseException: request POST
>>> https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
>>> error: AWSError{requestId='6f0236b0-12c0-49b7-b21b-aab969b9be26',
>>> requestToken='null', code='RequestLimitExceeded', message='Request
>>> limit exceeded.', context='{Response=, Errors=}'}
>>>    at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:80)
>>>    at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:72)
>>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
>>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
>>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
>>>    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>    at java.lang.Thread.run(Thread.java:619)
>>> ------
>>>
>>> Same happens connecting to Amazon via Elasticfox while Whirr is running.
>>>
>>> Is someone else experiencing a similar problem?
>>> Is it my Amazon account or Whirr or jclouds too aggressive?
>>>
>>> I have already tried different regions (i.e.
>>> whirr.location-id=us-west-1|us-east-1) but I experience the same
>>> problem.
>>> If I try with whirr.instance-templates=1 jt+nn, 4 dn+tt everything is
>>> fine, no errors.
>>>
>>> Thanks in advance for your help (and thanks for sharing Whirr with the world),
>>> Paolo


RE: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Posted by pr...@nokia.com.
I am having a similar issue when I try to create a 22-server Hadoop cluster. Here is the stack trace:
Bootstrapping cluster
Configuring template
Starting 22 node(s) with roles [tt, dn]
Configuring template
Starting 1 node(s) with roles [jt, nn]
starting nodes, completed: 0/22, errors: 1, rate: 630ms/op
java.util.concurrent.ExecutionException: org.jclouds.http.HttpResponseException: command: POST https://servers.api.rackspacecloud.com/v1.0/541520/servers?format=json&now=1296521767871 HTTP/1.1 failed with response: HTTP/1.1 413 Request Entity Too Large; content: [{"overLimit":{"message":"Too many requests...","code":413,"retryAfter":"2011-01-31T18:56:11.091-06:00"}}]
        at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
        at java.util.concurrent.FutureTask.get(FutureTask.java:83)
        at org.jclouds.concurrent.FutureIterables$1.run(FutureIterables.java:121)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: org.jclouds.http.HttpResponseException: command: POST https://servers.api.rackspacecloud.com/v1.0/541520/servers?format=json&now=1296521767871 HTTP/1.1 failed with response: HTTP/1.1 413 Request Entity Too Large; content: [{"overLimit":{"message":"Too many requests...","code":413,"retryAfter":"2011-01-31T18:56:11.091-06:00"}}]
        at org.jclouds.rackspace.cloudservers.handlers.ParseCloudServersErrorFromHttpResponse.handleError(ParseCloudServersErrorFromHttpResponse.java:76)
        at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:70)
        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        ... 3 more

________________________________________
From: ext Paolo Castagna [castagna.lists@googlemail.com]
Sent: Monday, January 31, 2011 3:04 PM
To: whirr-user@incubator.apache.org
Subject: Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Lars George wrote:
> Hi Paolo,
>
> I had that same error a few times on a very slow connection and using The default AMIs. Could you try what I am using here
>
> https://github.com/larsgeorge/whirr/blob/WHIRR-201/services/hbase/src/test/resources/whirr-hbase-test.properties#L27
>
> Plus the medium or even large instance size. Just to confirm if that works or fails.

Ok, I can confirm I do not have problems using:

whirr.hardware-id=m1.large
whirr.image-id=us-east-1/ami-da0cf8b3
whirr.location-id=us-east-1

I do not know if it is m1.large or east-1/ami-da0cf8b3 which actually
makes the difference.

... also, to have the necessary jars copied in cli/target/lib I needed
to run mvn install package -Ppackage instead on simply mvn package
-Ppackage.

I will probably make more experiments tomorrow.

Thank you all,
Paolo

>
> Lars
>
> On Jan 28, 2011, at 21:59, Paolo Castagna <ca...@googlemail.com> wrote:
>
>> Hi,
>> I am a Whirr newbie and I've tried to use Whirr only recently.
>> Also, I am not sure if the problem I am experiencing is related to
>> Whirr, jclouds or Amazon.
>>
>> This is my Whirr properties file:
>>
>> ------
>> whirr.service-name=hadoop
>> whirr.cluster-name=myhadoopcluster
>> whirr.location-id=eu-west-1
>> whirr.instance-templates=1 jt+nn, 10 dn+tt
>> whirr.provider=ec2
>> whirr.identity=********************
>> whirr.credential=****************************************
>> whirr.private-key-file=${sys:user.home}/.ssh/castagna
>> whirr.public-key-file=${sys:user.home}/.ssh/castagna.pub
>> whirr.hadoop-install-runurl=cloudera/cdh/install
>> whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
>> ------
>>
>> This is what I do (i.e. I am trying to start-up a 10
>> datanodes/tasktrackers Hadoop cluster):
>>
>> $ svn co https://svn.apache.org/repos/asf/incubator/whirr/trunk/ whirr
>> $ cd whirr
>> $ mvn package -Ppackage
>> $ ./bin/whirr version
>> Apache Whirr 0.4.0-incubating-SNAPSHOT
>> $ ./bin/whirr launch-cluster --config
>> /home/castagna/Desktop/hadoop-whirr.properties
>>
>> These are the errors I see on the console and on the whirr.log file:
>>
>> ------
>> Cannot retry after server error, command has exceeded retry limit 5:
>> [request=POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1]
>> << problem applying options to node(eu-west-1/i-8709ecf1):
>> org.jclouds.aws.AWSResponseException: request POST
>> https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
>> error: AWSError{requestId='6f0236b0-12c0-49b7-b21b-aab969b9be26',
>> requestToken='null', code='RequestLimitExceeded', message='Request
>> limit exceeded.', context='{Response=, Errors=}'}
>>    at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:80)
>>    at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:72)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
>>    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>    at java.lang.Thread.run(Thread.java:619)
>> ------
>>
>> Same happens connecting to Amazon via Elasticfox while Whirr is running.
>>
>> Is someone else experiencing a similar problem?
>> Is it my Amazon account or Whirr or jclouds too aggressive?
>>
>> I have already tried different regions (i.e.
>> whirr.location-id=us-west-1|us-east-1) but I experience the same
>> problem.
>> If I try with whirr.instance-templates=1 jt+nn, 4 dn+tt everything is
>> fine, no errors.
>>
>> Thanks in advance for your help (and thanks for sharing Whirr with the world),
>> Paolo

Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Posted by Paolo Castagna <ca...@googlemail.com>.
Lars George wrote:
> Hi Paolo,
> 
> I had that same error a few times on a very slow connection and using The default AMIs. Could you try what I am using here
> 
> https://github.com/larsgeorge/whirr/blob/WHIRR-201/services/hbase/src/test/resources/whirr-hbase-test.properties#L27
> 
> Plus the medium or even large instance size. Just to confirm if that works or fails. 

Ok, I can confirm I do not have problems using:

whirr.hardware-id=m1.large
whirr.image-id=us-east-1/ami-da0cf8b3
whirr.location-id=us-east-1

I do not know whether it is m1.large or us-east-1/ami-da0cf8b3 that
actually makes the difference.

Also, to have the necessary jars copied into cli/target/lib I needed
to run mvn install package -Ppackage instead of simply mvn package
-Ppackage.
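
For reference, the exact sequence that worked for me (same trunk checkout
and properties file as above):

$ cd whirr
$ mvn install package -Ppackage
$ ./bin/whirr launch-cluster --config /home/castagna/Desktop/hadoop-whirr.properties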

I will probably run more experiments tomorrow.

Thank you all,
Paolo

> 
> Lars
> 
> On Jan 28, 2011, at 21:59, Paolo Castagna <ca...@googlemail.com> wrote:
> 
>> Hi,
>> I am a Whirr newbie and I've tried to use Whirr only recently.
>> Also, I am not sure if the problem I am experiencing is related to
>> Whirr, jclouds or Amazon.
>>
>> This is my Whirr properties file:
>>
>> ------
>> whirr.service-name=hadoop
>> whirr.cluster-name=myhadoopcluster
>> whirr.location-id=eu-west-1
>> whirr.instance-templates=1 jt+nn, 10 dn+tt
>> whirr.provider=ec2
>> whirr.identity=********************
>> whirr.credential=****************************************
>> whirr.private-key-file=${sys:user.home}/.ssh/castagna
>> whirr.public-key-file=${sys:user.home}/.ssh/castagna.pub
>> whirr.hadoop-install-runurl=cloudera/cdh/install
>> whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
>> ------
>>
>> This is what I do (i.e. I am trying to start-up a 10
>> datanodes/tasktrackers Hadoop cluster):
>>
>> $ svn co https://svn.apache.org/repos/asf/incubator/whirr/trunk/ whirr
>> $ cd whirr
>> $ mvn package -Ppackage
>> $ ./bin/whirr version
>> Apache Whirr 0.4.0-incubating-SNAPSHOT
>> $ ./bin/whirr launch-cluster --config
>> /home/castagna/Desktop/hadoop-whirr.properties
>>
>> These are the errors I see on the console and on the whirr.log file:
>>
>> ------
>> Cannot retry after server error, command has exceeded retry limit 5:
>> [request=POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1]
>> << problem applying options to node(eu-west-1/i-8709ecf1):
>> org.jclouds.aws.AWSResponseException: request POST
>> https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
>> error: AWSError{requestId='6f0236b0-12c0-49b7-b21b-aab969b9be26',
>> requestToken='null', code='RequestLimitExceeded', message='Request
>> limit exceeded.', context='{Response=, Errors=}'}
>>    at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:80)
>>    at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:72)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
>>    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>    at java.lang.Thread.run(Thread.java:619)
>> ------
>>
>> Same happens connecting to Amazon via Elasticfox while Whirr is running.
>>
>> Is someone else experiencing a similar problem?
>> Is it my Amazon account or Whirr or jclouds too aggressive?
>>
>> I have already tried different regions (i.e.
>> whirr.location-id=us-west-1|us-east-1) but I experience the same
>> problem.
>> If I try with whirr.instance-templates=1 jt+nn, 4 dn+tt everything is
>> fine, no errors.
>>
>> Thanks in advance for your help (and thanks for sharing Whirr with the world),
>> Paolo

Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Posted by Paolo Castagna <ca...@googlemail.com>.
Hi Lars (and Tom),
thank you for your reply... and sorry for taking so long to get back to
you, but at the moment I am having classpath problems with Whirr from
trunk and with the release-0.3.0-incubating branch as well.

As soon as I am able to run Whirr again (packaging it from trunk), I will
let you know whether using m1.large instances or a different image helps
to solve the problem I was experiencing.

Thank you again,
Paolo

On 29 January 2011 16:31, Lars George <la...@gmail.com> wrote:
> Hi Paolo,
>
> I had that same error a few times on a very slow connection and using The default AMIs. Could you try what I am using here
>
> https://github.com/larsgeorge/whirr/blob/WHIRR-201/services/hbase/src/test/resources/whirr-hbase-test.properties#L27
>
> Plus the medium or even large instance size. Just to confirm if that works or fails.
>
> Lars
>
> On Jan 28, 2011, at 21:59, Paolo Castagna <ca...@googlemail.com> wrote:
>
>> Hi,
>> I am a Whirr newbie and I've tried to use Whirr only recently.
>> Also, I am not sure if the problem I am experiencing is related to
>> Whirr, jclouds or Amazon.
>>
>> This is my Whirr properties file:
>>
>> ------
>> whirr.service-name=hadoop
>> whirr.cluster-name=myhadoopcluster
>> whirr.location-id=eu-west-1
>> whirr.instance-templates=1 jt+nn, 10 dn+tt
>> whirr.provider=ec2
>> whirr.identity=********************
>> whirr.credential=****************************************
>> whirr.private-key-file=${sys:user.home}/.ssh/castagna
>> whirr.public-key-file=${sys:user.home}/.ssh/castagna.pub
>> whirr.hadoop-install-runurl=cloudera/cdh/install
>> whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
>> ------
>>
>> This is what I do (i.e. I am trying to start-up a 10
>> datanodes/tasktrackers Hadoop cluster):
>>
>> $ svn co https://svn.apache.org/repos/asf/incubator/whirr/trunk/ whirr
>> $ cd whirr
>> $ mvn package -Ppackage
>> $ ./bin/whirr version
>> Apache Whirr 0.4.0-incubating-SNAPSHOT
>> $ ./bin/whirr launch-cluster --config
>> /home/castagna/Desktop/hadoop-whirr.properties
>>
>> These are the errors I see on the console and on the whirr.log file:
>>
>> ------
>> Cannot retry after server error, command has exceeded retry limit 5:
>> [request=POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1]
>> << problem applying options to node(eu-west-1/i-8709ecf1):
>> org.jclouds.aws.AWSResponseException: request POST
>> https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
>> error: AWSError{requestId='6f0236b0-12c0-49b7-b21b-aab969b9be26',
>> requestToken='null', code='RequestLimitExceeded', message='Request
>> limit exceeded.', context='{Response=, Errors=}'}
>>    at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:80)
>>    at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:72)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
>>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
>>    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>    at java.lang.Thread.run(Thread.java:619)
>> ------
>>
>> Same happens connecting to Amazon via Elasticfox while Whirr is running.
>>
>> Is someone else experiencing a similar problem?
>> Is it my Amazon account or Whirr or jclouds too aggressive?
>>
>> I have already tried different regions (i.e.
>> whirr.location-id=us-west-1|us-east-1) but I experience the same
>> problem.
>> If I try with whirr.instance-templates=1 jt+nn, 4 dn+tt everything is
>> fine, no errors.
>>
>> Thanks in advance for your help (and thanks for sharing Whirr with the world),
>> Paolo
>

Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Posted by Lars George <la...@gmail.com>.
Hi Paolo,

I had that same error a few times on a very slow connection and when using the default AMIs. Could you try what I am using here:

https://github.com/larsgeorge/whirr/blob/WHIRR-201/services/hbase/src/test/resources/whirr-hbase-test.properties#L27

plus the medium or even large instance size? Just to confirm whether that works or fails.
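
For example, something along these lines in the properties file (illustrative
only; take the exact image id from the file linked above, and m1.large is just
one possible size):

whirr.hardware-id=m1.large
whirr.image-id=us-east-1/ami-da0cf8b3
whirr.location-id=us-east-1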

Lars

On Jan 28, 2011, at 21:59, Paolo Castagna <ca...@googlemail.com> wrote:

> Hi,
> I am a Whirr newbie and I've tried to use Whirr only recently.
> Also, I am not sure if the problem I am experiencing is related to
> Whirr, jclouds or Amazon.
> 
> This is my Whirr properties file:
> 
> ------
> whirr.service-name=hadoop
> whirr.cluster-name=myhadoopcluster
> whirr.location-id=eu-west-1
> whirr.instance-templates=1 jt+nn, 10 dn+tt
> whirr.provider=ec2
> whirr.identity=********************
> whirr.credential=****************************************
> whirr.private-key-file=${sys:user.home}/.ssh/castagna
> whirr.public-key-file=${sys:user.home}/.ssh/castagna.pub
> whirr.hadoop-install-runurl=cloudera/cdh/install
> whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
> ------
> 
> This is what I do (i.e. I am trying to start-up a 10
> datanodes/tasktrackers Hadoop cluster):
> 
> $ svn co https://svn.apache.org/repos/asf/incubator/whirr/trunk/ whirr
> $ cd whirr
> $ mvn package -Ppackage
> $ ./bin/whirr version
> Apache Whirr 0.4.0-incubating-SNAPSHOT
> $ ./bin/whirr launch-cluster --config
> /home/castagna/Desktop/hadoop-whirr.properties
> 
> These are the errors I see on the console and on the whirr.log file:
> 
> ------
> Cannot retry after server error, command has exceeded retry limit 5:
> [request=POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1]
> << problem applying options to node(eu-west-1/i-8709ecf1):
> org.jclouds.aws.AWSResponseException: request POST
> https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
> error: AWSError{requestId='6f0236b0-12c0-49b7-b21b-aab969b9be26',
> requestToken='null', code='RequestLimitExceeded', message='Request
> limit exceeded.', context='{Response=, Errors=}'}
>    at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:80)
>    at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:72)
>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
>    at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
>    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>    at java.lang.Thread.run(Thread.java:619)
> ------
> 
> Same happens connecting to Amazon via Elasticfox while Whirr is running.
> 
> Is someone else experiencing a similar problem?
> Is it my Amazon account or Whirr or jclouds too aggressive?
> 
> I have already tried different regions (i.e.
> whirr.location-id=us-west-1|us-east-1) but I experience the same
> problem.
> If I try with whirr.instance-templates=1 jt+nn, 4 dn+tt everything is
> fine, no errors.
> 
> Thanks in advance for your help (and thanks for sharing Whirr with the world),
> Paolo

Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Posted by Kiss Tibor <ki...@gmail.com>.
Hi Paolo,

When I experienced startup problems like yours, I also initially wondered
whether Whirr or jclouds were being too aggressive. But we also have an
older, classic instance-startup framework (nothing to do with Hadoop or
Whirr), and there I ran into the same startup problems whenever I suddenly
requested more instances.
That framework differs from Hadoop cluster startup in that it is based on
lazy initialization: nodes are added on demand, and only under certain
conditions are several instances started in parallel; of course, that is
mostly where the problem appeared. I cannot recall a case where I failed to
start a single instance at a time, but starting more than 3-5 at once often
failed.

This suggests that these virtualised systems are not entirely reliable, but
I see a very noticeable difference: when I use plain Xen virtual machines I
have never experienced this kind of problem. So the problem could be in the
management layer of EC2 instances; maybe there is a synchronization issue
somewhere in their code.

That's why I insisted on having WHIRR-167 at least. Since then I have been
using it, my startup process at least survives, and I don't lose the entire
cluster.
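
To illustrate the idea behind it (this is not Whirr or jclouds code, just a
sketch of the retry-with-backoff pattern, with a hypothetical Launcher
standing in for the real provisioning call):

import java.util.concurrent.TimeUnit;

public class RetryLaunchSketch {

  // Stand-in for the real provisioning call; throws on errors such as
  // RequestLimitExceeded (503) or overLimit (413).
  interface Launcher {
    void launchNode() throws Exception;
  }

  // Retry a single node launch with exponential backoff instead of
  // failing the entire cluster on the first throttling error.
  static void launchWithRetry(Launcher launcher, int maxAttempts)
      throws Exception {
    long delayMillis = 1000;                 // start with a one-second pause
    for (int attempt = 1; ; attempt++) {
      try {
        launcher.launchNode();
        return;                              // node started successfully
      } catch (Exception e) {
        if (attempt >= maxAttempts) {
          throw e;                           // give up only after maxAttempts tries
        }
        TimeUnit.MILLISECONDS.sleep(delayMillis);
        delayMillis *= 2;                    // back off before the next attempt
      }
    }
  }
}

Retrying with a growing delay gives the provider's management layer time to
recover instead of hammering it with the same burst of requests.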

Cheers,
Tibor

On Sat, Jan 29, 2011 at 12:03 AM, Tom White <to...@gmail.com> wrote:

> Hi Paolo,
>
> I'm not sure what is causing the problem (Adrian may have more insight
> into this jclouds failure mode), but it might be related to this bug:
> https://issues.apache.org/jira/browse/WHIRR-167. Perhaps you could try
> with this patch?
>
> Cheers,
> Tom
>
> On Fri, Jan 28, 2011 at 12:59 PM, Paolo Castagna
> <ca...@googlemail.com> wrote:
>
> > Same happens connecting to Amazon via Elasticfox while Whirr is running.
> >
> > Is someone else experiencing a similar problem?
> > Is it my Amazon account or Whirr or jclouds too aggressive?
>

Re: Request Limit Exceeded using Whirr 0.4.0-incubating-SNAPSHOT with 1 jt+nn, 10 dn+tt

Posted by Tom White <to...@gmail.com>.
Hi Paolo,

I'm not sure what is causing the problem (Adrian may have more insight
into this jclouds failure mode), but it might be related to this bug:
https://issues.apache.org/jira/browse/WHIRR-167. Perhaps you could try
with this patch?
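
Roughly, and only as a sketch (the patch file name is hypothetical, and the
-p level depends on how the patch was generated):

$ cd whirr
$ patch -p0 < WHIRR-167.patch
$ mvn package -Ppackage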

Cheers,
Tom

On Fri, Jan 28, 2011 at 12:59 PM, Paolo Castagna
<ca...@googlemail.com> wrote:
> Hi,
> I am a Whirr newbie and I've tried to use Whirr only recently.
> Also, I am not sure if the problem I am experiencing is related to
> Whirr, jclouds or Amazon.
>
> This is my Whirr properties file:
>
> ------
> whirr.service-name=hadoop
> whirr.cluster-name=myhadoopcluster
> whirr.location-id=eu-west-1
> whirr.instance-templates=1 jt+nn, 10 dn+tt
> whirr.provider=ec2
> whirr.identity=********************
> whirr.credential=****************************************
> whirr.private-key-file=${sys:user.home}/.ssh/castagna
> whirr.public-key-file=${sys:user.home}/.ssh/castagna.pub
> whirr.hadoop-install-runurl=cloudera/cdh/install
> whirr.hadoop-configure-runurl=cloudera/cdh/post-configure
> ------
>
> This is what I do (i.e. I am trying to start-up a 10
> datanodes/tasktrackers Hadoop cluster):
>
> $ svn co https://svn.apache.org/repos/asf/incubator/whirr/trunk/ whirr
> $ cd whirr
> $ mvn package -Ppackage
> $ ./bin/whirr version
> Apache Whirr 0.4.0-incubating-SNAPSHOT
> $ ./bin/whirr launch-cluster --config
> /home/castagna/Desktop/hadoop-whirr.properties
>
> These are the errors I see on the console and on the whirr.log file:
>
> ------
> Cannot retry after server error, command has exceeded retry limit 5:
> [request=POST https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1]
> << problem applying options to node(eu-west-1/i-8709ecf1):
> org.jclouds.aws.AWSResponseException: request POST
> https://ec2.eu-west-1.amazonaws.com/ HTTP/1.1 failed with code 503,
> error: AWSError{requestId='6f0236b0-12c0-49b7-b21b-aab969b9be26',
> requestToken='null', code='RequestLimitExceeded', message='Request
> limit exceeded.', context='{Response=, Errors=}'}
>        at org.jclouds.aws.handlers.ParseAWSErrorFromXmlContent.handleError(ParseAWSErrorFromXmlContent.java:80)
>        at org.jclouds.http.handlers.DelegatingErrorHandler.handleError(DelegatingErrorHandler.java:72)
>        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.shouldContinue(BaseHttpCommandExecutorService.java:193)
>        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:163)
>        at org.jclouds.http.internal.BaseHttpCommandExecutorService$HttpResponseCallable.call(BaseHttpCommandExecutorService.java:132)
>        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>        at java.lang.Thread.run(Thread.java:619)
> ------
>
> Same happens connecting to Amazon via Elasticfox while Whirr is running.
>
> Is someone else experiencing a similar problem?
> Is it my Amazon account or Whirr or jclouds too aggressive?
>
> I have already tried different regions (i.e.
> whirr.location-id=us-west-1|us-east-1) but I experience the same
> problem.
> If I try with whirr.instance-templates=1 jt+nn, 4 dn+tt everything is
> fine, no errors.
>
> Thanks in advance for your help (and thanks for sharing Whirr with the world),
> Paolo
>