Posted to common-dev@hadoop.apache.org by Mike Smith <mi...@gmail.com> on 2007/01/18 00:37:59 UTC
InternalError response on Hadoop and S3
Sometimes a large number of internal errors occur on Amazon S3, which prevents
the put command from going through and causes a failure in
Jets3tFileSystemStore.put(Jets3tFileSystemStore.java:203). I think it would
be great to have an exponential back-off and retry mechanism in the put()
method when we get these errors. Here is a sample of the exception that a
failed S3 put() causes:
org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException:
S3 PUT failed. XML Error Message: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InternalError</Code><Message>We encountered an internal error.
Please try
again.</Message><RequestId>201316FD</RequestId><HostId>sc5AtWptF6nqAkBzaaTo19oGtz9O</HostId></Error>
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.put(Jets3tFileSystemStore.java:203)
        at org.apache.hadoop.fs.s3.Jets3tFileSystemStore.storeBlock(Jets3tFileSystemStore.java:212)
        at org.apache.hadoop.fs.s3.S3OutputStream.endBlock(S3OutputStream.java:151)
        at org.apache.hadoop.fs.s3.S3OutputStream.close(S3OutputStream.java:188)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at org.apache.hadoop.fs.FSDataOutputStream$Summer.close(FSDataOutputStream.java:99)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at java.io.FilterOutputStream.close(FilterOutputStream.java:143)
        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:326)
        at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:371)
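To make the suggestion concrete, here is a minimal standalone sketch of the
kind of exponential back-off and retry wrapper I have in mind. This is not
Hadoop or jets3t code; the class and constant names (RetryingPut, MAX_RETRIES,
INITIAL_BACKOFF_MS) are hypothetical, chosen just for illustration:

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: retry a flaky operation (e.g. an S3 put that returns
// InternalError) with exponential back-off. Illustrative names only.
public class RetryingPut {
    static final int MAX_RETRIES = 5;
    static final long INITIAL_BACKOFF_MS = 100;

    // Runs the given action, retrying on failure with a doubling delay,
    // and rethrows the last exception once all attempts are exhausted.
    public static <T> T withRetries(Callable<T> action) throws Exception {
        long backoff = INITIAL_BACKOFF_MS;
        for (int attempt = 1; ; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                if (attempt >= MAX_RETRIES) {
                    throw e; // give up after the final attempt
                }
                Thread.sleep(backoff);
                backoff *= 2; // double the wait before the next attempt
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate an S3 put that fails twice with InternalError, then succeeds.
        final int[] calls = {0};
        String result = withRetries(() -> {
            calls[0]++;
            if (calls[0] < 3) {
                throw new RuntimeException("InternalError");
            }
            return "stored";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In Jets3tFileSystemStore the same pattern could wrap the jets3t call inside
put(), so a handful of transient InternalError responses no longer kill the
whole job.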
Thanks, Mike
Re: InternalError response on Hadoop and S3
Posted by Tom White <to...@gmail.com>.
Thanks for reporting this. I've had a few of these exceptions too.
There's a Jira issue here:
https://issues.apache.org/jira/browse/HADOOP-882.
Tom
On 17/01/07, Mike Smith <mi...@gmail.com> wrote:
> Sometimes large number of internal errors keeps happening on Amazon S3 [...]