Posted to common-issues@hadoop.apache.org by "Lei (Eddy) Xu (JIRA)" <ji...@apache.org> on 2015/03/10 00:08:38 UTC
[jira] [Updated] (HADOOP-11697) Use larger value for fs.s3a.connection.timeout.
[ https://issues.apache.org/jira/browse/HADOOP-11697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Lei (Eddy) Xu updated HADOOP-11697:
-----------------------------------
Attachment: HADOOP-11697.001.patch
Reverted the patch so that it no longer changes the time unit of the socket connection timeout in the AWS SDK.
> Use larger value for fs.s3a.connection.timeout.
> -----------------------------------------------
>
> Key: HADOOP-11697
> URL: https://issues.apache.org/jira/browse/HADOOP-11697
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 2.6.0
> Reporter: Lei (Eddy) Xu
> Assignee: Lei (Eddy) Xu
> Priority: Minor
> Labels: s3
> Attachments: HADOOP-11697.001.patch, HDFS-7908.000.patch
>
>
> The default value of {{fs.s3a.connection.timeout}} is {{50000}} milliseconds. It causes many {{SocketTimeoutException}}s when uploading large files using {{hadoop fs -put}}.
> Also, the units for {{fs.s3a.connection.timeout}} and {{fs.s3a.connection.establish.timeout}} are milliseconds. For S3 connections, sub-second timeout values are not necessary, so I suggest changing the time unit to seconds to ease the sysadmin's job.
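As a sketch of the workaround described above, the timeout can be raised in core-site.xml; the 200000 ms value below is illustrative only, not the value chosen in the patch:

```xml
<!-- core-site.xml: raise the S3A socket timeout (value in milliseconds).
     200000 ms here is an example, not the value committed in HADOOP-11697. -->
<property>
  <name>fs.s3a.connection.timeout</name>
  <value>200000</value>
</property>
```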
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)