Posted to common-issues@hadoop.apache.org by "Chen He (JIRA)" <ji...@apache.org> on 2015/10/09 08:56:27 UTC
[jira] [Commented] (HADOOP-12471) Support Swift file (> 5GB) continuous uploading when there is a failure
[ https://issues.apache.org/jira/browse/HADOOP-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14949990#comment-14949990 ]
Chen He commented on HADOOP-12471:
----------------------------------
First of all, I think we need a way to differentiate those failed leftover files from other files.
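For illustration, here is a minimal sketch of one way to spot such leftovers. The convention it assumes, which is an assumption for this sketch and not necessarily how fs/swift names things, is that a completed large file leaves a manifest object named exactly "foo" next to its "foo/00000x" chunk objects, so chunk objects with no matching manifest mark an interrupted upload:

```java
import java.util.*;

/** Sketch: identify failed leftover chunk uploads in a container listing.
 *  Assumption (not from the JIRA): a completed large file has a manifest
 *  object named exactly like the file ("foo") alongside its "foo/00000x"
 *  chunks; chunks without a manifest are treated as a failed upload. */
public class LeftoverDetector {
    /** Returns base names whose chunks exist but whose manifest is missing. */
    public static Set<String> findLeftovers(Collection<String> objectNames) {
        Set<String> all = new HashSet<>(objectNames);
        Set<String> leftovers = new TreeSet<>();
        for (String name : all) {
            int slash = name.lastIndexOf('/');
            if (slash <= 0) continue;                 // not a chunk object
            String base = name.substring(0, slash);
            String suffix = name.substring(slash + 1);
            if (suffix.matches("\\d{6}") && !all.contains(base)) {
                leftovers.add(base);                  // chunks with no manifest
            }
        }
        return leftovers;
    }

    public static void main(String[] args) {
        List<String> listing = Arrays.asList(
            "foo/000001", "foo/000002", "foo/000003", // interrupted upload
            "bar", "bar/000001", "bar/000002",        // complete: manifest "bar" present
            "plain.txt");
        System.out.println(findLeftovers(listing));   // prints [foo]
    }
}
```

A real implementation would of course consult the Swift container listing rather than an in-memory list, and would likely also verify chunk sizes or checksums before trusting them.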
> Support Swift file (> 5GB) continuous uploading when there is a failure
> -------------------------------------------------------------------------
>
> Key: HADOOP-12471
> URL: https://issues.apache.org/jira/browse/HADOOP-12471
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs/swift
> Affects Versions: 2.7.1
> Reporter: Chen He
>
> The current Swift FileSystem supports files larger than 5GB.
> A file is split into chunks of up to 4.6GB (configurable). For example, if there is a 46GB file "foo" in Swift,
> then the structure will look like:
> foo/000001
> foo/000002
> foo/000003
> ...
> foo/000010
> Users will not see those 00000x chunk files unless they ask for them explicitly. That means, if a user does:
\> hadoop fs -ls swift://container.serviceProvider/foo
> It only shows:
-rw-r--r-- 46GB foo
> However, in my test, if a failure occurs while uploading the foo file, the previously uploaded chunks are left in the object store. It would be good to support resuming the upload from those leftover chunks.
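The resume step described above could be planned roughly as in this sketch. Everything here is hypothetical (the sizes-in-MB units, the idea that uploaded chunk sizes can be read back from the container listing, and the helper names are all assumptions, not existing fs/swift code): a chunk that is missing or shorter than expected is re-uploaded, and complete chunks are kept.

```java
import java.util.*;

/** Sketch: plan a resumed upload of a chunked file after a failure.
 *  Assumption (not existing fs/swift behavior): sizes of already-uploaded
 *  chunks can be read back from the container, so any chunk that is missing
 *  or truncated is scheduled for re-upload and full chunks are reused. */
public class ResumePlanner {
    /** Chunk numbers (1-based) that still need uploading. */
    public static List<Integer> chunksToUpload(long fileSize, long chunkSize,
                                               Map<Integer, Long> uploadedSizes) {
        int totalChunks = (int) ((fileSize + chunkSize - 1) / chunkSize);
        List<Integer> pending = new ArrayList<>();
        for (int i = 1; i <= totalChunks; i++) {
            // every chunk is full-sized except possibly the last one
            long expected = (i < totalChunks) ? chunkSize
                          : fileSize - (long) (totalChunks - 1) * chunkSize;
            Long actual = uploadedSizes.get(i);
            if (actual == null || actual != expected) {
                pending.add(i);               // missing or truncated chunk
            }
        }
        return pending;
    }

    public static void main(String[] args) {
        // 46GB file in 4.6GB chunks -> 10 chunks (sizes in MB for the sketch);
        // chunks 1-3 completed, chunk 4 truncated by the failure.
        long fileSize = 46_000L, chunkSize = 4_600L;
        Map<Integer, Long> uploaded = new HashMap<>();
        uploaded.put(1, chunkSize);
        uploaded.put(2, chunkSize);
        uploaded.put(3, chunkSize);
        uploaded.put(4, 123L);                // partial chunk from the failure
        System.out.println(chunksToUpload(fileSize, chunkSize, uploaded));
        // prints [4, 5, 6, 7, 8, 9, 10]
    }
}
```

In practice a checksum (e.g. the object ETag) would be a safer completeness test than size alone, since a chunk could be full-length but corrupt.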
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)