Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2014/01/02 10:56:50 UTC

[jira] [Commented] (HADOOP-10195) swiftfs object list stops at 10000 objects

    [ https://issues.apache.org/jira/browse/HADOOP-10195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13860103#comment-13860103 ] 

Steve Loughran commented on HADOOP-10195:
-----------------------------------------

Looks reasonable. Have you tested this against throttled endpoints like Rackspace UK? The many-small-file operations used to hit problems at delete time, and we may want to increase the test timeouts there.

> swiftfs object list stops at 10000 objects
> ------------------------------------------
>
>                 Key: HADOOP-10195
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10195
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.3.0
>            Reporter: David Dobbins
>            Assignee: David Dobbins
>         Attachments: hadoop-10195.patch, hadoop-10195.patch
>
>
> Listing objects in a container in Swift is limited to 10,000 objects per request. swiftfs makes only one request and is therefore limited to the first 10,000 objects in the container, ignoring any remaining objects.
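
The Swift listing API caps each response at 10,000 names; a client pages through a larger container by passing the last name it received as the `marker` query parameter on the next request, until a short (or empty) page signals the end. A minimal sketch of that pagination loop in Python (illustrative only — the actual patch is Java in the hadoop-openstack module), where the hypothetical `list_page` callable stands in for one HTTP listing request:

```python
def list_all_objects(list_page, page_size=10000):
    """Collect every object name in a container by paging with markers.

    `list_page(marker, limit)` is assumed to return up to `limit` names
    that sort after `marker` (or from the start when marker is None).
    """
    objects = []
    marker = None
    while True:
        page = list_page(marker=marker, limit=page_size)
        objects.extend(page)
        if len(page) < page_size:
            # A short page means the listing is exhausted.
            break
        # Resume the next request after the last name seen.
        marker = page[-1]
    return objects
```

A full page is always followed by one more request, so a container whose size is an exact multiple of the page size costs one extra (empty) round trip — the trade-off for not needing a total count up front.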



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)