Posted to common-issues@hadoop.apache.org by "Alejandro Abdelnur (JIRA)" <ji...@apache.org> on 2012/06/08 23:13:22 UTC

[jira] [Created] (HADOOP-8496) FsShell is broken with s3 filesystems

Alejandro Abdelnur created HADOOP-8496:
------------------------------------------

             Summary: FsShell is broken with s3 filesystems
                 Key: HADOOP-8496
                 URL: https://issues.apache.org/jira/browse/HADOOP-8496
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs/s3
    Affects Versions: 2.0.1-alpha
            Reporter: Alejandro Abdelnur
            Priority: Critical


After setting up an S3 account and configuring the site.xml with the access key/password, when doing an ls on a non-empty bucket I get:

{code}
Found 4 items
-ls: -0s
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
{code}

Note that while it correctly shows the number of items in the root of the bucket, it does not show the contents of the root.

I've tried -get and -put and they work fine, but listing a folder in the bucket seems to be fully broken.
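
For reference, a minimal programmatic equivalent of the failing listing, done through the FileSystem API instead of FsShell. It assumes the s3n credential property names (fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey); the bucket name and key values are placeholders:

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: the same listing via the FileSystem API, with the credentials set
// programmatically instead of through a -conf XML file.
public class S3nListing {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");      // placeholder
    conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");  // placeholder

    FileSystem fs = FileSystem.get(URI.create("s3n://your-bucket/"), conf);
    for (FileStatus status : fs.listStatus(new Path("s3n://your-bucket/"))) {
      // Note: s3n typically reports empty owner/group strings, which matters
      // for the shell's column formatting discussed in the comments below.
      System.out.printf("%s owner='%s' group='%s'%n",
          status.getPath(), status.getOwner(), status.getGroup());
    }
  }
}
{code}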



[jira] [Comment Edited] (HADOOP-8496) FsShell is broken with s3 filesystems

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13401053#comment-13401053 ] 

Alejandro Abdelnur edited comment on HADOOP-8496 at 6/26/12 12:12 AM:
----------------------------------------------------------------------

dup of HADOOP-8168
                
      was (Author: tucu00):
    dup of HADOOP-4335
                  


[jira] [Commented] (HADOOP-8496) FsShell is broken with s3 filesystems

Posted by "Daryn Sharp (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13398817#comment-13398817 ] 

Daryn Sharp commented on HADOOP-8496:
-------------------------------------

Oh, I know what this is.  The format string is angry about "%-0s" for the user/group fields.  I'm positive there was a jira and patch to fix this.  It must not have been committed.
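
If that is the cause, it reproduces outside FsShell: an empty owner/group gives a column width of 0, and java.util.Formatter rejects the resulting "%-0s" specifier. A minimal sketch below; the Math.max guard is just one way to sidestep it, not necessarily what the referenced patch does:

{code}
// Demonstrates the failure mode: a left-justified specifier built with a
// zero width ("%-0s") is rejected by java.util.Formatter at parse time.
public class ZeroWidthFormat {
  public static void main(String[] args) {
    String owner = "";                                 // e.g. an empty owner field from s3
    String bad = "%-" + owner.length() + "s";          // becomes "%-0s"
    try {
      System.out.println(String.format(bad, owner));
    } catch (java.util.IllegalFormatException e) {     // thrown before any argument is used
      System.out.println("format failed: " + e.getMessage());
    }

    // One possible guard: never let a computed column width drop below 1.
    String ok = "%-" + Math.max(owner.length(), 1) + "s";
    System.out.println("[" + String.format(ok, owner) + "]");  // prints "[ ]"
  }
}
{code}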
                


[jira] [Commented] (HADOOP-8496) FsShell is broken with s3 filesystems

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13398513#comment-13398513 ] 

Alejandro Abdelnur commented on HADOOP-8496:
--------------------------------------------

Daryn, no '-0s' entry.

Using Hadoop 1.0.1, things work:

{code}
$ bin/hadoop fs -conf ~/aws-s3.xml -ls s3n://tucu/
Found 4 items
-rwxrwxrwx   1          5 2012-06-08 14:00 /foo.txt
drwxrwxrwx   -          0 1969-12-31 16:00 /test
-rwxrwxrwx   1          5 2012-06-08 13:53 /test.txt
-rwxrwxrwx   1          5 2012-06-08 13:56 /test1.txt
$ 
{code}

Using Hadoop 2.0.0/trunk, things fail:

{code}
$ bin/hadoop fs -conf ~/aws-s3.xml -ls s3n://tucu/
Found 4 items
-ls: -0s
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [<path> ...]
$ 
{code}
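
That matches the "%-0s" diagnosis earlier in the thread: the 1.0.1 listing above appears to show blank owner/group columns, whereas a 2.x-style ls that derives its column widths from the widest owner/group it has seen ends up with a width of 0 when every entry reports an empty string. Roughly (an illustration, not the actual Ls code):

{code}
import org.apache.hadoop.fs.FileStatus;

// Illustration of max-width column sizing: with s3 FileStatus objects whose
// getOwner()/getGroup() return "", both widths stay 0 and the line format
// ends up containing the illegal "%-0s" specifier.
public class ColumnWidths {
  static String buildLineFormat(FileStatus[] items) {
    int maxOwner = 0;
    int maxGroup = 0;
    for (FileStatus item : items) {
      maxOwner = Math.max(maxOwner, item.getOwner().length());
      maxGroup = Math.max(maxGroup, item.getGroup().length());
    }
    // e.g. "%s %-0s %-0s %s" when all owners/groups are empty
    return "%s %-" + maxOwner + "s %-" + maxGroup + "s %s";
  }
}
{code}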

                


[jira] [Resolved] (HADOOP-8496) FsShell is broken with s3 filesystems

Posted by "Alejandro Abdelnur (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HADOOP-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alejandro Abdelnur resolved HADOOP-8496.
----------------------------------------

    Resolution: Duplicate

dup of HADOOP-4335
                


[jira] [Commented] (HADOOP-8496) FsShell is broken with s3 filesystems

Posted by "Daryn Sharp (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HADOOP-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13398470#comment-13398470 ] 

Daryn Sharp commented on HADOOP-8496:
-------------------------------------

Does the directory contain a "-0s" entry?  What is the exact cmdline and the exact contents of the directory?
                
