Posted to common-issues@hadoop.apache.org by "Maxence Bernard (JIRA)" <ji...@apache.org> on 2010/02/28 23:12:05 UTC

[jira] Commented: (HADOOP-2534) File manager frontend for Hadoop DFS (with proof of concept).

    [ https://issues.apache.org/jira/browse/HADOOP-2534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12839507#action_12839507 ] 

Maxence Bernard commented on HADOOP-2534:
-----------------------------------------

Support for HDFS has been added to muCommander 0.8.5.


> File manager frontend for Hadoop DFS (with proof of concept).
> -------------------------------------------------------------
>
>                 Key: HADOOP-2534
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2534
>             Project: Hadoop Common
>          Issue Type: Wish
>          Components: io
>            Reporter: Dawid Weiss
>         Attachments: upload.png
>
>
> I had problems classifying this, but since it's not an improvement nor a task, I thought I'd put it under "wishes". I like the command line, but using hadoop fs -X ... leaves my fingers hurting after some time. I thought it would be great to have a file manager-like front end to DFS. So I modified muCommander (Java-based) a little bit and voila -- it works _great_, especially for browsing, uploading and deleting stuff.
> I uploaded the binary and WebStart-launchable version here:
> http://project.carrot2.org/varia/mucommander-hdfs
> Look at the screenshots; they will give you a clue about how it works. I had some thoughts about publishing the source code -- muCommander is GPLed... so I guess it can't reside in Hadoop's repository anyway, no matter what we do. If you need the sources, let me know.
> Finally, a few thoughts stemming from the coding session:
>     * The DF utility does not work under Windows. This has been addressed recently on the mailing list (HADOOP-33), so it's not a big issue, I guess.
>     * I support the claim that it would be sensible to introduce a client interface to DFS and provide two implementations -- one with intelligent spooling on local disk (using DF) and one with some simpler form of spooling (in /tmp, for example). Note the funky shape of the upload chart above, resulting from the delay between spooling and chunk upload. I don't know if this can be worked around in any way.
>     * An incompatible protocol version causes exceptions. Since the protocol changes quite frequently (isn't it version 20 at the moment?), some way of choosing the connection protocol to Hadoop and keeping the most recent versions around would be very useful for external clients.
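> As a rough illustration of the "simpler form of spooling" idea above, the client could copy the incoming stream to a temporary file on local disk first, then hand that file to the uploader. This is only a sketch using plain JDK I/O (the class and method names are hypothetical, not from muCommander or Hadoop); a real client would then feed the spooled file to the DFS upload in chunks:
>
> import java.io.*;
> import java.nio.file.*;
>
> public class SpoolDemo {
>     // Spool the whole input stream to a temp file (under java.io.tmpdir,
>     // typically /tmp), so the later upload reads from local disk at full
>     // speed instead of being throttled by the producer.
>     static Path spoolToTemp(InputStream in) throws IOException {
>         Path tmp = Files.createTempFile("dfs-spool-", ".tmp");
>         Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
>         return tmp;
>     }
>
>     public static void main(String[] args) throws IOException {
>         byte[] data = "hello dfs!".getBytes("UTF-8");
>         Path spooled = spoolToTemp(new ByteArrayInputStream(data));
>         System.out.println(Files.size(spooled)); // prints 10
>         Files.delete(spooled);
>     }
> }
>
> The delay visible in the upload chart would correspond to the gap between the Files.copy above finishing and the chunked upload starting; overlapping the two (upload chunk N while spooling chunk N+1) is one way it might be worked around.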

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.