Posted to issues@hbase.apache.org by "Andrew Purtell (JIRA)" <ji...@apache.org> on 2010/03/29 18:43:27 UTC

[jira] Updated: (HBASE-2387) [stargate] FUSE module for mounting Stargate exported tablespaces

     [ https://issues.apache.org/jira/browse/HBASE-2387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Purtell updated HBASE-2387:
----------------------------------

    Description: 
FUSE: http://fuse.sourceforge.net/

Create a FUSE translator that mounts Stargate-exported tablespaces into the Linux filesystem namespace. It should support Stargate running in multiuser mode and operate in either of two modes:

1) Map the exported tablespace 1:1 under the mount point.

2) Emulate a filesystem, like s3fs (http://code.google.com/p/s3fs/wiki/FuseOverAmazon)
    - Stargate multiget and multiput operations can help performance
    - Translate paths under the mount point to row keys for good load spreading: {{/a/b/c/file.ext}} becomes {{file.ext/c/b/a}} (see the path-mapping sketch after this list)
    - Consider borrowing from Tom White's Hadoop S3 FS (HADOOP-574) and store file data as blocks (see the block-layout sketch after this list).
        -- After fetching the inode, the translator can stream all of a file's blocks in a single Stargate multiget. This would support arbitrary file sizes; otherwise there is a practical limit somewhere around 20-50 MB with default regionserver heaps.
        -- So {{file.ext/c/b/a}} gets the inode. Blocks would be keyed by the SHA-1 hash of their contents.
        -- Use multiversioning on the inode to get snapshots for free: a path like {{/a/b/c/file.ext;timestamp}} returns the file contents as of or before _timestamp_ (see the snapshot-read sketch after this list).
        -- Because new writes produce new blocks with unique hashes, this behaves like a dedup filesystem. Use ICV (incrementColumnValue) to maintain use counters on blocks (see the refcount sketch after this list).
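A minimal sketch of the path-to-row-key translation described above, in Python. The function name {{path_to_row_key}} is made up for illustration; nothing here corresponds to existing Stargate or HBase code.

{code}
# Sketch: reverse the path components so that row keys start with the file
# name rather than a shared directory prefix, spreading rows across regions.
def path_to_row_key(path):
    parts = [p for p in path.split("/") if p]   # "/a/b/c/file.ext" -> ["a", "b", "c", "file.ext"]
    return "/".join(reversed(parts))            # -> "file.ext/c/b/a"

assert path_to_row_key("/a/b/c/file.ext") == "file.ext/c/b/a"
{code}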
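A rough sketch of the block-layout idea, assuming file content is split into fixed-size blocks keyed by the SHA-1 of their contents and an inode row lists them. {{stargate_put}}, {{stargate_get}} and {{stargate_multiget}} are placeholders for whatever client calls end up talking to Stargate, the column names ({{fs:inode}}, {{fs:block}}) are invented here, and it reuses {{path_to_row_key}} from the previous sketch.

{code}
import hashlib, json

BLOCK_SIZE = 4 * 1024 * 1024  # illustrative block size, not a recommendation

def write_file(path, data, stargate_put):
    # Write path: store each block under the SHA-1 of its contents, then
    # write an inode row listing the block hashes in order.
    hashes = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        h = hashlib.sha1(block).hexdigest()
        stargate_put(h, "fs:block", block)          # placeholder client call
        hashes.append(h)
    inode = {"size": len(data), "blocks": hashes}
    stargate_put(path_to_row_key(path), "fs:inode", json.dumps(inode).encode())

def read_file(path, stargate_get, stargate_multiget):
    # Read path: one get for the inode, then one multiget streams every block.
    inode = json.loads(stargate_get(path_to_row_key(path), "fs:inode"))
    return b"".join(stargate_multiget(inode["blocks"], "fs:block"))
{code}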
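A sketch of the snapshot-read idea: strip an optional {{;timestamp}} suffix from the path and pass it as an upper timestamp bound on the inode get. Whether the client exposes such a bound is an assumption; {{max_timestamp}} below is a placeholder for it.

{code}
def split_versioned_path(path):
    # "/a/b/c/file.ext;1269880000000" -> ("/a/b/c/file.ext", 1269880000000)
    # "/a/b/c/file.ext"               -> ("/a/b/c/file.ext", None)
    if ";" in path:
        base, _, ts = path.rpartition(";")
        return base, int(ts)
    return path, None

def read_inode_at(path, stargate_get):
    # Assumption: the get call accepts a max_timestamp bound, mapping to an
    # HBase timestamp-range read; the newest inode version on or before that
    # instant describes the file as it existed then.
    base, ts = split_versioned_path(path)
    return stargate_get(path_to_row_key(base), "fs:inode", max_timestamp=ts)
{code}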
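Finally, a sketch of the ICV use counters for dedup. {{stargate_increment}} and {{stargate_delete}} are placeholders for an atomic increment (HBase's incrementColumnValue) and a delete; the {{fs:refcount}} column is invented for illustration.

{code}
def add_block_ref(block_hash, stargate_increment):
    # Called when an inode gains a reference to the block.
    stargate_increment(block_hash, "fs:refcount", 1)

def release_block_ref(block_hash, stargate_increment, stargate_delete):
    # Called on unlink/overwrite; physically drop the block only when no
    # inode references it any more (assumes the increment returns the new count).
    remaining = stargate_increment(block_hash, "fs:refcount", -1)
    if remaining <= 0:
        stargate_delete(block_hash, "fs:block")
{code}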

  was:
FUSE: http://fuse.sourceforge.net/

Create a FUSE translator that mounts Stargate exported tablespaces into the Linux filesystem namespace. Support Stargate when it is running in multiuser mode. Should run in either of two modes:

1) Map 1:1 the exported tablespace under the mount point.

2) Emulate a filesystem, like s3fs (http://code.google.com/p/s3fs/wiki/FuseOverAmazon)
    - Stargate multiget and multiput operations can help performance
    - Translate paths under the mount point to row keys for good load spreading, {{/a/b/c/file.ext}} becomes {{file.ext/c/b/a}}
    - Consider borrowing from Tom White's Hadoop S3 FS (HADOOP-574), and store file data as blocks. 
        -- After fetching the inode can stream all blocks in a Stargate multiget. This would support arbitrary file sizes. Otherwise there is a practical limit somewhere around 20-50 MB with default regionserver heaps. 
        -- So,  {{file.ext/c/b/a}} gets the inode. Blocks would be keyed using the SHA-1 hash of their contents. 
        -- Because new writes produce new blocks with unique hashes, this is like a dedup filesystem. Use ICV to maintain use counters on blocks.


> [stargate] FUSE module for mounting Stargate exported tablespaces
> -----------------------------------------------------------------
>
>                 Key: HBASE-2387
>                 URL: https://issues.apache.org/jira/browse/HBASE-2387
>             Project: Hadoop HBase
>          Issue Type: New Feature
>            Reporter: Andrew Purtell
>            Assignee: Andrew Purtell
>            Priority: Minor
>
> FUSE: http://fuse.sourceforge.net/
> Create a FUSE translator that mounts Stargate-exported tablespaces into the Linux filesystem namespace. It should support Stargate running in multiuser mode and operate in either of two modes:
> 1) Map the exported tablespace 1:1 under the mount point.
> 2) Emulate a filesystem, like s3fs (http://code.google.com/p/s3fs/wiki/FuseOverAmazon)
>     - Stargate multiget and multiput operations can help performance
>     - Translate paths under the mount point to row keys for good load spreading: {{/a/b/c/file.ext}} becomes {{file.ext/c/b/a}}
>     - Consider borrowing from Tom White's Hadoop S3 FS (HADOOP-574) and store file data as blocks.
>         -- After fetching the inode, the translator can stream all of a file's blocks in a single Stargate multiget. This would support arbitrary file sizes; otherwise there is a practical limit somewhere around 20-50 MB with default regionserver heaps.
>         -- So {{file.ext/c/b/a}} gets the inode. Blocks would be keyed by the SHA-1 hash of their contents.
>         -- Use multiversioning on the inode to get snapshots for free: a path like {{/a/b/c/file.ext;timestamp}} returns the file contents as of or before _timestamp_.
>         -- Because new writes produce new blocks with unique hashes, this behaves like a dedup filesystem. Use ICV (incrementColumnValue) to maintain use counters on blocks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.