Posted to common-issues@hadoop.apache.org by "Aaron Kimball (JIRA)" <ji...@apache.org> on 2010/04/16 00:22:58 UTC

[jira] Commented: (HADOOP-6708) New file format for very large records

    [ https://issues.apache.org/jira/browse/HADOOP-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12857578#action_12857578 ] 

Aaron Kimball commented on HADOOP-6708:
---------------------------------------

In working on Sqoop, I need to import records that may each be several gigabytes in size. I require a file format that lets me store these records in an efficient, grouped fashion.

Users may then want to open a file containing many such records, partially read an individual record, and still seek to subsequent records efficiently.
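To make the access pattern concrete, here is a minimal sketch (not the actual LobFile specification — the class name, on-disk layout, and methods are all hypothetical) of one way to get lazy, partial reads: length-prefix each record and append an index of record start offsets, so a reader can abandon a partially-read multi-gigabyte record and seek straight to the next one.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration only, not the proposed LobFile format.
// Layout: [8-byte len][payload] per record, then an index of record
// start offsets, then the index's own offset as the final 8 bytes.
public class LobSketch {

    // Write all records, then the offset index, then the index pointer.
    static void write(Path file, List<byte[]> records) throws IOException {
        try (RandomAccessFile out = new RandomAccessFile(file.toFile(), "rw")) {
            long[] offsets = new long[records.size()];
            for (int i = 0; i < records.size(); i++) {
                offsets[i] = out.getFilePointer();   // remember record start
                out.writeLong(records.get(i).length);
                out.write(records.get(i));
            }
            long indexStart = out.getFilePointer();
            out.writeInt(records.size());
            for (long off : offsets) out.writeLong(off);
            out.writeLong(indexStart);               // trailer: where the index lives
        }
    }

    // Read only the first prefixLen bytes of record i, never touching the
    // rest of that record's (possibly multi-gigabyte) payload.
    static byte[] readPrefix(Path file, int i, int prefixLen) throws IOException {
        try (RandomAccessFile in = new RandomAccessFile(file.toFile(), "r")) {
            in.seek(in.length() - 8);
            long indexStart = in.readLong();         // locate the index
            in.seek(indexStart);
            int n = in.readInt();
            long[] offsets = new long[n];
            for (int j = 0; j < n; j++) offsets[j] = in.readLong();
            in.seek(offsets[i]);                     // jump straight to record i
            long len = in.readLong();
            byte[] buf = new byte[(int) Math.min(len, prefixLen)];
            in.readFully(buf);
            return buf;
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("lobsketch", ".bin");
        List<byte[]> recs = new ArrayList<>();
        recs.add("first-record-payload".getBytes());
        recs.add("second-record-payload".getBytes());
        write(f, recs);
        // Read 5 bytes of record 0, then jump directly to record 1.
        System.out.println(new String(readPrefix(f, 0, 5)));   // first
        System.out.println(new String(readPrefix(f, 1, 6)));   // second
    }
}
```

The offset index is what distinguishes this from a plain stream of length-prefixed records: without it, skipping a record you have partially consumed still means seeking past its remaining bytes one record at a time.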

I'm attaching to this issue a proposal for a _LobFile_ format that will store these large objects. (This work is motivated by the import of BLOB- and CLOB-typed columns.) The proposed specification surveys the available file formats and explains why, in my understanding, they aren't appropriate here.

> New file format for very large records
> --------------------------------------
>
>                 Key: HADOOP-6708
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6708
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: io
>            Reporter: Aaron Kimball
>            Assignee: Aaron Kimball
>
> A file format that handles multi-gigabyte records efficiently, with lazy disk access

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira