Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2006/06/06 01:08:51 UTC

[jira] Closed: (HADOOP-212) allow changes to dfs block size

     [ http://issues.apache.org/jira/browse/HADOOP-212?page=all ]
     
Doug Cutting closed HADOOP-212:
-------------------------------


> allow changes to dfs block size
> -------------------------------
>
>          Key: HADOOP-212
>          URL: http://issues.apache.org/jira/browse/HADOOP-212
>      Project: Hadoop
>         Type: Improvement
>   Components: dfs
>     Versions: 0.2
>     Reporter: Owen O'Malley
>     Assignee: Owen O'Malley
>     Priority: Critical
>      Fix For: 0.3.0
>  Attachments: TEST-org.apache.hadoop.fs.TestCopyFiles.txt, dfs-blocksize-2.patch, dfs-blocksize.patch
>
> Trying to change the DFS block size led to the realization that the value 32,000,000 was hard-coded into the source code. I propose:
>   1. Change the default block size to 64 * 1024 * 1024.
>   2. Add the config variable dfs.block.size that sets the default block size (a config sketch follows this list).
>   3. Add a parameter to the FileSystem, DFSClient, and ClientProtocol create methods that lets the user control the block size.
>   4. Rename FileSystem.getBlockSize to getDefaultBlockSize.
>   5. Add a new FileSystem.getBlockSize method that takes a pathname (items 3-5 are sketched after this list).
>   6. Use long for the block size in the API, as before. Note, however, that the implementation will not work if the block size is set larger than 2**31 (2 GB, the maximum value of a signed 32-bit int).
>   7. Have the InputFormatBase use the block size of each file to determine the split size (a split-size sketch follows below).
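>
> For item 2, a minimal sketch of what the new config entry might look like in hadoop-site.xml (the property name comes from item 2 and the value is item 1's 64 * 1024 * 1024; the description text is illustrative):
>
>   <property>
>     <name>dfs.block.size</name>
>     <value>67108864</value>
>     <description>The default block size for new DFS files: 64 * 1024 * 1024 bytes.</description>
>   </property>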
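>
> For items 3-5, the FileSystem surface might end up shaped roughly like this (a sketch only, assuming the new create() overload mirrors the existing one plus a blockSize argument; the exact parameter list is illustrative, not the committed patch):
>
>   import java.io.IOException;
>   import org.apache.hadoop.fs.FSDataOutputStream;
>   import org.apache.hadoop.fs.Path;
>
>   // Shape sketch of org.apache.hadoop.fs.FileSystem after items 3-5.
>   public abstract class FileSystem {
>     // Item 3: create() overload that lets the caller choose a per-file block size.
>     public abstract FSDataOutputStream create(Path f, boolean overwrite,
>         int bufferSize, short replication, long blockSize) throws IOException;
>
>     // Item 4: the old getBlockSize(), renamed because it only reports a default.
>     public abstract long getDefaultBlockSize();
>
>     // Item 5: the block size of an existing file, looked up by pathname.
>     public abstract long getBlockSize(Path f) throws IOException;
>   }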
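>
> And for item 7, a self-contained sketch of the split computation (the class and helper names, the minSplitSize parameter, and the {offset, length} pair representation are all hypothetical; InputFormatBase would do the equivalent per file, using item 5's lookup to get each file's block size):
>
>   import java.util.ArrayList;
>   import java.util.List;
>
>   public class SplitSizeSketch {
>     // Derive the split size from each file's own block size rather than a
>     // single hard-coded constant, then carve the file into splits.
>     static List<long[]> computeSplits(long fileLength, long fileBlockSize,
>                                       long minSplitSize) {
>       // Guard against a zero split size when both inputs are unset.
>       long splitSize = Math.max(1, Math.max(minSplitSize, fileBlockSize));
>       List<long[]> splits = new ArrayList<long[]>();  // {offset, length} pairs
>       for (long offset = 0; offset < fileLength; offset += splitSize) {
>         splits.add(new long[] { offset, Math.min(splitSize, fileLength - offset) });
>       }
>       return splits;
>     }
>   }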
> Thoughts?

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira