Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2009/06/14 10:00:12 UTC

[jira] Commented: (HADOOP-5598) Implement a pure Java CRC32 calculator

    [ https://issues.apache.org/jira/browse/HADOOP-5598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12719232#action_12719232 ] 

dhruba borthakur commented on HADOOP-5598:
------------------------------------------

Looking at the results, it appears that the choice of CRC algorithm, CRC data size, and JVM can result in pretty varied performance numbers. Maybe it would be worthwhile to make the ChecksumFileSystem pick a Checksum object based on a config parameter.
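
As a rough illustration of the config-driven idea (the property name "fs.checksum.crc32.impl" and the factory class below are made up for this sketch, not an existing Hadoop API; PureJavaCrc32 is the class from the attached patch):

    import java.util.zip.CRC32;
    import java.util.zip.Checksum;

    import org.apache.hadoop.conf.Configuration;

    public class ChecksumFactory {
      // Hypothetical config key, for illustration only.
      private static final String IMPL_KEY = "fs.checksum.crc32.impl";

      public static Checksum newCrc32(Configuration conf) {
        String impl = conf.get(IMPL_KEY, "native");
        if ("pure-java".equals(impl)) {
          // The pure-Java implementation proposed in the attached patch.
          return new org.apache.hadoop.util.PureJavaCrc32();
        }
        // Default: the JDK CRC32, which calls into native zlib through JNI.
        return new CRC32();
      }
    }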

Also, the hybrid model of dynamically deciding which algorithm to use (based on the size of the data to checksum) sounds a little scary to me :-)
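
A less scary variant of the size-based idea would pick the implementation once, up front, from the configured bytes-per-checksum rather than switching mid-stream (CRC state cannot easily be handed from one implementation to the other). The 512-byte threshold below is only a placeholder; the real break-even point would have to come from the attached benchmark numbers:

    import java.util.zip.CRC32;
    import java.util.zip.Checksum;

    public class Crc32Selector {
      // Assumed cut-over point, for illustration only.
      private static final int NATIVE_THRESHOLD = 512;

      public static Checksum forChunkSize(int bytesPerChecksum) {
        if (bytesPerChecksum < NATIVE_THRESHOLD) {
          // Small chunks: the per-call JNI crossing dominates, so prefer pure Java.
          return new org.apache.hadoop.util.PureJavaCrc32();
        }
        // Large chunks: the native zlib CRC amortizes the JNI overhead.
        return new CRC32();
      }
    }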

> Implement a pure Java CRC32 calculator
> --------------------------------------
>
>                 Key: HADOOP-5598
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5598
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: Owen O'Malley
>            Assignee: Todd Lipcon
>         Attachments: crc32-results.txt, hadoop-5598-hybrid.txt, hadoop-5598.txt, TestCrc32Performance.java, TestCrc32Performance.java
>
>
> We've seen a reducer writing 200MB to HDFS with replication = 1 spending a long time in crc calculation. In particular, it was spending 5 seconds in crc calculation out of a total of 6 for the write. I suspect that it is the Java/JNI boundary that is causing us grief.
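
For anyone who wants to reproduce the JNI-overhead hypothesis locally, a minimal JDK-only timing loop in the spirit of the attached TestCrc32Performance.java might look like the following (the 64 MB buffer and 512-byte chunk size are arbitrary choices for illustration):

    import java.util.Random;
    import java.util.zip.CRC32;
    import java.util.zip.Checksum;

    public class Crc32ChunkBench {
      public static void main(String[] args) {
        byte[] data = new byte[64 * 1024 * 1024];   // 64 MB of random test data
        new Random(0).nextBytes(data);
        int chunk = 512;                            // small chunks, as HDFS checksumming does

        Checksum crc = new CRC32();
        long start = System.nanoTime();
        for (int off = 0; off < data.length; off += chunk) {
          crc.reset();
          crc.update(data, off, Math.min(chunk, data.length - off));
        }
        long millis = (System.nanoTime() - start) / 1000000L;
        System.out.println("CRC32 over " + data.length + " bytes in "
            + chunk + "-byte chunks took " + millis + " ms");
      }
    }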

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.