Posted to common-issues@hadoop.apache.org by "Todd Lipcon (JIRA)" <ji...@apache.org> on 2009/07/13 19:45:15 UTC
[jira] Moved: (HADOOP-6148) Implement a pure Java CRC32 calculator
[ https://issues.apache.org/jira/browse/HADOOP-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Todd Lipcon moved HDFS-297 to HADOOP-6148:
------------------------------------------
Key: HADOOP-6148 (was: HDFS-297)
Project: Hadoop Common (was: Hadoop HDFS)
> Implement a pure Java CRC32 calculator
> --------------------------------------
>
> Key: HADOOP-6148
> URL: https://issues.apache.org/jira/browse/HADOOP-6148
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Owen O'Malley
> Assignee: Todd Lipcon
> Attachments: crc32-results.txt, hadoop-5598-evil.txt, hadoop-5598-hybrid.txt, hadoop-5598.txt, hadoop-5598.txt, hdfs-297.txt, PureJavaCrc32.java, PureJavaCrc32.java, PureJavaCrc32.java, TestCrc32Performance.java, TestCrc32Performance.java, TestCrc32Performance.java, TestPureJavaCrc32.java
>
>
> We've seen a reducer writing 200MB to HDFS with replication = 1 spend a long time in CRC calculation. In particular, it spent 5 seconds in CRC calculation out of a total of 6 seconds for the write. I suspect that the Java/JNI boundary is what is causing us grief.
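For context, the idea behind a "pure Java" CRC32 is a table-driven implementation of java.util.zip.Checksum using the same reflected polynomial (0xEDB88320) as java.util.zip.CRC32, so that per-update() calls stay inside the JVM instead of crossing into native zlib via JNI. The sketch below is illustrative only; the class name SketchCrc32 is hypothetical and this is not the PureJavaCrc32.java attached to the issue.

```java
import java.util.zip.Checksum;

// Hypothetical sketch of a table-driven pure-Java CRC-32; NOT the attached
// PureJavaCrc32.java. Uses the same polynomial as java.util.zip.CRC32, but
// every update stays in Java bytecode, avoiding the JNI crossing.
public class SketchCrc32 implements Checksum {
  // Precomputed table: CRC of every possible byte value.
  private static final int[] TABLE = new int[256];
  static {
    for (int i = 0; i < 256; i++) {
      int c = i;
      for (int k = 0; k < 8; k++) {
        c = (c & 1) != 0 ? (c >>> 1) ^ 0xEDB88320 : c >>> 1;
      }
      TABLE[i] = c;
    }
  }

  private int crc = 0xFFFFFFFF; // standard CRC-32 initial value

  @Override
  public void update(int b) {
    crc = (crc >>> 8) ^ TABLE[(crc ^ b) & 0xFF];
  }

  @Override
  public void update(byte[] b, int off, int len) {
    for (int i = off; i < off + len; i++) {
      crc = (crc >>> 8) ^ TABLE[(crc ^ b[i]) & 0xFF];
    }
  }

  @Override
  public long getValue() {
    return (~crc) & 0xFFFFFFFFL; // final XOR-out, as an unsigned 32-bit value
  }

  @Override
  public void reset() {
    crc = 0xFFFFFFFF;
  }
}
```

Because it implements the same Checksum interface, such a class can drop in wherever java.util.zip.CRC32 is used today, and its output can be verified byte-for-byte against the native implementation (the standard CRC-32 check value for the ASCII string "123456789" is 0xCBF43926).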
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.