Posted to common-dev@hadoop.apache.org by "stack (JIRA)" <ji...@apache.org> on 2007/06/19 21:29:26 UTC
[jira] Updated: (HADOOP-1505) HADOOP-1093 adds INFO-level logging of stacktrace java.lang.Exception... ZlibFactory.getZlibCompressor(ZlibFactory.java:81)
[ https://issues.apache.org/jira/browse/HADOOP-1505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
stack updated HADOOP-1505:
--------------------------
Attachment: zlibfactory.patch
Suggested fix.
> HADOOP-1093 adds INFO-level logging of stacktrace java.lang.Exception... ZlibFactory.getZlibCompressor(ZlibFactory.java:81)
> ---------------------------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-1505
> URL: https://issues.apache.org/jira/browse/HADOOP-1505
> Project: Hadoop
> Issue Type: Bug
> Reporter: stack
> Attachments: zlibfactory.patch
>
>
> This change:
> + 52. HADOOP-1193. Pool allocation of compression codecs. This
> + eliminates a memory leak that could cause OutOfMemoryException,
> + and also substantially improves performance.
> + (Arun C Murthy via cutting)
> Added this to logs:
> {code}
> 07/06/19 12:18:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 07/06/19 12:18:58 INFO zlib.ZlibFactory: Creating a new ZlibCompressor
> java.lang.Exception
> 	at org.apache.hadoop.io.compress.zlib.ZlibFactory.getZlibCompressor(ZlibFactory.java:81)
> 	at org.apache.hadoop.io.compress.DefaultCodec.createCompressor(DefaultCodec.java:59)
> 	at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:740)
> 	at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:863)
> 	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:136)
> 	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:112)
> 	at org.apache.hadoop.hbase.HLog.rollWriter(HLog.java:227)
> 	at org.apache.hadoop.hbase.HLog.<init>(HLog.java:172)
> 	at org.apache.hadoop.hbase.AbstractMergeTestBase.createNewHRegion(AbstractMergeTestBase.java:137)
> 	at org.apache.hadoop.hbase.AbstractMergeTestBase.createAregion(AbstractMergeTestBase.java:104)
> 	at org.apache.hadoop.hbase.AbstractMergeTestBase.setUp(AbstractMergeTestBase.java:72)
> 	at junit.framework.TestCase.runBare(TestCase.java:125)
> 	at junit.framework.TestResult$1.protect(TestResult.java:106)
> 	at junit.framework.TestResult.runProtected(TestResult.java:124)
> 	at junit.framework.TestResult.run(TestResult.java:109)
> 	at junit.framework.TestCase.run(TestCase.java:118)
> 	at junit.framework.TestSuite.runTest(TestSuite.java:208)
> 	at junit.framework.TestSuite.run(TestSuite.java:203)
> 	at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:128)
> 	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
> {code}
> This seems to be the culprit code:
> {code}
> public static Compressor getZlibCompressor() {
>   LOG.info("Creating a new ZlibCompressor");
>   try {
>     throw new Exception();
>   } catch (Exception e) {
>     e.printStackTrace();
>   }
>   return (nativeZlibLoaded) ?
>     new ZlibCompressor() : new BuiltInZlibDeflater();
> }
> {code}
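The attached zlibfactory.patch is not reproduced in this message, but the obvious remedy is to drop the throw/catch diagnostic block (a debugging leftover that dumps a stack trace on every call) and the per-call INFO log line, leaving only the codec selection. A minimal standalone sketch of what the cleaned-up method would look like; the Compressor interface, the two implementations, and nativeZlibLoaded are stub stand-ins here, not the real org.apache.hadoop.io.compress classes:

```java
// Sketch of the cleaned-up factory method. Compressor, ZlibCompressor,
// BuiltInZlibDeflater, and nativeZlibLoaded are stubs standing in for
// the real Hadoop classes and field.
public class ZlibFactorySketch {

  interface Compressor {}

  static class ZlibCompressor implements Compressor {}       // native-backed stub
  static class BuiltInZlibDeflater implements Compressor {}  // pure-Java stub

  // In Hadoop this flag is set when the native zlib library loads.
  private static boolean nativeZlibLoaded = false;

  public static Compressor getZlibCompressor() {
    // The INFO log line and the throw/catch stack-trace dump are gone;
    // only the codec selection logic remains.
    return nativeZlibLoaded ?
      new ZlibCompressor() : new BuiltInZlibDeflater();
  }

  public static void main(String[] args) {
    // With the native library unavailable, the pure-Java deflater is chosen.
    Compressor c = getZlibCompressor();
    System.out.println(c.getClass().getSimpleName());
  }
}
```

Since getZlibCompressor() is called once per codec allocation, any unconditional logging there is hot-path noise; anything kept for diagnostics belongs at DEBUG level, guarded by LOG.isDebugEnabled().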
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.