Posted to common-dev@hadoop.apache.org by "Hong Tang (JIRA)" <ji...@apache.org> on 2008/11/18 01:27:44 UTC
[jira] Created: (HADOOP-4673) IFile.Writer close() uses compressor after returning it to CodecPool.
IFile.Writer close() uses compressor after returning it to CodecPool.
----------------------------------------------------------------------
Key: HADOOP-4673
URL: https://issues.apache.org/jira/browse/HADOOP-4673
Project: Hadoop Core
Issue Type: Bug
Affects Versions: 0.18.2, 0.18.1
Reporter: Hong Tang
The problem is of the same nature as HADOOP-4195.
The compressor is returned to the CodecPool, and is later used again when "out.close()" finishes the stream.
{code:title=IFile.java|borderStyle=solid}
public void close() throws IOException {
  // Close the serializers
  keySerializer.close();
  valueSerializer.close();

  // Write EOF_MARKER for key/value length
  WritableUtils.writeVInt(out, EOF_MARKER);
  WritableUtils.writeVInt(out, EOF_MARKER);
  decompressedBytesWritten += 2 * WritableUtils.getVIntSize(EOF_MARKER);

  if (compressOutput) {
    // Flush data from buffers into the compressor
    out.flush();

    // Flush & return the compressor
    compressedOut.finish();
    compressedOut.resetState();
    CodecPool.returnCompressor(compressor);
    compressor = null;
  }

  // Close the stream
  rawOut.flush();
  compressedBytesWritten = rawOut.getPos() - start;

  // Close the underlying stream iff we own it...
  if (ownOutputStream) {
    out.close();
  }
  out = null;
}
{code}
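The hazard above can be shown with a small self-contained sketch. ToyPool and ToyCompressor are illustrative stand-ins, not Hadoop's actual CodecPool or Compressor APIs: once a pooled object is returned, another thread may take it, so any later use (such as the finish triggered by closing the stream) operates on a resource the caller no longer owns.

```java
import java.util.ArrayDeque;

// Toy model of the use-after-return bug: a pooled compressor must not be
// handed back to the pool while the stream that uses it is still open.
class ToyCompressor {
    boolean inUse = true;

    // Models the finish that out.close() triggers on the compressor.
    int finish() {
        if (!inUse) {
            throw new IllegalStateException("compressor used after return to pool");
        }
        return 42;
    }
}

class ToyPool {
    static final ArrayDeque<ToyCompressor> pool = new ArrayDeque<>();

    static void returnCompressor(ToyCompressor c) {
        c.inUse = false; // the pool may now hand it to another caller
        pool.push(c);
    }
}

class Demo {
    public static void main(String[] args) {
        // Buggy ordering (as in the quoted close()): return, then finish.
        ToyCompressor c1 = new ToyCompressor();
        ToyPool.returnCompressor(c1);
        boolean failed = false;
        try {
            c1.finish();
        } catch (IllegalStateException e) {
            failed = true;
        }
        System.out.println("buggy order failed: " + failed);

        // Fixed ordering: finish (i.e. close the stream) first, then return.
        ToyCompressor c2 = new ToyCompressor();
        int v = c2.finish();
        ToyPool.returnCompressor(c2);
        System.out.println("fixed order result: " + v);
    }
}
```

The fix, then, is purely one of ordering: CodecPool.returnCompressor() must come after the last operation that touches the compressor.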
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Commented: (HADOOP-4673) IFile.Writer close() uses compressor after returning it to CodecPool.
Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12648443#action_12648443 ]
Arun C Murthy commented on HADOOP-4673:
---------------------------------------
IFile.Writer.close() closes the rawOut?
HADOOP-3514 changed this...
[jira] Commented: (HADOOP-4673) IFile.Writer close() uses compressor after returning it to CodecPool.
Posted by "Jothi Padmanabhan (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12648483#action_12648483 ]
Jothi Padmanabhan commented on HADOOP-4673:
-------------------------------------------
Yes, that needs to be fixed. The reason is that IFileOutputStream.close() only performs the checksum calculation; it does not close the underlying output stream.
This could be fixed by treating it the same way as a compressed output stream: move the checksum calculation into finish(), and have close() call finish() and then close the underlying stream.
{code}
public void finish() throws IOException {
  if (finished) {
    return;
  }
  finished = true;
  // calculate checksum and write it to
  // the underlying stream
}

public void close() throws IOException {
  finish();
  out.close();
}
{code}
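The finish()/close() split above mirrors what java.util.zip.DeflaterOutputStream does. A minimal runnable sketch, with a hypothetical ChecksumOutputStream (a toy one-byte additive checksum, not Hadoop's actual IFileOutputStream):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the proposed pattern: finish() writes the trailer exactly once
// without closing the stream; close() calls finish() and then closes out.
class ChecksumOutputStream extends FilterOutputStream {
    private boolean finished = false;
    private int sum = 0; // toy additive checksum, for illustration only

    ChecksumOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        sum = (sum + (b & 0xff)) & 0xff;
        out.write(b);
    }

    // Write the trailing checksum once; safe to call more than once.
    public void finish() throws IOException {
        if (finished) {
            return;
        }
        finished = true;
        out.write(sum);
    }

    @Override
    public void close() throws IOException {
        finish();
        out.close();
    }
}
```

Callers that own the underlying stream call close(); callers that only want the trailer written (as IFile.Writer does when it does not own the stream) call finish() and leave the stream open.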
[jira] Resolved: (HADOOP-4673) IFile.Writer close() uses compressor after returning it to CodecPool.
Posted by "Jothi Padmanabhan (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jothi Padmanabhan resolved HADOOP-4673.
---------------------------------------
Resolution: Invalid
This is no longer an issue.
[jira] Assigned: (HADOOP-4673) IFile.Writer close() uses compressor after returning it to CodecPool.
Posted by "Jothi Padmanabhan (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jothi Padmanabhan reassigned HADOOP-4673:
-----------------------------------------
Assignee: Jothi Padmanabhan (was: Arun C Murthy)
[jira] Commented: (HADOOP-4673) IFile.Writer close() uses compressor after returning it to CodecPool.
Posted by "Jothi Padmanabhan (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12649630#action_12649630 ]
Jothi Padmanabhan commented on HADOOP-4673:
-------------------------------------------
Created a new JIRA, HADOOP-4706, to address the IFileOutputStream.close issue.
[jira] Assigned: (HADOOP-4673) IFile.Writer close() uses compressor after returning it to CodecPool.
Posted by "Arun C Murthy (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HADOOP-4673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Arun C Murthy reassigned HADOOP-4673:
-------------------------------------
Assignee: Arun C Murthy