Posted to issues@hbase.apache.org by "ramkrishna.s.vasudevan (Updated) (JIRA)" <ji...@apache.org> on 2012/01/20 15:28:39 UTC
[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams
not getting closed when any of the writer threads has exceptions.
[ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
ramkrishna.s.vasudevan updated HBASE-5235:
------------------------------------------
Summary: HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions. (was: HLogSplitter writer threads not getting closed when any of the writer threads has exceptions.)
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
> Key: HBASE-5235
> URL: https://issues.apache.org/jira/browse/HBASE-5235
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.92.0, 0.90.5
> Reporter: ramkrishna.s.vasudevan
> Assignee: ramkrishna.s.vasudevan
> Fix For: 0.92.1, 0.90.6
>
>
> Please find my analysis below. Correct me if I am wrong.
> {code}
> 2012-01-15 05:14:02,374 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. When any writer thread hits an exception, we hold it in an atomic variable:
> {code}
> private void writerThreadError(Throwable t) {
>   thrown.compareAndSet(null, t);
> }
> {code}
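> A side effect of compareAndSet(null, t) is that only the first reported error is retained; failures from later writer threads are silently dropped. A minimal standalone sketch of this pattern (the class name FirstErrorHolder is hypothetical, not HBase code):
> {code}
> import java.io.IOException;
> import java.util.concurrent.atomic.AtomicReference;
>
> // Hypothetical standalone demo of the error-holding pattern used above.
> public class FirstErrorHolder {
>   private final AtomicReference<Throwable> thrown = new AtomicReference<>();
>
>   // Mirrors writerThreadError: only the first reported error is kept.
>   public void writerThreadError(Throwable t) {
>     thrown.compareAndSet(null, t);
>   }
>
>   public Throwable get() {
>     return thrown.get();
>   }
>
>   public static void main(String[] args) {
>     FirstErrorHolder holder = new FirstErrorHolder();
>     holder.writerThreadError(new IOException("first failure"));
>     holder.writerThreadError(new RuntimeException("second failure"));
>     // The second compareAndSet is a no-op because thrown is no longer null.
>     System.out.println(holder.get().getMessage()); // prints "first failure"
>   }
> }
> {code}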
> In the finally block of splitLog we try to close the streams:
> {code}
> for (WriterThread t : writerThreads) {
>   try {
>     t.join();
>   } catch (InterruptedException ie) {
>     throw new IOException(ie);
>   }
>   checkForErrors();
> }
> LOG.info("Split writers finished");
>
> return closeStreams();
> {code}
> Inside checkForErrors:
> {code}
> private void checkForErrors() throws IOException {
>   Throwable thrown = this.thrown.get();
>   if (thrown == null) return;
>   if (thrown instanceof IOException) {
>     throw (IOException) thrown;
>   } else {
>     throw new RuntimeException(thrown);
>   }
> }
> {code}
> So once checkForErrors throws the recorded exception, closeStreams() is never reached and the DFS output streams (and their DataStreamer threads) are not closed.
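> One way to address this (a sketch under my assumptions, not the actual patch for this issue) is to make sure the streams are closed even when checkForErrors throws, e.g. by wrapping the error check in try/finally. The class and method names below are hypothetical stand-ins for the HLogSplitter internals:
> {code}
> import java.io.Closeable;
> import java.io.IOException;
> import java.util.Arrays;
> import java.util.List;
>
> // Hypothetical sketch: close writer streams even if error checking throws.
> public class SplitFinishSketch {
>   static int closedCount = 0;
>
>   // Stand-in for a writer thread's output stream.
>   static class DummyStream implements Closeable {
>     @Override public void close() { closedCount++; }
>   }
>
>   // Stand-in for checkForErrors(): rethrows a recorded writer-thread error.
>   static void checkForErrors(Throwable thrown) throws IOException {
>     if (thrown instanceof IOException) throw (IOException) thrown;
>     if (thrown != null) throw new RuntimeException(thrown);
>   }
>
>   // Close every stream regardless of whether a writer thread failed.
>   static void finishWriting(List<DummyStream> streams, Throwable thrown)
>       throws IOException {
>     try {
>       checkForErrors(thrown);
>     } finally {
>       for (DummyStream s : streams) {
>         try {
>           s.close();
>         } catch (RuntimeException e) {
>           // Log and keep closing the remaining streams.
>         }
>       }
>     }
>   }
>
>   public static void main(String[] args) {
>     List<DummyStream> streams =
>         Arrays.asList(new DummyStream(), new DummyStream());
>     try {
>       finishWriting(streams, new IOException("All datanodes are bad. Aborting..."));
>     } catch (IOException expected) {
>       // The error still propagates, but only after every stream was closed.
>     }
>     System.out.println(closedCount); // prints 2
>   }
> }
> {code}
> With this shape, the writer-thread exception still surfaces to the caller, but the finally block guarantees the DFS streams are released first.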
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira