Posted to issues@hbase.apache.org by "ramkrishna.s.vasudevan (Created) (JIRA)" <ji...@apache.org> on 2012/01/20 15:26:40 UTC

[jira] [Created] (HBASE-5235) HLogSplitter writer threads not getting closed when any of the writer threads has exceptions.

HLogSplitter writer threads not getting closed when any of the writer threads has exceptions.
---------------------------------------------------------------------------------------------

                 Key: HBASE-5235
                 URL: https://issues.apache.org/jira/browse/HBASE-5235
             Project: HBase
          Issue Type: Bug
    Affects Versions: 0.90.5, 0.92.0
            Reporter: ramkrishna.s.vasudevan
            Assignee: ramkrishna.s.vasudevan
             Fix For: 0.92.1, 0.90.6


Please find the analysis below; correct me if I am wrong.
{code}
2012-01-15 05:14:02,374 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got while writing log entry to log
java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)

{code}
Here we have an exception in one of the writer threads. When a writer thread hits an exception, we record it in an atomic variable:
{code}
  private void writerThreadError(Throwable t) {
    thrown.compareAndSet(null, t);
  }
{code}
In the finally block of splitLog(), we join the writer threads, check for errors, and then close the streams:
{code}
      for (WriterThread t: writerThreads) {
        try {
          t.join();
        } catch (InterruptedException ie) {
          throw new IOException(ie);
        }
        checkForErrors();
      }
      LOG.info("Split writers finished");
      
      return closeStreams();
{code}
Inside checkForErrors
{code}
  private void checkForErrors() throws IOException {
    Throwable thrown = this.thrown.get();
    if (thrown == null) return;
    if (thrown instanceof IOException) {
      throw (IOException)thrown;
    } else {
      throw new RuntimeException(thrown);
    }
  }
{code}
So once checkForErrors() throws, closeStreams() is never reached and the DFSClient DataStreamer threads are never closed.
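To make the failure mode concrete, here is a minimal, self-contained sketch (hypothetical names, not the HBase code): if the error check throws before the close call runs, the streams leak unless the close is moved into a finally block.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SplitCloseSketch {
    static final List<Closeable> openStreams = new ArrayList<>();
    static boolean writersClosed = false;

    // Stand-in for checkForErrors(): rethrows the error a writer thread
    // recorded, simulated here as an unconditional failure.
    static void checkForErrors() throws IOException {
        throw new IOException("simulated writer-thread failure");
    }

    // Close every writer stream exactly once, swallowing close() errors.
    static void closeLogWriters() {
        if (writersClosed) return;
        for (Closeable c : openStreams) {
            try { c.close(); } catch (IOException ignored) { }
        }
        writersClosed = true;
    }

    // The fix direction: check for errors inside try, close in finally,
    // so the streams are released on every path, error or not.
    static void splitAndClose() throws IOException {
        try {
            checkForErrors();
        } finally {
            closeLogWriters();
        }
    }

    public static void main(String[] args) {
        try {
            splitAndClose();
        } catch (IOException expected) {
            // the original error still propagates to the caller
        }
        System.out.println("writersClosed = " + writersClosed);
    }
}
```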


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Reopened] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Zhihong Yu (Reopened) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhihong Yu reopened HBASE-5235:
-------------------------------


Patch should be integrated to 0.92 branch as well.
                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6
>
>         Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, HBASE-5235_0.90_2.patch, HBASE-5235_trunk.patch


[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13189853#comment-13189853 ] 

ramkrishna.s.vasudevan commented on HBASE-5235:
-----------------------------------------------

I think that on error we should only close the streams, not call closeStreams(), because closeStreams() also performs a number of other steps that complete the log split process.
Also, if even one WriterThread has an exception, can we completely abort the master, as we do for any failure of splitLog()? Please suggest.
                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6


[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Zhihong Yu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13189987#comment-13189987 ] 

Zhihong Yu commented on HBASE-5235:
-----------------------------------

How about introducing a boolean, logWritersClosed (similar to hasClosed)?
We would separate the above for loop into a new private method, closeLogWriters(), which checks logWritersClosed on entry and sets it to true just before exiting after a successful close.
closeStreams() calls closeLogWriters().
We also place closeLogWriters() in the finally block of finishWritingAndClose().

Is the above close to what you were thinking, Ram?
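A rough sketch of that guard (hypothetical names, not the committed patch): closeLogWriters() checks the flag on entry, so calling it from both closeStreams() and the finally block of finishWritingAndClose() closes the writers only once.

```java
public class CloseGuardSketch {
    static int closesPerformed = 0;          // stand-in for the real close work
    static boolean logWritersClosed = false;

    // Checked on entry; set to true only after a successful close, so a
    // second call (from whichever path runs later) is a harmless no-op.
    static void closeLogWriters() {
        if (logWritersClosed) return;
        closesPerformed++;                   // close each writer stream here
        logWritersClosed = true;
    }

    // closeStreams() delegates to closeLogWriters() before its other steps.
    static void closeStreams() {
        closeLogWriters();
        // ... remaining steps that complete the log split ...
    }

    public static void main(String[] args) {
        closeLogWriters();                   // e.g. from a finally block on error
        closeStreams();                      // and again on the normal path
        System.out.println("closesPerformed = " + closesPerformed);
    }
}
```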
                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6


[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-5235:
------------------------------------------

    Attachment: HBASE-5235_0.90_1.patch

Addressing Ted's comments. Note that logWriter.values() will now be iterated twice.
                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6
>
>         Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch


[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190326#comment-13190326 ] 

ramkrishna.s.vasudevan commented on HBASE-5235:
-----------------------------------------------

Yes, this is what I was thinking too. I will upload a patch.

                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6


[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-5235:
------------------------------------------

    Summary: HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.  (was: HLogSplitter writer threads not getting closed when any of the writer threads has exceptions.)
    
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6


[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Zhihong Yu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190439#comment-13190439 ] 

Zhihong Yu commented on HBASE-5235:
-----------------------------------

The rationale for my comment above is that in patch v1, the following assignment may be skipped if a runtime exception is thrown in the try block at line 801:
{code}
+      logWritersClosed = true;
{code}
The code would be cleaner if logWritersClosed reflected only the status of closing the log writers.
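A tiny sketch of the pitfall being described (simplified, hypothetical names): when the flag is assigned at the end of a larger try block, a runtime exception from a later step skips the assignment even though the writers were in fact closed, leaving the flag stale.

```java
public class FlagPlacementSketch {
    static boolean logWritersClosed = false;

    // Patch-v1 shape (simplified): the writers are closed first, but the
    // flag is set later in the same try block, after steps that may throw.
    static void closeThenFinish() {
        try {
            // ... close log writers here ...
            otherFinishingStep();            // may throw RuntimeException
            logWritersClosed = true;         // skipped when it does
        } catch (RuntimeException e) {
            // propagated in the real code; the flag no longer matches reality
        }
    }

    static void otherFinishingStep() {
        throw new RuntimeException("unrelated later step fails");
    }

    public static void main(String[] args) {
        closeThenFinish();
        // The writers were closed, yet the flag still says they were not:
        System.out.println("logWritersClosed = " + logWritersClosed);
    }
}
```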
                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6
>
>         Attachments: HBASE-5235_0.90.patch


[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Hadoop QA (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190502#comment-13190502 ] 

Hadoop QA commented on HBASE-5235:
----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12511401/HBASE-5235_trunk.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    -1 javadoc.  The javadoc tool appears to have generated -145 warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    -1 findbugs.  The patch appears to introduce 82 new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

     -1 core tests.  The patch failed these unit tests:
                       org.apache.hadoop.hbase.mapreduce.TestImportTsv
                  org.apache.hadoop.hbase.mapred.TestTableMapReduce
                  org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/827//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/827//artifact/trunk/patchprocess/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/827//console

This message is automatically generated.
                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6
>
>         Attachments: HBASE-5235_0.90.patch, HBASE-5235_0.90_1.patch, HBASE-5235_trunk.patch


[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-5235:
------------------------------------------

    Attachment: HBASE-5235_0.90.patch

Patch for 0.90. If this patch is fine, I will prepare a similar patch for 0.92.
                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6
>
>         Attachments: HBASE-5235_0.90.patch


[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-5235:
------------------------------------------

    Attachment: HBASE-5235_trunk.patch

Patch for trunk.  
                

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Hudson (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191986#comment-13191986 ] 

Hudson commented on HBASE-5235:
-------------------------------

Integrated in HBase-0.92-security #88 (See [https://builds.apache.org/job/HBase-0.92-security/88/])
    HBASE-5235 HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions. (Ram)

ramkrishna : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java

                

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Zhihong Yu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190438#comment-13190438 ] 

Zhihong Yu commented on HBASE-5235:
-----------------------------------

{code}
+      // close the log writer streams only if they are not closed
+      // in closeStreams().
+      if (!closeCompleted && !logWritersClosed) {
{code}
Do we need to check closeCompleted here? It is set only after logWritersClosed is set to true.

I think closeAndCleanupCompleted would be a better name for hasClosed.
The following line should be in closeLogWriters():
{code}
+      logWritersClosed = true;
{code}

If I were you, I would put the loop from line 789 to 798 into closeLogWriters() and let closeStreams() call closeLogWriters().
closeLogWriters() would then iterate over logWriters.values() in a for loop.
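The restructuring suggested above can be sketched as follows (method and field names taken from the review comment; a simplified, hypothetical shape, not the actual patch):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class OutputSink {
    final Map<String, Closeable> logWriters = new HashMap<>();
    boolean logWritersClosed = false;

    // closeStreams() delegates the actual writer shutdown, so there is a
    // single place that closes writers.
    List<IOException> closeStreams() {
        return closeLogWriters();
    }

    // All writer closing lives here, and the guard flag is set in the same
    // method, so no caller can forget it and double-closing is avoided.
    List<IOException> closeLogWriters() {
        List<IOException> errors = new ArrayList<>();
        if (logWritersClosed) {
            return errors;          // already closed; nothing to do
        }
        for (Closeable w : logWriters.values()) {
            try {
                w.close();
            } catch (IOException e) {
                errors.add(e);      // collect, but keep closing the rest
            }
        }
        logWritersClosed = true;
        return errors;
    }
}
```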
                

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Hudson (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190883#comment-13190883 ] 

Hudson commented on HBASE-5235:
-------------------------------

Integrated in HBase-TRUNK-security #85 (See [https://builds.apache.org/job/HBase-TRUNK-security/85/])
    HBASE-5235 HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.(Ram)

ramkrishna : 
Files : 
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java

                

        

[jira] [Resolved] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Resolved) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan resolved HBASE-5235.
-------------------------------------------

    Resolution: Fixed

Committed to 0.92, trunk and 0.90
                

        

[jira] [Issue Comment Edited] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Issue Comment Edited) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190326#comment-13190326 ] 

ramkrishna.s.vasudevan edited comment on HBASE-5235 at 1/21/12 11:30 AM:
-------------------------------------------------------------------------

Yes, this is what I was thinking too. Will upload a patch.

                
      was (Author: ram_krish):
    Yes.  This what i was thinking too.  Will upload a patch

                  

        

[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-5235:
------------------------------------------

    Attachment: HBASE-5235_0.90_2.patch

Updated patch for 0.90 addressing Ted's comments. The trunk patch already incorporates them.
                

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Hadoop QA (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190666#comment-13190666 ] 

Hadoop QA commented on HBASE-5235:
----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12511427/HBASE-5235_0.90_2.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    -1 patch.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/833//console

This message is automatically generated.
                

        

[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-5235:
------------------------------------------

      Resolution: Fixed
    Hadoop Flags: Reviewed
          Status: Resolved  (was: Patch Available)

Committed to 0.90 and trunk.  
Thanks for the review Ted.
                

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Zhihong Yu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13189961#comment-13189961 ] 

Zhihong Yu commented on HBASE-5235:
-----------------------------------

closeStreams() records all the IOExceptions and throws a MultipleIOException before exiting.
I think the simplest solution is to wrap closeStreams() in a finally block in finishWritingAndClose().

Thanks for reporting this issue, Ram.
                
> HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5235
>                 URL: https://issues.apache.org/jira/browse/HBASE-5235
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 0.92.0, 0.90.5
>            Reporter: ramkrishna.s.vasudevan
>            Assignee: ramkrishna.s.vasudevan
>             Fix For: 0.92.1, 0.90.6
>
>
> Pls find the analysis.  Correct me if am wrong
> {code}
> 2012-01-15 05:14:02,374 FATAL org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: WriterThread-9 Got while writing log entry to log
> java.io.IOException: All datanodes 10.18.40.200:50010 are bad. Aborting...
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:3373)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2811)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3026)
> {code}
> Here we have an exception in one of the writer threads. If any exception we try to hold it in an Atomic variable 
> {code}
>   private void writerThreadError(Throwable t) {
>     thrown.compareAndSet(null, t);
>   }
> {code}
> In the finally block of splitLog we try to close the streams.
> {code}
>       for (WriterThread t: writerThreads) {
>         try {
>           t.join();
>         } catch (InterruptedException ie) {
>           throw new IOException(ie);
>         }
>         checkForErrors();
>       }
>       LOG.info("Split writers finished");
>       
>       return closeStreams();
> {code}
> Inside checkForErrors
> {code}
>   private void checkForErrors() throws IOException {
>     Throwable thrown = this.thrown.get();
>     if (thrown == null) return;
>     if (thrown instanceof IOException) {
>       throw (IOException)thrown;
>     } else {
>       throw new RuntimeException(thrown);
>     }
>   }
> {code}
> So once checkForErrors throws, splitLog returns without closing the streams, and the underlying DataStreamer threads are never shut down.
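The fix this issue is driving at (close the writer streams even when a writer thread has failed) can be sketched as follows. This is a minimal illustration with made-up class names (SplitSketch, Writer), not the actual HLogSplitter patch: the point is only that the close loop must sit in a finally block so the error check cannot skip it.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SplitSketch {
    // Stand-in for a recovered.edits writer; records whether close() ran.
    static class Writer implements Closeable {
        boolean closed = false;
        public void close() { closed = true; }
    }

    static final List<Writer> logWriters = new ArrayList<>();
    // Simulates the error stashed by writerThreadError().
    static final Throwable thrown =
        new IOException("All datanodes are bad. Aborting...");

    static void checkForErrors() throws IOException {
        if (thrown instanceof IOException) throw (IOException) thrown;
        throw new RuntimeException(thrown);
    }

    // The key change: checkForErrors() may throw, but the finally
    // block still closes every writer before the exception propagates.
    static void splitLog() throws IOException {
        try {
            checkForErrors();
        } finally {
            for (Writer w : logWriters) {
                try {
                    w.close();
                } catch (RuntimeException e) {
                    // log and keep closing the remaining writers
                }
            }
        }
    }

    public static void main(String[] args) {
        logWriters.add(new Writer());
        logWriters.add(new Writer());
        boolean sawError = false;
        try {
            splitLog();
        } catch (IOException expected) {
            sawError = true;   // the original error is still surfaced
        }
        if (!sawError) throw new AssertionError("expected IOException");
        for (Writer w : logWriters) {
            if (!w.closed) throw new AssertionError("writer leaked");
        }
        System.out.println("all writers closed");
    }
}
```

The original exception still reaches the caller; the finally block only guarantees the streams are released first.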

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Hudson (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190714#comment-13190714 ] 

Hudson commented on HBASE-5235:
-------------------------------

Integrated in HBase-TRUNK #2644 (See [https://builds.apache.org/job/HBase-TRUNK/2644/])
    HBASE-5235 HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.(Ram)

ramkrishna : 
Files : 
* /hbase/trunk/CHANGES.txt
* /hbase/trunk/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java

                

        

[jira] [Updated] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Updated) (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ramkrishna.s.vasudevan updated HBASE-5235:
------------------------------------------

    Status: Patch Available  (was: Open)
    

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Hudson (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13191359#comment-13191359 ] 

Hudson commented on HBASE-5235:
-------------------------------

Integrated in HBase-0.92 #257 (See [https://builds.apache.org/job/HBase-0.92/257/])
    HBASE-5235 HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions. (Ram)

ramkrishna : 
Files : 
* /hbase/branches/0.92/CHANGES.txt
* /hbase/branches/0.92/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java

                

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "ramkrishna.s.vasudevan (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13189969#comment-13189969 ] 

ramkrishna.s.vasudevan commented on HBASE-5235:
-----------------------------------------------

But Ted, closeStreams() also renames the .temp files in recovered.edits. So we should only do the following:
{code}
     for (WriterAndPath wap : logWriters.values()) {
        try {
          wap.w.close();
        } catch (IOException ioe) {
          LOG.error("Couldn't close log at " + wap.p, ioe);
          thrown.add(ioe);
          continue;
        }
{code} 
and make the master abort so that the subsequent split can parse the HLog. Correct me if I am wrong.
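The close-everything-and-collect-failures loop quoted in this comment can be sketched in isolation like this (Closeable stands in for WriterAndPath, and the names are illustrative, not the actual HBase code): a failed close() is remembered but does not stop the remaining writers from being closed.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CloseAllSketch {
    // Close every writer; collect failures instead of aborting the loop,
    // mirroring the thrown.add(ioe); continue; pattern in closeStreams().
    static List<IOException> closeAll(List<? extends Closeable> writers) {
        List<IOException> thrown = new ArrayList<>();
        for (Closeable w : writers) {
            try {
                w.close();
            } catch (IOException ioe) {
                thrown.add(ioe);   // remember the failure, keep closing
            }
        }
        return thrown;
    }

    public static void main(String[] args) {
        List<Closeable> writers = new ArrayList<>();
        writers.add(() -> { throw new IOException("close failed"); });
        writers.add(() -> { /* closes cleanly */ });
        List<IOException> errors = closeAll(writers);
        if (errors.size() != 1) throw new AssertionError("expected 1 failure");
        System.out.println("failures collected: " + errors.size());
    }
}
```

The caller can then decide, from the collected failures, whether to abort (as suggested above) or to proceed with renaming the recovered.edits files.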
                

        

[jira] [Commented] (HBASE-5235) HLogSplitter writer thread's streams not getting closed when any of the writer threads has exceptions.

Posted by "Zhihong Yu (Commented) (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/HBASE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13190481#comment-13190481 ] 

Zhihong Yu commented on HBASE-5235:
-----------------------------------

+1 on patch v2.
Minor comments which can be addressed at time of commit:
{code}
+        LOG.info("Closed path " + wap.p + " (wrote " + wap.editsWritten
+            + " edits in " + (wap.nanosSpent / 1000 / 1000) + "ms)");
{code}
I think the above should be inside closeLogWriters().
{code}
+      // close the log writer streams only if they are not closed
+      // in closeStreams().
{code}
Since closeLogWriters() is called inside closeStreams(), the above comment can be removed.

Please run through test suite.
                