Posted to dev@lucene.apache.org by Uwe Schindler <uw...@thetaphi.de> on 2013/08/14 14:17:26 UTC

RE: svn commit: r1513825 - in /lucene/dev/trunk/solr: CHANGES.txt core/src/java/org/apache/solr/store/hdfs/HdfsDirectory.java

Hi,

I am not sure this is a good idea; unfortunately, I had not seen it earlier:

>      @Override
>      public void close() throws IOException {
> +      try {
> +        super.close();
> +      } catch (Throwable t) {
> +        LOG.error("Error while closing", t);
> +      }
>        writer.close();
>      }

super.close() writes the final, not-yet-written buffer to disk. If an error occurs there, it is completely swallowed, so IndexWriter would think the data was written and the commit would not fail! Closing the HDFS writer afterwards will almost never fail, so this amounts to a completely swallowed write error. It is worse than before because BufferedIndexOutput differs from the old code: its close() actually does something! Also, catching Throwable is not a good idea (think of ThreadInterruptedException!).
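
To make the failure mode concrete, here is a minimal, self-contained sketch (plain java.io, hypothetical names, not the actual Solr/HDFS classes) of what swallowing the exception from close() means when close() is the call that performs the final flush:

    import java.io.ByteArrayOutputStream;
    import java.io.FilterOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class SwallowedCloseDemo {
      // Stand-in for a buffered output whose close() performs the final flush.
      static class FlakyBufferedOutput extends FilterOutputStream {
        FlakyBufferedOutput(OutputStream out) { super(out); }
        @Override
        public void close() throws IOException {
          // Pretend the final flush fails (disk full, datanode gone, ...).
          throw new IOException("final flush failed");
        }
      }

      public static void main(String[] args) {
        OutputStream out = new FlakyBufferedOutput(new ByteArrayOutputStream());
        try {
          out.close();                    // the real write error happens here...
        } catch (Throwable t) {
          System.err.println("logged and ignored: " + t);  // ...but is only logged
        }
        // Execution continues as if the data were safely on disk: this is the
        // silent data loss described above.
        System.out.println("commit proceeds although the last buffer was lost");
      }
    }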

It would be better to clone the code from FSIndexOutput:

import org.apache.lucene.util.IOUtils;

@Override
public void close() throws IOException {
  IOException priorE = null;
  try {
    super.close();              // flushes the final buffer; may fail
  } catch (IOException ioe) {
    priorE = ioe;               // remember the write error instead of swallowing it
  } finally {
    // always close the HDFS writer; rethrow priorE (if any) afterwards
    IOUtils.closeWhileHandlingException(priorE, writer);
  }
}
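
For anyone who has not used the helper: IOUtils.closeWhileHandlingException(priorE, writer) always closes the writer and then rethrows priorE if it is non-null; if priorE is null, an exception thrown by writer.close() propagates instead. Roughly, as a sketch that ignores the varargs and suppressed-exception handling the real helper does:

    try {
      writer.close();
    } catch (IOException e) {
      if (priorE == null) {
        throw e;        // no earlier failure: surface the close error
      }
      // otherwise drop e so that the original write error wins
    }
    if (priorE != null) {
      throw priorE;     // re-throw the error from super.close()
    }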

Uwe


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org