Posted to dev@drill.apache.org by GitBox <gi...@apache.org> on 2021/03/07 00:39:44 UTC

[GitHub] [drill] cgivre commented on a change in pull request #2186: DRILL-7874: Ensure DrillFSDataInputStream.read populates byte array of the requested length

cgivre commented on a change in pull request #2186:
URL: https://github.com/apache/drill/pull/2186#discussion_r588951587



##########
File path: exec/java-exec/src/main/java/org/apache/drill/exec/store/dfs/DrillFileSystem.java
##########
@@ -788,51 +785,18 @@ public void removeXAttr(final Path path, final String name) throws IOException {
 
   /**
    * Returns an InputStream from a Hadoop path. If the data is compressed, this method will return a compressed
-   * InputStream depending on the codec.  Note that if the results of this method are sent to a third party parser
-   * that works with bytes or individual characters directly, you should use the openDecompressedInputStream method.
+   * InputStream depending on the codec.
    * @param path Input file path
    * @return InputStream of opened file path
    * @throws IOException If the file is unreachable, unavailable or otherwise unreadable
    */
   public InputStream openPossiblyCompressedStream(Path path) throws IOException {
-    CompressionCodec codec = codecFactory.getCodec(path); // infers from file ext.
+    CompressionCodec codec = getCodec(path); // infers from file ext.
+    InputStream inputStream = open(path);

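The diff above shows the pattern under review: infer a compression codec from the file extension, open the raw stream, and (when a codec is found) wrap the stream in a decompressing one. As a hedged illustration of that pattern only, the sketch below mimics it with the JDK's `java.util.zip` in place of Hadoop's `CompressionCodecFactory`; the `openPossiblyCompressed` helper and its extension check are hypothetical stand-ins, not Drill's actual implementation.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class CompressedOpenDemo {

  // Hypothetical sketch of the openPossiblyCompressedStream idea:
  // pick a decompressor from the file extension, otherwise return
  // the raw stream unchanged. Drill uses Hadoop codecs instead.
  static InputStream openPossiblyCompressed(String path, InputStream raw)
      throws IOException {
    if (path.endsWith(".gz")) {
      return new GZIPInputStream(raw);
    }
    return raw;
  }

  public static void main(String[] args) throws IOException {
    // Build a small gzip-compressed payload in memory.
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write("hello".getBytes("UTF-8"));
    }

    // The ".gz" suffix selects the decompressing wrapper.
    InputStream in = openPossiblyCompressed(
        "data.pcap.gz", new ByteArrayInputStream(bos.toByteArray()));

    // readAllBytes avoids the short-read pitfall discussed in this PR.
    System.out.println(new String(in.readAllBytes(), "UTF-8"));
  }
}
```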
Review comment:
       @vvysotskyi 
   This looks good. Did you test it on S3 with a PCAP file to confirm that this works?
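For context on the PR title (DRILL-7874, ensuring the read populates a byte array of the requested length): a single `InputStream.read(byte[], int, int)` call is allowed to return fewer bytes than requested, which is common on network-backed streams such as S3. The sketch below, with a hypothetical `readFully` helper not taken from Drill's code, shows the standard loop that keeps reading until the requested length is filled or EOF is hit.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyDemo {

  // Hypothetical helper: loop until len bytes are read or EOF,
  // since one read() call may legally return fewer bytes.
  static int readFully(InputStream in, byte[] buf, int off, int len)
      throws IOException {
    int total = 0;
    while (total < len) {
      int n = in.read(buf, off + total, len - total);
      if (n < 0) {
        break; // EOF before the buffer was filled
      }
      total += n;
    }
    return total;
  }

  public static void main(String[] args) throws IOException {
    byte[] data = new byte[10];
    for (int i = 0; i < data.length; i++) {
      data[i] = (byte) i;
    }

    // Simulate a short-reading stream: at most 3 bytes per call,
    // like a socket-backed stream might behave.
    InputStream in = new ByteArrayInputStream(data) {
      @Override
      public synchronized int read(byte[] b, int off, int len) {
        return super.read(b, off, Math.min(len, 3));
      }
    };

    byte[] buf = new byte[10];
    int read = readFully(in, buf, 0, buf.length);
    System.out.println("read " + read + " bytes");
  }
}
```

A naive single call to `in.read(buf)` against this stream would report only 3 bytes, leaving the rest of the array unpopulated, which is exactly the class of bug the PR addresses.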




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org