Posted to issues@impala.apache.org by "Dan Hecht (JIRA)" <ji...@apache.org> on 2017/05/27 01:03:04 UTC

[jira] [Created] (IMPALA-5383) Fix PARQUET_FILE_SIZE option for ADLS

Dan Hecht created IMPALA-5383:
---------------------------------

             Summary: Fix PARQUET_FILE_SIZE option for ADLS
                 Key: IMPALA-5383
                 URL: https://issues.apache.org/jira/browse/IMPALA-5383
             Project: IMPALA
          Issue Type: Bug
          Components: Backend
    Affects Versions: Impala 2.9.0
            Reporter: Dan Hecht
            Assignee: Sailesh Mukil
            Priority: Critical


The PARQUET_FILE_SIZE query option doesn't work with ADLS because AdlFileSystem has no notion of block sizes. Impala depends on the filesystem remembering the block size, which is then used as the target Parquet file size (this is done for HDFS so that the Parquet file size and block size match even if PARQUET_FILE_SIZE isn't a valid block size).

We should special-case ADLS, just as we do for S3, to bypass the FileSystem block size and instead use the requested PARQUET_FILE_SIZE as the output partition's block_size (and consequently the target Parquet file size) here:

{code:title=HdfsTableSink::CreateNewTmpFile()}
  if (IsS3APath(output_partition->current_file_name.c_str())) {
    // On S3A, the file cannot be stat'ed until after it's closed, and even so, the block
    // size reported will be just the filesystem default. So, remember the requested
    // block size.
    output_partition->block_size = block_size;
  } else {
    // HDFS may choose to override the block size that we've recommended, so for non-S3
    // files, we get the block size by stat-ing the file.
    hdfsFileInfo* info = hdfsGetPathInfo(output_partition->hdfs_connection,
        output_partition->current_file_name.c_str());
    if (info == nullptr) {
      return Status(GetHdfsErrorMsg("Failed to get info on temporary HDFS file: ",
          output_partition->current_file_name));
    }
    output_partition->block_size = info->mBlockSize;
    hdfsFreeFileInfo(info, 1);
  }
{code}
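A minimal sketch of the proposed change, pulled out of the sink for illustration. The {{IsADLSPath()}} helper and the "adl://" scheme check are assumptions (modeled on the existing {{IsS3APath()}} pattern); {{ChooseBlockSize()}} is a hypothetical stand-in for the branch inside {{HdfsTableSink::CreateNewTmpFile()}}, not Impala's actual API:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>

// Helpers mirroring Impala's IsS3APath(); the ADLS variant is an assumption
// based on the "adl://" URI scheme used by AdlFileSystem.
static bool HasScheme(const char* path, const char* scheme) {
  return std::strncmp(path, scheme, std::strlen(scheme)) == 0;
}
bool IsS3APath(const char* path) { return HasScheme(path, "s3a://"); }
bool IsADLSPath(const char* path) { return HasScheme(path, "adl://"); }

// Sketch of the proposed branch: for both S3A and ADLS, skip the
// hdfsGetPathInfo() stat and remember the requested block size directly;
// for real HDFS, keep the block size the filesystem actually reported.
int64_t ChooseBlockSize(const std::string& file_name,
                        int64_t requested_block_size,
                        int64_t stat_block_size) {
  if (IsS3APath(file_name.c_str()) || IsADLSPath(file_name.c_str())) {
    return requested_block_size;
  }
  return stat_block_size;
}
```

With this shape, an ADLS path keeps the requested PARQUET_FILE_SIZE as the target, while an HDFS path still defers to whatever block size {{hdfsGetPathInfo()}} reports.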

After this is fixed, we can re-enable {{test_insert_parquet_verify_size()}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)