Posted to issues@hive.apache.org by "Sergio Peña (JIRA)" <ji...@apache.org> on 2016/08/04 16:31:20 UTC
[jira] [Updated] (HIVE-14270) Write temporary data to HDFS when doing inserts on tables located on S3
[ https://issues.apache.org/jira/browse/HIVE-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sergio Peña updated HIVE-14270:
-------------------------------
Attachment: HIVE-14270.4.patch
Attaching a new patch to run another set of tests.
[~ashutoshc] I removed the duplication of rename() and created HDFS scratch directories instead. It is simpler than the previous code and less error-prone. Could you help me review it?
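To illustrate the idea behind the patch (this is a hedged sketch, not the actual patch code; the helper name and blobstore-scheme list are assumptions): intermediate data can be routed to the default filesystem whenever the target table lives on a blobstore such as S3, where rename is implemented as copy-and-delete and is therefore slow.

```java
import java.net.URI;

public class ScratchDirChooser {
    // Hypothetical helper: decide whether intermediate (scratch) data
    // should go to the default filesystem (e.g. HDFS) instead of the
    // filesystem hosting the target table.
    static boolean useDefaultFsScratchDir(URI tableLocation, String defaultFsScheme) {
        String scheme = tableLocation.getScheme();
        // Blobstore schemes where rename is a copy and thus expensive.
        boolean isBlobstore = "s3".equals(scheme)
                || "s3a".equals(scheme)
                || "s3n".equals(scheme);
        // Only redirect when the default FS is actually a different store.
        return isBlobstore && !defaultFsScheme.equals(scheme);
    }

    public static void main(String[] args) {
        // Table on S3, default FS is HDFS: keep scratch data on HDFS.
        System.out.println(useDefaultFsScratchDir(
                URI.create("s3a://bucket/warehouse/t"), "hdfs")); // true
        // Table already on HDFS: no redirection needed.
        System.out.println(useDefaultFsScratchDir(
                URI.create("hdfs://nn/warehouse/t"), "hdfs")); // false
    }
}
```

The final write is then a single copy from the HDFS scratch directory to S3, instead of many S3-to-S3 renames of intermediate files.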
> Write temporary data to HDFS when doing inserts on tables located on S3
> -----------------------------------------------------------------------
>
> Key: HIVE-14270
> URL: https://issues.apache.org/jira/browse/HIVE-14270
> Project: Hive
> Issue Type: Sub-task
> Reporter: Sergio Peña
> Assignee: Sergio Peña
> Attachments: HIVE-14270.1.patch, HIVE-14270.2.patch, HIVE-14270.3.patch, HIVE-14270.4.patch
>
>
> Currently, when doing INSERT statements on tables located on S3, Hive writes and reads temporary (or intermediate) files to S3 as well.
> If HDFS is still the default filesystem in Hive, then we can keep such temporary files on HDFS so that these operations run faster.
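For context, later Hive releases expose this behavior through a configuration property; assuming the property introduced by this work, a deployment that keeps scratch data off the blobstore would look roughly like this hive-site.xml fragment (property name and default taken from Hive's configuration documentation, not from this patch):

```
<property>
  <name>hive.blobstore.use.blobstore.as.scratchdir</name>
  <!-- false: write intermediate data to the default FS (HDFS)
       even when the target table lives on S3 -->
  <value>false</value>
</property>
```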
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)