Posted to dev@hive.apache.org by "Sergio Peña (JIRA)" <ji...@apache.org> on 2016/11/14 20:31:58 UTC

[jira] [Created] (HIVE-15199) INSERT INTO data on S3 is replacing the old rows with the new ones

Sergio Peña created HIVE-15199:
----------------------------------

             Summary: INSERT INTO data on S3 is replacing the old rows with the new ones
                 Key: HIVE-15199
                 URL: https://issues.apache.org/jira/browse/HIVE-15199
             Project: Hive
          Issue Type: Bug
          Components: Hive
            Reporter: Sergio Peña
            Assignee: Sergio Peña
            Priority: Critical


Any INSERT INTO statement run against an S3 table deletes the table's existing rows when the scratch directory is also on S3, instead of appending the new rows.

{noformat}
hive> set hive.blobstore.use.blobstore.as.scratchdir=true;

hive> create table t1 (id int, name string) location 's3a://spena-bucket/t1';

hive> insert into table t1 values (1,'name1');

hive> select * from t1;
1       name1

hive> insert into table t1 values (2,'name2');

hive> select * from t1;
2       name2
{noformat}
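
Until this is fixed, one possible workaround (an assumption based on the flag used in the reproduction above, not a verified fix) is to revert the scratch directory to its default off-S3 location, so that only the final table data lives on S3:

{noformat}
-- Assumption: with the blobstore scratch-dir flag disabled, intermediate
-- files are staged on HDFS and the move into the S3 table should append
-- rather than replace. The table location itself can remain on S3.
hive> set hive.blobstore.use.blobstore.as.scratchdir=false;

hive> insert into table t1 values (3,'name3');
{noformat}
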



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)