Posted to dev@hudi.apache.org by aakash aakash <em...@gmail.com> on 2020/04/30 05:30:53 UTC

Table name is not respected while inserting records with a different table name in Append mode

Hi,

While running commands from the Hudi quick start guide, I found that the
library does not check the table name in the write request against the
table name recorded in the metadata at the base path. I think it should
throw a TableAlreadyExist exception; with save mode Overwrite it at least
warns.

spark-2.4.4-bin-hadoop2.7/bin/spark-shell --packages
org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating,org.apache.spark:spark-avro_2.11:2.4.4
 --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'

scala> df.write.format("hudi").
     |     options(getQuickstartWriteConfigs).
     |     option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |     option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |     option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |     option(TABLE_NAME, "test_table").
     |     mode(Append).
     |     save(basePath)
20/04/29 17:23:42 WARN DefaultSource: Snapshot view not supported yet via
data source, for MERGE_ON_READ tables. Please query the Hive table
registered using Spark SQL.

scala>

No exception is thrown if we run this either:

scala> df.write.format("hudi").
     |     options(getQuickstartWriteConfigs).
     |     option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |     option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |     option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |     option(TABLE_NAME, "foo_table").
     |     mode(Append).
     |     save(basePath)
20/04/29 17:24:37 WARN DefaultSource: Snapshot view not supported yet via
data source, for MERGE_ON_READ tables. Please query the Hive table
registered using Spark SQL.

scala>
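
As a sanity check, the name the table was actually created with can be read
back from the metadata under the base path (the .hoodie/hoodie.properties
file). A minimal way to look, assuming the quick start's local base path
/tmp/hudi_trips_cow:

import java.io.FileInputStream
import java.util.Properties

// Load the table metadata Hudi wrote at creation time.
val props = new Properties()
props.load(new FileInputStream("/tmp/hudi_trips_cow/.hoodie/hoodie.properties"))
// Should print the original table name (hudi_trips_cow from the quick start),
// not "test_table" or "foo_table".
println(props.getProperty("hoodie.table.name"))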


scala> df.write.format("hudi").
     |   options(getQuickstartWriteConfigs).
     |   option(PRECOMBINE_FIELD_OPT_KEY, "ts").
     |   option(RECORDKEY_FIELD_OPT_KEY, "uuid").
     |   option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
     |   option(TABLE_NAME, tableName).
     |   mode(Overwrite).
     |   save(basePath)
20/04/29 22:25:16 WARN HoodieSparkSqlWriter$: hoodie table at
file:/tmp/hudi_trips_cow already exists. Deleting existing data &
overwriting with new data.
20/04/29 22:25:18 WARN DefaultSource: Snapshot view not supported yet via
data source, for MERGE_ON_READ tables. Please query the Hive table
registered using Spark SQL.

scala>
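
Concretely, the kind of check I would expect on Append is a comparison
against that recorded name before writing. A rough sketch (the method name,
placement, and exception type are illustrative only, not actual
HoodieSparkSqlWriter code; it also assumes a local filesystem base path):

import java.io.FileInputStream
import java.util.Properties

def validateTableName(basePath: String, requestedName: String): Unit = {
  // Read the name recorded when the table was created.
  val props = new Properties()
  props.load(new FileInputStream(s"$basePath/.hoodie/hoodie.properties"))
  val existingName = props.getProperty("hoodie.table.name")
  // Fail fast instead of silently appending under a different table name.
  if (existingName != null && existingName != requestedName) {
    throw new IllegalArgumentException(
      s"Hoodie table '$existingName' already exists at $basePath, " +
        s"but the write request used table name '$requestedName'.")
  }
}

Whether this should be a dedicated TableAlreadyExist exception is up for
discussion.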


Regards,
Aakash

Re: Table name is not respected while inserting records with a different table name in Append mode

Posted by aakash aakash <em...@gmail.com>.
My Jira username is: aakashpradeep

Thanks,
Aakash

Re: Table name is not respected while inserting records with a different table name in Append mode

Posted by aakash aakash <em...@gmail.com>.
Thanks, Sudha. Please assign it to me.

Regards,
Aakash

Re: Table name is not respected while inserting records with a different table name in Append mode

Posted by Bhavani Sudha <bh...@gmail.com>.
Thanks for reporting this, Aakash. I created a Jira to track it:
https://jira.apache.org/jira/browse/HUDI-852 . Feel free to take a stab if
interested. Let me know so I can re-assign it to you.

Thanks,
Sudha