Posted to dev@hive.apache.org by Siddhi Mehta <sm...@gmail.com> on 2016/06/09 18:43:57 UTC

Hive Table Creation failure on Postgres

Hello Everyone,

We are using postgres for hive persistent store.

We are making use of the schematool to create the hive schema, and our
hive configs have table and column validation enabled.
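
By "validation enabled" I mean the DataNucleus validation settings in
hive-site.xml; a minimal sketch, assuming the standard property names
(they can differ between Hive versions):

<property>
  <name>datanucleus.validateTables</name>
  <value>true</value>
</property>
<property>
  <name>datanucleus.validateColumns</name>
  <value>true</value>
</property>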

While trying to create a simple hive table we ran into the following error.

Error: Error while processing statement: FAILED: Execution Error, return
code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
MetaException(message:javax.jdo.JDODataStoreException: Wrong precision for
column "*COLUMNS_V2"."COMMENT*" : was 4000 (according to the JDBC driver)
but should be 256 (based on the MetaData definition for field
org.apache.hadoop.hive.metastore.model.MFieldSchema.comment).

Looks like the Hive Metastore validation expects it to be 256, but when I
looked at the metastore script for Postgres, it creates the column with
precision 4000.

The interesting thing is that the MySQL scripts for the same Hive version
create the column with precision 255.

Is there a config to tell the Hive MetaStore validation layer what the
appropriate column precision should be, based on the underlying persistent
store, or is the known workaround to turn off validation when using
Postgres as the persistent store?
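
For what it's worth, the mismatch is visible directly in Postgres; a sketch
of a check query against the metastore database, using the table and column
names from the error above:

SELECT column_name, character_maximum_length
FROM information_schema.columns
WHERE table_name = 'COLUMNS_V2'
  AND column_name = 'COMMENT';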

Thanks,
Siddhi

Re: Hive Table Creation failure on Postgres

Posted by Siddhi Mehta <sm...@gmail.com>.
Ping to see if there is a JIRA filed around this, or if there is a
config-driven way to make the metastore aware of the underlying schema
created by different persistent stores.

On Fri, Jun 10, 2016 at 11:33 AM, Siddhi Mehta <sm...@gmail.com> wrote:

> Right, so MySQL and Oracle both set the column to 256 bytes.
> Any Postgres users who have seen this issue?
>
>
> HIVE-4921 <https://issues.apache.org/jira/browse/HIVE-4921> talks about
> upgrading the comment column to 4000 for 3 tables.
>
> Is this an inconsistency/bug in the Postgres schema creation script, or is
> there a config-driven way to ensure that the MetaData definition for the
> field picks up the correct size?
>
> Thanks,
> Siddhi
>
> On Thu, Jun 9, 2016 at 11:54 AM, Mich Talebzadeh <
> mich.talebzadeh@gmail.com> wrote:
>
>> Well I know that the script works fine for Oracle (both base and
>> transactional).
>>
>> OK, this is what this table looks like in Oracle. That column is 256 bytes.
>>
>> [image: Inline images 2]
>>
>>
>> HTH
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 9 June 2016 at 19:43, Siddhi Mehta <sm...@gmail.com> wrote:
>>
>>> Hello Everyone,
>>>
>>> We are using postgres for hive persistent store.
>>>
>>> We are making use of the schematool to create hive schema and our hive
>>> configs have table and column validation enabled.
>>>
>>> While trying to create a simple hive table we ran into the following
>>> error.
>>>
>>> Error: Error while processing statement: FAILED: Execution Error, return
>>> code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
>>> MetaException(message:javax.jdo.JDODataStoreException: Wrong precision
>>> for column "COLUMNS_V2"."COMMENT": was 4000 (according to the JDBC
>>> driver) but should be 256 (based on the MetaData definition for field
>>> org.apache.hadoop.hive.metastore.model.MFieldSchema.comment).
>>>
>>> Looks like the Hive Metastore validation expects it to be 256, but when I
>>> looked at the metastore script for Postgres, it creates the column with
>>> precision 4000.
>>>
>>> The interesting thing is that the MySQL scripts for the same Hive version
>>> create the column with precision 255.
>>>
>>> Is there a config to tell the Hive MetaStore validation layer what the
>>> appropriate column precision should be, based on the underlying
>>> persistent store, or is the known workaround to turn off validation when
>>> using Postgres as the persistent store?
>>>
>>> Thanks,
>>> Siddhi
>>>
>>
>>
>

Re: Hive Table Creation failure on Postgres

Posted by Siddhi Mehta <sm...@gmail.com>.
Right, so MySQL and Oracle both set the column to 256 bytes.
Any Postgres users who have seen this issue?


HIVE-4921 <https://issues.apache.org/jira/browse/HIVE-4921> talks about
upgrading the comment column to 4000 for 3 tables.

Is this an inconsistency/bug in the Postgres schema creation script, or is
there a config-driven way to ensure that the MetaData definition for the
field picks up the correct size?
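
In the meantime, the manual workaround we are considering is to alter the
Postgres column down to the size the model expects (256, per the error
message); a sketch only, assuming a backed-up metastore database, and note
it will fail if any existing comment is longer than 256 characters:

ALTER TABLE "COLUMNS_V2" ALTER COLUMN "COMMENT" TYPE character varying(256);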

Thanks,
Siddhi

On Thu, Jun 9, 2016 at 11:54 AM, Mich Talebzadeh <mi...@gmail.com>
wrote:

> Well I know that the script works fine for Oracle (both base and
> transactional).
>
> OK, this is what this table looks like in Oracle. That column is 256 bytes.
>
> [image: Inline images 2]
>
>
> HTH
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 9 June 2016 at 19:43, Siddhi Mehta <sm...@gmail.com> wrote:
>
>> Hello Everyone,
>>
>> We are using postgres for hive persistent store.
>>
>> We are making use of the schematool to create hive schema and our hive
>> configs have table and column validation enabled.
>>
>> While trying to create a simple hive table we ran into the following
>> error.
>>
>> Error: Error while processing statement: FAILED: Execution Error, return
>> code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
>> MetaException(message:javax.jdo.JDODataStoreException: Wrong precision
>> for column "COLUMNS_V2"."COMMENT": was 4000 (according to the JDBC
>> driver) but should be 256 (based on the MetaData definition for field
>> org.apache.hadoop.hive.metastore.model.MFieldSchema.comment).
>>
>> Looks like the Hive Metastore validation expects it to be 256, but when I
>> looked at the metastore script for Postgres, it creates the column with
>> precision 4000.
>>
>> The interesting thing is that the MySQL scripts for the same Hive version
>> create the column with precision 255.
>>
>> Is there a config to tell the Hive MetaStore validation layer what the
>> appropriate column precision should be, based on the underlying persistent
>> store, or is the known workaround to turn off validation when using
>> Postgres as the persistent store?
>>
>> Thanks,
>> Siddhi
>>
>
>

Re: Hive Table Creation failure on Postgres

Posted by Mich Talebzadeh <mi...@gmail.com>.
Well I know that the script works fine for Oracle (both base and
transactional).

OK, this is what this table looks like in Oracle. That column is 256 bytes.

[image: Inline images 2]
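
In case the inline image does not come through, the same information can be
pulled from the Oracle data dictionary; a sketch, run as the metastore
schema owner:

SELECT column_name, data_type, data_length
FROM user_tab_columns
WHERE table_name = 'COLUMNS_V2'
  AND column_name = 'COMMENT';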


HTH

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 9 June 2016 at 19:43, Siddhi Mehta <sm...@gmail.com> wrote:

> Hello Everyone,
>
> We are using postgres for hive persistent store.
>
> We are making use of the schematool to create hive schema and our hive
> configs have table and column validation enabled.
>
> While trying to create a simple hive table we ran into the following error.
>
> Error: Error while processing statement: FAILED: Execution Error, return
> code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
> MetaException(message:javax.jdo.JDODataStoreException: Wrong precision
> for column "COLUMNS_V2"."COMMENT": was 4000 (according to the JDBC
> driver) but should be 256 (based on the MetaData definition for field
> org.apache.hadoop.hive.metastore.model.MFieldSchema.comment).
>
> Looks like the Hive Metastore validation expects it to be 256, but when I
> looked at the metastore script for Postgres, it creates the column with
> precision 4000.
>
> The interesting thing is that the MySQL scripts for the same Hive version
> create the column with precision 255.
>
> Is there a config to tell the Hive MetaStore validation layer what the
> appropriate column precision should be, based on the underlying persistent
> store, or is the known workaround to turn off validation when using
> Postgres as the persistent store?
>
> Thanks,
> Siddhi
>