Posted to user@hive.apache.org by Xiaobo Gu <gu...@gmail.com> on 2011/08/20 08:20:13 UTC
Hive 0.7.1 does not work with PostgreSQL 9.0.2
Hi,

I have just set up a PostgreSQL 9.0.2 server as the Hive 0.7.1 metastore, using the postgresql-9.0-801.jdbc4.jar JDBC driver. When I test the following HQL:

CREATE TABLE records (year STRING, temperature INT, quality INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';

the following error messages are thrown on the metastore service's standard error:
11/08/20 10:12:33 INFO metastore.HiveMetaStore: 1: create_table:
db=default tbl=records
11/08/20 10:12:33 INFO HiveMetaStore.audit: ugi=gpadmin
ip=/192.168.72.6 cmd=create_table: db=default tbl=records
11/08/20 10:12:33 INFO metastore.HiveMetaStore: 1: Opening raw store
with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
11/08/20 10:12:33 INFO metastore.ObjectStore: ObjectStore, initialize called
11/08/20 10:12:33 INFO metastore.ObjectStore: Initialized ObjectStore
11/08/20 10:12:33 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MSerDeInfo [Table : "SERDES", InheritanceStrategy : new-table]
11/08/20 10:12:33 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MStorageDescriptor [Table : "SDS", InheritanceStrategy : new-table]
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence of Class : org.apache.hadoop.hive.metastore.model.MTable [Table : "TBLS", InheritanceStrategy : new-table]
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
of Field : org.apache.hadoop.hive.metastore.model.MSerDeInfo.parameters
[Table : "SERDE_PARAMS"]
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
of Field : org.apache.hadoop.hive.metastore.model.MTable.parameters
[Table : "TABLE_PARAMS"]
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
of Field : org.apache.hadoop.hive.metastore.model.MTable.partitionKeys
[Table : "PARTITION_KEYS"]
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.bucketCols [Table : "BUCKETING_COLS"]
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.cols
[Table : "COLUMNS"]
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.parameters [Table : "SD_PARAMS"]
11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.sortCols
[Table : "SORT_COLS"]
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "SERDES"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 0 foreign key(s)
for table "SERDES"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 index(es) for
table "SERDES"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 unique key(s)
for table "TBLS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 foreign key(s)
for table "TBLS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 4 index(es) for table "TBLS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "SDS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
for table "SDS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for table "SDS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "SORT_COLS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
for table "SORT_COLS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
table "SORT_COLS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "TABLE_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
for table "TABLE_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
table "TABLE_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "SD_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
for table "SD_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
table "SD_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "SERDE_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
for table "SERDE_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
table "SERDE_PARAMS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "PARTITION_KEYS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
for table "PARTITION_KEYS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
table "PARTITION_KEYS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "COLUMNS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
for table "COLUMNS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
table "COLUMNS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
for table "BUCKETING_COLS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
for table "BUCKETING_COLS"
11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
table "BUCKETING_COLS"
11/08/20 10:12:34 INFO DataNucleus.MetaData: Listener found
initialisation for persistable class
org.apache.hadoop.hive.metastore.model.MSerDeInfo
11/08/20 10:12:34 INFO DataNucleus.MetaData: Listener found
initialisation for persistable class
org.apache.hadoop.hive.metastore.model.MStorageDescriptor
11/08/20 10:12:34 INFO DataNucleus.MetaData: Listener found
initialisation for persistable class
org.apache.hadoop.hive.metastore.model.MTable
11/08/20 10:12:34 INFO DataNucleus.MetaData: Listener found
initialisation for persistable class
org.apache.hadoop.hive.metastore.model.MFieldSchema
11/08/20 10:12:34 WARN Datastore.Persist: Insert of object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@46bb05de" using statement "INSERT INTO "SDS" ("SD_ID","NUM_BUCKETS","SERDE_ID","IS_COMPRESSED","OUTPUT_FORMAT","LOCATION","INPUT_FORMAT") VALUES (?,?,?,?,?,?,?)" failed : ERROR: column "IS_COMPRESSED" is of type bit but expression is of type boolean
Hint: You will need to rewrite or cast the expression.
Position: 129
11/08/20 10:12:34 INFO metastore.hivemetastoressimpl: deleting
hdfs://linuxsvr2/user/hive/warehouse/records
11/08/20 10:12:34 INFO metastore.hivemetastoressimpl: Deleted the
diretory hdfs://linuxsvr2/user/hive/warehouse/records
11/08/20 10:12:34 ERROR metastore.HiveMetaStore: JDO datastore error.
Retrying metastore command after 1000 ms (attempt 1 of 1)
11/08/20 10:12:35 WARN metastore.HiveMetaStore: Location:
hdfs://linuxsvr2/user/hive/warehouse/records specified for
non-external table:records
11/08/20 10:12:35 WARN Datastore.Persist: Insert of object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@7d1c19e6" using statement "INSERT INTO "SDS" ("SD_ID","NUM_BUCKETS","SERDE_ID","IS_COMPRESSED","OUTPUT_FORMAT","LOCATION","INPUT_FORMAT") VALUES (?,?,?,?,?,?,?)" failed : ERROR: column "IS_COMPRESSED" is of type bit but expression is of type boolean
Hint: You will need to rewrite or cast the expression.
Position: 129
11/08/20 10:12:35 INFO metastore.hivemetastoressimpl: deleting
hdfs://linuxsvr2/user/hive/warehouse/records
11/08/20 10:12:35 INFO metastore.hivemetastoressimpl: Deleted the
diretory hdfs://linuxsvr2/user/hive/warehouse/records
11/08/20 10:12:35 ERROR api.ThriftHiveMetastore$Processor: Internal error processing create_table
javax.jdo.JDODataStoreException: Insert of object "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@7d1c19e6" using statement "INSERT INTO "SDS" ("SD_ID","NUM_BUCKETS","SERDE_ID","IS_COMPRESSED","OUTPUT_FORMAT","LOCATION","INPUT_FORMAT") VALUES (?,?,?,?,?,?,?)" failed : ERROR: column "IS_COMPRESSED" is of type bit but expression is of type boolean
Hint: You will need to rewrite or cast the expression.
Position: 129
at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:313)
at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:660)
at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:680)
at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:606)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:924)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:109)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:945)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:942)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table(HiveMetaStore.java:942)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table.process(ThriftHiveMetastore.java:5297)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor.process(ThriftHiveMetastore.java:4789)
at org.apache.hadoop.hive.metastore.HiveMetaStore$TLoggingProcessor.process(HiveMetaStore.java:3167)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
NestedThrowablesStackTrace:
org.postgresql.util.PSQLException: ERROR: column "IS_COMPRESSED" is of
type bit but expression is of type boolean
Hint: You will need to rewrite or cast the expression.
Position: 129
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2102)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1835)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:388)
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:334)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:396)
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:406)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:146)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:121)
at org.datanucleus.jdo.state.JDOStateManagerImpl.internalMakePersistent(JDOStateManagerImpl.java:3275)
at org.datanucleus.jdo.state.JDOStateManagerImpl.makePersistent(JDOStateManagerImpl.java:3249)
at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1428)
at org.datanucleus.store.mapped.mapping.PersistableMapping.setObjectAsValue(PersistableMapping.java:664)
at org.datanucleus.store.mapped.mapping.PersistableMapping.setObject(PersistableMapping.java:423)
at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeObjectField(ParameterSetter.java:197)
at org.datanucleus.state.AbstractStateManager.providedObjectField(AbstractStateManager.java:1023)
at org.apache.hadoop.hive.metastore.model.MTable.jdoProvideField(MTable.java)
at org.apache.hadoop.hive.metastore.model.MTable.jdoProvideFields(MTable.java)
at org.datanucleus.jdo.state.JDOStateManagerImpl.provideFields(JDOStateManagerImpl.java:2803)
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:294)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:146)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:121)
at org.datanucleus.jdo.state.JDOStateManagerImpl.internalMakePersistent(JDOStateManagerImpl.java:3275)
at org.datanucleus.jdo.state.JDOStateManagerImpl.makePersistent(JDOStateManagerImpl.java:3249)
at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1428)
at org.datanucleus.ObjectManagerImpl.persistObject(ObjectManagerImpl.java:1241)
at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:655)
at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:680)
at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:606)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:924)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:109)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:945)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:942)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table(HiveMetaStore.java:942)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table.process(ThriftHiveMetastore.java:5297)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor.process(ThriftHiveMetastore.java:4789)
at org.apache.hadoop.hive.metastore.HiveMetaStore$TLoggingProcessor.process(HiveMetaStore.java:3167)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
[gpadmin@linuxsvr2 hive]$
Re: Hive 0.7.1 does not work with PostgreSQL 9.0.2
Posted by Xiaobo Gu <gu...@gmail.com>.
Thanks. I changed the metastore DDL manually, changing bit to boolean, and now it works.
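For anyone hitting the same error: the generated PostgreSQL schema declares "IS_COMPRESSED" as bit, while the JDBC driver binds the parameter as boolean. A minimal sketch of the equivalent fix on an already-created metastore schema (the USING cast, and the assumption that "SDS"."IS_COMPRESSED" is the only affected column, are mine; check your own DDL):

```sql
-- Hypothetical repair sketch: convert the bit column the insert
-- fails on into boolean, casting any existing values in place.
ALTER TABLE "SDS"
    ALTER COLUMN "IS_COMPRESSED" TYPE boolean
    USING ("IS_COMPRESSED" = B'1');
```

Any other bit columns in the generated schema would need the same treatment.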
2011/8/21, wd <wd...@wdicc.com>:
> You can try Hive 0.5: create the metadata with it first, then use the
> upgrade SQL file in Hive 0.7.1 to upgrade the schema to 0.7.1.
Re: Hive 0.7.1 does not work with PostgreSQL 9.0.2
Posted by wd <wd...@wdicc.com>.
You can try Hive 0.5: create the metadata with it first, then use the upgrade SQL file in Hive 0.7.1 to upgrade the schema to 0.7.1.
On Sat, Aug 20, 2011 at 2:20 PM, Xiaobo Gu <gu...@gmail.com> wrote:
> Hi,
> I have just set up a PostgreSQL 9.0.2 server for hive 0.7.1 metastore,
> and I am using the postgresql-9.0-801.jdbc4.jar jdbc driver, when I
> test the following HQL,
>
>
> CREATE TABLE records (year STRING, temperature INT, quality INT)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\t';
>
>
> the following error messages are thrown in metastore services standard error:
>
> 11/08/20 10:12:33 INFO metastore.HiveMetaStore: 1: create_table:
> db=default tbl=records
> 11/08/20 10:12:33 INFO HiveMetaStore.audit: ugi=gpadmin
> ip=/192.168.72.6 cmd=create_table: db=default tbl=records
> 11/08/20 10:12:33 INFO metastore.HiveMetaStore: 1: Opening raw store
> with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 11/08/20 10:12:33 INFO metastore.ObjectStore: ObjectStore, initialize called
> 11/08/20 10:12:33 INFO metastore.ObjectStore: Initialized ObjectStore
> 11/08/20 10:12:33 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
> "embedded-only" so does not have its own data
> store table.
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Class : org.apache.hadoop.hive.metastore.model.MSerDeInfo [Table :
> "SERDES", InheritanceStrategy
> : new-table]
> 11/08/20 10:12:33 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
> "embedded-only" so does not have its own datastore
> table.
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Class : org.apache.hadoop.hive.metastore.model.MStorageDescriptor
> [Table : "SDS", InheritanceStrategy : new-table]
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Class : org.apache.hadoop.hive.metastore.model.MTable [Table :
> "TBLS", InheritanceStrategy : new-table]
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Field : org.apache.hadoop.hive.metastore.model.MSerDeInfo.parameters
> [Table : "SERDE_PARAMS"]
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Field : org.apache.hadoop.hive.metastore.model.MTable.parameters
> [Table : "TABLE_PARAMS"]
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Field : org.apache.hadoop.hive.metastore.model.MTable.partitionKeys
> [Table : "PARTITION_KEYS"]
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.bucketCols
> [Table : "BUCKETING_COLS"]
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.cols
> [Table : "COLUMNS"]
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.parameters
> [Table : "SD_PARAMS"]
> 11/08/20 10:12:33 INFO DataNucleus.Persistence: Managing Persistence
> of Field : org.apache.hadoop.hive.metastore.model.MStorageDescriptor.sortCols
> [Table : "SORT_COLS"]
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "SERDES"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 0 foreign key(s)
> for table "SERDES"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 index(es) for
> table "SERDES"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 unique key(s)
> for table "TBLS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 foreign key(s)
> for table "TBLS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 4 index(es) for table "TBLS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "SDS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
> for table "SDS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for table "SDS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "SORT_COLS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
> for table "SORT_COLS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
> table "SORT_COLS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "TABLE_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
> for table "TABLE_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
> table "TABLE_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "SD_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
> for table "SD_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
> table "SD_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "SERDE_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
> for table "SERDE_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
> table "SERDE_PARAMS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "PARTITION_KEYS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
> for table "PARTITION_KEYS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
> table "PARTITION_KEYS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "COLUMNS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
> for table "COLUMNS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
> table "COLUMNS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 unique key(s)
> for table "BUCKETING_COLS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 1 foreign key(s)
> for table "BUCKETING_COLS"
> 11/08/20 10:12:33 INFO Datastore.Schema: Validating 2 index(es) for
> table "BUCKETING_COLS"
> 11/08/20 10:12:34 INFO DataNucleus.MetaData: Listener found
> initialisation for persistable class
> org.apache.hadoop.hive.metastore.model.MSerDeInfo
> 11/08/20 10:12:34 INFO DataNucleus.MetaData: Listener found
> initialisation for persistable class
> org.apache.hadoop.hive.metastore.model.MStorageDescriptor
> 11/08/20 10:12:34 INFO DataNucleus.MetaData: Listener found
> initialisation for persistable class
> org.apache.hadoop.hive.metastore.model.MTable
> 11/08/20 10:12:34 INFO DataNucleus.MetaData: Listener found
> initialisation for persistable class
> org.apache.hadoop.hive.metastore.model.MFieldSchema
> 11/08/20 10:12:34 WARN Datastore.Persist: Insert of object
> "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@46bb05de"
> using statement "INSERT INTO "SDS" ("SD_ID","NUM_BUCKETS","SERDE_ID","IS_COMPRESSED","OUTPUT_FORMAT","LOCATION","INPUT_FORMAT")
> VALUES (?,?,?,?,?,?,?)" failed : ERROR: column "IS_COMPRESSED" is of
> type bit but expression is of type boolean
> Hint: You will need to rewrite or cast the expression.
> Position: 129
> 11/08/20 10:12:34 INFO metastore.hivemetastoressimpl: deleting
> hdfs://linuxsvr2/user/hive/warehouse/records
> 11/08/20 10:12:34 INFO metastore.hivemetastoressimpl: Deleted the
> diretory hdfs://linuxsvr2/user/hive/warehouse/records
> 11/08/20 10:12:34 ERROR metastore.HiveMetaStore: JDO datastore error.
> Retrying metastore command after 1000 ms (attempt 1 of 1)
> 11/08/20 10:12:35 WARN metastore.HiveMetaStore: Location:
> hdfs://linuxsvr2/user/hive/warehouse/records specified for
> non-external table:records
> 11/08/20 10:12:35 WARN Datastore.Persist: Insert of object
> "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@7d1c19e6"
> using statement "INSERT INTO "SDS" ("SD_ID","NUM_BUCKETS","SERDE_ID","IS_COMPRESSED","OUTPUT_FORMAT","LOCATION","INPUT_FORMAT")
> VALUES (?,?,?,?,?,?,?)" failed : ERROR: column "IS_COMPRESSED" is of
> type bit but expression is of type boolean
> Hint: You will need to rewrite or cast the expression.
> Position: 129
> 11/08/20 10:12:35 INFO metastore.hivemetastoressimpl: deleting
> hdfs://linuxsvr2/user/hive/warehouse/records
> 11/08/20 10:12:35 INFO metastore.hivemetastoressimpl: Deleted the
> diretory hdfs://linuxsvr2/user/hive/warehouse/records
> 11/08/20 10:12:35 ERROR api.ThriftHiveMetastore$Processor: Internal
> error processing create_table
> javax.jdo.JDODataStoreException: Insert of object
> "org.apache.hadoop.hive.metastore.model.MStorageDescriptor@7d1c19e6"
> using statement "INSERT INTO "SDS" ("SD_ID","NUM_BUCKETS","SERDE_ID","IS_COMPRESSED","OUTPUT_FORMAT","LOCATION","INPUT_FORMAT")
> VALUES (?,?,?,?,?,?,?)" failed : ERROR: column "IS_COMPRESSED" is of
> type bit but expression is of type boolean
> Hint: You will need to rewrite or cast the expression.
> Position: 129
> at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:313)
> at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:660)
> at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:680)
> at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:606)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:924)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:109)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:945)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:942)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table(HiveMetaStore.java:942)
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table.process(ThriftHiveMetastore.java:5297)
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor.process(ThriftHiveMetastore.java:4789)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$TLoggingProcessor.process(HiveMetaStore.java:3167)
> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> NestedThrowablesStackTrace:
> org.postgresql.util.PSQLException: ERROR: column "IS_COMPRESSED" is of
> type bit but expression is of type boolean
> Hint: You will need to rewrite or cast the expression.
> Position: 129
> at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2102)
> at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1835)
> at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
> at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500)
> at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:388)
> at org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:334)
> at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
> at org.apache.commons.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:105)
> at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:396)
> at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:406)
> at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:146)
> at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:121)
> at org.datanucleus.jdo.state.JDOStateManagerImpl.internalMakePersistent(JDOStateManagerImpl.java:3275)
> at org.datanucleus.jdo.state.JDOStateManagerImpl.makePersistent(JDOStateManagerImpl.java:3249)
> at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1428)
> at org.datanucleus.store.mapped.mapping.PersistableMapping.setObjectAsValue(PersistableMapping.java:664)
> at org.datanucleus.store.mapped.mapping.PersistableMapping.setObject(PersistableMapping.java:423)
> at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeObjectField(ParameterSetter.java:197)
> at org.datanucleus.state.AbstractStateManager.providedObjectField(AbstractStateManager.java:1023)
> at org.apache.hadoop.hive.metastore.model.MTable.jdoProvideField(MTable.java)
> at org.apache.hadoop.hive.metastore.model.MTable.jdoProvideFields(MTable.java)
> at org.datanucleus.jdo.state.JDOStateManagerImpl.provideFields(JDOStateManagerImpl.java:2803)
> at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:294)
> at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:146)
> at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:121)
> at org.datanucleus.jdo.state.JDOStateManagerImpl.internalMakePersistent(JDOStateManagerImpl.java:3275)
> at org.datanucleus.jdo.state.JDOStateManagerImpl.makePersistent(JDOStateManagerImpl.java:3249)
> at org.datanucleus.ObjectManagerImpl.persistObjectInternal(ObjectManagerImpl.java:1428)
> at org.datanucleus.ObjectManagerImpl.persistObject(ObjectManagerImpl.java:1241)
> at org.datanucleus.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:655)
> at org.datanucleus.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:680)
> at org.apache.hadoop.hive.metastore.ObjectStore.createTable(ObjectStore.java:606)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:924)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$600(HiveMetaStore.java:109)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:945)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$15.run(HiveMetaStore.java:942)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table(HiveMetaStore.java:942)
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table.process(ThriftHiveMetastore.java:5297)
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor.process(ThriftHiveMetastore.java:4789)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$TLoggingProcessor.process(HiveMetaStore.java:3167)
> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:253)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> [gpadmin@linuxsvr2 hive]$
>