Posted to user@hive.apache.org by Anandha L Ranganathan <an...@gmail.com> on 2012/12/03 19:56:58 UTC

Renaming of table throws out of memory exception.

I am trying to rename a table, but it throws an OutOfMemoryError.

One way to resolve this is to increase the memory, but I would like to
understand why renaming a table throws an OutOfMemoryError in the first place.
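For reference, this is a minimal sketch of the heap increase I mean, assuming
the Hive CLI picks up HADOOP_HEAPSIZE and HADOOP_CLIENT_OPTS from hive-env.sh;
the 2048 MB value is only an example, not a recommendation:

    # hive-env.sh (sketch; assumes the standard hive-env.sh hook is in use)
    # Raise the heap available to the Hive CLI / metastore client process.
    export HADOOP_HEAPSIZE=2048                               # in MB, example value
    # Alternatively, pass the JVM option directly to client processes:
    export HADOOP_CLIENT_OPTS="-Xmx2048m $HADOOP_CLIENT_OPTS"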


hive> ALTER TABLE <<TABLE_NAME>> RENAME TO <<TABLE_NAME_OLD>>;

java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.Arrays.copyOf(Arrays.java:2882)
        at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
        at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:597)
        at java.lang.StringBuilder.append(StringBuilder.java:212)
        at org.datanucleus.JDOClassLoaderResolver.newCacheKey(JDOClassLoaderResolver.java:385)
        at org.datanucleus.JDOClassLoaderResolver.classForName(JDOClassLoaderResolver.java:176)
        at org.datanucleus.JDOClassLoaderResolver.classForName(JDOClassLoaderResolver.java:404)
        at org.datanucleus.metadata.MetaDataManager.getMetaDataForClass(MetaDataManager.java:1155)
        at org.datanucleus.ObjectManagerImpl.getSerializeReadForClass(ObjectManagerImpl.java:4117)
        at org.datanucleus.store.rdbms.request.FetchRequest.execute(FetchRequest.java:279)
        at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.fetchObject(RDBMSPersistenceHandler.java:240)
        at org.datanucleus.jdo.state.JDOStateManagerImpl.loadFieldsFromDatastore(JDOStateManagerImpl.java:1929)
        at org.datanucleus.jdo.state.JDOStateManagerImpl.loadSpecifiedFields(JDOStateManagerImpl.java:1556)
        at org.datanucleus.jdo.state.JDOStateManagerImpl.isLoaded(JDOStateManagerImpl.java:2013)
        at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoGetserDeInfo(MStorageDescriptor.java)
        at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.getSerDeInfo(MStorageDescriptor.java:183)
        at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:915)
        at org.apache.hadoop.hive.metastore.ObjectStore.convertToPart(ObjectStore.java:1052)
        at org.apache.hadoop.hive.metastore.ObjectStore.convertToParts(ObjectStore.java:1176)
        at org.apache.hadoop.hive.metastore.ObjectStore.getPartitions(ObjectStore.java:1099)
        at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterTable(HiveAlterHandler.java:149)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$29.run(HiveMetaStore.java:1687)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$29.run(HiveMetaStore.java:1684)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:307)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table(HiveMetaStore.java:1684)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:166)
        at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:354)
        at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:2779)
        at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:243)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1063)


-Anand