Posted to user@hive.apache.org by Edward Capriolo <ed...@gmail.com> on 2010/04/08 20:47:17 UTC

enough alter tables in the same .q file eventually fail

Hive 0.5.0, MySQL as the metastore backend. Using external tables with a
LOCATION for each partition...

alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
'000843') LOCATION 'hit_date=20100329/mid=000843';
alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
'000844') LOCATION 'hit_date=20100329/mid=000844';
alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
'000849') LOCATION 'hit_date=20100329/mid=000849';
alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
'000850') LOCATION 'hit_date=20100329/mid=000850';
alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
'000851') LOCATION 'hit_date=20100329/mid=000851';
alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
'000852') LOCATION 'hit_date=20100329/mid=000852';

Eventually this fails after a number of entries.

Time taken: 0.159 seconds
OK
Time taken: 0.17 seconds
OK
Time taken: 0.241 seconds
FAILED: Error in metadata: Unable to fetch table XXXXX_action
FAILED: Execution Error, return code 1 from
org.apache.hadoop.hive.ql.exec.DDLTask

Restarting the process after removing the already-added partitions works until
it breaks again. Has anyone ever dealt with this?

Doing one hive -e per partition always works but takes a lot longer: ~3 seconds
per partition rather than ~0.5 seconds.
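That per-partition fallback can be scripted. A minimal sketch, assuming the same placeholder table and location names as above; the statement file is generated first, and the actual `hive -e` loop is shown commented out since it needs a live metastore:

```shell
# Write one ALTER TABLE ... ADD PARTITION statement per mid value to a .q
# file (table/location names are placeholders from the example above).
TABLE=XXXX_action
HIT_DATE=20100329
: > add_partitions.q
for MID in 000843 000844 000849 000850 000851 000852; do
  echo "alter table ${TABLE} ADD PARTITION (hit_date = '${HIT_DATE}', mid = '${MID}') LOCATION 'hit_date=${HIT_DATE}/mid=${MID}';" >> add_partitions.q
done
# Slow-but-reliable fallback: one CLI session (and one fresh metastore
# connection) per statement.
#   while IFS= read -r stmt; do hive -e "$stmt"; done < add_partitions.q
wc -l < add_partitions.q   # 6 statements
```

Since the statements live in a file, a failed run can be resumed by deleting the lines that already succeeded.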

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Fri, Apr 30, 2010 at 7:11 PM, Paul Yang <py...@facebook.com> wrote:

>  Typo - last one should have been commons-pool-1.2 or thereabouts
>
>
>
> *From:* Paul Yang [mailto:pyang@facebook.com]
> *Sent:* Friday, April 30, 2010 4:05 PM
>
> *To:* hive-user@hadoop.apache.org
> *Subject:* RE: enough alter tables in the same .q file eventually fail
>
>
>
> The jars I added were:
>
>
>
> datanucleus-connectionpool-1.0.2
>
> commons-dbcp-1.2.2
>
> datanucleus-connectionpool-1.0.2
>
>
>
> Can you try adding those instead? I’m thinking there is an issue with the
> newer ones as our datanucleus-core is 1.1.2
>
>
>
> *From:* Edward Capriolo [mailto:edlinuxguru@gmail.com]
> *Sent:* Friday, April 30, 2010 2:12 PM
> *To:* hive-user@hadoop.apache.org
> *Subject:* Re: enough alter tables in the same .q file eventually fail
>
>
>
>
>
>  On Fri, Apr 30, 2010 at 4:48 PM, Paul Yang <py...@facebook.com> wrote:
>
> Well, I can’t seem to find my patch at the moment, but yeah, those are the
> same instructions that I followed. Can you post a list of jars that you have
> and the path?
>
>
>
> *From:* Edward Capriolo [mailto:edlinuxguru@gmail.com]
> *Sent:* Friday, April 30, 2010 1:21 PM
>
>
> *To:* hive-user@hadoop.apache.org
> *Subject:* Re: enough alter tables in the same .q file eventually fail
>
>
>
> Paul,
>
> Yes please if you have this it would be most helpful.
>
> I followed the instructions here
>
> http://www.datanucleus.org/products/accessplatform_1_0/rdbms/dbcp.html
>
> Added this to my hive site..
>
>  <property>
>     <name>datanucleus.connectionPoolingType</name>
>     <value>DBCP</value>
>     <description>Connection pooling type (DBCP) for the JDBC metastore</description>
>   </property>
>
> Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke
> the "DBCP" plugin to create a ConnectionPool gave an error : The connection
> pool plugin of type "DBCP" was not found in the CLASSPATH!
>
> I do not understand this error; I am pretty sure I have every jar they asked for.
>
> On Fri, Apr 30, 2010 at 1:08 PM, Paul Yang <py...@facebook.com> wrote:
>
> Actually, yeah, I looked into connection pooling a while back and made a
> patch for the metastore. As I recall, it is just a configuration change +
> some jars. Let me see if I still have it…
>
>
>
> *From:* Edward Capriolo [mailto:edlinuxguru@gmail.com]
> *Sent:* Friday, April 30, 2010 10:03 AM
>
>
> *To:* hive-user@hadoop.apache.org
> *Subject:* Re: enough alter tables in the same .q file eventually fail
>
>
>
>
>
> On Thu, Apr 22, 2010 at 1:54 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:
>
> Typo in Ed's last email (table name):
> echo "create external table if not exists ed_test ( dat string )
> partitioned by (dummy string) location '/tmp/a';" > test.q
>
>
>
> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>
> Seems to be fixed in 0.6. Here's what I got:
>
> test.q:
> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>
>
> Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
> OK
> Time taken: 4.101 seconds
> OK
> Time taken: 0.558 seconds
> OK
> Time taken: 0.453 seconds
> OK
> Time taken: 0.416 seconds
> OK
> Time taken: 0.378 seconds
> OK
> Time taken: 0.457 seconds
> OK
> Time taken: 0.454 seconds
>
>
> Can you post the stack trace from /tmp/<username>/hive.log?
>
>
>
>
> -----Original Message-----
> From: Prasad Chakka [mailto:pchakka@facebook.com]
> Sent: Thursday, April 08, 2010 1:03 PM
> To: hive-user@hadoop.apache.org
> Subject: Re: enough alter tables in the same .q file eventually fail
>
> There was a bug that got fixed where each request was creating a separate
> metastore client. That could be it or something similar that hasn't gotten
> fixed.
>
> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>
> > Hive 5.0 mysql as metastore backend. Using external tables with location
> for partitions...
> >
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000843') LOCATION 'hit_date=20100329/mid=000843';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000844') LOCATION 'hit_date=20100329/mid=000844';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000849') LOCATION 'hit_date=20100329/mid=000849';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000850') LOCATION 'hit_date=20100329/mid=000850';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000851') LOCATION 'hit_date=20100329/mid=000851';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000852') LOCATION 'hit_date=20100329/mid=000852';
> >
> > Eventually this fails after a number of entries.
> >
> > Time taken: 0.159 seconds
> > OK
> > Time taken: 0.17 seconds
> > OK
> > Time taken: 0.241 seconds
> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
> > FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> >
> > Restarting the process after removing the already added tables works
> until it breaks again. Anyone ever dealt with this?
> >
> > Doing one hive -e per table always works but takes a lot longer ...3
> seconds a partition rather then ~.5 seconds.
> >
> >
>
>
>
> It does not happen after 4 or 5, more like 100 or 1000+. I will try to track
> this down a bit.
>
> Edward
>
>
>
> Try this:
>
> echo "create external table if not exists edtest ( dat string ) partitioned
> by (dummy string) location '/tmp/a';" > test.q
>  for i in {1..3000} ; do echo "alter table ed_test add partition
> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
> hive -f test.q
>
> On Hive 0.5.0 I get a failure midway.
> Edward
>
>
>
>
>
> Also, trying to do selects from the table without enough partition pruning in
> the where clause causes the same error; sometimes it comes as a JDBC/JPOX
> access-denied error.
>
>
>
> Also, there are problems working with this type of table as well. :(
>
> $ hive -e "explain select * from XXXXX_action "
> Hive history
> file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
> FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access
> denied for user 'hivadm'@'XXXXXX' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using
> password: YES)
>
> Interestingly enough, if we specify some partitions we can dodge this error.
> I get the feeling that the select * is trying to select too many partitions,
> causing this error.
>
> 2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) -
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01
> .sd.pl.pvt' (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> NestedThrowablesStackTrace:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> 2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer
> (SemanticAnalyzer.java:genMapRedTasks(4886)) -
> org.apache.hadoop.hive.ql.metadata.HiveExcepti
> on: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     ... 23 more
>
> 2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248))
> - FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException:
>  Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using
> password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> org.apache.hadoop.hive.ql.parse.SemanticException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using passwo
> rd: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (us
> ing password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     ... 15 more
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>
> Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password:
> YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> org.apache.hadoop.hive.ql.parse.SemanticException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using passwo
> rd: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (us
> ing password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     ... 15 more
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     ... 23 more
>
>
>
>
>
>
>
>
> The same problem occurs when dropping partitions. At this stage, any script I
> write that does heavy-duty work I need to pre-chunk with a Unix tool like
> 'split' to make sure it does not blow up halfway through 1000 inserts or 1000
> drops. It is really counterproductive.
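The 'split' workaround can be sketched like this; the .q file here is synthetic, and the `hive -f` loop is commented out since it needs a live metastore:

```shell
# Chunk a large DDL script so each Hive CLI session runs at most 100
# statements; a mid-file metastore failure then costs only one chunk,
# and the remaining chunk files show exactly where to resume.
: > big_job.q
i=1
while [ "$i" -le 250 ]; do
  printf "alter table ed_test add partition (dummy='%s') location '/tmp/duh';\n" "$i" >> big_job.q
  i=$((i + 1))
done
split -l 100 big_job.q chunk_            # -> chunk_aa chunk_ab chunk_ac
# for f in chunk_*; do hive -f "$f" || { echo "failed on $f" >&2; break; }; done
ls chunk_* | wc -l                       # 3 chunks
```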
>
>
>
> Also when using this type of layout with external tables, queries on
> partitions that do not exist blow up...
>
>
> hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and
> mid=000979;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201004221345_0002, Tracking URL =
> http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
> Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job
> -Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
> 2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
> 2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201004221345_0002 with errors
>
> Failed tasks with most(4) failures :
> Task URL:
> http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000
>
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> hive>
>
>
> ...argh...
>
>
> So I have a mysql server running full logging to try to get to the bottom
> of this.
> Running a hive trunk from a few days ago.
> hive>show tables
>
> mysql results:
> 100430 12:23:27        7 Connect    hivadm@localhost on
>             7 Init DB    m6_XXXX
>             7 Query    SHOW SESSION VARIABLES
>             7 Query    SHOW COLLATION
>             7 Query    SET character_set_results = NULL
>             7 Query    SET autocommit=1
>             7 Query    SET sql_mode='STRICT_TRANS_TABLES'
>             7 Query    SET autocommit=0
>             7 Query    SELECT @@session.tx_isolation
>             7 Query    SET SESSION TRANSACTION ISOLATION LEVEL READ
> COMMITTED
>             7 Query    SELECT `THIS`.`TBL_NAME` FROM `TBLS` `THIS` LEFT
> OUTER JOIN `DBS` `THIS_DATABASE_NAME` ON `THIS`.`DB_ID` =
> `THIS_DATABASE_NAME`.`DB_ID` WHERE `THIS_DATABASE_NAME`.`NAME` = 'default'
> AND (LOWER(`THIS`.`TBL_NAME`) LIKE '_%' ESCAPE '\\' )
>             7 Query    commit
>             7 Query    rollback
>
>
> To me it looks like every query does a commit followed by a rollback. What
> is up with that?
>
> I can also see that for my query Hive does not seem to be doing any
> connection pooling, or even re-using connections; it reconnects and quits
> for every query. So I am probably hitting some MySQL connection limit, but
> should it really be connecting and re-connecting to fetch metadata for
> each partition? Since my table has tens of thousands of partitions, this
> is non-optimal.
>
> Can JPOX be set up to pool or re-use existing connections?
>
> Regards
> Edward
>
>
>
> This is what I have:
>
> [bob@rs01 opt]$ ls -1 /opt/hive-0.6.0-bin/lib/
> antlr-runtime-3.0.1.jar
> asm-3.1.jar
> commons-cli-2.0-SNAPSHOT.jar
> commons-codec-1.3.jar
> commons-collections-3.2.1.jar
> commons-dbcp-1.3.jar  ----------------------------added this--------
> commons-lang-2.4.jar
> commons-logging-1.0.4.jar
> commons-logging-api-1.0.4.jar
> commons-pool-1.5.4.jar ----------------------------added this--------
> datanucleus-connectionpool-2.0.0-release.jar
> ----------------------------added this--------
> datanucleus-core-1.1.2.jar
> datanucleus-enhancer-1.1.2.jar
> datanucleus-rdbms-1.1.2.jar
> derby.jar
> hbase-0.20.3.jar
> hbase-0.20.3-test.jar
> hive-anttasks-0.6.0.jar
> hive-cli-0.6.0.jar
> hive-common-0.6.0.jar
> hive_contrib.jar
> hive-exec-0.6.0.jar
> hive_hbase-handler.jar
> hive-hwi-0.6.0.jar
> hive-hwi-0.6.0.war
> hive-jdbc-0.6.0.jar
> hivelib-1.0.20.jar
> hive-metastore-0.6.0.jar
> hive-serde-0.6.0.jar
> hive-service-0.6.0.jar
> hive-shims-0.6.0.jar
> jdo2-api-2.3-SNAPSHOT.jar
> jline-0.9.94.jar
> json.jar
> junit-3.8.1.jar
> libfb303.jar
> libthrift.jar
> log4j-1.2.15.jar
> mysql-connector-java-5.0.8-bin.jar
> php
> py
> stringtemplate-3.1b1.jar
> velocity-1.5.jar
> zookeeper-3.2.2.jar
>
Paul,

Thank you for the information. Getting the right versions of the jars
helped. It takes many MySQL queries to fetch all the partition information,
and a lot of time (40 seconds for an explain), but it works!

RE: enough alter tables in the same .q file eventually fail

Posted by Paul Yang <py...@facebook.com>.
Typo - last one should have been commons-pool-1.2 or thereabouts

From: Paul Yang [mailto:pyang@facebook.com]
Sent: Friday, April 30, 2010 4:05 PM
To: hive-user@hadoop.apache.org
Subject: RE: enough alter tables in the same .q file eventually fail

The jars I added were:

datanucleus-connectionpool-1.0.2
commons-dbcp-1.2.2
datanucleus-connectionpool-1.0.2

Can you try adding those instead? I'm thinking there is an issue with the newer ones as our datanucleus-core is 1.1.2

From: Edward Capriolo [mailto:edlinuxguru@gmail.com]
Sent: Friday, April 30, 2010 2:12 PM
To: hive-user@hadoop.apache.org
Subject: Re: enough alter tables in the same .q file eventually fail



On Fri, Apr 30, 2010 at 4:48 PM, Paul Yang <py...@facebook.com>> wrote:
Well, I can't seem to find my patch at the moment, but yeah, those are the same instructions that I followed. Can you post a list of jars that you have and the path?

From: Edward Capriolo [mailto:edlinuxguru@gmail.com<ma...@gmail.com>]
Sent: Friday, April 30, 2010 1:21 PM

To: hive-user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: enough alter tables in the same .q file eventually fail

Paul,

Yes please if you have this it would be most helpful.

I followed the instructions here

http://www.datanucleus.org/products/accessplatform_1_0/rdbms/dbcp.html

Added this to my hive-site.xml:

  <property>
    <name>datanucleus.connectionPoolingType</name>
    <value>DBCP</value>
    <description>Connection pooling type for the JDBC metastore</description>
  </property>

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "DBCP" plugin to create a ConnectionPool gave an error : The connection pool plugin of type "DBCP" was not found in the CLASSPATH!

I do not understand this error; I'm pretty sure I have every jar they asked for.
On Fri, Apr 30, 2010 at 1:08 PM, Paul Yang <py...@facebook.com>> wrote:
Actually, yeah, I looked into connection pooling a while back and made a patch for the metastore. As I recall, it is just a configuration change + some jars. Let me see if I still have it...

From: Edward Capriolo [mailto:edlinuxguru@gmail.com<ma...@gmail.com>]
Sent: Friday, April 30, 2010 10:03 AM

To: hive-user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: enough alter tables in the same .q file eventually fail


On Thu, Apr 22, 2010 at 1:54 PM, Edward Capriolo <ed...@gmail.com>> wrote:

On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com>> wrote:
Typo in Ed's last email (table name):
echo "create external table if not exists ed_test ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q

On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>> wrote:

On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>> wrote:

On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com>> wrote:
Seems to be fixed in 0.6. Here's what I got:

test.q:
alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';


Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
OK
Time taken: 4.101 seconds
OK
Time taken: 0.558 seconds
OK
Time taken: 0.453 seconds
OK
Time taken: 0.416 seconds
OK
Time taken: 0.378 seconds
OK
Time taken: 0.457 seconds
OK
Time taken: 0.454 seconds


Can you post the stack trace in /tmp/<username>/hive.log?


-----Original Message-----
From: Prasad Chakka [mailto:pchakka@facebook.com<ma...@facebook.com>]
Sent: Thursday, April 08, 2010 1:03 PM
To: hive-user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: enough alter tables in the same .q file eventually fail

There was a bug that got fixed where each request was creating a separate metastore client. That could be it or something similar that hasn't gotten fixed.

On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:

> Hive 5.0 mysql as metastore backend. Using external tables with location for partitions...
>
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000843') LOCATION 'hit_date=20100329/mid=000843';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000844') LOCATION 'hit_date=20100329/mid=000844';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000849') LOCATION 'hit_date=20100329/mid=000849';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000850') LOCATION 'hit_date=20100329/mid=000850';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000851') LOCATION 'hit_date=20100329/mid=000851';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000852') LOCATION 'hit_date=20100329/mid=000852';
>
> Eventually this fails after a number of entries.
>
> Time taken: 0.159 seconds
> OK
> Time taken: 0.17 seconds
> OK
> Time taken: 0.241 seconds
> FAILED: Error in metadata: Unable to fetch table XXXXX_action
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>
> Restarting the process after removing the already added tables works until it breaks again. Anyone ever dealt with this?
>
> Doing one hive -e per table always works but takes a lot longer ... 3 seconds per partition rather than ~.5 seconds.
>
>

It does not happen after 4 or 5; more like 100 or 1000+. I will try to track this down a bit.

Edward


Try this:

echo "create external table if not exists edtest ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q
 for i in {1..3000} ; do echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';" ; done >> test.q
hive -f test.q

On Hive 5.0 I get a failure mid way.
Edward


Also, trying to do selects from the table without enough pruning in the WHERE clause causes the same error; sometimes it surfaces as a JDBC/JPOX access-denied error.


Also, there are problems working with this type of table as well. :(

$ hive -e "explain select * from XXXXX_action "
Hive history file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXXX' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using password: YES)

Interestingly enough, if we specify some partitions we can dodge this error. I get the feeling that the select * is trying to fetch metadata for too many partitions, and that is causing the error.
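For example, pinning both partition columns in the WHERE clause lets the pruner resolve a single partition instead of walking all of them (table name and values below are placeholders):

```sql
-- One metastore lookup instead of tens of thousands:
SELECT offer_id
FROM xxxxx_action
WHERE hit_date = '20100329' AND mid = '000843';
```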

2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) - javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01.sd.pl.pvt' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
NestedThrowablesStackTrace:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer (SemanticAnalyzer.java:genMapRedTasks(4886)) - org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)

    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more

2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248)) - FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)

Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more





The same problem occurs when dropping partitions. At this point, any heavy-duty script I write has to be pre-split with a Unix tool like 'split' to make sure it does not blow up halfway through 1000 inserts or 1000 drops. It is really counterproductive.
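As a concrete sketch of that workaround (reusing the table name and paths from the repro later in this thread; the actual `hive -f` invocation is shown commented out since it needs a live cluster):

```shell
# Generate a large partition-adding script, as in the repro, then split
# it into 100-statement chunks so each chunk runs in its own `hive -f`
# and a metastore failure only loses one chunk.
for i in $(seq 1 300); do
  echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';"
done > /tmp/add_partitions.q

# 300 statements -> 3 chunks of 100 lines each
split -l 100 /tmp/add_partitions.q /tmp/add_partitions.part.

ls /tmp/add_partitions.part.*
# for f in /tmp/add_partitions.part.*; do hive -f "$f"; done
```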

Also when using this type of layout with external tables, queries on partitions that do not exist blow up...


hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and mid=000979;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201004221345_0002, Tracking URL = http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job  -Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201004221345_0002 with errors

Failed tasks with most(4) failures :
Task URL: http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.ExecDriver
hive>
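A defensive workaround for the nonexistent-partition case is to check the SHOW PARTITIONS listing before launching the query. Below is a sketch with the listing simulated inline; in practice the `parts` variable would come from something like `hive -e "show partitions XXXXX_act"`:

```shell
# Simulated `show partitions` listing (one key=value/key=value spec per line).
parts='hit_date=20100329/mid=000843
hit_date=20100329/mid=000844'

want='hit_date=20990410/mid=000979'

# Only launch the query when the partition is actually registered.
if printf '%s\n' "$parts" | grep -qx "$want"; then
  echo "partition present; safe to query"    # the real hive -e "select ..." would go here
else
  echo "partition not present; skipping query"
fi
# prints "partition not present; skipping query"
```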


...arge...

So I have a mysql server running full logging to try to get to the bottom of this.
Running a hive trunk from a few days ago.
hive> show tables

mysql results:
100430 12:23:27        7 Connect    hivadm@localhost on
            7 Init DB    m6_XXXX
            7 Query    SHOW SESSION VARIABLES
            7 Query    SHOW COLLATION
            7 Query    SET character_set_results = NULL
            7 Query    SET autocommit=1
            7 Query    SET sql_mode='STRICT_TRANS_TABLES'
            7 Query    SET autocommit=0
            7 Query    SELECT @@session.tx_isolation
            7 Query    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
            7 Query    SELECT `THIS`.`TBL_NAME` FROM `TBLS` `THIS` LEFT OUTER JOIN `DBS` `THIS_DATABASE_NAME` ON `THIS`.`DB_ID` = `THIS_DATABASE_NAME`.`DB_ID` WHERE `THIS_DATABASE_NAME`.`NAME` = 'default' AND (LOWER(`THIS`.`TBL_NAME`) LIKE '_%' ESCAPE '\\' )
            7 Query    commit
            7 Query    rollback


To me it looks like every query does a commit followed by a rollback. What is up with that?

I can also see that Hive does not seem to be doing any connection pooling, or even reusing connections, for my queries; it reconnects and quits for every query. So I am probably hitting some MySQL connection limit, but should it really be connecting and reconnecting to fetch metadata for each partition? Since my table has tens of thousands of partitions, this is non-optimal.
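To quantify that from the MySQL general log, counting Connect events works; a sketch against the log format shown above (the /tmp path and log excerpt are stand-ins for the real general-log file):

```shell
# Fabricated excerpt in the general-log format shown above; in reality,
# point this at the server's general query log.
cat > /tmp/mysql_general.log <<'EOF'
100430 12:23:27        7 Connect    hivadm@localhost on
            7 Query    SHOW SESSION VARIABLES
            7 Quit
100430 12:23:28        8 Connect    hivadm@localhost on
            8 Query    commit
            8 Quit
EOF

# One Connect per statement batch strongly suggests no connection reuse.
grep -c 'Connect' /tmp/mysql_general.log    # prints 2
```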

Can JPOX be set up to pool or reuse current connections?

Regards
Edward

This is what I have:

[bob@rs01 opt]$ ls -1 /opt/hive-0.6.0-bin/lib/
antlr-runtime-3.0.1.jar
asm-3.1.jar
commons-cli-2.0-SNAPSHOT.jar
commons-codec-1.3.jar
commons-collections-3.2.1.jar
commons-dbcp-1.3.jar  ----------------------------added this--------
commons-lang-2.4.jar
commons-logging-1.0.4.jar
commons-logging-api-1.0.4.jar
commons-pool-1.5.4.jar ----------------------------added this--------
datanucleus-connectionpool-2.0.0-release.jar ----------------------------added this--------
datanucleus-core-1.1.2.jar
datanucleus-enhancer-1.1.2.jar
datanucleus-rdbms-1.1.2.jar
derby.jar
hbase-0.20.3.jar
hbase-0.20.3-test.jar
hive-anttasks-0.6.0.jar
hive-cli-0.6.0.jar
hive-common-0.6.0.jar
hive_contrib.jar
hive-exec-0.6.0.jar
hive_hbase-handler.jar
hive-hwi-0.6.0.jar
hive-hwi-0.6.0.war
hive-jdbc-0.6.0.jar
hivelib-1.0.20.jar
hive-metastore-0.6.0.jar
hive-serde-0.6.0.jar
hive-service-0.6.0.jar
hive-shims-0.6.0.jar
jdo2-api-2.3-SNAPSHOT.jar
jline-0.9.94.jar
json.jar
junit-3.8.1.jar
libfb303.jar
libthrift.jar
log4j-1.2.15.jar
mysql-connector-java-5.0.8-bin.jar
php
py
stringtemplate-3.1b1.jar
velocity-1.5.jar
zookeeper-3.2.2.jar

RE: enough alter tables in the same .q file eventually fail

Posted by Paul Yang <py...@facebook.com>.
The jars I added were:

datanucleus-connectionpool-1.0.2
commons-dbcp-1.2.2
datanucleus-connectionpool-1.0.2

Can you try adding those instead? I'm thinking there is an issue with the newer ones, as our datanucleus-core is 1.1.2.

From: Edward Capriolo [mailto:edlinuxguru@gmail.com]
Sent: Friday, April 30, 2010 2:12 PM
To: hive-user@hadoop.apache.org
Subject: Re: enough alter tables in the same .q file eventually fail




On Fri, Apr 30, 2010 at 4:48 PM, Paul Yang <py...@facebook.com> wrote:
Well, I can't seem to find my patch at the moment, but yeah, those are the same instructions that I followed. Can you post a list of jars that you have and the path?

From: Edward Capriolo [mailto:edlinuxguru@gmail.com]
Sent: Friday, April 30, 2010 1:21 PM

To: hive-user@hadoop.apache.org
Subject: Re: enough alter tables in the same .q file eventually fail

Paul,

Yes please if you have this it would be most helpful.

I followed the instructions here

http://www.datanucleus.org/products/accessplatform_1_0/rdbms/dbcp.html

Added this to my hive site..

 <property>
    <name>datanucleus.connectionPoolingType</name>
    <value>DBCP</value>
<description>Connection pool implementation for the JDBC metastore</description>
  </property>

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "DBCP" plugin to create a ConnectionPool gave an error : The connection pool plugin of type "DBCP" was not found in the CLASSPATH!

I do not understand this error; I am pretty sure I have every jar they asked for.
On Fri, Apr 30, 2010 at 1:08 PM, Paul Yang <py...@facebook.com> wrote:
Actually, yeah, I looked into connection pooling a while back and made a patch for the metastore. As I recall, it is just a configuration change + some jars. Let me see if I still have it...

From: Edward Capriolo [mailto:edlinuxguru@gmail.com]
Sent: Friday, April 30, 2010 10:03 AM

To: hive-user@hadoop.apache.org
Subject: Re: enough alter tables in the same .q file eventually fail


On Thu, Apr 22, 2010 at 1:54 PM, Edward Capriolo <ed...@gmail.com> wrote:

On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com> wrote:

On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com> wrote:

On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com> wrote:

On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:
Typo in Ed's last email (table name):
echo "create external table if not exists ed_test ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q

On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com> wrote:

On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com> wrote:

On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
Seems to be fixed in 0.6. Here's what I got:

test.q:
alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';


Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
OK
Time taken: 4.101 seconds
OK
Time taken: 0.558 seconds
OK
Time taken: 0.453 seconds
OK
Time taken: 0.416 seconds
OK
Time taken: 0.378 seconds
OK
Time taken: 0.457 seconds
OK
Time taken: 0.454 seconds


Can you post the stack trace in /tmp/<username>/hive.log?


-----Original Message-----
From: Prasad Chakka [mailto:pchakka@facebook.com]
Sent: Thursday, April 08, 2010 1:03 PM
To: hive-user@hadoop.apache.org
Subject: Re: enough alter tables in the same .q file eventually fail

There was a bug that got fixed where each request was creating a separate metastore client. That could be it or something similar that hasn't gotten fixed.

On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:

> Hive 5.0 mysql as metastore backend. Using external tables with location for partitions...
>
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000843') LOCATION 'hit_date=20100329/mid=000843';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000844') LOCATION 'hit_date=20100329/mid=000844';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000849') LOCATION 'hit_date=20100329/mid=000849';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000850') LOCATION 'hit_date=20100329/mid=000850';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000851') LOCATION 'hit_date=20100329/mid=000851';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000852') LOCATION 'hit_date=20100329/mid=000852';
>
> Eventually this fails after a number of entries.
>
> Time taken: 0.159 seconds
> OK
> Time taken: 0.17 seconds
> OK
> Time taken: 0.241 seconds
> FAILED: Error in metadata: Unable to fetch table XXXXX_action
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>
> Restarting the process after removing the already added tables works until it breaks again. Anyone ever dealt with this?
>
> Doing one hive -e per table always works but takes a lot longer ...3 seconds a partition rather than ~.5 seconds.
>
>

It does not happen after 4 or 5; it's more like 100 or 1000+. I will try to track this down a bit.

Edward


Try this:

echo "create external table if not exists edtest ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q
 for i in {1..3000} ; do echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';" ; done >> test.q
hive -f test.q

On Hive 5.0 I get a failure midway through.
Edward


Also, trying to do selects from the table without enough pruning in the where clause causes the same error; sometimes it surfaces as a JDBC/JPOX access-denied error.


Also, there are problems working with this type of table. :(

$ hive -e "explain select * from XXXXX_action "
Hive history file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXXX' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using password: YES)

Interestingly enough, if we specify some partitions we can dodge this error. I get the feeling that the select * is trying to select too many partitions, causing this error.

2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) - javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01.sd.pl.pvt' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
NestedThrowablesStackTrace:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer (SemanticAnalyzer.java:genMapRedTasks(4886)) - org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)

    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more

2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248)) - FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)

Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more





The same problem occurs when dropping partitions. At this point, any script I write that does heavy-duty metastore work has to be chunked with a unix tool like 'split' so it does not blow up halfway through 1000 inserts or 1000 drops. It is really counter-productive.
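For reference, this is roughly how I chunk the .q file today. A workaround sketch, not a fix; the table name 'demo' and the paths are placeholders, and the actual hive invocation is left as an echo:

```shell
# Workaround sketch: break a large .q file into small batches so a metastore
# failure only kills one batch instead of the whole run.
workdir=$(mktemp -d)
# generate a demo .q file with 10 ALTER statements (placeholder table/paths)
for i in $(seq 1 10); do
  echo "alter table demo add partition (dummy='$i') location '/tmp/duh';"
done > "$workdir/all.q"
# split into 4-statement chunks: all.q.aa, all.q.ab, all.q.ac
split -l 4 "$workdir/all.q" "$workdir/all.q."
for chunk in "$workdir"/all.q.[a-z]*; do
  echo "would run: hive -f $chunk"   # replace echo with the real hive call
done
```

With real data I also log which chunk failed, so only that batch needs to be cleaned up and rerun.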

Also, when using this type of layout with external tables, queries on partitions that do not exist blow up...


hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and mid=000979;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201004221345_0002, Tracking URL = http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job  -Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201004221345_0002 with errors

Failed tasks with most(4) failures :
Task URL: http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.ExecDriver
hive>
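A guard I have been considering: check SHOW PARTITIONS before querying, and skip instead of failing. Just a sketch; the here-doc stands in for the real `hive -e "SHOW PARTITIONS XXXXX_act"` output, and the partition strings are the ones from my examples above:

```shell
# Sketch: verify a partition is registered before querying it, so a query on
# a nonexistent partition can be skipped instead of blowing up the job.
# In real use, `partitions` would be: $(hive -e "SHOW PARTITIONS XXXXX_act")
partitions=$(cat <<'EOF'
hit_date=20100329/mid=000843
hit_date=20100329/mid=000844
EOF
)
want="hit_date=20990410/mid=000979"
if printf '%s\n' "$partitions" | grep -qx "$want"; then
  echo "partition registered; safe to query"
else
  echo "partition missing; skipping query"
fi
```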



So I have a mysql server running full logging to try to get to the bottom of this.
Running a hive trunk from a few days ago.
hive> show tables;

mysql results:
100430 12:23:27        7 Connect    hivadm@localhost on
            7 Init DB    m6_XXXX
            7 Query    SHOW SESSION VARIABLES
            7 Query    SHOW COLLATION
            7 Query    SET character_set_results = NULL
            7 Query    SET autocommit=1
            7 Query    SET sql_mode='STRICT_TRANS_TABLES'
            7 Query    SET autocommit=0
            7 Query    SELECT @@session.tx_isolation
            7 Query    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
            7 Query    SELECT `THIS`.`TBL_NAME` FROM `TBLS` `THIS` LEFT OUTER JOIN `DBS` `THIS_DATABASE_NAME` ON `THIS`.`DB_ID` = `THIS_DATABASE_NAME`.`DB_ID` WHERE `THIS_DATABASE_NAME`.`NAME` = 'default' AND (LOWER(`THIS`.`TBL_NAME`) LIKE '_%' ESCAPE '\\' )
            7 Query    commit
            7 Query    rollback


To me it looks like every query does a commit followed by a rollback. What is up with that?

I can also see that hive does not seem to be doing any connection pooling, or even re-using connections, for my query. I see it reconnecting and quitting for every query, so I am probably hitting some mysql connection limit. But should it really be connecting and re-connecting to fetch metadata for each partition? Since my table has tens of thousands of partitions, this is non-optimal.
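To quantify the churn I count connect vs. quit events in the general log; roughly equal, large numbers confirm one short-lived connection per metastore operation. Sketch only; the here-doc stands in for real log lines in the format shown above, and in practice you would point it at the actual log file:

```shell
# Count Connect vs. Quit events in MySQL general-log output.
count() {
  awk '/Connect/ {c++} /Quit/ {q++} END {printf "connects=%d quits=%d\n", c, q}' "$@"
}
# demo input in the log format above; pass the real log path to `count` instead
count <<'EOF'
100430 12:23:27    7 Connect    hivadm@localhost on
           7 Query    SHOW SESSION VARIABLES
           7 Quit
100430 12:23:28    8 Connect    hivadm@localhost on
           8 Quit
EOF
```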

Can JPOX be set up to pool or re-use current connections?

Regards
Edward

This is what I have:

[bob@rs01 opt]$ ls -1 /opt/hive-0.6.0-bin/lib/
antlr-runtime-3.0.1.jar
asm-3.1.jar
commons-cli-2.0-SNAPSHOT.jar
commons-codec-1.3.jar
commons-collections-3.2.1.jar
commons-dbcp-1.3.jar  ----------------------------added this--------
commons-lang-2.4.jar
commons-logging-1.0.4.jar
commons-logging-api-1.0.4.jar
commons-pool-1.5.4.jar ----------------------------added this--------
datanucleus-connectionpool-2.0.0-release.jar ----------------------------added this--------
datanucleus-core-1.1.2.jar
datanucleus-enhancer-1.1.2.jar
datanucleus-rdbms-1.1.2.jar
derby.jar
hbase-0.20.3.jar
hbase-0.20.3-test.jar
hive-anttasks-0.6.0.jar
hive-cli-0.6.0.jar
hive-common-0.6.0.jar
hive_contrib.jar
hive-exec-0.6.0.jar
hive_hbase-handler.jar
hive-hwi-0.6.0.jar
hive-hwi-0.6.0.war
hive-jdbc-0.6.0.jar
hivelib-1.0.20.jar
hive-metastore-0.6.0.jar
hive-serde-0.6.0.jar
hive-service-0.6.0.jar
hive-shims-0.6.0.jar
jdo2-api-2.3-SNAPSHOT.jar
jline-0.9.94.jar
json.jar
junit-3.8.1.jar
libfb303.jar
libthrift.jar
log4j-1.2.15.jar
mysql-connector-java-5.0.8-bin.jar
php
py
stringtemplate-3.1b1.jar
velocity-1.5.jar
zookeeper-3.2.2.jar
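In case it helps anyone reproduce: a quick sanity check that the pooling jars the DBCP plugin needs are actually present. Sketch only; the demo builds a throwaway lib dir with fake jars so it runs anywhere, but for a real check set LIB to your actual hive lib dir (here, /opt/hive-0.6.0-bin/lib):

```shell
# Check that the jars needed for DataNucleus DBCP pooling are on the lib path.
# Demo uses a temp dir with two fake jars; one is deliberately left missing.
LIB=$(mktemp -d)
touch "$LIB/commons-dbcp-1.3.jar" "$LIB/commons-pool-1.5.4.jar"
for jar in commons-dbcp commons-pool datanucleus-connectionpool; do
  if ls "$LIB"/${jar}*.jar >/dev/null 2>&1; then
    echo "found: $jar"
  else
    echo "MISSING: $jar"
  fi
done
```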

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Fri, Apr 30, 2010 at 4:48 PM, Paul Yang <py...@facebook.com> wrote:

>  Well, I can’t seem to find my patch at the moment, but yeah, those are
> the same instructions that I followed. Can you post a list of jars that you
> have and the path?
>
>
>
> *From:* Edward Capriolo [mailto:edlinuxguru@gmail.com]
> *Sent:* Friday, April 30, 2010 1:21 PM
>
> *To:* hive-user@hadoop.apache.org
> *Subject:* Re: enough alter tables in the same .q file eventually fail
>
>
>
> Paul,
>
> Yes please if you have this it would be most helpful.
>
> I followed the instructions here
>
> http://www.datanucleus.org/products/accessplatform_1_0/rdbms/dbcp.html
>
> Added this to my hive site..
>
>  <property>
>     <name>datanucleus.connectionPoolingType</name>
>     <value>DBCP</value>
>     <description>Connection pooling type for the JDBC metastore</description>
>   </property>
>
> Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke
> the "DBCP" plugin to create a ConnectionPool gave an error : The connection
> pool plugin of type "DBCP" was not found in the CLASSPATH!
>
> I do not understand this error; I am pretty sure I have every jar they asked for.
>
>  On Fri, Apr 30, 2010 at 1:08 PM, Paul Yang <py...@facebook.com> wrote:
>
> Actually, yeah, I looked into connection pooling a while back and made
> patch for the metastore. As I recall, it is just a configuration change +
> some jars. Let me see if I still have it…
>
>
>
> *From:* Edward Capriolo [mailto:edlinuxguru@gmail.com]
> *Sent:* Friday, April 30, 2010 10:03 AM
>
>
> *To:* hive-user@hadoop.apache.org
> *Subject:* Re: enough alter tables in the same .q file eventually fail
>
>
>
>
>
> On Thu, Apr 22, 2010 at 1:54 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:
>
> Typo in Ed's last email (table name):
> echo "create external table if not exists ed*_*test ( dat string )
> partitioned by (dummy string) location '/tmp/a';" > test.q
>
>
>
> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>
> Seems to be fixed in 0.6. Here's what I got:
>
> test.q:
> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>
>
> Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
> OK
> Time taken: 4.101 seconds
> OK
> Time taken: 0.558 seconds
> OK
> Time taken: 0.453 seconds
> OK
> Time taken: 0.416 seconds
> OK
> Time taken: 0.378 seconds
> OK
> Time taken: 0.457 seconds
> OK
> Time taken: 0.454 seconds
>
>
> Can you the stack trace in /tmp/<username>/hive.log?
>
>
>
>
> -----Original Message-----
> From: Prasad Chakka [mailto:pchakka@facebook.com]
> Sent: Thursday, April 08, 2010 1:03 PM
> To: hive-user@hadoop.apache.org
> Subject: Re: enough alter tables in the same .q file eventually fail
>
> There was a bug that got fixed where each request was creating a separate
> metastore client. That could be it or something similar that hasn't gotten
> fixed.
>
> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>
> > Hive 5.0 mysql as metastore backend. Using external tables with location
> for partitions...
> >
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000843') LOCATION 'hit_date=20100329/mid=000843';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000844') LOCATION 'hit_date=20100329/mid=000844';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000849') LOCATION 'hit_date=20100329/mid=000849';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000850') LOCATION 'hit_date=20100329/mid=000850';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000851') LOCATION 'hit_date=20100329/mid=000851';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000852') LOCATION 'hit_date=20100329/mid=000852';
> >
> > Eventually this fails after a number of entries.
> >
> > Time taken: 0.159 seconds
> > OK
> > Time taken: 0.17 seconds
> > OK
> > Time taken: 0.241 seconds
> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
> > FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> >
> > Restarting the process after removing the already added tables works
> until it breaks again. Anyone ever dealt with this?
> >
> > Doing one hive -e per table always works but takes a lot longer ...3
> seconds a partition rather than ~.5 seconds.
> >
> >
>
>
>
> It does not happen after 4 or 5, more like 100 or 1000+. I will try to track
> this down a bit.
>
> Edward
>
>
>
> Try this:
>
> echo "create external table if not exists edtest ( dat string ) partitioned
> by (dummy string) location '/tmp/a';" > test.q
>  for i in {1..3000} ; do echo "alter table ed_test add partition
> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
> hive -f test.q
>
> On Hive 5.0 I get a failure mid way.
> Edward
>
>
>
>
>
> Also, trying to do selects from the table without enough pruning in the where
> clause causes the same error; sometimes it surfaces as a JDBC/JPOX
> access-denied error.
>
>
>
> There are also problems working with this type of table. :(
>
> $ hive -e "explain select * from XXXXX_action "
> Hive history
> file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
> FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access
> denied for user 'hivadm'@'XXXXXX' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using
> password: YES)
>
> Interestingly enough, if we specify some partitions we can dodge this error.
> I get the feeling that the select * is touching too many partitions, which
> triggers the error.
>
> 2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) -
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01
> .sd.pl.pvt' (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> NestedThrowablesStackTrace:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> 2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer
> (SemanticAnalyzer.java:genMapRedTasks(4886)) -
> org.apache.hadoop.hive.ql.metadata.HiveExcepti
> on: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     ... 23 more
>
> 2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248))
> - FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException:
>  Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using
> password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> org.apache.hadoop.hive.ql.parse.SemanticException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using passwo
> rd: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (us
> ing password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     ... 15 more
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>
> Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password:
> YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> org.apache.hadoop.hive.ql.parse.SemanticException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using passwo
> rd: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (us
> ing password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     ... 15 more
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     ... 23 more
>
>
>
>
>
>
>
>
> The same problem occurs when dropping partitions. At this stage, for any
> program I write that does heavy-duty work, I need to prefilter with a unix
> tool like 'split' to make sure I do not blow up halfway through 1000 inserts
> or 1000 drops. It is really counterproductive.
>
>
>
> Also when using this type of layout with external tables, queries on
> partitions that do not exist blow up...
>
>
> hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and
> mid=000979;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201004221345_0002, Tracking URL =
> http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
> Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job
> -Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
> 2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
> 2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201004221345_0002 with errors
>
> Failed tasks with most(4) failures :
> Task URL:
> http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000
>
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> hive>
>
>
> ...argh...
>
>
> So I have a mysql server running full logging to try to get to the bottom
> of this.
> Running a hive trunk from a few days ago.
> hive>show tables
>
> mysql results:
> 100430 12:23:27        7 Connect    hivadm@localhost on
>             7 Init DB    m6_XXXX
>             7 Query    SHOW SESSION VARIABLES
>             7 Query    SHOW COLLATION
>             7 Query    SET character_set_results = NULL
>             7 Query    SET autocommit=1
>             7 Query    SET sql_mode='STRICT_TRANS_TABLES'
>             7 Query    SET autocommit=0
>             7 Query    SELECT @@session.tx_isolation
>             7 Query    SET SESSION TRANSACTION ISOLATION LEVEL READ
> COMMITTED
>             7 Query    SELECT `THIS`.`TBL_NAME` FROM `TBLS` `THIS` LEFT
> OUTER JOIN `DBS` `THIS_DATABASE_NAME` ON `THIS`.`DB_ID` =
> `THIS_DATABASE_NAME`.`DB_ID` WHERE `THIS_DATABASE_NAME`.`NAME` = 'default'
> AND (LOWER(`THIS`.`TBL_NAME`) LIKE '_%' ESCAPE '\\' )
>             7 Query    commit
>             7 Query    rollback
>
>
> To me it looks like every query does a commit followed by a rollback. What
> is up with that?
>
> I can also see that for my query Hive does not seem to be doing any
> connection pooling, or even re-using connections. I see it reconnecting and
> quitting for every query. So I am probably hitting some MySQL connection
> limit, but should it really be connecting and re-connecting to fetch
> metadata for each partition? Since my table has tens of thousands of
> partitions this is non-optimal.
>
> Can JPOX be set up to pool or re-use current connections?
>
> Regards
> Edward
>
>
>
This is what I have:

[bob@rs01 opt]$ ls -1 /opt/hive-0.6.0-bin/lib/
antlr-runtime-3.0.1.jar
asm-3.1.jar
commons-cli-2.0-SNAPSHOT.jar
commons-codec-1.3.jar
commons-collections-3.2.1.jar
commons-dbcp-1.3.jar  ----------------------------added this--------
commons-lang-2.4.jar
commons-logging-1.0.4.jar
commons-logging-api-1.0.4.jar
commons-pool-1.5.4.jar ----------------------------added this--------
datanucleus-connectionpool-2.0.0-release.jar ----------------------------added this--------
datanucleus-core-1.1.2.jar
datanucleus-enhancer-1.1.2.jar
datanucleus-rdbms-1.1.2.jar
derby.jar
hbase-0.20.3.jar
hbase-0.20.3-test.jar
hive-anttasks-0.6.0.jar
hive-cli-0.6.0.jar
hive-common-0.6.0.jar
hive_contrib.jar
hive-exec-0.6.0.jar
hive_hbase-handler.jar
hive-hwi-0.6.0.jar
hive-hwi-0.6.0.war
hive-jdbc-0.6.0.jar
hivelib-1.0.20.jar
hive-metastore-0.6.0.jar
hive-serde-0.6.0.jar
hive-service-0.6.0.jar
hive-shims-0.6.0.jar
jdo2-api-2.3-SNAPSHOT.jar
jline-0.9.94.jar
json.jar
junit-3.8.1.jar
libfb303.jar
libthrift.jar
log4j-1.2.15.jar
mysql-connector-java-5.0.8-bin.jar
php
py
stringtemplate-3.1b1.jar
velocity-1.5.jar
zookeeper-3.2.2.jar
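
A quick way to confirm the three added jars are actually on Hive's classpath is to glob the lib dir for them. This is a self-contained sketch: it fabricates a throwaway demo lib directory with the same jar names as the listing above so it runs anywhere; in practice point HIVE_LIB at the real install (e.g. /opt/hive-0.6.0-bin/lib).

```shell
# Demo lib dir standing in for the real one (sketch only);
# in practice: HIVE_LIB=/opt/hive-0.6.0-bin/lib
HIVE_LIB=$(mktemp -d)
touch "$HIVE_LIB/commons-dbcp-1.3.jar" \
      "$HIVE_LIB/commons-pool-1.5.4.jar" \
      "$HIVE_LIB/datanucleus-connectionpool-2.0.0-release.jar"

# The DataNucleus DBCP plugin needs all three of these present.
missing=0
for jar in commons-dbcp commons-pool datanucleus-connectionpool; do
  ls "$HIVE_LIB"/"$jar"-*.jar >/dev/null 2>&1 || { echo "MISSING: $jar"; missing=1; }
done
echo "missing=$missing"
```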

RE: enough alter tables in the same .q file eventually fail

Posted by Paul Yang <py...@facebook.com>.
Well, I can't seem to find my patch at the moment, but yeah, those are the same instructions that I followed. Can you post a list of jars that you have and the path?

From: Edward Capriolo [mailto:edlinuxguru@gmail.com]
Sent: Friday, April 30, 2010 1:21 PM
To: hive-user@hadoop.apache.org
Subject: Re: enough alter tables in the same .q file eventually fail

Paul,

Yes please if you have this it would be most helpful.

I followed the instructions here

http://www.datanucleus.org/products/accessplatform_1_0/rdbms/dbcp.html

Added this to my hive site..

 <property>
    <name>datanucleus.connectionPoolingType</name>
    <value>DBCP</value>
    <description>Connection pooling type for the JDBC metastore</description>
  </property>

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke the "DBCP" plugin to create a ConnectionPool gave an error : The connection pool plugin of type "DBCP" was not found in the CLASSPATH!

I do not understand this error; I am pretty sure I have every jar they asked for.
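
For what it's worth, once the CLASSPATH problem is sorted out, the DataNucleus page linked above also documents pool-sizing knobs. A sketch of a fuller hive-site.xml fragment follows; the pool-size property names are assumptions taken from that page and should be double-checked against the DataNucleus version in use:

```xml
<!-- Sketch only: enable DBCP pooling and bound the pool size.
     Verify the connectionPool.* names against the DataNucleus DBCP docs. -->
<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>DBCP</value>
</property>
<property>
  <name>datanucleus.connectionPool.maxActive</name>
  <value>10</value>
</property>
<property>
  <name>datanucleus.connectionPool.maxIdle</name>
  <value>5</value>
</property>
```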

On Fri, Apr 30, 2010 at 1:08 PM, Paul Yang <py...@facebook.com>> wrote:
Actually, yeah, I looked into connection pooling a while back and made patch for the metastore. As I recall, it is just a configuration change + some jars. Let me see if I still have it...

From: Edward Capriolo [mailto:edlinuxguru@gmail.com<ma...@gmail.com>]
Sent: Friday, April 30, 2010 10:03 AM

To: hive-user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: enough alter tables in the same .q file eventually fail


On Thu, Apr 22, 2010 at 1:54 PM, Edward Capriolo <ed...@gmail.com>> wrote:

On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com>> wrote:
Typo in Ed's last email (table name):
echo "create external table if not exists ed_test ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q

On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>> wrote:

On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>> wrote:

On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com>> wrote:
Seems to be fixed in 0.6. Here's what I got:

test.q:
alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';


Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
OK
Time taken: 4.101 seconds
OK
Time taken: 0.558 seconds
OK
Time taken: 0.453 seconds
OK
Time taken: 0.416 seconds
OK
Time taken: 0.378 seconds
OK
Time taken: 0.457 seconds
OK
Time taken: 0.454 seconds


Can you post the stack trace in /tmp/<username>/hive.log?


-----Original Message-----
From: Prasad Chakka [mailto:pchakka@facebook.com<ma...@facebook.com>]
Sent: Thursday, April 08, 2010 1:03 PM
To: hive-user@hadoop.apache.org<ma...@hadoop.apache.org>
Subject: Re: enough alter tables in the same .q file eventually fail

There was a bug that got fixed where each request was creating a separate metastore client. That could be it or something similar that hasn't gotten fixed.

On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:

> Hive 5.0 mysql as metastore backend. Using external tables with location for partitions...
>
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000843') LOCATION 'hit_date=20100329/mid=000843';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000844') LOCATION 'hit_date=20100329/mid=000844';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000849') LOCATION 'hit_date=20100329/mid=000849';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000850') LOCATION 'hit_date=20100329/mid=000850';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000851') LOCATION 'hit_date=20100329/mid=000851';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000852') LOCATION 'hit_date=20100329/mid=000852';
>
> Eventually this fails after a number of entries.
>
> Time taken: 0.159 seconds
> OK
> Time taken: 0.17 seconds
> OK
> Time taken: 0.241 seconds
> FAILED: Error in metadata: Unable to fetch table XXXXX_action
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>
> Restarting the process after removing the already added tables works until it breaks again. Anyone ever dealt with this?
>
> Doing one hive -e per table always works but takes a lot longer ...3 seconds a partition rather than ~.5 seconds.
>
>

It does not happen after 4 or 5; more like 100 or 1000+. I will try to track this down a bit.

Edward


Try this:

echo "create external table if not exists ed_test ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q
for i in {1..3000} ; do echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';" ; done >> test.q
hive -f test.q

On Hive 5.0 I get a failure mid way.
Edward


Also, trying to do selects from the table without enough pruning in the where clause causes the same error; sometimes it comes as a JDBC/JPOX access-denied error.


Also, there are problems working with this type of table as well. :(

$ hive -e "explain select * from XXXXX_action "
Hive history file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXXX' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using password: YES)

Interestingly enough, if we specify some partitions we can dodge this error. I get the feeling that the select * is trying to select too many partitions, causing this error.

2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) - javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01
.sd.pl.pvt' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
NestedThrowablesStackTrace:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer (SemanticAnalyzer.java:genMapRedTasks(4886)) - org.apache.hadoop.hive.ql.metadata.HiveExcepti
on: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)

    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more

2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248)) - FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException:
 Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using passwo
rd: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (us
ing password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)

Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using passwo
rd: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (us
ing password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more





The same problem occurs when dropping partitions. At this stage, any program I write that does heavy-duty metastore work I need to pre-split with a unix tool like 'split', to make sure I do not blow up halfway through 1000 inserts or 1000 drops. It is really counterproductive.
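Concretely, the kind of pre-splitting I mean looks like this (a sketch only: it assumes GNU coreutils `split -l`, one statement per line, and made-up table/partition names; the real `hive -f` call is left commented out since it needs a local Hive install):

```shell
#!/bin/sh
set -e
workdir=$(mktemp -d)

# Generate a big .q file, one ALTER TABLE ... ADD PARTITION per line.
for i in $(seq 1 2000); do
  printf "alter table ed_test add partition (dummy='%s') location '/tmp/duh';\n" "$i"
done > "$workdir/all.q"

# Break it into 500-statement chunks so a metastore failure only
# costs one chunk, not the whole batch.
split -l 500 "$workdir/all.q" "$workdir/chunk."

for q in "$workdir"/chunk.*; do
  echo "would run: hive -f $q"
  # hive -f "$q"
done
```

Re-running after a failure then only needs the chunks that did not complete, instead of diffing out the partitions that were already added.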

Also when using this type of layout with external tables, queries on partitions that do not exist blow up...


hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and mid=000979;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201004221345_0002, Tracking URL = http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job  -Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201004221345_0002 with errors

Failed tasks with most(4) failures :
Task URL: http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.ExecDriver
hive>


...arge...

So I have a mysql server running full logging to try to get to the bottom of this.
Running a hive trunk from a few days ago.
hive> show tables;

mysql results:
100430 12:23:27        7 Connect    hivadm@localhost on
            7 Init DB    m6_XXXX
            7 Query    SHOW SESSION VARIABLES
            7 Query    SHOW COLLATION
            7 Query    SET character_set_results = NULL
            7 Query    SET autocommit=1
            7 Query    SET sql_mode='STRICT_TRANS_TABLES'
            7 Query    SET autocommit=0
            7 Query    SELECT @@session.tx_isolation
            7 Query    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
            7 Query    SELECT `THIS`.`TBL_NAME` FROM `TBLS` `THIS` LEFT OUTER JOIN `DBS` `THIS_DATABASE_NAME` ON `THIS`.`DB_ID` = `THIS_DATABASE_NAME`.`DB_ID` WHERE `THIS_DATABASE_NAME`.`NAME` = 'default' AND (LOWER(`THIS`.`TBL_NAME`) LIKE '_%' ESCAPE '\\' )
            7 Query    commit
            7 Query    rollback


To me it looks like every query does a commit followed by a rollback. What is up with that?

I can also see that, for my query, hive does not seem to be doing any connection pooling, or even re-using connections; it reconnects and quits for every query. So I am probably hitting some mysql connection limit. But should it really be connecting and re-connecting to fetch metadata for each partition? Since my table has tens of thousands of partitions, this is non-optimal.
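If it really is a connection limit, the MySQL side should show it. A diagnostic sketch (these are standard MySQL server variables, nothing Hive-specific; the 500 is an arbitrary example value):

```sql
-- Current cap on simultaneous client connections
SHOW VARIABLES LIKE 'max_connections';
-- High-water mark since the server started; if this has reached
-- max_connections, clients were being refused
SHOW STATUS LIKE 'Max_used_connections';
-- Temporary relief until connection reuse/pooling is sorted out
SET GLOBAL max_connections = 500;
```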

Can JPOX be set up to pool or re-use its current connections?

Regards
Edward


Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
Paul,

Yes please if you have this it would be most helpful.

I followed the instructions here

http://www.datanucleus.org/products/accessplatform_1_0/rdbms/dbcp.html

Added this to my hive site..

 <property>
    <name>datanucleus.connectionPoolingType</name>
    <value>DBCP</value>
    <description>Connection pooling implementation to use for the JDBC metastore</description>
  </property>

Caused by: org.datanucleus.exceptions.NucleusException: Attempt to invoke
the "DBCP" plugin to create a ConnectionPool gave an error : The connection
pool plugin of type "DBCP" was not found in the CLASSPATH!

I do not understand this error; I am pretty sure I have every jar they asked for.
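For anyone else hitting this: DataNucleus loads the "DBCP" pool via its plugin mechanism, so the pooling jars need to be on the classpath Hive actually launches with. A sketch of a sanity check, pointed at a scratch directory with one jar simulated as present so it runs anywhere (the jar names and versions are guesses, not confirmed):

```shell
#!/bin/sh
# On a real install, set HIVE_LIB="$HIVE_HOME/lib" instead of a scratch dir
# and drop the touch line; it is only here to simulate one installed jar.
HIVE_LIB=$(mktemp -d)
touch "$HIVE_LIB/commons-pool-1.2.jar"

missing=""
for jar in commons-dbcp commons-pool datanucleus-connectionpool; do
  # Any versioned jar matching the prefix counts as present.
  if ls "$HIVE_LIB/$jar"*.jar >/dev/null 2>&1; then
    echo "found: $jar"
  else
    echo "missing: $jar"
    missing="$missing $jar"
  fi
done
echo "still missing:$missing"
```

Any jar still reported missing from the real lib directory would explain the "not found in the CLASSPATH" error above.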


On Fri, Apr 30, 2010 at 1:08 PM, Paul Yang <py...@facebook.com> wrote:

>  Actually, yeah, I looked into connection pooling a while back and made a
> patch for the metastore. As I recall, it is just a configuration change +
> some jars. Let me see if I still have it…
>
>
>
> *From:* Edward Capriolo [mailto:edlinuxguru@gmail.com]
> *Sent:* Friday, April 30, 2010 10:03 AM
>
> *To:* hive-user@hadoop.apache.org
> *Subject:* Re: enough alter tables in the same .q file eventually fail
>
>
>
>
>
> On Thu, Apr 22, 2010 at 1:54 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:
>
> Typo in Ed's last email (table name):
> echo "create external table if not exists ed*_*test ( dat string )
> partitioned by (dummy string) location '/tmp/a';" > test.q
>
>
>
> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
>  On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>
> wrote:
>
>
>
> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>
> Seems to be fixed in 0.6. Here's what I got:
>
> test.q:
> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>
>
> Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
> OK
> Time taken: 4.101 seconds
> OK
> Time taken: 0.558 seconds
> OK
> Time taken: 0.453 seconds
> OK
> Time taken: 0.416 seconds
> OK
> Time taken: 0.378 seconds
> OK
> Time taken: 0.457 seconds
> OK
> Time taken: 0.454 seconds
>
>
> Can you the stack trace in /tmp/<username>/hive.log?
>
>
>
>
> -----Original Message-----
> From: Prasad Chakka [mailto:pchakka@facebook.com]
> Sent: Thursday, April 08, 2010 1:03 PM
> To: hive-user@hadoop.apache.org
> Subject: Re: enough alter tables in the same .q file eventually fail
>
> There was a bug that got fixed where each request was creating a separate
> metastore client. That could be it or something similar that hasn't gotten
> fixed.
>
> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>
> > Hive 5.0 mysql as metastore backend. Using external tables with location
> for partitions...
> >
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000843') LOCATION 'hit_date=20100329/mid=000843';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000844') LOCATION 'hit_date=20100329/mid=000844';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000849') LOCATION 'hit_date=20100329/mid=000849';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000850') LOCATION 'hit_date=20100329/mid=000850';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000851') LOCATION 'hit_date=20100329/mid=000851';
> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
> '000852') LOCATION 'hit_date=20100329/mid=000852';
> >
> > Eventually this fails after a number of entries.
> >
> > Time taken: 0.159 seconds
> > OK
> > Time taken: 0.17 seconds
> > OK
> > Time taken: 0.241 seconds
> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
> > FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.DDLTask
> >
> > Restarting the process after removing the already added tables works
> until it breaks again. Anyone ever dealt with this?
> >
> > Doing one hive -e per table always works but takes a lot longer ...3
> seconds a partition rather than ~.5 seconds.
> >
> >
>
>
>
> It does not happen after 4 or 5; more like 100 or 1000+. I will try to track
> this down a bit.
>
> Edward
>
>
>
> Try this:
>
> echo "create external table if not exists edtest ( dat string ) partitioned
> by (dummy string) location '/tmp/a';" > test.q
>  for i in {1..3000} ; do echo "alter table ed_test add partition
> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
> hive -f test.q
>
> On Hive 5.0 I get a failure mid way.
> Edward
>
>
>
>
>
> Also, trying to do selects from the table without enough pruning in the
> where clause causes the same error; sometimes it comes as a JDBC/JPOX
> access-denied error.
>
>
>
> Also, there are problems working with this type of table as well. :(
>
> $ hive -e "explain select * from XXXXX_action "
> Hive history
> file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
> FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access
> denied for user 'hivadm'@'XXXXXX' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using
> password: YES)
>
> Interestingly enough if we specify some partitions we can dodge this error.
> I get the feeling that the select * is trying to select too many partitions
> and causing this error.
>
> 2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) -
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01.sd.pl.pvt' (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> NestedThrowablesStackTrace:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> 2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer
> (SemanticAnalyzer.java:genMapRedTasks(4886)) -
> org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     ... 23 more
>
> 2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248))
> - FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException:
>  Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using
> password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> org.apache.hadoop.hive.ql.parse.SemanticException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     ... 15 more
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>
> Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password:
> YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> org.apache.hadoop.hive.ql.parse.SemanticException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     ... 15 more
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     ... 23 more
>
>
>
>
>
>
>
>
> The same problem occurs when dropping partitions. At this stage, any program I
> write that does heavy-duty work has to be pre-split with a unix tool like
> 'split' so that it does not blow up halfway through 1000 inserts or 1000
> drops. It is really counterproductive.
>
>
>
> Also, when using this type of layout with external tables, queries on
> partitions that do not exist blow up...
>
>
> hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and
> mid=000979;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201004221345_0002, Tracking URL =
> http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
> Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job
> -Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
> 2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
> 2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201004221345_0002 with errors
>
> Failed tasks with most(4) failures :
> Task URL:
> http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000
>
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> hive>
>
>
> ...argh...
>
>
> So I have a mysql server running full logging to try to get to the bottom
> of this.
> Running a hive trunk from a few days ago.
> hive>show tables
>
> mysql results:
> 100430 12:23:27        7 Connect    hivadm@localhost on
>             7 Init DB    m6_XXXX
>             7 Query    SHOW SESSION VARIABLES
>             7 Query    SHOW COLLATION
>             7 Query    SET character_set_results = NULL
>             7 Query    SET autocommit=1
>             7 Query    SET sql_mode='STRICT_TRANS_TABLES'
>             7 Query    SET autocommit=0
>             7 Query    SELECT @@session.tx_isolation
>             7 Query    SET SESSION TRANSACTION ISOLATION LEVEL READ
> COMMITTED
>             7 Query    SELECT `THIS`.`TBL_NAME` FROM `TBLS` `THIS` LEFT
> OUTER JOIN `DBS` `THIS_DATABASE_NAME` ON `THIS`.`DB_ID` =
> `THIS_DATABASE_NAME`.`DB_ID` WHERE `THIS_DATABASE_NAME`.`NAME` = 'default'
> AND (LOWER(`THIS`.`TBL_NAME`) LIKE '_%' ESCAPE '\\' )
>             7 Query    commit
>             7 Query    rollback
>
>
> To me it looks like every query does a commit followed by a rollback. What
> is up with that?
>
> I can also see that for my query, hive does not seem to be doing any
> connection pooling, or even re-using connections. I see it reconnecting and
> quitting for every query. So I am probably hitting some mysql connection
> limit, but should it really be connecting and re-connecting to fetch
> meta-data for each partition? Since my table has tens of thousands of
> partitions, this is non-optimal.
>
> Can JPOX be set up to pool or re-use existing connections?
>
> Regards
> Edward
>

RE: enough alter tables in the same .q file eventually fail

Posted by Paul Yang <py...@facebook.com>.
Actually, yeah, I looked into connection pooling a while back and made a patch for the metastore. As I recall, it is just a configuration change + some jars. Let me see if I still have it...
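For reference, the kind of configuration change being described is DataNucleus's connection pooling switch. This is only a sketch: the property name is per the DataNucleus documentation of that era, and the commons-dbcp/commons-pool jars (commons-pool is named later in the thread) are assumed to be on the metastore classpath.

```xml
<!-- hive-site.xml: have DataNucleus (JPOX) pool metastore JDBC connections
     via DBCP instead of opening a fresh MySQL connection per request.
     Requires commons-dbcp and commons-pool jars on the classpath. -->
<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>DBCP</value>
</property>
```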

From: Edward Capriolo [mailto:edlinuxguru@gmail.com]
Sent: Friday, April 30, 2010 10:03 AM
To: hive-user@hadoop.apache.org
Subject: Re: enough alter tables in the same .q file eventually fail


On Thu, Apr 22, 2010 at 1:54 PM, Edward Capriolo <ed...@gmail.com>> wrote:

On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>> wrote:

On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com>> wrote:
Typo in Ed's last email (table name):
echo "create external table if not exists ed_test ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q

On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>> wrote:


On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>> wrote:

On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com>> wrote:
Seems to be fixed in 0.6. Here's what I got:

test.q:
alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';


Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
OK
Time taken: 4.101 seconds
OK
Time taken: 0.558 seconds
OK
Time taken: 0.453 seconds
OK
Time taken: 0.416 seconds
OK
Time taken: 0.378 seconds
OK
Time taken: 0.457 seconds
OK
Time taken: 0.454 seconds


Can you post the stack trace in /tmp/<username>/hive.log?


-----Original Message-----
From: Prasad Chakka [mailto:pchakka@facebook.com]
Sent: Thursday, April 08, 2010 1:03 PM
To: hive-user@hadoop.apache.org
Subject: Re: enough alter tables in the same .q file eventually fail

There was a bug that got fixed where each request was creating a separate metastore client. That could be it or something similar that hasn't gotten fixed.

On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:

> Hive 5.0 mysql as metastore backend. Using external tables with location for partitions...
>
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000843') LOCATION 'hit_date=20100329/mid=000843';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000844') LOCATION 'hit_date=20100329/mid=000844';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000849') LOCATION 'hit_date=20100329/mid=000849';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000850') LOCATION 'hit_date=20100329/mid=000850';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000851') LOCATION 'hit_date=20100329/mid=000851';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000852') LOCATION 'hit_date=20100329/mid=000852';
>
> Eventually this fails after a number of entries.
>
> Time taken: 0.159 seconds
> OK
> Time taken: 0.17 seconds
> OK
> Time taken: 0.241 seconds
> FAILED: Error in metadata: Unable to fetch table XXXXX_action
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
>
> Restarting the process after removing the already added tables works until it breaks again. Anyone ever dealt with this?
>
> Doing one hive -e per table always works but takes a lot longer ...3 seconds a partition rather then ~.5 seconds.
>
>

It does not happen after 4 or 5, more like after 100 or 1000+. I will try to track this down a bit.

Edward


Try this:

echo "create external table if not exists edtest ( dat string ) partitioned by (dummy string) location '/tmp/a';" > test.q
 for i in {1..3000} ; do echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';" ; done >> test.q
hive -f test.q

On Hive 5.0 I get a failure midway.
Edward


Also, doing selects from the table without enough pruning in the where clause causes the same error; sometimes it comes as a JDBC/JPOX access-denied error.


Also, there are problems working with this type of table. :(

$ hive -e "explain select * from XXXXX_action "
Hive history file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXXX' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using password: YES)

Interestingly enough, if we specify some partitions we can dodge this error. I get the feeling that the select * is trying to select too many partitions, causing this error.

2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) - javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01.sd.pl.pvt' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
NestedThrowablesStackTrace:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer (SemanticAnalyzer.java:genMapRedTasks(4886)) - org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)

    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more

2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248)) - FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)

Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more





The same problem occurs when dropping partitions. At this stage, any heavy-duty script I write has to be pre-split with a unix tool like 'split' so that it does not blow up halfway through 1000 inserts or 1000 drops. It is really counterproductive.
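For what it's worth, the 'split' workaround described above can be sketched as a small shell script. The table name, chunk size of 100, and locations are placeholders from the earlier reproduction, and the hive invocation is guarded so the generate-and-split part can run on its own:

```shell
#!/bin/sh
# Generate one ALTER TABLE per partition, as in the earlier reproduction.
for i in $(seq 1 1000); do
  echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';"
done > all.q

# Break the script into 100-statement chunks so a metastore failure only
# costs one chunk, which can be cleaned up and re-run.
split -l 100 all.q chunk.

# Run each chunk in its own CLI session; stop at the first failing chunk.
for f in chunk.*; do
  if command -v hive >/dev/null 2>&1; then
    hive -f "$f" || { echo "failed on $f" >&2; break; }
  fi
done
```

This keeps the "one hive -e per statement" safety property while paying the CLI startup cost only once per 100 partitions.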


Also, when using this type of layout with external tables, queries on partitions that do not exist blow up...


hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and mid=000979;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201004221345_0002, Tracking URL = http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job  -Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201004221345_0002 with errors

Failed tasks with most(4) failures :
Task URL: http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.ExecDriver
hive>


...argh...
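One crude guard against launching a doomed job on a partition that was never added is to check the metastore's partition list first. This is only a sketch: the partition value comes from the failing query above, and the SHOW PARTITIONS output is simulated with a heredoc (in practice it would come from `hive -e "show partitions <table>"`) so the check can be exercised without a cluster.

```shell
#!/bin/sh
# Stand-in for: hive -e "show partitions XXXXX_act"
list_partitions() {
  cat <<'EOF'
hit_date=20100329/mid=000843
hit_date=20100329/mid=000844
EOF
}

want='hit_date=20990410/mid=000979'
# grep -x matches the whole line, so partial partition specs do not
# count as a hit.
if list_partitions | grep -qx "$want"; then
  echo "partition registered; safe to run the select"
else
  echo "partition $want not registered; skipping query"
fi
```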

So I have a mysql server running full logging to try to get to the bottom of this.
Running a hive trunk from a few days ago.
hive>show tables

mysql results:
100430 12:23:27        7 Connect    hivadm@localhost on
            7 Init DB    m6_XXXX
            7 Query    SHOW SESSION VARIABLES
            7 Query    SHOW COLLATION
            7 Query    SET character_set_results = NULL
            7 Query    SET autocommit=1
            7 Query    SET sql_mode='STRICT_TRANS_TABLES'
            7 Query    SET autocommit=0
            7 Query    SELECT @@session.tx_isolation
            7 Query    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED
            7 Query    SELECT `THIS`.`TBL_NAME` FROM `TBLS` `THIS` LEFT OUTER JOIN `DBS` `THIS_DATABASE_NAME` ON `THIS`.`DB_ID` = `THIS_DATABASE_NAME`.`DB_ID` WHERE `THIS_DATABASE_NAME`.`NAME` = 'default' AND (LOWER(`THIS`.`TBL_NAME`) LIKE '_%' ESCAPE '\\' )
            7 Query    commit
            7 Query    rollback


To me it looks like every query does a commit followed by a rollback. What is up with that?

I can also see that Hive does not seem to be doing any connection pooling, or even reusing connections, for my query. It reconnects and quits for every query, so I am probably hitting some MySQL connection limit. But should it really be connecting and reconnecting to fetch metadata for each partition? Since my table has tens of thousands of partitions, this is non-optimal.

Can JPOX be set up to pool or reuse connections?
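For what it's worth, DataNucleus (the JPOX successor) does have a connection-pooling hook. A hedged hive-site.xml sketch, assuming the DataNucleus version bundled with this Hive build honors `datanucleus.connectionPoolingType` and that the commons-dbcp/commons-pool jars are on Hive's classpath:

```xml
<!-- Assumption: this DataNucleus build supports DBCP pooling and the
     commons-dbcp / commons-pool jars are available in Hive's lib dir. -->
<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>DBCP</value>
</property>
```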

Regards
Edward

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Thu, Apr 22, 2010 at 1:54 PM, Edward Capriolo <ed...@gmail.com>wrote:

>
>
> On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com>wrote:
>
>>
>>
>> On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>wrote:
>>
>>>
>>>
>>> On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <edlinuxguru@gmail.com
>>> > wrote:
>>>
>>>>
>>>>
>>>> On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:
>>>>
>>>>> Typo in Ed's last email (table name):
>>>>> echo "create external table if not exists ed*_*test ( dat string )
>>>>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>>>>
>>>>>
>>>>> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <edlinuxguru@gmail.com
>>>>> > wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <
>>>>>> edlinuxguru@gmail.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com>wrote:
>>>>>>>
>>>>>>>> Seems to be fixed in 0.6. Here's what I got:
>>>>>>>>
>>>>>>>> test.q:
>>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION
>>>>>>>> '/tmp/blah2';
>>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION
>>>>>>>> '/tmp/blah2';
>>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION
>>>>>>>> '/tmp/blah2';
>>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION
>>>>>>>> '/tmp/blah2';
>>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION
>>>>>>>> '/tmp/blah2';
>>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION
>>>>>>>> '/tmp/blah2';
>>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION
>>>>>>>> '/tmp/blah2';
>>>>>>>>
>>>>>>>>
>>>>>>>> Hive history
>>>>>>>> file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
>>>>>>>> OK
>>>>>>>> Time taken: 4.101 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 0.558 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 0.453 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 0.416 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 0.378 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 0.457 seconds
>>>>>>>> OK
>>>>>>>> Time taken: 0.454 seconds
>>>>>>>>
>>>>>>>>
>>>>>>>> Can you the stack trace in /tmp/<username>/hive.log?
>>>>>>>>
>>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Prasad Chakka [mailto:pchakka@facebook.com]
>>>>>>>> Sent: Thursday, April 08, 2010 1:03 PM
>>>>>>>> To: hive-user@hadoop.apache.org
>>>>>>>> Subject: Re: enough alter tables in the same .q file eventually fail
>>>>>>>>
>>>>>>>> There was a bug that got fixed where each request was creating a
>>>>>>>> separate metastore client. That could be it or something similar that hasn't
>>>>>>>> gotten fixed.
>>>>>>>>
>>>>>>>> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>>>>>>>>
>>>>>>>> > Hive 5.0 mysql as metastore backend. Using external tables with
>>>>>>>> location for partitions...
>>>>>>>> >
>>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>>> = '000843') LOCATION 'hit_date=20100329/mid=000843';
>>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>>> = '000844') LOCATION 'hit_date=20100329/mid=000844';
>>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>>> = '000849') LOCATION 'hit_date=20100329/mid=000849';
>>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>>> = '000850') LOCATION 'hit_date=20100329/mid=000850';
>>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>>> = '000851') LOCATION 'hit_date=20100329/mid=000851';
>>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>>> = '000852') LOCATION 'hit_date=20100329/mid=000852';
>>>>>>>> >
>>>>>>>> > Eventually this fails after a number of entries.
>>>>>>>> >
>>>>>>>> > Time taken: 0.159 seconds
>>>>>>>> > OK
>>>>>>>> > Time taken: 0.17 seconds
>>>>>>>> > OK
>>>>>>>> > Time taken: 0.241 seconds
>>>>>>>> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
>>>>>>>> > FAILED: Execution Error, return code 1 from
>>>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask
>>>>>>>> >
>>>>>>>> > Restarting the process after removing the already added tables
>>>>>>>> works until it breaks again. Anyone ever dealt with this?
>>>>>>>> >
>>>>>>>> > Doing one hive -e per table always works but takes a lot longer
>>>>>>>> ...3 seconds a partition rather then ~.5 seconds.
>>>>>>>> >
>>>>>>>> >
>>>>>>>>
>>>>>>>>
>>>>>>> It does not happen after 4 or 5 more like 100 or 1000+. I will try to
>>>>>>> track this down a bit.
>>>>>>>
>>>>>>> Edward
>>>>>>>
>>>>>>
>>>>>>
>>>>>> Try this:
>>>>>>
>>>>>> echo "create external table if not exists edtest ( dat string )
>>>>>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>>>>>  for i in {1..3000} ; do echo "alter table ed_test add partition
>>>>>> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
>>>>>> hive -f test.q
>>>>>>
>>>>>> On Hive 5.0 I get a failure mid way.
>>>>>> Edward
>>>>>>
>>>>>
>>>>>
>>>> Also trying to do selects from the table without enough pruning in the
>>>> where clause causes the same error, sometimes it comes as a JDBC/jpox access
>>>> denied error.
>>>>
>>>
>>>
>>> Also, there are problems working with this type of table as well. :(
>>>
>>> $ hive -e "explain select * from XXXXX_action "
>>> Hive history
>>> file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
>>> FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException:
>>> Access denied for user 'hivadm'@'XXXXXX' (using password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using
>>> password: YES)
>>>
>>> Interestingly enough if we specify some partitions we can dodge this
>>> error. I get the fealing that the select * is trying to select too many
>>> partitions and causing this error.
>>>
>>> 2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629))
>>> - javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01
>>> .sd.pl.pvt' (using password: YES)
>>>     at
>>> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>>     at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>>     at
>>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>     at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>> NestedThrowablesStackTrace:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>>>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>>>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>>>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>>>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>>>     at
>>> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>>>     at
>>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>>>     at
>>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>>>     at
>>> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>>>     at
>>> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>>>     at
>>> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>>>     at
>>> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>>>     at
>>> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>>>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>>>     at
>>> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>>>     at
>>> org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>>     at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>>     at
>>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>     at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>>
>>> 2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer
>>> (SemanticAnalyzer.java:genMapRedTasks(4886)) -
>>> org.apache.hadoop.hive.ql.metadata.HiveExcepti
>>> on: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>>>     at
>>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>     at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>> Caused by: javax.jdo.JDODataStoreException: Access denied for user
>>> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at
>>> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>>     at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>>     ... 17 more
>>> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>>>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>>>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>>>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>>>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>>>     at
>>> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>>>     at
>>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>>>     at
>>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>>>     at
>>> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>>>     at
>>> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>>>
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>>>     at
>>> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>>>     at
>>> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>>>     at
>>> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>>>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>>>     at
>>> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>>>     at
>>> org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>>>     ... 23 more
>>>
>>> 2010-04-12 10:33:02,793 ERROR ql.Driver
>>> (SessionState.java:printError(248)) - FAILED: Error in semantic analysis:
>>> javax.jdo.JDODataStoreException:
>>>  Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using
>>> password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>> org.apache.hadoop.hive.ql.parse.SemanticException:
>>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using passwo
>>> rd: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>     at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
>>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (us
>>> ing password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>>>     at
>>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>>     ... 15 more
>>> Caused by: javax.jdo.JDODataStoreException: Access denied for user
>>> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at
>>> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>>     at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>>     ... 17 more
>>> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>>>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>>>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>>>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>>>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>>>     at
>>> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>>>     at
>>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>>>     at
>>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>>>     at
>>> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>>>
>>> Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using
>>> password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>> org.apache.hadoop.hive.ql.parse.SemanticException:
>>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using passwo
>>> rd: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>>     at
>>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>     at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
>>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (us
>>> ing password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>>>     at
>>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>>     at
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>>     ... 15 more
>>> Caused by: javax.jdo.JDODataStoreException: Access denied for user
>>> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
>>> NestedThrowables:
>>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at
>>> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>>     at
>>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>>     at
>>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>>     at
>>> org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>>     ... 17 more
>>> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>>> (using password: YES)
>>>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>>>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>>>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>>>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>>>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>>>     at
>>> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>>>     at
>>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>>>     at
>>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>>>     at
>>> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>>>     at
>>> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>>>     at
>>> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>>>     at
>>> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>>>     at
>>> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>>>     at
>>> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>>>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>>>     at
>>> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>>>     at
>>> org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>>>     ... 23 more
>>>
>>>
>>>
>>>
>>>
>> The same problem occurs dropping partitions. At this stage any program I
>> write that does heavy duty work I need to prefilter with a unix tool like
>> 'split' to make sure I do not blow up half way through 1000 inserts or 1000
>> drops. It is really anti-productive.
>>
>>
>>
>
> Also when using this type of layout with external tables, queries on
> partitions that do not exist blow up...
>
>
> hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and
> mid=000979;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks is set to 0 since there's no reduce operator
> Starting Job = job_201004221345_0002, Tracking URL =
> http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
> Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job
> -Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
> 2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
> 2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
> Ended Job = job_201004221345_0002 with errors
>
> Failed tasks with most(4) failures :
> Task URL:
> http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000
>
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.ExecDriver
> hive>
>
>
> ...arge...
>
>
So I have a mysql server running full logging to try to get to the bottom of
this.
Running a hive trunk from a few days ago.
hive>show tables

mysql results:
100430 12:23:27        7 Connect    hivadm@localhost on
            7 Init DB    m6_XXXX
            7 Query    SHOW SESSION VARIABLES
            7 Query    SHOW COLLATION
            7 Query    SET character_set_results = NULL
            7 Query    SET autocommit=1
            7 Query    SET sql_mode='STRICT_TRANS_TABLES'
            7 Query    SET autocommit=0
            7 Query    SELECT @@session.tx_isolation
            7 Query    SET SESSION TRANSACTION ISOLATION LEVEL READ
COMMITTED
            7 Query    SELECT `THIS`.`TBL_NAME` FROM `TBLS` `THIS` LEFT
OUTER JOIN `DBS` `THIS_DATABASE_NAME` ON `THIS`.`DB_ID` =
`THIS_DATABASE_NAME`.`DB_ID` WHERE `THIS_DATABASE_NAME`.`NAME` = 'default'
AND (LOWER(`THIS`.`TBL_NAME`) LIKE '_%' ESCAPE '\\' )
            7 Query    commit
            7 Query    rollback


To me it looks like every query does a commit followed by a rollback. What
is up with that?

I can also see that, for my query, Hive does not seem to be doing any
connection pooling, or even re-using connections. I see it reconnecting and
quitting for every query. So I am probably hitting some MySQL connection
limit, but should it really be connecting and re-connecting to fetch the
meta-data for each partition? Since my table has tens of thousands of
partitions this is non-optimal.

Can JPOX be set up to pool or re-use current connections?

Regards
Edward
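[A sketch of the pooling setup being asked about. DataNucleus (the JPOX successor used by Hive trunk) can be told to draw its JDBC connections from a DBCP pool with a property roughly like the one below; treat the exact property name as an assumption to check against the DataNucleus docs for your version, and note that the commons-dbcp and commons-pool jars must be on Hive's classpath for it to take effect.]

```xml
<!-- Hypothetical hive-site.xml fragment: ask DataNucleus to manage a DBCP
     connection pool instead of opening a fresh MySQL connection per
     metastore call. Requires commons-dbcp and commons-pool jars on the
     Hive classpath. -->
<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>DBCP</value>
</property>
```

With something like this in place, the connect/quit pairs seen in the MySQL general log for every query should largely disappear.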

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Mon, Apr 19, 2010 at 10:59 AM, Edward Capriolo <ed...@gmail.com>wrote:

>
>
> On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>wrote:
>
>>
>>
>> On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>wrote:
>>
>>>
>>>
>>> On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:
>>>
>>>> Typo in Ed's last email (table name):
>>>> echo "create external table if not exists ed*_*test ( dat string )
>>>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>>>
>>>>
>>>> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>wrote:
>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <edlinuxguru@gmail.com
>>>>> > wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>>>>>>
>>>>>>> Seems to be fixed in 0.6. Here's what I got:
>>>>>>>
>>>>>>> test.q:
>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
>>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>>>>>>>
>>>>>>>
>>>>>>> Hive history
>>>>>>> file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
>>>>>>> OK
>>>>>>> Time taken: 4.101 seconds
>>>>>>> OK
>>>>>>> Time taken: 0.558 seconds
>>>>>>> OK
>>>>>>> Time taken: 0.453 seconds
>>>>>>> OK
>>>>>>> Time taken: 0.416 seconds
>>>>>>> OK
>>>>>>> Time taken: 0.378 seconds
>>>>>>> OK
>>>>>>> Time taken: 0.457 seconds
>>>>>>> OK
>>>>>>> Time taken: 0.454 seconds
>>>>>>>
>>>>>>>
>>>>>>> Can you check the stack trace in /tmp/<username>/hive.log?
>>>>>>>
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Prasad Chakka [mailto:pchakka@facebook.com]
>>>>>>> Sent: Thursday, April 08, 2010 1:03 PM
>>>>>>> To: hive-user@hadoop.apache.org
>>>>>>> Subject: Re: enough alter tables in the same .q file eventually fail
>>>>>>>
>>>>>>> There was a bug that got fixed where each request was creating a
>>>>>>> separate metastore client. That could be it or something similar that hasn't
>>>>>>> gotten fixed.
>>>>>>>
>>>>>>> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>>>>>>>
>>>>>>> > Hive 5.0 mysql as metastore backend. Using external tables with
>>>>>>> location for partitions...
>>>>>>> >
>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>> = '000843') LOCATION 'hit_date=20100329/mid=000843';
>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>> = '000844') LOCATION 'hit_date=20100329/mid=000844';
>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>> = '000849') LOCATION 'hit_date=20100329/mid=000849';
>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>> = '000850') LOCATION 'hit_date=20100329/mid=000850';
>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>> = '000851') LOCATION 'hit_date=20100329/mid=000851';
>>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid
>>>>>>> = '000852') LOCATION 'hit_date=20100329/mid=000852';
>>>>>>> >
>>>>>>> > Eventually this fails after a number of entries.
>>>>>>> >
>>>>>>> > Time taken: 0.159 seconds
>>>>>>> > OK
>>>>>>> > Time taken: 0.17 seconds
>>>>>>> > OK
>>>>>>> > Time taken: 0.241 seconds
>>>>>>> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
>>>>>>> > FAILED: Execution Error, return code 1 from
>>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask
>>>>>>> >
>>>>>>> > Restarting the process after removing the already added tables
>>>>>>> works until it breaks again. Anyone ever dealt with this?
>>>>>>> >
>>>>>>> > Doing one hive -e per table always works but takes a lot longer
>>>>>>> ...3 seconds a partition rather than ~.5 seconds.
>>>>>>> >
>>>>>>> >
>>>>>>>
>>>>>>>
>>>>>> It does not happen after 4 or 5; more like 100 or 1000+. I will try to
>>>>>> track this down a bit.
>>>>>>
>>>>>> Edward
>>>>>>
>>>>>
>>>>>
>>>>> Try this:
>>>>>
>>>>> echo "create external table if not exists edtest ( dat string )
>>>>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>>>>  for i in {1..3000} ; do echo "alter table ed_test add partition
>>>>> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
>>>>> hive -f test.q
>>>>>
>>>>> On Hive 5.0 I get a failure mid way.
>>>>> Edward
>>>>>
>>>>
>>>>
>>> Also trying to do selects from the table without enough pruning in the
>>> where clause causes the same error, sometimes it comes as a JDBC/jpox access
>>> denied error.
>>>
>>
>>
>> Also, there are problems working with this type of table as well. :(
>>
>> $ hive -e "explain select * from XXXXX_action "
>> Hive history
>> file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
>> FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException:
>> Access denied for user 'hivadm'@'XXXXXX' (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using
>> password: YES)
>>
>> Interestingly enough if we specify some partitions we can dodge this
>> error. I get the feeling that the select * is trying to select too many
>> partitions and causing this error.
>>
>> 2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629))
>> - javax.jdo.JDODataStoreException: Access denied for user
>> 'hivadm'@'rs01.sd.pl.pvt' (using password: YES)
>>     at
>> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>     at
>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at
>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> NestedThrowablesStackTrace:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>>     at
>> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>>     at
>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>>     at
>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>>     at
>> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>>     at
>> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>>     at
>> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>>     at
>> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>>     at
>> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>>     at
>> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>>     at
>> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>     at
>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at
>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>>
>> 2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer
>> (SemanticAnalyzer.java:genMapRedTasks(4886)) -
>> org.apache.hadoop.hive.ql.metadata.HiveException:
>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>>     at
>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at
>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> Caused by: javax.jdo.JDODataStoreException: Access denied for user
>> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at
>> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>     ... 17 more
>> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>>     at
>> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>>     at
>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>>     at
>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>>     at
>> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>>     at
>> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>>     at
>> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>>
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>>     at
>> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>>     at
>> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>>     at
>> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>>     at
>> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>>     ... 23 more
>>
>> 2010-04-12 10:33:02,793 ERROR ql.Driver
>> (SessionState.java:printError(248)) - FAILED: Error in semantic analysis:
>> javax.jdo.JDODataStoreException:
>>  Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using
>> password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>> org.apache.hadoop.hive.ql.parse.SemanticException:
>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at
>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>>     at
>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>     ... 15 more
>> Caused by: javax.jdo.JDODataStoreException: Access denied for user
>> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at
>> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>     ... 17 more
>> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>>     at
>> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>>     at
>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>>     at
>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>>     at
>> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>>     at
>> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>>
>> Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using
>> password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>> org.apache.hadoop.hive.ql.parse.SemanticException:
>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at
>> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>>     at
>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>>     at
>> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>     at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>     at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>     at java.lang.reflect.Method.invoke(Method.java:597)
>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
>> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>>     at
>> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>>     at
>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>>     ... 15 more
>> Caused by: javax.jdo.JDODataStoreException: Access denied for user
>> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
>> NestedThrowables:
>> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at
>> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>>     at
>> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>>     at
>> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>>     ... 17 more
>> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
>> (using password: YES)
>>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>>     at
>> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>>     at
>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>>     at
>> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>>     at
>> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>>     at
>> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>>     at
>> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>>     at
>> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>>     at
>> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>>     at
>> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>>     at
>> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>>     at
>> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>>     ... 23 more
>>
>>
>>
>>
>>
> The same problem occurs dropping partitions. At this stage, for any program
> I write that does heavy-duty work, I need to prefilter its output with a
> unix tool like 'split' to make sure I do not blow up halfway through 1000
> inserts or 1000 drops. It is really anti-productive.
>
>
>

Also when using this type of layout with external tables, queries on
partitions that do not exist blow up...


hive>  select OFFER_ID from XXXXX_act where hit_date=20990410 and
mid=000979;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201004221345_0002, Tracking URL =
http://rs01.hadoop.pvt:50030/jobdetails.jsp?jobid=job_201004221345_0002
Kill Command = /usr/lib/hadoop-0.20/bin/hadoop job
-Dmapred.job.tracker=rs01.hadoop.pvt:34311 -kill job_201004221345_0002
2010-04-22 13:52:10,004 Stage-1 map = 0%,  reduce = 0%
2010-04-22 13:52:44,173 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201004221345_0002 with errors

Failed tasks with most(4) failures :
Task URL:
http://rs01.hadoop.pvt:50030/taskdetails.jsp?jobid=job_201004221345_0002&tipid=task_201004221345_0002_m_000000

FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.ExecDriver
hive>


...arge...
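[The 'split' workaround mentioned in the thread can be scripted; a rough sketch follows. The table name, location, statement count, and chunk size are all made up for illustration, and the hive invocation is left as an echo so the batching itself can be seen — swap it for the real call.]

```shell
#!/bin/sh
# Sketch: generate 3000 ADD PARTITION statements, then split them into
# 200-statement chunks so a mid-run metastore failure only loses one chunk
# instead of aborting all 3000 statements.
for i in $(seq 1 3000); do
  echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';"
done > add_parts.q

# 200 statements per chunk; produces parts_aa, parts_ab, ... parts_ao
split -l 200 add_parts.q parts_

for f in parts_*; do
  echo "would run: hive -f $f"   # replace echo with the real hive call
done
```

Each chunk that fails can then be rerun on its own, rather than re-filtering the whole script by hand.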

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Mon, Apr 12, 2010 at 10:39 AM, Edward Capriolo <ed...@gmail.com>wrote:

>
>
> On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>wrote:
>
>>
>>
>> On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:
>>
>>> Typo in Ed's last email (table name):
>>> echo "create external table if not exists ed*_*test ( dat string )
>>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>>
>>>
>>> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>wrote:
>>>
>>>>
>>>>
>>>>
>>>> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>wrote:
>>>>
>>>>>
>>>>>
>>>>> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>>>>>
>>>>>> Seems to be fixed in 0.6. Here's what I got:
>>>>>>
>>>>>> test.q:
>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
>>>>>> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>>>>>>
>>>>>>
>>>>>> Hive history
>>>>>> file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
>>>>>> OK
>>>>>> Time taken: 4.101 seconds
>>>>>> OK
>>>>>> Time taken: 0.558 seconds
>>>>>> OK
>>>>>> Time taken: 0.453 seconds
>>>>>> OK
>>>>>> Time taken: 0.416 seconds
>>>>>> OK
>>>>>> Time taken: 0.378 seconds
>>>>>> OK
>>>>>> Time taken: 0.457 seconds
>>>>>> OK
>>>>>> Time taken: 0.454 seconds
>>>>>>
>>>>>>
>>>>>> Can you check the stack trace in /tmp/<username>/hive.log?
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Prasad Chakka [mailto:pchakka@facebook.com]
>>>>>> Sent: Thursday, April 08, 2010 1:03 PM
>>>>>> To: hive-user@hadoop.apache.org
>>>>>> Subject: Re: enough alter tables in the same .q file eventually fail
>>>>>>
>>>>>> There was a bug that got fixed where each request was creating a
>>>>>> separate metastore client. That could be it or something similar that hasn't
>>>>>> gotten fixed.
>>>>>>
>>>>>> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>>>>>>
>>>>>> > Hive 0.5.0, MySQL as metastore backend. Using external tables with
>>>>>> location for partitions...
>>>>>> >
>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>>> '000843') LOCATION 'hit_date=20100329/mid=000843';
>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>>> '000844') LOCATION 'hit_date=20100329/mid=000844';
>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>>> '000849') LOCATION 'hit_date=20100329/mid=000849';
>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>>> '000850') LOCATION 'hit_date=20100329/mid=000850';
>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>>> '000851') LOCATION 'hit_date=20100329/mid=000851';
>>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>>> '000852') LOCATION 'hit_date=20100329/mid=000852';
>>>>>> >
>>>>>> > Eventually this fails after a number of entries.
>>>>>> >
>>>>>> > Time taken: 0.159 seconds
>>>>>> > OK
>>>>>> > Time taken: 0.17 seconds
>>>>>> > OK
>>>>>> > Time taken: 0.241 seconds
>>>>>> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
>>>>>> > FAILED: Execution Error, return code 1 from
>>>>>> org.apache.hadoop.hive.ql.exec.DDLTask
>>>>>> >
>>>>>> > Restarting the process after removing the already added tables works
>>>>>> until it breaks again. Anyone ever dealt with this?
>>>>>> >
>>>>>> > Doing one hive -e per table always works but takes a lot longer ...3
>>>>>> seconds a partition rather than ~.5 seconds.
>>>>>> >
>>>>>> >
>>>>>>
>>>>>>
>>>>> It does not happen after 4 or 5, more like 100 or 1000+. I will try to
>>>>> track this down a bit.
>>>>>
>>>>> Edward
>>>>>
>>>>
>>>>
>>>> Try this:
>>>>
>>>> echo "create external table if not exists edtest ( dat string )
>>>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>>>  for i in {1..3000} ; do echo "alter table ed_test add partition
>>>> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
>>>> hive -f test.q
>>>>
>>>> On Hive 0.5.0 I get a failure midway.
>>>> Edward
>>>>
>>>
>>>
>> Also trying to do selects from the table without enough pruning in the
>> where clause causes the same error; sometimes it comes as a JDBC/jpox access
>> denied error.
>>
>
>
> Also, there are problems working with this type of table as well. :(
>
> $ hive -e "explain select * from XXXXX_action "
> Hive history
> file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
> FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access
> denied for user 'hivadm'@'XXXXXX' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using
> password: YES)
>
> Interestingly enough, if we specify some partitions we can dodge this error.
> I get the feeling that the select * is trying to select too many partitions
> and that is causing this error.
>
> 2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) -
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01
> .sd.pl.pvt' (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> NestedThrowablesStackTrace:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
>
> 2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer
> (SemanticAnalyzer.java:genMapRedTasks(4886)) -
> org.apache.hadoop.hive.ql.metadata.HiveExcepti
> on: javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     ... 23 more
>
> 2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248))
> - FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException:
>  Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using
> password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> org.apache.hadoop.hive.ql.parse.SemanticException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using passwo
> rd: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (us
> ing password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     ... 15 more
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>
> Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password:
> YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
> org.apache.hadoop.hive.ql.parse.SemanticException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using passwo
> rd: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at
> org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
>     at
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
>     at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
>     at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
>     at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
>     at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
>     at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
>     at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
> javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (us
> ing password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
>     at
> org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
>     at
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
>     ... 15 more
> Caused by: javax.jdo.JDODataStoreException: Access denied for user
> 'hivadm'@'XXXXX.domain.whatetever' (using password: YES)
> NestedThrowables:
> java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at
> org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
>     at
> org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
>     at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
>     at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
>     ... 17 more
> Caused by: java.sql.SQLException: Access denied for user 'hivadm'@'XXXXX.domain.whatetever'
> (using password: YES)
>     at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
>     at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
>     at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
>     at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
>     at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
>     at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
>     at
> com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
>     at
> org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
>     at
> org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
>     at
> org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
>     at
> org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
>     at
> org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
>     at
> org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
>     at
> org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
>     at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
>     at
> org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
>     at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
>     at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
>     ... 23 more
>
>
>
>
>
The same problem occurs when dropping partitions. At this stage, for any
program I write that does heavy-duty work, I have to pre-split the input with
a unix tool like 'split' so it does not blow up halfway through 1000 inserts
or 1000 drops. It is really counterproductive.
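For anyone hitting the same wall, the chunked workaround can be sketched as a
shell script. The file names and the 100-statement chunk size are my own
choices for illustration, not anything Hive prescribes, and the hive call is
guarded so the sketch runs even where the CLI is absent:

```shell
#!/bin/sh
# Workaround sketch: break a big partition-DDL script into small chunks so
# one metastore failure only loses a chunk, not the whole 1000+ statement run.

# Generate a sample script; in practice this is the real multi-thousand-line .q file.
for i in $(seq 1 250); do
  echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';"
done > all_partitions.q

# GNU split: 100 statements per chunk, numeric suffixes (chunk_00, chunk_01, ...)
split -l 100 -d all_partitions.q chunk_

for f in chunk_*; do
  # Guarded so the sketch is runnable even without a hive CLI on the PATH.
  if command -v hive >/dev/null 2>&1; then
    hive -f "$f" || { echo "chunk $f failed, stopping" >&2; break; }
  fi
done
```

Re-running after a failure then means removing the partitions added by the
failed chunk and resuming at that chunk, instead of restarting the whole script.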

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Sat, Apr 10, 2010 at 10:30 AM, Edward Capriolo <ed...@gmail.com>wrote:

>
>
> On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:
>
>> Typo in Ed's last email (table name):
>> echo "create external table if not exists ed*_*test ( dat string )
>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>
>>
>> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>wrote:
>>
>>>
>>>
>>>
>>> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>wrote:
>>>
>>>>
>>>>
>>>> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>>>>
>>>>> Seems to be fixed in 0.6. Here's what I got:
>>>>>
>>>>> test.q:
>>>>> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
>>>>> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
>>>>> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
>>>>> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
>>>>> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
>>>>> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
>>>>> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>>>>>
>>>>>
>>>>> Hive history
>>>>> file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
>>>>> OK
>>>>> Time taken: 4.101 seconds
>>>>> OK
>>>>> Time taken: 0.558 seconds
>>>>> OK
>>>>> Time taken: 0.453 seconds
>>>>> OK
>>>>> Time taken: 0.416 seconds
>>>>> OK
>>>>> Time taken: 0.378 seconds
>>>>> OK
>>>>> Time taken: 0.457 seconds
>>>>> OK
>>>>> Time taken: 0.454 seconds
>>>>>
>>>>>
>>>>> Can you check the stack trace in /tmp/<username>/hive.log?
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Prasad Chakka [mailto:pchakka@facebook.com]
>>>>> Sent: Thursday, April 08, 2010 1:03 PM
>>>>> To: hive-user@hadoop.apache.org
>>>>> Subject: Re: enough alter tables in the same .q file eventually fail
>>>>>
>>>>> There was a bug that got fixed where each request was creating a
>>>>> separate metastore client. That could be it or something similar that hasn't
>>>>> gotten fixed.
>>>>>
>>>>> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>>>>>
>>>>> > Hive 0.5.0, MySQL as metastore backend. Using external tables with
>>>>> location for partitions...
>>>>> >
>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>> '000843') LOCATION 'hit_date=20100329/mid=000843';
>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>> '000844') LOCATION 'hit_date=20100329/mid=000844';
>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>> '000849') LOCATION 'hit_date=20100329/mid=000849';
>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>> '000850') LOCATION 'hit_date=20100329/mid=000850';
>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>> '000851') LOCATION 'hit_date=20100329/mid=000851';
>>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>>> '000852') LOCATION 'hit_date=20100329/mid=000852';
>>>>> >
>>>>> > Eventually this fails after a number of entries.
>>>>> >
>>>>> > Time taken: 0.159 seconds
>>>>> > OK
>>>>> > Time taken: 0.17 seconds
>>>>> > OK
>>>>> > Time taken: 0.241 seconds
>>>>> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
>>>>> > FAILED: Execution Error, return code 1 from
>>>>> org.apache.hadoop.hive.ql.exec.DDLTask
>>>>> >
>>>>> > Restarting the process after removing the already added tables works
>>>>> until it breaks again. Anyone ever dealt with this?
>>>>> >
>>>>> > Doing one hive -e per table always works but takes a lot longer ...3
>>>>> seconds a partition rather than ~.5 seconds.
>>>>> >
>>>>> >
>>>>>
>>>>>
>>>> It does not happen after 4 or 5, more like 100 or 1000+. I will try to
>>>> track this down a bit.
>>>>
>>>> Edward
>>>>
>>>
>>>
>>> Try this:
>>>
>>> echo "create external table if not exists edtest ( dat string )
>>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>>  for i in {1..3000} ; do echo "alter table ed_test add partition
>>> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
>>> hive -f test.q
>>>
>>> On Hive 0.5.0 I get a failure midway.
>>> Edward
>>>
>>
>>
> Also trying to do selects from the table without enough pruning in the
> where clause causes the same error; sometimes it comes as a JDBC/jpox access
> denied error.
>


Also, there are problems working with this type of table as well. :(

$ hive -e "explain select * from XXXXX_action "
Hive history file=/tmp/XXXXXX/hive_job_log_media6_201004121029_170696698.txt
FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException: Access
denied for user 'hivadm'@'XXXXXX' (using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user 'hivadm'@'XXXXXX' (using
password: YES)

Interestingly enough, if we specify some partitions we can dodge this error.
I get the feeling that the select * is trying to select too many partitions
and that is causing this error.
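The dodge of pinning the partition columns in the WHERE clause, so the pruner
only fetches matching partitions from the metastore, can be sketched like
this. The table and partition names follow the earlier ed_test repro and are
illustrative; the hive call is guarded so the sketch runs without the CLI:

```shell
#!/bin/sh
# Failing shape: no partition predicate, so the planner's partition pruner
# asks the metastore for every partition of the table.
#   hive -e "explain select * from ed_test"

# Dodge: name the partition column explicitly so only matching partitions
# are fetched. All identifiers here are illustrative.
TABLE=ed_test
PART_COL=dummy
PART_VAL=42
QUERY="select * from ${TABLE} where ${PART_COL} = '${PART_VAL}'"
echo "$QUERY"

if command -v hive >/dev/null 2>&1; then
  hive -e "$QUERY"
fi
```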

2010-04-12 10:33:02,789 ERROR metadata.Hive (Hive.java:getPartition(629)) -
javax.jdo.JDODataStoreException: Access denied for user 'hivadm'@'rs01
.sd.pl.pvt' (using password: YES)
    at
org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at
org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
NestedThrowablesStackTrace:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at
org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at
org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at
org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at
org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at
org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at
org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at
org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at
org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at
org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    at
org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

2010-04-12 10:33:02,790 ERROR parse.SemanticAnalyzer
(SemanticAnalyzer.java:genMapRedTasks(4886)) -
org.apache.hadoop.hive.ql.metadata.HiveException: javax.jdo.JDODataStoreException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at
org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: javax.jdo.JDODataStoreException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at
org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at
org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at
org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at
org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at
org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at
org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)

    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at
org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at
org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at
org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at
org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more

2010-04-12 10:33:02,793 ERROR ql.Driver (SessionState.java:printError(248))
- FAILED: Error in semantic analysis: javax.jdo.JDODataStoreException:
 Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password:
YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException:
javax.jdo.JDODataStoreException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
javax.jdo.JDODataStoreException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at
org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at
org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at
org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at
org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at
org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at
org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)

Access denied for user 'hivadm'@'XXXXX.domain.whatetever' (using password:
YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
org.apache.hadoop.hive.ql.parse.SemanticException:
javax.jdo.JDODataStoreException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4887)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:5224)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:44)
    at
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:105)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:275)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:320)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:251)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
javax.jdo.JDODataStoreException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:630)
    at
org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner.prune(PartitionPruner.java:215)
    at
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapRedTasks(SemanticAnalyzer.java:4883)
    ... 15 more
Caused by: javax.jdo.JDODataStoreException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
NestedThrowables:
java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at
org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:289)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:274)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:551)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getMPartition(ObjectStore.java:716)
    at
org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:704)
    at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partition(HiveMetaStore.java:593)
    at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getPartition(HiveMetaStoreClient.java:418)
    at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:620)
    ... 17 more
Caused by: java.sql.SQLException: Access denied for user
'hivadm'@'XXXXX.domain.whatetever'
(using password: YES)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:946)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:2985)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:885)
    at com.mysql.jdbc.MysqlIO.secureAuth411(MysqlIO.java:3436)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1247)
    at com.mysql.jdbc.Connection.createNewIO(Connection.java:2775)
    at com.mysql.jdbc.Connection.<init>(Connection.java:1555)
    at
com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:285)
    at
org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:142)
    at
org.datanucleus.store.rdbms.datasource.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:118)
    at
org.datanucleus.store.rdbms.ConnectionProviderPriorityList.getConnection(ConnectionProviderPriorityList.java:59)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:458)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getXAResource(ConnectionFactoryImpl.java:418)
    at
org.datanucleus.ConnectionManagerImpl.enlistResource(ConnectionManagerImpl.java:329)
    at
org.datanucleus.ConnectionManagerImpl.allocateConnection(ConnectionManagerImpl.java:210)
    at
org.datanucleus.store.rdbms.ConnectionFactoryImpl.getConnection(ConnectionFactoryImpl.java:345)
    at
org.datanucleus.store.AbstractStoreManager.getConnection(AbstractStoreManager.java:267)
    at
org.datanucleus.store.rdbms.query.SQLEvaluator.evaluate(SQLEvaluator.java:91)
    at
org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:287)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1475)
    at
org.datanucleus.store.rdbms.query.JDOQLQuery.executeQuery(JDOQLQuery.java:244)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1357)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:265)
    ... 23 more

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Thu, Apr 8, 2010 at 6:58 PM, Ted Yu <yu...@gmail.com> wrote:

> Typo in Ed's last email (table name):
> echo "create external table if not exists ed*_*test ( dat string )
> partitioned by (dummy string) location '/tmp/a';" > test.q
>
>
> On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>wrote:
>
>>
>>
>>
>> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>wrote:
>>
>>>
>>>
>>> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>>>
>>>> Seems to be fixed in 0.6. Here's what I got:
>>>>
>>>> test.q:
>>>> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
>>>> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>>>>
>>>>
>>>> Hive history
>>>> file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
>>>> OK
>>>> Time taken: 4.101 seconds
>>>> OK
>>>> Time taken: 0.558 seconds
>>>> OK
>>>> Time taken: 0.453 seconds
>>>> OK
>>>> Time taken: 0.416 seconds
>>>> OK
>>>> Time taken: 0.378 seconds
>>>> OK
>>>> Time taken: 0.457 seconds
>>>> OK
>>>> Time taken: 0.454 seconds
>>>>
>>>>
>>>> Can you check the stack trace in /tmp/<username>/hive.log?
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Prasad Chakka [mailto:pchakka@facebook.com]
>>>> Sent: Thursday, April 08, 2010 1:03 PM
>>>> To: hive-user@hadoop.apache.org
>>>> Subject: Re: enough alter tables in the same .q file eventually fail
>>>>
>>>> There was a bug that got fixed where each request was creating a
>>>> separate metastore client. That could be it or something similar that hasn't
>>>> gotten fixed.
>>>>
>>>> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>>>>
>>>> > Hive 0.5, MySQL as metastore backend. Using external tables with
>>>> location for partitions...
>>>> >
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>> '000843') LOCATION 'hit_date=20100329/mid=000843';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>> '000844') LOCATION 'hit_date=20100329/mid=000844';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>> '000849') LOCATION 'hit_date=20100329/mid=000849';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>> '000850') LOCATION 'hit_date=20100329/mid=000850';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>> '000851') LOCATION 'hit_date=20100329/mid=000851';
>>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>>> '000852') LOCATION 'hit_date=20100329/mid=000852';
>>>> >
>>>> > Eventually this fails after a number of entries.
>>>> >
>>>> > Time taken: 0.159 seconds
>>>> > OK
>>>> > Time taken: 0.17 seconds
>>>> > OK
>>>> > Time taken: 0.241 seconds
>>>> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
>>>> > FAILED: Execution Error, return code 1 from
>>>> org.apache.hadoop.hive.ql.exec.DDLTask
>>>> >
>>>> > Restarting the process after removing the already added tables works
>>>> until it breaks again. Anyone ever dealt with this?
>>>> >
>>>> > Doing one hive -e per table always works but takes a lot longer ...3
>>>> seconds a partition rather than ~.5 seconds.
>>>> >
>>>> >
>>>>
>>>>
>>> It does not happen after 4 or 5, more like 100 or 1000+. I will try to
>>> track this down a bit.
>>>
>>> Edward
>>>
>>
>>
>> Try this:
>>
>> echo "create external table if not exists edtest ( dat string )
>> partitioned by (dummy string) location '/tmp/a';" > test.q
>>  for i in {1..3000} ; do echo "alter table ed_test add partition
>> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
>> hive -f test.q
>>
>> On Hive 0.5 I get a failure midway.
>> Edward
>>
>
>
Also, trying to do selects from the table without enough pruning in the where
clause causes the same error; sometimes it comes as a JDBC/JPOX access-denied
error.
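
One speculative mitigation (not confirmed anywhere in this thread, though the
later mention of the commons-pool jars points the same way) is to turn on
DataNucleus connection pooling in hive-site.xml, so metastore calls reuse
pooled JDBC connections instead of opening a new MySQL connection each time.
The property below is from DataNucleus 1.x, and DBCP pooling additionally
requires commons-dbcp and commons-pool on the classpath:

```xml
<!-- Pool metastore JDBC connections via Apache DBCP (assumption: DataNucleus
     1.x property name; verify against your Hive/DataNucleus version). -->
<property>
  <name>datanucleus.connectionPoolingType</name>
  <value>DBCP</value>
</property>
```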

Re: enough alter tables in the same .q file eventually fail

Posted by Ted Yu <yu...@gmail.com>.
Typo in Ed's last email (table name):
echo "create external table if not exists ed*_*test ( dat string )
partitioned by (dummy string) location '/tmp/a';" > test.q

On Thu, Apr 8, 2010 at 3:14 PM, Edward Capriolo <ed...@gmail.com>wrote:

>
>
>
> On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>wrote:
>
>>
>>
>> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>>
>>> Seems to be fixed in 0.6. Here's what I got:
>>>
>>> test.q:
>>> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
>>> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
>>> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
>>> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
>>> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
>>> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
>>> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>>>
>>>
>>> Hive history
>>> file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
>>> OK
>>> Time taken: 4.101 seconds
>>> OK
>>> Time taken: 0.558 seconds
>>> OK
>>> Time taken: 0.453 seconds
>>> OK
>>> Time taken: 0.416 seconds
>>> OK
>>> Time taken: 0.378 seconds
>>> OK
>>> Time taken: 0.457 seconds
>>> OK
>>> Time taken: 0.454 seconds
>>>
>>>
>>> Can you check the stack trace in /tmp/<username>/hive.log?
>>>
>>>
>>> -----Original Message-----
>>> From: Prasad Chakka [mailto:pchakka@facebook.com]
>>> Sent: Thursday, April 08, 2010 1:03 PM
>>> To: hive-user@hadoop.apache.org
>>> Subject: Re: enough alter tables in the same .q file eventually fail
>>>
>>> There was a bug that got fixed where each request was creating a separate
>>> metastore client. That could be it or something similar that hasn't gotten
>>> fixed.
>>>
>>> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>>>
>>> > Hive 0.5, MySQL as metastore backend. Using external tables with
>>> location for partitions...
>>> >
>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>> '000843') LOCATION 'hit_date=20100329/mid=000843';
>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>> '000844') LOCATION 'hit_date=20100329/mid=000844';
>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>> '000849') LOCATION 'hit_date=20100329/mid=000849';
>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>> '000850') LOCATION 'hit_date=20100329/mid=000850';
>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>> '000851') LOCATION 'hit_date=20100329/mid=000851';
>>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>>> '000852') LOCATION 'hit_date=20100329/mid=000852';
>>> >
>>> > Eventually this fails after a number of entries.
>>> >
>>> > Time taken: 0.159 seconds
>>> > OK
>>> > Time taken: 0.17 seconds
>>> > OK
>>> > Time taken: 0.241 seconds
>>> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
>>> > FAILED: Execution Error, return code 1 from
>>> org.apache.hadoop.hive.ql.exec.DDLTask
>>> >
>>> > Restarting the process after removing the already added tables works
>>> until it breaks again. Anyone ever dealt with this?
>>> >
>>> > Doing one hive -e per table always works but takes a lot longer ...3
>>> seconds a partition rather than ~.5 seconds.
>>> >
>>> >
>>>
>>>
>> It does not happen after 4 or 5, more like 100 or 1000+. I will try to
>> track this down a bit.
>>
>> Edward
>>
>
>
> Try this:
>
> echo "create external table if not exists edtest ( dat string ) partitioned
> by (dummy string) location '/tmp/a';" > test.q
>  for i in {1..3000} ; do echo "alter table ed_test add partition
> (dummy='${i}') location '/tmp/duh';" ; done >> test.q
> hive -f test.q
>
> On Hive 0.5 I get a failure midway.
> Edward
>

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Thu, Apr 8, 2010 at 5:22 PM, Edward Capriolo <ed...@gmail.com>wrote:

>
>
> On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:
>
>> Seems to be fixed in 0.6. Here's what I got:
>>
>> test.q:
>> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
>> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
>> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
>> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
>> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
>> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
>> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>>
>>
>> Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
>> OK
>> Time taken: 4.101 seconds
>> OK
>> Time taken: 0.558 seconds
>> OK
>> Time taken: 0.453 seconds
>> OK
>> Time taken: 0.416 seconds
>> OK
>> Time taken: 0.378 seconds
>> OK
>> Time taken: 0.457 seconds
>> OK
>> Time taken: 0.454 seconds
>>
>>
>> Can you check the stack trace in /tmp/<username>/hive.log?
>>
>> -----Original Message-----
>> From: Prasad Chakka [mailto:pchakka@facebook.com]
>> Sent: Thursday, April 08, 2010 1:03 PM
>> To: hive-user@hadoop.apache.org
>> Subject: Re: enough alter tables in the same .q file eventually fail
>>
>> There was a bug that got fixed where each request was creating a separate
>> metastore client. That could be it or something similar that hasn't gotten
>> fixed.
>>
>> On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:
>>
>> > Hive 0.5, MySQL as metastore backend. Using external tables with location
>> for partitions...
>> >
>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>> '000843') LOCATION 'hit_date=20100329/mid=000843';
>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>> '000844') LOCATION 'hit_date=20100329/mid=000844';
>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>> '000849') LOCATION 'hit_date=20100329/mid=000849';
>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>> '000850') LOCATION 'hit_date=20100329/mid=000850';
>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>> '000851') LOCATION 'hit_date=20100329/mid=000851';
>> > alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid =
>> '000852') LOCATION 'hit_date=20100329/mid=000852';
>> >
>> > Eventually this fails after a number of entries.
>> >
>> > Time taken: 0.159 seconds
>> > OK
>> > Time taken: 0.17 seconds
>> > OK
>> > Time taken: 0.241 seconds
>> > FAILED: Error in metadata: Unable to fetch table XXXXX_action
>> > FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.DDLTask
>> >
>> > Restarting the process after removing the already added tables works
>> until it breaks again. Anyone ever dealt with this?
>> >
>> > Doing one hive -e per table always works but takes a lot longer ...3
>> seconds a partition rather then ~.5 seconds.
>> >
>> >
>>
>>
> It does not happen after 4 or 5, more like 100 or 1000+. I will try to track
> this down a bit.
>
> Edward
>


Try this:

echo "create external table if not exists ed_test (dat string) partitioned by (dummy string) location '/tmp/a';" > test.q
for i in {1..3000} ; do echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';" ; done >> test.q
hive -f test.q

On Hive 0.5 I get a failure midway.
Edward
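A batched variant of the generator above may help narrow this down (a sketch; the table name, paths, and the batch size of 100 are arbitrary choices, not from the thread). Each batch runs in its own `hive -f`, so a mid-run failure only loses one batch, and each invocation starts with a fresh metastore client:

```shell
set -e
QDIR=$(mktemp -d)

# Generate one ADD PARTITION statement per partition, as in the repro above.
for i in $(seq 1 3000); do
  echo "alter table ed_test add partition (dummy='${i}') location '/tmp/duh';"
done > "${QDIR}/all.q"

# Split into batches of 100 statements: batch_aa, batch_ab, ...
split -l 100 "${QDIR}/all.q" "${QDIR}/batch_"
ls "${QDIR}"/batch_* | wc -l    # 30 batch files

# Run each batch in its own Hive session (requires a Hive install):
# for f in "${QDIR}"/batch_*; do hive -f "$f" || { echo "failed at $f"; break; }; done
```

If the failure is tied to statement count per session, this should also show roughly which batch number triggers it.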

Re: enough alter tables in the same .q file eventually fail

Posted by Edward Capriolo <ed...@gmail.com>.
On Thu, Apr 8, 2010 at 5:14 PM, Paul Yang <py...@facebook.com> wrote:

> Seems to be fixed in 0.6. Here's what I got:
>
> test.q:
> alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
> alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';
>
>
> Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
> OK
> Time taken: 4.101 seconds
> OK
> Time taken: 0.558 seconds
> OK
> Time taken: 0.453 seconds
> OK
> Time taken: 0.416 seconds
> OK
> Time taken: 0.378 seconds
> OK
> Time taken: 0.457 seconds
> OK
> Time taken: 0.454 seconds
>
>
> Can you send the stack trace in /tmp/<username>/hive.log?
>
It does not happen after 4 or 5, more like 100 or 1000+. I will try to track
this down a bit.

Edward

RE: enough alter tables in the same .q file eventually fail

Posted by Paul Yang <py...@facebook.com>.
Seems to be fixed in 0.6. Here's what I got:

test.q:
alter table tmp_pyang_t ADD PARTITION (ds='2') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='3') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='4') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='5') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='6') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='7') LOCATION '/tmp/blah2';
alter table tmp_pyang_t ADD PARTITION (ds='8') LOCATION '/tmp/blah2';


Hive history file=/tmp/pyang/hive_job_log_pyang_201004081410_378771152.txt
OK
Time taken: 4.101 seconds
OK
Time taken: 0.558 seconds
OK
Time taken: 0.453 seconds
OK
Time taken: 0.416 seconds
OK
Time taken: 0.378 seconds
OK
Time taken: 0.457 seconds
OK
Time taken: 0.454 seconds


Can you send the stack trace in /tmp/<username>/hive.log?



Re: enough alter tables in the same .q file eventually fail

Posted by Prasad Chakka <pc...@facebook.com>.
There was a bug that got fixed where each request was creating a separate metastore client. That could be it or something similar that hasn't gotten fixed.

On Apr 8, 2010, at 11:47 AM, Edward Capriolo wrote:

> Hive 0.5 with MySQL as the metastore backend. Using external tables with a location per partition...
> 
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000843') LOCATION 'hit_date=20100329/mid=000843';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000844') LOCATION 'hit_date=20100329/mid=000844';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000849') LOCATION 'hit_date=20100329/mid=000849';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000850') LOCATION 'hit_date=20100329/mid=000850';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000851') LOCATION 'hit_date=20100329/mid=000851';
> alter table XXXX_action ADD PARTITION (hit_date = '20100329' , mid = '000852') LOCATION 'hit_date=20100329/mid=000852';
> 
> Eventually this fails after a number of entries.
> 
> Time taken: 0.159 seconds
> OK
> Time taken: 0.17 seconds
> OK
> Time taken: 0.241 seconds
> FAILED: Error in metadata: Unable to fetch table XXXXX_action
> FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
> 
> Restarting the process after removing the already-added partitions works until it breaks again. Has anyone dealt with this?
> 
> Doing one hive -e per partition always works but takes a lot longer: ~3 seconds a partition rather than ~0.5 seconds.
> 
>
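On the per-statement overhead of one hive -e per partition: later Hive releases (roughly 0.8 and up, not the 0.5 discussed in this thread) accept several partition specs in a single ALTER TABLE, which amortizes the metastore round trip. A sketch using the table from the original post:

```sql
-- Hedged sketch: multi-partition ADD is a later-Hive feature,
-- not available in the 0.5 release this thread is about.
ALTER TABLE XXXX_action ADD
  PARTITION (hit_date = '20100329', mid = '000843') LOCATION 'hit_date=20100329/mid=000843'
  PARTITION (hit_date = '20100329', mid = '000844') LOCATION 'hit_date=20100329/mid=000844';
```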