Posted to dev@hbase.apache.org by "Aleksandr Shulman (JIRA)" <ji...@apache.org> on 2013/05/25 02:51:20 UTC

[jira] [Created] (HBASE-8619) [HBase Client]: Improve client to retry if master is down when requesting getHTableDescriptor

Aleksandr Shulman created HBASE-8619:
----------------------------------------

             Summary: [HBase Client]: Improve client to retry if master is down when requesting getHTableDescriptor
                 Key: HBASE-8619
                 URL: https://issues.apache.org/jira/browse/HBASE-8619
             Project: HBase
          Issue Type: Bug
          Components: Client
    Affects Versions: 0.94.8
            Reporter: Aleksandr Shulman
            Assignee: Elliott Clark
            Priority: Minor
             Fix For: 0.94.8


In our rolling upgrade testing, running ImportTsv failed in the job submission phase because the active master was down; it was in the middle of failing over to the backup master. In this case, a client-side retry would have allowed the job to be submitted once the failover completed.

A good solution would be to refresh the master information before placing the call to getHTableDescriptor.
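For illustration, here is a minimal sketch of the kind of application-level retry that would have let the job submission survive the failover, written against the 0.94 client API. The retry count and sleep interval are placeholder values, and RetryingDescriptorFetch/fetchDescriptor are hypothetical names; the real fix would more likely live inside HConnectionManager so that the cached master information is refreshed before the getHTableDescriptor call is placed.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HTable;

public class RetryingDescriptorFetch {

  // Fetch the table descriptor, retrying if the call fails while the
  // active master is unavailable (e.g. during failover to the backup).
  public static HTableDescriptor fetchDescriptor(Configuration conf,
      String tableName, int maxAttempts, long sleepMs) throws IOException {
    IOException lastFailure = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      HTable table = new HTable(conf, tableName);
      try {
        return table.getTableDescriptor();
      } catch (IOException e) {
        lastFailure = e; // master likely down; wait and retry
        try {
          Thread.sleep(sleepMs);
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new IOException("Interrupted while retrying", ie);
        }
      } finally {
        table.close();
      }
    }
    throw lastFailure;
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTableDescriptor desc = fetchDescriptor(conf, args[0], 5, 10000L);
    System.out.println(desc);
  }
}
{code}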

Command:
{code} sudo -u hbase hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,c1,c2,c3 -Dimporttsv.bulk.output=/user/hbase/storeFiles2_2/import2_table1369439156 import2_table1369439156 /user/hbase/tsv2{code}

Here is the stack trace:

{code} 13/05/24 16:55:49 INFO compress.CodecPool: Got brand-new compressor [.deflate]
16:45:44  Exception in thread "main" java.lang.reflect.UndeclaredThrowableException
16:45:44  	at $Proxy7.getHTableDescriptors(Unknown Source)
16:45:44  	at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHTableDescriptor(HConnectionManager.java:1861)
16:45:44  	at org.apache.hadoop.hbase.client.HTable.getTableDescriptor(HTable.java:440)
16:45:44  	at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.configureCompression(HFileOutputFormat.java:458)
16:45:44  	at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat.configureIncrementalLoad(HFileOutputFormat.java:375)
16:45:44  	at org.apache.hadoop.hbase.mapreduce.ImportTsv.createSubmittableJob(ImportTsv.java:280)
16:45:44  	at org.apache.hadoop.hbase.mapreduce.ImportTsv.main(ImportTsv.java:424)
16:45:44  Caused by: java.io.IOException: Call to hbase-rolling-6.ent.cloudera.com/10.20.186.99:22001 failed on local exception: java.io.EOFException
16:45:44  	at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:1030)
16:45:44  	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:999)
16:45:44  	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:86)
16:45:44  	... 7 more
16:45:44  Caused by: java.io.EOFException
16:45:44  	at java.io.DataInputStream.readInt(DataInputStream.java:375)
16:45:44  	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:646)
16:45:44  	at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:580){code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira