Posted to user@phoenix.apache.org by anil gupta <an...@gmail.com> on 2016/02/14 21:44:45 UTC

org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

Hi,

We are using Phoenix 4.4 and HBase 1.1 (HDP 2.3.4).
I have an MR job that uses PhoenixOutputFormat. The job keeps failing
with the following error:

2016-02-14 12:29:43,182 INFO [main]
org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
actions to finish
2016-02-14 12:29:53,197 INFO [main]
org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
actions to finish
[... the same "waiting for 2000 actions to finish" entry repeats every ~10 seconds until 12:34:03 ...]
2016-02-14 12:34:03,953 INFO [hconnection-0xe82ca6e-shared--pool2-t16]
org.apache.hadoop.hbase.client.AsyncProcess: #1, table=BI.SALES,
attempt=10/35 failed=2000ops, last exception: null on
hdp3.truecar.com,16020,1455326291512, tracking started null, retrying
after=10086ms, replay=2000ops
2016-02-14 12:34:13,578 INFO [main]
org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
actions to finish
2016-02-14 12:34:23,593 INFO [main]
org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
actions to finish

I have never seen anything like this. Can anyone give me pointers about
this problem?

-- 
Thanks & Regards,
Anil Gupta

Re: Rename tables or swap alias

Posted by Pat Ferrel <pa...@occamsmachete.com>.
We implemented this by upserting changed elements and dropping others. On a given cluster it takes 4.5 hours to load HBase; the trim and cleanup as currently implemented takes 4 days. Back to the drawing board.

I’ve read the references but still don’t grok what to do. I have a table with an event stream, containing duplicates and expired data. I’d like to find the most time-efficient way to remove duplicates and drop expired data from what I’ll call the main_table. This is being queried and added to all the time.

My first thought was to create a new clean_table with Spark by reading main_table, processing it, and writing clean_table, then renaming main_table to old_table and renaming clean_table to main_table. I can then drop old_table. Ignoring what happens to events during renaming, this would be efficient because it would be equivalent to a plain load, with no complex updates to tables that are in place and under load.

Snapshots and clones seem to miss the issue, which is writing the cleaned data to some place that can now act like main_table, but clearly I don’t understand snapshots and clones. They seem to be some way to alias a table so only changes are logged, without actually copying the data. I’m not sure I care about copying the data into an RDD, which will then undergo some transforms into a final RDD. This can be written efficiently into clean_table with no upserts or dropping of elements, which is what seems to cause things to slow to a halt.

So assuming I have clean_table, how do I get all queries to go to it, instead of main_table? Elasticsearch has an alias that I can just point somewhere new. Do I need to keep track of something like this outside of HBase and change it after creating clean_table, or am I missing how to do this with snapshots and clones?



From: Ted Yu <yuzhihong@gmail.com>
Subject: Re: Rename tables or swap alias
Date: February 16, 2016 at 6:48:53 AM PST
To: user@hbase.apache.org
Reply-To: user@hbase.apache.org

Please see http://hbase.apache.org/book.html#ops.snapshots for background
on snapshots.

In Anil's description, table_old is the result of cloning the snapshot
which is taken in step #1. See
http://hbase.apache.org/book.html#ops.snapshots.clone

Cheers

On Tue, Feb 16, 2016 at 6:35 AM, Pat Ferrel <pa...@occamsmachete.com> wrote:

> I think I can work out the algorithm if I knew precisely what a “snapshot"
> does. From my reading it seems to be a lightweight fast alias (for lack of
> a better word) since it creates something that refers to the same physical
> data.So if I create a new table with cleaned data, call it table_new. Then
> I drop table_old and “snapshot” table_new into table_old? Is this what is
> suggested?
> 
> This leaves me with a small time where there is no table_old, which is the
> time between dropping table_old and creating a snapshot. Is it feasible to
> lock the DB for this time?
> 
>> On Feb 15, 2016, at 7:13 PM, Ted Yu <yu...@gmail.com> wrote:
>> 
>> Keep in mind that if the writes to this table are not paused, there would
>> be some data coming in between steps #1 and #2 which would not be in the
>> snapshot.
>> 
>> Cheers
>> 
>> On Mon, Feb 15, 2016 at 6:21 PM, Anil Gupta <an...@gmail.com>
> wrote:
>> 
>>> I dont think there is any atomic operations in hbase to support ddl
> across
>>> 2 tables.
>>> 
>>> But, maybe you can use hbase snapshots.
>>> 1.Create a hbase snapshot.
>>> 2.Truncate the table.
>>> 3.Write data to the table.
>>> 4.Create a table from snapshot taken in step #1 as table_old.
>>> 
>>> Now you have two tables. One with current run data and other with last
> run
>>> data.
>>> I think above process will suffice. But, keep in mind that it is not
>>> atomic.
>>> 
>>> HTH,
>>> Anil
>>> Sent from my iPhone
>>> 
>>>> On Feb 15, 2016, at 4:25 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
>>>> 
>>>> Any other way to do what I was asking. With Spark this is a very normal
>>> thing to treat a table as immutable and create another to replace the
> old.
>>>> 
>>>> Can you lock two tables and rename them in 2 actions then unlock in a
>>> very short period of time?
>>>> 
>>>> Or an alias for table names?
>>>> 
>>>> Didn’t see these in any docs or Googling, any help is appreciated.
>>> Writing all this data back to the original table would be a huge load
> on a
>>> table being written to by external processes and therefore under large
> load
>>> to begin with.
>>>> 
>>>>> On Feb 14, 2016, at 5:03 PM, Ted Yu <yu...@gmail.com> wrote:
>>>>> 
>>>>> There is currently no native support for renaming two tables in one
>>> atomic
>>>>> action.
>>>>> 
>>>>> FYI
>>>>> 
>>>>>> On Sun, Feb 14, 2016 at 4:18 PM, Pat Ferrel <pa...@occamsmachete.com>
>>> wrote:
>>>>>> 
>>>>>> I use Spark to take an old table, clean it up to create an RDD of
>>> cleaned
>>>>>> data. What I’d like to do is write all of the data to a new table in
>>> HBase,
>>>>>> then rename the table to the old name. If possible it could be done
> by
>>>>>> changing an alias to point to the new table as long as all external
>>> code
>>>>>> uses the alias, or by a 2 table rename operation. But I don’t see how
>>> to do
>>>>>> this for HBase. I am dealing with a lot of data so don’t want to do
>>> table
>>>>>> modifications with deletes and upserts, this would be incredibly
> slow.
>>>>>> Furthermore I don’t want to disable the table for more than a tiny
>>> span of
>>>>>> time.
>>>>>> 
>>>>>> Is it possible to have 2 tables and rename both in an atomic action,
> or
>>>>>> change some alias to point to the new table in an atomic action. If
> not
>>>>>> what is the quickest way to achieve this to minimize time disabled.
>>>> 
>>> 
> 
> 



Re: Rename tables or swap alias

Posted by Ted Yu <yu...@gmail.com>.
Please see http://hbase.apache.org/book.html#ops.snapshots for background
on snapshots.

In Anil's description, table_old is the result of cloning the snapshot
which is taken in step #1. See
http://hbase.apache.org/book.html#ops.snapshots.clone
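
In code terms, that clone step is a single Admin call. A minimal sketch
against the HBase 1.1 client API (the snapshot and table names here are
illustrative, not from the thread):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CloneSnapshotSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Materialize the snapshot taken in step #1 as a new, queryable
            // table. No data is copied: the clone references the snapshot's
            // HFiles until compactions rewrite them.
            admin.cloneSnapshot("main_table_snap", TableName.valueOf("table_old"));
        }
    }
}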

Cheers

On Tue, Feb 16, 2016 at 6:35 AM, Pat Ferrel <pa...@occamsmachete.com> wrote:

> I think I can work out the algorithm if I knew precisely what a “snapshot"
> does. From my reading it seems to be a lightweight fast alias (for lack of
> a better word) since it creates something that refers to the same physical
> data.So if I create a new table with cleaned data, call it table_new. Then
> I drop table_old and “snapshot” table_new into table_old? Is this what is
> suggested?
>
> This leaves me with a small time where there is no table_old, which is the
> time between dropping table_old and creating a snapshot. Is it feasible to
> lock the DB for this time?
>
> > On Feb 15, 2016, at 7:13 PM, Ted Yu <yu...@gmail.com> wrote:
> >
> > Keep in mind that if the writes to this table are not paused, there would
> > be some data coming in between steps #1 and #2 which would not be in the
> > snapshot.
> >
> > Cheers
> >
> > On Mon, Feb 15, 2016 at 6:21 PM, Anil Gupta <an...@gmail.com>
> wrote:
> >
> >> I dont think there is any atomic operations in hbase to support ddl
> across
> >> 2 tables.
> >>
> >> But, maybe you can use hbase snapshots.
> >> 1.Create a hbase snapshot.
> >> 2.Truncate the table.
> >> 3.Write data to the table.
> >> 4.Create a table from snapshot taken in step #1 as table_old.
> >>
> >> Now you have two tables. One with current run data and other with last
> run
> >> data.
> >> I think above process will suffice. But, keep in mind that it is not
> >> atomic.
> >>
> >> HTH,
> >> Anil
> >> Sent from my iPhone
> >>
> >>> On Feb 15, 2016, at 4:25 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
> >>>
> >>> Any other way to do what I was asking. With Spark this is a very normal
> >> thing to treat a table as immutable and create another to replace the
> old.
> >>>
> >>> Can you lock two tables and rename them in 2 actions then unlock in a
> >> very short period of time?
> >>>
> >>> Or an alias for table names?
> >>>
> >>> Didn’t see these in any docs or Googling, any help is appreciated.
> >> Writing all this data back to the original table would be a huge load
> on a
> >> table being written to by external processes and therefore under large
> load
> >> to begin with.
> >>>
> >>>> On Feb 14, 2016, at 5:03 PM, Ted Yu <yu...@gmail.com> wrote:
> >>>>
> >>>> There is currently no native support for renaming two tables in one
> >> atomic
> >>>> action.
> >>>>
> >>>> FYI
> >>>>
> >>>>> On Sun, Feb 14, 2016 at 4:18 PM, Pat Ferrel <pa...@occamsmachete.com>
> >> wrote:
> >>>>>
> >>>>> I use Spark to take an old table, clean it up to create an RDD of
> >> cleaned
> >>>>> data. What I’d like to do is write all of the data to a new table in
> >> HBase,
> >>>>> then rename the table to the old name. If possible it could be done
> by
> >>>>> changing an alias to point to the new table as long as all external
> >> code
> >>>>> uses the alias, or by a 2 table rename operation. But I don’t see how
> >> to do
> >>>>> this for HBase. I am dealing with a lot of data so don’t want to do
> >> table
> >>>>> modifications with deletes and upserts, this would be incredibly
> slow.
> >>>>> Furthermore I don’t want to disable the table for more than a tiny
> >> span of
> >>>>> time.
> >>>>>
> >>>>> Is it possible to have 2 tables and rename both in an atomic action,
> or
> >>>>> change some alias to point to the new table in an atomic action. If
> not
> >>>>> what is the quickest way to achieve this to minimize time disabled.
> >>>
> >>
>
>

Re: Rename tables or swap alias

Posted by Pat Ferrel <pa...@occamsmachete.com>.
I think I could work out the algorithm if I knew precisely what a “snapshot” does. From my reading it seems to be a lightweight, fast alias (for lack of a better word), since it creates something that refers to the same physical data. So if I create a new table with cleaned data, call it table_new, do I then drop table_old and “snapshot” table_new into table_old? Is this what is suggested?

This leaves me with a small time where there is no table_old, which is the time between dropping table_old and creating a snapshot. Is it feasible to lock the DB for this time?

> On Feb 15, 2016, at 7:13 PM, Ted Yu <yu...@gmail.com> wrote:
> 
> Keep in mind that if the writes to this table are not paused, there would
> be some data coming in between steps #1 and #2 which would not be in the
> snapshot.
> 
> Cheers
> 
> On Mon, Feb 15, 2016 at 6:21 PM, Anil Gupta <an...@gmail.com> wrote:
> 
>> I dont think there is any atomic operations in hbase to support ddl across
>> 2 tables.
>> 
>> But, maybe you can use hbase snapshots.
>> 1.Create a hbase snapshot.
>> 2.Truncate the table.
>> 3.Write data to the table.
>> 4.Create a table from snapshot taken in step #1 as table_old.
>> 
>> Now you have two tables. One with current run data and other with last run
>> data.
>> I think above process will suffice. But, keep in mind that it is not
>> atomic.
>> 
>> HTH,
>> Anil
>> Sent from my iPhone
>> 
>>> On Feb 15, 2016, at 4:25 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
>>> 
>>> Any other way to do what I was asking. With Spark this is a very normal
>> thing to treat a table as immutable and create another to replace the old.
>>> 
>>> Can you lock two tables and rename them in 2 actions then unlock in a
>> very short period of time?
>>> 
>>> Or an alias for table names?
>>> 
>>> Didn’t see these in any docs or Googling, any help is appreciated.
>> Writing all this data back to the original table would be a huge load on a
>> table being written to by external processes and therefore under large load
>> to begin with.
>>> 
>>>> On Feb 14, 2016, at 5:03 PM, Ted Yu <yu...@gmail.com> wrote:
>>>> 
>>>> There is currently no native support for renaming two tables in one
>> atomic
>>>> action.
>>>> 
>>>> FYI
>>>> 
>>>>> On Sun, Feb 14, 2016 at 4:18 PM, Pat Ferrel <pa...@occamsmachete.com>
>> wrote:
>>>>> 
>>>>> I use Spark to take an old table, clean it up to create an RDD of
>> cleaned
>>>>> data. What I’d like to do is write all of the data to a new table in
>> HBase,
>>>>> then rename the table to the old name. If possible it could be done by
>>>>> changing an alias to point to the new table as long as all external
>> code
>>>>> uses the alias, or by a 2 table rename operation. But I don’t see how
>> to do
>>>>> this for HBase. I am dealing with a lot of data so don’t want to do
>> table
>>>>> modifications with deletes and upserts, this would be incredibly slow.
>>>>> Furthermore I don’t want to disable the table for more than a tiny
>> span of
>>>>> time.
>>>>> 
>>>>> Is it possible to have 2 tables and rename both in an atomic action, or
>>>>> change some alias to point to the new table in an atomic action. If not
>>>>> what is the quickest way to achieve this to minimize time disabled.
>>> 
>> 


Re: Rename tables or swap alias

Posted by Ted Yu <yu...@gmail.com>.
Keep in mind that if the writes to this table are not paused, there would
be some data coming in between steps #1 and #2 which would not be in the
snapshot.

Cheers

On Mon, Feb 15, 2016 at 6:21 PM, Anil Gupta <an...@gmail.com> wrote:

> I dont think there is any atomic operations in hbase to support ddl across
> 2 tables.
>
> But, maybe you can use hbase snapshots.
> 1.Create a hbase snapshot.
> 2.Truncate the table.
> 3.Write data to the table.
> 4.Create a table from snapshot taken in step #1 as table_old.
>
> Now you have two tables. One with current run data and other with last run
> data.
> I think above process will suffice. But, keep in mind that it is not
> atomic.
>
> HTH,
> Anil
> Sent from my iPhone
>
> > On Feb 15, 2016, at 4:25 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
> >
> > Any other way to do what I was asking. With Spark this is a very normal
> thing to treat a table as immutable and create another to replace the old.
> >
> > Can you lock two tables and rename them in 2 actions then unlock in a
> very short period of time?
> >
> > Or an alias for table names?
> >
> > Didn’t see these in any docs or Googling, any help is appreciated.
> Writing all this data back to the original table would be a huge load on a
> table being written to by external processes and therefore under large load
> to begin with.
> >
> >> On Feb 14, 2016, at 5:03 PM, Ted Yu <yu...@gmail.com> wrote:
> >>
> >> There is currently no native support for renaming two tables in one
> atomic
> >> action.
> >>
> >> FYI
> >>
> >>> On Sun, Feb 14, 2016 at 4:18 PM, Pat Ferrel <pa...@occamsmachete.com>
> wrote:
> >>>
> >>> I use Spark to take an old table, clean it up to create an RDD of
> cleaned
> >>> data. What I’d like to do is write all of the data to a new table in
> HBase,
> >>> then rename the table to the old name. If possible it could be done by
> >>> changing an alias to point to the new table as long as all external
> code
> >>> uses the alias, or by a 2 table rename operation. But I don’t see how
> to do
> >>> this for HBase. I am dealing with a lot of data so don’t want to do
> table
> >>> modifications with deletes and upserts, this would be incredibly slow.
> >>> Furthermore I don’t want to disable the table for more than a tiny
> span of
> >>> time.
> >>>
> >>> Is it possible to have 2 tables and rename both in an atomic action, or
> >>> change some alias to point to the new table in an atomic action. If not
> >>> what is the quickest way to achieve this to minimize time disabled.
> >
>

Re: Rename tables or swap alias

Posted by Anil Gupta <an...@gmail.com>.
I don't think there are any atomic operations in HBase to support DDL across 2 tables.

But maybe you can use HBase snapshots:
1. Create an HBase snapshot.
2. Truncate the table.
3. Write data to the table.
4. Create a table from the snapshot taken in step #1 as table_old.

Now you have two tables, one with the current run's data and the other with the last run's data.
I think the above process (sketched below) will suffice, but keep in mind that it is not atomic.
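
A rough sketch of steps 1, 2, and 4 against the HBase 1.1 Admin API (the table
and snapshot names are illustrative; step 3 is just the normal write path):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SnapshotSwapSketch {
    public static void main(String[] args) throws Exception {
        TableName table = TableName.valueOf("main_table"); // illustrative name
        try (Connection conn =
                 ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // 1. Snapshot the table: cheap, records HFile references, no data copy.
            admin.snapshot("last_run_snap", table);
            // 2. Truncate the table; it must be disabled first, and 'true'
            //    preserves the existing region split points.
            admin.disableTable(table);
            admin.truncateTable(table, true);
            // 3. Write the current run's data to the table as usual.
            // 4. Materialize the previous run's data under a new name.
            admin.cloneSnapshot("last_run_snap", TableName.valueOf("table_old"));
        }
    }
}

As noted, none of this is atomic: writes landing between the snapshot and the
truncate end up in neither table.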

HTH,
Anil
Sent from my iPhone

> On Feb 15, 2016, at 4:25 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
> 
> Any other way to do what I was asking. With Spark this is a very normal thing to treat a table as immutable and create another to replace the old.
> 
> Can you lock two tables and rename them in 2 actions then unlock in a very short period of time?
> 
> Or an alias for table names?
> 
> Didn’t see these in any docs or Googling, any help is appreciated. Writing all this data back to the original table would be a huge load on a table being written to by external processes and therefore under large load to begin with.
> 
>> On Feb 14, 2016, at 5:03 PM, Ted Yu <yu...@gmail.com> wrote:
>> 
>> There is currently no native support for renaming two tables in one atomic
>> action.
>> 
>> FYI
>> 
>>> On Sun, Feb 14, 2016 at 4:18 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
>>> 
>>> I use Spark to take an old table, clean it up to create an RDD of cleaned
>>> data. What I’d like to do is write all of the data to a new table in HBase,
>>> then rename the table to the old name. If possible it could be done by
>>> changing an alias to point to the new table as long as all external code
>>> uses the alias, or by a 2 table rename operation. But I don’t see how to do
>>> this for HBase. I am dealing with a lot of data so don’t want to do table
>>> modifications with deletes and upserts, this would be incredibly slow.
>>> Furthermore I don’t want to disable the table for more than a tiny span of
>>> time.
>>> 
>>> Is it possible to have 2 tables and rename both in an atomic action, or
>>> change some alias to point to the new table in an atomic action. If not
>>> what is the quickest way to achieve this to minimize time disabled.
> 

Re: Rename tables or swap alias

Posted by Pat Ferrel <pa...@occamsmachete.com>.
Is there any other way to do what I was asking? With Spark it is a very normal thing to treat a table as immutable and create another to replace the old one.

Can you lock two tables, rename them in 2 actions, then unlock, all in a very short period of time?

Or an alias for table names?

I didn’t see these in any docs or by Googling; any help is appreciated. Writing all this data back to the original table would be a huge load on a table that is being written to by external processes, and is therefore under large load to begin with.
 
> On Feb 14, 2016, at 5:03 PM, Ted Yu <yu...@gmail.com> wrote:
> 
> There is currently no native support for renaming two tables in one atomic
> action.
> 
> FYI
> 
> On Sun, Feb 14, 2016 at 4:18 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:
> 
>> I use Spark to take an old table, clean it up to create an RDD of cleaned
>> data. What I’d like to do is write all of the data to a new table in HBase,
>> then rename the table to the old name. If possible it could be done by
>> changing an alias to point to the new table as long as all external code
>> uses the alias, or by a 2 table rename operation. But I don’t see how to do
>> this for HBase. I am dealing with a lot of data so don’t want to do table
>> modifications with deletes and upserts, this would be incredibly slow.
>> Furthermore I don’t want to disable the table for more than a tiny span of
>> time.
>> 
>> Is it possible to have 2 tables and rename both in an atomic action, or
>> change some alias to point to the new table in an atomic action. If not
>> what is the quickest way to achieve this to minimize time disabled.


Re: Rename tables or swap alias

Posted by Ted Yu <yu...@gmail.com>.
There is currently no native support for renaming two tables in one atomic
action.

FYI

On Sun, Feb 14, 2016 at 4:18 PM, Pat Ferrel <pa...@occamsmachete.com> wrote:

> I use Spark to take an old table, clean it up to create an RDD of cleaned
> data. What I’d like to do is write all of the data to a new table in HBase,
> then rename the table to the old name. If possible it could be done by
> changing an alias to point to the new table as long as all external code
> uses the alias, or by a 2 table rename operation. But I don’t see how to do
> this for HBase. I am dealing with a lot of data so don’t want to do table
> modifications with deletes and upserts, this would be incredibly slow.
> Furthermore I don’t want to disable the table for more than a tiny span of
> time.
>
> Is it possible to have 2 tables and rename both in an atomic action, or
> change some alias to point to the new table in an atomic action. If not
> what is the quickest way to achieve this to minimize time disabled.

Rename tables or swap alias

Posted by Pat Ferrel <pa...@occamsmachete.com>.
I use Spark to take an old table and clean it up to create an RDD of cleaned data. What I’d like to do is write all of the data to a new table in HBase, then rename the new table to the old name. If possible it could be done by changing an alias to point to the new table, as long as all external code uses the alias, or by a 2-table rename operation. But I don’t see how to do this in HBase. I am dealing with a lot of data, so I don’t want to do table modifications with deletes and upserts; this would be incredibly slow. Furthermore, I don’t want to disable the table for more than a tiny span of time.

Is it possible to have 2 tables and rename both in an atomic action, or to change some alias to point to the new table in an atomic action? If not, what is the quickest way to achieve this and minimize the time the table is disabled?

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

Posted by anil gupta <an...@gmail.com>.
I figured out the problem. We have phoenix.upsert.batch.size set to 10 in
hbase-site.xml, but somehow that property is **not getting picked up in our
Oozie workflow**.
When I explicitly set the phoenix.upsert.batch.size property in my Oozie
workflow, my job ran successfully.

By default, phoenix.upsert.batch.size is 1000. Hence, the commits were
failing with a huge batch size of 1000.
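
In case anyone hits the same thing, a minimal sketch of forcing the property
onto the job's Configuration in the MR driver, so it no longer depends on
hbase-site.xml being visible to the Oozie launcher (the job name and the rest
of the wiring are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.mapreduce.Job;

public class UpsertJobDriverSketch {
    public static void main(String[] args) throws Exception {
        // hbase-site.xml on the classpath would normally supply this, but the
        // Oozie launcher was not picking it up, so set it on the conf directly.
        Configuration conf = HBaseConfiguration.create();
        conf.set("phoenix.upsert.batch.size", "10");
        Job job = Job.getInstance(conf, "phoenix-upsert-load"); // illustrative name
        // ... mapper, PhoenixOutputFormat, and input/output wiring as before,
        // then submit with job.waitForCompletion(true).
    }
}

The same override can also live in the Oozie action itself, as a <property>
entry in the action's <configuration> block.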

Thanks,
Anil Gupta


On Sun, Feb 14, 2016 at 8:03 PM, Heng Chen <he...@gmail.com> wrote:

> I am not sure whether "upsert batch size in phoenix" equals HBase Client
> batch puts size or not.
>
> But as log shows, it seems there are 2000 actions send to hbase one time.
>
> 2016-02-15 11:38 GMT+08:00 anil gupta <an...@gmail.com>:
>
>> My phoenix upsert batch size is 50. You mean to say that 50 is also a lot?
>>
>> However, AsyncProcess is complaining about 2000 actions.
>>
>> I tried with upsert batch size of 5 also. But it didnt help.
>>
>> On Sun, Feb 14, 2016 at 7:37 PM, anil gupta <an...@gmail.com>
>> wrote:
>>
>> > My phoenix upsert batch size is 50. You mean to say that 50 is also a
>> lot?
>> >
>> > However, AsyncProcess is complaining about 2000 actions.
>> >
>> > I tried with upsert batch size of 5 also. But it didnt help.
>> >
>> >
>> > On Sun, Feb 14, 2016 at 6:43 PM, Heng Chen <he...@gmail.com>
>> > wrote:
>> >
>> >> 2016-02-14 12:34:23,593 INFO [main]
>> >> org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
>> >> actions to finish
>> >>
>> >> It means your writes are too many,  please decrease the batch size of
>> your
>> >> puts,  and balance your requests on each RS.
>> >>
>> >> 2016-02-15 4:53 GMT+08:00 anil gupta <an...@gmail.com>:
>> >>
>> >> > After a while we also get this error:
>> >> > 2016-02-14 12:45:10,515 WARN [main]
>> >> > org.apache.phoenix.execute.MutationState: Swallowing exception and
>> >> > retrying after clearing meta cache on connection.
>> >> > java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached
>> index
>> >> > metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find
>> >> > cached index metadata.  key=-594230549321118802
>> >> > region=BI.SALES,,1455470578449.44e39179789041b5a8c03316730260e7.
>> Index
>> >> > update failed
>> >> >
>> >> > We have already set:
>> >> >
>> >> > <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name><value>180000</value>
>> >> >
>> >> > Upsert batch size is 50. Writes are quite frequent so the cache would
>> >> > not time out in 180000 ms
>> >> >
>> >> >
>> >> > On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <an...@gmail.com>
>> >> > wrote:
>> >> >
>> >> > > Hi,
>> >> > >
>> >> > > We are using phoenix4.4, hbase 1.1(hdp2.3.4).
>> >> > > I have a MR job that is using PhoenixOutputFormat. My job keeps on
>> >> > failing
>> >> > > due to following error:
>> >> > >
>> >> > > 2016-02-14 12:29:43,182 INFO [main]
>> >> > org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
>> >> > actions to finish
>> >> > > [... the same "waiting for 2000 actions to finish" entry repeats
>> >> > every ~10 seconds until 12:34:03 ...]
>> >> > > 2016-02-14 12:34:03,953 INFO [hconnection-0xe82ca6e-shared--pool2-t16]
>> >> > org.apache.hadoop.hbase.client.AsyncProcess: #1, table=BI.SALES,
>> >> > attempt=10/35 failed=2000ops, last exception: null on
>> >> > hdp3.truecar.com,16020,1455326291512, tracking started null, retrying
>> >> > after=10086ms, replay=2000ops
>> >> > > 2016-02-14 12:34:13,578 INFO [main]
>> >> > org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
>> >> > actions to finish
>> >> > > 2016-02-14 12:34:23,593 INFO [main]
>> >> > org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
>> >> > actions to finish
>> >> > >
>> >> > > I have never seen anything like this. Can anyone give me pointers
>> >> about
>> >> > > this problem?
>> >> > >
>> >> > > --
>> >> > > Thanks & Regards,
>> >> > > Anil Gupta
>> >> > >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > Thanks & Regards,
>> >> > Anil Gupta
>> >> >
>> >>
>> >
>> >
>> >
>> > --
>> > Thanks & Regards,
>> > Anil Gupta
>> >
>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>


-- 
Thanks & Regards,
Anil Gupta

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

Posted by Heng Chen <he...@gmail.com>.
I am not sure whether the "upsert batch size" in Phoenix equals the HBase
client's batch puts size or not.

But as the log shows, it seems 2000 actions are sent to HBase at one time.
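
As far as I can tell, on the Phoenix JDBC side the upsert batch size just
controls how often the client commits its buffered mutations; on commit,
Phoenix hands those mutations to the HBase client, which is where
AsyncProcess shows up. A minimal sketch (the connection URL and columns are
illustrative; only the BI.SALES table name comes from the log above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedUpsertSketch {
    public static void main(String[] args) throws Exception {
        int batchSize = 50; // the "upsert batch size" discussed in this thread
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zk-host"); // illustrative quorum
             PreparedStatement ps = conn.prepareStatement(
                 "UPSERT INTO BI.SALES (SALE_ID, AMOUNT) VALUES (?, ?)")) { // illustrative columns
            conn.setAutoCommit(false);
            for (long i = 0; i < 100000; i++) {
                ps.setLong(1, i);
                ps.setDouble(2, i * 1.5);
                ps.executeUpdate();            // buffered client-side, not sent yet
                if ((i + 1) % batchSize == 0) {
                    conn.commit();             // flushes the buffered mutations to HBase
                }
            }
            conn.commit();                     // flush the remainder
        }
    }
}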

2016-02-15 11:38 GMT+08:00 anil gupta <an...@gmail.com>:

> My phoenix upsert batch size is 50. You mean to say that 50 is also a lot?
>
> However, AsyncProcess is complaining about 2000 actions.
>
> I tried with upsert batch size of 5 also. But it didnt help.
>
> On Sun, Feb 14, 2016 at 7:37 PM, anil gupta <an...@gmail.com> wrote:
>
> > My phoenix upsert batch size is 50. You mean to say that 50 is also a
> lot?
> >
> > However, AsyncProcess is complaining about 2000 actions.
> >
> > I tried with upsert batch size of 5 also. But it didnt help.
> >
> >
> > On Sun, Feb 14, 2016 at 6:43 PM, Heng Chen <he...@gmail.com>
> > wrote:
> >
> >> 2016-02-14 12:34:23,593 INFO [main]
> >> org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
> >> actions to finish
> >>
> >> It means your writes are too many,  please decrease the batch size of
> your
> >> puts,  and balance your requests on each RS.
> >>
> >> 2016-02-15 4:53 GMT+08:00 anil gupta <an...@gmail.com>:
> >>
> >> > After a while we also get this error:
> >> > 2016-02-14 12:45:10,515 WARN [main]
> >> > org.apache.phoenix.execute.MutationState: Swallowing exception and
> >> > retrying after clearing meta cache on connection.
> >> > java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index
> >> > metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find
> >> > cached index metadata.  key=-594230549321118802
> >> > region=BI.SALES,,1455470578449.44e39179789041b5a8c03316730260e7. Index
> >> > update failed
> >> >
> >> > We have already set:
> >> >
> >> > <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name><value>180000</value>
> >> >
> >> > Upsert batch size is 50. Writes are quite frequent so the cache would
> >> > not time out in 180000 ms
> >> >
> >> >
> >> > On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <an...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > > We are using phoenix4.4, hbase 1.1(hdp2.3.4).
> >> > > I have a MR job that is using PhoenixOutputFormat. My job keeps on
> >> > failing
> >> > > due to following error:
> >> > >
> >> > > 2016-02-14 12:29:43,182 INFO [main]
> >> > org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
> >> > actions to finish
> >> > > [... the same "waiting for 2000 actions to finish" entry repeats
> >> > every ~10 seconds until 12:34:03 ...]
> >> > > 2016-02-14 12:34:03,953 INFO [hconnection-0xe82ca6e-shared--pool2-t16]
> >> > org.apache.hadoop.hbase.client.AsyncProcess: #1, table=BI.SALES,
> >> > attempt=10/35 failed=2000ops, last exception: null on
> >> > hdp3.truecar.com,16020,1455326291512, tracking started null, retrying
> >> > after=10086ms, replay=2000ops
> >> > > 2016-02-14 12:34:13,578 INFO [main]
> >> > org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
> >> > actions to finish
> >> > > 2016-02-14 12:34:23,593 INFO [main]
> >> > org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
> >> > actions to finish
> >> > >
> >> > > I have never seen anything like this. Can anyone give me pointers
> >> about
> >> > > this problem?
> >> > >
> >> > > --
> >> > > Thanks & Regards,
> >> > > Anil Gupta
> >> > >
> >> >
> >> >
> >> >
> >> > --
> >> > Thanks & Regards,
> >> > Anil Gupta
> >> >
> >>
> >
> >
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

Posted by anil gupta <an...@gmail.com>.
My Phoenix upsert batch size is 50. Do you mean that 50 is also a lot?

However, AsyncProcess is complaining about 2000 actions.

I also tried an upsert batch size of 5, but it didn't help.
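
For reference, committing upserts in batches from a Phoenix JDBC client looks
roughly like the sketch below (a minimal illustration only; the BI.SALES
column names and the Record type are hypothetical stand-ins):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Minimal sketch of batched UPSERTs over Phoenix JDBC.
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk-host")) {
    conn.setAutoCommit(false);              // buffer mutations client-side
    try (PreparedStatement stmt = conn.prepareStatement(
            "UPSERT INTO BI.SALES (SALE_ID, AMOUNT) VALUES (?, ?)")) {
        int batchSize = 50;                 // the upsert batch size in question
        int pending = 0;
        for (Record r : records) {          // Record/records: hypothetical input
            stmt.setLong(1, r.id);
            stmt.setBigDecimal(2, r.amount);
            stmt.executeUpdate();
            if (++pending % batchSize == 0) {
                conn.commit();              // each commit flushes one batch to HBase
            }
        }
        conn.commit();                      // flush any final partial batch
    }
}

Each commit() hands that batch to the HBase client, which is the layer
producing the AsyncProcess messages in the log.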

On Sun, Feb 14, 2016 at 7:37 PM, anil gupta <an...@gmail.com> wrote:

> My Phoenix upsert batch size is 50. Do you mean that 50 is also a lot?
>
> However, AsyncProcess is complaining about 2000 actions.
>
> I also tried an upsert batch size of 5, but it didn't help.
>
>
> On Sun, Feb 14, 2016 at 6:43 PM, Heng Chen <he...@gmail.com>
> wrote:
>
>> 2016-02-14 12:34:23,593 INFO [main]
>> org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
>> actions to finish
>>
>> It means you are issuing too many writes; please decrease the batch size
>> of your puts and balance your requests across region servers.
>>
>> 2016-02-15 4:53 GMT+08:00 anil gupta <an...@gmail.com>:
>>
>> > After a while we also get this error:
>> > 2016-02-14 12:45:10,515 WARN [main]
>> > org.apache.phoenix.execute.MutationState: Swallowing exception and
>> > retrying after clearing meta cache on connection.
>> > java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index
>> > metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find
>> > cached index metadata.  key=-594230549321118802
>> > region=BI.SALES,,1455470578449.44e39179789041b5a8c03316730260e7. Index
>> > update failed
>> >
>> > We have already set:
>> >
>> >
>> <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name><value>180000</value>
>> >
>> > Upsert batch size is 50. Writes are quite frequent, so the cache would
>> > not time out in 180000 ms.
>> >
>> >
>> > On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <an...@gmail.com>
>> > wrote:
>> >
>> > > Hi,
>> > >
>> > > We are using Phoenix 4.4 and HBase 1.1 (HDP 2.3.4).
>> > > I have an MR job that uses PhoenixOutputFormat. The job keeps
>> > failing
>> > > with the following error:
>> > >
>> > > 2016-02-14 12:29:43,182 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> > > [... the same "waiting for 2000 actions to finish" entry repeats every ~10 seconds through 2016-02-14 12:34:03,565 ...]
>> > > 2016-02-14 12:34:03,953 INFO [hconnection-0xe82ca6e-shared--pool2-t16] org.apache.hadoop.hbase.client.AsyncProcess: #1, table=BI.SALES, attempt=10/35 failed=2000ops, last exception: null on hdp3.truecar.com,16020,1455326291512, tracking started null, retrying after=10086ms, replay=2000ops
>> > > 2016-02-14 12:34:13,578 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> > > 2016-02-14 12:34:23,593 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> > >
>> > > I have never seen anything like this. Can anyone give me pointers about this problem?
>> > >
>> > > --
>> > > Thanks & Regards,
>> > > Anil Gupta
>> > >
>> >
>> >
>> >
>> > --
>> > Thanks & Regards,
>> > Anil Gupta
>> >
>>
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>



-- 
Thanks & Regards,
Anil Gupta

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

Posted by anil gupta <an...@gmail.com>.
My Phoenix upsert batch size is 50. Do you mean that 50 is also a lot?

However, AsyncProcess is complaining about 2000 actions.

I also tried an upsert batch size of 5, but it didn't help.


On Sun, Feb 14, 2016 at 6:43 PM, Heng Chen <he...@gmail.com> wrote:

> 2016-02-14 12:34:23,593 INFO [main]
> org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
> actions to finish
>
> It means you are issuing too many writes; please decrease the batch size
> of your puts and balance your requests across region servers.
>
> 2016-02-15 4:53 GMT+08:00 anil gupta <an...@gmail.com>:
>
> > After a while we also get this error:
> > 2016-02-14 12:45:10,515 WARN [main]
> > org.apache.phoenix.execute.MutationState: Swallowing exception and
> > retrying after clearing meta cache on connection.
> > java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index
> > metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find
> > cached index metadata.  key=-594230549321118802
> > region=BI.SALES,,1455470578449.44e39179789041b5a8c03316730260e7. Index
> > update failed
> >
> > We have already set:
> >
> >
> <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name><value>180000</value>
> >
> > Upsert batch size is 50. Writes are quite frequent, so the cache would
> > not time out in 180000 ms.
> >
> >
> > On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <an...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > We are using Phoenix 4.4 and HBase 1.1 (HDP 2.3.4).
> > > I have an MR job that uses PhoenixOutputFormat. The job keeps
> > failing
> > > with the following error:
> > >
> > > 2016-02-14 12:29:43,182 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
> > > [... the same "waiting for 2000 actions to finish" entry repeats every ~10 seconds through 2016-02-14 12:34:03,565 ...]
> > > 2016-02-14 12:34:03,953 INFO [hconnection-0xe82ca6e-shared--pool2-t16] org.apache.hadoop.hbase.client.AsyncProcess: #1, table=BI.SALES, attempt=10/35 failed=2000ops, last exception: null on hdp3.truecar.com,16020,1455326291512, tracking started null, retrying after=10086ms, replay=2000ops
> > > 2016-02-14 12:34:13,578 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
> > > 2016-02-14 12:34:23,593 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
> > >
> > > I have never seen anything like this. Can anyone give me pointers about
> > > this problem?
> > >
> > > --
> > > Thanks & Regards,
> > > Anil Gupta
> > >
> >
> >
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
> >
>



-- 
Thanks & Regards,
Anil Gupta

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

Posted by Heng Chen <he...@gmail.com>.
2016-02-14 12:34:23,593 INFO [main]
org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000
actions to finish

It means you are issuing too many writes; please decrease the batch size of
your puts and balance your requests across region servers.
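
If the goal is to shrink the batch that Phoenix itself commits, one knob is
the documented client-side property phoenix.mutate.batchSize; a sketch (the
value 50 is illustrative, and whether PhoenixOutputFormat in 4.4 honors this
property should be verified against that version):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: lower the number of rows Phoenix groups per commit. Set this on
// the job/client Configuration before submitting the MR job.
Configuration conf = HBaseConfiguration.create();
conf.set("phoenix.mutate.batchSize", "50");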

2016-02-15 4:53 GMT+08:00 anil gupta <an...@gmail.com>:

> After a while we also get this error:
> 2016-02-14 12:45:10,515 WARN [main]
> org.apache.phoenix.execute.MutationState: Swallowing exception and
> retrying after clearing meta cache on connection.
> java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index
> metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find
> cached index metadata.  key=-594230549321118802
> region=BI.SALES,,1455470578449.44e39179789041b5a8c03316730260e7. Index
> update failed
>
> We have already set:
>
> <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name><value>180000</value>
>
> Upsert batch size is 50. Writes are quite frequent, so the cache would
> not time out in 180000 ms.
>
>
> On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <an...@gmail.com>
> wrote:
>
> > Hi,
> >
> > We are using Phoenix 4.4 and HBase 1.1 (HDP 2.3.4).
> > I have an MR job that uses PhoenixOutputFormat. The job keeps
> failing
> > with the following error:
> >
> > 2016-02-14 12:29:43,182 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
> > [... the same "waiting for 2000 actions to finish" entry repeats every ~10 seconds through 2016-02-14 12:34:03,565 ...]
> > 2016-02-14 12:34:03,953 INFO [hconnection-0xe82ca6e-shared--pool2-t16] org.apache.hadoop.hbase.client.AsyncProcess: #1, table=BI.SALES, attempt=10/35 failed=2000ops, last exception: null on hdp3.truecar.com,16020,1455326291512, tracking started null, retrying after=10086ms, replay=2000ops
> > 2016-02-14 12:34:13,578 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
> > 2016-02-14 12:34:23,593 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
> >
> > I have never seen anything like this. Can anyone give me pointers about
> > this problem?
> >
> > --
> > Thanks & Regards,
> > Anil Gupta
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>

Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish

Posted by anil gupta <an...@gmail.com>.
After a while we also get this error:
2016-02-14 12:45:10,515 WARN [main]
org.apache.phoenix.execute.MutationState: Swallowing exception and
retrying after clearing meta cache on connection.
java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index
metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find
cached index metadata.  key=-594230549321118802
region=BI.SALES,,1455470578449.44e39179789041b5a8c03316730260e7. Index
update failed

We have already set:
<name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name><value>180000</value>

Upsert batch size is 50. Writes are quite frequent, so the cache would
not time out in 180000 ms.
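
For what it's worth, the Phoenix documentation lists 30000 ms as the default
for this TTL, and the property is read by the coprocessors on the region
servers, so it needs to be in hbase-site.xml on every region server; setting
it only on the MR client side does not reach them. A quick check of the value
the local Configuration sees (a sketch; the default shown is an assumption
taken from the docs):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Sketch: print the index-metadata cache TTL as this client's Configuration
// resolves it (the region servers must be checked separately).
Configuration conf = HBaseConfiguration.create();
long ttlMs = conf.getLong("phoenix.coprocessor.maxServerCacheTimeToLiveMs",
        30000L);  // 30000 ms: assumed Phoenix default per the docs
System.out.println("maxServerCacheTimeToLiveMs = " + ttlMs);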


On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <an...@gmail.com> wrote:

> Hi,
>
> We are using Phoenix 4.4 and HBase 1.1 (HDP 2.3.4).
> I have an MR job that uses PhoenixOutputFormat. The job keeps failing
> with the following error:
>
> 2016-02-14 12:29:43,182 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
> [... the same "waiting for 2000 actions to finish" entry repeats every ~10 seconds through 2016-02-14 12:34:03,565 ...]
> 2016-02-14 12:34:03,953 INFO [hconnection-0xe82ca6e-shared--pool2-t16] org.apache.hadoop.hbase.client.AsyncProcess: #1, table=BI.SALES, attempt=10/35 failed=2000ops, last exception: null on hdp3.truecar.com,16020,1455326291512, tracking started null, retrying after=10086ms, replay=2000ops
> 2016-02-14 12:34:13,578 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
> 2016-02-14 12:34:23,593 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>
> I have never seen anything like this. Can anyone give me pointers about
> this problem?
>
> --
> Thanks & Regards,
> Anil Gupta
>



-- 
Thanks & Regards,
Anil Gupta
