Posted to user@phoenix.apache.org by Abe Weinograd <ab...@flonet.com> on 2014/08/03 15:18:38 UTC

table seems to get corrupt

We are loading the table via MapReduce directly into HFiles.  This has worked
well for us for a while.  Recently, while loading a table with 99 columns
and a three-column composite key, it became unqueryable at some point during
or after the load.

When trying to run an aggregation (GROUP BY and COUNT(1)), SQuirreL dumps
out java.lang.IllegalStateException: Expected single, aggregated KeyValue
from coprocessor, but instead received
keyvalues={\x00\x00\x00M\x00\x00\x01G\x92I\xEB\xEB3fcf9e6d-01f9-4bb9-9d08-f8cc95c00f28/0:ACTION_END/1406909122452/Put/vlen=8/ts=0/value=

When trying to do a SELECT * LIMIT 1, we
get org.apache.phoenix.exception.PhoenixIOException:
org.apache.phoenix.exception.PhoenixIOException: IPC server unable to read
call parameters: Can't find class
org.apache.phoenix.filter.ColumnProjectionFilter

This has happened a couple of times, and the only way we can seem to get
querying via Phoenix to work again is to blow away the table.  I can scan and
do things fine via HBase and the HBase shell.

I'm having trouble getting anywhere debugging this myself.  Any thoughts/help
would be greatly appreciated.

Abe

Re: table seems to get corrupt

Posted by Abe Weinograd <ab...@flonet.com>.
I had a busted jar on one of my region servers.  Any table with data on it
was giving that error.  Thanks for the pointer!
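In case anyone else hits this: a quick sketch for comparing the Phoenix jar
across region servers. The hostnames and the jar path below are placeholders
for your own cluster layout, not the actual values from our environment.

```shell
# Compare the Phoenix jar checksum on every region server.
# Hostnames and the jar path are placeholders -- adjust for your cluster.
for host in rs1.example.com rs2.example.com rs3.example.com; do
  printf '%s: ' "$host"
  ssh "$host" 'md5sum /usr/lib/hbase/lib/phoenix-*.jar'
done
# A host whose checksum differs from the others, or whose jar fails
# "unzip -t", is holding the busted copy.
```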


On Wed, Aug 6, 2014 at 2:36 AM, Gabriel Reid <ga...@gmail.com> wrote:

> Hi Abe,
>
> I believe the second part of the "Expected single, aggregated
> KeyValue" error message is "Ensure aggregating coprocessors are loaded
> correctly on server". The fact that a basic scan isn't finding the
> org.apache.phoenix.filter.ColumnProjectionFilter class also points to
> the same thing: that there has been an issue loading the phoenix
> classes within HBase.

Re: table seems to get corrupt

Posted by Gabriel Reid <ga...@gmail.com>.
Hi Abe,

I believe the second part of the "Expected single, aggregated
KeyValue" error message is "Ensure aggregating coprocessors are loaded
correctly on server". The fact that a basic scan isn't finding the
org.apache.phoenix.filter.ColumnProjectionFilter class also points to
the same thing: there has been an issue loading the Phoenix
classes within HBase.

Could you check the listing of registered coprocessors for the table
in question (you can see this in the HBase shell)? Is this only
happening on the one table that you're trying to query, or on all
Phoenix tables in HBase? Could you also tell us a bit more about the
loading process, i.e. are you using the Phoenix-provided bulk loading
tool or a custom one? And what is your environment (i.e. which
version of Phoenix, which version of HBase)?
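For example, the coprocessors show up in the table's attributes. TABLE_NAME
below is a placeholder, and the exact coprocessor list depends on your
Phoenix version:

```shell
# Dump the table description, including registered coprocessors.
# TABLE_NAME is a placeholder for the Phoenix table's HBase name.
echo "describe 'TABLE_NAME'" | hbase shell
# A healthy Phoenix table should list coprocessor attributes referencing
# classes such as org.apache.phoenix.coprocessor.ScanRegionObserver.
# Missing entries, or entries pointing at classes the region servers
# can't load, suggest a problem with the Phoenix jar deployment.
```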

- Gabriel


On Wed, Aug 6, 2014 at 12:32 AM, Abe Weinograd <ab...@flonet.com> wrote:
> Any idea why this is happening? All region servers have the Phoenix jars.

Re: table seems to get corrupt

Posted by Abe Weinograd <ab...@flonet.com>.
Any idea why this is happening? All region servers have the Phoenix jars.
