Posted to user@hbase.apache.org by Cyril Scetbon <cy...@free.fr> on 2012/07/29 23:49:43 UTC

Coprocessor POC

Hi,

I'm testing AggregationClient functions to check whether we could use coprocessors for mathematical functions.

The code I use is the following:

package coreprocessor;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
import org.apache.hadoop.hbase.util.Bytes;

public class AggregationClientTest {

   private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
   private static final byte[] CF = Bytes.toBytes("core");

   public static void main(String[] args) throws Throwable {
      
       Configuration configuration = HBaseConfiguration.create();

       configuration.setLong("hbase.client.scanner.caching", 1000);
       AggregationClient aggregationClient = new AggregationClient(
               configuration);
       Scan scan = new Scan();
       scan.addColumn(CF, Bytes.toBytes("value"));
       System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
       System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
       System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
   }
}

The only one that works is the rowCount function; for the others I get an NPE!
I've checked that my table uses only Long values for the column I'm working on, and I have only one row in my table:

ROW                                                  COLUMN+CELL                                                                                                                                            
 id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A                                                                     
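
For reference, Bytes.toBytes(10L) is exactly the 8-byte value \x00\x00\x00\x00\x00\x00\x00\x0A shown above, i.e. the long 10. A hypothetical writer for that row could look like the sketch below (the table, row key and column names are taken from the scan output; the data may well have been inserted differently, e.g. from the shell):

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical writer for the row above, reusing the Configuration from the
// test program. Bytes.toBytes(10L) stores the value as an 8-byte big-endian
// long, which is the encoding a Long-based ColumnInterpreter expects to read back.
HTable table = new HTable(configuration, "ise");
Put put = new Put(Bytes.toBytes("id-cyr1"));
put.add(Bytes.toBytes("core"), Bytes.toBytes("value"), Bytes.toBytes(10L));
table.put(put);
table.close();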

The only thing I can add is that my HBase server is version 0.94.0 and that I use version 0.92.0 of the hbase jar.

Any idea why it doesn't work?

thanks 
Cyril SCETBON


Re: Coprocessor POC

Posted by Ted Yu <yu...@gmail.com>.
Did your client include the following fix?
HBASE-5821  Incorrect handling of null value in Coprocessor aggregation
function min() (Maryann Xue)

From the stack trace you provided, it looks like the NPE came from this line:
          sumVal = ci.add(sumVal, ci.castToReturnType(ci.getValue(colFamily,
              qualifier, kv)));
ci.getValue() might have returned null.
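
For illustration only, a minimal null-safe sketch of that accumulation (not the actual HBase source or the HBASE-5821 patch; ci, sumVal, colFamily, qualifier and kv are the names from the snippet above, and T is the interpreter's cell-value type):

T cellValue = ci.getValue(colFamily, qualifier, kv);
if (cellValue != null) {
    // Fold only non-null interpreted values into the running sum, so a
    // missing or unreadable cell cannot trigger a NullPointerException.
    sumVal = ci.add(sumVal, ci.castToReturnType(cellValue));
}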

Please update your client to 0.94.1 RC and see if the NPE occurs.

Thanks

On Mon, Jul 30, 2012 at 6:10 AM, Cyril Scetbon <cy...@free.fr> wrote:

> Here is the stack : (with hbase-0.94.0.jar)
>
> row count is 1
> 12/07/30 15:08:53 WARN
> client.HConnectionManager$HConnectionImplementation: Error executing for row
> java.util.concurrent.ExecutionException:
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after
> attempts=10, exceptions:
> Mon Jul 30 15:08:14 CEST 2012,
> org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@341049d3,
> java.io.IOException: java.io.IOException: java.lang.NullPointerException
>         at
> org.apache.hadoop.hbase.coprocessor.AggregateImplementation.getAvg(AggregateImplementation.java:189)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at
> org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:4770)
>         at
> org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3457)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
>         at
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)
>
> Regards
> Cyril SCETBON
>
> On Jul 29, 2012, at 11:54 PM, yuzhihong@gmail.com wrote:
>
> > Can you use 0.94 for your client jar ?
> >
> > Please show us the NullPointerException stack.
> >
> > Thanks
> >
> >
> >
> > On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr>
> wrote:
> >
> >> Hi,
> >>
> >> I'm testing AggregationClient functions to check if we could use
> coprocessors for mathematical functions.
> >>
> >> The code I use is the following :
> >>
> >> package coreprocessor;
> >>
> >> import org.apache.hadoop.conf.Configuration;
> >> import org.apache.hadoop.hbase.HBaseConfiguration;
> >> import org.apache.hadoop.hbase.client.Scan;
> >> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
> >> import org.apache.hadoop.hbase.util.Bytes;
> >>
> >> public class AggregationClientTest {
> >>
> >>  private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
> >>  private static final byte[] CF = Bytes.toBytes("core");
> >>
> >>  public static void main(String[] args) throws Throwable {
> >>
> >>      Configuration configuration = HBaseConfiguration.create();
> >>
> >>      configuration.setLong("hbase.client.scanner.caching", 1000);
> >>      AggregationClient aggregationClient = new AggregationClient(
> >>              configuration);
> >>      Scan scan = new Scan();
> >>      scan.addColumn(CF, Bytes.toBytes("value"));
> >>      System.out.println("row count is " +
> aggregationClient.rowCount(TABLE_NAME, null, scan));
> >>      System.out.println("avg is " + aggregationClient.avg(TABLE_NAME,
> null, scan));
> >>      System.out.println("sum is " + aggregationClient.sum(TABLE_NAME,
> null, scan));
> >>  }
> >> }
> >>
> >> The only one working is the rowCount function. For others I get a NPE
> error !
> >> I've checked that my table use only Long values for the column on which
> I work, and I've only one row in my table :
> >>
> >> ROW                                                  COLUMN+CELL
> >> id-cyr1                                             column=core:value,
> timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
> >>
> >> The only thing I can add is that my hbase server's version is 0.94.0
> and that I use version 0.92.0 of the hbase jar
> >>
> >> any idea why it doesn't work ?
> >>
> >> thanks
> >> Cyril SCETBON
> >>
>
>

Re: Coprocessor POC

Posted by Cyril Scetbon <cy...@free.fr>.
Here is the stack trace (with hbase-0.94.0.jar):

row count is 1
12/07/30 15:08:53 WARN client.HConnectionManager$HConnectionImplementation: Error executing for row 
java.util.concurrent.ExecutionException: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=10, exceptions:
Mon Jul 30 15:08:14 CEST 2012, org.apache.hadoop.hbase.ipc.ExecRPCInvoker$1@341049d3, java.io.IOException: java.io.IOException: java.lang.NullPointerException
	at org.apache.hadoop.hbase.coprocessor.AggregateImplementation.getAvg(AggregateImplementation.java:189)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.hbase.regionserver.HRegion.exec(HRegion.java:4770)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.execCoprocessor(HRegionServer.java:3457)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364)
	at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1376)

Regards
Cyril SCETBON

On Jul 29, 2012, at 11:54 PM, yuzhihong@gmail.com wrote:

> Can you use 0.94 for your client jar ?
> 
> Please show us the NullPointerException stack. 
> 
> Thanks
> 
> 
> 
> On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr> wrote:
> 
>> Hi,
>> 
>> I'm testing AggregationClient functions to check if we could use coprocessors for mathematical functions.
>> 
>> The code I use is the following :
>> 
>> package coreprocessor;
>> 
>> import org.apache.hadoop.conf.Configuration;
>> import org.apache.hadoop.hbase.HBaseConfiguration;
>> import org.apache.hadoop.hbase.client.Scan;
>> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
>> import org.apache.hadoop.hbase.util.Bytes;
>> 
>> public class AggregationClientTest {
>> 
>>  private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>>  private static final byte[] CF = Bytes.toBytes("core");
>> 
>>  public static void main(String[] args) throws Throwable {
>> 
>>      Configuration configuration = HBaseConfiguration.create();
>> 
>>      configuration.setLong("hbase.client.scanner.caching", 1000);
>>      AggregationClient aggregationClient = new AggregationClient(
>>              configuration);
>>      Scan scan = new Scan();
>>      scan.addColumn(CF, Bytes.toBytes("value"));
>>      System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>>      System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>>      System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>>  }
>> }
>> 
>> The only one working is the rowCount function. For others I get a NPE error !
>> I've checked that my table use only Long values for the column on which I work, and I've only one row in my table :
>> 
>> ROW                                                  COLUMN+CELL                                                                                                                                            
>> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A                                                                     
>> 
>> The only thing I can add is that my hbase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar
>> 
>> any idea why it doesn't work ?
>> 
>> thanks 
>> Cyril SCETBON
>> 


Re: Coprocessor POC

Posted by Cyril Scetbon <cy...@free.fr>.
Unfortunately I can't remember/find it :( and I see in AggregationClient's javadoc that:

"Column family can't be null", so I suppose I should have read it first!

Thanks again
Cyril SCETBON

On Jul 30, 2012, at 7:30 PM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:

> We should fix the reference then. Where did you read it?
> 
> On Mon, Jul 30, 2012 at 10:43 AM, Cyril Scetbon <cy...@free.fr> wrote:
>> Thanks, it's really better !
>> 
>> I've read that by default it supports only Long values, that's why I was using a null ColumnInterpreter.
>> 
>> Regards.
>> Cyril SCETBON
>> 
>> On Jul 30, 2012, at 5:56 PM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:
>> 
>>> On Mon, Jul 30, 2012 at 6:55 AM, Cyril Scetbon <cy...@free.fr> wrote:
>>> 
>>>> I've given the values returned by scan 'table' command in hbase shell in my first email.
>>> Somehow I missed the scan result in your first email. So, can you pass
>>> a LongColumnInterpreter instance instead of null?
>>> See TestAggregateProtocol methods for usage.
>>> 
>>> Thanks
>>> Himanshu
>>> 
>>>> 
>>>> Regards
>>>> Cyril SCETBON
>>>> 
>>>> On Jul 30, 2012, at 12:50 AM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:
>>>> 
>>>>> And also, what are your cell values look like?
>>>>> 
>>>>> Himanshu
>>>>> 
>>>>> On Sun, Jul 29, 2012 at 3:54 PM,  <yu...@gmail.com> wrote:
>>>>>> Can you use 0.94 for your client jar ?
>>>>>> 
>>>>>> Please show us the NullPointerException stack.
>>>>>> 
>>>>>> Thanks
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr> wrote:
>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I'm testing AggregationClient functions to check if we could use coprocessors for mathematical functions.
>>>>>>> 
>>>>>>> The code I use is the following :
>>>>>>> 
>>>>>>> package coreprocessor;
>>>>>>> 
>>>>>>> import org.apache.hadoop.conf.Configuration;
>>>>>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>>>>>> import org.apache.hadoop.hbase.client.Scan;
>>>>>>> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
>>>>>>> import org.apache.hadoop.hbase.util.Bytes;
>>>>>>> 
>>>>>>> public class AggregationClientTest {
>>>>>>> 
>>>>>>> private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>>>>>>> private static final byte[] CF = Bytes.toBytes("core");
>>>>>>> 
>>>>>>> public static void main(String[] args) throws Throwable {
>>>>>>> 
>>>>>>>    Configuration configuration = HBaseConfiguration.create();
>>>>>>> 
>>>>>>>    configuration.setLong("hbase.client.scanner.caching", 1000);
>>>>>>>    AggregationClient aggregationClient = new AggregationClient(
>>>>>>>            configuration);
>>>>>>>    Scan scan = new Scan();
>>>>>>>    scan.addColumn(CF, Bytes.toBytes("value"));
>>>>>>>    System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>>>>>>>    System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>>>>>>>    System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>>>>>>> }
>>>>>>> }
>>>>>>> 
>>>>>>> The only one working is the rowCount function. For others I get a NPE error !
>>>>>>> I've checked that my table use only Long values for the column on which I work, and I've only one row in my table :
>>>>>>> 
>>>>>>> ROW                                                  COLUMN+CELL
>>>>>>> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
>>>>>>> 
>>>>>>> The only thing I can add is that my hbase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar
>>>>>>> 
>>>>>>> any idea why it doesn't work ?
>>>>>>> 
>>>>>>> thanks
>>>>>>> Cyril SCETBON
>>>>>>> 
>>>> 
>> 


Re: Coprocessor POC

Posted by Himanshu Vashishtha <hv...@cs.ualberta.ca>.
We should fix the reference then. Where did you read it?

On Mon, Jul 30, 2012 at 10:43 AM, Cyril Scetbon <cy...@free.fr> wrote:
> Thanks, it's really better !
>
> I've read that by default it supports only Long values, that's why I was using a null ColumnInterpreter.
>
> Regards.
> Cyril SCETBON
>
> On Jul 30, 2012, at 5:56 PM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:
>
>> On Mon, Jul 30, 2012 at 6:55 AM, Cyril Scetbon <cy...@free.fr> wrote:
>>
>>> I've given the values returned by scan 'table' command in hbase shell in my first email.
>> Somehow I missed the scan result in your first email. So, can you pass
>> a LongColumnInterpreter instance instead of null?
>> See TestAggregateProtocol methods for usage.
>>
>> Thanks
>> Himanshu
>>
>>>
>>> Regards
>>> Cyril SCETBON
>>>
>>> On Jul 30, 2012, at 12:50 AM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:
>>>
>>>> And also, what are your cell values look like?
>>>>
>>>> Himanshu
>>>>
>>>> On Sun, Jul 29, 2012 at 3:54 PM,  <yu...@gmail.com> wrote:
>>>>> Can you use 0.94 for your client jar ?
>>>>>
>>>>> Please show us the NullPointerException stack.
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>
>>>>> On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I'm testing AggregationClient functions to check if we could use coprocessors for mathematical functions.
>>>>>>
>>>>>> The code I use is the following :
>>>>>>
>>>>>> package coreprocessor;
>>>>>>
>>>>>> import org.apache.hadoop.conf.Configuration;
>>>>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>>>>> import org.apache.hadoop.hbase.client.Scan;
>>>>>> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
>>>>>> import org.apache.hadoop.hbase.util.Bytes;
>>>>>>
>>>>>> public class AggregationClientTest {
>>>>>>
>>>>>> private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>>>>>> private static final byte[] CF = Bytes.toBytes("core");
>>>>>>
>>>>>> public static void main(String[] args) throws Throwable {
>>>>>>
>>>>>>     Configuration configuration = HBaseConfiguration.create();
>>>>>>
>>>>>>     configuration.setLong("hbase.client.scanner.caching", 1000);
>>>>>>     AggregationClient aggregationClient = new AggregationClient(
>>>>>>             configuration);
>>>>>>     Scan scan = new Scan();
>>>>>>     scan.addColumn(CF, Bytes.toBytes("value"));
>>>>>>     System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>>>>>>     System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>>>>>>     System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>>>>>> }
>>>>>> }
>>>>>>
>>>>>> The only one working is the rowCount function. For others I get a NPE error !
>>>>>> I've checked that my table use only Long values for the column on which I work, and I've only one row in my table :
>>>>>>
>>>>>> ROW                                                  COLUMN+CELL
>>>>>> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
>>>>>>
>>>>>> The only thing I can add is that my hbase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar
>>>>>>
>>>>>> any idea why it doesn't work ?
>>>>>>
>>>>>> thanks
>>>>>> Cyril SCETBON
>>>>>>
>>>
>

Re: Coprocessor POC

Posted by Cyril Scetbon <cy...@free.fr>.
Thanks, it's much better now!

I've read that by default it supports only Long values; that's why I was using a null ColumnInterpreter.

Regards.
Cyril SCETBON

On Jul 30, 2012, at 5:56 PM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:

> On Mon, Jul 30, 2012 at 6:55 AM, Cyril Scetbon <cy...@free.fr> wrote:
> 
>> I've given the values returned by scan 'table' command in hbase shell in my first email.
> Somehow I missed the scan result in your first email. So, can you pass
> a LongColumnInterpreter instance instead of null?
> See TestAggregateProtocol methods for usage.
> 
> Thanks
> Himanshu
> 
>> 
>> Regards
>> Cyril SCETBON
>> 
>> On Jul 30, 2012, at 12:50 AM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:
>> 
>>> And also, what are your cell values look like?
>>> 
>>> Himanshu
>>> 
>>> On Sun, Jul 29, 2012 at 3:54 PM,  <yu...@gmail.com> wrote:
>>>> Can you use 0.94 for your client jar ?
>>>> 
>>>> Please show us the NullPointerException stack.
>>>> 
>>>> Thanks
>>>> 
>>>> 
>>>> 
>>>> On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr> wrote:
>>>> 
>>>>> Hi,
>>>>> 
>>>>> I'm testing AggregationClient functions to check if we could use coprocessors for mathematical functions.
>>>>> 
>>>>> The code I use is the following :
>>>>> 
>>>>> package coreprocessor;
>>>>> 
>>>>> import org.apache.hadoop.conf.Configuration;
>>>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>>>> import org.apache.hadoop.hbase.client.Scan;
>>>>> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
>>>>> import org.apache.hadoop.hbase.util.Bytes;
>>>>> 
>>>>> public class AggregationClientTest {
>>>>> 
>>>>> private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>>>>> private static final byte[] CF = Bytes.toBytes("core");
>>>>> 
>>>>> public static void main(String[] args) throws Throwable {
>>>>> 
>>>>>     Configuration configuration = HBaseConfiguration.create();
>>>>> 
>>>>>     configuration.setLong("hbase.client.scanner.caching", 1000);
>>>>>     AggregationClient aggregationClient = new AggregationClient(
>>>>>             configuration);
>>>>>     Scan scan = new Scan();
>>>>>     scan.addColumn(CF, Bytes.toBytes("value"));
>>>>>     System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>>>>>     System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>>>>>     System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>>>>> }
>>>>> }
>>>>> 
>>>>> The only one working is the rowCount function. For others I get a NPE error !
>>>>> I've checked that my table use only Long values for the column on which I work, and I've only one row in my table :
>>>>> 
>>>>> ROW                                                  COLUMN+CELL
>>>>> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
>>>>> 
>>>>> The only thing I can add is that my hbase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar
>>>>> 
>>>>> any idea why it doesn't work ?
>>>>> 
>>>>> thanks
>>>>> Cyril SCETBON
>>>>> 
>> 


Re: Coprocessor POC

Posted by Himanshu Vashishtha <hv...@cs.ualberta.ca>.
On Mon, Jul 30, 2012 at 6:55 AM, Cyril Scetbon <cy...@free.fr> wrote:

> I've given the values returned by scan 'table' command in hbase shell in my first email.
Somehow I missed the scan result in your first email. So, can you pass
a LongColumnInterpreter instance instead of null?
See TestAggregateProtocol methods for usage.
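
A minimal sketch of that change against the test program from the first mail (assuming the 0.94 client jar on the classpath, and reusing its aggregationClient, TABLE_NAME and scan variables):

import org.apache.hadoop.hbase.client.coprocessor.LongColumnInterpreter;

// Pass an explicit interpreter instead of null so the region server knows
// how to decode the 8-byte Long cell values during aggregation.
LongColumnInterpreter ci = new LongColumnInterpreter();
System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, ci, scan));
System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, ci, scan));
System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, ci, scan));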

Thanks
Himanshu

>
> Regards
> Cyril SCETBON
>
> On Jul 30, 2012, at 12:50 AM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:
>
>> And also, what are your cell values look like?
>>
>> Himanshu
>>
>> On Sun, Jul 29, 2012 at 3:54 PM,  <yu...@gmail.com> wrote:
>>> Can you use 0.94 for your client jar ?
>>>
>>> Please show us the NullPointerException stack.
>>>
>>> Thanks
>>>
>>>
>>>
>>> On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr> wrote:
>>>
>>>> Hi,
>>>>
>>>> I'm testing AggregationClient functions to check if we could use coprocessors for mathematical functions.
>>>>
>>>> The code I use is the following :
>>>>
>>>> package coreprocessor;
>>>>
>>>> import org.apache.hadoop.conf.Configuration;
>>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>>> import org.apache.hadoop.hbase.client.Scan;
>>>> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
>>>> import org.apache.hadoop.hbase.util.Bytes;
>>>>
>>>> public class AggregationClientTest {
>>>>
>>>>  private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>>>>  private static final byte[] CF = Bytes.toBytes("core");
>>>>
>>>>  public static void main(String[] args) throws Throwable {
>>>>
>>>>      Configuration configuration = HBaseConfiguration.create();
>>>>
>>>>      configuration.setLong("hbase.client.scanner.caching", 1000);
>>>>      AggregationClient aggregationClient = new AggregationClient(
>>>>              configuration);
>>>>      Scan scan = new Scan();
>>>>      scan.addColumn(CF, Bytes.toBytes("value"));
>>>>      System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>>>>      System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>>>>      System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>>>>  }
>>>> }
>>>>
>>>> The only one working is the rowCount function. For others I get a NPE error !
>>>> I've checked that my table use only Long values for the column on which I work, and I've only one row in my table :
>>>>
>>>> ROW                                                  COLUMN+CELL
>>>> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
>>>>
>>>> The only thing I can add is that my hbase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar
>>>>
>>>> any idea why it doesn't work ?
>>>>
>>>> thanks
>>>> Cyril SCETBON
>>>>
>

Re: Coprocessor POC

Posted by Cyril Scetbon <cy...@free.fr>.
I've given the values returned by the scan 'table' command in the hbase shell in my first email.

Regards
Cyril SCETBON

On Jul 30, 2012, at 12:50 AM, Himanshu Vashishtha <hv...@cs.ualberta.ca> wrote:

> And also, what are your cell values look like?
> 
> Himanshu
> 
> On Sun, Jul 29, 2012 at 3:54 PM,  <yu...@gmail.com> wrote:
>> Can you use 0.94 for your client jar ?
>> 
>> Please show us the NullPointerException stack.
>> 
>> Thanks
>> 
>> 
>> 
>> On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr> wrote:
>> 
>>> Hi,
>>> 
>>> I'm testing AggregationClient functions to check if we could use coprocessors for mathematical functions.
>>> 
>>> The code I use is the following :
>>> 
>>> package coreprocessor;
>>> 
>>> import org.apache.hadoop.conf.Configuration;
>>> import org.apache.hadoop.hbase.HBaseConfiguration;
>>> import org.apache.hadoop.hbase.client.Scan;
>>> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
>>> import org.apache.hadoop.hbase.util.Bytes;
>>> 
>>> public class AggregationClientTest {
>>> 
>>>  private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>>>  private static final byte[] CF = Bytes.toBytes("core");
>>> 
>>>  public static void main(String[] args) throws Throwable {
>>> 
>>>      Configuration configuration = HBaseConfiguration.create();
>>> 
>>>      configuration.setLong("hbase.client.scanner.caching", 1000);
>>>      AggregationClient aggregationClient = new AggregationClient(
>>>              configuration);
>>>      Scan scan = new Scan();
>>>      scan.addColumn(CF, Bytes.toBytes("value"));
>>>      System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>>>      System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>>>      System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>>>  }
>>> }
>>> 
>>> The only one working is the rowCount function. For others I get a NPE error !
>>> I've checked that my table use only Long values for the column on which I work, and I've only one row in my table :
>>> 
>>> ROW                                                  COLUMN+CELL
>>> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
>>> 
>>> The only thing I can add is that my hbase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar
>>> 
>>> any idea why it doesn't work ?
>>> 
>>> thanks
>>> Cyril SCETBON
>>> 


Re: Coprocessor POC

Posted by Himanshu Vashishtha <hv...@cs.ualberta.ca>.
And also, what do your cell values look like?

Himanshu

On Sun, Jul 29, 2012 at 3:54 PM,  <yu...@gmail.com> wrote:
> Can you use 0.94 for your client jar ?
>
> Please show us the NullPointerException stack.
>
> Thanks
>
>
>
> On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr> wrote:
>
>> Hi,
>>
>> I'm testing AggregationClient functions to check if we could use coprocessors for mathematical functions.
>>
>> The code I use is the following :
>>
>> package coreprocessor;
>>
>> import org.apache.hadoop.conf.Configuration;
>> import org.apache.hadoop.hbase.HBaseConfiguration;
>> import org.apache.hadoop.hbase.client.Scan;
>> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
>> import org.apache.hadoop.hbase.util.Bytes;
>>
>> public class AggregationClientTest {
>>
>>   private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>>   private static final byte[] CF = Bytes.toBytes("core");
>>
>>   public static void main(String[] args) throws Throwable {
>>
>>       Configuration configuration = HBaseConfiguration.create();
>>
>>       configuration.setLong("hbase.client.scanner.caching", 1000);
>>       AggregationClient aggregationClient = new AggregationClient(
>>               configuration);
>>       Scan scan = new Scan();
>>       scan.addColumn(CF, Bytes.toBytes("value"));
>>       System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>>       System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>>       System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>>   }
>> }
>>
>> The only one working is the rowCount function. For others I get a NPE error !
>> I've checked that my table use only Long values for the column on which I work, and I've only one row in my table :
>>
>> ROW                                                  COLUMN+CELL
>> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A
>>
>> The only thing I can add is that my hbase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar
>>
>> any idea why it doesn't work ?
>>
>> thanks
>> Cyril SCETBON
>>

Re: Coprocessor POC

Posted by yu...@gmail.com.
Can you use 0.94 for your client jar?
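
One quick way to confirm which hbase jar the client is actually loading (a classpath sanity check only, not part of the test program) is HBase's VersionInfo utility:

import org.apache.hadoop.hbase.util.VersionInfo;

// Prints the version of the HBase library on the client classpath, which
// should match (or be wire-compatible with) the 0.94.0 region servers.
System.out.println("client hbase version: " + VersionInfo.getVersion());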

Please show us the NullPointerException stack. 

Thanks



On Jul 29, 2012, at 2:49 PM, Cyril Scetbon <cy...@free.fr> wrote:

> Hi,
> 
> I'm testing AggregationClient functions to check if we could use coprocessors for mathematical functions.
> 
> The code I use is the following :
> 
> package coreprocessor;
> 
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.coprocessor.AggregationClient;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public class AggregationClientTest {
> 
>   private static final byte[] TABLE_NAME = Bytes.toBytes("ise");
>   private static final byte[] CF = Bytes.toBytes("core");
> 
>   public static void main(String[] args) throws Throwable {
> 
>       Configuration configuration = HBaseConfiguration.create();
> 
>       configuration.setLong("hbase.client.scanner.caching", 1000);
>       AggregationClient aggregationClient = new AggregationClient(
>               configuration);
>       Scan scan = new Scan();
>       scan.addColumn(CF, Bytes.toBytes("value"));
>       System.out.println("row count is " + aggregationClient.rowCount(TABLE_NAME, null, scan));
>       System.out.println("avg is " + aggregationClient.avg(TABLE_NAME, null, scan));
>       System.out.println("sum is " + aggregationClient.sum(TABLE_NAME, null, scan));
>   }
> }
> 
> The only one working is the rowCount function. For others I get a NPE error !
> I've checked that my table use only Long values for the column on which I work, and I've only one row in my table :
> 
> ROW                                                  COLUMN+CELL                                                                                                                                            
> id-cyr1                                             column=core:value, timestamp=1343596419845, value=\x00\x00\x00\x00\x00\x00\x00\x0A                                                                     
> 
> The only thing I can add is that my hbase server's version is 0.94.0 and that I use version 0.92.0 of the hbase jar
> 
> any idea why it doesn't work ?
> 
> thanks 
> Cyril SCETBON
>