Posted to user@hbase.apache.org by Bing Li <lb...@gmail.com> on 2013/02/04 21:20:07 UTC

Is "synchronized" required?

Dear all,

When writing data into HBase, I sometimes get exceptions. I suspect
they are caused by concurrent writes, but I am not sure.

My question is whether it is necessary to put "synchronized" on the
writing methods. The following lines are the sample code.

I think the synchronized keyword must lower write performance, and my
system sometimes needs to write concurrently.

Thanks so much!

Best wishes,
Bing

public synchronized void AddDomainNodeRanks(String domainKey, int timingScale, Map<String, Double> nodeRankMap)
{
    List<Put> puts = new ArrayList<Put>();
    Put domainKeyPut;
    Put timingScalePut;
    Put nodeKeyPut;
    Put rankPut;

    byte[] domainNodeRankRowKey;

    for (Map.Entry<String, Double> nodeRankEntry : nodeRankMap.entrySet())
    {
        domainNodeRankRowKey = Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW
                + Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));

        domainKeyPut = new Put(domainNodeRankRowKey);
        domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
                RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
                Bytes.toBytes(domainKey));
        puts.add(domainKeyPut);

        timingScalePut = new Put(domainNodeRankRowKey);
        timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
                RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
                Bytes.toBytes(timingScale));
        puts.add(timingScalePut);

        nodeKeyPut = new Put(domainNodeRankRowKey);
        nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
                RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
                Bytes.toBytes(nodeRankEntry.getKey()));
        puts.add(nodeKeyPut);

        rankPut = new Put(domainNodeRankRowKey);
        rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
                RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
                Bytes.toBytes(nodeRankEntry.getValue()));
        puts.add(rankPut);
    }

    try
    {
        this.rankTable.put(puts);
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
}

Re: Is "synchronized" required?

Posted by lars hofhansl <la...@apache.org>.
Yep.



________________________________
 From: Bing Li <lb...@gmail.com>
To: user <us...@hbase.apache.org>; lars hofhansl <la...@apache.org> 
Sent: Wednesday, February 6, 2013 2:31 AM
Subject: Re: Is "synchronized" required?
 
Dear Lars,

I am now running HBase in pseudo-distributed mode. Does the updated
HTable constructor also work there?

Thanks so much!
Bing


Re: Is "synchronized" required?

Posted by Bing Li <lb...@gmail.com>.
Dear Lars,

I am now running HBase in pseudo-distributed mode. Does the updated
HTable constructor also work there?

Thanks so much!
Bing

On Wed, Feb 6, 2013 at 3:44 PM, lars hofhansl <la...@apache.org> wrote:
> Don't use a pool at all.
> With HBASE-4805 (https://issues.apache.org/jira/browse/HBASE-4805) you can precreate an HConnection and ExecutorService and then make HTable cheaply on demand every time you need one.
>
> Check out HConnectionManager.createConnection(...) and the HTable constructors.
>
> I need to document this somewhere.
>
>
> -- Lars

Re: Is "synchronized" required?

Posted by lars hofhansl <la...@apache.org>.
Don't use a pool at all.
With HBASE-4805 (https://issues.apache.org/jira/browse/HBASE-4805) you can precreate an HConnection and ExecutorService and then make HTable cheaply on demand every time you need one.

Check out HConnectionManager.createConnection(...) and the HTable constructors.

I need to document this somewhere.
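
(A minimal sketch of that pattern, assuming the HBASE-4805-era client API
in which HTable has a constructor taking a table name, an HConnection and
an ExecutorService; the table name "rank" and the pool size below are
placeholders. HBASE-4805 landed after 0.92, so this may not apply to older
clients.)

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HConnection;
import org.apache.hadoop.hbase.client.HConnectionManager;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class RankTableFactory
{
    // Heavyweight, thread-safe resources, created once per process.
    private final HConnection connection;
    private final ExecutorService pool;

    public RankTableFactory() throws IOException
    {
        Configuration conf = HBaseConfiguration.create();
        this.connection = HConnectionManager.createConnection(conf);
        this.pool = Executors.newFixedThreadPool(20); // size is a placeholder
    }

    // HTable itself is not thread safe, so each thread makes its own cheap
    // instance on top of the shared connection and executor, uses it, and
    // closes it when done.
    public HTable getRankTable() throws IOException
    {
        return new HTable(Bytes.toBytes("rank"), this.connection, this.pool);
    }
}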


-- Lars

Re: Is "synchronized" required?

Posted by Adrien Mogenet <ad...@gmail.com>.
I probably don't know your application well enough to give an accurate
answer, but you could have a look at asynchbase [
https://github.com/OpenTSDB/asynchbase] if you have thread-safety issues
and want tighter control over how your threads share resources.
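
(For reference, a minimal asynchbase write could look like the sketch
below; the quorum spec, table, row and column names are placeholders and
error handling is omitted. Unlike HTable, asynchbase's HBaseClient is
designed to be shared by all threads in the process.)

import org.hbase.async.HBaseClient;
import org.hbase.async.PutRequest;

public class AsyncWriteSketch
{
    public static void main(String[] args) throws Exception
    {
        // One HBaseClient per application; it is thread safe by design.
        final HBaseClient client = new HBaseClient("localhost"); // ZK quorum spec (placeholder)

        PutRequest put = new PutRequest("rank".getBytes(), "row1".getBytes(),
                                        "cf".getBytes(), "q".getBytes(), "value".getBytes());

        // put() is non-blocking and returns a Deferred; join waits for the result.
        client.put(put).joinUninterruptibly();

        client.shutdown().joinUninterruptibly();
    }
}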




-- 
Adrien Mogenet
06.59.16.64.22
http://www.mogenet.me

Re: Is "synchronized" required?

Posted by Bing Li <lb...@gmail.com>.
Lars,

I found that the exceptions have nothing to do with a shared HTable, at least.

To save resources, I designed a pool for the classes that write to and
read from HBase. The primary resource these classes consume is HTable.
The pool has some bugs.

My question is whether it is necessary to design such a pool. Is it
fine to create an instance of HTable for each thread?

I noticed that HBase has a class, HTablePool. Maybe the pool I
designed is NOT required?

Thanks so much!

Best wishes!
Bing

On Wed, Feb 6, 2013 at 1:05 PM, lars hofhansl <la...@apache.org> wrote:
> Are you sharing this.rankTable between threads? HTable is not thread safe.
>
> -- Lars
>
>
>
> ________________________________
>  From: Bing Li <lb...@gmail.com>
> To: "hbase-user@hadoop.apache.org" <hb...@hadoop.apache.org>; user <us...@hbase.apache.org>
> Sent: Tuesday, February 5, 2013 8:54 AM
> Subject: Re: Is "synchronized" required?
>
> Dear all,
>
> After "synchronized" is removed from the method of writing, I get the
> following exceptions when reading. Before the removal, no such
> exceptions.
>
> Could you help me how to solve it?
>
> Thanks so much!
>
> Best wishes,
> Bing
>
>      [java] Feb 6, 2013 12:21:31 AM org.apache.hadoop.hbase.ipc.HBaseClient$Connection run
>      [java] WARNING: Unexpected exception receiving call responses
>      [java] java.lang.NullPointerException
>      [java]     at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
>      [java]     at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
>      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
>      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>      [java] Feb 6, 2013 12:21:31 AM org.apache.hadoop.hbase.client.ScannerCallable close
>      [java] WARNING: Ignore, probably already closed
>      [java] java.io.IOException: Call to greatfreeweb/127.0.1.1:60020 failed on local exception: java.io.IOException: Unexpected exception receiving call responses
>      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:934)
>      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:903)
>      [java]     at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      [java]     at $Proxy6.close(Unknown Source)
>      [java]     at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:112)
>      [java]     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:74)
>      [java]     at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
>      [java]     at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
>      [java]     at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1167)
>      [java]     at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1296)
>      [java]     at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1356)
>      [java]     at com.greatfree.hbase.rank.NodeRankRetriever.LoadNodeGroupNodeRankRowKeys(NodeRankRetriever.java:348)
>      [java]     at com.greatfree.ranking.PersistNodeGroupNodeRanksThread.run(PersistNodeGroupNodeRanksThread.java:29)
>      [java]     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>      [java]     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>      [java]     at java.lang.Thread.run(Thread.java:662)
>      [java] Caused by: java.io.IOException: Unexpected exception receiving call responses
>      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:509)
>      [java] Caused by: java.lang.NullPointerException
>      [java]     at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
>      [java]     at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
>      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
>      [java]     at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>
>
> The code that causes the exceptions is as follows.
>
>         public Set<String> LoadNodeGroupNodeRankRowKeys(String hostNodeKey, String groupKey, int timingScale)
>         {
>                 List<Filter> nodeGroupFilterList = new ArrayList<Filter>();
>
>                 SingleColumnValueFilter hostNodeKeyFilter = new SingleColumnValueFilter(
>                         RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
>                         RankStructure.NODE_GROUP_NODE_RANK_HOST_NODE_KEY_COLUMN,
>                         CompareFilter.CompareOp.EQUAL, new SubstringComparator(hostNodeKey));
>                 hostNodeKeyFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(hostNodeKeyFilter);
>
>                 SingleColumnValueFilter groupKeyFilter = new SingleColumnValueFilter(
>                         RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
>                         RankStructure.NODE_GROUP_NODE_RANK_GROUP_KEY_COLUMN,
>                         CompareFilter.CompareOp.EQUAL, new SubstringComparator(groupKey));
>                 groupKeyFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(groupKeyFilter);
>
>                 SingleColumnValueFilter timingScaleFilter = new SingleColumnValueFilter(
>                         RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
>                         RankStructure.NODE_GROUP_NODE_RANK_TIMING_SCALE_COLUMN,
>                         CompareFilter.CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes(timingScale)));
>                 timingScaleFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(timingScaleFilter);
>
>                 FilterList nodeGroupFilter = new FilterList(nodeGroupFilterList);
>                 Scan scan = new Scan();
>                 scan.setFilter(nodeGroupFilter);
>                 scan.setCaching(Parameters.CACHING_SIZE);
>                 scan.setBatch(Parameters.BATCHING_SIZE);
>
>                 Set<String> rowKeySet = Sets.newHashSet();
>                 try
>                 {
>                         ResultScanner scanner = this.rankTable.getScanner(scan);
>                         for (Result result : scanner) // <---- EXCEPTIONS are raised at this line.
>                         {
>                                 for (KeyValue kv : result.raw())
>                                 {
>                                         rowKeySet.add(Bytes.toString(kv.getRow()));
>                                         break;
>                                 }
>                         }
>                         scanner.close();
>                 }
>                 catch (IOException e)
>                 {
>                         e.printStackTrace();
>                 }
>                 return rowKeySet;
>         }

Re: searching functionality in Hbase

Posted by Amit Sela <am...@infolinks.com>.
If you are going to use an Endpoint coprocessor, check this experiment:
http://hbase-coprocessor-experiments.blogspot.co.il/2011/05/extending.html

It shows when an Endpoint should be preferred over a Scan.

Also, keep in mind that HBase row keys are lexicographically ordered.

Good luck!
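
(As an illustration of exploiting that ordering, a sketch of a prefix
"search" using start and stop rows; the table handle and the prefix are
made-up names.)

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class PrefixScanSketch
{
    // Because row keys are sorted lexicographically, all keys sharing a
    // prefix form one contiguous range, so a Scan bounded by start/stop
    // rows reads only that range instead of the whole table.
    public static void scanByPrefix(HTable table, String prefix) throws IOException
    {
        byte[] start = Bytes.toBytes(prefix);
        // First key that no longer matches: the prefix with its last byte
        // incremented (assumes that byte is not already 0xFF).
        byte[] stop = Bytes.toBytes(prefix);
        stop[stop.length - 1]++;

        Scan scan = new Scan();
        scan.setStartRow(start);
        scan.setStopRow(stop);

        ResultScanner scanner = table.getScanner(scan);
        for (Result result : scanner)
        {
            System.out.println(Bytes.toString(result.getRow()));
        }
        scanner.close();
    }
}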


Re: searching functionality in Hbase

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Dastagiri,

      Search is essentially a Get or a Scan plus some condition. You have
HBase Filters and coprocessors, and you can use indexing for faster searches.

Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
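
(A tiny sketch of the "Get plus a condition" case; the family, qualifier
and expected value below are placeholders.)

import java.io.IOException;

import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetSearchSketch
{
    // "Searching" a single known row is just a Get plus a client-side check.
    public static boolean hasStatus(HTable table, String rowKey, String expected) throws IOException
    {
        Get get = new Get(Bytes.toBytes(rowKey));
        get.addColumn(Bytes.toBytes("info"), Bytes.toBytes("status")); // placeholder column
        Result result = table.get(get);
        byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("status"));
        return value != null && expected.equals(Bytes.toString(value));
    }
}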



searching functionality in Hbase

Posted by Dastagiri S Shaik <da...@manthanservices.com>.
Hi All,

Can you please tell me which implementations are available in HBase for
searching functionality?


Thanks
Dastagiri


Re: Is "synchronized" required?

Posted by lars hofhansl <la...@apache.org>.
Maybe you ran into HBASE-6651 (https://issues.apache.org/jira/browse/HBASE-6651)
HTablePool is bad and we should just deprecate and remove it.

Lemme run a vote on the dev list.

-- Lars



________________________________
 From: Bing Li <lb...@gmail.com>
To: user <us...@hbase.apache.org>; lars hofhansl <la...@apache.org> 
Sent: Thursday, February 7, 2013 12:10 AM
Subject: Re: Is "synchronized" required?
 
Dear Lars,

Some exceptions are raised when I read data from HBase concurrently.
Each reading thread is assigned its own HTable instance. The version of
HBase I use is 0.92.0.

I cannot fix the problem. Could you please help me?

Thanks so much!

Best wishes,
Bing

      Feb 6, 2013 12:21:31 AM
org.apache.hadoop.hbase.ipc.HBaseClient$Connection run
      WARNING: Unexpected exception receiving call responses
java.lang.NullPointerException
          at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
          at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
      Feb 6, 2013 12:21:31 AM
org.apache.hadoop.hbase.client.ScannerCallable close
      WARNING: Ignore, probably already closed
      java.io.IOException: Call to greatfreeweb/127.0.1.1:60020
failed on local exception: java.io.IOException: Unexpected exception
receiving call responses
          at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:934)
          at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:903)
          at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
          at $Proxy6.close(Unknown Source)
          at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:112)
          at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:74)
          at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
          at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
          at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1167)
          at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1296)
          at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1356)
          at com.greatfree.hbase.rank.NodeRankRetriever.loadNodeGroupNodeRankRowKeys(NodeRankRetriever.java:348)
          at com.greatfree.ranking.PersistNodeGroupNodeRanksThread.run(PersistNodeGroupNodeRanksThread.java:29)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:662)
      Caused by: java.io.IOException: Unexpected exception receiving call responses
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:509)
      Caused by: java.lang.NullPointerException
          at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
          at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)

I read data from HBase concurrently with the following code.

        ...
                ExecutorService threadPool = Executors.newFixedThreadPool(100);
                LoadNodeGroupNodeRankRowKeyThread thread;
                for (String nodeKey : nodeKeys)
                {
                       // Threads are initialized and executed here.
                       thread = new LoadNodeGroupNodeRankRowKeyThread(nodeKey);
                       threadPool.execute(thread);
                }
                Scanner in = new Scanner(System.in);
                in.nextLine();
                threadPool.shutdownNow();
        ...

The code of LoadNodeGroupNodeRankRowKeyThread is as follows:

        ...
        public void run()
        {
                NodeRankRetriever retriever = new NodeRankRetriever();

// The following line reads data from HBase.
                Set<String> rowKeys = retriever.loadNodeGroupNodeRankRowKeys(this.hostNodeKey);
                if (rowKeys.size() > 0)
                {
                        for (String rowKey : rowKeys)
                        {
                                System.out.println(rowKey);
                        }
                }
                else
                {
                        System.out.println("No data loaded");
                }
                retriever.dispose();
        }
        ...

The constructor of NodeRankRetriever() just gets an HTable instance
from the HTablePool via the following method.

        ...
        public HTableInterface getTable(String tableName)
        {
                return this.hTablePool.getTable(tableName);
        }
        ...

The method dispose() of NodeRankRetriever() just closes the
HTableInterface obtained from the HTablePool.

        ...
        public void dispose()
        {
                try
                {
                        this.rankTable.close();
                }
                catch (IOException e)
                {
                        e.printStackTrace();
                }
        }
        ...

On Wed, Feb 6, 2013 at 1:05 PM, lars hofhansl <la...@apache.org> wrote:
> Are you sharing this.rankTable between threads? HTable is not thread safe.
>
> -- Lars
>
>
>
> ________________________________
>  From: Bing Li <lb...@gmail.com>
> To: "hbase-user@hadoop.apache.org" <hb...@hadoop.apache.org>; user <us...@hbase.apache.org>
> Sent: Tuesday, February 5, 2013 8:54 AM
> Subject: Re: Is "synchronized" required?
>
> Dear all,
>
> After "synchronized" is removed from the method of writing, I get the
> following exceptions when reading. Before the removal, no such
> exceptions.
>
> Could you help me how to solve it?
>
> Thanks so much!
>
> Best wishes,
> Bing
>
>      [java] Feb 6, 2013 12:21:31 AM
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection run
>      [java] WARNING: Unexpected exception receiving call responses
>      [java] java.lang.NullPointerException
>      [java]     at
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
>      [java]     at
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>      [java] Feb 6, 2013 12:21:31 AM
> org.apache.hadoop.hbase.client.ScannerCallable close
>      [java] WARNING: Ignore, probably already closed
>      [java] java.io.IOException: Call to greatfreeweb/127.0.1.1:60020
> failed on local exception: java.io.IOException: Unexpected exception
> receiving call responses
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:934)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:903)
>      [java]     at
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      [java]     at $Proxy6.close(Unknown Source)
>      [java]     at
> org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:112)
>      [java]     at
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:74)
>      [java]     at
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
>      [java]     at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
>      [java]     at
> org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1167)
>      [java]     at
> org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1296)
>      [java]     at
> org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1356)
>      [java]     at
> com.greatfree.hbase.rank.NodeRankRetriever.LoadNodeGroupNodeRankRowKeys(NodeRankRetriever.java:348)
>      [java]     at
> com.greatfree.ranking.PersistNodeGroupNodeRanksThread.run(PersistNodeGroupNodeRanksThread.java:29)
>      [java]     at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>      [java]     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>      [java]     at java.lang.Thread.run(Thread.java:662)
>      [java] Caused by: java.io.IOException: Unexpected exception
> receiving call responses
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:509)
>      [java] Caused by: java.lang.NullPointerException
>      [java]     at
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
>      [java]     at
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>
>
> The code that causes the exceptions is as follows.
>
>         public Set<String> LoadNodeGroupNodeRankRowKeys(String
> hostNodeKey, String groupKey, int timingScale)
>         {
>                 List<Filter> nodeGroupFilterList = new ArrayList<Filter>();
>
>                 SingleColumnValueFilter hostNodeKeyFilter = new
> SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
> RankStructure.NODE_GROUP_NODE_RANK_HOST_NODE_KEY_COLUMN,
> CompareFilter.CompareOp.EQUAL, new SubstringComparator(hostNodeKey));
>                 hostNodeKeyFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(hostNodeKeyFilter);
>
>                 SingleColumnValueFilter groupKeyFilter = new
> SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
> RankStructure.NODE_GROUP_NODE_RANK_GROUP_KEY_COLUMN,
> CompareFilter.CompareOp.EQUAL, new SubstringComparator(groupKey));
>                 groupKeyFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(groupKeyFilter);
>
>                 SingleColumnValueFilter timingScaleFilter = new
> SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
> RankStructure.NODE_GROUP_NODE_RANK_TIMING_SCALE_COLUMN,
> CompareFilter.CompareOp.EQUAL, new
> BinaryComparator(Bytes.toBytes(timingScale)));
>                 timingScaleFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(timingScaleFilter);
>
>                 FilterList nodeGroupFilter = new
> FilterList(nodeGroupFilterList);
>                 Scan scan = new Scan();
>                 scan.setFilter(nodeGroupFilter);
>                 scan.setCaching(Parameters.CACHING_SIZE);
>                 scan.setBatch(Parameters.BATCHING_SIZE);
>
>                 Set<String> rowKeySet = Sets.newHashSet();
>                 try
>                 {
>                         ResultScanner scanner = this.rankTable.getScanner(scan);
>                         for (Result result : scanner) // <---- EXCEPTIONS are raised at this line.
>                         {
>                                 for (KeyValue kv : result.raw())
>                                 {
>                                         rowKeySet.add(Bytes.toString(kv.getRow()));
>                                         break;
>                                 }
>                         }
>                         scanner.close();
>                 }
>                 catch (IOException e)
>                 {
>                         e.printStackTrace();
>                 }
>                 return rowKeySet;
>         }
>
>
> On Tue, Feb 5, 2013 at 4:20 AM, Bing Li <lb...@gmail.com> wrote:
>> Dear all,
>>
>> When writing data into HBase, sometimes I got exceptions. I guess they
>> might be caused by concurrent writings. But I am not sure.
>>
>> My question is whether it is necessary to put "synchronized" before
>> the writing methods? The following lines are the sample code.
>>
>> I think the directive, synchronized, must lower the performance of
>> writing. Sometimes concurrent writing is needed in my system.
>>
>> Thanks so much!
>>
>> Best wishes,
>> Bing
>>
>> public synchronized void AddDomainNodeRanks(String domainKey, int
>> timingScale, Map<String, Double> nodeRankMap)
>> {
>>       List<Put> puts = new ArrayList<Put>();
>>       Put domainKeyPut;
>>       Put timingScalePut;
>>       Put nodeKeyPut;
>>       Put rankPut;
>>
>>       byte[] domainNodeRankRowKey;
>>
>>       for (Map.Entry<String, Double> nodeRankEntry : nodeRankMap.entrySet())
>>       {
>>           domainNodeRankRowKey =
>> Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
>> Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));
>>
>>          domainKeyPut = new Put(domainNodeRankRowKey);
>>          domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
>> Bytes.toBytes(domainKey));
>>          puts.add(domainKeyPut);
>>
>>          timingScalePut = new Put(domainNodeRankRowKey);
>>          timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
>> Bytes.toBytes(timingScale));
>>         puts.add(timingScalePut);
>>
>>         nodeKeyPut = new Put(domainNodeRankRowKey);
>>         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
>> Bytes.toBytes(nodeRankEntry.getKey()));
>>         puts.add(nodeKeyPut);
>>
>>         rankPut = new Put(domainNodeRankRowKey);
>>         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
>> Bytes.toBytes(nodeRankEntry.getValue()));
>>         puts.add(rankPut);
>>      }
>>
>>      try
>>      {
>>          this.rankTable.put(puts);
>>      }
>>      catch (IOException e)
>>      {
>>          e.printStackTrace();
>>      }
>> }

Re: Is "synchronized" required?

Posted by Bing Li <lb...@gmail.com>.
Dear Lars,

Some exceptions are raised when I concurrently read data from HBase.
Each reading thread is assigned its own HTable instance. The version of
HBase I am using is 0.92.0.

I cannot fix the problem. Could you please help me?

Thanks so much!

Best wishes,
Bing

      Feb 6, 2013 12:21:31 AM
org.apache.hadoop.hbase.ipc.HBaseClient$Connection run
      WARNING: Unexpected exception receiving call responses
java.lang.NullPointerException
          at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
          at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
      Feb 6, 2013 12:21:31 AM
org.apache.hadoop.hbase.client.ScannerCallable close
      WARNING: Ignore, probably already closed
      java.io.IOException: Call to greatfreeweb/127.0.1.1:60020
failed on local exception: java.io.IOException: Unexpected exception
receiving call responses
          at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:934)
          at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:903)
          at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
          at $Proxy6.close(Unknown Source)
          at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:112)
          at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:74)
          at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
          at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
          at org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1167)
          at org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1296)
          at org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1356)
          at com.greatfree.hbase.rank.NodeRankRetriever.loadNodeGroupNodeRankRowKeys(NodeRankRetriever.java:348)
          at com.greatfree.ranking.PersistNodeGroupNodeRanksThread.run(PersistNodeGroupNodeRanksThread.java:29)
          at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
          at java.lang.Thread.run(Thread.java:662)
      Caused by: java.io.IOException: Unexpected exception receiving call responses
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:509)
      Caused by: java.lang.NullPointerException
          at org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
          at org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
          at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)

I read data from HBase concurrently with the following code.

        ...
                ExecutorService threadPool = Executors.newFixedThreadPool(100);
                LoadNodeGroupNodeRankRowKeyThread thread;
                for (String nodeKey : nodeKeys)
                {
                       // Threads are initialized and executed here.
                       thread = new LoadNodeGroupNodeRankRowKeyThread(nodeKey);
                       threadPool.execute(thread);
                }
                Scanner in = new Scanner(System.in);
                in.nextLine();
                threadPool.shutdownNow();
        ...

The code of LoadNodeGroupNodeRankRowKeyThread is as follows:

        ...
        public void run()
        {
                NodeRankRetriever retriever = new NodeRankRetriever();

// The following line reads data from HBase.
                Set<String> rowKeys = retriever.loadNodeGroupNodeRankRowKeys(this.hostNodeKey);
                if (rowKeys.size() > 0)
                {
                        for (String rowKey : rowKeys)
                        {
                                System.out.println(rowKey);
                        }
                }
                else
                {
                        System.out.println("No data loaded");
                }
                retriever.dispose();
        }
        ...

The constructor of NodeRankRetriever() just gets an HTable instance
from the HTablePool via the following method.

        ...
        public HTableInterface getTable(String tableName)
        {
                return this.hTablePool.getTable(tableName);
        }
        ...

The method dispose() of NodeRankRetriever() just closes the
HTableInterface obtained from the HTablePool.

        ...
        public void dispose()
        {
                try
                {
                        this.rankTable.close();
                }
                catch (IOException e)
                {
                        e.printStackTrace();
                }
        }
        ...

On Wed, Feb 6, 2013 at 1:05 PM, lars hofhansl <la...@apache.org> wrote:
> Are you sharing this.rankTable between threads? HTable is not thread safe.
>
> -- Lars
>
>
>
> ________________________________
>  From: Bing Li <lb...@gmail.com>
> To: "hbase-user@hadoop.apache.org" <hb...@hadoop.apache.org>; user <us...@hbase.apache.org>
> Sent: Tuesday, February 5, 2013 8:54 AM
> Subject: Re: Is "synchronized" required?
>
> Dear all,
>
> After "synchronized" is removed from the method of writing, I get the
> following exceptions when reading. Before the removal, no such
> exceptions.
>
> Could you help me how to solve it?
>
> Thanks so much!
>
> Best wishes,
> Bing
>
>      [java] Feb 6, 2013 12:21:31 AM
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection run
>      [java] WARNING: Unexpected exception receiving call responses
>      [java] java.lang.NullPointerException
>      [java]     at
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
>      [java]     at
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>      [java] Feb 6, 2013 12:21:31 AM
> org.apache.hadoop.hbase.client.ScannerCallable close
>      [java] WARNING: Ignore, probably already closed
>      [java] java.io.IOException: Call to greatfreeweb/127.0.1.1:60020
> failed on local exception: java.io.IOException: Unexpected exception
> receiving call responses
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:934)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:903)
>      [java]     at
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
>      [java]     at $Proxy6.close(Unknown Source)
>      [java]     at
> org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:112)
>      [java]     at
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:74)
>      [java]     at
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
>      [java]     at
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
>      [java]     at
> org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1167)
>      [java]     at
> org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1296)
>      [java]     at
> org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1356)
>      [java]     at
> com.greatfree.hbase.rank.NodeRankRetriever.LoadNodeGroupNodeRankRowKeys(NodeRankRetriever.java:348)
>      [java]     at
> com.greatfree.ranking.PersistNodeGroupNodeRanksThread.run(PersistNodeGroupNodeRanksThread.java:29)
>      [java]     at
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>      [java]     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>      [java]     at java.lang.Thread.run(Thread.java:662)
>      [java] Caused by: java.io.IOException: Unexpected exception
> receiving call responses
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:509)
>      [java] Caused by: java.lang.NullPointerException
>      [java]     at
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
>      [java]     at
> org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
>      [java]     at
> org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
>
>
> The code that causes the exceptions is as follows.
>
>         public Set<String> LoadNodeGroupNodeRankRowKeys(String
> hostNodeKey, String groupKey, int timingScale)
>         {
>                 List<Filter> nodeGroupFilterList = new ArrayList<Filter>();
>
>                 SingleColumnValueFilter hostNodeKeyFilter = new
> SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
> RankStructure.NODE_GROUP_NODE_RANK_HOST_NODE_KEY_COLUMN,
> CompareFilter.CompareOp.EQUAL, new SubstringComparator(hostNodeKey));
>                 hostNodeKeyFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(hostNodeKeyFilter);
>
>                 SingleColumnValueFilter groupKeyFilter = new
> SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
> RankStructure.NODE_GROUP_NODE_RANK_GROUP_KEY_COLUMN,
> CompareFilter.CompareOp.EQUAL, new SubstringComparator(groupKey));
>                 groupKeyFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(groupKeyFilter);
>
>                 SingleColumnValueFilter timingScaleFilter = new
> SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
> RankStructure.NODE_GROUP_NODE_RANK_TIMING_SCALE_COLUMN,
> CompareFilter.CompareOp.EQUAL, new
> BinaryComparator(Bytes.toBytes(timingScale)));
>                 timingScaleFilter.setFilterIfMissing(true);
>                 nodeGroupFilterList.add(timingScaleFilter);
>
>                 FilterList nodeGroupFilter = new
> FilterList(nodeGroupFilterList);
>                 Scan scan = new Scan();
>                 scan.setFilter(nodeGroupFilter);
>                 scan.setCaching(Parameters.CACHING_SIZE);
>                 scan.setBatch(Parameters.BATCHING_SIZE);
>
>                 Set<String> rowKeySet = Sets.newHashSet();
>                 try
>                 {
>                         ResultScanner scanner = this.rankTable.getScanner(scan);
>                         for (Result result : scanner) // <---- EXCEPTIONS are raised at this line.
>                         {
>                                 for (KeyValue kv : result.raw())
>                                 {
>                                         rowKeySet.add(Bytes.toString(kv.getRow()));
>                                         break;
>                                 }
>                         }
>                         scanner.close();
>                 }
>                 catch (IOException e)
>                 {
>                         e.printStackTrace();
>                 }
>                 return rowKeySet;
>         }
>
>
> On Tue, Feb 5, 2013 at 4:20 AM, Bing Li <lb...@gmail.com> wrote:
>> Dear all,
>>
>> When writing data into HBase, sometimes I got exceptions. I guess they
>> might be caused by concurrent writings. But I am not sure.
>>
>> My question is whether it is necessary to put "synchronized" before
>> the writing methods? The following lines are the sample code.
>>
>> I think the directive, synchronized, must lower the performance of
>> writing. Sometimes concurrent writing is needed in my system.
>>
>> Thanks so much!
>>
>> Best wishes,
>> Bing
>>
>> public synchronized void AddDomainNodeRanks(String domainKey, int
>> timingScale, Map<String, Double> nodeRankMap)
>> {
>>       List<Put> puts = new ArrayList<Put>();
>>       Put domainKeyPut;
>>       Put timingScalePut;
>>       Put nodeKeyPut;
>>       Put rankPut;
>>
>>       byte[] domainNodeRankRowKey;
>>
>>       for (Map.Entry<String, Double> nodeRankEntry : nodeRankMap.entrySet())
>>       {
>>           domainNodeRankRowKey =
>> Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
>> Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));
>>
>>          domainKeyPut = new Put(domainNodeRankRowKey);
>>          domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
>> Bytes.toBytes(domainKey));
>>          puts.add(domainKeyPut);
>>
>>          timingScalePut = new Put(domainNodeRankRowKey);
>>          timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
>> Bytes.toBytes(timingScale));
>>         puts.add(timingScalePut);
>>
>>         nodeKeyPut = new Put(domainNodeRankRowKey);
>>         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
>> Bytes.toBytes(nodeRankEntry.getKey()));
>>         puts.add(nodeKeyPut);
>>
>>         rankPut = new Put(domainNodeRankRowKey);
>>         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
>> Bytes.toBytes(nodeRankEntry.getValue()));
>>         puts.add(rankPut);
>>      }
>>
>>      try
>>      {
>>          this.rankTable.put(puts);
>>      }
>>      catch (IOException e)
>>      {
>>          e.printStackTrace();
>>      }
>> }

Re: Is "synchronized" required?

Posted by lars hofhansl <la...@apache.org>.
Are you sharing this.rankTable between threads? HTable is not thread safe.

-- Lars



________________________________
 From: Bing Li <lb...@gmail.com>
To: "hbase-user@hadoop.apache.org" <hb...@hadoop.apache.org>; user <us...@hbase.apache.org> 
Sent: Tuesday, February 5, 2013 8:54 AM
Subject: Re: Is "synchronized" required?
 
Dear all,

After "synchronized" is removed from the method of writing, I get the
following exceptions when reading. Before the removal, no such
exceptions.

Could you help me how to solve it?

Thanks so much!

Best wishes,
Bing

     [java] Feb 6, 2013 12:21:31 AM
org.apache.hadoop.hbase.ipc.HBaseClient$Connection run
     [java] WARNING: Unexpected exception receiving call responses
     [java] java.lang.NullPointerException
     [java]     at
org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
     [java]     at
org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
     [java] Feb 6, 2013 12:21:31 AM
org.apache.hadoop.hbase.client.ScannerCallable close
     [java] WARNING: Ignore, probably already closed
     [java] java.io.IOException: Call to greatfreeweb/127.0.1.1:60020
failed on local exception: java.io.IOException: Unexpected exception
receiving call responses
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:934)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:903)
     [java]     at
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
     [java]     at $Proxy6.close(Unknown Source)
     [java]     at
org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:112)
     [java]     at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:74)
     [java]     at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
     [java]     at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
     [java]     at
org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1167)
     [java]     at
org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1296)
     [java]     at
org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1356)
     [java]     at
com.greatfree.hbase.rank.NodeRankRetriever.LoadNodeGroupNodeRankRowKeys(NodeRankRetriever.java:348)
     [java]     at
com.greatfree.ranking.PersistNodeGroupNodeRanksThread.run(PersistNodeGroupNodeRanksThread.java:29)
     [java]     at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
     [java]     at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
     [java]     at java.lang.Thread.run(Thread.java:662)
     [java] Caused by: java.io.IOException: Unexpected exception
receiving call responses
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:509)
     [java] Caused by: java.lang.NullPointerException
     [java]     at
org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
     [java]     at
org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)


The code that causes the exceptions is as follows.

        public Set<String> LoadNodeGroupNodeRankRowKeys(String
hostNodeKey, String groupKey, int timingScale)
        {
                List<Filter> nodeGroupFilterList = new ArrayList<Filter>();

                SingleColumnValueFilter hostNodeKeyFilter = new
SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
RankStructure.NODE_GROUP_NODE_RANK_HOST_NODE_KEY_COLUMN,
CompareFilter.CompareOp.EQUAL, new SubstringComparator(hostNodeKey));
                hostNodeKeyFilter.setFilterIfMissing(true);
                nodeGroupFilterList.add(hostNodeKeyFilter);

                SingleColumnValueFilter groupKeyFilter = new
SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
RankStructure.NODE_GROUP_NODE_RANK_GROUP_KEY_COLUMN,
CompareFilter.CompareOp.EQUAL, new SubstringComparator(groupKey));
                groupKeyFilter.setFilterIfMissing(true);
                nodeGroupFilterList.add(groupKeyFilter);

                SingleColumnValueFilter timingScaleFilter = new
SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
RankStructure.NODE_GROUP_NODE_RANK_TIMING_SCALE_COLUMN,
CompareFilter.CompareOp.EQUAL, new
BinaryComparator(Bytes.toBytes(timingScale)));
                timingScaleFilter.setFilterIfMissing(true);
                nodeGroupFilterList.add(timingScaleFilter);

                FilterList nodeGroupFilter = new
FilterList(nodeGroupFilterList);
                Scan scan = new Scan();
                scan.setFilter(nodeGroupFilter);
                scan.setCaching(Parameters.CACHING_SIZE);
                scan.setBatch(Parameters.BATCHING_SIZE);

                Set<String> rowKeySet = Sets.newHashSet();
                try
                {
                        ResultScanner scanner = this.rankTable.getScanner(scan);
                        for (Result result : scanner) // <---- EXCEPTIONS are raised at this line.
                        {
                                for (KeyValue kv : result.raw())
                                {
                                        rowKeySet.add(Bytes.toString(kv.getRow()));
                                        break;
                                }
                        }
                        scanner.close();
                }
                catch (IOException e)
                {
                        e.printStackTrace();
                }
                return rowKeySet;
        }


On Tue, Feb 5, 2013 at 4:20 AM, Bing Li <lb...@gmail.com> wrote:
> Dear all,
>
> When writing data into HBase, sometimes I got exceptions. I guess they
> might be caused by concurrent writings. But I am not sure.
>
> My question is whether it is necessary to put "synchronized" before
> the writing methods? The following lines are the sample code.
>
> I think the directive, synchronized, must lower the performance of
> writing. Sometimes concurrent writing is needed in my system.
>
> Thanks so much!
>
> Best wishes,
> Bing
>
> public synchronized void AddDomainNodeRanks(String domainKey, int
> timingScale, Map<String, Double> nodeRankMap)
> {
>       List<Put> puts = new ArrayList<Put>();
>       Put domainKeyPut;
>       Put timingScalePut;
>       Put nodeKeyPut;
>       Put rankPut;
>
>       byte[] domainNodeRankRowKey;
>
>       for (Map.Entry<String, Double> nodeRankEntry : nodeRankMap.entrySet())
>       {
>           domainNodeRankRowKey =
> Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
> Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));
>
>          domainKeyPut = new Put(domainNodeRankRowKey);
>          domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
> Bytes.toBytes(domainKey));
>          puts.add(domainKeyPut);
>
>          timingScalePut = new Put(domainNodeRankRowKey);
>          timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
> Bytes.toBytes(timingScale));
>         puts.add(timingScalePut);
>
>         nodeKeyPut = new Put(domainNodeRankRowKey);
>         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
> Bytes.toBytes(nodeRankEntry.getKey()));
>         puts.add(nodeKeyPut);
>
>         rankPut = new Put(domainNodeRankRowKey);
>         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
> Bytes.toBytes(nodeRankEntry.getValue()));
>         puts.add(rankPut);
>      }
>
>      try
>      {
>          this.rankTable.put(puts);
>      }
>      catch (IOException e)
>      {
>          e.printStackTrace();
>      }
> }

Re: Is "synchronized" required?

Posted by Bing Li <lb...@gmail.com>.
Dear all,

After "synchronized" is removed from the method of writing, I get the
following exceptions when reading. Before the removal, no such
exceptions.

Could you help me how to solve it?

Thanks so much!

Best wishes,
Bing

     [java] Feb 6, 2013 12:21:31 AM
org.apache.hadoop.hbase.ipc.HBaseClient$Connection run
     [java] WARNING: Unexpected exception receiving call responses
     [java] java.lang.NullPointerException
     [java]     at
org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
     [java]     at
org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)
     [java] Feb 6, 2013 12:21:31 AM
org.apache.hadoop.hbase.client.ScannerCallable close
     [java] WARNING: Ignore, probably already closed
     [java] java.io.IOException: Call to greatfreeweb/127.0.1.1:60020
failed on local exception: java.io.IOException: Unexpected exception
receiving call responses
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:934)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:903)
     [java]     at
org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
     [java]     at $Proxy6.close(Unknown Source)
     [java]     at
org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:112)
     [java]     at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:74)
     [java]     at
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:39)
     [java]     at
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getRegionServerWithRetries(HConnectionManager.java:1325)
     [java]     at
org.apache.hadoop.hbase.client.HTable$ClientScanner.nextScanner(HTable.java:1167)
     [java]     at
org.apache.hadoop.hbase.client.HTable$ClientScanner.next(HTable.java:1296)
     [java]     at
org.apache.hadoop.hbase.client.HTable$ClientScanner$1.hasNext(HTable.java:1356)
     [java]     at
com.greatfree.hbase.rank.NodeRankRetriever.LoadNodeGroupNodeRankRowKeys(NodeRankRetriever.java:348)
     [java]     at
com.greatfree.ranking.PersistNodeGroupNodeRanksThread.run(PersistNodeGroupNodeRanksThread.java:29)
     [java]     at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
     [java]     at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
     [java]     at java.lang.Thread.run(Thread.java:662)
     [java] Caused by: java.io.IOException: Unexpected exception
receiving call responses
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:509)
     [java] Caused by: java.lang.NullPointerException
     [java]     at
org.apache.hadoop.hbase.io.HbaseObjectWritable.readObject(HbaseObjectWritable.java:521)
     [java]     at
org.apache.hadoop.hbase.io.HbaseObjectWritable.readFields(HbaseObjectWritable.java:297)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.receiveResponse(HBaseClient.java:593)
     [java]     at
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.run(HBaseClient.java:505)


The code that causes the exceptions is as follows.

        public Set<String> LoadNodeGroupNodeRankRowKeys(String
hostNodeKey, String groupKey, int timingScale)
        {
                List<Filter> nodeGroupFilterList = new ArrayList<Filter>();

                SingleColumnValueFilter hostNodeKeyFilter = new
SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
RankStructure.NODE_GROUP_NODE_RANK_HOST_NODE_KEY_COLUMN,
CompareFilter.CompareOp.EQUAL, new SubstringComparator(hostNodeKey));
                hostNodeKeyFilter.setFilterIfMissing(true);
                nodeGroupFilterList.add(hostNodeKeyFilter);

                SingleColumnValueFilter groupKeyFilter = new
SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
RankStructure.NODE_GROUP_NODE_RANK_GROUP_KEY_COLUMN,
CompareFilter.CompareOp.EQUAL, new SubstringComparator(groupKey));
                groupKeyFilter.setFilterIfMissing(true);
                nodeGroupFilterList.add(groupKeyFilter);

                SingleColumnValueFilter timingScaleFilter = new
SingleColumnValueFilter(RankStructure.NODE_GROUP_NODE_RANK_FAMILY,
RankStructure.NODE_GROUP_NODE_RANK_TIMING_SCALE_COLUMN,
CompareFilter.CompareOp.EQUAL, new
BinaryComparator(Bytes.toBytes(timingScale)));
                timingScaleFilter.setFilterIfMissing(true);
                nodeGroupFilterList.add(timingScaleFilter);

                FilterList nodeGroupFilter = new
FilterList(nodeGroupFilterList);
                Scan scan = new Scan();
                scan.setFilter(nodeGroupFilter);
                scan.setCaching(Parameters.CACHING_SIZE);
                scan.setBatch(Parameters.BATCHING_SIZE);

                Set<String> rowKeySet = Sets.newHashSet();
                try
                {
                        ResultScanner scanner = this.rankTable.getScanner(scan);
                        for (Result result : scanner) // <---- EXCEPTIONS are raised at this line.
                        {
                                for (KeyValue kv : result.raw())
                                {
                                        rowKeySet.add(Bytes.toString(kv.getRow()));
                                        break;
                                }
                        }
                        scanner.close();
                }
                catch (IOException e)
                {
                        e.printStackTrace();
                }
                return rowKeySet;
        }


On Tue, Feb 5, 2013 at 4:20 AM, Bing Li <lb...@gmail.com> wrote:
> Dear all,
>
> When writing data into HBase, sometimes I got exceptions. I guess they
> might be caused by concurrent writings. But I am not sure.
>
> My question is whether it is necessary to put "synchronized" before
> the writing methods? The following lines are the sample code.
>
> I think the directive, synchronized, must lower the performance of
> writing. Sometimes concurrent writing is needed in my system.
>
> Thanks so much!
>
> Best wishes,
> Bing
>
> public synchronized void AddDomainNodeRanks(String domainKey, int
> timingScale, Map<String, Double> nodeRankMap)
> {
>       List<Put> puts = new ArrayList<Put>();
>       Put domainKeyPut;
>       Put timingScalePut;
>       Put nodeKeyPut;
>       Put rankPut;
>
>       byte[] domainNodeRankRowKey;
>
>       for (Map.Entry<String, Double> nodeRankEntry : nodeRankMap.entrySet())
>       {
>           domainNodeRankRowKey =
> Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
> Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));
>
>          domainKeyPut = new Put(domainNodeRankRowKey);
>          domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
> Bytes.toBytes(domainKey));
>          puts.add(domainKeyPut);
>
>          timingScalePut = new Put(domainNodeRankRowKey);
>          timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
> Bytes.toBytes(timingScale));
>         puts.add(timingScalePut);
>
>         nodeKeyPut = new Put(domainNodeRankRowKey);
>         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
> Bytes.toBytes(nodeRankEntry.getKey()));
>         puts.add(nodeKeyPut);
>
>         rankPut = new Put(domainNodeRankRowKey);
>         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
> Bytes.toBytes(nodeRankEntry.getValue()));
>         puts.add(rankPut);
>      }
>
>      try
>      {
>          this.rankTable.put(puts);
>      }
>      catch (IOException e)
>      {
>          e.printStackTrace();
>      }
> }

Re: Is "synchronized" required?

Posted by Nicolas Liochon <nk...@gmail.com>.
You don't have to use synchronized; just create multiple instances of
HTable and you will be fine.
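
A minimal sketch of that per-thread pattern (the table, family, and row
names are illustrative):

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PerThreadWriter implements Runnable
    {
        private final Configuration conf = HBaseConfiguration.create();

        public void run()
        {
            HTable table = null;
            try
            {
                // Each thread creates and owns its HTable; nothing shared.
                table = new HTable(this.conf, "rank");
                Put put = new Put(Bytes.toBytes("row-1"));
                put.add(Bytes.toBytes("family"), Bytes.toBytes("qualifier"),
                        Bytes.toBytes("value"));
                table.put(put);
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
            finally
            {
                if (table != null)
                {
                    try { table.close(); }
                    catch (IOException e) { e.printStackTrace(); }
                }
            }
        }
    }
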
On Feb 4, 2013 at 23:40, "Bing Li" <lb...@gmail.com> wrote:

> Dear Nicolas,
>
> If using synchronized is required, the performance must be too low,
> right? Are there any other ways to minimize the synchronization
> granularity?
>
> Thanks so much!
> Bing
>
> On Tue, Feb 5, 2013 at 5:31 AM, Nicolas Liochon <nk...@gmail.com> wrote:
> > Yes, HTable is not thread safe, and using synchronized around it could
> > work, but would be implementation dependent.
> > You can have one HTable per request at a reasonable cost since
> > https://issues.apache.org/jira/browse/HBASE-4805. It seems to be
> > available in 0.92 as well.
> >
> > Cheers,
> >
> > Nicolas
> >
> >
> > On Mon, Feb 4, 2013 at 10:13 PM, Adrien Mogenet <
> adrien.mogenet@gmail.com>wrote:
> >
> >> Beware, HTablePool is not totally thread-safe either:
> >> https://issues.apache.org/jira/browse/HBASE-6651.
> >>
> >>
> >> On Mon, Feb 4, 2013 at 9:42 PM, Haijia Zhou <le...@gmail.com> wrote:
> >>
> >> > Hi, Bing,
> >> >
> >> >  Not sure about your scenario, but the HTable class is not thread
> >> > safe for either reads or writes.
> >> >  If you are writing to or reading from a table from multiple threads,
> >> > you can consider using HTablePool.
> >> >
> >> >  Hope it helps
> >> >
> >> > HJ
> >> >
> >> >
> >> > On Mon, Feb 4, 2013 at 3:32 PM, Bing Li <lb...@gmail.com> wrote:
> >> >
> >> > > Dear Ted and Harsh,
> >> > >
> >> > > I am sorry I didn't keep the exceptions. They occurred many days ago.
> >> > > My current version is 0.92.
> >> > >
> >> > > Now "synchronized" is removed. Is it correct?
> >> > >
> >> > > I will test if such exceptions are raised. I will let you know.
> >> > >
> >> > > Thanks!
> >> > >
> >> > > Best wishes,
> >> > > Bing
> >> > >
> >> > >
> >> > > On Tue, Feb 5, 2013 at 4:25 AM, Ted Yu <yu...@gmail.com> wrote:
> >> > > > Bing:
> >> > > > Use pastebin.com instead of attaching exception report.
> >> > > >
> >> > > > What version of HBase are you using ?
> >> > > >
> >> > > > Thanks
> >> > > >
> >> > > >
> >> > > > On Mon, Feb 4, 2013 at 12:21 PM, Harsh J <ha...@cloudera.com>
> wrote:
> >> > > >>
> >> > > >> What exceptions do you actually receive - can you send them here?
> >> > > >> Knowing that is key to addressing your issue.
> >> > > >>
> >> > > >> On Tue, Feb 5, 2013 at 1:50 AM, Bing Li <lb...@gmail.com>
> wrote:
> >> > > >> > Dear all,
> >> > > >> >
> >> > > >> > When writing data into HBase, sometimes I got exceptions. I
> guess
> >> > they
> >> > > >> > might be caused by concurrent writings. But I am not sure.
> >> > > >> >
> >> > > >> > My question is whether it is necessary to put "synchronized"
> >> before
> >> > > >> > the writing methods? The following lines are the sample code.
> >> > > >> >
> >> > > >> > I think the directive, synchronized, must lower the
> performance of
> >> > > >> > writing. Sometimes concurrent writing is needed in my system.
> >> > > >> >
> >> > > >> > Thanks so much!
> >> > > >> >
> >> > > >> > Best wishes,
> >> > > >> > Bing
> >> > > >> >
> >> > > >> > public synchronized void AddDomainNodeRanks(String domainKey,
> int
> >> > > >> > timingScale, Map<String, Double> nodeRankMap)
> >> > > >> > {
> >> > > >> >       List<Put> puts = new ArrayList<Put>();
> >> > > >> >       Put domainKeyPut;
> >> > > >> >       Put timingScalePut;
> >> > > >> >       Put nodeKeyPut;
> >> > > >> >       Put rankPut;
> >> > > >> >
> >> > > >> >       byte[] domainNodeRankRowKey;
> >> > > >> >
> >> > > >> >       for (Map.Entry<String, Double> nodeRankEntry :
> >> > > >> > nodeRankMap.entrySet())
> >> > > >> >       {
> >> > > >> >           domainNodeRankRowKey =
> >> > > >> > Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
> >> > > >> > Tools.GetAHash(domainKey + timingScale +
> nodeRankEntry.getKey()));
> >> > > >> >
> >> > > >> >          domainKeyPut = new Put(domainNodeRankRowKey);
> >> > > >> >
>  domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> >> > > >> > RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
> >> > > >> > Bytes.toBytes(domainKey));
> >> > > >> >          puts.add(domainKeyPut);
> >> > > >> >
> >> > > >> >          timingScalePut = new Put(domainNodeRankRowKey);
> >> > > >> >
>  timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> >> > > >> > RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
> >> > > >> > Bytes.toBytes(timingScale));
> >> > > >> >         puts.add(timingScalePut);
> >> > > >> >
> >> > > >> >         nodeKeyPut = new Put(domainNodeRankRowKey);
> >> > > >> >         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> >> > > >> > RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
> >> > > >> > Bytes.toBytes(nodeRankEntry.getKey()));
> >> > > >> >         puts.add(nodeKeyPut);
> >> > > >> >
> >> > > >> >         rankPut = new Put(domainNodeRankRowKey);
> >> > > >> >         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> >> > > >> > RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
> >> > > >> > Bytes.toBytes(nodeRankEntry.getValue()));
> >> > > >> >         puts.add(rankPut);
> >> > > >> >      }
> >> > > >> >
> >> > > >> >      try
> >> > > >> >      {
> >> > > >> >          this.rankTable.put(puts);
> >> > > >> >      }
> >> > > >> >      catch (IOException e)
> >> > > >> >      {
> >> > > >> >          e.printStackTrace();
> >> > > >> >      }
> >> > > >> > }
> >> > > >>
> >> > > >>
> >> > > >>
> >> > > >> --
> >> > > >> Harsh J
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> >>
> >>
> >> --
> >> Adrien Mogenet
> >> 06.59.16.64.22
> >> http://www.mogenet.me
> >>
>

Re: Is "synchronized" required?

Posted by Bing Li <lb...@gmail.com>.
Dear Nicolas,

If using synchronized is required, the performance must be too low,
right? Are there any other ways to minimize the synchronization
granularity?

Thanks so much!
Bing

On Tue, Feb 5, 2013 at 5:31 AM, Nicolas Liochon <nk...@gmail.com> wrote:
> Yes, HTable is not thread safe, and using synchronized around it could
> work, but would be implementation dependent.
> You can have one HTable per request at a reasonable cost since
> https://issues.apache.org/jira/browse/HBASE-4805. It seems to be
> available in 0.92 as well.
>
> Cheers,
>
> Nicolas
>
>
> On Mon, Feb 4, 2013 at 10:13 PM, Adrien Mogenet <ad...@gmail.com>wrote:
>
>> Beware, HTablePool is not totally thread-safe either:
>> https://issues.apache.org/jira/browse/HBASE-6651.
>>
>>
>> On Mon, Feb 4, 2013 at 9:42 PM, Haijia Zhou <le...@gmail.com> wrote:
>>
>> > Hi, Bing,
>> >
>> >  Not sure about your scenario, but the HTable class is not thread
>> > safe for either reads or writes.
>> >  If you are writing to or reading from a table from multiple threads,
>> > you can consider using HTablePool.
>> >
>> >  Hope it helps
>> >
>> > HJ
>> >
>> >
>> > On Mon, Feb 4, 2013 at 3:32 PM, Bing Li <lb...@gmail.com> wrote:
>> >
>> > > Dear Ted and Harsh,
>> > >
>> > > I am sorry I didn't keep the exceptions. They occurred many days ago. My
>> > > current version is 0.92.
>> > >
>> > > Now "synchronized" is removed. Is it correct?
>> > >
>> > > I will test if such exceptions are raised. I will let you know.
>> > >
>> > > Thanks!
>> > >
>> > > Best wishes,
>> > > Bing
>> > >
>> > >
>> > > On Tue, Feb 5, 2013 at 4:25 AM, Ted Yu <yu...@gmail.com> wrote:
>> > > > Bing:
>> > > > Use pastebin.com instead of attaching exception report.
>> > > >
>> > > > What version of HBase are you using ?
>> > > >
>> > > > Thanks
>> > > >
>> > > >
>> > > > On Mon, Feb 4, 2013 at 12:21 PM, Harsh J <ha...@cloudera.com> wrote:
>> > > >>
>> > > >> What exceptions do you actually receive - can you send them here?
>> > > >> Knowing that is key to addressing your issue.
>> > > >>
>> > > >> On Tue, Feb 5, 2013 at 1:50 AM, Bing Li <lb...@gmail.com> wrote:
>> > > >> > Dear all,
>> > > >> >
>> > > >> > When writing data into HBase, sometimes I got exceptions. I guess
>> > they
>> > > >> > might be caused by concurrent writings. But I am not sure.
>> > > >> >
>> > > >> > My question is whether it is necessary to put "synchronized"
>> before
>> > > >> > the writing methods? The following lines are the sample code.
>> > > >> >
>> > > >> > I think the directive, synchronized, must lower the performance of
>> > > >> > writing. Sometimes concurrent writing is needed in my system.
>> > > >> >
>> > > >> > Thanks so much!
>> > > >> >
>> > > >> > Best wishes,
>> > > >> > Bing
>> > > >> >
>> > > >> > public synchronized void AddDomainNodeRanks(String domainKey, int
>> > > >> > timingScale, Map<String, Double> nodeRankMap)
>> > > >> > {
>> > > >> >       List<Put> puts = new ArrayList<Put>();
>> > > >> >       Put domainKeyPut;
>> > > >> >       Put timingScalePut;
>> > > >> >       Put nodeKeyPut;
>> > > >> >       Put rankPut;
>> > > >> >
>> > > >> >       byte[] domainNodeRankRowKey;
>> > > >> >
>> > > >> >       for (Map.Entry<String, Double> nodeRankEntry :
>> > > >> > nodeRankMap.entrySet())
>> > > >> >       {
>> > > >> >           domainNodeRankRowKey =
>> > > >> > Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
>> > > >> > Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));
>> > > >> >
>> > > >> >          domainKeyPut = new Put(domainNodeRankRowKey);
>> > > >> >          domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> > > >> > RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
>> > > >> > Bytes.toBytes(domainKey));
>> > > >> >          puts.add(domainKeyPut);
>> > > >> >
>> > > >> >          timingScalePut = new Put(domainNodeRankRowKey);
>> > > >> >          timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> > > >> > RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
>> > > >> > Bytes.toBytes(timingScale));
>> > > >> >         puts.add(timingScalePut);
>> > > >> >
>> > > >> >         nodeKeyPut = new Put(domainNodeRankRowKey);
>> > > >> >         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> > > >> > RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
>> > > >> > Bytes.toBytes(nodeRankEntry.getKey()));
>> > > >> >         puts.add(nodeKeyPut);
>> > > >> >
>> > > >> >         rankPut = new Put(domainNodeRankRowKey);
>> > > >> >         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>> > > >> > RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
>> > > >> > Bytes.toBytes(nodeRankEntry.getValue()));
>> > > >> >         puts.add(rankPut);
>> > > >> >      }
>> > > >> >
>> > > >> >      try
>> > > >> >      {
>> > > >> >          this.rankTable.put(puts);
>> > > >> >      }
>> > > >> >      catch (IOException e)
>> > > >> >      {
>> > > >> >          e.printStackTrace();
>> > > >> >      }
>> > > >> > }
>> > > >>
>> > > >>
>> > > >>
>> > > >> --
>> > > >> Harsh J
>> > > >
>> > > >
>> > >
>> >
>>
>>
>>
>> --
>> Adrien Mogenet
>> 06.59.16.64.22
>> http://www.mogenet.me
>>

Re: Is "synchronized" required?

Posted by Nicolas Liochon <nk...@gmail.com>.
Yes, HTable is not thread safe, and using synchronized around it could
work, but would be implementation dependent.
You can have one HTable per request at a reasonable cost since
https://issues.apache.org/jira/browse/HBASE-4805. It seems to be
available in 0.92 as well.
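
A minimal sketch of that per-request pattern, assuming an HConnection and
ExecutorService created once and shared; the table name, row key, and pool
size are illustrative:

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PerRequestTable
    {
        // Created once at startup and shared by all threads.
        private static final Configuration CONF = HBaseConfiguration.create();
        private static final ExecutorService POOL =
                Executors.newFixedThreadPool(20);
        private static HConnection connection;

        public static void handleRequest() throws IOException
        {
            // Cheap per-request HTable on top of the shared connection.
            HTableInterface table =
                    new HTable(Bytes.toBytes("rank"), connection, POOL);
            try
            {
                Result result = table.get(new Get(Bytes.toBytes("some-row-key")));
                System.out.println(result.isEmpty() ? "miss" : "hit");
            }
            finally
            {
                table.close(); // does not close the shared connection/pool
            }
        }

        public static void main(String[] args) throws IOException
        {
            connection = HConnectionManager.createConnection(CONF);
            handleRequest();
        }
    }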

Cheers,

Nicolas


On Mon, Feb 4, 2013 at 10:13 PM, Adrien Mogenet <ad...@gmail.com>wrote:

> Beware, HTablePool is not totally thread-safe either:
> https://issues.apache.org/jira/browse/HBASE-6651.
>
>
> On Mon, Feb 4, 2013 at 9:42 PM, Haijia Zhou <le...@gmail.com> wrote:
>
> > Hi, Bing,
> >
> >  Not sure about your scenario, but the HTable class is not thread
> > safe for either reads or writes.
> >  If you are writing to or reading from a table from multiple threads,
> > you can consider using HTablePool.
> >
> >  Hope it helps
> >
> > HJ
> >
> >
> > On Mon, Feb 4, 2013 at 3:32 PM, Bing Li <lb...@gmail.com> wrote:
> >
> > > Dear Ted and Harsh,
> > >
> > > I am sorry I didn't keep the exceptions. They occurred many days ago. My
> > > current version is 0.92.
> > >
> > > Now "synchronized" is removed. Is it correct?
> > >
> > > I will test if such exceptions are raised. I will let you know.
> > >
> > > Thanks!
> > >
> > > Best wishes,
> > > Bing
> > >
> > >
> > > On Tue, Feb 5, 2013 at 4:25 AM, Ted Yu <yu...@gmail.com> wrote:
> > > > Bing:
> > > > Use pastebin.com instead of attaching exception report.
> > > >
> > > > What version of HBase are you using ?
> > > >
> > > > Thanks
> > > >
> > > >
> > > > On Mon, Feb 4, 2013 at 12:21 PM, Harsh J <ha...@cloudera.com> wrote:
> > > >>
> > > >> What exceptions do you actually receive - can you send them here?
> > > >> Knowing that is key to addressing your issue.
> > > >>
> > > >> On Tue, Feb 5, 2013 at 1:50 AM, Bing Li <lb...@gmail.com> wrote:
> > > >> > Dear all,
> > > >> >
> > > >> > When writing data into HBase, sometimes I got exceptions. I guess
> > they
> > > >> > might be caused by concurrent writings. But I am not sure.
> > > >> >
> > > >> > My question is whether it is necessary to put "synchronized"
> before
> > > >> > the writing methods? The following lines are the sample code.
> > > >> >
> > > >> > I think the directive, synchronized, must lower the performance of
> > > >> > writing. Sometimes concurrent writing is needed in my system.
> > > >> >
> > > >> > Thanks so much!
> > > >> >
> > > >> > Best wishes,
> > > >> > Bing
> > > >> >
> > > >> > public synchronized void AddDomainNodeRanks(String domainKey, int
> > > >> > timingScale, Map<String, Double> nodeRankMap)
> > > >> > {
> > > >> >       List<Put> puts = new ArrayList<Put>();
> > > >> >       Put domainKeyPut;
> > > >> >       Put timingScalePut;
> > > >> >       Put nodeKeyPut;
> > > >> >       Put rankPut;
> > > >> >
> > > >> >       byte[] domainNodeRankRowKey;
> > > >> >
> > > >> >       for (Map.Entry<String, Double> nodeRankEntry :
> > > >> > nodeRankMap.entrySet())
> > > >> >       {
> > > >> >           domainNodeRankRowKey =
> > > >> > Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
> > > >> > Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));
> > > >> >
> > > >> >          domainKeyPut = new Put(domainNodeRankRowKey);
> > > >> >          domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> > > >> > RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
> > > >> > Bytes.toBytes(domainKey));
> > > >> >          puts.add(domainKeyPut);
> > > >> >
> > > >> >          timingScalePut = new Put(domainNodeRankRowKey);
> > > >> >          timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> > > >> > RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
> > > >> > Bytes.toBytes(timingScale));
> > > >> >         puts.add(timingScalePut);
> > > >> >
> > > >> >         nodeKeyPut = new Put(domainNodeRankRowKey);
> > > >> >         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> > > >> > RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
> > > >> > Bytes.toBytes(nodeRankEntry.getKey()));
> > > >> >         puts.add(nodeKeyPut);
> > > >> >
> > > >> >         rankPut = new Put(domainNodeRankRowKey);
> > > >> >         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
> > > >> > RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
> > > >> > Bytes.toBytes(nodeRankEntry.getValue()));
> > > >> >         puts.add(rankPut);
> > > >> >      }
> > > >> >
> > > >> >      try
> > > >> >      {
> > > >> >          this.rankTable.put(puts);
> > > >> >      }
> > > >> >      catch (IOException e)
> > > >> >      {
> > > >> >          e.printStackTrace();
> > > >> >      }
> > > >> > }
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> Harsh J
> > > >
> > > >
> > >
> >
>
>
>
> --
> Adrien Mogenet
> 06.59.16.64.22
> http://www.mogenet.me
>

Re: Is "synchronized" required?

Posted by Adrien Mogenet <ad...@gmail.com>.
Beware, HTablePool is not totally thread-safe either:
https://issues.apache.org/jira/browse/HBASE-6651.
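
As a hypothetical illustration of the kind of misuse the pool does not
protect against (the table name, family, and qualifier below are made up;
see the JIRA for the actual races that were found):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PoolMisuse
{
    public static void main(String[] args) throws Exception
    {
        Configuration conf = HBaseConfiguration.create();
        HTablePool pool = new HTablePool(conf, 10);

        HTableInterface table = pool.getTable("RankTable");
        Put put = new Put(Bytes.toBytes("row-1"));
        put.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        table.put(put);
        pool.putTable(table); // handed back to the pool...

        // ...but the reference is used again below. getTable() in another
        // thread may now return this very instance, so two threads can end
        // up sharing one non-thread-safe HTable.
        Put put2 = new Put(Bytes.toBytes("row-2"));
        put2.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
        table.put(put2);
    }
}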


On Mon, Feb 4, 2013 at 9:42 PM, Haijia Zhou <le...@gmail.com> wrote:

> Hi, Bing,
>
>  Not sure about your scenario, but the HTable class is not thread-safe
> for either reads or writes.
>  If you need to read from or write to a table from multiple threads,
> you can consider using HTablePool.
>
>  Hope it helps
>
> HJ



-- 
Adrien Mogenet
06.59.16.64.22
http://www.mogenet.me

Re: Is "synchronized" required?

Posted by Haijia Zhou <le...@gmail.com>.
Hi, Bing,

 Not sure about your scenario, but the HTable class is not thread-safe
for either reads or writes.
 If you need to read from or write to a table from multiple threads,
you can consider using HTablePool.
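
 For example, a minimal sketch of that usage against the 0.92-era client
API (the pool size is made up for the example):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;

public class PooledWriter
{
    // One pool for the whole application, caching up to 10 HTable instances.
    private final HTablePool pool =
        new HTablePool(HBaseConfiguration.create(), 10);

    public void write(String tableName, Put put) throws IOException
    {
        // Each thread checks out its own table, so no instance is shared.
        HTableInterface table = pool.getTable(tableName);
        try
        {
            table.put(put);
        }
        finally
        {
            pool.putTable(table); // return the instance to the pool
        }
    }
}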

 Hope it helps

HJ


On Mon, Feb 4, 2013 at 3:32 PM, Bing Li <lb...@gmail.com> wrote:

> Dear Ted and Harsh,
>
> I am sorry I didn't keep the exceptions. They occurred many days ago. My
> current version is 0.92.
>
> Now "synchronized" is removed. Is that correct?
>
> I will test whether such exceptions are still raised and let you know.
>
> Thanks!
>
> Best wishes,
> Bing

Re: Is "synchronized" required?

Posted by Bing Li <lb...@gmail.com>.
Dear Ted and Harsh,

I am sorry I didn't keep the exceptions. They occurred many days ago. My
current version is 0.92.

Now "synchronized" is removed. Is that correct?

I will test whether such exceptions are still raised and let you know.

Thanks!

Best wishes,
Bing


On Tue, Feb 5, 2013 at 4:25 AM, Ted Yu <yu...@gmail.com> wrote:
> Bing:
> Use pastebin.com instead of attaching the exception report.
>
> What version of HBase are you using?
>
> Thanks

Re: Is "synchronized" required?

Posted by Ted Yu <yu...@gmail.com>.
Bing:
Use pastebin.com instead of attaching the exception report.

What version of HBase are you using?

Thanks

On Mon, Feb 4, 2013 at 12:21 PM, Harsh J <ha...@cloudera.com> wrote:

> What exceptions do you actually receive - can you send them here?
> Knowing that is key to addressing your issue.

Re: Is "synchronized" required?

Posted by Harsh J <ha...@cloudera.com>.
What exceptions do you actually receive - can you send them here?
Knowing that is key to addressing your issue.

On Tue, Feb 5, 2013 at 1:50 AM, Bing Li <lb...@gmail.com> wrote:
> Dear all,
>
> When writing data into HBase, I sometimes got exceptions. I guess they
> might have been caused by concurrent writes, but I am not sure.
>
> My question is whether it is necessary to put "synchronized" before
> the writing methods. The following lines are the sample code.
>
> I think the synchronized keyword must lower the performance of
> writing, and my system sometimes needs concurrent writes.
>
> Thanks so much!
>
> Best wishes,
> Bing
>
> public synchronized void AddDomainNodeRanks(String domainKey, int
>     timingScale, Map<String, Double> nodeRankMap)
> {
>     List<Put> puts = new ArrayList<Put>();
>     Put domainKeyPut;
>     Put timingScalePut;
>     Put nodeKeyPut;
>     Put rankPut;
>
>     byte[] domainNodeRankRowKey;
>
>     for (Map.Entry<String, Double> nodeRankEntry : nodeRankMap.entrySet())
>     {
>         domainNodeRankRowKey =
>             Bytes.toBytes(RankStructure.DOMAIN_NODE_RANK_ROW +
>             Tools.GetAHash(domainKey + timingScale + nodeRankEntry.getKey()));
>
>         domainKeyPut = new Put(domainNodeRankRowKey);
>         domainKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>             RankStructure.DOMAIN_NODE_RANK_DOMAIN_KEY_COLUMN,
>             Bytes.toBytes(domainKey));
>         puts.add(domainKeyPut);
>
>         timingScalePut = new Put(domainNodeRankRowKey);
>         timingScalePut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>             RankStructure.DOMAIN_NODE_RANK_TIMING_SCALE_COLUMN,
>             Bytes.toBytes(timingScale));
>         puts.add(timingScalePut);
>
>         nodeKeyPut = new Put(domainNodeRankRowKey);
>         nodeKeyPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>             RankStructure.DOMAIN_NODE_RANK_NODE_KEY_COLUMN,
>             Bytes.toBytes(nodeRankEntry.getKey()));
>         puts.add(nodeKeyPut);
>
>         rankPut = new Put(domainNodeRankRowKey);
>         rankPut.add(RankStructure.DOMAIN_NODE_RANK_FAMILY,
>             RankStructure.DOMAIN_NODE_RANK_RANKS_COLUMN,
>             Bytes.toBytes(nodeRankEntry.getValue()));
>         puts.add(rankPut);
>     }
>
>     try
>     {
>         this.rankTable.put(puts);
>     }
>     catch (IOException e)
>     {
>         e.printStackTrace();
>     }
> }



-- 
Harsh J