Posted to user@hbase.apache.org by Bing Li <lb...@gmail.com> on 2012/09/20 04:02:02 UTC

Is it correct and required to keep consistency this way?

Dear all,

Sorry for sending this email multiple times! An error in the previous email
has been corrected.

I am not sure whether it is correct, or even required, to maintain consistency
as follows when writing to and reading from HBase. Your help is highly appreciated.

Best regards,
Bing

        // Writing
        public void AddOutgoingNeighbor(String hostNodeKey, String groupKey, int timingScale, String neighborKey)
        {
                List<Put> puts = new ArrayList<Put>();

                // All five puts target the same row; the row key is a hash over
                // the four arguments.
                byte[] outgoingRowKey = Bytes.toBytes(NeighborStructure.NODE_OUTGOING_NEIGHBOR_ROW
                        + Tools.GetAHash(hostNodeKey + groupKey + timingScale + neighborKey));

                Put hostNodeKeyPut = new Put(outgoingRowKey);
                hostNodeKeyPut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_HOST_NODE_KEY_COLUMN,
                        Bytes.toBytes(hostNodeKey));
                puts.add(hostNodeKeyPut);

                Put groupKeyPut = new Put(outgoingRowKey);
                groupKeyPut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_GROUP_KEY_COLUMN,
                        Bytes.toBytes(groupKey));
                puts.add(groupKeyPut);

                Put topGroupKeyPut = new Put(outgoingRowKey);
                topGroupKeyPut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_TOP_GROUP_KEY_COLUMN,
                        Bytes.toBytes(GroupRegistry.WWW().GetParentGroupKey(groupKey)));
                puts.add(topGroupKeyPut);

                Put timingScalePut = new Put(outgoingRowKey);
                timingScalePut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_TIMING_SCALE_COLUMN,
                        Bytes.toBytes(timingScale));
                puts.add(timingScalePut);

                Put neighborKeyPut = new Put(outgoingRowKey);
                neighborKeyPut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_NEIGHBOR_KEY_COLUMN,
                        Bytes.toBytes(neighborKey));
                puts.add(neighborKeyPut);

                // Locking is here; unlock in a finally block so the write lock is
                // released even when put() throws.
                this.lock.writeLock().lock();
                try
                {
                        this.neighborTable.put(puts);
                }
                catch (IOException e)
                {
                        e.printStackTrace();
                }
                finally
                {
                        this.lock.writeLock().unlock();
                }
        }

        // Reading
        public Set<String> GetOutgoingNeighborKeys(String hostNodeKey, int timingScale)
        {
                List<Filter> outgoingNeighborsList = new ArrayList<Filter>();

                // Match rows whose host-node-key column contains hostNodeKey ...
                SingleColumnValueFilter hostNodeKeyFilter = new SingleColumnValueFilter(
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_HOST_NODE_KEY_COLUMN,
                        CompareFilter.CompareOp.EQUAL, new SubstringComparator(hostNodeKey));
                hostNodeKeyFilter.setFilterIfMissing(true);
                outgoingNeighborsList.add(hostNodeKeyFilter);

                // ... and whose timing-scale column equals timingScale exactly.
                SingleColumnValueFilter timingScaleFilter = new SingleColumnValueFilter(
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
                        NeighborStructure.NODE_OUTGOING_NEIGHBOR_TIMING_SCALE_COLUMN,
                        CompareFilter.CompareOp.EQUAL,
                        new BinaryComparator(Bytes.toBytes(timingScale)));
                timingScaleFilter.setFilterIfMissing(true);
                outgoingNeighborsList.add(timingScaleFilter);

                FilterList outgoingNeighborFilter = new FilterList(outgoingNeighborsList);
                Scan scan = new Scan();
                scan.setFilter(outgoingNeighborFilter);
                scan.setCaching(Parameters.CACHING_SIZE);
                scan.setBatch(Parameters.BATCHING_SIZE);

                Set<String> neighborKeySet = Sets.newHashSet();
                // Lock is here; close the scanner and release the read lock in a
                // finally block so a failed scan cannot leave the lock held.
                this.lock.readLock().lock();
                ResultScanner scanner = null;
                try
                {
                        scanner = this.neighborTable.getScanner(scan);
                        for (Result result : scanner)
                        {
                                for (KeyValue kv : result.raw())
                                {
                                        String qualifier = Bytes.toString(kv.getQualifier());
                                        if (qualifier.equals(NeighborStructure.NODE_OUTGOING_NEIGHBOR_NEIGHBOR_KEY_STRING_COLUMN))
                                        {
                                                neighborKeySet.add(Bytes.toString(kv.getValue()));
                                        }
                                }
                        }
                }
                catch (IOException e)
                {
                        e.printStackTrace();
                }
                finally
                {
                        if (scanner != null)
                        {
                                scanner.close();
                        }
                        this.lock.readLock().unlock();
                }
                return neighborKeySet;
        }

Re: Is it correct and required to keep consistency this way?

Posted by Bing Li <lb...@gmail.com>.
Jieshan,

Thanks! HTablePool is used in my system.

Best,
Bing


RE: Is it correct and required to keep consistency this way?

Posted by Bijieshan <bi...@huawei.com>.
>If it is not safe, it means locking must be set as what is
>shown in my code, doesn't it?

You should not use one HTableInterface instance across multiple threads ("sharing one HTableInterface between threads + lock" will degrade performance). There are two options:
1. Create one HTableInterface instance in each thread.
2. Use HTablePool to get an HTableInterface. See http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTablePool.html.

Hope it helps.
Jieshan.
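
For reference, a minimal sketch of option 2 with HTablePool. This assumes a 0.92-era client API; the configuration, pool size, table name ("neighbor"), and the generic column arguments are placeholders, not the poster's actual schema:

        import java.io.IOException;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.client.HTableInterface;
        import org.apache.hadoop.hbase.client.HTablePool;
        import org.apache.hadoop.hbase.client.Put;

        public class NeighborWriter
        {
                private final HTablePool pool;

                public NeighborWriter(Configuration conf)
                {
                        // One shared pool for the whole process; each thread borrows
                        // its own HTableInterface instead of sharing a single HTable.
                        this.pool = new HTablePool(conf, 10);
                }

                public void write(byte[] rowKey, byte[] family, byte[] qualifier, byte[] value) throws IOException
                {
                        HTableInterface table = this.pool.getTable("neighbor");
                        try
                        {
                                Put put = new Put(rowKey);
                                put.add(family, qualifier, value);
                                table.put(put);
                        }
                        finally
                        {
                                // In 0.92+ clients, close() returns the table to the
                                // pool; older clients use pool.putTable(table) instead.
                                table.close();
                        }
                }
        }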

Re: Is it correct and required to keep consistency this way?

Posted by Bing Li <lb...@gmail.com>.
Sorry, I didn't keep the exceptions. I will post them if I see them again.

But after putting "synchronized" on the writing methods, the exceptions were
gone.

I am a little confused. HTable must be the interface for writing/reading data
to/from HBase. If it is not thread-safe, doesn't that mean locking must be
used, as shown in my code?

Thanks so much!
Bing
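
A sketch of what the poster describes: marking the write method synchronized serializes all writers on the enclosing object, which removes the concurrent use of the shared HTable (and with it the exceptions), at the cost of write throughput, as noted above:

        public synchronized void AddOutgoingNeighbor(String hostNodeKey,
                String groupKey, int timingScale, String neighborKey)
        {
                // ... same body as in the original post ...
        }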


RE: Is it correct and required to keep consistency this way?

Posted by Bijieshan <bi...@huawei.com>.
Yes. It should be safe. What you need to pay attention to is that HTable is not thread-safe. What are the exceptions?

Jieshan
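
For reference, a minimal sketch of option 1 above (one HTable instance per thread) using a ThreadLocal. This assumes the 0.90-era HTable(Configuration, String) constructor; the table name "neighbor" is a placeholder:

        import java.io.IOException;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.hbase.client.HTable;

        public class PerThreadTable
        {
                private final Configuration conf;

                // Each thread lazily creates and reuses its own HTable,
                // so no instance is ever shared between threads.
                private final ThreadLocal<HTable> neighborTable = new ThreadLocal<HTable>()
                {
                        @Override
                        protected HTable initialValue()
                        {
                                try
                                {
                                        return new HTable(conf, "neighbor");
                                }
                                catch (IOException e)
                                {
                                        throw new RuntimeException(e);
                                }
                        }
                };

                public PerThreadTable(Configuration conf)
                {
                        this.conf = conf;
                }

                public HTable table()
                {
                        return this.neighborTable.get();
                }
        }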

Re: Is it correct and required to keep consistency this way?

Posted by Bing Li <lb...@gmail.com>.
Dear Jieshan,

Thanks so much for your reply!

Right now, locking is not set on the reading methods in my system, and that
seems to be fine.

But I noticed exceptions when no locking was put on the writing method. If
multiple threads are writing to HBase concurrently, do you think it is safe
without locking?

Best regards,
Bing


RE: Is it correct and required to keep consistency this way?

Posted by Bijieshan <bi...@huawei.com>.
If I read your mail correctly, you want to prevent reads and writes from running in parallel at the application level. You can use a ReentrantReadWriteLock for that, but it is not recommended.
HBase has its own mechanism (MVCC) to manage read/write consistency. When a scan starts, the latest data that has not yet been committed through MVCC may not be visible (depending on the configuration).

Jieshan
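
For completeness, the usual shape of the ReentrantReadWriteLock pattern described here; a generic sketch, not tied to the poster's classes. Note that the lock must be released in a finally block, or an exception such as an IOException from put() or getScanner() would leave it held forever:

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        public class GuardedAccess
        {
                private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

                public void guardedWrite(Runnable write)
                {
                        this.lock.writeLock().lock();
                        try
                        {
                                write.run();
                        }
                        finally
                        {
                                // Release in finally so an exception cannot
                                // leave the lock held.
                                this.lock.writeLock().unlock();
                        }
                }

                public void guardedRead(Runnable read)
                {
                        this.lock.readLock().lock();
                        try
                        {
                                read.run();
                        }
                        finally
                        {
                                this.lock.readLock().unlock();
                        }
                }
        }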