Posted to dev@avalon.apache.org by Berin Loritsch <bl...@apache.org> on 2002/02/15 18:53:13 UTC

We desperately need an efficient threadsafe hashmap

I remember reading an article on how to implement an efficient ThreadSafe hashmap.
The problem is that I can't remember where.  The concept is to reduce the locking
contention to a "bucket".  The implementation they had was fairly simple, and
went something along these lines:

class LockedBucketMap
{
    // One lock object per bucket: threads only contend when they hash
    // to the same bucket, instead of serializing on a single monitor.
    final Object[] locks;
    final Node[] buckets;

    LockedBucketMap()
    {
        this(256);
    }

    LockedBucketMap( int numBuckets )
    {
        locks = new Object[numBuckets];
        buckets = new Node[numBuckets];

        for (int i = 0; i < locks.length; i++)
        {
            locks[i] = new Object();
        }
    }

    void put( Object key, Object value )
    {
        int hash = getHash(key);

        synchronized( locks[hash] )
        {
            Node node = buckets[hash];

            if (node == null)
            {
                buckets[hash] = new Node( key, value );
                return;
            }

            // Walk the chain: overwrite the value if the key already
            // exists, otherwise append a new node at the end.
            while (true)
            {
                if (node.key.equals(key))
                {
                    node.value = value;
                    return;
                }
                if (node.next == null)
                {
                    node.next = new Node( key, value );
                    return;
                }
                node = node.next;
            }
        }
    }

    Object get( Object key )
    {
        int hash = getHash(key);

        synchronized( locks[hash] )
        {
            Node node = buckets[hash];

            while (node != null)
            {
                if (node.key.equals(key)) return node.value;
                node = node.next;
            }
        }
        return null;
    }

    int getHash( Object key )
    {
        // Mask off the sign bit so a negative hashCode() can never
        // produce a negative bucket index.
        return (key.hashCode() & 0x7FFFFFFF) % buckets.length;
    }

    static class Node
    {
        final Object key;
        Object value;
        Node next;

        Node( Object key, Object value )
        {
            this.key = key;
            this.value = value;
        }
    }
}

I don't have time to test it, debug it, or optimize it (for instance, what
about using an ArrayList per bucket, holding Nodes that are matched against
the key object for lookup?); a rough sketch of that idea follows below.
Could someone take the ball and run with it?  We desperately need to reduce
thread contention on the ECM and ContainerManager implementations.
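
For illustration only, here is a rough, untested sketch of that
ArrayList-per-bucket variant.  The ListBucketMap name and everything in it
are invented for this example (nothing by that name exists in CVS); it keeps
the same one-lock-per-bucket idea but stores each bucket's entries in an
ArrayList instead of a hand-rolled linked chain:

import java.util.ArrayList;
import java.util.List;

class ListBucketMap
{
    // One lock object and one ArrayList per bucket; threads only
    // contend when they hash to the same bucket.
    final Object[] locks;
    final List[] buckets;

    ListBucketMap( int numBuckets )
    {
        locks = new Object[numBuckets];
        buckets = new List[numBuckets];

        for (int i = 0; i < numBuckets; i++)
        {
            locks[i] = new Object();
            buckets[i] = new ArrayList();
        }
    }

    void put( Object key, Object value )
    {
        int hash = getHash(key);

        synchronized( locks[hash] )
        {
            List bucket = buckets[hash];

            // Overwrite the value if the key is already present.
            for (int i = 0; i < bucket.size(); i++)
            {
                Node node = (Node) bucket.get(i);
                if (node.key.equals(key))
                {
                    node.value = value;
                    return;
                }
            }
            bucket.add( new Node( key, value ) );
        }
    }

    Object get( Object key )
    {
        int hash = getHash(key);

        synchronized( locks[hash] )
        {
            List bucket = buckets[hash];

            for (int i = 0; i < bucket.size(); i++)
            {
                Node node = (Node) bucket.get(i);
                if (node.key.equals(key)) return node.value;
            }
        }
        return null;
    }

    int getHash( Object key )
    {
        return (key.hashCode() & 0x7FFFFFFF) % buckets.length;
    }

    static class Node
    {
        final Object key;
        Object value;

        Node( Object key, Object value )
        {
            this.key = key;
            this.value = value;
        }
    }
}

Whether the ArrayList scan actually beats the simple linked chain would have
to be measured; for the short chains a decent hash spread produces, the
difference is probably in the noise.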

-- 

"They that give up essential liberty to obtain a little temporary safety
  deserve neither liberty nor safety."
                 - Benjamin Franklin




Re: We desperately need an efficient threadsafe hashmap

Posted by Peter Donald <pe...@apache.org>.
On Sat, 16 Feb 2002 08:49, Colin Sampaleanu wrote:
> I haven't looked recently at how much threading related code Avalon has
> in it, but I would really suggest staying away from reinventing the
> wheel. As opposed to a lot of other types of code, writing good,
> efficient, safe threading code is relatively hard to do on the scale of
> things. Why not use something like Doug Lea's util.concurrent package?

Some of us do :)

> http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html
> The stuff is fantastic. It's been around for years, does a ton of
> stuff, is well tested and efficient, and is completely free. He even has a
> book that gets into most of the concept of what the code does.

It will also be mutated to be part of a future JDK ;) (probably jdk1.5)

-- 
Cheers,

Pete

Doubt is not a pleasant condition, but certainty is absurd.
                -- Voltaire



Re: We desperately need an efficient threadsafe hashmap

Posted by Colin Sampaleanu <co...@exis.com>.
I haven't looked recently at how much threading-related code Avalon has 
in it, but I would really suggest staying away from reinventing the 
wheel.  Compared to most other kinds of code, good, efficient, safe 
threading code is hard to get right.  Why not use something like Doug 
Lea's util.concurrent package?
  
http://gee.cs.oswego.edu/dl/classes/EDU/oswego/cs/dl/util/concurrent/intro.html
The stuff is fantastic. It's been around for years, does a ton of stuff, 
is well tested and efficient, and is completely free. He even has a book 
that gets into most of the concepts behind what the code does.
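
For the curious, here is a minimal sketch of what dropping util.concurrent
in might look like, assuming the ConcurrentHashMap class from the
EDU.oswego.cs.dl.util.concurrent package; the ComponentCache class and its
register()/lookup() methods are made-up names for the example, not anything
that exists in Avalon:

import java.util.Map;

import EDU.oswego.cs.dl.util.concurrent.ConcurrentHashMap;

class ComponentCache
{
    // util.concurrent's ConcurrentHashMap implements java.util.Map, so it
    // can stand in for a Hashtable or a synchronizedMap wrapper without
    // changing the calling code.
    private final Map components = new ConcurrentHashMap();

    void register( String role, Object handler )
    {
        components.put( role, handler );
    }

    Object lookup( String role )
    {
        return components.get( role );
    }
}

Internally it uses a partitioned-lock scheme much like the bucket idea
above, just already written and tested.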


Berin Loritsch wrote:

> I remember reading an article on how to implement an efficient
> ThreadSafe hashmap.  The problem is that I can't remember where.  The
> concept is to reduce the locking contention to a "bucket".  The
> implementation they had was fairly simple, and went something along
> these lines: [...]
>
> Could someone take the ball and run with it?  We desperately need to
> reduce thread contention on the ECM and ContainerManager
> implementations.






Re: We desperately need an efficient threadsafe hashmap

Posted by Berin Loritsch <bl...@apache.org>.
vinay nair wrote:
> --- Berin Loritsch <bl...@apache.org> wrote:
> 
>>I remember reading an article on how to implement an
>>efficient ThreadSafe hashmap.
>>The problem is that I can't remember where.  The
>>concept is to reduce the locking
>>contention to a "bucket".  The implementation they
>>had was fairly simple, and
>>went something along these lines:
>>
> This might be the place (at least, this is where I found it :-)
> http://www-106.ibm.com/developerworks/java/library/j-threads2.html


That was the place!

BTW, I named it BucketMap, it's in CVS, and it desperately needs some
comments.  Also, the ContainerProfile.java class shows it is a whole
lot more efficient than the previous implementation.


-- 

"They that give up essential liberty to obtain a little temporary safety
  deserve neither liberty nor safety."
                 - Benjamin Franklin




Re: We desperately need an efficient threadsafe hashmap

Posted by vinay nair <vi...@yahoo.com>.
--- Berin Loritsch <bl...@apache.org> wrote:
> I remember reading an article on how to implement an
> efficient ThreadSafe hashmap.
> The problem is that I can't remember where.  The
> concept is to reduce the locking
> contention to a "bucket".  The implementation they
> had was fairly simple, and
> went something along these lines:
This might be the place (at least, this is where I found it :-)
http://www-106.ibm.com/developerworks/java/library/j-threads2.html




RE: We desperately need an efficient threadsafe hashmap

Posted by Gerhard Froehlich <g-...@gmx.de>.
Hi,
I had to take the ball ;-).

Attached is a minimal implementation, just to test
the main algorithm.  It doesn't implement Map at
the moment...

Just a first impression... more will follow later...

but it rolls!

  ~Gerhard 

"Sorry, but my karma just ran over your dogma."

>-----Original Message-----
>From: Berin Loritsch [mailto:bloritsch@apache.org]
>Sent: Friday, February 15, 2002 6:53 PM
>To: avalon-dev@jakarta.apache.org
>Subject: We desperately need an efficient threadsafe hashmap