Posted to jcs-users@jakarta.apache.org by "rcolmegna@tiscali.it" <rc...@tiscali.it> on 2009/08/11 19:40:41 UTC

JCS HA cluster, is it possible?

Hi,

I have a question about JCS behavior.

Suppose this scenario:
- four JCS servers: S1, S2, S3, S4
- each server can hold a maximum of 10,000 cacheable objects
- a client C1

I need client C1 to write 40,000 objects across the 4 JCS servers. Can JCS automatically balance the object insertions across the 4 servers by partitioning the object set (via round robin, for example)?

I need maximum performance (availability), not reliability. Essentially, I need each object instance "O1" to be stored on _only_ one server, and client lookups to be executed in parallel against the four servers.

Is this possible with JCS?


TIA
Roberto Colmegna





Re: JCS HA cluster, is it possible?

Posted by Aaron Smuts <as...@yahoo.com>.
I'll add something under the JCS util package.  

The utility I'm using looks like this:

// Assumed imports for the JCS 1.x and commons-logging classes used below.
// AbstractPropertyContainer, Cache, and ConfigurationException come from an
// internal utility framework and are not shown here.
import java.io.Serializable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.jcs.JCS;
import org.apache.jcs.access.exception.CacheException;
import org.apache.jcs.engine.behavior.ICacheElement;

/**
 * This handles dividing puts and gets.
 * <p>
 * There are two required properties.
 * <p>
 * <ol>
 * <li>.numberOfPartitions</li>
 * <li>.partitionRegionNamePrefix</li>
 * </ol>
 * System properties will override values in the properties file.
 * <p>
 * We use a JCS region name for each partition that looks like this: partitionRegionNamePrefix + "_"
 * + partitionNumber. The number is 0-indexed.
 * <p>
 * @author Aaron Smuts
 */
public class PartitionedJCSCacheImpl
    extends AbstractPropertyContainer
    implements Cache
{
    /** the logger. */
    private static final Log log = LogFactory.getLog( PartitionedJCSCacheImpl.class );

    /** The number of partitions. */
    private int numberOfPartitions = 1;

    /**
     * We use a JCS region name for each partition that looks like this: partitionRegionNamePrefix +
     * "_" + partitionNumber
     */
    private String partitionRegionNamePrefix;

    /** An array of partitions built during initialization. */
    private JCS[] partitions;

    /** Is the class initialized. */
    private boolean initialized = false;

    /** Sets default properties heading and group. */
    public PartitionedJCSCacheImpl()
    {
        setPropertiesHeading( "PartitionedJCSCache" );
        setPropertiesGroup( "webservices" );
    }

    /**
     * Puts the value into the appropriate cache partition.
     * <p>
     * @param key key 
     * @param object object
     * @return true if there were no errors.
     * @throws ConfigurationException on configuration problem
     */
    public boolean put( Serializable key, Serializable object )
        throws ConfigurationException
    {
        if ( key == null || object == null )
        {
            log.warn( "Bad input key [" + key + "].  Cannot put null into the cache." );
            return false;
        }
        ensureInit();

        int partition = getPartitionNumberForKey( key );
        try
        {
            partitions[partition].put( key, object );
            return true;
        }
        catch ( CacheException e )
        {
            log.error( "Problem putting value for key [" + key + "] in cache [" + partitions[partition] + "]" );
            return false;
        }
    }

    /**
     * Gets the object for the key from the desired partition.
     * <p>
     * @param key key
     * @return result, null if not found.
     * @throws ConfigurationException on configuration problem
     */
    public Object get( Serializable key )
        throws ConfigurationException
    {
        if ( key == null )
        {
            log.warn( "Bad input key [" + key + "]." );
            return null;
        }
        ensureInit();

        int partition = getPartitionNumberForKey( key );

        return partitions[partition].get( key );
    }
    
    /**
     * Gets the ICacheElement (the wrapped object) for the key from the desired partition.
     * <p>
     * @param key key
     * @return result, null if not found.
     * @throws ConfigurationException on configuration problem
     */
    public ICacheElement getCacheElement( Serializable key )
        throws ConfigurationException
    {
        if ( key == null )
        {
            log.warn( "Bad input key [" + key + "]." );
            return null;
        }
        ensureInit();

        int partition = getPartitionNumberForKey( key );

        return partitions[partition].getCacheElement( key );
    }

    /**
     * This expects a numeric key. If the key cannot be converted into a number, we fall back to
     * the key's hashcode. TODO we could md5 it instead.
     * <p>
     * We determine the partition by taking the key modulo the number of partitions.
     * <p>
     * @param key key
     * @return the partition number.
     */
    protected int getPartitionNumberForKey( Serializable key )
    {
        if ( key == null )
        {
            return 0;
        }

        long keyNum = getNumericValueForKey( key );

        // Use the absolute value so a negative hashcode fallback cannot yield a negative index.
        int partition = (int) ( Math.abs( keyNum ) % getNumberOfPartitions() );

        if ( log.isDebugEnabled() )
        {
            log.debug( "Using partition [" + partition + "] for key [" + key + "]" );
        }

        return partition;
    }

    /**
     * This can be overridden for special purposes.
     * <p>
     * @param key key
     * @return long
     */
    public long getNumericValueForKey( Serializable key )
    {
        String keyString = key.toString();
        long keyNum = -1;
        try
        {
            keyNum = Long.parseLong( keyString );
        }
        catch ( NumberFormatException e )
        {
            // THIS IS UGLY, but I can't think of a better failsafe right now.
            keyNum = key.hashCode();
            log.warn( "Counldn't convert [" + key + "] into a number.  Will use hashcode [" + keyNum + "]" );
        }
        return keyNum;
    }

    /**
     * Initialize if we haven't already.
     * <p>
     * @throws ConfigurationException on configuration problem
     */
    protected synchronized void ensureInit()
        throws ConfigurationException
    {
        if ( !initialized )
        {
            initialize();
        }
    }

    /**
     * Use the partition prefix and the number of partitions to get JCS regions.
     * <p>
     * @throws ConfigurationException on configuration problem
     */
    protected synchronized void initialize()
        throws ConfigurationException
    {
        ensureProperties();

        JCS[] tempPartitions = new JCS[this.getNumberOfPartitions()];
        for ( int i = 0; i < this.getNumberOfPartitions(); i++ )
        {
            String regionName = this.getPartitionRegionNamePrefix() + "_" + i;
            try
            {
                tempPartitions[i] = JCS.getInstance( regionName );
            }
            catch ( CacheException e )
            {
                log.error( "Problem getting cache for region [" + regionName + "]" );
            }
        }
        partitions = tempPartitions;
        initialized = true;
    }

    /**
     * Loads in the needed configuration settings. System properties are checked first. A system
     * property will override local property value.
     * <p>
     * Loads the following JCS Cache specific properties:
     * <ul>
     * <li>heading.numberOfPartitions</li>
     * <li>heading.partitionRegionNamePrefix</li>
     * </ul>
     * @throws ConfigurationException on configuration problem
     */
    protected void handleProperties()
        throws ConfigurationException
    {
        // Number of Partitions.
        String numberOfPartitionsPropertyName = this.getPropertiesHeading() + ".numberOfPartitions";
        String numberOfPartitionsPropertyValue = getPropertyForName( numberOfPartitionsPropertyName, true );
        try
        {
            this.setNumberOfPartitions( Integer.parseInt( numberOfPartitionsPropertyValue ) );
        }
        catch ( NumberFormatException e )
        {
            String message = "Could not convert [" + numberOfPartitionsPropertyValue + "] into a number for ["
                + numberOfPartitionsPropertyName + "]";
            log.error( message );
            throw new ConfigurationException( message );
        }

        // Partition Name Prefix.
        String prefixPropertyName = this.getPropertiesHeading() + ".partitionRegionNamePrefix";
        String prefix = getPropertyForName( prefixPropertyName, true );
        this.setPartitionRegionNamePrefix( prefix );
    }

    /**
     * Checks the system properties before the properties.
     * <p>
     * @param propertyName name
     * @param required is it required?
     * @return the property value if one is found
     * @throws ConfigurationException thrown if it is required and not found.
     */
    protected String getPropertyForName( String propertyName, boolean required )
        throws ConfigurationException
    {
        String propertyValue = null;
        propertyValue = System.getProperty( propertyName );
        if ( propertyValue != null )
        {
            if ( log.isInfoEnabled() )
            {
                log.info( "Found system property override: Name [" + propertyName + "] Value [" + propertyValue + "]" );
            }
        }
        else
        {
            propertyValue = this.getProperties().getProperty( propertyName );
            if ( required && propertyValue == null )
            {
                String message = "Could not find required property [" + propertyName + "] in propertiesGroup ["
                    + this.getPropertiesGroup() + "]";
                log.error( message );
                throw new ConfigurationException( message );
            }
            else
            {
                if ( log.isInfoEnabled() )
                {
                    log.info( "Name [" + propertyName + "] Value [" + propertyValue + "]" );
                }
            }
        }
        return propertyValue;
    }

    /**
     * @param numberOfPartitions The numberOfPartitions to set.
     */
    protected void setNumberOfPartitions( int numberOfPartitions )
    {
        this.numberOfPartitions = numberOfPartitions;
    }

    /**
     * @return Returns the numberOfPartitions.
     */
    protected int getNumberOfPartitions()
    {
        return numberOfPartitions;
    }

    /**
     * @param partitionRegionNamePrefix The partitionRegionNamePrefix to set.
     */
    protected void setPartitionRegionNamePrefix( String partitionRegionNamePrefix )
    {
        this.partitionRegionNamePrefix = partitionRegionNamePrefix;
    }

    /**
     * @return Returns the partitionRegionNamePrefix.
     */
    protected String getPartitionRegionNamePrefix()
    {
        return partitionRegionNamePrefix;
    }

    /**
     * @param partitions The partitions to set.
     */
    protected void setPartitions( JCS[] partitions )
    {
        this.partitions = partitions;
    }

    /**
     * @return Returns the partitions.
     */
    protected JCS[] getPartitions()
    {
        return partitions;
    }
}
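
Here is a minimal usage sketch. It assumes the two required properties are supplied as
system properties (which the class checks before the properties file) and that the
AbstractPropertyContainer base class can resolve the "webservices" properties group; the
key and value are made up for illustration.

// Hypothetical usage of the wrapper above; put() and get() throw ConfigurationException.
System.setProperty( "PartitionedJCSCache.numberOfPartitions", "4" );
System.setProperty( "PartitionedJCSCache.partitionRegionNamePrefix", "MyPartitionedData" );

PartitionedJCSCacheImpl cache = new PartitionedJCSCacheImpl();

// Numeric keys are spread across MyPartitionedData_0 ... MyPartitionedData_3 by key % 4.
cache.put( Long.valueOf( 42 ), "some serializable value" );
Object value = cache.get( Long.valueOf( 42 ) );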



Re: JCS HA cluster, is it possible?

Posted by Aaron Smuts <as...@yahoo.com>.
You can implement this very easily with a bit of client code.  In fact, I've done just this elsewhere, or at least something very similar. 

I have an installation that handles millions of items a day.  The items average 80k.  I don't have disk space to keep them all on any one box.  So, I partitioned the data.  I did it this way.

I took 8 boxes and set up 4 remote cache server primary/failover pairs.  (FYI, each is configured to use a JDBC disk cache backed by MySQL running locally.)

Every client is configured with 4 remote cache client auxiliaries, one for each pair.

I configured 4 regions on the clients.  You could have far more.      

I divided the remote clients between the regions.  Region 1 uses primary/failover pair A, region 2 uses pair B, and so on.

(In another instance I have 180 regions divided between 4 remote servers.)
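
Here is a rough client-side cache.ccf fragment for one of the partition regions, just to
illustrate the wiring. The hostnames and ports are made up; the factory and attributes
class names are the standard JCS remote cache auxiliary classes.

# region 0 is served by primary/failover pair A
jcs.region.MyPartitionedData_0=RC_A
jcs.auxiliary.RC_A=org.apache.jcs.auxiliary.remote.RemoteCacheFactory
jcs.auxiliary.RC_A.attributes=org.apache.jcs.auxiliary.remote.RemoteCacheAttributes
jcs.auxiliary.RC_A.attributes.FailoverServers=hostA1:1101,hostA2:1101
# ... RC_B, RC_C, and RC_D are configured the same way for regions 1 through 3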

I decide what region to put the data in through a simple algorithm:  key mod numberOfPartitions (here 4) = the region.

I named the regions in a pattern like this:

MyPartitionedData_0
MyPartitionedData_1
MyPartitionedData_2
MyPartitionedData_3

I take the suffix from the algorithm and put the data in the appropriate region.  I made a simple abstraction that does just this.
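
A minimal sketch of that selection step, assuming numeric keys and the region names above:

// Pick the region by key mod numberOfPartitions and put the value in it.
void putPartitioned( Serializable key, Serializable value ) throws CacheException
{
    int numberOfPartitions = 4;
    long keyNum = Long.parseLong( key.toString() ); // assumes the key parses as a number
    String regionName = "MyPartitionedData_" + ( keyNum % numberOfPartitions );
    JCS.getInstance( regionName ).put( key, value );
}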

I could add more partitions and reconfigure.  It can be scaled indefinitely...

Perhaps I'll put it in the JCS util package.

Cheers,

Aaron


