Posted to dev@commons.apache.org by lu...@free.fr on 2008/11/27 18:10:17 UTC

Re: svn commit: r721203 [1/2] - in /commons/proper/math/branches/MATH_2_0: ./ src/java/org/apache/commons/math/linear/ src/site/xdoc/ src/site/xdoc/userguide/ src/test/org/apache/commons/math/linear/

Hello,

This commit is the result of weeks of work. I hope it completes an important feature
for [math]: the computation of eigenvalues and eigenvectors of symmetric real matrices.

The implementation is based on algorithms developed in the last 10 years or so, drawing partly on two reference papers and partly on LAPACK. LAPACK is distributed under a modified BSD license, so this is acceptable for [math]. I have updated the NOTICE file and taken care of the proper attributions in the Javadoc.

The current status is that we can solve eigenproblems much faster than Jama (see the speed gains in the commit message below). Furthermore, the eigenvectors are not always computed: they are computed only if needed, so applications that need only the eigenvalues benefit from an even larger speed gain. This could be improved further by allowing only a subset of the eigenvalues to be computed rather than all of them. That feature is available in the higher-level LAPACK routine, but I haven't included it yet; I'll do it only when it is required, as this has already been a very large amount of work.
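
To make this concrete, here is a minimal usage sketch. It assumes the 2.0-style
API (MatrixUtils, RealMatrix, an EigenDecompositionImpl constructor taking the
matrix plus a split tolerance, and accessors such as getRealEigenvalues() and
getV()); the exact signatures on the branch may differ.

    import org.apache.commons.math.linear.EigenDecomposition;
    import org.apache.commons.math.linear.EigenDecompositionImpl;
    import org.apache.commons.math.linear.MatrixUtils;
    import org.apache.commons.math.linear.RealMatrix;
    import org.apache.commons.math.util.MathUtils;

    public class EigenExample {
        public static void main(String[] args) {
            // a small symmetric real matrix
            RealMatrix m = MatrixUtils.createRealMatrix(new double[][] {
                { 4.0, 1.0, 0.0 },
                { 1.0, 3.0, 1.0 },
                { 0.0, 1.0, 2.0 }
            });

            // the split tolerance argument is an assumption of this sketch
            EigenDecomposition dec = new EigenDecompositionImpl(m, MathUtils.SAFE_MIN);

            // eigenvalues only: no eigenvector computation is triggered
            double[] eigenvalues = dec.getRealEigenvalues();

            // asking for V triggers the (more expensive) eigenvector computation
            RealMatrix v = dec.getV();

            System.out.println("first eigenvalue: " + eigenvalues[0]);
            System.out.println("V has " + v.getColumnDimension() + " columns");
        }
    }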

If someone could test this new decomposition algorithm further, I would be more than happy.

My next goal is to implement Singular Value Decomposition. I will most probably use a method based on eigen decomposition, as this now seems to be the preferred approach since the dqd/dqds and MRRR algorithms became available.
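
For reference, the relation such a method exploits: the singular values of A
are the square roots of the eigenvalues of the symmetric matrix A^T A, and the
right singular vectors are the corresponding eigenvectors (U then follows from
U = A V S^-1 for the non-zero singular values). The sketch below only
illustrates that relation through the new symmetric eigen solver; it is not the
implementation planned here, and the constructor arguments are assumed as in
the previous sketch.

    import org.apache.commons.math.linear.EigenDecomposition;
    import org.apache.commons.math.linear.EigenDecompositionImpl;
    import org.apache.commons.math.linear.MatrixUtils;
    import org.apache.commons.math.linear.RealMatrix;
    import org.apache.commons.math.util.MathUtils;

    public class SvdViaEigenSketch {
        public static void main(String[] args) {
            RealMatrix a = MatrixUtils.createRealMatrix(new double[][] {
                { 1.0, 2.0 },
                { 3.0, 4.0 },
                { 5.0, 6.0 }
            });

            // A^T A is symmetric, so the symmetric eigen solver applies
            RealMatrix ata = a.transpose().multiply(a);
            EigenDecomposition dec = new EigenDecompositionImpl(ata, MathUtils.SAFE_MIN);

            // singular values are the square roots of the eigenvalues of A^T A
            double[] lambda = dec.getRealEigenvalues();
            double[] sigma = new double[lambda.length];
            for (int i = 0; i < lambda.length; i++) {
                sigma[i] = Math.sqrt(Math.max(0.0, lambda[i]));
            }

            // right singular vectors are the eigenvectors of A^T A
            RealMatrix v = dec.getV();

            System.out.println("singular values: " + java.util.Arrays.toString(sigma));
            System.out.println("V has " + v.getColumnDimension() + " columns");
        }
    }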

Luc

----- luc@apache.org wrote:

> Author: luc
> Date: Thu Nov 27 07:50:42 2008
> New Revision: 721203
> 
> URL: http://svn.apache.org/viewvc?rev=721203&view=rev
> Log:
> completed implementation of EigenDecompositionImpl.
> The implementation is now based on the very fast and accurate dqd/dqds algorithm.
> It is faster than Jama for all dimensions, and the speed gain increases with dimension.
> The gain is about 30% below dimension 100, about 50% around dimension 250,
> and about 65% for dimensions around 700.
> It is also possible to compute only the eigenvalues (hence saving the computation
> of the eigenvectors, increasing the speed gain even further).
> JIRA: MATH-220



Re: svn commit: r721203 [1/2] - in /commons/proper/math/branches/MATH_2_0: ./ src/java/org/apache/commons/math/linear/ src/site/xdoc/ src/site/xdoc/userguide/ src/test/org/apache/commons/math/linear/

Posted by Ted Dunning <te...@gmail.com>.
Luc,

Last I looked, I think I saw that commons math used a double-indirect
storage format very similar to Jama's.

Is there any thought of moving to a higher-performance layout such as the one
used by Colt?
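
(For readers skimming the archive: "double indirect" refers to a double[][]
layout, where each element access first dereferences a row pointer, whereas
Colt keeps the whole matrix in one flat double[] and computes an offset. A
minimal illustrative sketch of the two layouts, with hypothetical class names
that belong to neither library:)

    // Jama-style storage: an array of row arrays (double indirection).
    class RowPointerMatrix {
        private final double[][] data;

        RowPointerMatrix(int rows, int cols) {
            this.data = new double[rows][cols];
        }

        double get(int i, int j) {
            return data[i][j]; // two loads: row pointer, then element
        }
    }

    // Colt-style storage: one flat array plus index arithmetic.
    class FlatMatrix {
        private final double[] data;
        private final int cols;

        FlatMatrix(int rows, int cols) {
            this.cols = cols;
            this.data = new double[rows * cols];
        }

        double get(int i, int j) {
            return data[i * cols + j]; // single load after an offset computation
        }
    }

The flat layout avoids a pointer chase per access and keeps rows contiguous in
memory, which is generally friendlier to the cache and to the JIT.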

On Thu, Nov 27, 2008 at 9:10 AM, <lu...@free.fr> wrote:

> [original message quoted in full; snipped]


-- 
Ted Dunning, CTO
DeepDyve
4600 Bohannon Drive, Suite 220
Menlo Park, CA 94025
www.deepdyve.com
650-324-0110, ext. 738
858-414-0013 (m)