Posted to dev@commons.apache.org by Sébastien Brisard <se...@m4x.org> on 2011/08/08 20:46:47 UTC

[math] Testing for convergence in iterative algorithms

Hi,
discussions regarding the development of iterative linear solvers
(MATH-581) triggered the following issue. Usually, a default
(efficiently implemented) convergence checker is provided for each
standard linear iterative solver. However, it would be very useful to
have the ability to use a custom convergence checker. This is very
similar to what is done in o.a.c.m.optimization. However,
ConvergenceChecker is too specific in my view, since the decision on
convergence is based on the last two iterations. While this is a
perfectly valid approach, it is not the only one, and I was wondering
whether we could be a little more general?
For example, in iterative linear solvers, the convergence test is
(often) based on the residual r = b - Ax, not on the difference
between newX and oldX.
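To make this concrete, here is a minimal sketch of what such a
residual-based check could look like (purely illustrative: the class
name and the tolerance handling are made up; only RealMatrix and
RealVector are existing o.a.c.m.linear types):

import org.apache.commons.math.linear.RealMatrix;
import org.apache.commons.math.linear.RealVector;

// Illustrative sketch only: convergence is decided from the current
// iterate x alone, via the relative residual norm ||b - A x|| / ||b||,
// with no reference to the previous iterate.
public class ResidualConvergenceChecker {
    private final RealMatrix a;
    private final RealVector b;
    private final double eps;

    public ResidualConvergenceChecker(RealMatrix a, RealVector b, double eps) {
        this.a = a;
        this.b = b;
        this.eps = eps;
    }

    public boolean hasConverged(RealVector x) {
        final RealVector r = b.subtract(a.operate(x));
        return r.getNorm() <= eps * b.getNorm();
    }
}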
Thanks in advance for your thoughts on this topic.

Sebastien



Re: [math] Testing for convergence in iterative algorithms

Posted by Sébastien Brisard <se...@m4x.org>.
>
> The tricky bit here is a) how to encapsulate state in a generic way
> that has enough substance to it to be actually useful and b)
> similarly how to do the same for convergence.  I am probably too
> influenced by the topological connotation of the latter term which
> makes it seem basically different to me from what we have in the GA,
> where we are not really trying to engineer convergence of a sequence
> in any topological space.  But it could be they can rightly be seen
> as the same and Population in the GA should be a subclass of
> whatever we end up calling iterative algorithm state.
>
> Another tricky bit is that in some of the examples you gave, history
> as well as current state must be available to the StoppingCondition
> - so just passing it current state is not enough.   It might
> actually be better for the StoppingCondition to be what you are
> calling a monitor - an event listener that can subscribe to the
> events carrying the information that it needs.
>
> As I said on the other thread, I think in any case a generic event
> framework to support the "monitoring" requirement would be a good
> thing to develop.
>
> Phil
Hi,
this takes us back to a discussion we had on JIRA: is a stopping
criterion a kind of monitor? Gilles argued that the two concepts
should be distinguished, and a default stopping criterion should be
hard coded in the iterative algorithm itself. I agree with that view,
since the default stopping criterion would then be highly optimized.
As for some stopping criteria requiring more than one state (e.g. the
state at times t and t-1), don't you think it should be the
responsibility of the convergence checker itself to store previous
values of the state? Let me explain. I might want to design a special
stopping criterion which would work as follows:
  - take the last, say, three states of the iterative algorithm: x[n],
x[n-1], x[n-2];
  - extrapolate a "converged" state based on these values: y[n] =
extrapolate(x[n], x[n-1], x[n-2]);
  - check for convergence of the extrapolated values: abs(y[n] - y[n-1]) <= eps?
As you can see, each call to the checker would require the last *three*
states. This is endless! So I think the convergence checker itself
should store the previous states as needed. This way, we could have a
generic ConvergenceChecker:

public interface ConvergenceChecker<State> {
  public boolean hasConverged(State s);
}
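
For instance, the extrapolation-based criterion sketched above could be
written against this interface roughly as follows (a rough sketch only;
the linear extrapolation below is just a stand-in for a real
extrapolation scheme, and the history handling is deliberately naive):

import java.util.ArrayDeque;
import java.util.Deque;

import org.apache.commons.math.linear.RealVector;

// Rough sketch: the checker keeps its own history of iterates, so the
// algorithm only ever passes the current state.
public class ExtrapolatingConvergenceChecker implements ConvergenceChecker<RealVector> {
    private final double eps;
    private final Deque<RealVector> history = new ArrayDeque<RealVector>();
    private RealVector previousY;

    public ExtrapolatingConvergenceChecker(double eps) {
        this.eps = eps;
    }

    public boolean hasConverged(RealVector x) {
        history.addLast(x.copy());
        if (history.size() > 3) {
            history.removeFirst();
        }
        if (history.size() < 3) {
            return false; // not enough history yet
        }
        // y[n] = extrapolate(x[n], x[n-1], x[n-2]); here a naive
        // linear extrapolation x[n] + (x[n] - x[n-1]) stands in.
        final RealVector[] h = history.toArray(new RealVector[0]);
        final RealVector y = h[2].add(h[2].subtract(h[1]));
        final boolean converged = previousY != null
            && y.subtract(previousY).getNorm() <= eps;
        previousY = y;
        return converged;
    }
}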

What do you think of this design? On a more tentative note, here is
what an IterativeAlgorithm could look like:

public interface IterativeAlgorithm<State> {
  public boolean isIterating();

  public int getIteration();

  public State getState();

  public int getMaximumNumberOfIterations();
}
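
Just to check that the two interfaces fit together, the driving loop of
a concrete implementation might then look something like this
(hypothetical: initialState() and doOneIteration() are placeholders,
and the exception type is only a suggestion):

// Hypothetical driving loop, only to show how the two interfaces could
// cooperate; not meant as the actual API.
public State solve(ConvergenceChecker<State> checker) {
    State state = initialState();
    int iteration = 0;
    while (iteration < getMaximumNumberOfIterations()) {
        state = doOneIteration(state);
        iteration++;
        if (checker.hasConverged(state)) {
            return state;
        }
    }
    throw new IllegalStateException("maximum number of iterations exceeded");
}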


As you can see, I haven't dared to try to put Observers/Listeners into
that... BTW, GSL might be worth looking into in order to see what
design they chose. I'll look into it.
Sebastien



Re: [math] Testing for convergence in iterative algorithms

Posted by Phil Steitz <ph...@gmail.com>.
On 8/8/11 10:41 PM, Sébastien Brisard wrote:
> 2011/8/8 Phil Steitz <ph...@gmail.com>:
>> Sounds reasonable.  Have a look at the StoppingCondition interface
>> and its implementations in the genetics package.  Something like
>> that would probably work.
>>
>> Phil
>>
> I wasn't aware of o.a.c.m.genetics.StoppingCondition. It's exactly
> what I would need, only slightly more general: instead of a
> Population, the StoppingCondition should be passed the "state"
> (whatever that means) of the iterative solvers.

Right.  Just need to decide how to encapsulate that state. 
Interestingly, the Population abstraction in the genetics package
provides another good example to consider here: in its context, it
plays a role very similar to the state of an iterative numerical
algorithm.
>
> So at the moment, stopping criteria are implemented in at least two
> different places in Commons Math
>   - o.a.c.m.optimization.ConvergenceChecker.converged(int iteration,
> PAIR previous, PAIR current)
>   - o.a.c.m.genetics.StoppingCondition.isSatisfied(Population population)
>
> Do we want to reconcile these two different implementations of the
> same concept? I think it would be interesting to give some thought to
> iterative algorithms at large, and design some general classes that
> support the implementation of such iterative algorithms.

Interesting idea.  Let's see where it goes.
>
> Here are a few (obvious) thoughts on what makes an iterative
> algorithm. Maybe that could help.
>   - Iterative algorithms obviously iterate, so they probably keep an
> eye on the current iteration number. Besides, an upper bound on the
> total number of iterations must be set. As was suggested, this hints
> at the use of o.a.c.m.util.Incrementor.

Another thing to keep in mind is that in some cases the stopping
criterion is the number of function evaluations.  But yes, IIUC the
intent, all of these kinds of counts can be represented using the
Incrementor.
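
For what it's worth, such a count wrapped around the Incrementor could
be as simple as the following sketch (assuming the o.a.c.m.util.Incrementor
API with setMaximalCount()/incrementCount(); to be double-checked
against trunk):

import org.apache.commons.math.util.Incrementor;

// Sketch only: an iteration (or evaluation) counter with the upper
// bound enforced by Incrementor, which is expected to throw once the
// maximal count is exceeded.
public class IterationCount {
    private final Incrementor count = new Incrementor();

    public IterationCount(int maxCount) {
        count.setMaximalCount(maxCount);
    }

    public void increment() {
        count.incrementCount(); // throws when maxCount is exceeded
    }

    public int getCount() {
        return count.getCount();
    }
}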
>   - Iterative algorithms update their *state*, whether this is a
> Population, a RealVector, or anything. So from one iteration to the
> next, the state changes.

Yes, at least that is how we think about them.  In some cases,
obviously, the state ends up in some kind of attractor or divergent
condition and does not change any more; but the point of the
iteration is to evolve the state.

>   - The iterative scheme itself can be quite long, and it might be
> interesting to have a way to *listen* to the algorithm while the
> iterations take place (for example, to back up the whole thing). This
> relates to the thread on "Monitors".

Yes, I think we all agree we need to have a way to do this.
>   - The last two points bring up a new question (I think). Let us
> imagine that at the end of each iteration, the Iterative Algorithm
> calls monitor.update(state). Then the monitor can access the updated
> state of the Iterative Algorithm, and possibly *alter* it, which might
> ruin the whole thing. This would call for monitor.update() being
> passed a *copy* of the updated state, which might be inefficient if
> the computation works on large data sets, no?

This gets problem- and implementation-dependent quickly, but I think
in principle it would be a good idea to ensure that only immutable
objects bearing information about state could be carried on events
propagated via the monitoring framework.  At the same time, in some
cases, it might be good to allow monitors to halt or change
execution via established interfaces.  JMX provides some good
examples to think about here.  You can start and stop web
applications in Tomcat, for example, via JMX and you can see how
many active sessions there are, etc; but you can't get a reference
to a user session to play with (at least I hope not :)
>   - The iterations should stop at some point. If the maximum iteration
> count has been reached, an exception should generally (but not
> necessarily) be thrown; this is taken care of by the Incrementor.
> Otherwise, it means that the algorithm has reached a *satisfactory*
> state. What satisfactory means depends on the actual problem at hand.
> For optimization purposes, it really means "stable": the current
> solution is not much different from the previous solution (the
> previous state must be kept in some place). For linear solvers, it
> means: the residual is small enough (only the current state is used).
> Both approaches might be reconciled by the same method hasConverged()
> which would take anything as a parameter (a pair of RealPointValuePair
> --previous and next--, a residual,...).

Here we have the StoppingCondition again.
>
> So a few concepts emerge: Iterative Algorithm, State, Incrementor,
> Observer (or monitor, or listener...), Convergence Criterion.
>
> Maybe that takes us too far onto the philosophical side, in which case
> the discussion should be closed soon. As for me, I'm very much
> interested...

The tricky bit here is a) how to encapsulate state in a generic way
that has enough substance to it to be actually useful and b)
similarly how to do the same for convergence.  I am probably too
influenced by the topological connotation of the latter term which
makes it seem basically different to me from what we have in the GA,
where we are not really trying to engineer convergence of a sequence
in any topological space.  But it could be they can rightly be seen
as the same and Population in the GA should be a subclass of
whatever we end up calling iterative algorithm state.

Another tricky bit is that in some of the examples you gave, history
as well as current state must be available to the StoppingCondition
- so just passing it current state is not enough.   It might
actually be better for the StoppingCondition to be what you are
calling a monitor - an event listener that can subscribe to the
events carrying the information that it needs.

As I said on the other thread, I think in any case a generic event
framework to support the "monitoring" requirement would be a good
thing to develop.
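
To fix ideas, such a framework could be as small as the following
(names made up for the sake of discussion; the event deliberately
exposes read-only information so that listeners cannot alter the
algorithm's state):

// Illustrative only (each type in its own file in practice).
public interface IterationListener<State> {
    void iterationPerformed(IterationEvent<State> event);
}

public class IterationEvent<State> {
    private final int iteration;
    private final State state; // ideally an immutable snapshot

    public IterationEvent(int iteration, State state) {
        this.iteration = iteration;
        this.state = state;
    }

    public int getIteration() {
        return iteration;
    }

    public State getState() {
        return state;
    }
}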

Phil
>
> Sebastien
>




Re: [math] Testing for convergence in iterative algorithms

Posted by Sébastien Brisard <se...@m4x.org>.
2011/8/8 Phil Steitz <ph...@gmail.com>:
>
> Sounds reasonable.  Have a look at the StoppingCondition interface
> and its implementations in the genetics package.  Something like
> that would probably work.
>
> Phil
>

I wasn't aware of o.a.c.m.genetics.StoppingCondition. It's exactly
what I would need, only slightly more general: instead of a
Population, the StoppingCondition should be passed the "state"
(whatever that means) of the iterative solvers.

So at the moment, stopping criteria are implemented in at least two
different places in Commons Math
  - o.a.c.m.optimization.ConvergenceChecker.converged(int iteration,
PAIR previous, PAIR current)
  - o.a.c.m.genetics.StoppingCondition.isSatisfied(Population population)

Do we want to reconcile these two different implementations of the
same concept? I think it would be interesting to give some thought to
iterative algorithms at large, and design some general classes that
support the implementation of such iterative algorithms.

Here are a few (obvious) thoughts on what makes an iterative
algorithm. Maybe that could help.
  - Iterative algorithms obviously iterate, so they probably keep an
eye on the current iteration number. Besides, an upper bound on the
total number of iterations must be set. As was suggested, this hints
at the use of o.a.c.m.util.Incrementor.
  - Iterative algorithms update their *state*, whether this is a
Population, a RealVector, or anything. So from one iteration to the
next, the state changes.
  - The iterative scheme itself can be quite long, and it might be
interesting to have a way to *listen* to the algorithm while the
iterations take place (for example, to back up the whole thing). This
relates to the thread on "Monitors".
  - The last two points bring up a new question (I think). Let us
imagine that at the end of each iteration, the Iterative Algorithm
calls monitor.update(state). Then the monitor can access the updated
state of the Iterative Algorithm, and possibly *alter* it, which might
ruin the whole thing. This would call for monitor.update() being
passed a *copy* of the updated state, which might be inefficient if
the computation works on large data sets, no?
  - The iterations should stop at some point. If the maximum iteration
count has been reached, an exception should generally (but not
necessarily) be thrown; this is taken care of by the Incrementor.
Otherwise, it means that the algorithm has reached a *satisfactory*
state. What satisfactory means depends on the actual problem at hand.
For optimization purposes, it really means "stable": the current
solution is not much different from the previous solution (the
previous state must be kept in some place). For linear solvers, it
means: the residual is small enough (only the current state is used).
Both approaches might be reconciled by the same method hasConverged()
which would take anything as a parameter (a pair of RealPointValuePair
--previous and next--, a residual,...).

So a few concepts emerge: Iterative Algorithm, State, Incrementor,
Observer (or monitor, or listener...), Convergence Criterion.

Maybe that takes us too far onto the philosophical side, in which case
the discussion should be closed soon. As for me, I'm very much
interested...

Sebastien



Re: [math] Testing for convergence in iterative algorithms

Posted by Phil Steitz <ph...@gmail.com>.
On 8/8/11 11:46 AM, Sébastien Brisard wrote:
> Hi,
> discussions regarding the development of iterative linear solvers
> (MATH-581) triggered the following issue. Usually, a default
> (efficiently implemented) convergence checker is provided for each
> standard linear iterative solver. However, it would be very useful to
> have the ability to use a custom convergence checker. This is very
> similar to what is done in o.a.c.m.optimization. However,
> ConvergenceChecker is too specific in my view, since the decision on
> convergence is based on the last two iterations. While this is a
> perfectly valid approach, it is not the only one, and I was wondering
> whether we could be a little more general?
> For example, in iterative linear solvers, the convergence test is
> (often) based on the residual r = b - Ax, not on the difference
> between newX and oldX.
> Thanks in advance for your thoughts on this topic.

Sounds reasonable.  Have a look at the StoppingCondition interface
and its implementations in the genetics package.  Something like
that would probably work.

Phil
>
> Sebastien
>

