Posted to dev@mahout.apache.org by Dmitriy Lyubimov <dl...@gmail.com> on 2014/03/17 19:31:57 UTC

Is Cholesky too sensitive to rank deficiency?

I still seem to get significant differences between the norms of
Householder QR and QR via the Cholesky trick. Our stock in-core QR is
comfortable populating some R values (and therefore Q columns) with values
as small as 1e-16, whereas the Cholesky computation for L seems to set
these entries to 0. The norms on Q in this case differ more than trivially.
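
For reference, here is what the trick computes, as a minimal self-contained
Java sketch with plain arrays (not Mahout's in-core CholeskyDecomposition or
QRDecomposition; the test matrix, the 1e-6 nudge, and the zero threshold are
all made up for illustration):

    // Sketch of the "Cholesky trick" QR: R = chol(A'A), Q = A * inv(R).
    public class CholeskyQRSketch {

      // Gram matrix A'A of an m x n matrix A.
      static double[][] gram(double[][] a) {
        int m = a.length, n = a[0].length;
        double[][] g = new double[n][n];
        for (int i = 0; i < n; i++)
          for (int j = 0; j < n; j++) {
            double s = 0;
            for (int k = 0; k < m; k++) s += a[k][i] * a[k][j];
            g[i][j] = s;
          }
        return g;
      }

      // Textbook unpivoted Cholesky: upper-triangular R with R'R = g.
      // 'eps' is the limit that decides positive definiteness.
      static double[][] cholUpper(double[][] g, double eps) {
        int n = g.length;
        double[][] r = new double[n][n];
        for (int i = 0; i < n; i++)
          for (int j = i; j < n; j++) {
            double s = g[i][j];
            for (int k = 0; k < i; k++) s -= r[k][i] * r[k][j];
            if (i == j) {
              if (s < eps) throw new ArithmeticException("pivot " + i + " = " + s);
              r[i][i] = Math.sqrt(s);
            } else {
              r[i][j] = s / r[i][i];
            }
          }
        return r;
      }

      // Q = A * inv(R): solve q * R = a for each row by forward substitution.
      static double[][] timesInvUpper(double[][] a, double[][] r) {
        int m = a.length, n = r.length;
        double[][] q = new double[m][n];
        for (int row = 0; row < m; row++)
          for (int j = 0; j < n; j++) {
            double s = a[row][j];
            for (int k = 0; k < j; k++) s -= q[row][k] * r[k][j];
            q[row][j] = s / r[j][j];
          }
        return q;
      }

      public static void main(String[] args) {
        // 4 x 3 matrix whose third column is the first plus a 1e-6 nudge,
        // so A'A has one pivot of order 1e-12.
        double[][] a = {{1, 2, 1 + 1e-6}, {3, 4, 3}, {5, 6, 5}, {7, 8, 7 + 2e-6}};
        double[][] r = cholUpper(gram(a), 0);
        double[][] q = timesInvUpper(a, r);
        System.out.println("R[2][2] = " + r[2][2]);  // ~1e-6 scale
        // Shrink the nudge toward 1e-8 and this pivot drops to ~1e-16 in
        // A'A, i.e. into rounding noise: the regime where the thresholding
        // zeroes values that Householder QR still resolves.
      }
    }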

Are we sure we cannot decrease the sensitivity of the Cholesky
decomposition to small values? I have manipulated the limit there that
controls the positive-definiteness decision, but I am not sure I understand
the algorithm well enough to do a meaningful sensitivity reduction.
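
In the sketch above, that limit corresponds to the 'eps' test on each
pivot. One hypothetical direction (a guess at a remedy, not a reading of
Mahout's actual CholeskyDecomposition) is to make the test relative to the
largest pivot seen so far and to zero out failing columns instead of
aborting:

    // Hypothetical rank-revealing variant of cholUpper() above: a pivot is
    // declared zero when it is small relative to the largest pivot seen,
    // and the corresponding row of R is left at zero rather than failing.
    static double[][] cholUpperRankRevealing(double[][] g, double relTol) {
      int n = g.length;
      double[][] r = new double[n][n];
      double maxPivot = 0;
      for (int i = 0; i < n; i++) {
        double s = g[i][i];
        for (int k = 0; k < i; k++) s -= r[k][i] * r[k][i];
        maxPivot = Math.max(maxPivot, s);
        if (s <= relTol * maxPivot) continue;  // dependent column: row stays 0
        r[i][i] = Math.sqrt(s);
        for (int j = i + 1; j < n; j++) {
          double t = g[i][j];
          for (int k = 0; k < i; k++) t -= r[k][i] * r[k][j];
          r[i][j] = t / r[i][i];
        }
      }
      return r;
    }

This trades the hard failure for an explicit, scale-free rank decision, but
it does not recover the small 1e-16 values either; it only moves the cutoff.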

Re: Is Cholesky too sensitive to rank deficiency?

Posted by Ted Dunning <te...@gmail.com>.
This may not be an issue that can actually be cured. The Cholesky trick is akin to squaring a number: you inherently tend to lose precision by doing it.
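
To make the squaring concrete: if A has singular values sigma_i, then A'A
has eigenvalues sigma_i^2, so forming the Gram matrix squares the condition
number. A tiny illustration (numbers made up, not from Mahout):

    public class SquaringDemo {
      public static void main(String[] args) {
        double sigma = 1e-8;            // small singular value of A
        double lambda = sigma * sigma;  // matching eigenvalue of A'A: 1e-16
        double eps = Math.ulp(1.0);     // double-precision epsilon, ~2.22e-16
        // Relative to a unit-scale A'A, lambda sits below rounding noise,
        // so no Cholesky threshold can reliably tell it from zero.
        System.out.println(lambda < eps);  // prints true
      }
    }

That is consistent with the 1e-16 values in the original report: anything
near sqrt(machine epsilon) in A is unrecoverable once A'A has been formed.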

With the possibility of iteration, we should consider more advanced methods for large QR. The great value of the Cholesky trick is that one can use MapReduce with no iteration.

Sent from my iPhone
