1: *> \brief \b DGEJSV
2: *
3: * =========== DOCUMENTATION ===========
4: *
5: * Online html documentation available at
6: * http://www.netlib.org/lapack/explore-html/
7: *
8: *> \htmlonly
9: *> Download DGEJSV + dependencies
10: *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.tgz?format=tgz&filename=/lapack/lapack_routine/dgejsv.f">
11: *> [TGZ]</a>
12: *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.zip?format=zip&filename=/lapack/lapack_routine/dgejsv.f">
13: *> [ZIP]</a>
14: *> <a href="http://www.netlib.org/cgi-bin/netlibfiles.txt?format=txt&filename=/lapack/lapack_routine/dgejsv.f">
15: *> [TXT]</a>
16: *> \endhtmlonly
17: *
18: * Definition:
19: * ===========
20: *
21: * SUBROUTINE DGEJSV( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP,
22: * M, N, A, LDA, SVA, U, LDU, V, LDV,
23: * WORK, LWORK, IWORK, INFO )
24: *
25: * .. Scalar Arguments ..
26: * IMPLICIT NONE
27: * INTEGER INFO, LDA, LDU, LDV, LWORK, M, N
28: * ..
29: * .. Array Arguments ..
30: * DOUBLE PRECISION A( LDA, * ), SVA( N ), U( LDU, * ), V( LDV, * ),
31: * $ WORK( LWORK )
32: * INTEGER IWORK( * )
33: * CHARACTER*1 JOBA, JOBP, JOBR, JOBT, JOBU, JOBV
34: * ..
35: *
36: *
37: *> \par Purpose:
38: * =============
39: *>
40: *> \verbatim
41: *>
42: *> DGEJSV computes the singular value decomposition (SVD) of a real M-by-N
43: *> matrix [A], where M >= N. The SVD of [A] is written as
44: *>
45: *> [A] = [U] * [SIGMA] * [V]^t,
46: *>
47: *> where [SIGMA] is an N-by-N (M-by-N) matrix which is zero except for its N
48: *> diagonal elements, [U] is an M-by-N (or M-by-M) orthonormal matrix, and
49: *> [V] is an N-by-N orthogonal matrix. The diagonal elements of [SIGMA] are
50: *> the singular values of [A]. The columns of [U] and [V] are the left and
51: *> the right singular vectors of [A], respectively. The matrices [U] and [V]
52: *> are computed and stored in the arrays U and V, respectively. The diagonal
53: *> of [SIGMA] is computed and stored in the array SVA.
54: *> \endverbatim
55: *
56: * Arguments:
57: * ==========
58: *
59: *> \param[in] JOBA
60: *> \verbatim
61: *> JOBA is CHARACTER*1
62: *> Specifies the level of accuracy:
63: *> = 'C': This option works well (high relative accuracy) if A = B * D,
64: *> with well-conditioned B and arbitrary diagonal matrix D.
65: *> The accuracy cannot be spoiled by COLUMN scaling. The
66: *> accuracy of the computed output depends on the condition of
67: *> B, and the procedure aims at the best theoretical accuracy.
68: *> The relative error max_{i=1:N}|d sigma_i| / sigma_i is
69: *> bounded by f(M,N)*epsilon* cond(B), independent of D.
70: *> The input matrix is preprocessed with the QRF with column
71: *> pivoting. This initial preprocessing and preconditioning by
72: *> a rank revealing QR factorization is common for all values of
73: *> JOBA. Additional actions are specified as follows:
74: *> = 'E': Computation as with 'C' with an additional estimate of the
75: *> condition number of B. It provides a realistic error bound.
76: *> = 'F': If A = D1 * C * D2 with ill-conditioned diagonal scalings
77: *> D1, D2, and well-conditioned matrix C, this option gives
78: *> higher accuracy than the 'C' option. If the structure of the
79: *> input matrix is not known, and relative accuracy is
80: *> desirable, then this option is advisable. The input matrix A
81: *> is preprocessed with QR factorization with FULL (row and
82: *> column) pivoting.
83: *> = 'G' Computation as with 'F' with an additional estimate of the
84: *> condition number of B, where A=D*B. If A has heavily weighted
85: *> rows, then using this condition number gives too pessimistic
86: *> error bound.
87: *> = 'A': Small singular values are the noise and the matrix is treated
88: *> as numerically rank defficient. The error in the computed
89: *> singular values is bounded by f(m,n)*epsilon*||A||.
90: *> The computed SVD A = U * S * V^t restores A up to
91: *> f(m,n)*epsilon*||A||.
92: *> This gives the procedure the licence to discard (set to zero)
93: *> all singular values below N*epsilon*||A||.
94: *> = 'R': Similar as in 'A'. Rank revealing property of the initial
95: *> QR factorization is used do reveal (using triangular factor)
96: *> a gap sigma_{r+1} < epsilon * sigma_r in which case the
97: *> numerical RANK is declared to be r. The SVD is computed with
98: *> absolute error bounds, but more accurately than with 'A'.
99: *> \endverbatim
100: *>
101: *> \param[in] JOBU
102: *> \verbatim
103: *> JOBU is CHARACTER*1
104: *> Specifies whether to compute the columns of U:
105: *> = 'U': N columns of U are returned in the array U.
106: *> = 'F': full set of M left sing. vectors is returned in the array U.
107: *> = 'W': U may be used as workspace of length M*N. See the description
108: *> of U.
109: *> = 'N': U is not computed.
110: *> \endverbatim
111: *>
112: *> \param[in] JOBV
113: *> \verbatim
114: *> JOBV is CHARACTER*1
115: *> Specifies whether to compute the matrix V:
116: *> = 'V': N columns of V are returned in the array V; Jacobi rotations
117: *> are not explicitly accumulated.
118: *> = 'J': N columns of V are returned in the array V, but they are
119: *> computed as the product of Jacobi rotations. This option is
120: *> allowed only if JOBU .NE. 'N', i.e. in computing the full SVD.
121: *> = 'W': V may be used as workspace of length N*N. See the description
122: *> of V.
123: *> = 'N': V is not computed.
124: *> \endverbatim
125: *>
126: *> \param[in] JOBR
127: *> \verbatim
128: *> JOBR is CHARACTER*1
129: *> Specifies the RANGE for the singular values. Issues the licence to
130: *> set to zero small positive singular values if they are outside
131: *> specified range. If A .NE. 0 is scaled so that the largest singular
132: *> value of c*A is around DSQRT(BIG), BIG=SLAMCH('O'), then JOBR issues
133: *> the licence to kill columns of A whose norm in c*A is less than
134: *> DSQRT(SFMIN) (for JOBR.EQ.'R'), or less than SMALL=SFMIN/EPSLN,
135: *> where SFMIN=SLAMCH('S'), EPSLN=SLAMCH('E').
136: *> = 'N': Do not kill small columns of c*A. This option assumes that
137: *> BLAS and QR factorizations and triangular solvers are
138: *> implemented to work in that range. If the condition of A
139: *> is greater than BIG, use DGESVJ.
140: *> = 'R': RESTRICTED range for sigma(c*A) is [DSQRT(SFMIN), DSQRT(BIG)]
141: *> (roughly, as described above). This option is recommended.
142: *> ~~~~~~~~~~~~~~~~~~~~~~~~~~~
143: *> For computing the singular values in the FULL range [SFMIN,BIG]
144: *> use DGESVJ.
145: *> \endverbatim
146: *>
147: *> \param[in] JOBT
148: *> \verbatim
149: *> JOBT is CHARACTER*1
150: *> If the matrix is square then the procedure may determine to use
151: *> transposed A if A^t seems to be better with respect to convergence.
152: *> If the matrix is not square, JOBT is ignored. This is subject to
153: *> changes in the future.
154: *> The decision is based on two values of entropy over the adjoint
155: *> orbit of A^t * A. See the descriptions of WORK(6) and WORK(7).
156: *> = 'T': transpose if entropy test indicates possibly faster
157: *> convergence of Jacobi process if A^t is taken as input. If A is
158: *> replaced with A^t, then the row pivoting is included automatically.
159: *> = 'N': do not speculate.
160: *> This option can be used to compute only the singular values, or the
161: *> full SVD (U, SIGMA and V). For only one set of singular vectors
162: *> (U or V), the caller should provide both U and V, as one of the
163: *> matrices is used as workspace if the matrix A is transposed.
164: *> The implementer can easily remove this constraint and make the
165: *> code more complicated. See the descriptions of U and V.
166: *> \endverbatim
167: *>
168: *> \param[in] JOBP
169: *> \verbatim
170: *> JOBP is CHARACTER*1
171: *> Issues the licence to introduce structured perturbations to drown
172: *> denormalized numbers. This licence should be active if the
173: *> denormals are poorly implemented, causing slow computation,
174: *> especially in cases of fast convergence (!). For details see [1,2].
175: *> For the sake of simplicity, this perturbations are included only
176: *> when the full SVD or only the singular values are requested. The
177: *> implementer/user can easily add the perturbation for the cases of
178: *> computing one set of singular vectors.
179: *> = 'P': introduce perturbation
180: *> = 'N': do not perturb
181: *> \endverbatim
182: *>
183: *> \param[in] M
184: *> \verbatim
185: *> M is INTEGER
186: *> The number of rows of the input matrix A. M >= 0.
187: *> \endverbatim
188: *>
189: *> \param[in] N
190: *> \verbatim
191: *> N is INTEGER
192: *> The number of columns of the input matrix A. M >= N >= 0.
193: *> \endverbatim
194: *>
195: *> \param[in,out] A
196: *> \verbatim
197: *> A is DOUBLE PRECISION array, dimension (LDA,N)
198: *> On entry, the M-by-N matrix A.
199: *> \endverbatim
200: *>
201: *> \param[in] LDA
202: *> \verbatim
203: *> LDA is INTEGER
204: *> The leading dimension of the array A. LDA >= max(1,M).
205: *> \endverbatim
206: *>
207: *> \param[out] SVA
208: *> \verbatim
209: *> SVA is DOUBLE PRECISION array, dimension (N)
210: *> On exit,
211: *> - For WORK(1)/WORK(2) = ONE: The singular values of A. During the
212: *> computation SVA contains Euclidean column norms of the
213: *> iterated matrices in the array A.
214: *> - For WORK(1) .NE. WORK(2): The singular values of A are
215: *> (WORK(1)/WORK(2)) * SVA(1:N). This factored form is used if
216: *> sigma_max(A) overflows or if small singular values have been
217: *> saved from underflow by scaling the input matrix A.
218: *> - If JOBR='R' then some of the singular values may be returned
219: *> as exact zeros obtained by "set to zero" because they are
220: *> below the numerical rank threshold or are denormalized numbers.
221: *> \endverbatim
222: *>
223: *> \param[out] U
224: *> \verbatim
225: *> U is DOUBLE PRECISION array, dimension ( LDU, N )
226: *> If JOBU = 'U', then U contains on exit the M-by-N matrix of
227: *> the left singular vectors.
228: *> If JOBU = 'F', then U contains on exit the M-by-M matrix of
229: *> the left singular vectors, including an ONB
230: *> of the orthogonal complement of the Range(A).
231: *> If JOBU = 'W' .AND. (JOBV.EQ.'V' .AND. JOBT.EQ.'T' .AND. M.EQ.N),
232: *> then U is used as workspace if the procedure
233: *> replaces A with A^t. In that case, [V] is computed
234: *> in U as left singular vectors of A^t and then
235: *> copied back to the V array. This 'W' option is just
236: *> a reminder to the caller that in this case U is
237: *> reserved as workspace of length N*N.
238: *> If JOBU = 'N' U is not referenced.
239: *> \endverbatim
240: *>
241: *> \param[in] LDU
242: *> \verbatim
243: *> LDU is INTEGER
244: *> The leading dimension of the array U, LDU >= 1.
245: *> IF JOBU = 'U' or 'F' or 'W', then LDU >= M.
246: *> \endverbatim
247: *>
248: *> \param[out] V
249: *> \verbatim
250: *> V is DOUBLE PRECISION array, dimension ( LDV, N )
251: *> If JOBV = 'V', 'J' then V contains on exit the N-by-N matrix of
252: *> the right singular vectors;
253: *> If JOBV = 'W', AND (JOBU.EQ.'U' AND JOBT.EQ.'T' AND M.EQ.N),
254: *> then V is used as workspace if the pprocedure
255: *> replaces A with A^t. In that case, [U] is computed
256: *> in V as right singular vectors of A^t and then
257: *> copied back to the U array. This 'W' option is just
258: *> a reminder to the caller that in this case V is
259: *> reserved as workspace of length N*N.
260: *> If JOBV = 'N' V is not referenced.
261: *> \endverbatim
262: *>
263: *> \param[in] LDV
264: *> \verbatim
265: *> LDV is INTEGER
266: *> The leading dimension of the array V, LDV >= 1.
267: *> If JOBV = 'V' or 'J' or 'W', then LDV >= N.
268: *> \endverbatim
269: *>
270: *> \param[out] WORK
271: *> \verbatim
272: *> WORK is DOUBLE PRECISION array, dimension at least LWORK.
273: *> On exit, if N.GT.0 .AND. M.GT.0 (else not referenced),
274: *> WORK(1) = SCALE = WORK(2) / WORK(1) is the scaling factor such
275: *> that SCALE*SVA(1:N) are the computed singular values
276: *> of A. (See the description of SVA().)
277: *> WORK(2) = See the description of WORK(1).
278: *> WORK(3) = SCONDA is an estimate for the condition number of
279: *> column equilibrated A. (If JOBA .EQ. 'E' or 'G')
280: *> SCONDA is an estimate of DSQRT(||(R^t * R)^(-1)||_1).
281: *> It is computed using DPOCON. It holds
282: *> N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
283: *> where R is the triangular factor from the QRF of A.
284: *> However, if R is truncated and the numerical rank is
285: *> determined to be strictly smaller than N, SCONDA is
286: *> returned as -1, thus indicating that the smallest
287: *> singular values might be lost.
288: *>
289: *> If full SVD is needed, the following two condition numbers are
290: *> useful for the analysis of the algorithm. They are provied for
291: *> a developer/implementer who is familiar with the details of
292: *> the method.
293: *>
294: *> WORK(4) = an estimate of the scaled condition number of the
295: *> triangular factor in the first QR factorization.
296: *> WORK(5) = an estimate of the scaled condition number of the
297: *> triangular factor in the second QR factorization.
298: *> The following two parameters are computed if JOBT .EQ. 'T'.
299: *> They are provided for a developer/implementer who is familiar
300: *> with the details of the method.
301: *>
302: *> WORK(6) = the entropy of A^t*A :: this is the Shannon entropy
303: *> of diag(A^t*A) / Trace(A^t*A) taken as point in the
304: *> probability simplex.
305: *> WORK(7) = the entropy of A*A^t.
306: *> \endverbatim
307: *>
308: *> \param[in] LWORK
309: *> \verbatim
310: *> LWORK is INTEGER
311: *> Length of WORK to confirm proper allocation of work space.
312: *> LWORK depends on the job:
313: *>
314: *> If only SIGMA is needed ( JOBU.EQ.'N', JOBV.EQ.'N' ) and
315: *> -> .. no scaled condition estimate required (JOBE.EQ.'N'):
316: *> LWORK >= max(2*M+N,4*N+1,7). This is the minimal requirement.
317: *> ->> For optimal performance (blocked code) the optimal value
318: *> is LWORK >= max(2*M+N,3*N+(N+1)*NB,7). Here NB is the optimal
319: *> block size for DGEQP3 and DGEQRF.
320: *> In general, optimal LWORK is computed as
321: *> LWORK >= max(2*M+N,N+LWORK(DGEQP3),N+LWORK(DGEQRF), 7).
322: *> -> .. an estimate of the scaled condition number of A is
323: *> required (JOBA='E', 'G'). In this case, LWORK is the maximum
324: *> of the above and N*N+4*N, i.e. LWORK >= max(2*M+N,N*N+4*N,7).
325: *> ->> For optimal performance (blocked code) the optimal value
326: *> is LWORK >= max(2*M+N,3*N+(N+1)*NB, N*N+4*N, 7).
327: *> In general, the optimal length LWORK is computed as
328: *> LWORK >= max(2*M+N,N+LWORK(DGEQP3),N+LWORK(DGEQRF),
329: *> N+N*N+LWORK(DPOCON),7).
330: *>
331: *> If SIGMA and the right singular vectors are needed (JOBV.EQ.'V'),
332: *> -> the minimal requirement is LWORK >= max(2*M+N,4*N+1,7).
333: *> -> For optimal performance, LWORK >= max(2*M+N,3*N+(N+1)*NB,7),
334: *> where NB is the optimal block size for DGEQP3, DGEQRF, DGELQ,
335: *> DORMLQ. In general, the optimal length LWORK is computed as
336: *> LWORK >= max(2*M+N,N+LWORK(DGEQP3), N+LWORK(DPOCON),
337: *> N+LWORK(DGELQ), 2*N+LWORK(DGEQRF), N+LWORK(DORMLQ)).
338: *>
339: *> If SIGMA and the left singular vectors are needed
340: *> -> the minimal requirement is LWORK >= max(2*M+N,4*N+1,7).
341: *> -> For optimal performance:
342: *> if JOBU.EQ.'U' :: LWORK >= max(2*M+N,3*N+(N+1)*NB,7),
343: *> if JOBU.EQ.'F' :: LWORK >= max(2*M+N,3*N+(N+1)*NB,N+M*NB,7),
344: *> where NB is the optimal block size for DGEQP3, DGEQRF, DORMQR.
345: *> In general, the optimal length LWORK is computed as
346: *> LWORK >= max(2*M+N,N+LWORK(DGEQP3),N+LWORK(DPOCON),
347: *> 2*N+LWORK(DGEQRF), N+LWORK(DORMQR)).
348: *> Here LWORK(DORMQR) equals N*NB (for JOBU.EQ.'U') or
349: *> M*NB (for JOBU.EQ.'F').
350: *>
351: *> If the full SVD is needed: (JOBU.EQ.'U' or JOBU.EQ.'F') and
352: *> -> if JOBV.EQ.'V'
353: *> the minimal requirement is LWORK >= max(2*M+N,6*N+2*N*N).
354: *> -> if JOBV.EQ.'J' the minimal requirement is
355: *> LWORK >= max(2*M+N, 4*N+N*N,2*N+N*N+6).
356: *> -> For optimal performance, LWORK should be additionally
357: *> larger than N+M*NB, where NB is the optimal block size
358: *> for DORMQR.
359: *> \endverbatim
360: *>
361: *> \param[out] IWORK
362: *> \verbatim
363: *> IWORK is INTEGER array, dimension M+3*N.
364: *> On exit,
365: *> IWORK(1) = the numerical rank determined after the initial
366: *> QR factorization with pivoting. See the descriptions
367: *> of JOBA and JOBR.
368: *> IWORK(2) = the number of the computed nonzero singular values
369: *> IWORK(3) = if nonzero, a warning message:
370: *> If IWORK(3).EQ.1 then some of the column norms of A
371: *> were denormalized floats. The requested high accuracy
372: *> is not warranted by the data.
373: *> \endverbatim
374: *>
375: *> \param[out] INFO
376: *> \verbatim
377: *> INFO is INTEGER
378: *> < 0 : if INFO = -i, then the i-th argument had an illegal value.
379: *> = 0 : successfull exit;
380: *> > 0 : DGEJSV did not converge in the maximal allowed number
381: *> of sweeps. The computed values may be inaccurate.
382: *> \endverbatim
383: *
384: * Authors:
385: * ========
386: *
387: *> \author Univ. of Tennessee
388: *> \author Univ. of California Berkeley
389: *> \author Univ. of Colorado Denver
390: *> \author NAG Ltd.
391: *
392: *> \date September 2012
393: *
394: *> \ingroup doubleGEsing
395: *
396: *> \par Further Details:
397: * =====================
398: *>
399: *> \verbatim
400: *>
401: *> DGEJSV implements a preconditioned Jacobi SVD algorithm. It uses DGEQP3,
402: *> DGEQRF, and DGELQF as preprocessors and preconditioners. Optionally, an
403: *> additional row pivoting can be used as a preprocessor, which in some
404: *> cases results in much higher accuracy. An example is matrix A with the
405: *> structure A = D1 * C * D2, where D1, D2 are arbitrarily ill-conditioned
406: *> diagonal matrices and C is well-conditioned matrix. In that case, complete
407: *> pivoting in the first QR factorizations provides accuracy dependent on the
408: *> condition number of C, and independent of D1, D2. Such higher accuracy is
409: *> not completely understood theoretically, but it works well in practice.
410: *> Further, if A can be written as A = B*D, with well-conditioned B and some
411: *> diagonal D, then the high accuracy is guaranteed, both theoretically and
412: *> in software, independent of D. For more details see [1], [2].
413: *> The computational range for the singular values can be the full range
414: *> ( UNDERFLOW,OVERFLOW ), provided that the machine arithmetic and the BLAS
415: *> & LAPACK routines called by DGEJSV are implemented to work in that range.
416: *> If that is not the case, then the restriction for safe computation with
417: *> the singular values in the range of normalized IEEE numbers is that the
418: *> spectral condition number kappa(A)=sigma_max(A)/sigma_min(A) does not
419: *> overflow. This code (DGEJSV) is best used in this restricted range,
420: *> meaning that singular values of magnitude below ||A||_2 / DLAMCH('O') are
421: *> returned as zeros. See JOBR for details on this.
422: *> Further, this implementation is somewhat slower than the one described
423: *> in [1,2] due to replacement of some non-LAPACK components, and because
424: *> the choice of some tuning parameters in the iterative part (DGESVJ) is
425: *> left to the implementer on a particular machine.
426: *> The rank revealing QR factorization (in this code: DGEQP3) should be
427: *> implemented as in [3]. We have a new version of DGEQP3 under development
428: *> that is more robust than the current one in LAPACK, with a cleaner cut in
429: *> rank defficient cases. It will be available in the SIGMA library [4].
430: *> If M is much larger than N, it is obvious that the inital QRF with
431: *> column pivoting can be preprocessed by the QRF without pivoting. That
432: *> well known trick is not used in DGEJSV because in some cases heavy row
433: *> weighting can be treated with complete pivoting. The overhead in cases
434: *> M much larger than N is then only due to pivoting, but the benefits in
435: *> terms of accuracy have prevailed. The implementer/user can incorporate
436: *> this extra QRF step easily. The implementer can also improve data movement
437: *> (matrix transpose, matrix copy, matrix transposed copy) - this
438: *> implementation of DGEJSV uses only the simplest, naive data movement.
439: *> \endverbatim
440: *
441: *> \par Contributors:
442: * ==================
443: *>
444: *> Zlatko Drmac (Zagreb, Croatia) and Kresimir Veselic (Hagen, Germany)
445: *
446: *> \par References:
447: * ================
448: *>
449: *> \verbatim
450: *>
451: *> [1] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm I.
452: *> SIAM J. Matrix Anal. Appl. Vol. 35, No. 2 (2008), pp. 1322-1342.
453: *> LAPACK Working note 169.
454: *> [2] Z. Drmac and K. Veselic: New fast and accurate Jacobi SVD algorithm II.
455: *> SIAM J. Matrix Anal. Appl. Vol. 35, No. 2 (2008), pp. 1343-1362.
456: *> LAPACK Working note 170.
457: *> [3] Z. Drmac and Z. Bujanovic: On the failure of rank-revealing QR
458: *> factorization software - a case study.
459: *> ACM Trans. Math. Softw. Vol. 35, No 2 (2008), pp. 1-28.
460: *> LAPACK Working note 176.
461: *> [4] Z. Drmac: SIGMA - mathematical software library for accurate SVD, PSV,
462: *> QSVD, (H,K)-SVD computations.
463: *> Department of Mathematics, University of Zagreb, 2008.
464: *> \endverbatim
465: *
466: *> \par Bugs, examples and comments:
467: * =================================
468: *>
469: *> Please report all bugs and send interesting examples and/or comments to
470: *> drmac@math.hr. Thank you.
471: *>
472: * =====================================================================
473: SUBROUTINE DGEJSV( JOBA, JOBU, JOBV, JOBR, JOBT, JOBP,
474: $ M, N, A, LDA, SVA, U, LDU, V, LDV,
475: $ WORK, LWORK, IWORK, INFO )
476: *
477: * -- LAPACK computational routine (version 3.4.2) --
478: * -- LAPACK is a software package provided by Univ. of Tennessee, --
479: * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..--
480: * September 2012
481: *
482: * .. Scalar Arguments ..
483: IMPLICIT NONE
484: INTEGER INFO, LDA, LDU, LDV, LWORK, M, N
485: * ..
486: * .. Array Arguments ..
487: DOUBLE PRECISION A( LDA, * ), SVA( N ), U( LDU, * ), V( LDV, * ),
488: $ WORK( LWORK )
489: INTEGER IWORK( * )
490: CHARACTER*1 JOBA, JOBP, JOBR, JOBT, JOBU, JOBV
491: * ..
492: *
493: * ===========================================================================
494: *
495: * .. Local Parameters ..
496: DOUBLE PRECISION ZERO, ONE
497: PARAMETER ( ZERO = 0.0D0, ONE = 1.0D0 )
498: * ..
499: * .. Local Scalars ..
500: DOUBLE PRECISION AAPP, AAQQ, AATMAX, AATMIN, BIG, BIG1, COND_OK,
501: $ CONDR1, CONDR2, ENTRA, ENTRAT, EPSLN, MAXPRJ, SCALEM,
502: $ SCONDA, SFMIN, SMALL, TEMP1, USCAL1, USCAL2, XSC
503: INTEGER IERR, N1, NR, NUMRANK, p, q, WARNING
504: LOGICAL ALMORT, DEFR, ERREST, GOSCAL, JRACC, KILL, LSVEC,
505: $ L2ABER, L2KILL, L2PERT, L2RANK, L2TRAN,
506: $ NOSCAL, ROWPIV, RSVEC, TRANSP
507: * ..
508: * .. Intrinsic Functions ..
509: INTRINSIC DABS, DLOG, DMAX1, DMIN1, DBLE,
510: $ MAX0, MIN0, IDNINT, DSIGN, DSQRT
511: * ..
512: * .. External Functions ..
513: DOUBLE PRECISION DLAMCH, DNRM2
514: INTEGER IDAMAX
515: LOGICAL LSAME
516: EXTERNAL IDAMAX, LSAME, DLAMCH, DNRM2
517: * ..
518: * .. External Subroutines ..
519: EXTERNAL DCOPY, DGELQF, DGEQP3, DGEQRF, DLACPY, DLASCL,
520: $ DLASET, DLASSQ, DLASWP, DORGQR, DORMLQ,
521: $ DORMQR, DPOCON, DSCAL, DSWAP, DTRSM, XERBLA
522: *
523: EXTERNAL DGESVJ
524: * ..
525: *
526: * Test the input arguments
527: *
528: LSVEC = LSAME( JOBU, 'U' ) .OR. LSAME( JOBU, 'F' )
529: JRACC = LSAME( JOBV, 'J' )
530: RSVEC = LSAME( JOBV, 'V' ) .OR. JRACC
531: ROWPIV = LSAME( JOBA, 'F' ) .OR. LSAME( JOBA, 'G' )
532: L2RANK = LSAME( JOBA, 'R' )
533: L2ABER = LSAME( JOBA, 'A' )
534: ERREST = LSAME( JOBA, 'E' ) .OR. LSAME( JOBA, 'G' )
535: L2TRAN = LSAME( JOBT, 'T' )
536: L2KILL = LSAME( JOBR, 'R' )
537: DEFR = LSAME( JOBR, 'N' )
538: L2PERT = LSAME( JOBP, 'P' )
539: *
540: IF ( .NOT.(ROWPIV .OR. L2RANK .OR. L2ABER .OR.
541: $ ERREST .OR. LSAME( JOBA, 'C' ) )) THEN
542: INFO = - 1
543: ELSE IF ( .NOT.( LSVEC .OR. LSAME( JOBU, 'N' ) .OR.
544: $ LSAME( JOBU, 'W' )) ) THEN
545: INFO = - 2
546: ELSE IF ( .NOT.( RSVEC .OR. LSAME( JOBV, 'N' ) .OR.
547: $ LSAME( JOBV, 'W' )) .OR. ( JRACC .AND. (.NOT.LSVEC) ) ) THEN
548: INFO = - 3
549: ELSE IF ( .NOT. ( L2KILL .OR. DEFR ) ) THEN
550: INFO = - 4
551: ELSE IF ( .NOT. ( L2TRAN .OR. LSAME( JOBT, 'N' ) ) ) THEN
552: INFO = - 5
553: ELSE IF ( .NOT. ( L2PERT .OR. LSAME( JOBP, 'N' ) ) ) THEN
554: INFO = - 6
555: ELSE IF ( M .LT. 0 ) THEN
556: INFO = - 7
557: ELSE IF ( ( N .LT. 0 ) .OR. ( N .GT. M ) ) THEN
558: INFO = - 8
559: ELSE IF ( LDA .LT. M ) THEN
560: INFO = - 10
561: ELSE IF ( LSVEC .AND. ( LDU .LT. M ) ) THEN
562: INFO = - 13
563: ELSE IF ( RSVEC .AND. ( LDV .LT. N ) ) THEN
564: INFO = - 14
565: ELSE IF ( (.NOT.(LSVEC .OR. RSVEC .OR. ERREST).AND.
566: & (LWORK .LT. MAX0(7,4*N+1,2*M+N))) .OR.
567: & (.NOT.(LSVEC .OR. RSVEC) .AND. ERREST .AND.
568: & (LWORK .LT. MAX0(7,4*N+N*N,2*M+N))) .OR.
569: & (LSVEC .AND. (.NOT.RSVEC) .AND. (LWORK .LT. MAX0(7,2*M+N,4*N+1)))
570: & .OR.
571: & (RSVEC .AND. (.NOT.LSVEC) .AND. (LWORK .LT. MAX0(7,2*M+N,4*N+1)))
572: & .OR.
573: & (LSVEC .AND. RSVEC .AND. (.NOT.JRACC) .AND.
574: & (LWORK.LT.MAX0(2*M+N,6*N+2*N*N)))
575: & .OR. (LSVEC .AND. RSVEC .AND. JRACC .AND.
576: & LWORK.LT.MAX0(2*M+N,4*N+N*N,2*N+N*N+6)))
577: & THEN
578: INFO = - 17
579: ELSE
580: * #:)
581: INFO = 0
582: END IF
583: *
584: IF ( INFO .NE. 0 ) THEN
585: * #:(
586: CALL XERBLA( 'DGEJSV', - INFO )
587: RETURN
588: END IF
589: *
590: * Quick return for void matrix (Y3K safe)
591: * #:)
592: IF ( ( M .EQ. 0 ) .OR. ( N .EQ. 0 ) ) RETURN
593: *
594: * Determine whether the matrix U should be M x N or M x M
595: *
596: IF ( LSVEC ) THEN
597: N1 = N
598: IF ( LSAME( JOBU, 'F' ) ) N1 = M
599: END IF
600: *
601: * Set numerical parameters
602: *
603: *! NOTE: Make sure DLAMCH() does not fail on the target architecture.
604: *
605: EPSLN = DLAMCH('Epsilon')
606: SFMIN = DLAMCH('SafeMinimum')
607: SMALL = SFMIN / EPSLN
608: BIG = DLAMCH('O')
609: * BIG = ONE / SFMIN
610: *
611: * Initialize SVA(1:N) = diag( ||A e_i||_2 )_1^N
612: *
613: *(!) If necessary, scale SVA() to protect the largest norm from
614: * overflow. It is possible that this scaling pushes the smallest
615: * column norm left from the underflow threshold (extreme case).
616: *
617: SCALEM = ONE / DSQRT(DBLE(M)*DBLE(N))
618: NOSCAL = .TRUE.
619: GOSCAL = .TRUE.
620: DO 1874 p = 1, N
621: AAPP = ZERO
622: AAQQ = ONE
623: CALL DLASSQ( M, A(1,p), 1, AAPP, AAQQ )
624: IF ( AAPP .GT. BIG ) THEN
625: INFO = - 9
626: CALL XERBLA( 'DGEJSV', -INFO )
627: RETURN
628: END IF
629: AAQQ = DSQRT(AAQQ)
630: IF ( ( AAPP .LT. (BIG / AAQQ) ) .AND. NOSCAL ) THEN
631: SVA(p) = AAPP * AAQQ
632: ELSE
633: NOSCAL = .FALSE.
634: SVA(p) = AAPP * ( AAQQ * SCALEM )
635: IF ( GOSCAL ) THEN
636: GOSCAL = .FALSE.
637: CALL DSCAL( p-1, SCALEM, SVA, 1 )
638: END IF
639: END IF
640: 1874 CONTINUE
641: *
642: IF ( NOSCAL ) SCALEM = ONE
643: *
644: AAPP = ZERO
645: AAQQ = BIG
646: DO 4781 p = 1, N
647: AAPP = DMAX1( AAPP, SVA(p) )
648: IF ( SVA(p) .NE. ZERO ) AAQQ = DMIN1( AAQQ, SVA(p) )
649: 4781 CONTINUE
650: *
651: * Quick return for zero M x N matrix
652: * #:)
653: IF ( AAPP .EQ. ZERO ) THEN
654: IF ( LSVEC ) CALL DLASET( 'G', M, N1, ZERO, ONE, U, LDU )
655: IF ( RSVEC ) CALL DLASET( 'G', N, N, ZERO, ONE, V, LDV )
656: WORK(1) = ONE
657: WORK(2) = ONE
658: IF ( ERREST ) WORK(3) = ONE
659: IF ( LSVEC .AND. RSVEC ) THEN
660: WORK(4) = ONE
661: WORK(5) = ONE
662: END IF
663: IF ( L2TRAN ) THEN
664: WORK(6) = ZERO
665: WORK(7) = ZERO
666: END IF
667: IWORK(1) = 0
668: IWORK(2) = 0
669: IWORK(3) = 0
670: RETURN
671: END IF
672: *
673: * Issue warning if denormalized column norms detected. Override the
674: * high relative accuracy request. Issue licence to kill columns
675: * (set them to zero) whose norm is less than sigma_max / BIG (roughly).
676: * #:(
677: WARNING = 0
678: IF ( AAQQ .LE. SFMIN ) THEN
679: L2RANK = .TRUE.
680: L2KILL = .TRUE.
681: WARNING = 1
682: END IF
683: *
684: * Quick return for one-column matrix
685: * #:)
686: IF ( N .EQ. 1 ) THEN
687: *
688: IF ( LSVEC ) THEN
689: CALL DLASCL( 'G',0,0,SVA(1),SCALEM, M,1,A(1,1),LDA,IERR )
690: CALL DLACPY( 'A', M, 1, A, LDA, U, LDU )
691: * computing all M left singular vectors of the M x 1 matrix
692: IF ( N1 .NE. N ) THEN
693: CALL DGEQRF( M, N, U,LDU, WORK, WORK(N+1),LWORK-N,IERR )
694: CALL DORGQR( M,N1,1, U,LDU,WORK,WORK(N+1),LWORK-N,IERR )
695: CALL DCOPY( M, A(1,1), 1, U(1,1), 1 )
696: END IF
697: END IF
698: IF ( RSVEC ) THEN
699: V(1,1) = ONE
700: END IF
701: IF ( SVA(1) .LT. (BIG*SCALEM) ) THEN
702: SVA(1) = SVA(1) / SCALEM
703: SCALEM = ONE
704: END IF
705: WORK(1) = ONE / SCALEM
706: WORK(2) = ONE
707: IF ( SVA(1) .NE. ZERO ) THEN
708: IWORK(1) = 1
709: IF ( ( SVA(1) / SCALEM) .GE. SFMIN ) THEN
710: IWORK(2) = 1
711: ELSE
712: IWORK(2) = 0
713: END IF
714: ELSE
715: IWORK(1) = 0
716: IWORK(2) = 0
717: END IF
718: IF ( ERREST ) WORK(3) = ONE
719: IF ( LSVEC .AND. RSVEC ) THEN
720: WORK(4) = ONE
721: WORK(5) = ONE
722: END IF
723: IF ( L2TRAN ) THEN
724: WORK(6) = ZERO
725: WORK(7) = ZERO
726: END IF
727: RETURN
728: *
729: END IF
730: *
731: TRANSP = .FALSE.
732: L2TRAN = L2TRAN .AND. ( M .EQ. N )
733: *
734: AATMAX = -ONE
735: AATMIN = BIG
736: IF ( ROWPIV .OR. L2TRAN ) THEN
737: *
738: * Compute the row norms, needed to determine row pivoting sequence
739: * (in the case of heavily row weighted A, row pivoting is strongly
740: * advised) and to collect information needed to compare the
741: * structures of A * A^t and A^t * A (in the case L2TRAN.EQ..TRUE.).
742: *
743: IF ( L2TRAN ) THEN
744: DO 1950 p = 1, M
745: XSC = ZERO
746: TEMP1 = ONE
747: CALL DLASSQ( N, A(p,1), LDA, XSC, TEMP1 )
748: * DLASSQ gets both the ell_2 and the ell_infinity norm
749: * in one pass through the vector
750: WORK(M+N+p) = XSC * SCALEM
751: WORK(N+p) = XSC * (SCALEM*DSQRT(TEMP1))
752: AATMAX = DMAX1( AATMAX, WORK(N+p) )
753: IF (WORK(N+p) .NE. ZERO) AATMIN = DMIN1(AATMIN,WORK(N+p))
754: 1950 CONTINUE
755: ELSE
756: DO 1904 p = 1, M
757: WORK(M+N+p) = SCALEM*DABS( A(p,IDAMAX(N,A(p,1),LDA)) )
758: AATMAX = DMAX1( AATMAX, WORK(M+N+p) )
759: AATMIN = DMIN1( AATMIN, WORK(M+N+p) )
760: 1904 CONTINUE
761: END IF
762: *
763: END IF
764: *
765: * For square matrix A try to determine whether A^t would be better
766: * input for the preconditioned Jacobi SVD, with faster convergence.
767: * The decision is based on an O(N) function of the vector of column
768: * and row norms of A, based on the Shannon entropy. This should give
769: * the right choice in most cases when the difference actually matters.
770: * It may fail and pick the slower converging side.
771: *
772: ENTRA = ZERO
773: ENTRAT = ZERO
774: IF ( L2TRAN ) THEN
775: *
776: XSC = ZERO
777: TEMP1 = ONE
778: CALL DLASSQ( N, SVA, 1, XSC, TEMP1 )
779: TEMP1 = ONE / TEMP1
780: *
781: ENTRA = ZERO
782: DO 1113 p = 1, N
783: BIG1 = ( ( SVA(p) / XSC )**2 ) * TEMP1
784: IF ( BIG1 .NE. ZERO ) ENTRA = ENTRA + BIG1 * DLOG(BIG1)
785: 1113 CONTINUE
786: ENTRA = - ENTRA / DLOG(DBLE(N))
787: *
788: * Now, SVA().^2/Trace(A^t * A) is a point in the probability simplex.
789: * It is derived from the diagonal of A^t * A. Do the same with the
790: * diagonal of A * A^t, compute the entropy of the corresponding
791: * probability distribution. Note that A * A^t and A^t * A have the
792: * same trace.
793: *
794: ENTRAT = ZERO
795: DO 1114 p = N+1, N+M
796: BIG1 = ( ( WORK(p) / XSC )**2 ) * TEMP1
797: IF ( BIG1 .NE. ZERO ) ENTRAT = ENTRAT + BIG1 * DLOG(BIG1)
798: 1114 CONTINUE
799: ENTRAT = - ENTRAT / DLOG(DBLE(M))
800: *
801: * Analyze the entropies and decide A or A^t. Smaller entropy
802: * usually means better input for the algorithm.
803: *
804: TRANSP = ( ENTRAT .LT. ENTRA )
805: *
806: * If A^t is better than A, transpose A.
807: *
808: IF ( TRANSP ) THEN
809: * In an optimal implementation, this trivial transpose
810: * should be replaced with faster transpose.
811: DO 1115 p = 1, N - 1
812: DO 1116 q = p + 1, N
813: TEMP1 = A(q,p)
814: A(q,p) = A(p,q)
815: A(p,q) = TEMP1
816: 1116 CONTINUE
817: 1115 CONTINUE
818: DO 1117 p = 1, N
819: WORK(M+N+p) = SVA(p)
820: SVA(p) = WORK(N+p)
821: 1117 CONTINUE
822: TEMP1 = AAPP
823: AAPP = AATMAX
824: AATMAX = TEMP1
825: TEMP1 = AAQQ
826: AAQQ = AATMIN
827: AATMIN = TEMP1
828: KILL = LSVEC
829: LSVEC = RSVEC
830: RSVEC = KILL
831: IF ( LSVEC ) N1 = N
832: *
833: ROWPIV = .TRUE.
834: END IF
835: *
836: END IF
837: * END IF L2TRAN
838: *
839: * Scale the matrix so that its maximal singular value remains less
840: * than DSQRT(BIG) -- the matrix is scaled so that its maximal column
841: * has Euclidean norm equal to DSQRT(BIG/N). The only reason to keep
842: * DSQRT(BIG) instead of BIG is the fact that DGEJSV uses LAPACK and
843: * BLAS routines that, in some implementations, are not capable of
844: * working in the full interval [SFMIN,BIG] and that they may provoke
845: * overflows in the intermediate results. If the singular values spread
846: * from SFMIN to BIG, then DGESVJ will compute them. So, in that case,
847: * one should use DGESVJ instead of DGEJSV.
848: *
849: BIG1 = DSQRT( BIG )
850: TEMP1 = DSQRT( BIG / DBLE(N) )
851: *
852: CALL DLASCL( 'G', 0, 0, AAPP, TEMP1, N, 1, SVA, N, IERR )
853: IF ( AAQQ .GT. (AAPP * SFMIN) ) THEN
854: AAQQ = ( AAQQ / AAPP ) * TEMP1
855: ELSE
856: AAQQ = ( AAQQ * TEMP1 ) / AAPP
857: END IF
858: TEMP1 = TEMP1 * SCALEM
859: CALL DLASCL( 'G', 0, 0, AAPP, TEMP1, M, N, A, LDA, IERR )
860: *
861: * To undo scaling at the end of this procedure, multiply the
862: * computed singular values with USCAL2 / USCAL1.
863: *
864: USCAL1 = TEMP1
865: USCAL2 = AAPP
866: *
867: IF ( L2KILL ) THEN
868: * L2KILL enforces computation of nonzero singular values in
869: * the restricted range of condition number of the initial A,
870: * sigma_max(A) / sigma_min(A) approx. DSQRT(BIG)/DSQRT(SFMIN).
871: XSC = DSQRT( SFMIN )
872: ELSE
873: XSC = SMALL
874: *
875: * Now, if the condition number of A is too big,
876: * sigma_max(A) / sigma_min(A) .GT. DSQRT(BIG/N) * EPSLN / SFMIN,
877: * as a precaution measure, the full SVD is computed using DGESVJ
878: * with accumulated Jacobi rotations. This provides numerically
879: * more robust computation, at the cost of slightly increased run
880: * time. Depending on the concrete implementation of BLAS and LAPACK
881: * (i.e. how they behave in presence of extreme ill-conditioning) the
882: * implementor may decide to remove this switch.
883: IF ( ( AAQQ.LT.DSQRT(SFMIN) ) .AND. LSVEC .AND. RSVEC ) THEN
884: JRACC = .TRUE.
885: END IF
886: *
887: END IF
888: IF ( AAQQ .LT. XSC ) THEN
889: DO 700 p = 1, N
890: IF ( SVA(p) .LT. XSC ) THEN
891: CALL DLASET( 'A', M, 1, ZERO, ZERO, A(1,p), LDA )
892: SVA(p) = ZERO
893: END IF
894: 700 CONTINUE
895: END IF
896: *
897: * Preconditioning using QR factorization with pivoting
898: *
899: IF ( ROWPIV ) THEN
900: * Optional row permutation (Bjoerck row pivoting):
901: * A result by Cox and Higham shows that the Bjoerck's
902: * row pivoting combined with standard column pivoting
903: * has similar effect as Powell-Reid complete pivoting.
904: * The ell-infinity norms of A are made nonincreasing.
905: DO 1952 p = 1, M - 1
906: q = IDAMAX( M-p+1, WORK(M+N+p), 1 ) + p - 1
907: IWORK(2*N+p) = q
908: IF ( p .NE. q ) THEN
909: TEMP1 = WORK(M+N+p)
910: WORK(M+N+p) = WORK(M+N+q)
911: WORK(M+N+q) = TEMP1
912: END IF
913: 1952 CONTINUE
914: CALL DLASWP( N, A, LDA, 1, M-1, IWORK(2*N+1), 1 )
915: END IF
916: *
917: * End of the preparation phase (scaling, optional sorting and
918: * transposing, optional flushing of small columns).
919: *
920: * Preconditioning
921: *
922: * If the full SVD is needed, the right singular vectors are computed
923: * from a matrix equation, and for that we need theoretical analysis
924: * of the Businger-Golub pivoting. So we use DGEQP3 as the first RR QRF.
925: * In all other cases the first RR QRF can be chosen by other criteria
926: * (eg speed by replacing global with restricted window pivoting, such
927: * as in SGEQPX from TOMS # 782). Good results will be obtained using
928: * SGEQPX with properly (!) chosen numerical parameters.
929: * Any improvement of DGEQP3 improves overal performance of DGEJSV.
930: *
931: * A * P1 = Q1 * [ R1^t 0]^t:
932: DO 1963 p = 1, N
933: * .. all columns are free columns
934: IWORK(p) = 0
935: 1963 CONTINUE
936: CALL DGEQP3( M,N,A,LDA, IWORK,WORK, WORK(N+1),LWORK-N, IERR )
937: *
938: * The upper triangular matrix R1 from the first QRF is inspected for
939: * rank deficiency and possibilities for deflation, or possible
940: * ill-conditioning. Depending on the user specified flag L2RANK,
941: * the procedure explores possibilities to reduce the numerical
942: * rank by inspecting the computed upper triangular factor. If
943: * L2RANK or L2ABER are up, then DGEJSV will compute the SVD of
944: * A + dA, where ||dA|| <= f(M,N)*EPSLN.
945: *
946: NR = 1
947: IF ( L2ABER ) THEN
948: * Standard absolute error bound suffices. All sigma_i with
949: * sigma_i < N*EPSLN*||A|| are flushed to zero. This is an
950: * agressive enforcement of lower numerical rank by introducing a
951: * backward error of the order of N*EPSLN*||A||.
952: TEMP1 = DSQRT(DBLE(N))*EPSLN
953: DO 3001 p = 2, N
954: IF ( DABS(A(p,p)) .GE. (TEMP1*DABS(A(1,1))) ) THEN
955: NR = NR + 1
956: ELSE
957: GO TO 3002
958: END IF
959: 3001 CONTINUE
960: 3002 CONTINUE
961: ELSE IF ( L2RANK ) THEN
962: * .. similarly as above, only slightly more gentle (less agressive).
963: * Sudden drop on the diagonal of R1 is used as the criterion for
964: * close-to-rank-defficient.
965: TEMP1 = DSQRT(SFMIN)
966: DO 3401 p = 2, N
967: IF ( ( DABS(A(p,p)) .LT. (EPSLN*DABS(A(p-1,p-1))) ) .OR.
968: $ ( DABS(A(p,p)) .LT. SMALL ) .OR.
969: $ ( L2KILL .AND. (DABS(A(p,p)) .LT. TEMP1) ) ) GO TO 3402
970: NR = NR + 1
971: 3401 CONTINUE
972: 3402 CONTINUE
973: *
974: ELSE
975: * The goal is high relative accuracy. However, if the matrix
976: * has high scaled condition number the relative accuracy is in
977: * general not feasible. Later on, a condition number estimator
978: * will be deployed to estimate the scaled condition number.
979: * Here we just remove the underflowed part of the triangular
980: * factor. This prevents the situation in which the code is
981: * working hard to get the accuracy not warranted by the data.
982: TEMP1 = DSQRT(SFMIN)
983: DO 3301 p = 2, N
984: IF ( ( DABS(A(p,p)) .LT. SMALL ) .OR.
985: $ ( L2KILL .AND. (DABS(A(p,p)) .LT. TEMP1) ) ) GO TO 3302
986: NR = NR + 1
987: 3301 CONTINUE
988: 3302 CONTINUE
989: *
990: END IF
991: *
992: ALMORT = .FALSE.
993: IF ( NR .EQ. N ) THEN
994: MAXPRJ = ONE
995: DO 3051 p = 2, N
996: TEMP1 = DABS(A(p,p)) / SVA(IWORK(p))
997: MAXPRJ = DMIN1( MAXPRJ, TEMP1 )
998: 3051 CONTINUE
999: IF ( MAXPRJ**2 .GE. ONE - DBLE(N)*EPSLN ) ALMORT = .TRUE.
1000: END IF
1001: *
1002: *
1003: SCONDA = - ONE
1004: CONDR1 = - ONE
1005: CONDR2 = - ONE
1006: *
1007: IF ( ERREST ) THEN
1008: IF ( N .EQ. NR ) THEN
1009: IF ( RSVEC ) THEN
1010: * .. V is available as workspace
1011: CALL DLACPY( 'U', N, N, A, LDA, V, LDV )
1012: DO 3053 p = 1, N
1013: TEMP1 = SVA(IWORK(p))
1014: CALL DSCAL( p, ONE/TEMP1, V(1,p), 1 )
1015: 3053 CONTINUE
1016: CALL DPOCON( 'U', N, V, LDV, ONE, TEMP1,
1017: $ WORK(N+1), IWORK(2*N+M+1), IERR )
1018: ELSE IF ( LSVEC ) THEN
1019: * .. U is available as workspace
1020: CALL DLACPY( 'U', N, N, A, LDA, U, LDU )
1021: DO 3054 p = 1, N
1022: TEMP1 = SVA(IWORK(p))
1023: CALL DSCAL( p, ONE/TEMP1, U(1,p), 1 )
1024: 3054 CONTINUE
1025: CALL DPOCON( 'U', N, U, LDU, ONE, TEMP1,
1026: $ WORK(N+1), IWORK(2*N+M+1), IERR )
1027: ELSE
1028: CALL DLACPY( 'U', N, N, A, LDA, WORK(N+1), N )
1029: DO 3052 p = 1, N
1030: TEMP1 = SVA(IWORK(p))
1031: CALL DSCAL( p, ONE/TEMP1, WORK(N+(p-1)*N+1), 1 )
1032: 3052 CONTINUE
1033: * .. the columns of R are scaled to have unit Euclidean lengths.
1034: CALL DPOCON( 'U', N, WORK(N+1), N, ONE, TEMP1,
1035: $ WORK(N+N*N+1), IWORK(2*N+M+1), IERR )
1036: END IF
1037: SCONDA = ONE / DSQRT(TEMP1)
1038: * SCONDA is an estimate of DSQRT(||(R^t * R)^(-1)||_1).
1039: * N^(-1/4) * SCONDA <= ||R^(-1)||_2 <= N^(1/4) * SCONDA
1040: ELSE
1041: SCONDA = - ONE
1042: END IF
1043: END IF
1044: *
1045: L2PERT = L2PERT .AND. ( DABS( A(1,1)/A(NR,NR) ) .GT. DSQRT(BIG1) )
1046: * If there is no violent scaling, artificial perturbation is not needed.
1047: *
1048: * Phase 3:
1049: *
1050: IF ( .NOT. ( RSVEC .OR. LSVEC ) ) THEN
1051: *
1052: * Singular Values only
1053: *
1054: * .. transpose A(1:NR,1:N)
1055: DO 1946 p = 1, MIN0( N-1, NR )
1056: CALL DCOPY( N-p, A(p,p+1), LDA, A(p+1,p), 1 )
1057: 1946 CONTINUE
1058: *
1059: * The following two DO-loops introduce small relative perturbation
1060: * into the strict upper triangle of the lower triangular matrix.
1061: * Small entries below the main diagonal are also changed.
1062: * This modification is useful if the computing environment does not
1063: * provide/allow FLUSH TO ZERO underflow, for it prevents many
1064: * annoying denormalized numbers in case of strongly scaled matrices.
1065: * The perturbation is structured so that it does not introduce any
1066: * new perturbation of the singular values, and it does not destroy
1067: * the job done by the preconditioner.
1068: * The licence for this perturbation is in the variable L2PERT, which
1069: * should be .FALSE. if FLUSH TO ZERO underflow is active.
1070: *
1071: IF ( .NOT. ALMORT ) THEN
1072: *
1073: IF ( L2PERT ) THEN
1074: * XSC = DSQRT(SMALL)
1075: XSC = EPSLN / DBLE(N)
1076: DO 4947 q = 1, NR
1077: TEMP1 = XSC*DABS(A(q,q))
1078: DO 4949 p = 1, N
1079: IF ( ( (p.GT.q) .AND. (DABS(A(p,q)).LE.TEMP1) )
1080: $ .OR. ( p .LT. q ) )
1081: $ A(p,q) = DSIGN( TEMP1, A(p,q) )
1082: 4949 CONTINUE
1083: 4947 CONTINUE
1084: ELSE
1085: CALL DLASET( 'U', NR-1,NR-1, ZERO,ZERO, A(1,2),LDA )
1086: END IF
1087: *
1088: * .. second preconditioning using the QR factorization
1089: *
1090: CALL DGEQRF( N,NR, A,LDA, WORK, WORK(N+1),LWORK-N, IERR )
1091: *
1092: * .. and transpose upper to lower triangular
1093: DO 1948 p = 1, NR - 1
1094: CALL DCOPY( NR-p, A(p,p+1), LDA, A(p+1,p), 1 )
1095: 1948 CONTINUE
1096: *
1097: END IF
1098: *
1099: * Row-cyclic Jacobi SVD algorithm with column pivoting
1100: *
1101: * .. again some perturbation (a "background noise") is added
1102: * to drown denormals
1103: IF ( L2PERT ) THEN
1104: * XSC = DSQRT(SMALL)
1105: XSC = EPSLN / DBLE(N)
1106: DO 1947 q = 1, NR
1107: TEMP1 = XSC*DABS(A(q,q))
1108: DO 1949 p = 1, NR
1109: IF ( ( (p.GT.q) .AND. (DABS(A(p,q)).LE.TEMP1) )
1110: $ .OR. ( p .LT. q ) )
1111: $ A(p,q) = DSIGN( TEMP1, A(p,q) )
1112: 1949 CONTINUE
1113: 1947 CONTINUE
1114: ELSE
1115: CALL DLASET( 'U', NR-1, NR-1, ZERO, ZERO, A(1,2), LDA )
1116: END IF
1117: *
1118: * .. and one-sided Jacobi rotations are started on a lower
1119: * triangular matrix (plus perturbation which is ignored in
1120: * the part which destroys triangular form (confusing?!))
1121: *
1122: CALL DGESVJ( 'L', 'NoU', 'NoV', NR, NR, A, LDA, SVA,
1123: $ N, V, LDV, WORK, LWORK, INFO )
1124: *
1125: SCALEM = WORK(1)
1126: NUMRANK = IDNINT(WORK(2))
1127: *
1128: *
1129: ELSE IF ( RSVEC .AND. ( .NOT. LSVEC ) ) THEN
1130: *
1131: * -> Singular Values and Right Singular Vectors <-
1132: *
1133: IF ( ALMORT ) THEN
1134: *
1135: * .. in this case NR equals N
1136: DO 1998 p = 1, NR
1137: CALL DCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1138: 1998 CONTINUE
1139: CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1140: *
1141: CALL DGESVJ( 'L','U','N', N, NR, V,LDV, SVA, NR, A,LDA,
1142: $ WORK, LWORK, INFO )
1143: SCALEM = WORK(1)
1144: NUMRANK = IDNINT(WORK(2))
1145:
1146: ELSE
1147: *
1148: * .. two more QR factorizations ( one QRF is not enough, two require
1149: * accumulated product of Jacobi rotations, three are perfect )
1150: *
1151: CALL DLASET( 'Lower', NR-1, NR-1, ZERO, ZERO, A(2,1), LDA )
1152: CALL DGELQF( NR, N, A, LDA, WORK, WORK(N+1), LWORK-N, IERR)
1153: CALL DLACPY( 'Lower', NR, NR, A, LDA, V, LDV )
1154: CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1155: CALL DGEQRF( NR, NR, V, LDV, WORK(N+1), WORK(2*N+1),
1156: $ LWORK-2*N, IERR )
1157: DO 8998 p = 1, NR
1158: CALL DCOPY( NR-p+1, V(p,p), LDV, V(p,p), 1 )
1159: 8998 CONTINUE
1160: CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1161: *
1162: CALL DGESVJ( 'Lower', 'U','N', NR, NR, V,LDV, SVA, NR, U,
1163: $ LDU, WORK(N+1), LWORK, INFO )
1164: SCALEM = WORK(N+1)
1165: NUMRANK = IDNINT(WORK(N+2))
1166: IF ( NR .LT. N ) THEN
1167: CALL DLASET( 'A',N-NR, NR, ZERO,ZERO, V(NR+1,1), LDV )
1168: CALL DLASET( 'A',NR, N-NR, ZERO,ZERO, V(1,NR+1), LDV )
1169: CALL DLASET( 'A',N-NR,N-NR,ZERO,ONE, V(NR+1,NR+1), LDV )
1170: END IF
1171: *
1172: CALL DORMLQ( 'Left', 'Transpose', N, N, NR, A, LDA, WORK,
1173: $ V, LDV, WORK(N+1), LWORK-N, IERR )
1174: *
1175: END IF
1176: *
1177: DO 8991 p = 1, N
1178: CALL DCOPY( N, V(p,1), LDV, A(IWORK(p),1), LDA )
1179: 8991 CONTINUE
1180: CALL DLACPY( 'All', N, N, A, LDA, V, LDV )
1181: *
1182: IF ( TRANSP ) THEN
1183: CALL DLACPY( 'All', N, N, V, LDV, U, LDU )
1184: END IF
1185: *
1186: ELSE IF ( LSVEC .AND. ( .NOT. RSVEC ) ) THEN
1187: *
1188: * .. Singular Values and Left Singular Vectors ..
1189: *
1190: * .. second preconditioning step to avoid need to accumulate
1191: * Jacobi rotations in the Jacobi iterations.
1192: DO 1965 p = 1, NR
1193: CALL DCOPY( N-p+1, A(p,p), LDA, U(p,p), 1 )
1194: 1965 CONTINUE
1195: CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
1196: *
1197: CALL DGEQRF( N, NR, U, LDU, WORK(N+1), WORK(2*N+1),
1198: $ LWORK-2*N, IERR )
1199: *
1200: DO 1967 p = 1, NR - 1
1201: CALL DCOPY( NR-p, U(p,p+1), LDU, U(p+1,p), 1 )
1202: 1967 CONTINUE
1203: CALL DLASET( 'Upper', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
1204: *
1205: CALL DGESVJ( 'Lower', 'U', 'N', NR,NR, U, LDU, SVA, NR, A,
1206: $ LDA, WORK(N+1), LWORK-N, INFO )
1207: SCALEM = WORK(N+1)
1208: NUMRANK = IDNINT(WORK(N+2))
1209: *
1210: IF ( NR .LT. M ) THEN
1211: CALL DLASET( 'A', M-NR, NR,ZERO, ZERO, U(NR+1,1), LDU )
1212: IF ( NR .LT. N1 ) THEN
1213: CALL DLASET( 'A',NR, N1-NR, ZERO, ZERO, U(1,NR+1), LDU )
1214: CALL DLASET( 'A',M-NR,N1-NR,ZERO,ONE,U(NR+1,NR+1), LDU )
1215: END IF
1216: END IF
1217: *
1218: CALL DORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
1219: $ LDU, WORK(N+1), LWORK-N, IERR )
1220: *
1221: IF ( ROWPIV )
1222: $ CALL DLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
1223: *
1224: DO 1974 p = 1, N1
1225: XSC = ONE / DNRM2( M, U(1,p), 1 )
1226: CALL DSCAL( M, XSC, U(1,p), 1 )
1227: 1974 CONTINUE
1228: *
1229: IF ( TRANSP ) THEN
1230: CALL DLACPY( 'All', N, N, U, LDU, V, LDV )
1231: END IF
1232: *
1233: ELSE
1234: *
1235: * .. Full SVD ..
1236: *
1237: IF ( .NOT. JRACC ) THEN
1238: *
1239: IF ( .NOT. ALMORT ) THEN
1240: *
1241: * Second Preconditioning Step (QRF [with pivoting])
1242: * Note that the composition of TRANSPOSE, QRF and TRANSPOSE is
1243: * equivalent to an LQF CALL. Since in many libraries the QRF
1244: * seems to be better optimized than the LQF, we do explicit
1245: * transpose and use the QRF. This is subject to changes in an
1246: * optimized implementation of DGEJSV.
1247: *
1248: DO 1968 p = 1, NR
1249: CALL DCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1250: 1968 CONTINUE
1251: *
1252: * .. the following two loops perturb small entries to avoid
1253: * denormals in the second QR factorization, where they are
1254: * as good as zeros. This is done to avoid painfully slow
1255: * computation with denormals. The relative size of the perturbation
1256: * is a parameter that can be changed by the implementer.
1257: * This perturbation device will be obsolete on machines with
1258: * properly implemented arithmetic.
1259: * To switch it off, set L2PERT=.FALSE. To remove it from the
1260: * code, remove the action under L2PERT=.TRUE., leave the ELSE part.
1261: * The following two loops should be blocked and fused with the
1262: * transposed copy above.
1263: *
1264: IF ( L2PERT ) THEN
1265: XSC = DSQRT(SMALL)
1266: DO 2969 q = 1, NR
1267: TEMP1 = XSC*DABS( V(q,q) )
1268: DO 2968 p = 1, N
1269: IF ( ( p .GT. q ) .AND. ( DABS(V(p,q)) .LE. TEMP1 )
1270: $ .OR. ( p .LT. q ) )
1271: $ V(p,q) = DSIGN( TEMP1, V(p,q) )
1272: IF ( p .LT. q ) V(p,q) = - V(p,q)
1273: 2968 CONTINUE
1274: 2969 CONTINUE
1275: ELSE
1276: CALL DLASET( 'U', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1277: END IF
1278: *
1279: * Estimate the row scaled condition number of R1
1280: * (If R1 is rectangular, N > NR, then the condition number
1281: * of the leading NR x NR submatrix is estimated.)
1282: *
1283: CALL DLACPY( 'L', NR, NR, V, LDV, WORK(2*N+1), NR )
1284: DO 3950 p = 1, NR
1285: TEMP1 = DNRM2(NR-p+1,WORK(2*N+(p-1)*NR+p),1)
1286: CALL DSCAL(NR-p+1,ONE/TEMP1,WORK(2*N+(p-1)*NR+p),1)
1287: 3950 CONTINUE
1288: CALL DPOCON('Lower',NR,WORK(2*N+1),NR,ONE,TEMP1,
1289: $ WORK(2*N+NR*NR+1),IWORK(M+2*N+1),IERR)
1290: CONDR1 = ONE / DSQRT(TEMP1)
1291: * .. here need a second oppinion on the condition number
1292: * .. then assume worst case scenario
1293: * R1 is OK for inverse <=> CONDR1 .LT. DBLE(N)
1294: * more conservative <=> CONDR1 .LT. DSQRT(DBLE(N))
1295: *
1296: COND_OK = DSQRT(DBLE(NR))
1297: *[TP] COND_OK is a tuning parameter.
1298:
1299: IF ( CONDR1 .LT. COND_OK ) THEN
1300: * .. the second QRF without pivoting. Note: in an optimized
1301: * implementation, this QRF should be implemented as the QRF
1302: * of a lower triangular matrix.
1303: * R1^t = Q2 * R2
1304: CALL DGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
1305: $ LWORK-2*N, IERR )
1306: *
1307: IF ( L2PERT ) THEN
1308: XSC = DSQRT(SMALL)/EPSLN
1309: DO 3959 p = 2, NR
1310: DO 3958 q = 1, p - 1
1311: TEMP1 = XSC * DMIN1(DABS(V(p,p)),DABS(V(q,q)))
1312: IF ( DABS(V(q,p)) .LE. TEMP1 )
1313: $ V(q,p) = DSIGN( TEMP1, V(q,p) )
1314: 3958 CONTINUE
1315: 3959 CONTINUE
1316: END IF
1317: *
1318: IF ( NR .NE. N )
1319: $ CALL DLACPY( 'A', N, NR, V, LDV, WORK(2*N+1), N )
1320: * .. save ...
1321: *
1322: * .. this transposed copy should be better than naive
1323: DO 1969 p = 1, NR - 1
1324: CALL DCOPY( NR-p, V(p,p+1), LDV, V(p+1,p), 1 )
1325: 1969 CONTINUE
1326: *
1327: CONDR2 = CONDR1
1328: *
1329: ELSE
1330: *
1331: * .. ill-conditioned case: second QRF with pivoting
1332: * Note that windowed pivoting would be equaly good
1333: * numerically, and more run-time efficient. So, in
1334: * an optimal implementation, the next call to DGEQP3
1335: * should be replaced with eg. CALL SGEQPX (ACM TOMS #782)
1336: * with properly (carefully) chosen parameters.
1337: *
1338: * R1^t * P2 = Q2 * R2
1339: DO 3003 p = 1, NR
1340: IWORK(N+p) = 0
1341: 3003 CONTINUE
1342: CALL DGEQP3( N, NR, V, LDV, IWORK(N+1), WORK(N+1),
1343: $ WORK(2*N+1), LWORK-2*N, IERR )
1344: ** CALL DGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
1345: ** $ LWORK-2*N, IERR )
1346: IF ( L2PERT ) THEN
1347: XSC = DSQRT(SMALL)
1348: DO 3969 p = 2, NR
1349: DO 3968 q = 1, p - 1
1350: TEMP1 = XSC * DMIN1(DABS(V(p,p)),DABS(V(q,q)))
1351: IF ( DABS(V(q,p)) .LE. TEMP1 )
1352: $ V(q,p) = DSIGN( TEMP1, V(q,p) )
1353: 3968 CONTINUE
1354: 3969 CONTINUE
1355: END IF
1356: *
1357: CALL DLACPY( 'A', N, NR, V, LDV, WORK(2*N+1), N )
1358: *
1359: IF ( L2PERT ) THEN
1360: XSC = DSQRT(SMALL)
1361: DO 8970 p = 2, NR
1362: DO 8971 q = 1, p - 1
1363: TEMP1 = XSC * DMIN1(DABS(V(p,p)),DABS(V(q,q)))
1364: V(p,q) = - DSIGN( TEMP1, V(q,p) )
1365: 8971 CONTINUE
1366: 8970 CONTINUE
1367: ELSE
1368: CALL DLASET( 'L',NR-1,NR-1,ZERO,ZERO,V(2,1),LDV )
1369: END IF
1370: * Now, compute R2 = L3 * Q3, the LQ factorization.
1371: CALL DGELQF( NR, NR, V, LDV, WORK(2*N+N*NR+1),
1372: $ WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, IERR )
1373: * .. and estimate the condition number
1374: CALL DLACPY( 'L',NR,NR,V,LDV,WORK(2*N+N*NR+NR+1),NR )
1375: DO 4950 p = 1, NR
1376: TEMP1 = DNRM2( p, WORK(2*N+N*NR+NR+p), NR )
1377: CALL DSCAL( p, ONE/TEMP1, WORK(2*N+N*NR+NR+p), NR )
1378: 4950 CONTINUE
1379: CALL DPOCON( 'L',NR,WORK(2*N+N*NR+NR+1),NR,ONE,TEMP1,
1380: $ WORK(2*N+N*NR+NR+NR*NR+1),IWORK(M+2*N+1),IERR )
1381: CONDR2 = ONE / DSQRT(TEMP1)
1382: *
1383: IF ( CONDR2 .GE. COND_OK ) THEN
1384: * .. save the Householder vectors used for Q3
1385: * (this overwrittes the copy of R2, as it will not be
1386: * needed in this branch, but it does not overwritte the
1387: * Huseholder vectors of Q2.).
1388: CALL DLACPY( 'U', NR, NR, V, LDV, WORK(2*N+1), N )
1389: * .. and the rest of the information on Q3 is in
1390: * WORK(2*N+N*NR+1:2*N+N*NR+N)
1391: END IF
1392: *
1393: END IF
1394: *
1395: IF ( L2PERT ) THEN
1396: XSC = DSQRT(SMALL)
1397: DO 4968 q = 2, NR
1398: TEMP1 = XSC * V(q,q)
1399: DO 4969 p = 1, q - 1
1400: * V(p,q) = - DSIGN( TEMP1, V(q,p) )
1401: V(p,q) = - DSIGN( TEMP1, V(p,q) )
1402: 4969 CONTINUE
1403: 4968 CONTINUE
1404: ELSE
1405: CALL DLASET( 'U', NR-1,NR-1, ZERO,ZERO, V(1,2), LDV )
1406: END IF
1407: *
1408: * Second preconditioning finished; continue with Jacobi SVD
1409: * The input matrix is lower trinagular.
1410: *
1411: * Recover the right singular vectors as solution of a well
1412: * conditioned triangular matrix equation.
1413: *
1414: IF ( CONDR1 .LT. COND_OK ) THEN
1415: *
1416: CALL DGESVJ( 'L','U','N',NR,NR,V,LDV,SVA,NR,U,
1417: $ LDU,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,INFO )
1418: SCALEM = WORK(2*N+N*NR+NR+1)
1419: NUMRANK = IDNINT(WORK(2*N+N*NR+NR+2))
1420: DO 3970 p = 1, NR
1421: CALL DCOPY( NR, V(1,p), 1, U(1,p), 1 )
1422: CALL DSCAL( NR, SVA(p), V(1,p), 1 )
1423: 3970 CONTINUE
1424:
1425: * .. pick the right matrix equation and solve it
1426: *
1427: IF ( NR .EQ. N ) THEN
1428: * :)) .. best case, R1 is inverted. The solution of this matrix
1429: * equation is Q2*V2 = the product of the Jacobi rotations
1430: * used in DGESVJ, premultiplied with the orthogonal matrix
1431: * from the second QR factorization.
1432: CALL DTRSM( 'L','U','N','N', NR,NR,ONE, A,LDA, V,LDV )
1433: ELSE
1434: * .. R1 is well conditioned, but non-square. Transpose(R2)
1435: * is inverted to get the product of the Jacobi rotations
1436: * used in DGESVJ. The Q-factor from the second QR
1437: * factorization is then built in explicitly.
1438: CALL DTRSM('L','U','T','N',NR,NR,ONE,WORK(2*N+1),
1439: $ N,V,LDV)
1440: IF ( NR .LT. N ) THEN
1441: CALL DLASET('A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV)
1442: CALL DLASET('A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV)
1443: CALL DLASET('A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV)
1444: END IF
1445: CALL DORMQR('L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
1446: $ V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR)
1447: END IF
1448: *
1449: ELSE IF ( CONDR2 .LT. COND_OK ) THEN
1450: *
1451: * :) .. the input matrix A is very likely a relative of
1452: * the Kahan matrix :)
1453: * The matrix R2 is inverted. The solution of the matrix equation
1454: * is Q3^T*V3 = the product of the Jacobi rotations (appplied to
1455: * the lower triangular L3 from the LQ factorization of
1456: * R2=L3*Q3), pre-multiplied with the transposed Q3.
1457: CALL DGESVJ( 'L', 'U', 'N', NR, NR, V, LDV, SVA, NR, U,
1458: $ LDU, WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, INFO )
1459: SCALEM = WORK(2*N+N*NR+NR+1)
1460: NUMRANK = IDNINT(WORK(2*N+N*NR+NR+2))
1461: DO 3870 p = 1, NR
1462: CALL DCOPY( NR, V(1,p), 1, U(1,p), 1 )
1463: CALL DSCAL( NR, SVA(p), U(1,p), 1 )
1464: 3870 CONTINUE
1465: CALL DTRSM('L','U','N','N',NR,NR,ONE,WORK(2*N+1),N,U,LDU)
1466: * .. apply the permutation from the second QR factorization
1467: DO 873 q = 1, NR
1468: DO 872 p = 1, NR
1469: WORK(2*N+N*NR+NR+IWORK(N+p)) = U(p,q)
1470: 872 CONTINUE
1471: DO 874 p = 1, NR
1472: U(p,q) = WORK(2*N+N*NR+NR+p)
1473: 874 CONTINUE
1474: 873 CONTINUE
1475: IF ( NR .LT. N ) THEN
1476: CALL DLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
1477: CALL DLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
1478: CALL DLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
1479: END IF
1480: CALL DORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
1481: $ V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
1482: ELSE
1483: * Last line of defense.
1484: * #:( This is a rather pathological case: no scaled condition
1485: * improvement after two pivoted QR factorizations. Other
1486: * possibility is that the rank revealing QR factorization
1487: * or the condition estimator has failed, or the COND_OK
1488: * is set very close to ONE (which is unnecessary). Normally,
1489: * this branch should never be executed, but in rare cases of
1490: * failure of the RRQR or condition estimator, the last line of
1491: * defense ensures that DGEJSV completes the task.
1492: * Compute the full SVD of L3 using DGESVJ with explicit
1493: * accumulation of Jacobi rotations.
1494: CALL DGESVJ( 'L', 'U', 'V', NR, NR, V, LDV, SVA, NR, U,
1495: $ LDU, WORK(2*N+N*NR+NR+1), LWORK-2*N-N*NR-NR, INFO )
1496: SCALEM = WORK(2*N+N*NR+NR+1)
1497: NUMRANK = IDNINT(WORK(2*N+N*NR+NR+2))
1498: IF ( NR .LT. N ) THEN
1499: CALL DLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
1500: CALL DLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
1501: CALL DLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
1502: END IF
1503: CALL DORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
1504: $ V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
1505: *
1506: CALL DORMLQ( 'L', 'T', NR, NR, NR, WORK(2*N+1), N,
1507: $ WORK(2*N+N*NR+1), U, LDU, WORK(2*N+N*NR+NR+1),
1508: $ LWORK-2*N-N*NR-NR, IERR )
1509: DO 773 q = 1, NR
1510: DO 772 p = 1, NR
1511: WORK(2*N+N*NR+NR+IWORK(N+p)) = U(p,q)
1512: 772 CONTINUE
1513: DO 774 p = 1, NR
1514: U(p,q) = WORK(2*N+N*NR+NR+p)
1515: 774 CONTINUE
1516: 773 CONTINUE
1517: *
1518: END IF
1519: *
1520: * Permute the rows of V using the (column) permutation from the
1521: * first QRF. Also, scale the columns to make them unit in
1522: * Euclidean norm. This applies to all cases.
1523: *
1524: TEMP1 = DSQRT(DBLE(N)) * EPSLN
1525: DO 1972 q = 1, N
1526: DO 972 p = 1, N
1527: WORK(2*N+N*NR+NR+IWORK(p)) = V(p,q)
1528: 972 CONTINUE
1529: DO 973 p = 1, N
1530: V(p,q) = WORK(2*N+N*NR+NR+p)
1531: 973 CONTINUE
1532: XSC = ONE / DNRM2( N, V(1,q), 1 )
1533: IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1534: $ CALL DSCAL( N, XSC, V(1,q), 1 )
1535: 1972 CONTINUE
1536: * At this moment, V contains the right singular vectors of A.
1537: * Next, assemble the left singular vector matrix U (M x N).
1538: IF ( NR .LT. M ) THEN
1539: CALL DLASET( 'A', M-NR, NR, ZERO, ZERO, U(NR+1,1), LDU )
1540: IF ( NR .LT. N1 ) THEN
1541: CALL DLASET('A',NR,N1-NR,ZERO,ZERO,U(1,NR+1),LDU)
1542: CALL DLASET('A',M-NR,N1-NR,ZERO,ONE,U(NR+1,NR+1),LDU)
1543: END IF
1544: END IF
1545: *
1546: * The Q matrix from the first QRF is built into the left singular
1547: * matrix U. This applies to all cases.
1548: *
1549: CALL DORMQR( 'Left', 'No_Tr', M, N1, N, A, LDA, WORK, U,
1550: $ LDU, WORK(N+1), LWORK-N, IERR )
1551:
1552: * The columns of U are normalized. The cost is O(M*N) flops.
1553: TEMP1 = DSQRT(DBLE(M)) * EPSLN
1554: DO 1973 p = 1, NR
1555: XSC = ONE / DNRM2( M, U(1,p), 1 )
1556: IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1557: $ CALL DSCAL( M, XSC, U(1,p), 1 )
1558: 1973 CONTINUE
1559: *
1560: * If the initial QRF is computed with row pivoting, the left
1561: * singular vectors must be adjusted.
1562: *
1563: IF ( ROWPIV )
1564: $ CALL DLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
1565: *
1566: ELSE
1567: *
1568: * .. the initial matrix A has almost orthogonal columns and
1569: * the second QRF is not needed
1570: *
1571: CALL DLACPY( 'Upper', N, N, A, LDA, WORK(N+1), N )
1572: IF ( L2PERT ) THEN
1573: XSC = DSQRT(SMALL)
1574: DO 5970 p = 2, N
1575: TEMP1 = XSC * WORK( N + (p-1)*N + p )
1576: DO 5971 q = 1, p - 1
1577: WORK(N+(q-1)*N+p)=-DSIGN(TEMP1,WORK(N+(p-1)*N+q))
1578: 5971 CONTINUE
1579: 5970 CONTINUE
1580: ELSE
1581: CALL DLASET( 'Lower',N-1,N-1,ZERO,ZERO,WORK(N+2),N )
1582: END IF
1583: *
1584: CALL DGESVJ( 'Upper', 'U', 'N', N, N, WORK(N+1), N, SVA,
1585: $ N, U, LDU, WORK(N+N*N+1), LWORK-N-N*N, INFO )
1586: *
1587: SCALEM = WORK(N+N*N+1)
1588: NUMRANK = IDNINT(WORK(N+N*N+2))
1589: DO 6970 p = 1, N
1590: CALL DCOPY( N, WORK(N+(p-1)*N+1), 1, U(1,p), 1 )
1591: CALL DSCAL( N, SVA(p), WORK(N+(p-1)*N+1), 1 )
1592: 6970 CONTINUE
1593: *
1594: CALL DTRSM( 'Left', 'Upper', 'NoTrans', 'No UD', N, N,
1595: $ ONE, A, LDA, WORK(N+1), N )
1596: DO 6972 p = 1, N
1597: CALL DCOPY( N, WORK(N+p), N, V(IWORK(p),1), LDV )
1598: 6972 CONTINUE
1599: TEMP1 = DSQRT(DBLE(N))*EPSLN
1600: DO 6971 p = 1, N
1601: XSC = ONE / DNRM2( N, V(1,p), 1 )
1602: IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1603: $ CALL DSCAL( N, XSC, V(1,p), 1 )
1604: 6971 CONTINUE
1605: *
1606: * Assemble the left singular vector matrix U (M x N).
1607: *
1608: IF ( N .LT. M ) THEN
1609: CALL DLASET( 'A', M-N, N, ZERO, ZERO, U(N+1,1), LDU )
1610: IF ( N .LT. N1 ) THEN
1611: CALL DLASET( 'A',N, N1-N, ZERO, ZERO, U(1,N+1),LDU )
1612: CALL DLASET( 'A',M-N,N1-N, ZERO, ONE,U(N+1,N+1),LDU )
1613: END IF
1614: END IF
1615: CALL DORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
1616: $ LDU, WORK(N+1), LWORK-N, IERR )
1617: TEMP1 = DSQRT(DBLE(M))*EPSLN
1618: DO 6973 p = 1, N1
1619: XSC = ONE / DNRM2( M, U(1,p), 1 )
1620: IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1621: $ CALL DSCAL( M, XSC, U(1,p), 1 )
1622: 6973 CONTINUE
1623: *
1624: IF ( ROWPIV )
1625: $ CALL DLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
1626: *
1627: END IF
1628: *
1629: * end of the >> almost orthogonal case << in the full SVD
1630: *
1631: ELSE
1632: *
1633: * This branch deploys a preconditioned Jacobi SVD with explicitly
1634: * accumulated rotations. It is included as optional, mainly for
1635: * experimental purposes. It does perfom well, and can also be used.
1636: * In this implementation, this branch will be automatically activated
1637: * if the condition number sigma_max(A) / sigma_min(A) is predicted
1638: * to be greater than the overflow threshold. This is because the
1639: * a posteriori computation of the singular vectors assumes robust
1640: * implementation of BLAS and some LAPACK procedures, capable of working
1641: * in presence of extreme values. Since that is not always the case, ...
1642: *
1643: DO 7968 p = 1, NR
1644: CALL DCOPY( N-p+1, A(p,p), LDA, V(p,p), 1 )
1645: 7968 CONTINUE
1646: *
1647: IF ( L2PERT ) THEN
1648: XSC = DSQRT(SMALL/EPSLN)
1649: DO 5969 q = 1, NR
1650: TEMP1 = XSC*DABS( V(q,q) )
1651: DO 5968 p = 1, N
1652: IF ( ( p .GT. q ) .AND. ( DABS(V(p,q)) .LE. TEMP1 )
1653: $ .OR. ( p .LT. q ) )
1654: $ V(p,q) = DSIGN( TEMP1, V(p,q) )
1655: IF ( p .LT. q ) V(p,q) = - V(p,q)
1656: 5968 CONTINUE
1657: 5969 CONTINUE
1658: ELSE
1659: CALL DLASET( 'U', NR-1, NR-1, ZERO, ZERO, V(1,2), LDV )
1660: END IF
1661:
1662: CALL DGEQRF( N, NR, V, LDV, WORK(N+1), WORK(2*N+1),
1663: $ LWORK-2*N, IERR )
1664: CALL DLACPY( 'L', N, NR, V, LDV, WORK(2*N+1), N )
1665: *
1666: DO 7969 p = 1, NR
1667: CALL DCOPY( NR-p+1, V(p,p), LDV, U(p,p), 1 )
1668: 7969 CONTINUE
1669:
1670: IF ( L2PERT ) THEN
1671: XSC = DSQRT(SMALL/EPSLN)
1672: DO 9970 q = 2, NR
1673: DO 9971 p = 1, q - 1
1674: TEMP1 = XSC * DMIN1(DABS(U(p,p)),DABS(U(q,q)))
1675: U(p,q) = - DSIGN( TEMP1, U(q,p) )
1676: 9971 CONTINUE
1677: 9970 CONTINUE
1678: ELSE
1679: CALL DLASET('U', NR-1, NR-1, ZERO, ZERO, U(1,2), LDU )
1680: END IF
1681:
1682: CALL DGESVJ( 'G', 'U', 'V', NR, NR, U, LDU, SVA,
1683: $ N, V, LDV, WORK(2*N+N*NR+1), LWORK-2*N-N*NR, INFO )
1684: SCALEM = WORK(2*N+N*NR+1)
1685: NUMRANK = IDNINT(WORK(2*N+N*NR+2))
1686:
1687: IF ( NR .LT. N ) THEN
1688: CALL DLASET( 'A',N-NR,NR,ZERO,ZERO,V(NR+1,1),LDV )
1689: CALL DLASET( 'A',NR,N-NR,ZERO,ZERO,V(1,NR+1),LDV )
1690: CALL DLASET( 'A',N-NR,N-NR,ZERO,ONE,V(NR+1,NR+1),LDV )
1691: END IF
1692:
1693: CALL DORMQR( 'L','N',N,N,NR,WORK(2*N+1),N,WORK(N+1),
1694: $ V,LDV,WORK(2*N+N*NR+NR+1),LWORK-2*N-N*NR-NR,IERR )
1695: *
1696: * Permute the rows of V using the (column) permutation from the
1697: * first QRF. Also, scale the columns to make them unit in
1698: * Euclidean norm. This applies to all cases.
1699: *
1700: TEMP1 = DSQRT(DBLE(N)) * EPSLN
1701: DO 7972 q = 1, N
1702: DO 8972 p = 1, N
1703: WORK(2*N+N*NR+NR+IWORK(p)) = V(p,q)
1704: 8972 CONTINUE
1705: DO 8973 p = 1, N
1706: V(p,q) = WORK(2*N+N*NR+NR+p)
1707: 8973 CONTINUE
1708: XSC = ONE / DNRM2( N, V(1,q), 1 )
1709: IF ( (XSC .LT. (ONE-TEMP1)) .OR. (XSC .GT. (ONE+TEMP1)) )
1710: $ CALL DSCAL( N, XSC, V(1,q), 1 )
1711: 7972 CONTINUE
1712: *
1713: * At this moment, V contains the right singular vectors of A.
1714: * Next, assemble the left singular vector matrix U (M x N).
1715: *
1716: IF ( NR .LT. M ) THEN
1717: CALL DLASET( 'A', M-NR, NR, ZERO, ZERO, U(NR+1,1), LDU )
1718: IF ( NR .LT. N1 ) THEN
1719: CALL DLASET( 'A',NR, N1-NR, ZERO, ZERO, U(1,NR+1),LDU )
1720: CALL DLASET( 'A',M-NR,N1-NR, ZERO, ONE,U(NR+1,NR+1),LDU )
1721: END IF
1722: END IF
1723: *
1724: CALL DORMQR( 'Left', 'No Tr', M, N1, N, A, LDA, WORK, U,
1725: $ LDU, WORK(N+1), LWORK-N, IERR )
1726: *
1727: IF ( ROWPIV )
1728: $ CALL DLASWP( N1, U, LDU, 1, M-1, IWORK(2*N+1), -1 )
1729: *
1730: *
1731: END IF
1732: IF ( TRANSP ) THEN
1733: * .. swap U and V because the procedure worked on A^t
1734: DO 6974 p = 1, N
1735: CALL DSWAP( N, U(1,p), 1, V(1,p), 1 )
1736: 6974 CONTINUE
1737: END IF
1738: *
1739: END IF
1740: * end of the full SVD
1741: *
1742: * Undo scaling, if necessary (and possible)
1743: *
1744: IF ( USCAL2 .LE. (BIG/SVA(1))*USCAL1 ) THEN
1745: CALL DLASCL( 'G', 0, 0, USCAL1, USCAL2, NR, 1, SVA, N, IERR )
1746: USCAL1 = ONE
1747: USCAL2 = ONE
1748: END IF
1749: *
1750: IF ( NR .LT. N ) THEN
1751: DO 3004 p = NR+1, N
1752: SVA(p) = ZERO
1753: 3004 CONTINUE
1754: END IF
1755: *
1756: WORK(1) = USCAL2 * SCALEM
1757: WORK(2) = USCAL1
1758: IF ( ERREST ) WORK(3) = SCONDA
1759: IF ( LSVEC .AND. RSVEC ) THEN
1760: WORK(4) = CONDR1
1761: WORK(5) = CONDR2
1762: END IF
1763: IF ( L2TRAN ) THEN
1764: WORK(6) = ENTRA
1765: WORK(7) = ENTRAT
1766: END IF
1767: *
1768: IWORK(1) = NR
1769: IWORK(2) = NUMRANK
1770: IWORK(3) = WARNING
1771: *
1772: RETURN
1773: * ..
1774: * .. END OF DGEJSV
1775: * ..
1776: END
1777: *
CVSweb interface <joel.bertrand@systella.fr>