E. C. Pielou, The Interpretation of Ecological Data: A Primer on Classification and Ordination (John Wiley & Sons, 1984).


The Interpretation of Ecological Data

A Primer on Classification and Ordination

E. C. Pielou
University of Lethbridge

A Wiley-Interscience Publication

JOHN WILEY & SONS   New York • Chichester • Brisbane • Toronto • Singapore

Preface

The aim of this book is to give a full, detailed, introductory account of the methods used by community ecologists to make large, unwieldy masses of multivariate data comprehensible and interpretable. I am convinced that there is need for such a book. There are more advanced books that cover some of the same material, for example, L. Orlóci's Multivariate Analysis in Vegetation Research (W. Junk, 1978) and A. D. Gordon's Classification: Methods for the Exploratory Analysis of Multivariate Data (Chapman and Hall, 1981), but they assume a much higher level of mathematical ability in the reader than …

*As with the variance, the divisor used to calculate this average is n when the n quadrats are treated as a whole "population" and n - 1 when the quadrats are treated as a sample from a larger "population" in which the covariance is to be estimated.

TRANSFORMING DATA MATRICES

where all the summations are from j = 1 to n. R is known as a sums-of-squares-and-cross-products matrix, or an SSCP matrix for short. It is an s × s matrix. Alternatively, one may write

\[
R = n \begin{pmatrix}
\mathrm{var}(x_1) & \mathrm{cov}(x_1, x_2) & \cdots & \mathrm{cov}(x_1, x_s) \\
\mathrm{cov}(x_2, x_1) & \mathrm{var}(x_2) & \cdots & \mathrm{cov}(x_2, x_s) \\
\vdots & \vdots & & \vdots \\
\mathrm{cov}(x_s, x_1) & \mathrm{cov}(x_s, x_2) & \cdots & \mathrm{var}(x_s)
\end{pmatrix} \tag{3.9}
\]

Observe that when, as here, a matrix is multiplied by a scalar (in this case the scalar is n), each individual element of the matrix is multiplied by the scalar. Thus the (h, i)th term of R is n cov(x_h, x_i). R is a symmetric matrix since, as is obvious from (3.8b), cov(x_h, x_i) = cov(x_i, x_h).
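These relations are easy to check numerically. A sketch in numpy (the entries of Data Matrix #8 as they can be read off Table 3.7; the variable names are ours):

```python
import numpy as np

# Data Matrix #8 (s = 3 species scored in n = 4 quadrats)
X = np.array([[4.0, 8.0, 10.0, 14.0],
              [17.0, 11.0, 3.0, 1.0],
              [2.0, 5.0, 5.0, 4.0]])
s, n = X.shape

# Center each row (species) by its mean to form X_R
XR = X - X.mean(axis=1, keepdims=True)

# SSCP matrix R = X_R X_R', an s x s matrix of sums of squares and cross-products
R = XR @ XR.T

# Covariance matrix (1/n)R, so that R = n * (covariance matrix), as in (3.9)
cov = R / n

print(R)    # rows: [52 -88 10], [-88 164 -20], [10 -20 6]
```

The printed R agrees with the lower panel of Table 3.7, and R is symmetric, as the text states.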

The matrix (1/n)R is known as the variance-covariance matrix of the data, or often simply as the covariance matrix. The lower panel of Table 3.2 shows the SSCP matrix R and the covariance matrix (1/n)R for the 3 × 4 data matrix in the upper panel. To calculate the elements of a covariance matrix, one may use the procedure demonstrated in Table 3.2, or the computationally more convenient procedure described at the end of this section.

The Correlation Matrix

In the raw data matrix X previously discussed, the elements are the measured quantities of the different species in each of a sample of quadrats or other sampling units. Often, it is either necessary or desirable to standardize these data, that is, rescale the measurements to a standard scale. Standardization is necessary if different species are measured by different methods in noncomparable units. For example, in vegetation sampling it may be convenient to use cover as the measure of quantity for some species, and numbers of individuals for other species; there is no objection to using incommensurate units such as these provided the data are standardized before analysis.

THE PRODUCT OF A DATA MATRIX AND ITS TRANSPOSE

Standardization is sometimes desirable even when the same units (e.g., numbers of individuals) are used for the measurement of all the species quantities. It has the effect of weighting the species according to their rarity, so that rare species have as big an influence as common ones on the results of an ordination. Sometimes this is desirable, sometimes not. One may or may not wish to prevent the common species from dominating an analysis. It is a matter of ecological judgment. A thorough discussion of the pros and cons of data standardization has been given by Noy-Meir, Walker, and Williams (1975).

The usual way of standardizing, or rescaling, the data is to divide the observed measurements on each species, after they have been centered (transformed to deviations from the respective species means), by the standard deviation of the species quantities. Thus the element x_ij in X is replaced by (x_ij - x̄_i)/σ_i, where σ_i = √var(x_i), say. We now denote the standardized matrix by Z_R, and examine the product Z_R Z_R' = S_R, say. (Z_R' is the transpose of Z_R.) The (h, i)th element of S_R is the product of the hth row of Z_R postmultiplied by the ith column of Z_R'. Thus it is

\[
\sum_{j=1}^{n} \frac{(x_{hj} - \bar{x}_h)(x_{ij} - \bar{x}_i)}{\sigma_h \sigma_i} = n r_{hi},
\]


where r_hi is the correlation coefficient between species h and species i in the n quadrats.

Observe that the (i, i)th element of S_R is n. This follows from the fact that cov(x_i, x_i) = var(x_i). The correlation matrix is obtained by dividing every element of S_R by n. Thus

\[
\frac{1}{n} S_R = \begin{pmatrix}
1 & r_{12} & \cdots & r_{1s} \\
r_{21} & 1 & \cdots & r_{2s} \\
\vdots & \vdots & & \vdots \\
r_{s1} & r_{s2} & \cdots & 1
\end{pmatrix}.
\]

It is a symmetric matrix since, obviously, r_hi = r_ih.

Table 3.3 shows the standardized form Z_R of Data Matrix #8 (see Table 3.2), its SSCP matrix Z_R Z_R', and its correlation matrix. The elements of the correlation matrix may be evaluated either by postmultiplying Z_R by Z_R' and dividing by n, or else by dividing the (h, i)th element of the covariance

TABLE 3.3. COMPUTATION OF THE CORRELATION MATRIX FOR DATA MATRIX #8 (SEE TABLE 3.2).

The standardized data matrix is

\[
Z_R = \begin{pmatrix}
-5/\sqrt{13} & -1/\sqrt{13} & 1/\sqrt{13} & 5/\sqrt{13} \\
9/\sqrt{41} & 3/\sqrt{41} & -5/\sqrt{41} & -7/\sqrt{41} \\
-2/\sqrt{1.5} & 1/\sqrt{1.5} & 1/\sqrt{1.5} & 0
\end{pmatrix}
\]

The SSCP matrix for the standardized data is

\[
S_R = Z_R Z_R' = \begin{pmatrix}
4 & -3.8117 & 2.2646 \\
-3.8117 & 4 & -2.5503 \\
2.2646 & -2.5503 & 4
\end{pmatrix}.
\]

The correlation matrix is

\[
\frac{1}{n} S_R = \begin{pmatrix}
1 & -0.9529 & 0.5661 \\
-0.9529 & 1 & -0.6376 \\
0.5661 & -0.6376 & 1
\end{pmatrix}.
\]


matrix, which is cov(x_h, x_i), by √(var(x_h) var(x_i)), taking the values of the variances from the main diagonal of the covariance matrix. Thus in the example, the (1, 2)th element of the correlation matrix is

\[
r_{12} = \frac{-22}{\sqrt{13 \times 41}} = -0.9529.
\]

Yet another way of obtaining the correlation matrix is to note that (in the numerical example) it is given by the product

\[
\begin{pmatrix}
1/\sqrt{13} & 0 & 0 \\
0 & 1/\sqrt{41} & 0 \\
0 & 0 & 1/\sqrt{1.5}
\end{pmatrix}
\begin{pmatrix}
13 & -22 & 2.5 \\
-22 & 41 & -5 \\
2.5 & -5 & 1.5
\end{pmatrix}
\begin{pmatrix}
1/\sqrt{13} & 0 & 0 \\
0 & 1/\sqrt{41} & 0 \\
0 & 0 & 1/\sqrt{1.5}
\end{pmatrix}.
\]

In the general case this is the product (1/n)BRB, where B is the diagonal matrix whose (i, i)th element is 1/√var(x_i) = 1/σ_i. Notice that when three matrices are to be multiplied (e.g., when the product LMN is to be found), it makes no difference whether one first obtains LM and then postmultiplies it by N, or first obtains MN and then premultiplies it by L. All that matters is that the order of the factors be preserved. The rule can be extended to the evaluation of a matrix product of any number of factors.
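All three routes to the correlation matrix can be sketched in a few lines of numpy (Data Matrix #8 again; the variable names are ours):

```python
import numpy as np

X = np.array([[4.0, 8.0, 10.0, 14.0],
              [17.0, 11.0, 3.0, 1.0],
              [2.0, 5.0, 5.0, 4.0]])
s, n = X.shape
XR = X - X.mean(axis=1, keepdims=True)
cov = (XR @ XR.T) / n                 # the covariance matrix (1/n)R

# Route 1: standardize, then form (1/n) Z_R Z_R'
sd = np.sqrt(np.diag(cov))            # sigma_i for each species
ZR = XR / sd[:, None]
corr1 = (ZR @ ZR.T) / n

# Route 2: divide cov(x_h, x_i) by sqrt(var(x_h) var(x_i))
corr2 = cov / np.outer(sd, sd)

# Route 3: the triple product B [(1/n)R] B with B = diag(1/sigma_i);
# only the order of the three factors matters, not the grouping
B = np.diag(1.0 / sd)
corr3 = B @ cov @ B

print(np.round(corr1, 4))   # r12 = -0.9529, r13 = 0.5661, r23 = -0.6376
```

The three results are identical, and the off-diagonal values match Table 3.3.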

The R Matrix and the Q Matrix

Up to this point we have been considering the matrix product obtained when a data matrix, or its centered or standardized equivalent, is postmultiplied by its transpose. The product, whether it be XX', X_R X_R', or Z_R Z_R', is of size s × s. Now we discuss the n × n matrix obtained when a data matrix is premultiplied by its transpose.

First, with regard to centering: recall that to form X_R, the matrix X was centered by row means; that is, it was centered by subtracting, from every element, the mean of its row. Equivalently, the data were centered by species means, since each row of X lists observations on one species.

To center X' we again center by row means. But this time, centering by row means is equivalent to centering by quadrat means, since each row of X' lists s observations on one quadrat. The centered form of X' will be denoted* by X'_Q. The (j, i)th element of X'_Q is the amount by which x_ji, the quantity of species i in quadrat j, deviates from the average quantity x̄_j of all s species

*The symbol Q is used because the procedures described are part of a Q-type analysis.

TABLE 3.4. DATA MATRIX #8 TRANSPOSED, X', AND THEN CENTERED BY ROWS, X'_Q.

\[
X' = \begin{pmatrix} 4 & 17 & 2 \\ 8 & 11 & 5 \\ 10 & 3 & 5 \\ 14 & 1 & 4 \end{pmatrix}
\qquad
\begin{matrix} \bar{x}_1 = 7.67 \\ \bar{x}_2 = 8.00 \\ \bar{x}_3 = 6.00 \\ \bar{x}_4 = 6.33 \end{matrix}
\qquad
X'_Q = \begin{pmatrix} -3.67 & 9.33 & -5.67 \\ 0 & 3 & -3 \\ 4 & -3 & -1 \\ 7.67 & -5.33 & -2.33 \end{pmatrix}
\]

The SSCP matrix Q and the covariance matrix (1/s)Q are

\[
Q = \begin{pmatrix}
132.67 & 45 & -37 & -64.67 \\
45 & 18 & -6 & -9 \\
-37 & -6 & 26 & 49 \\
-64.67 & -9 & 49 & 92.67
\end{pmatrix},
\qquad
\frac{1}{s} Q = \begin{pmatrix}
44.22 & 15 & -12.33 & -21.56 \\
15 & 6 & -2 & -3 \\
-12.33 & -2 & 8.67 & 16.33 \\
-21.56 & -3 & 16.33 & 30.89
\end{pmatrix}.
\]

in the quadrat. (If a species is absent from a quadrat, it is treated as "present" with quantity zero.)

Table 3.4 (which is analogous to Table 3.2) shows X', the transpose of Data Matrix #8, and its row-centered (quadrat-centered) form X'_Q in the upper panel. In the lower panel is the SSCP matrix Q = X'_Q X_Q (here X_Q is the transpose of X'_Q) and the covariance matrix (1/s)Q.

The (j, j)th element of (1/s)Q is the variance of the species quantities in quadrat j. The (j, k)th element is the covariance of the species quantities in quadrats j and k. These elements are denoted, respectively, by var(x_j) and cov(x_j, x_k); the two symbols j and k both refer to quadrats. Notice that var(x_j) could also be defined as the variance of the elements in the jth row of X'_Q, and cov(x_j, x_k) as the covariance of the elements in its jth and kth rows.

Next X'_Q is standardized to give Z'_Q; we then obtain the product S_Q = Z'_Q Z_Q, where Z_Q is the transpose of Z'_Q. Finally, (1/s)S_Q is the correlation matrix whose elements are the correlations between every pair of quadrats. Table 3.5, which is analogous to Table 3.3, shows the steps in the calculations for Data Matrix #8.


TABLE 3.5. COMPUTATION OF THE CORRELATION MATRIX FOR THE TRANSPOSE OF DATA MATRIX #8.

The standardized form of X' is

\[
Z'_Q = \begin{pmatrix}
-3.67/\sqrt{44.22} & 9.33/\sqrt{44.22} & -5.67/\sqrt{44.22} \\
0 & 3/\sqrt{6} & -3/\sqrt{6} \\
4/\sqrt{8.67} & -3/\sqrt{8.67} & -1/\sqrt{8.67} \\
7.67/\sqrt{30.89} & -5.33/\sqrt{30.89} & -2.33/\sqrt{30.89}
\end{pmatrix}
\]

The SSCP matrix for the standardized data is

\[
S_Q = Z'_Q Z_Q = \begin{pmatrix}
3 & 2.7626 & -1.8900 & -1.7497 \\
2.7626 & 3 & -0.8321 & -0.6611 \\
-1.8900 & -0.8321 & 3 & 2.9948 \\
-1.7497 & -0.6611 & 2.9948 & 3
\end{pmatrix}.
\]

The correlation matrix is

\[
\frac{1}{s} S_Q = \begin{pmatrix}
1 & 0.9209 & -0.6300 & -0.5832 \\
0.9209 & 1 & -0.2774 & -0.2204 \\
-0.6300 & -0.2774 & 1 & 0.9983 \\
-0.5832 & -0.2204 & 0.9983 & 1
\end{pmatrix}.
\]

Table 3.6 shows a tabular comparison of the two procedures just discussed. These procedures constitute the basic operations in an R-type and a Q-type analysis. It is for this reason that the respective SSCP matrices have been denoted by R and Q.

Computation of a Covariance Matrix

This subsection is a short digression on the subject of computations. Readers who are concentrating exclusively on principles and who do not wish to be distracted by practical details should skip to Section 3.4.

Consider R, the SSCP matrix used in an R-type analysis. It is the product X_R X_R'. Instead of evaluating the elements of this product as it stands, it is usually more convenient to note that

\[
X_R X_R' = XX' - \bar{X}\bar{X}' \tag{3.10}
\]

and to evaluate the expression on the right side of this equation.
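Equation (3.10) is easy to verify numerically; a numpy sketch using Data Matrix #8 (variable names are ours):

```python
import numpy as np

X = np.array([[4.0, 8.0, 10.0, 14.0],
              [17.0, 11.0, 3.0, 1.0],
              [2.0, 5.0, 5.0, 4.0]])
s, n = X.shape

# Left side of (3.10): center first, then form the SSCP matrix
XR = X - X.mean(axis=1, keepdims=True)
R_left = XR @ XR.T

# Right side of (3.10): XX' minus XbarXbar'; since Xbar has n identical
# columns of species means, XbarXbar' has (h, i)th element n * xbar_h * xbar_i
xbar = X.mean(axis=1)
R_right = X @ X.T - n * np.outer(xbar, xbar)

print(R_right)   # rows: [52 -88 10], [-88 164 -20], [10 -20 6], as in Table 3.7
```

The convenience of the right side is that X need not be centered before the matrix product is formed.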

TABLE 3.6. A COMPARISON BETWEEN PRODUCTS OF THE FORM XX' AND X'X.ª

Centered matrix:
  R-type: X_R is matrix X centered by rows (species). Its (i, j)th term is x_ij - x̄_i, where x̄_i = (1/n) Σ_{j=1}^{n} x_ij.
  Q-type: X'_Q is matrix X' centered by rows (quadrats). Its (j, i)th term (in row j, column i) is x_ji - x̄_j.

SSCP matrix:
  R-type: R = X_R X_R', where X_R' is the transpose of X_R. R is an s × s matrix; each of its elements is a sum of n squares or cross-products.
  Q-type: Q = X'_Q X_Q, where X_Q is the transpose of X'_Q. Q is an n × n matrix; each of its elements is a sum of s squares or cross-products.

Covariance matrix:
  R-type: (1/n)R. The (i, i)th element var(x_i) is the variance of the elements in the ith row of X_R (quantities of species i). The (h, i)th element cov(x_h, x_i) is the covariance of the hth and ith rows of X_R (quantities of species h and i).
  Q-type: (1/s)Q. The (j, j)th element var(x_j) is the variance of the elements in the jth row of X'_Q (quantities in quadrat j). The (j, k)th element cov(x_j, x_k) is the covariance of the jth and kth rows of X'_Q (quantities in quadrats j and k).

Standardized matrix:
  R-type: Z_R. Its (i, j)th term is (x_ij - x̄_i)/√var(x_i) = (x_ij - x̄_i)/σ_i, where σ_i is the standard deviation of the quantities of species i.
  Q-type: Z'_Q. Its (j, i)th term is (x_ji - x̄_j)/√var(x_j) = (x_ji - x̄_j)/σ_j, where σ_j is the standard deviation of the quantities in quadrat j.

Correlation matrix:
  R-type: (1/n)S_R = (1/n)Z_R Z_R'. The (h, i)th element of (1/n)S_R is r_hi, the correlation coefficient between the hth and ith rows of X_R (i.e., between species h and species i). The matrix has 1s on its main diagonal since r_hh = 1 for all h.
  Q-type: (1/s)S_Q = (1/s)Z'_Q Z_Q. The (j, k)th element of (1/s)S_Q is r_jk, the correlation coefficient between the jth and kth rows of X'_Q (i.e., between quadrats j and k). The matrix has 1s on its main diagonal since r_jj = 1 for all j.

ªSymbols h and i refer to species; symbols j and k refer to quadrats.


Here X̄ is an s × n matrix in which every element in the ith row is

\[
\bar{x}_i = \frac{1}{n} \sum_{j=1}^{n} x_{ij},
\]

the mean over all quadrats of species i. Thus X̄ has n identical s-element columns. It is

\[
\bar{X} = \begin{pmatrix}
\bar{x}_1 & \bar{x}_1 & \cdots & \bar{x}_1 \\
\bar{x}_2 & \bar{x}_2 & \cdots & \bar{x}_2 \\
\vdots & \vdots & & \vdots \\
\bar{x}_s & \bar{x}_s & \cdots & \bar{x}_s
\end{pmatrix}
\]

with n columns. Hence

\[
\bar{X}\bar{X}' = \begin{pmatrix}
n\bar{x}_1^2 & n\bar{x}_1\bar{x}_2 & \cdots & n\bar{x}_1\bar{x}_s \\
n\bar{x}_2\bar{x}_1 & n\bar{x}_2^2 & \cdots & n\bar{x}_2\bar{x}_s \\
\vdots & \vdots & & \vdots \\
n\bar{x}_s\bar{x}_1 & n\bar{x}_s\bar{x}_2 & \cdots & n\bar{x}_s^2
\end{pmatrix},
\]

which, like XX', is an s × s matrix.

The subtraction in (3.10) is done simply by subtracting every element of X̄X̄' from the corresponding element in XX', that is, the (i, j)th element of the former from the (i, j)th element of the latter.

These computations are illustrated in Table 3.7, in which matrix R for Data Matrix #8 is obtained using the right side of Equation (3.10). The table should be compared with Table 3.2, in which R was obtained using the left side of (3.10). As may be seen, the results are the same.

Now consider the symbolic representation of this result. The (h, i)th element of X_R X_R' is [see Equation (3.8b)]

\[
\sum_{j=1}^{n} (x_{hj} - \bar{x}_h)(x_{ij} - \bar{x}_i).
\]


TABLE 3.7. COMPUTATION OF THE SSCP MATRIX R FOR DATA MATRIX #8 (SEE TABLE 3.2) USING R = XX' - X̄X̄', THE RIGHT SIDE OF EQUATION (3.10).

\[
XX' = \begin{pmatrix} 4 & 8 & 10 & 14 \\ 17 & 11 & 3 & 1 \\ 2 & 5 & 5 & 4 \end{pmatrix}
\begin{pmatrix} 4 & 17 & 2 \\ 8 & 11 & 5 \\ 10 & 3 & 5 \\ 14 & 1 & 4 \end{pmatrix}
= \begin{pmatrix} 376 & 200 & 154 \\ 200 & 420 & 108 \\ 154 & 108 & 70 \end{pmatrix}
\]

\[
\bar{X}\bar{X}' = \begin{pmatrix} 9 & 9 & 9 & 9 \\ 8 & 8 & 8 & 8 \\ 4 & 4 & 4 & 4 \end{pmatrix}
\begin{pmatrix} 9 & 8 & 4 \\ 9 & 8 & 4 \\ 9 & 8 & 4 \\ 9 & 8 & 4 \end{pmatrix}
= \begin{pmatrix} 324 & 288 & 144 \\ 288 & 256 & 128 \\ 144 & 128 & 64 \end{pmatrix}
\]

\[
R = XX' - \bar{X}\bar{X}' = \begin{pmatrix} 52 & -88 & 10 \\ -88 & 164 & -20 \\ 10 & -20 & 6 \end{pmatrix}
\]

The (h, i)th element of XX' - X̄X̄' is

\[
\sum_{j=1}^{n} x_{hj} x_{ij} - n\bar{x}_h\bar{x}_i.
\]

We now show that these two expressions are identical. In what follows, all sums are from j = 1 to n. Multiplying the factors in brackets in the first expression shows that

\[
\sum (x_{hj} - \bar{x}_h)(x_{ij} - \bar{x}_i)
= \sum x_{hj}x_{ij} - \sum \bar{x}_h x_{ij} - \sum \bar{x}_i x_{hj} + \sum \bar{x}_h \bar{x}_i .
\]

Now note that Σ x̄_h x_ij = x̄_h Σ x_ij and Σ x̄_i x_hj = x̄_i Σ x_hj, since x̄_h and x̄_i are constant with respect to j (i.e., are the same for all values of j). Similarly, Σ x̄_h x̄_i = n x̄_h x̄_i, since it is the sum of n constant terms x̄_h x̄_i. Thus

\[
\sum (x_{hj} - \bar{x}_h)(x_{ij} - \bar{x}_i)
= \sum x_{hj}x_{ij} - \bar{x}_h \sum x_{ij} - \bar{x}_i \sum x_{hj} + n\bar{x}_h\bar{x}_i .
\]

Next make the substitutions

\[
\sum x_{ij} = n\bar{x}_i \quad \text{and} \quad \sum x_{hj} = n\bar{x}_h .
\]

Then

\[
\sum (x_{hj} - \bar{x}_h)(x_{ij} - \bar{x}_i)
= \sum x_{hj}x_{ij} - n\bar{x}_h\bar{x}_i - n\bar{x}_i\bar{x}_h + n\bar{x}_h\bar{x}_i
= \sum x_{hj}x_{ij} - n\bar{x}_h\bar{x}_i ,
\]

as was to be proved.
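A numeric spot-check of the identity, using species 1 and 2 of Data Matrix #8 (plain Python; the variable names are ours):

```python
# Check: sum (x_hj - xbar_h)(x_ij - xbar_i) equals sum x_hj x_ij - n xbar_h xbar_i
x_h = [4, 8, 10, 14]    # species h = 1 of Data Matrix #8
x_i = [17, 11, 3, 1]    # species i = 2
n = len(x_h)
xbar_h = sum(x_h) / n
xbar_i = sum(x_i) / n

lhs = sum((xh - xbar_h) * (xi - xbar_i) for xh, xi in zip(x_h, x_i))
rhs = sum(xh * xi for xh, xi in zip(x_h, x_i)) - n * xbar_h * xbar_i

print(lhs, rhs)   # both -88.0, the (1, 2) element of R
```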

3.4. THE EIGENVALUES AND EIGENVECTORS OF A SQUARE SYMMETRIC MATRIX

This section resumes the discussion in Section 3.2, where it was shown how the pattern of a swarm of data points can be changed by a linear transformation. It should be recalled that, for illustration, a 2 × 4 data matrix was considered. The four points it represented were the vertices of a square. Premultiplication of the data matrix by various 2 × 2 matrices brought about changes in the position or shape of the swarm; see Figure 3.2. When the transforming matrix was orthogonal, it caused a rigid rotation of the swarm around the origin of the coordinates (page 94); and when the transforming matrix was diagonal, it caused a change in the scales of the coordinate axes. Clearly, one can subject a swarm of data points to a sequence of transformations one after another, as is demonstrated in the following. The relevance of the discussion to ecological ordination procedures will become clear subsequently.

Rotating and Rescaling a Swarm of Data Points

To begin, consider the following sequence of transformations:

1. Premultiplication of a data matrix X by an orthogonal matrix U. (Throughout the rest of this book, the symbol U always denotes an orthogonal matrix.)
2. Premultiplication of UX by a diagonal matrix Λ (capital Greek lambda; the reason for using this symbol is explained later).
3. Premultiplication of ΛUX by U', the transpose of U, giving U'ΛUX.


Here is an example. As in Section 3.2, we use a 2 × 4 data matrix representing a swarm of four points in two-space. This time let

\[
X = \begin{pmatrix} 1 & 5 & 1 & 5 \\ 1 & 1 & 4 & 4 \end{pmatrix}.
\]

Thus the data swarm consists of the vertices of a rectangle (see Figure 3.6a). The first transformation is to cause a clockwise rotation of the rectangle around the origin through an angle of 25°. The orthogonal matrix required to produce this rotation is (see page 101)

\[
U = \begin{pmatrix} \cos 25^\circ & \cos 65^\circ \\ \cos 115^\circ & \cos 25^\circ \end{pmatrix}
= \begin{pmatrix} 0.9063 & 0.4226 \\ -0.4226 & 0.9063 \end{pmatrix}.
\]

It is found that the transformed data matrix is

\[
UX = \begin{pmatrix} 1.329 & 4.954 & 2.597 & 6.222 \\ 0.484 & -1.207 & 3.203 & 1.512 \end{pmatrix}.
\]

Figure 3.6. The data swarms represented by: (a) X; (b) UX; (c) ΛUX; (d) U'ΛUX. The lines joining the points are put in to make the shapes of the swarms apparent. The elements of U and Λ are given in the text.


The swarm of points represented by this matrix, which is merely the original rectangle in a new position, is shown in Figure 3.6b.

The second transformation is to be an alteration of the coordinate scales. Let us increase the scale on the x₁-axis (the abscissa) by a factor of λ₁ = 2.4 and on the x₂-axis (the ordinate) by a factor of λ₂ = 1.6. This is equivalent to putting

\[
\Lambda = \begin{pmatrix} 2.4 & 0 \\ 0 & 1.6 \end{pmatrix}.
\]

Then

\[
\Lambda UX = \begin{pmatrix} 3.189 & 11.890 & 6.232 & 14.933 \\ 0.774 & -1.931 & 5.124 & 2.419 \end{pmatrix}.
\]

The newly transformed data are plotted in Figure 3.6c. The shape of the swarm has changed from a rectangle to a parallelogram.

The third and last transformation consists in rotating the parallelogram back, counterclockwise, through 25°. This is achieved by premultiplying ΛUX by U', the transpose of U. It is found that

\[
U'\Lambda UX = \begin{pmatrix} 2.563 & 11.592 & 3.483 & 12.511 \\ 2.049 & 3.275 & 7.278 & 8.504 \end{pmatrix}.
\]

These points are plotted in Figure 3.6d.

Now observe that we could have achieved the same result by premultiplying the original X by the matrix A, where

\[
A = U'\Lambda U = \begin{pmatrix} 2.2571 & 0.3064 \\ 0.3064 & 1.7429 \end{pmatrix}. \tag{3.11}
\]
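The three-step sequence, and its collapse into the single matrix A of Equation (3.11), can be checked numerically; a numpy sketch using the values in the text:

```python
import numpy as np

deg = np.pi / 180.0
U = np.array([[np.cos(25 * deg), np.cos(65 * deg)],
              [np.cos(115 * deg), np.cos(25 * deg)]])   # orthogonal: 25 deg rotation
Lam = np.diag([2.4, 1.6])                               # rescales the two axes
X = np.array([[1.0, 5.0, 1.0, 5.0],
              [1.0, 1.0, 4.0, 4.0]])                    # corners of the rectangle

# Step by step: rotate, rescale, rotate back
step = U.T @ (Lam @ (U @ X))

# One step: premultiply X by A = U' Lam U
A = U.T @ Lam @ U
print(np.round(A, 4))       # [[2.2571 0.3064] [0.3064 1.7429]]
print(np.round(A @ X, 3))   # same points as U'LamUX in the text
```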

Observe that A is a square symmetric matrix. We now make the following assertion, without proof. Any s × s square symmetric matrix A is the product of three factors that may be written U'ΛU; U and its transpose U' are orthogonal matrices; Λ is a diagonal matrix. In the general s-dimensional (or s-species) case, all three factors and A itself are s × s matrices.

The elements on the main diagonal of Λ are known as the eigenvalues of A (also called the latent values or roots, or characteristic values or roots, of A). That is, since

\[
\Lambda = \begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
0 & 0 & \cdots & \lambda_s
\end{pmatrix},
\]

the eigenvalues of A are λ₁, λ₂, ..., λ_s. The eigenvalues of a matrix are always denoted by λs by long-established custom; likewise, the matrix of eigenvalues is always denoted by Λ. This is why Λ was used for the diagonal matrix that rescaled the axes in the second of the three transformations performed previously. The rows of U, which are s-element row vectors, are known as the eigenvectors of A (also called the latent vectors, or characteristic vectors, of A).

In the preceding numerical example we chose the elements of U (hence of U') and Λ, and then obtained A by forming the product U'ΛU. Therefore we knew, because we had chosen them, the eigenvalues and eigenvectors of this A in advance. The eigenvalues are λ₁ = 2.4 and λ₂ = 1.6. And the eigenvectors are

\[
u_1' = (0.9063 \quad 0.4226) \quad \text{and} \quad u_2' = (-0.4226 \quad 0.9063),
\]

the two rows of U.

Now suppose we had started with the square symmetric matrix A and had not known the elements of U and Λ. Would it have been possible to determine U and Λ knowing only the elements of A? The answer is yes. The analysis which, starting with A, finds U and Λ such that A = U'ΛU, in which U is orthogonal and Λ diagonal, is known as an eigenanalysis. In nearly all ecological ordination procedures, an eigenanalysis forms the heart of the computations, as shown in Chapter 4.

The next step here is a demonstration of one way in which this may be done, first with the 2 × 2 matrix A previously constructed, and then with a 3 × 3 symmetric matrix. The way in which the method can be generalized to permit eigenanalysis of an s × s symmetric matrix with s > 3 will then be clear. Of course, with large s the computations exhaust the patience of anything but a computer.
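A library eigenroutine does recover Λ and U from A alone. A sketch using numpy's `numpy.linalg.eigh`, which returns eigenvalues in ascending order and eigenvectors as columns, so its output must be reordered and transposed to match the U of the text (and may differ in sign):

```python
import numpy as np

A = np.array([[2.2571, 0.3064],
              [0.3064, 1.7429]])

# Eigenanalysis of the square symmetric matrix A
eigvals, eigvecs = np.linalg.eigh(A)    # eigh is for symmetric matrices

# Put the largest eigenvalue first, to match the text's ordering
order = np.argsort(eigvals)[::-1]
eigvals = eigvals[order]
eigvecs = eigvecs[:, order]

print(np.round(eigvals, 4))             # approximately [2.4 1.6]

# Reassemble A = U' Lam U, with the rows of U being the eigenvectors
U = eigvecs.T
print(np.round(U.T @ np.diag(eigvals) @ U, 4))
```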

Hotelling's Method of Eigenanalysis

To begin, recall Equation (3.11), A = U'ΛU, and premultiply both sides by U to give UA = UU'ΛU. Now observe that, since U is orthogonal, UU' = I by the definition of an orthogonal matrix. Therefore,

\[
UA = I\Lambda U = \Lambda U.
\]

Let us write this out in full for the s = 2 case. For U and A we write each separate element in the customary way, using the corresponding lowercase letter subscripted to show the row and column of the element. For Λ we use the knowledge we already possess, namely, that it is a diagonal matrix. Thus

\[
\begin{pmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{pmatrix}
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
= \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}
\begin{pmatrix} u_{11} & u_{12} \\ u_{21} & u_{22} \end{pmatrix}.
\]

On multiplying out both sides, this becomes

\[
\begin{pmatrix}
u_{11}a_{11} + u_{12}a_{21} & u_{11}a_{12} + u_{12}a_{22} \\
u_{21}a_{11} + u_{22}a_{21} & u_{21}a_{12} + u_{22}a_{22}
\end{pmatrix}
= \begin{pmatrix}
\lambda_1 u_{11} & \lambda_1 u_{12} \\
\lambda_2 u_{21} & \lambda_2 u_{22}
\end{pmatrix},
\]

which states the equality of two 2 × 2 matrices. Not only does the left side (as a whole) equal the right side (as a whole), but it follows also that any row of the matrix on the left side equals the corresponding row of the matrix on the right side. Thus, considering only the top row,

\[
(u_{11}a_{11} + u_{12}a_{21}, \quad u_{11}a_{12} + u_{12}a_{22}) = (\lambda_1 u_{11}, \quad \lambda_1 u_{12}),
\]

an equation having two-element row vectors on both sides. This is the same as the more concise equation

\[
u_1' A = \lambda_1 u_1',
\]

in which u'_1 is the two-element row vector constituting the first row of U, and λ₁ is the only nonzero element in the first row of Λ. Hence λ₁ and u'_1 together are an eigenvalue of A and its corresponding eigenvector.

Hotelling's method for obtaining the numerical values of λ₁ and the elements of u'_1 when the elements of A are given proceeds as follows. The steps are illustrated using

\[
A = \begin{pmatrix} 2.2571 & 0.3064 \\ 0.3064 & 1.7429 \end{pmatrix},
\]

the symmetric matrix whose factors we already know.

Step 1. Choose arbitrary trial values for the elements of u'_1. Denote this trial vector by w'_(0). It is convenient to start with w'_(0) = (1, 1).

Step 2. Form the product w'_(0)A. Thus

\[
(1, \; 1)A = (2.5635, \; 2.0493).
\]

Let the largest element on the right be denoted by l₁. Thus l₁ = 2.5635.

Step 3. Divide each element in the vector on the right by l₁ to give

\[
(2.5635, \; 2.0493) = 2.5635\,(1, \; 0.7994) = l_1 w'_{(1)},
\]

say. Now w'_(1) is to be used in place of w'_(0) as the next trial vector.

Step 4. Do steps 2 and 3 again with w'_(1) in place of w'_(0), and obtain l₂ and w'_(2).

Continue the cycle of operations (steps 2 and 3) until a trial vector is obtained that is exactly equal, within a chosen number of decimal places, to the preceding one. Denote this vector by w'_(F) (the subscript F is for "final"). Then the elements of w'_(F) are proportional to the elements of u'_1, the first eigenvector of A, and l_F, the divisor at the final cycle, is equal to λ₁, the largest eigenvalue of A. Thus, in the example, at the nineteenth cycle and with four decimal places we obtain

\[
(1, \; 0.4663)A = (2.4000, \; 1.1191) = 2.4000\,(1, \; 0.4663).
\]


That is,

\[
w'_{(F)} = (1, \; 0.4663) \quad \text{and} \quad l_F = \lambda_1 = 2.4000.
\]

We now wish to obtain u'_1 from w'_(F)
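Hotelling's cycle of steps 2 and 3 is what is now called the power method. A minimal sketch (the helper function name is ours) that reproduces the numbers above:

```python
import numpy as np

def hotelling_largest(A, tol=1e-4, max_cycles=100):
    """Hotelling's steps 1-4: returns (l_F, w_F) for a symmetric matrix A."""
    w = np.ones(A.shape[0])                   # step 1: trial vector w'(0) = (1, 1, ...)
    for _ in range(max_cycles):
        v = w @ A                             # step 2: form the product w'A
        l = np.max(v)                         # the largest element, l
        w_new = v / l                         # step 3: divide through by l
        if np.allclose(w_new, w, atol=tol):   # step 4: cycle until unchanged
            return l, w_new
        w = w_new
    return l, w_new

A = np.array([[2.2571, 0.3064],
              [0.3064, 1.7429]])
l, w = hotelling_largest(A)
print(round(l, 4), np.round(w, 4))   # approximately 2.4 and (1, 0.4663)
```

The iteration converges to the largest eigenvalue because, after repeated multiplication by A, the component of the trial vector along the dominant eigenvector swamps all the others; dividing by the largest element merely keeps the numbers from growing without bound.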