**Project #5: Dominant Eigenvalue Computation**

**Introduction**

The purpose of this project is to develop techniques to numerically
calculate the largest *eigenvalue* of a matrix. The largest eigenvalue
often gives a great deal of information about the stability of the
solutions of many models, such as population models. Commands such as
*eigenvals* in Maple try to calculate **all** eigenvalues and, in the case
of larger matrices, may fail to do so. The method used here is the "Power
Method".

**Background** Basics of eigenvalues and eigenvectors; sections 5.1 and
6.1 of Kolman. You should be comfortable with the definition of
eigenvalue/eigenvector, as it will be applied throughout this project. You
will also need to use software to compute matrix products and scalar
multiples of matrices.

Assume in what follows that A is an n×n matrix with eigenvalues e_{1},
e_{2}, e_{3}, ..., e_{n} arranged in decreasing order of absolute value,
so that

|e_{1}| > |e_{2}| > . . . > |e_{n}|

making e_{1} the so-called **dominant** eigenvalue. Also assume that the
eigen**vectors** associated with e_{1}, e_{2}, e_{3}, ..., e_{n} are
denoted by **v**_{1}, **v**_{2}, ..., **v**_{n} (so A**v**_{i} =
e_{i}**v**_{i} for each i).

**Our problem is to find e_{1} and v_{1} for a given matrix A.**

**Some Simple Linear Algebra**

In this project it may be helpful to remember that:

* any nonzero scalar multiple of an eigenvector is also an eigenvector (for the same eigenvalue)

* the acid test for being an eigenvalue and eigenvector is **Ax** = e**x**
(if you multiply a vector by a matrix and get a scalar multiple of the
same vector, then you have an eigenvector and the scalar is its eigenvalue)

* if **u**_{1} denotes the column vector (1,0,0,...,0)^{t}, then
A**u**_{1} gives you the **1st** column of **A** (try it!). In general, if
**u**_{i} is the column vector of all 0s except a 1 in the ith position,
then A**u**_{i} yields the ith column of A; a quick software check appears
after this list.
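A minimal NumPy check of this last fact (the 3×3 matrix is just a hypothetical example):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
u2 = np.array([0, 1, 0])   # all 0s except a 1 in the 2nd position
print(A @ u2)              # [2 5 8], the 2nd column of A
```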

**The Power Method**

Conceptually, the Power Method is straightforward. First take any vector
**x** and expand it in terms of the eigenvectors of A:

**x** = c_{1}**v**_{1} + c_{2}**v**_{2} + ... + c_{n}**v**_{n}
(so the c's are the coordinates of **x**, as before)

Multiply both sides by A and use the linearity of matrix multiplication:

A**x** = c_{1}A**v**_{1} + c_{2}A**v**_{2} + ... + c_{n}A**v**_{n}

as well as the definition of eigenvectors:

A**x** = c_{1}e_{1}**v**_{1} + c_{2}e_{2}**v**_{2} + ... + c_{n}e_{n}**v**_{n}

Repeating the multiplication by A on both sides **k** times, we obtain:

A^{k}**x** = c_{1}e_{1}^{k}**v**_{1} + c_{2}e_{2}^{k}**v**_{2} + ... + c_{n}e_{n}^{k}**v**_{n}

Now for large k, since e_{1} is greater in absolute value than all the
other e_{i}'s, |e_{1}^{k}| will be much greater than |e_{i}^{k}| (for all
i = 2, ..., n). Hence the first term on the right-hand side will dominate
the others and we will have

A^{k}**x** ≈ c_{1}e_{1}^{k}**v**_{1}     for k large

Since any nonzero multiple of an eigenvector is still an eigenvector, we
will have found (approximately) an eigenvector corresponding to e_{1}.
Furthermore, if we compare the components of this vector to those of the
next power (A^{k+1}**x**), they should all have the **same** ratio, and
that ratio should be e_{1}.

The only question that remains is: what is the vector **x**? It may be
chosen fairly arbitrarily, but (if you look at the expansion) we would
**not** want it to be one of the *other* eigenvectors or a combination of
them; that is, we need c_{1} ≠ 0, or else the dominant term drops out of
the expansion. OK; that's no help, since we don't know the eigenvectors in
advance. In the applications we have used in this course and in the
problems that follow, it may be shown that a safe choice for **x** is
simply the column vector of all 1s: (1,1,1,...,1)^{t}.
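Since the project allows any software for the matrix arithmetic, one possible rendering of the whole procedure is sketched below in Python/NumPy (the 3×3 matrix is a hypothetical placeholder, not one of the project's matrices; substitute your own):

```python
import numpy as np

def power_method(A, k=10):
    """Approximate the dominant eigenvalue/eigenvector of A by forming
    A^k x with x = (1,1,...,1)^t and scaling by the first component."""
    x = np.ones(A.shape[0])                 # safe starting vector of all 1s
    xk = np.linalg.matrix_power(A, k) @ x   # A^k x
    xk1 = A @ xk                            # A^{k+1} x
    e1 = xk1[0] / xk[0]                     # ratio of corresponding components
    v1 = xk / xk[0]                         # scale so the first component is 1
    return e1, v1

# Hypothetical example (its eigenvalues are 4, 2, 1, so 4 is dominant):
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(power_method(A, k=10))   # approximately 4 and (1, 2, 1)^t
```

Raising k until two consecutive scaled iterates agree to 3 decimal places is the stopping rule used in Example 1 below.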

**Example 1 ** Suppose that A is given to be

Using **x** as above and, for example, the *multiply* command of Maple, we
find that

Now these, at face value, contain little of assistance. However, if we divide each one by its first component, we get

suggesting** k** is large enough to have gotten us near the limit
(to 3 places). We check our result using the definition of eigenvalue/vector:

showing that we did indeed find the dominant eigenvalue, **4.820**, and an
eigenvector corresponding to it, (1,1.910,2.648)^{t}. How did we know we
used a high enough power? Roughly, after scaling A^{10}**x** and
A^{11}**x**, we got the same result to three decimal places. Had they
differed, we would have gone on to two consecutive higher powers.
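This acid test can also be scripted. A minimal sketch, assuming `A` holds the matrix of Example 1 and using the values found above:

```python
import numpy as np

def is_eigenpair(A, e, v, places=3):
    """Check A @ v against e * v to the stated number of decimal places."""
    return np.allclose(A @ v, e * v, atol=0.5 * 10**(-places))

# With the Example 1 results (uncomment once A has been entered):
# print(is_eigenpair(A, 4.820, np.array([1.0, 1.910, 2.648])))
```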

For nonnegative matrices, the foundation for this method is provided by the powerful Perron-Frobenius Theorem, which actually has two versions, depending on what kind of nonnegative matrix one has.

**Exercises:**

**1.** Use the methods above to obtain the dominant eigenvalue and a
corresponding eigenvector for each of the following matrices:

a. the original matrix from population dynamics and urban planning

b. the transition matrix from any regular Markov Chain; 4x4 or bigger; your choice

c. the matrix

(in each case, your write-up should indicate the eigenvalue and an
eigenvector, as well as the powers you used to obtain them; all results
should be accurate to **3** decimal places)

**2.** For each matrix from part 1, use an appropriate command from your
software package that computes eigenvalues and eigenvectors, and compare
the results with your work from part 1.

**3**. a. The method does not always work. If A has **two** dominant
eigenvalues (like 8 and -8), then the asymptotic behavior described above
does not happen. Examine the following matrix in light of this remark:

(in other words, try to apply the power method as we did earlier). Your report should describe what happened and what went wrong.

b. Prove the following algebraic result: if **x** is an eigenvector of A
with eigenvalue e, then **x** is also an eigenvector of (A + cI_{n}) with
eigenvalue e + c. (hint: use the definition of eigenvalue/eigenvector)

c. Use your result from part b to obtain the dominant eigenvalue/eigenvector
for the matrix in part a by adding **2** down the diagonal.
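A hedged sketch of this shift in NumPy (the 2×2 matrix below is a made-up illustration with eigenvalues 8 and -8, not necessarily the matrix from part a):

```python
import numpy as np

A = np.array([[0.0, 8.0],     # hypothetical matrix with eigenvalues 8 and -8
              [8.0, 0.0]])

# By part b, A + 2I has eigenvalues 8 + 2 = 10 and -8 + 2 = -6,
# so 10 is strictly dominant and the power method converges.
B = A + 2 * np.eye(A.shape[0])
x = np.ones(A.shape[0])
xk = np.linalg.matrix_power(B, 10) @ x
e_shifted = (B @ xk)[0] / xk[0]   # dominant eigenvalue of the shifted matrix
print(e_shifted - 2)              # undo the shift: recovers the eigenvalue 8 of A
```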

**4.** One can also, in some cases, find the dominant eigenvalue by
examining the graph of the characteristic polynomial of A. Remember that
wherever the graph crosses the x-axis is an eigenvalue; you simply want
the largest crossing. Use software commands to generate the characteristic
polynomial of the matrix in 1c and plot it so as to find the largest root.
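For instance, a hedged NumPy/matplotlib sketch (`np.poly` returns the coefficients of a matrix's characteristic polynomial; the matrix shown is a placeholder for the one in 1c):

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[2.0, 1.0, 0.0],   # placeholder; substitute the matrix from 1c
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

coeffs = np.poly(A)                  # characteristic polynomial coefficients
xs = np.linspace(-10, 10, 400)       # widen the window if the largest root is cut off
plt.plot(xs, np.polyval(coeffs, xs))
plt.axhline(0, color="gray")         # every x-axis crossing is an eigenvalue
plt.xlabel("x")
plt.show()
```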

**5.** a. Algebra problem: **prove** that if A is nonnegative and has
dominant eigenvalue e_{1} and dominant eigenvector **x**_{0} (which you
may assume has all positive entries), then e_{1} lies *between* the
smallest and the largest of the column sums of A. (hint: scale **x**_{0}
so its components add up to 1, then look at the definition of
eigenvalue/vector)

Examine the problem in the discussion of this project for an example.

b. Use the above result to show that for a Markov chain, the dominant
eigenvalue is *always* 1.

c. **Prove** that if the row sums of a matrix are all the same, then
(1,1,1,...,1)^{t} is an eigenvector. What can you say about its eigenvalue?

**6**. In order for the Power Method to work, what implicit assumptions
have we made about A and its eigenvalues/eigenvectors?