# Abstract


In mathematics, a vector may be thought of as an arrow. It has a length, called its *magnitude*, and it points in some particular *direction*. A linear transformation may be considered to operate on a vector to change it, usually changing both its magnitude and its direction. An **eigenvector** of a given linear transformation is a non-zero vector which is multiplied by a constant called the **eigenvalue** as a result of that transformation. The direction of the eigenvector is either unchanged by that transformation (for positive eigenvalues) or reversed (for negative eigenvalues).

For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector is unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An **eigenspace** of a given transformation is the span of the eigenvectors of that transformation with the same eigenvalue, together with the zero vector (which has no direction). Such an eigenspace is an example of a subspace of a vector space.

In linear algebra, every linear transformation between finite-dimensional vector spaces can be given by a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard methods for finding **eigenvalues**, **eigenvectors**, and **eigenspaces** of a given matrix are discussed below.

These concepts play a major role in several branches of both pure and applied mathematics — appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics.

Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and frequencies, for example. In these cases, the concept of *direction* loses its ordinary meaning, and is given an abstract definition. Even so, if this abstract *direction* is unchanged by a given linear transformation, the prefix “eigen” is used, as in *eigenfunction*, *eigenmode*, *eigenstate*, and *eigenfrequency*.

## Definitions: the eigenvalue equation

Linear transformations of a vector space, such as rotation, reflection, stretching, compression, shear or any combination of these, may be visualized by the effect they produce on vectors. In other words, they are vector functions. More formally, in a vector space *L* a vector function *A* is defined if for each vector **x** of *L* there corresponds a unique vector **y** = *A*(**x**) of *L*. For the sake of brevity, the parentheses around the vector on which the transformation is acting are often omitted. A vector function *A* is *linear* if it has the following two properties:

- *Additivity*: *A*(**x** + **y**) = *A***x** + *A***y**
- *Homogeneity*: *A*(α**x**) = α*A***x**

where **x** and **y** are any two vectors of the vector space *L* and α is any scalar.^{[10]} Such a function is variously called a *linear transformation*, *linear operator*, or *linear endomorphism* on the space *L*.

Given a linear transformation *A*, a non-zero vector **x** is defined to be an *eigenvector* of *A* if there exists a scalar λ such that *A***x** = λ**x**; the scalar λ is called the *eigenvalue* of *A* corresponding to the eigenvector **x**.

The key equation in this definition is the eigenvalue equation, *A***x** = λ**x**. Most vectors **x** will not satisfy such an equation. A typical vector **x** changes direction when acted on by *A*, so that *A***x** is not a multiple of **x**. This means that only certain special vectors **x** are eigenvectors, and only certain special numbers λ are eigenvalues. Of course, if *A* is a multiple of the identity matrix, then no vector changes direction, and all non-zero vectors are eigenvectors. But in the usual case, eigenvectors are few and far between. They are the “normal modes” of the system, and they act independently.^{[12]}
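A minimal NumPy sketch of this distinction; the matrix `A` below is an illustrative assumption (it stretches by 3 along the line *y* = *x* and leaves *y* = −*x* unchanged), not an example from the text:

```python
import numpy as np

# Illustrative 2x2 transformation (an assumption, not from the text).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

x = np.array([1.0, 1.0])     # an eigenvector of A (found by inspection)
y = np.array([1.0, 0.0])     # a typical vector, not an eigenvector

print(A @ x)                 # [3. 3.] = 3 * x: direction preserved
print(A @ y)                 # [2. 1.]: not a multiple of y, direction changed
```

Here (1, 1) satisfies the eigenvalue equation with λ = 3, while (1, 0) is rotated off its original direction, just as the paragraph above describes.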

The requirement that the eigenvector be non-zero is imposed because the equation *A***0** = λ**0** holds for every *A* and every λ. Since the equation is always trivially true, it is not an interesting case. In contrast, an eigenvalue can be zero in a nontrivial way. An eigenvalue can be, and often is, a complex number. In the definition given above, eigenvectors and eigenvalues do not occur independently. Instead, each eigenvector is associated with a specific eigenvalue. For this reason, an eigenvector **x** and a corresponding eigenvalue λ are often referred to as an *eigenpair*. One eigenvalue can be associated with several, or even with infinitely many, eigenvectors. But conversely, if an eigenvector is given, the associated eigenvalue for this eigenvector is unique. Indeed, from the equality *A***x** = λ**x** = λ′**x** and from **x** ≠ **0** it follows that λ = λ′.^{[13]}

Fig. 2. The eigenvalue equation as a homothety (similarity transformation) on the vector **x**.

Geometrically (Fig. 2), the eigenvalue equation means that under the transformation *A* eigenvectors experience only changes in magnitude and sign; the direction of *A***x** is the same as that of **x**. This type of linear transformation is defined as a homothety (dilatation,^{[14]} similarity transformation). The eigenvalue λ is simply the amount of “stretch” or “shrink” to which a vector is subjected when transformed by *A*. If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation *I* under which a vector **x** remains unchanged, *I***x** = **x**, is defined as the identity transformation. If λ = −1, the vector flips to the opposite direction (rotates by 180°); this is defined as a reflection.

If **x** is an eigenvector of the linear transformation *A* with eigenvalue λ, then any vector **y** = α**x** is also an eigenvector of *A* with the same eigenvalue. From the homogeneity of the transformation *A* it follows that *A***y** = α(*A***x**) = α(λ**x**) = λ(α**x**) = λ**y**. Similarly, using the additivity property of the linear transformation, it can be shown that any linear combination of eigenvectors with eigenvalue λ has the same eigenvalue λ.^{[15]} Therefore, any non-zero vector in the line through **x** and the zero vector is an eigenvector with the same eigenvalue as **x**. Together with the zero vector, those eigenvectors form a subspace of the vector space called an *eigenspace*. The eigenvectors corresponding to different eigenvalues are linearly independent,^{[16]} meaning, in particular, that in an *n*-dimensional space the linear transformation *A* cannot have more than *n* eigenvectors with different eigenvalues.^{[17]} The vectors of the eigenspace generate a linear subspace of *L _{n}* which is invariant (unchanged) under this transformation.^{[18]}

If a basis is defined in the vector space L_{n}, all vectors can be expressed in terms of components. Polar vectors can be represented as one-column matrices with *n* rows, where *n* is the space dimensionality. Linear transformations can be represented with square matrices: to each linear transformation *A* of L_{n} corresponds a square matrix of order *n*. Conversely, to each square matrix of order *n* corresponds a linear transformation of L_{n} at a given basis. Because of the additivity and homogeneity of the linear transformation and of the eigenvalue equation (which is itself a linear transformation, a homothety), those vector functions can be expressed in matrix form. Thus, in the two-dimensional vector space *L*_{2} fitted with a standard basis, the eigenvector equation for a linear transformation *A* can be written in the following matrix representation:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix},$$

where the juxtaposition of matrices means matrix multiplication. This is equivalent to a set of *n* linear equations, where *n* is the number of basis vectors in the basis set. In these equations both the eigenvalue λ and the components of **x** are unknown variables.

The eigenvectors of *A* as defined above are also called *right eigenvectors* because they are column vectors that stand on the right side of the matrix *A* in the eigenvalue equation. If the transposed matrix *A*^{T} satisfies the eigenvalue equation, that is, if *A*^{T}**x** = λ**x**, then λ**x**^{T} = (λ**x**)^{T} = (*A*^{T}**x**)^{T} = **x**^{T}*A*, or **x**^{T}*A* = λ**x**^{T}. The last equation is similar to the eigenvalue equation but instead of the column vector **x** it contains its transposed vector, the row vector **x**^{T}, which stands on the left side of the matrix *A*. The eigenvectors that satisfy the eigenvalue equation **x**^{T}*A* = λ**x**^{T} are called *left eigenvectors*. They are row vectors.^{[19]} In many common applications, only right eigenvectors need to be considered. Hence the unqualified term “eigenvector” can be understood to refer to a right eigenvector. Eigenvalue equations, written in terms of right or left eigenvectors (*A***x** = λ**x** and **x**^{T}*A* = λ**x**^{T}) have the same eigenvalue λ.^{[20]}
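The identity **x**^{T}*A* = λ**x**^{T} ⇔ *A*^{T}**x** = λ**x** means left eigenvectors can be computed as right eigenvectors of the transpose. A short NumPy sketch (the matrix is an illustrative assumption):

```python
import numpy as np

# Illustrative non-symmetric matrix (assumption, not from the text).
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# Right eigenvectors: A x = lambda x (columns of vr).
evals_r, vr = np.linalg.eig(A)

# Left eigenvectors: x^T A = lambda x^T, i.e. A^T x = lambda x,
# so they are the right eigenvectors of the transposed matrix.
evals_l, vl = np.linalg.eig(A.T)

# Both problems share the same eigenvalues, as stated in the text.
print(np.sort(evals_r), np.sort(evals_l))   # [2. 3.] [2. 3.]
```

Note that while the eigenvalues coincide, the left and right eigenvectors of a non-symmetric matrix generally differ.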

An eigenvector is defined to be a *principal* or *dominant eigenvector* if it corresponds to the eigenvalue of largest magnitude (for real numbers, largest absolute value). Repeated application of a linear transformation to an arbitrary vector results in a vector proportional (collinear) to the principal eigenvector.^{[20]} The eigenvalue of smallest magnitude of a matrix is the reciprocal of the dominant eigenvalue of the inverse of the matrix; thus, when an application needs the eigenvalue of smallest magnitude, the inverse matrix is often solved for its dominant eigenvalue.
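The “repeated application” idea above is the basis of the power-iteration method. A minimal sketch, under illustrative assumptions (the matrix, starting vector, and iteration count are my choices, not from the text); it also demonstrates the reciprocal relationship with the inverse matrix:

```python
import numpy as np

def power_iteration(A, iters=200):
    """Repeatedly apply A to an arbitrary vector; the result turns
    toward the principal (dominant) eigenvector."""
    x = np.zeros(A.shape[0])
    x[0] = 1.0                      # arbitrary non-zero starting vector
    for _ in range(iters):
        x = A @ x
        x = x / np.linalg.norm(x)   # renormalize to avoid overflow
    return x @ A @ x, x             # Rayleigh-quotient estimate, eigenvector

# Illustrative matrix: eigenvalues 3 (along (1, 1)) and 1 (along (1, -1)).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, x = power_iteration(A)
print(round(lam, 6))                # 3.0, the dominant eigenvalue

# Smallest-magnitude eigenvalue of A = reciprocal of the dominant
# eigenvalue of inv(A), as stated in the text.
lam_small = 1.0 / power_iteration(np.linalg.inv(A))[0]
print(round(lam_small, 6))          # 1.0
```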

The applicability of the eigenvalue equation to general matrix theory extends the use of eigenvectors and eigenvalues to all matrices, and thus greatly extends the scope of use of these mathematical constructs not only to transformations in linear vector spaces but to all fields of science that use matrices: linear equations systems, optimization, vector and tensor calculus, all fields of physics that use matrix quantities, particularly quantum physics, relativity, and electrodynamics, as well as many engineering applications.

## Characteristic equation

The determination of the eigenvalues and eigenvectors is important in virtually all areas of physics and many engineering problems, such as stress calculations, stability analysis, oscillations of vibrating systems, etc. It is equivalent to matrix diagonalization, and is the first step of orthogonalization, finding of invariants, optimization (minimization or maximization), analysis of linear systems, and many other common applications.

The usual method of finding all eigenvectors and eigenvalues of a system is first to get rid of the unknown components of the eigenvectors, then find the eigenvalues, plug those back one by one in the eigenvalue equation in matrix form and solve that as a system of linear equations to find the components of the eigenvectors. From the identity transformation *I***x** = **x**, where *I* is the identity matrix, **x** in the eigenvalue equation can be replaced by *I***x** to give:

*A***x** = λ*I***x**

The identity matrix is needed to keep matrices, vectors, and scalars straight; the equation (*A* − λ)**x** = **0** is shorter, but mixed up, since it does not differentiate between matrix, scalar, and vector.^{[21]} The expression on the right-hand side is transferred to the left-hand side with a negative sign, leaving **0** on the right-hand side:

*A***x** − λ*I***x** = **0**

The eigenvector **x** is factored out:

(*A* − λ*I*)**x** = **0**

This can be viewed as a linear system of equations in which the coefficient matrix is the expression in the parentheses, the matrix of unknowns is **x**, and the right-hand side is zero. According to Cramer’s rule, this system of equations has non-trivial solutions (not all components zero) if and only if its determinant vanishes, so the solutions of the equation are given by:

det(*A* − λ*I*) = 0

This equation is defined as the *characteristic equation* (less often, secular equation) of *A*, and the left-hand side is defined as the *characteristic polynomial*. The eigenvector **x** or its components do not appear in the characteristic equation, so at this stage they are dispensed with, and the only unknowns that remain to be calculated are the eigenvalues (the components of the matrix *A* are given, *i.e.*, known beforehand). For a vector space *L*_{2}, the transformation *A* is a 2×2 square matrix, and the characteristic equation can be written in the following form:

$$\det\begin{bmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{bmatrix} = 0.$$

Expansion of the determinant in the left hand side results in a characteristic polynomial which is a monic (its leading coefficient is 1) polynomial of the second degree, and the characteristic equation is a quadratic equation:

(*a*_{11} − λ)(*a*_{22} − λ) − *a*_{12}*a*_{21} = 0

or expanded:

λ^{2} − (*a*_{11} + *a*_{22})λ + (*a*_{11}*a*_{22} − *a*_{12}*a*_{21}) = 0

This equation has the following solutions (roots):

$$\lambda_{1,2} = \frac{(a_{11} + a_{22}) \pm \sqrt{(a_{11} + a_{22})^2 - 4\,(a_{11}a_{22} - a_{12}a_{21})}}{2}$$

For real matrices, the coefficients of the characteristic polynomial are all real. The number and type of roots depend on the value of the discriminant, *D*. For cases *D* = 0, *D* > 0, or *D* < 0, respectively, the roots are one real, two real, or two complex. If the roots are complex, they are also complex conjugates of each other. When the number of roots is less than the degree of the characteristic polynomial (the latter is also the number of dimensions of the vector space) the equation has a *multiple root*. In the case of a quadratic equation with one root, this root is a double root, or a root with multiplicity 2. A root with a multiplicity of 1 is a *simple root*. A quadratic equation with two real or complex roots has only simple roots. In general, the *algebraic multiplicity* of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial. The sum of the algebraic multiplicities of all eigenvalues of the transformation is equal to the dimension of the linear vector space.^{[22]} The *spectrum* of a transformation on a finite dimensional vector space is defined as the set of all its eigenvalues. In the infinite-dimensional case, the concept of spectrum is more subtle and depends on the topology of the vector space.
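The discriminant test above can be checked numerically. A sketch under illustrative assumptions (the helper function and the 90° rotation matrix are my choices for demonstration):

```python
import numpy as np

# For a 2x2 matrix, the characteristic equation is
#   lambda^2 - tr(A)*lambda + det(A) = 0,
# so the type of the roots follows from the discriminant
#   D = tr(A)^2 - 4*det(A).
def char_roots_2x2(A):
    tr, det = np.trace(A), np.linalg.det(A)
    D = tr**2 - 4 * det
    return np.roots([1.0, -tr, det]), D

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # 90-degree rotation (illustrative)

roots, D = char_roots_2x2(R)
print(D < 0)                      # True: two complex-conjugate roots
print(np.sort_complex(roots))     # the conjugate pair -i, +i
```

The roots agree with `np.linalg.eigvals(R)`, and the negative discriminant corresponds to the *D* < 0 case described in the text.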

The general formula for the characteristic polynomial of an *n*-square matrix is

$$P(\lambda) = \sum_{k=0}^{n} (-1)^k S_k \lambda^{n-k},$$

where *S*_{0} = 1, *S*_{1} = tr(*A*), the trace of the transformation matrix *A*, and *S*_{k} with *k* > 1 are the sums of the principal minors of order *k*;^{[23]} the *S*_{k} are the elementary symmetric functions.^{[24]} The fact that eigenvalues are roots of an *n*-order equation shows that a linear transformation of an *n*-dimensional linear space has at most *n* different eigenvalues.^{[25]}

According to the fundamental theorem of algebra, in a complex linear space, the characteristic polynomial has at least one zero. Consequently, every linear transformation of a complex linear space has at least one eigenvalue.^{[26]}^{[27]}

For real linear spaces, if the dimension is an odd number, the linear transformation has at least one eigenvalue; if the dimension is an even number, the number of eigenvalues depends on the determinant of the transformation matrix: if the determinant is negative, there exists at least one positive and one negative eigenvalue; if the determinant is positive, nothing can be said about the existence of eigenvalues.^{[28]}

The complexity of the problem of finding roots/eigenvalues of the characteristic polynomial increases rapidly with the degree of the polynomial (the dimension of the vector space) *n*. Thus, for *n* = 3, eigenvalues are roots of a cubic equation; for *n* = 4, roots of a quartic equation. For *n* > 4 there are no exact solutions, and one has to resort to root-finding algorithms, such as Newton’s method (Horner’s method), to find numerical approximations of eigenvalues. For large symmetric sparse matrices, the Lanczos algorithm is used to compute eigenvalues and eigenvectors.
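The coefficient formula with the sums of principal minors can be verified numerically: NumPy’s `np.poly` returns exactly these characteristic-polynomial coefficients for a square matrix. The triangular test matrix below is an illustrative assumption:

```python
import numpy as np

# Illustrative lower-triangular matrix: eigenvalues are the diagonal 2, 3, 5.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

# Characteristic polynomial coefficients [S0, -S1, S2, -S3]:
#   S1 = tr(A) = 10, S2 = sum of principal 2x2 minors = 6 + 10 + 15 = 31,
#   S3 = det(A) = 30.
coeffs = np.poly(A)
print(np.round(coeffs, 6))          # [  1. -10.  31. -30.]

# The eigenvalues are the roots of this polynomial.
roots = np.sort(np.roots(coeffs))
print(np.round(roots, 6))           # [2. 3. 5.]
```

For large *n*, solving the polynomial this way is numerically fragile; dedicated eigensolvers such as `np.linalg.eigvals` (or, for large sparse symmetric matrices, Lanczos-type methods) are preferred, as the text notes.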

In order to find the eigenvectors, the eigenvalues thus found as roots of the characteristic equation are substituted back, one at a time, in the eigenvalue equation written in matrix form (illustrated for the simplest case of a two-dimensional vector space *L*_{2}):

$$\begin{bmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$

where λ is one of the eigenvalues found as a root of the characteristic equation. This matrix equation is equivalent to a system of two linear equations:

$$(a_{11} - \lambda)\,x + a_{12}\,y = 0$$
$$a_{21}\,x + (a_{22} - \lambda)\,y = 0$$

The equations are solved for *x* and *y* by the usual algebraic or matrix methods. Often, it is possible to divide both sides of the equations by one or more of the coefficients, which makes some of the coefficients in front of the unknowns equal to 1. This is called *normalization* of the vectors, and corresponds to choosing one of the eigenvectors (the normalized eigenvector) as a representative of all vectors in the eigenspace corresponding to the respective eigenvalue. The *x* and *y* thus found are the components of the eigenvector in the coordinate system used (most often Cartesian, or polar).
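The full substitute-back procedure can be sketched for a worked 2×2 case (the matrix is an illustrative assumption; the null-space shortcut in the comment is a convenience for rank-1 2×2 systems):

```python
import numpy as np

# Illustrative matrix: tr = 7, det = 10, so the characteristic
# equation is lambda^2 - 7*lambda + 10 = 0 with roots 2 and 5.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lams = np.roots([1.0, -np.trace(A), np.linalg.det(A)])

for lam in np.sort(lams):
    M = A - lam * np.eye(2)          # (A - lambda I) x = 0
    # A non-trivial solution lies in the null space of M; for a rank-1
    # 2x2 system with a non-zero first row, (-M[0,1], M[0,0]) works.
    x = np.array([-M[0, 1], M[0, 0]])
    assert np.allclose(A @ x, lam * x)
    print(round(lam, 6), np.round(x / np.linalg.norm(x), 6))
```

For λ = 2 this yields a vector proportional to (−1, 2), and for λ = 5 one proportional to (1, 1); dividing by a component, as the normalization paragraph describes, gives the same rays.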

Using the Cayley-Hamilton theorem, which states that every square matrix satisfies its own characteristic equation, it can be shown that (most generally, in the complex space) there exists at least one non-zero vector that satisfies the eigenvalue equation for that matrix.^{[29]} As stated in the Definitions section, to each eigenvalue corresponds an infinite number of collinear (linearly dependent) eigenvectors that form the eigenspace for this eigenvalue. On the other hand, the dimension of the eigenspace is equal to the number of linearly independent eigenvectors that it contains. The *geometric multiplicity* of an eigenvalue is defined as the dimension of the associated eigenspace. A multiple eigenvalue may give rise to a single eigenvector, so its algebraic multiplicity may be different from its geometric multiplicity.^{[30]} However, as already stated, different eigenvalues are paired with linearly independent eigenvectors.^{[16]} From the aforementioned, it follows that the geometric multiplicity cannot be greater than the algebraic multiplicity.^{[31]}
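The gap between algebraic and geometric multiplicity can be seen on a classic defective matrix (an illustrative choice, not from the text):

```python
import numpy as np

# Defective matrix: the characteristic polynomial is (2 - lambda)^2,
# so the eigenvalue 2 has algebraic multiplicity 2 ...
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

evals, vecs = np.linalg.eig(A)
print(evals)                    # [2. 2.]: a double root

# ... but the eigenspace is only one-dimensional:
# geometric multiplicity = 2 - rank(A - 2I) = 1.
geo = 2 - np.linalg.matrix_rank(A - 2 * np.eye(2))
print(geo)                      # 1
```

Here the algebraic multiplicity (2) strictly exceeds the geometric multiplicity (1), the situation the paragraph above allows for.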

## Examples

The examples that follow are for the simplest case of 2×2 real matrices, representing transformations in the plane (i.e. the two-dimensional real Euclidean vector space). It is worth noting that some real 2×2 matrices do not have any real eigenvalues, and thus no real eigenvectors (e.g. a matrix representing a rotation of 45 degrees will not leave any non-zero vector pointing in the same direction). Nevertheless, all those matrices also represent transformations of the two-dimensional complex Euclidean vector space, and do have complex eigenvectors and eigenvalues. In general, transformations on an *n*-dimensional complex vector space can be thought of as acting on a real vector space of dimension 2*n*.

### Homothety, identity, point reflection, and null transformation

As a one-dimensional vector space, consider a rubber string tied to an unmoving support at one end, such as that on a child’s sling. Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ, which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and, although elongated, it preserves its original direction. This type of transformation is called homothety (similarity transformation). For a two-dimensional vector space, consider a rubber sheet stretched equally in all directions, such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at a fixed point on the balloon surface are stretched equally with the same scaling factor λ. The homothety transformation in two dimensions is described by a 2×2 square matrix, acting on an arbitrary vector in the plane of the stretching/shrinking surface. After doing the matrix multiplication, one obtains:

$$\begin{bmatrix} \lambda & 0 \\ 0 & \lambda \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} x \\ y \end{bmatrix},$$

which, expressed in words, means that the transformation is equivalent to multiplying the length of the vector by λ while preserving its original direction. The equation thus obtained is exactly the eigenvalue equation. Since the vector taken was arbitrary, in homothety any vector in the vector space undergoes the eigenvalue equation, *i.e.* any vector lying on the balloon surface can be an eigenvector. Whether the transformation is stretching (elongation, extension, inflation) or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching; if 0 < λ < 1, it is shrinking.

Several other transformations can be considered special types of homothety with some fixed, constant value of λ: in identity, which leaves vectors unchanged, λ = 1; in reflection about a point, which preserves the length and direction of vectors but changes their orientation to the opposite one, λ = −1; and in the null transformation, which transforms each vector to the zero vector, λ = 0. The null transformation does not give rise to an eigenvector, since the zero vector cannot be an eigenvector, but it does have an eigenspace, since an eigenspace contains the zero vector by definition.
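These special cases can be sketched in a few lines (the test vector is an arbitrary illustrative choice):

```python
import numpy as np

# Homothety in the plane is lambda * I; every non-zero vector is
# an eigenvector with eigenvalue lambda.
def homothety(lam):
    return lam * np.eye(2)

v = np.array([3.0, -1.0])          # arbitrary vector

print(homothety(2.0) @ v)          # stretching: 2 * v
print(homothety(1.0) @ v)          # identity: v unchanged
print(homothety(-1.0) @ v)         # point reflection: -v
print(homothety(0.0) @ v)          # null transformation: the zero vector
```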

### Unequal scaling

For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction and shrunk in the other. In this case, there are two different scaling factors: *k*_{1} for the scaling in direction *x*, and *k*_{2} for the scaling in direction *y*. The transformation matrix is

$$\begin{bmatrix} k_1 & 0 \\ 0 & k_2 \end{bmatrix},$$

and the characteristic equation is λ^{2} − λ(*k*_{1} + *k*_{2}) + *k*_{1}*k*_{2} = 0. The eigenvalues, obtained as roots of this equation, are λ_{1} = *k*_{1} and λ_{2} = *k*_{2}, which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging *k*_{1} back in the eigenvalue equation gives one of the eigenvectors:

$$\begin{bmatrix} k_1 - k_1 & 0 \\ 0 & k_2 - k_1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad \text{i.e.} \qquad (k_2 - k_1)\,y = 0.$$

Dividing the last equation by *k*_{2} − *k*_{1}, one obtains *y* = 0, which represents the *x* axis. A vector with length 1 taken along this axis represents the normalized eigenvector corresponding to the eigenvalue λ_{1}. The eigenvector corresponding to λ_{2}, which is a unit vector along the *y* axis, is found in a similar way. In this case, both eigenvalues are simple (with algebraic and geometric multiplicities equal to 1). Depending on the values of λ_{1} and λ_{2}, there are several notable special cases. In particular, if λ_{1} > 1 and λ_{2} = 1, the transformation is a stretch in the direction of axis *x*. If λ_{2} = 0 and λ_{1} = 1, the transformation is a projection of the surface *L*_{2} on the axis *x*, because all vectors in the direction of *y* become zero vectors.

Let the rubber sheet be stretched along the *x* axis (*k*_{1} > 1) and simultaneously shrunk along the *y* axis (*k*_{2} < 1), as in Fig. 4. Then λ_{1} = *k*_{1} will be the principal eigenvalue. Repeatedly applying this stretching/shrinking transformation to the rubber sheet will make the latter more and more similar to a rubber string. Any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the *x* axis (the direction of stretching); that is, it will become collinear with the principal eigenvector.
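The collapse onto the principal eigenvector can be simulated directly (the scaling factors, starting vector, and iteration count below are illustrative assumptions):

```python
import numpy as np

# Unequal scaling: stretch x by k1 = 2, shrink y by k2 = 0.5.
A = np.diag([2.0, 0.5])

v = np.array([1.0, 1.0])             # arbitrary starting direction
for _ in range(20):
    v = A @ v
    v = v / np.linalg.norm(v)        # keep only the direction

# The direction has collapsed onto the x axis, the principal
# eigenvector (eigenvalue k1 = 2).
print(np.round(v, 6))                # [1. 0.]
```

After each application the *y*-component shrinks by a factor *k*_{2}/*k*_{1} relative to the *x*-component, which is why the direction converges geometrically.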

### Shear

Shear in the plane is a transformation in which all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line.^{[32]} Unlike scaling, shearing a plane figure does not change its area. Shear can be horizontal (along the *X* axis) or vertical (along the *Y* axis). In horizontal shear (Fig. 5), a point *P* of the plane moves parallel to the *X* axis to the place *P′*, so that its coordinate *y* does not change while the *x* coordinate increments to become *x′* = *x* + *k y*, where *k* is called the shear factor. The shear factor is given by *k* = (*x′* − *x*)/*y* and hence can also be expressed in terms of the shear angle φ: *k* = cot φ. The matrix of a horizontal shear transformation is

$$\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}.$$

The characteristic equation is λ^{2} − 2λ + 1 = (1 − λ)^{2} = 0, which has a single root λ = 1. Therefore, the eigenvalue λ = 1 is multiple, with algebraic multiplicity 2. The eigenvector(s) are found as solutions of

$$\begin{bmatrix} 1 - 1 & k \\ 0 & 1 - 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad \text{i.e.} \qquad k\,y = 0.$$

The last equation is divided by *k* (normalization) to obtain *y* = 0, which is a straight line along the *x* axis. This line represents the one-dimensional eigenspace. In the case of shear, the algebraic multiplicity of the eigenvalue (2) is greater than its geometric multiplicity (1, the dimension of the eigenspace). The eigenvector is a unit vector along the *x* axis. The case of vertical shear, with transformation matrix

$$\begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix},$$

is dealt with in a similar way; the eigenvector in vertical shear is along the *y* axis. Applying the shear transformation repeatedly changes the direction of any vector in the plane closer and closer to the direction of the eigenvector.
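The gradual alignment under repeated shear can be observed numerically (shear factor and starting vector are illustrative assumptions):

```python
import numpy as np

k = 1.0                                    # illustrative shear factor
A = np.array([[1.0, k],
              [0.0, 1.0]])                 # horizontal shear matrix

# A^n = [[1, n*k], [0, 1]], so any vector with y != 0 tilts toward
# the x axis (the single eigendirection) as n grows.
v = np.array([0.0, 1.0])                   # start perpendicular to the x axis
for n in (1, 10, 100):
    w = np.linalg.matrix_power(A, n) @ v
    print(n, np.round(w / np.linalg.norm(w), 3))
```

Unlike the unequal-scaling case, the convergence here is only algebraic (the direction is (*nk*, 1) up to normalization), because there is no second eigenvalue to suppress the transverse component geometrically.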

### Rotation

Fig. 6. Rotation in a plane. The rotation plane is horizontal and the rotation is in the counterclockwise direction. The real axes *X* and *Y* lie on the rotation plane. The complex plane, determined by the real axis *X* and the complex axis *iY* is vertical and intersects the plane of rotation in the *X* axis. The complex eigenvectors **u**_{1} = 1 + *i* and **u**_{2} = 1 − *i* are radius vectors of the complex conjugated eigenvalues and lie in the complex plane.

A rotation in a plane is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. Rotation differs from translation in that it changes the direction of the transformed vector. In particular, the basic premise of the eigenvalue equation, that the direction of the transformed eigenvector remains the same as that of the original eigenvector, appears to be violated in rotation (however, as will be shown below, this premise still holds). On the other hand, rotation is a linear transformation, because it has the defining properties, additivity and homogeneity, of linear transformations. Rotation and translation are affine transformations that preserve lengths of vectors and angles between vectors;^{[33]} this means that the shapes and dimensions of rotated objects are preserved. If the relative orientation of basis vectors (and, thus, the handedness of the coordinate system and the cross and dot products) is also preserved, the rotation is called *proper rotation*; otherwise, the rotation is called *improper rotation* or *rotation with reflection*. The columns of a rotation matrix represent the components of the rotated basis vectors. The determinant of the rotation matrix in proper rotation is equal to 1; in improper rotation this determinant is equal to −1.^{[33]} Rotation is an orthogonal transformation (*A* *A*^{T} = *A* *A*^{−1} = *I*), so the transpose of a rotation matrix is its inverse.^{[33]} With the help of trigonometric functions, rotation becomes a linear transformation, **R**, whose matrix for a counterclockwise rotation in the horizontal plane about the origin at an angle φ is

$$\mathbf{R} = \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix}.$$

The characteristic equation of **R** is λ^{2} − 2λ cos φ + 1 = 0. This quadratic equation has a discriminant *D* = 4 (cos^{2} φ − 1) = − 4 sin^{2} φ, which is a negative number for φ not equal to 180° × *k* (*k* = 0, 1, 2, …). Therefore, except for these special cases, in which the discriminant becomes zero, real roots (eigenvalues) do not exist for rotation. The characteristic equation has two complex roots λ_{1} and λ_{2}, which are complex conjugates of each other: λ_{1,2} = cos φ ± *i* sin φ = *e*^{± iφ}, where *i* is the imaginary unit. These two roots are the two eigenvalues of rotation, each with an algebraic multiplicity equal to 1. The first eigenvector is found by plugging the first eigenvalue, λ_{1}, back in the eigenvalue equation:

$$\begin{bmatrix} \cos\varphi - \lambda_1 & -\sin\varphi \\ \sin\varphi & \cos\varphi - \lambda_1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

The last equation is equivalent to the linear equation system

$$(\cos\varphi - \lambda_1)\,x - \sin\varphi\, y = 0$$
$$\sin\varphi\, x + (\cos\varphi - \lambda_1)\,y = 0$$

which, divided by sin φ and rearranged, gives the equation *y* = − *i x*. When the second eigenvalue, λ_{2}, is plugged in the eigenvalue equation, one obtains *y* = *i x*. Geometrically, these solutions represent straight lines in the complex plane *XiY* that cross the origin and are inclined towards the coordinate axes at angles of 45°; *y* = − *i x* lies in the second and fourth quadrants and *y* = *i x* lies in the first and third quadrants. Unit vectors taken in the directions of these lines are the two eigenvectors, **u**_{1} and **u**_{2} (Fig. 6). Those eigenvectors are orthogonal (at angle 90°) to each other.^{[34]} However, as said above, in rotation one cannot speak about an eigenvector in the usual sense, since all vectors change directions and there is no one-dimensional eigenspace (straight line) invariant under rotation. In general, when the eigenvalues form a complex-conjugate pair, the two complex eigenvectors span a single two-dimensional eigenspace (the complex plane; the vertical plane in Fig. 6) that is shared between the two eigenvalues, so that each preserves its geometric multiplicity equal to 1.^{[35]}
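The complex eigenpairs of a rotation can be verified numerically; the angle φ = 60° below is an illustrative assumption:

```python
import numpy as np

phi = np.pi / 3                           # illustrative rotation angle
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# Eigenvalues are the complex-conjugate pair e^{+i phi}, e^{-i phi}.
evals = np.linalg.eigvals(R)
expected = np.array([np.exp(1j * phi), np.exp(-1j * phi)])
print(np.round(np.sort_complex(evals), 6))

# The eigenvector with y = -i x pairs with e^{+i phi}, as derived above.
x = np.array([1.0, -1.0j])
assert np.allclose(R @ x, np.exp(1j * phi) * x)
```

This confirms both the eigenvalue formula λ_{1,2} = *e*^{± iφ} and the eigenvector line *y* = − *i x* derived in the text.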

A rotation of 0°, 360°, …, in which cos φ = 1 and sin φ = 0, is an identity transformation, while a rotation of 180°, 540°, …, in which cos φ = −1 and sin φ = 0, is a reflection. This is immediately seen either from the eigenvalues or from the rotation matrix when cos φ and sin φ are substituted with the respective values. For instance, upon rotation by 180° the rotation matrix becomes

$$\begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix},$$

which is the same as the reflection matrix (homothety with λ = −1).

Rotation in three dimensions (in *L*_{3}) is described with 3×3 matrices that include trigonometric functions of the Euler angles. For instance, spinning of the Earth around its axis (counterclockwise rotation around the vertical *z* axis, or yaw in aerospace jargon) is described by the following transformation matrix:

$$\begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix},$$

where α is the respective Euler angle (the yaw angle). The characteristic equation of this transformation can be factored to become (λ − 1)(λ^{2} − 2λ cos α + 1) = 0. The two factors give two equations whose solutions together form the set of eigenvalues. The roots of λ^{2} − 2λ cos α + 1 = 0 are, as already shown above, λ_{1,2} = cos α ± *i* sin α = *e*^{± iα}. The solution of λ − 1 = 0 is λ_{3} = 1. The eigenspace of λ_{1,2} is a complex plane, while the eigenspace of λ_{3} is the *Z* axis. The eigenvector for λ_{3} is an arrow with length 1 along the Earth’s spin axis, pointing towards the North Pole. This is a good example of what was stated above: if the dimension is an odd number (*L*_{3}), the linear transformation has at least one real eigenvalue (λ_{3} = 1); if the dimension is an even number (*L*_{2}), the number of eigenvalues depends on the determinant of the transformation matrix: if the determinant is negative (improper rotation, determinant equal to −1), there exists at least one positive and one negative real eigenvalue; if the determinant is positive (proper rotation, determinant equal to 1), nothing can be said about the existence of real eigenvalues. As described above for proper rotation, eigenvalues are real only in certain special cases (φ = 180° × *k*), and they are complex in all other cases. This is reflected by the fact that the *X* axis is the only locus of points common to the real and complex planes.
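The factorization above can be checked numerically for the yaw matrix (the angle α = 0.4 rad is an illustrative assumption):

```python
import numpy as np

alpha = 0.4                                # illustrative yaw angle (radians)
Rz = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
               [np.sin(alpha),  np.cos(alpha), 0.0],
               [0.0,            0.0,           1.0]])

evals, vecs = np.linalg.eig(Rz)

# The factored characteristic equation gives e^{+i a}, e^{-i a}, and 1.
print(np.round(np.sort_complex(evals), 6))

# The eigenvector of the real eigenvalue lambda_3 = 1 is the rotation
# (z) axis, the Earth's spin axis in the text's example.
i3 = np.argmin(np.abs(evals - 1.0))
axis = np.real(vecs[:, i3])
print(np.round(np.abs(axis), 6))           # [0. 0. 1.]
```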

## Notes

- See Hawkins 1975, §2.
- See Hawkins 1975, §3.
- See Kline 1972, pp. 807–808.
- See Kline 1972, p. 673.
- See Kline 1972, pp. 715–716.
- See Kline 1972, pp. 706–707.
- See Kline 1972, p. 1063.
- See Aldrich 2006.
- See Golub & van Loan 1996, §7.3; Meyer 2000, §7.3.
- See Beezer 2006, Definition LT on p. 507; Strang 2006, p. 117; Kuttler 2007, Definition 5.3.1 on p. 71; Shilov 1977, Section 4.21 on p. 77; Rowland, Todd and Weisstein, Eric W., "Linear transformation", from MathWorld, a Wolfram Web Resource.
- See Korn & Korn 2000, Section 14.3.5a; Friedberg, Insel & Spence 1989, p. 217.
- See Strang 2006, p. 249.
- See Sharipov 1996, p. 66.
- See Bowen & Wang 1980, p. 148.
- For a proof of this lemma, see Shilov 1977, p. 109, and Lemma for the eigenspace.
- For a proof of this lemma, see Roman 2008, Theorem 8.2 on p. 186; Shilov 1977, p. 109; Hefferon 2001, p. 364; Beezer 2006, Theorem EDELI on p. 469; and Lemma for linear independence of eigenvectors.
- See Shilov 1977, p. 109.
- For proof, see Bowen & Wang 1980, Theorem 25.1 on p. 148 and Sharipov 1996, Theorem 4.4 on p. 68.
- See Shores 2007, p. 252.
- For a proof of this theorem, see Weisstein, Eric W., "Eigenvector", from MathWorld, a Wolfram Web Resource.
- See Strang 2006, footnote to p. 245.
- For proof, see Beezer 2006, Theorem NEM on pp. 476–477, or the Fundamental theorem of algebra.
- For details and proof, see Meyer 2000, pp. 494–495.
- See Korn & Korn 2000, Sections 1.4-3 and 1.6-4.
- See Greub 1975, p. 118.
- See Greub 1975, p. 119.
- For proof, see Gelfand 1971, p. 115.
- For proof, see Greub 1975, p. 119.
- For details and proof, see Kuttler 2007, p. 151.
- See Shilov 1977, p. 112.
- For more proofs, see Roman 2008, Theorem 8.5 on p. 189; Friedberg, Insel & Spence 1989, Theorem 5.12 on p. 234; Shilov 1977, pp. 112–113 and Problem 7 to Chapter 5.
- Definition according to Weisstein, Eric W., "Shear", from MathWorld, a Wolfram Web Resource.
- See Korn & Korn 2000, Section 14.10-1a.
- For a proof, see Strang 2006, Property 3 on pp. 295 and 298.
- For proof, see Alexandrov 1968, Theorem and Lemma on p. 728.
- Graham, D. and Midgley, N. (2000), *Earth Surface Processes and Landforms* **25**, pp. 1473–1477.
- Sneed, E. D. and Folk, R. L. (1958), "Pebbles in the lower Colorado River, Texas, a study of particle morphogenesis", *Journal of Geology* **66**(2): 114–150.
- GIS-stereoplot: an interactive stereonet plotting module for the ArcView 3.0 geographic information system.
- Stereo32.
- Benn, D. and Evans, D. (2004), *A Practical Guide to the Study of Glacial Sediments*, London: Arnold, pp. 103–107.
- Xirouhakis, A.; Votsis, G.; Delopoulus, A. (2004), *Estimation of 3D motion and structure of human faces*, online paper in PDF format, National Technical University of Athens, http://www.image.ece.ntua.gr/papers/43.pdf

## References

- Korn, Granino A.; Korn, Theresa M. (2000), *Mathematical Handbook for Scientists and Engineers: Definitions, Theorems, and Formulas for Reference and Review* (2nd revised ed.), Dover Publications, ISBN 0-486-41147-8.
- Lipschutz, Seymour (1991), *Schaum's Outline of Theory and Problems of Linear Algebra*, Schaum's Outline Series (2nd ed.), New York: McGraw-Hill, ISBN 0-07-038007-4.
- Friedberg, Stephen H.; Insel, Arnold J.; Spence, Lawrence E. (1989), *Linear Algebra* (2nd ed.), Englewood Cliffs, NJ: Prentice Hall, ISBN 0-13-537102-3.
- Aldrich, John (2006), "Eigenvalue, eigenfunction, eigenvector, and related terms", in Jeff Miller (ed.), *Earliest Known Uses of Some of the Words of Mathematics*, http://members.aol.com/jeff570/e.html, retrieved 2006-08-22.
- Strang, Gilbert (1993), *Introduction to Linear Algebra*, Wellesley, MA: Wellesley-Cambridge Press, ISBN 0-961-40885-5.
- Strang, Gilbert (2006), *Linear Algebra and Its Applications*, Belmont, CA: Thomson, Brooks/Cole, ISBN 0-030-10567-6.
- Bowen, Ray M.; Wang, Chao-Cheng (1980), *Linear and Multilinear Algebra*, New York: Plenum Press, ISBN 0-306-37508-7.
- Cohen-Tannoudji, Claude (1977), "Chapter II. The mathematical tools of quantum mechanics", *Quantum Mechanics*, John Wiley & Sons, ISBN 0-471-16432-1.
- Fraleigh, John B.; Beauregard, Raymond A. (1995), *Linear Algebra* (3rd ed.), Addison-Wesley, ISBN 0-201-83999-7 (international edition).
- Golub, Gene H.; van Loan, Charles F. (1996), *Matrix Computations* (3rd ed.), Baltimore, MD: Johns Hopkins University Press, ISBN 978-0-8018-5414-9.
- Hawkins, T. (1975), "Cauchy and the spectral theory of matrices", *Historia Mathematica* **2**: 1–29.
- Horn, Roger A.; Johnson, Charles R. (1985), *Matrix Analysis*, Cambridge University Press, ISBN 0-521-30586-1 (hardback), ISBN 0-521-38632-2 (paperback).
- Kline, Morris (1972), *Mathematical Thought from Ancient to Modern Times*, Oxford University Press, ISBN 0-195-01496-0.
- Meyer, Carl D. (2000), *Matrix Analysis and Applied Linear Algebra*, Philadelphia: Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8.
- Brown, Maureen (October 2004), *Illuminating Patterns of Perception: An Overview of Q Methodology*.
- Golub, Gene H.; van der Vorst, Henk A. (2000), "Eigenvalue computation in the 20th century", *Journal of Computational and Applied Mathematics* **123**: 35–65.
- Akivis, Max A.; Goldberg, Vladislav V. (1969), *Tensor Calculus* (in Russian), Moscow: Science Publishers.
- Gelfand, I. M. (1971), *Lecture Notes in Linear Algebra* (in Russian), Moscow: Science Publishers.
- Alexandrov, Pavel S. (1968), *Lecture Notes in Analytical Geometry* (in Russian), Moscow: Science Publishers.
- Carter, Tamara A.; Tapia, Richard A.; Papaconstantinou, Anne, *Linear Algebra: An Introduction to Linear Algebra for Pre-Calculus Students*, online edition, Rice University, http://ceee.rice.edu/Books/LA/index.html, retrieved 2008-02-19.
- Roman, Steven (2008), *Advanced Linear Algebra* (3rd ed.), New York: Springer Science + Business Media, ISBN 978-0-387-72828-5.
- Shilov, Georgi E. (1977), *Linear Algebra* (translated and edited by Richard A. Silverman), New York: Dover Publications, ISBN 0-486-63518-X.
- Hefferon, Jim (2001), *Linear Algebra*, online book, St Michael's College, Colchester, Vermont, http://joshua.smcvt.edu/linearalgebra/.
- Kuttler, Kenneth (2007), *An Introduction to Linear Algebra*, online e-book in PDF format, Brigham Young University, http://www.math.byu.edu/~klkuttle/Linearalgebra.pdf.
- Demmel, James W. (1997), *Applied Numerical Linear Algebra*, SIAM, ISBN 0-89871-389-7.
- Beezer, Robert A. (2006), *A First Course in Linear Algebra*, free online book under GNU licence, University of Puget Sound, http://linear.ups.edu/.
- Lancaster, P. (1973), *Matrix Theory* (in Russian), Moscow: Science Publishers.
- Halmos, Paul R. (1987), *Finite-Dimensional Vector Spaces* (8th ed.), New York: Springer-Verlag, ISBN 0-387-90093-4.
- Pigolkina, T. S.; Shulman, V. S. (1977), "Eigenvalue" (in Russian), in Vinogradov, I. M. (ed.), *Mathematical Encyclopedia*, Vol. 5, Moscow: Soviet Encyclopedia.
- Pigolkina, T. S.; Shulman, V. S. (1977), "Eigenvector" (in Russian), in Vinogradov, I. M. (ed.), *Mathematical Encyclopedia*, Vol. 5, Moscow: Soviet Encyclopedia.
- Greub, Werner H. (1975), *Linear Algebra* (4th ed.), New York: Springer-Verlag, ISBN 0-387-90110-8.
- Larson, Ron; Edwards, Bruce H. (2003), *Elementary Linear Algebra* (5th ed.), Houghton Mifflin, ISBN 0-618-33567-6.
- Curtis, Charles W. (1999), *Linear Algebra: An Introductory Approach* (4th ed. 1984, corrected 7th printing), Springer, ISBN 0-387-90992-3.
- Shores, Thomas S. (2007), *Applied Linear Algebra and Matrix Analysis*, Springer Science+Business Media, ISBN 0-387-33194-8.
- Sharipov, Ruslan A. (1996), *Course of Linear Algebra and Multidimensional Geometry: the textbook*, online e-book in various formats on arxiv.org, Bashkir State University, Ufa, arXiv:math/0405323v1, ISBN 5-7477-0099-5, http://www.geocities.com/r-sharipov.
- Gohberg, Israel; Lancaster, Peter; Rodman, Leiba (2005), *Indefinite Linear Algebra and Applications*, Basel-Boston-Berlin: Birkhäuser Verlag, ISBN 3-7643-7349-0.
