Assignment 2

Due Thursday, September 8, 2022

Problem 1

In each of the following, exhibit an explicit example or prove that no example is possible.

  1. A square matrix that is diagonalizable but not diagonal.
  2. A real square matrix that is not diagonalizable.
  3. A complex square matrix that is not diagonalizable.
  4. A complex square matrix with no eigenvectors.
  5. A diagonalizable matrix whose characteristic polynomial is not equal to its minimal polynomial.
  6. Matrices that are not similar, but have the same minimal polynomial and characteristic polynomial.
  7. A complex matrix that is unitary, but not symmetric.
  8. A non-zero matrix that is both symmetric and skew-symmetric.
  9. A matrix \(A\) such that \(A^2=A\) but \(A \ne I\).
  10. A matrix \(A \in \mathrm{M}_2(\mathbb{Q})\) such that \(A^5=I\) but \(A \ne I\).

Solution

  1. The matrix \(\begin{pmatrix}2&1\\0&3\end{pmatrix}\) is diagonalizable: \[\begin{pmatrix}1&-1\\0&1\end{pmatrix} \begin{pmatrix}2&1\\0&3\end{pmatrix} \begin{pmatrix}1&1\\0&1\end{pmatrix} =\begin{pmatrix}2&0\\0&3\end{pmatrix}.\]
  2. The matrix \(\begin{pmatrix}0&1\\0&0\end{pmatrix}\) is already in Jordan canonical form, but not diagonal. Thus it is not diagonalizable.
  3. The same example as part (2) works here: a non-zero nilpotent matrix is not diagonalizable over any field. (This is not true of every example from part (2), however; the rotation matrix \(\begin{pmatrix}0&-1\\1&0\end{pmatrix}\) is not diagonalizable over \(\mathbb{R}\) but is diagonalizable over \(\mathbb{C}\).)
  4. This is not possible. A square matrix has an eigenvector if and only if its characteristic polynomial has a root. Since the characteristic polynomial must have positive degree and the base field is \(\mathbb{C}\), we must have a root by the fundamental theorem of algebra.
  5. The \(2\times 2\) identity matrix has characteristic polynomial \(\chi(t)=(t-1)^2\) but minimal polynomial \(m(t)=(t-1)\).
  6. Consider the following two matrices: \[A=\begin{pmatrix} 2 & 1 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 2 \end{pmatrix}, \quad B=\begin{pmatrix} 2 & 1 & 0 & 0\\ 0 & 2 & 0 & 0\\ 0 & 0 & 2 & 1\\ 0 & 0 & 0 & 2 \end{pmatrix}.\] Their minimal polynomials are both \((t-2)^2\) and their characteristic polynomials are both \((t-2)^4\). Nevertheless, they are not similar: \(\operatorname{rank}(A-2I)=1\) while \(\operatorname{rank}(B-2I)=2\), and rank is a similarity invariant. (These claims are verified symbolically in the sketch after this list.)
  7. The matrix \(\begin{pmatrix}0&i\\-i&0\end{pmatrix}\) is unitary, but not symmetric.
  8. This is only possible when the characteristic is \(2\): if \(A^T=A\) and \(A^T=-A\), then \(2A=0\), which forces \(A=0\) unless \(2=0\) in the base field. In characteristic \(2\), the matrix \(\begin{pmatrix} 1& 0\\0&1\end{pmatrix}\) works.
  9. The matrix \(\begin{pmatrix} 1& 0\\0&0\end{pmatrix}\) works.
  10. This is not possible. Suppose otherwise. Since \(A^5=I\), the minimal polynomial \(m_A(t)\) divides \(t^5-1\). Over \(\mathbb{Q}\), this polynomial factors into irreducibles as \[ t^5-1 = (t-1)(t^4 + t^3 + t^2 + t + 1), \] where the quartic factor is the cyclotomic polynomial \(\Phi_5\), irreducible over \(\mathbb{Q}\) (e.g., by Eisenstein's criterion at \(5\) after the substitution \(t \mapsto t+1\)). Since \(A\) is \(2 \times 2\), the minimal polynomial must have degree \(\le 2\). The only possibility is \(m_A(t)=t-1\), which forces \(A\) to be the identity, a contradiction.
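
Several of these assertions can be double-checked symbolically. The following is a quick sanity check, not part of the required solution; it uses SymPy (my choice of tool), and the matrices and polynomials are exactly those named above.

```python
from sympy import Matrix, symbols, factor_list

t = symbols('t')

# Part 1: the stated conjugation diagonalizes the matrix.
P = Matrix([[1, 1], [0, 1]])
M = Matrix([[2, 1], [0, 3]])
assert P.inv() * M * P == Matrix([[2, 0], [0, 3]])

# Part 6: equal characteristic and minimal polynomials, but not similar.
A = Matrix([[2, 1, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 2]])
B = Matrix([[2, 1, 0, 0], [0, 2, 0, 0], [0, 0, 2, 1], [0, 0, 0, 2]])
I4 = Matrix.eye(4)
assert A.charpoly(t).as_expr() == B.charpoly(t).as_expr()  # both (t-2)^4
# minimal polynomial (t-2)^2 in both cases: subtracting 2I gives a
# non-zero matrix that squares to zero
assert (A - 2*I4)**2 == Matrix.zeros(4, 4) and A != 2*I4
assert (B - 2*I4)**2 == Matrix.zeros(4, 4) and B != 2*I4
# rank is a similarity invariant, and it distinguishes the two
assert (A - 2*I4).rank() == 1 and (B - 2*I4).rank() == 2

# Part 10: the factorization of t^5 - 1 into irreducibles over Q
print(factor_list(t**5 - 1))  # (1, [(t - 1, 1), (t**4 + t**3 + t**2 + t + 1, 1)])
```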

Problem 2

Let \(\Psi : \mathrm{M}_n(k) \to \mathrm{M}_n(k)\) be the map taking a square matrix to its transpose. Determine the eigenvectors of \(\Psi\) and describe the eigenspaces.

Solution

Since \((A^T)^T=A\) for every matrix \(A \in \mathrm{M}_n(k)\), we see that \(\Psi \circ \Psi = \operatorname{id}\). Thus the minimal polynomial \(m_\Psi(t)\) divides \(t^2-1\), so every eigenvalue of \(\Psi\) is \(1\) or \(-1\). The eigenspace \(E_1\) consists of the symmetric matrices (\(A^T=A\)), and the eigenspace \(E_{-1}\) consists of the skew-symmetric matrices (\(A^T=-A\)). When the characteristic of \(k\) is not \(2\), every matrix decomposes as \(A = \tfrac{1}{2}(A+A^T) + \tfrac{1}{2}(A-A^T)\), so these two eigenspaces together span \(\mathrm{M}_n(k)\). (The eigenspaces are distinct if and only if the characteristic of \(k\) is not \(2\).)
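
As a concrete illustration (a sanity check rather than part of the solution), one can write \(\Psi\) on \(\mathrm{M}_2\) as a \(4 \times 4\) matrix in the standard basis and let SymPy confirm the eigenvalues and eigenspace dimensions:

```python
from sympy import Matrix

# Transposition on M_2 in the ordered basis (E11, E12, E21, E22):
# E11 and E22 are fixed, while E12 and E21 are swapped.
Psi = Matrix([
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
])

print(Psi.eigenvals())  # {1: 3, -1: 1}: symmetric vs. skew-symmetric
for val, mult, vecs in Psi.eigenvects():
    print(val, [list(v) for v in vecs])
```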

Problem 3

Let \(A\) be a square complex matrix. Prove that \(\lim_{n \to \infty} A^n =0\) if and only if \(|\lambda | < 1\) for every eigenvalue \(\lambda\) of \(A\).

Solution

For \(P\) invertible and \(A\) arbitrary, we have \[(PAP^{-1})^n = PAP^{-1}PAP^{-1}\cdots PAP^{-1}=PA^nP^{-1}.\] Since sums and products are continuous operations, we have \[\lim_{n \to \infty} (PAP^{-1})^n = P\left(\lim_{n \to \infty}A^n\right)P^{-1}.\] Since \(A\) and \(PAP^{-1}\) also have the same eigenvalues, it suffices to prove the claim when \(A\) is in Jordan canonical form.
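
A quick numeric spot-check of the conjugation identity (the random matrices and exponent below are arbitrary illustrative choices, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))  # generically invertible
n = 7

# (P A P^{-1})^n agrees with P A^n P^{-1} up to floating-point error
lhs = np.linalg.matrix_power(P @ A @ np.linalg.inv(P), n)
rhs = P @ np.linalg.matrix_power(A, n) @ np.linalg.inv(P)
assert np.allclose(lhs, rhs)
```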

If \(A=A_1 \oplus \cdots \oplus A_r\) is the decomposition into Jordan blocks, then \(A^n = A_1^n \oplus \cdots \oplus A_r^n.\) Thus it suffices to assume \(A\) is a single Jordan block \(J_k(\lambda)\).

Now \(A=\lambda I_k + N\) where \(N\) is a strictly upper triangular matrix such that \(N^k=0\). Since \(\lambda I_k\) and \(N\) commute, the binomial theorem gives \[A^n = (\lambda I_k + N)^n = \sum_{i=0}^{k-1} \binom{n}{i} \lambda^{n-i} N^i,\] where we emphasize that the index set of the sum is independent of \(n\): the terms with \(i \ge k\) vanish because \(N^k=0\). One can show that the non-zero powers of \(N\) are linearly independent in the space of matrices, but we only need that the diagonal entries of \(A^n\) are all \(\lambda^n\). This follows since the set of strictly upper triangular matrices is closed under multiplication, so only the \(i=0\) term contributes to the diagonal. If \(|\lambda|\ge 1\), then \(|\lambda^n|=|\lambda|^n \ge 1\), so the diagonal entries of \(A^n\) do not tend to \(0\), and neither does \(A^n\).
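
The expansion can be confirmed symbolically for a small block; the sketch below uses SymPy with the illustrative choices \(k=3\) and \(n=5\), which are my own and carry no special significance:

```python
from sympy import Matrix, binomial, symbols

lam = symbols('lambda')
k, n = 3, 5  # illustrative block size and exponent

N = Matrix(k, k, lambda i, j: 1 if j == i + 1 else 0)  # strictly upper triangular
J = lam * Matrix.eye(k) + N  # the Jordan block J_k(lambda)

# binomial expansion, truncated at i = k-1 since N**k == 0
expansion = sum((binomial(n, i) * lam**(n - i) * N**i for i in range(k)),
                Matrix.zeros(k, k))
assert (J**n - expansion).expand() == Matrix.zeros(k, k)
# the diagonal entries of J**n are all lambda**n
assert all((J**n)[i, i] == lam**n for i in range(k))
```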

It remains to consider the case where \(|\lambda| < 1\). If \(\lambda = 0\), then \(A^n = N^n = 0\) for all \(n \ge k\). Otherwise, since \(\binom{n}{i}\) is a polynomial in \(n\) of degree \(i\), it grows more slowly than \(\left|\frac{1}{\lambda}\right|^{n}\) as a function of \(n\). Thus \[ \lim_{n \to \infty } \binom{n}{i} \lambda^{n-i} = 0\] for each \(i\), and \(\lim_{n \to \infty } A^n = 0\) as desired.
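
The dichotomy is easy to see numerically. Here is a small demonstration (the particular blocks and exponents are illustrative choices, not part of the proof); note the transient growth from the nilpotent part before the exponential decay wins:

```python
import numpy as np

def power_norms(A, ns=(1, 10, 100, 1000)):
    return [np.linalg.norm(np.linalg.matrix_power(A, n)) for n in ns]

# Jordan block with |lambda| < 1: powers tend to 0 despite initial growth.
J_small = np.array([[0.9, 1.0], [0.0, 0.9]])
print(power_norms(J_small))  # rises, then decays toward 0

# |lambda| = 1: the powers do not tend to 0.
J_one = np.array([[1.0, 1.0], [0.0, 1.0]])
print(power_norms(J_one))    # grows without bound
```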

Problem 4

Let \(V\) be the real vector space of polynomials of degree \(\le 3\). Show that \(V\) is an inner product space under \[\langle f,g \rangle := \int_0^{1} f(x) g(x)\ dx\] and find an orthogonal basis.

Solution

The constant multiple rule and sum rule for integrals show that the given form is bilinear, and symmetry is clear. For positive definiteness, we need to check that \(\int_0^{1} f(x)^2 \ dx > 0\) for every \(f \ne 0\). This follows from a direct computation for a general polynomial, or by observing that a non-negative continuous function has a zero integral only if it is zero everywhere, and a non-zero polynomial of degree \(\le 3\) has at most three roots, so it cannot vanish on all of \([0,1]\).

There are infinitely many orthogonal bases. One can be found by applying the Gram-Schmidt process to the basis \(\{1,x,x^2,x^3\}\), which produces \(1\), \(x-\frac{1}{2}\), \(x^2-x+\frac{1}{6}\), and \(x^3-\frac{3}{2}x^2+\frac{3}{5}x-\frac{1}{20}\). Note that we only want an orthogonal basis, so we do not normalize these vectors to have norm \(1\). (The computation is reproduced symbolically below.)
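
As referenced above, the computation is easy to reproduce with SymPy (again a sanity-check sketch, not part of the solution); it runs Gram-Schmidt against the stated inner product and recovers the listed polynomials:

```python
from sympy import symbols, integrate, expand

x = symbols('x')

def inner(f, g):
    # the inner product from the problem statement
    return integrate(f * g, (x, 0, 1))

basis = []
for f in [1, x, x**2, x**3]:
    # subtract the projection of f onto each earlier basis vector
    for b in basis:
        f = f - (inner(f, b) / inner(b, b)) * b
    basis.append(expand(f))

print(basis)  # [1, x - 1/2, x**2 - x + 1/6, x**3 - 3*x**2/2 + 3*x/5 - 1/20]
assert all(inner(basis[i], basis[j]) == 0 for i in range(4) for j in range(i))
```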

Problem 5

Let \(V\) be the complex vector space of continuous functions \(f : [0,2\pi] \to \mathbb{C}\). Show that \(V\) is an inner product space under \[\langle f,g \rangle := \frac{1}{2\pi}\int_0^{2\pi} f(x) \overline{g(x)}\ dx\] and that \(\{ e^{inx} \mid n \in \mathbb{Z} \}\) is an orthonormal subset.

Solution

As in the previous problem, the constant multiple rule and sum rule for integrals show that the given form is sesquilinear: the first entry is linear, while the second is conjugate-linear. Since integration commutes with complex conjugation, we see that \(\langle f,g \rangle = \overline{\langle g,f \rangle}\), so the form is Hermitian. Since \(f(x)\overline{f(x)}=|f(x)|^2\), we once again conclude that \(\langle f, f \rangle > 0\) for all non-zero \(f\), because a non-negative continuous function has a zero integral only if it is zero everywhere. We conclude that \(V\) is an inner product space.

Now observe \[\begin{align*} \langle e^{inx},e^{imx} \rangle &= \frac{1}{2\pi}\int_0^{2\pi} e^{inx} e^{-imx}\ dx\\ &= \frac{1}{2\pi}\int_0^{2\pi} e^{i(n-m)x}\ dx\\ &= \frac{1}{2\pi}\int_0^{2\pi} \cos((n-m)x) + i\sin((n-m)x)\ dx. \end{align*}\] Since \(\cos(kx)\) and \(\sin(kx)\) both have integral zero on \([0,2\pi]\) for every non-zero integer \(k\), we conclude that \(\langle e^{inx},e^{imx} \rangle=0\) when \(n \ne m\). When \(n=m\), we are integrating the constant function \(1\) and obtain \(\frac{2\pi}{2\pi}=1\), as desired.
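
A symbolic spot-check of this orthonormality computation with SymPy (it checks only a small window of frequencies, which of course is an illustration rather than a proof for all \(n, m\)):

```python
from sympy import symbols, integrate, exp, I, pi, conjugate, simplify

x = symbols('x', real=True)

def inner(f, g):
    # the normalized inner product from the problem statement
    integrand = simplify(f * conjugate(g))
    return integrate(integrand, (x, 0, 2*pi)) / (2*pi)

# orthonormality for a small window of frequencies
for n in range(-2, 3):
    for m in range(-2, 3):
        assert inner(exp(I*n*x), exp(I*m*x)) == (1 if n == m else 0)
```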