02 Linear Algebra - Matrices

Image: Republic Square (photo posted on Facebook by Marine Tovmasyan)

📚 Material

📚 Read at home: the geometric meaning of a matrix, linear transformations

  • Johnston, pages 20-25 (matrices), 35-38 (linear transformations)
  • Poole, pages 219-221 (matrix product/composition)

and watch 3b1b's 3rd video lesson on linear algebra: https://youtu.be/kYB8IZa5AuE

🧮 Handy tools

Useful tools for visualizing matrices:

🏡 Homework

Note
  1. ❗❗❗ DON’T CHECK THE SOLUTIONS BEFORE TRYING TO DO THE HOMEWORK BY YOURSELF❗❗❗
  2. Please don’t hesitate to ask questions, never forget about the 🍊karalyok🍊 principle!
  3. The harder the problem is, the more 🧀cheeses🧀 it has.
  4. Problems with 🎁 are just extra bonuses. It would be good to try them, but they are not the highest priority.
  5. If a problem involves many tedious calculations, feel free to skip them; the important part is understanding the concepts.
  6. Submit your solutions here (even if they’re unfinished)

01: Matrix transformations

What vectors do you get by applying the matrix \(A = \begin{pmatrix} 3 & -3 \\ 3 & 3 \end{pmatrix}\) on the vectors:

  1. \(\vec{a} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\)
  2. \(\vec{b} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\)
  3. \(\vec{c} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\)
  4. Draw the vectors before and after multiplying with \(A\). What can you say visually about the matrix? Can you guess how it will act on the vector \(\begin{pmatrix} 2 \\ -2 \end{pmatrix}\)?
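After you have worked the products out by hand, a minimal NumPy sketch (assuming `numpy` is installed) can confirm your answers:

```python
import numpy as np

# The matrix from the problem and the three vectors.
A = np.array([[3, -3],
              [3,  3]])
a = np.array([1, 0])
b = np.array([0, 1])
c = np.array([1, 1])

# Each matrix-vector product tells you where A sends that vector.
print(A @ a)  # image of (1, 0)
print(A @ b)  # image of (0, 1)
print(A @ c)  # image of (1, 1)
print(A @ np.array([2, -2]))  # test your guess for part 4
```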

02: Matrix products

Compute the following products:

  1. \((A - B)(A + B)\), where \(A = \begin{pmatrix} 2 & 3 \\ -1 & 2 \end{pmatrix}\), \(B = \begin{pmatrix} 1 & 2 \\ 2 & -1 \end{pmatrix}\)
  2. \(A^2 - B^2\), with the same \(A\) and \(B\) as in part 1.
  3. Any comments on the results?
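Once you have both results on paper, a short NumPy check (a sketch, assuming `numpy` is available) makes the comparison in part 3 concrete:

```python
import numpy as np

A = np.array([[ 2, 3],
              [-1, 2]])
B = np.array([[ 1,  2],
              [ 2, -1]])

lhs = (A - B) @ (A + B)   # expands to A^2 + AB - BA - B^2
rhs = A @ A - B @ B       # A^2 - B^2

# For matrices the two differ by AB - BA, which is generally nonzero.
print(lhs)
print(rhs)
print(lhs - rhs)          # equals the commutator AB - BA
```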

03: Shear matrix transformations

Shear transformations are commonly used in computer graphics for creating italic text effects, perspective corrections, and geometric distortions. They preserve area but change angles and shapes.

Consider the following matrix (it is called the shear matrix): \(S = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\)

  1. What would you get if you apply \(S\) on the vector \(\begin{pmatrix} 0 \\ 1 \end{pmatrix}\)?
  2. What would you get if you apply \(S\) again on the result of the previous point?
  3. What if you apply \(S\) one more time?
  4. What do you think happens when we apply \(S\) 100 times on that vector?
  5. Can you compute \(S^{100}\)?
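After working parts 1-5 by hand, you can verify the pattern numerically (a sketch, assuming `numpy` is available); `np.linalg.matrix_power` just repeats the multiplication for you:

```python
import numpy as np

S = np.array([[1, 1],
              [0, 1]])
v = np.array([0, 1])

# Apply the shear a few times and watch what changes in the vector.
for _ in range(3):
    v = S @ v
    print(v)

# Check your hand-computed formula for S^100 against the library.
S100 = np.linalg.matrix_power(S, 100)
print(S100)
```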

04: Diagonal matrix powers

Diagonal matrices are particularly useful in linear algebra because their powers are easy to compute. This property is extensively used in eigenvalue decomposition and diagonalization of matrices (more on this later).

Consider the diagonal matrix \(A = \begin{pmatrix} 2 & 0 \\ 0 & -1 \end{pmatrix}\).

  1. Compute \(A^2\), \(A^3\), and \(A^4\).
  2. Find a general formula for \(A^n\) where \(n\) is any positive integer.
  3. What does this transformation represent geometrically? How does it affect the unit circle when applied repeatedly?
  4. What happens when you apply this transformation to the vector \(\begin{pmatrix} 1 \\ 1 \end{pmatrix}\) multiple times?
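As a check after your own computation, a small NumPy sketch (assuming `numpy` is available) shows how diagonal powers behave:

```python
import numpy as np

A = np.array([[2,  0],
              [0, -1]])

# Powers of a diagonal matrix just raise each diagonal entry to that power.
for n in (2, 3, 4):
    print(n, np.linalg.matrix_power(A, n))

# Repeated application to (1, 1): one coordinate doubles, the other flips sign.
v = np.array([1, 1])
for _ in range(4):
    v = A @ v
    print(v)
```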

05: Determinant properties

  1. Prove that \(\det(B^{-1}AB) = \det(A)\) if \(B\) is invertible.

  2. Suppose \(Q\) is a \(3 \times 3\) real matrix such that \(Q^T Q = I\). What values can \(\det(Q)\) take?
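A numeric spot-check is not a proof, but it can tell you whether your answer is plausible. The sketch below (assuming `numpy` is available; the random matrices and the chosen rotation are illustrative) tests both identities:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))   # a random matrix is almost surely invertible

# Numerically, det(B^{-1} A B) should match det(A).
similar_det = np.linalg.det(np.linalg.inv(B) @ A @ B)
print(similar_det, np.linalg.det(A))

# Two 3x3 matrices with Q^T Q = I: a rotation about the z-axis
# and a reflection across the xy-plane. Compare their determinants.
c, s = np.cos(0.7), np.sin(0.7)
rotation = np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])
reflection = np.diag([1.0, 1.0, -1.0])
print(np.linalg.det(rotation), np.linalg.det(reflection))
```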

06: Normal equation for linear regression

The normal equation is a closed-form solution to linear regression problems. It directly computes the optimal parameters using matrix operations, avoiding the need for iterative optimization algorithms like gradient descent.

Consider a simple linear regression problem where you want to fit a line \(y = \theta_0 + \theta_1 x\) to the following data points:

| \(x\) | \(y\) |
|-------|-------|
| 1     | 2     |
| 2     | 4     |

  1. Set up the design matrix \(X\) (including the intercept column) and the target vector \(\vec{y}\).

  2. Use the normal equation \(\vec{\theta} = (X^T X)^{-1} X^T \vec{y}\) to find the optimal parameters \(\theta_0\) and \(\theta_1\).

  3. What line equation did you get? Does it make sense given the data?

  4. Verify your result by checking that this line passes through the given data points.
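After computing \(\vec{\theta}\) by hand, the whole pipeline can be checked with a few lines of NumPy (a sketch, assuming `numpy` is available):

```python
import numpy as np

# Data points (x, y) from the problem.
x = np.array([1.0, 2.0])
y = np.array([2.0, 4.0])

# Design matrix: a column of ones (intercept) next to the x values.
X = np.column_stack([np.ones_like(x), x])

# Normal equation: theta = (X^T X)^{-1} X^T y
theta = np.linalg.inv(X.T @ X) @ X.T @ y
print(theta)      # [theta_0, theta_1]

# Part 4: the fitted line should reproduce these data points exactly.
print(X @ theta)
```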

