Lab 4: Projections, Span, and Linear Independence

Due for completion at 11:59 PM Ann Arbor time on Monday, May 18th, 2026.

Each lab worksheet will contain several activities, some of which will involve writing code and others that will involve writing math on paper. To receive credit for a lab, you must complete as many of the activities as you can in 2 hours and submit a PDF of your work to Gradescope. We will provide specific instructions on how to submit programming activities (e.g. submitting the notebook or including a screenshot of some output).

Feel free to work with others in the course, but you must submit individually.


Activities


Recap: Projections, Span, and Linear Independence

  • (Chapter 3.4) The orthogonal projection of the vector \(\vec u\) onto the vector \(\vec v\) is given by
$$ \vec p = \frac{\vec u \cdot \vec v}{\vec v \cdot \vec v} \vec v $$

Above, the scalar \(k^* = \frac{\vec u \cdot \vec v}{\vec v \cdot \vec v}\) was chosen to minimize \(\lVert \vec u - k \vec v \rVert^2\).

  • The vector \(\vec p\) is called the orthogonal projection because the resulting error vector,
$$ \vec e = \vec u - \vec p = \vec u - k^* \vec v $$

is orthogonal to \(\vec v\).

$$ \vec e \cdot \vec v = 0 $$
  • (Chapter 4.1) The span of a set of vectors is the set of all possible linear combinations of the vectors in the set.
$$ \text{span}(\{\vec v_1, \vec v_2, \ldots, \vec v_d\}) = \{a_1 \vec v_1 + a_2 \vec v_2 + \cdots + a_d \vec v_d \mid a_1, a_2, \ldots, a_d \in \mathbb{R}\} $$
  • The span of one vector in \(\mathbb{R}^n\) is a line through the origin.

  • The span of two non-parallel vectors in \(\mathbb{R}^n\) is a plane through the origin; this plane is called a 2-dimensional subspace of \(\mathbb{R}^n\).

  • In general, the span of \(d\) vectors in \(\mathbb{R}^n\) is a subspace of \(\mathbb{R}^n\) whose dimension is between \(0\) and \(d\), depending on how the vectors relate to one another.

  • Think of a \(d\)-dimensional subspace of \(\mathbb{R}^n\) as a “slice” of \(\mathbb{R}^n\) that goes through the origin, in which you can move in \(d\) directions.
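The projection formula above can be sketched numerically. In the snippet below, \(\vec u\) and \(\vec v\) are arbitrary example vectors (not from any activity):

```python
import numpy as np

# Orthogonal projection of u onto v, using arbitrary example vectors.
u = np.array([3.0, 1.0])
v = np.array([2.0, 0.0])

k_star = (u @ v) / (v @ v)   # scalar that minimizes ||u - k v||^2
p = k_star * v               # orthogonal projection of u onto v
e = u - p                    # error vector

print(p)        # [3. 0.]
print(e @ v)    # 0.0 -- the error is orthogonal to v
```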


Activity 1: Presidential Speeches and Cosine Similarity

Complete the tasks in the lab04.ipynb notebook.

There are two ways to access the supplemental Jupyter Notebook:

  • Option 1 (preferred): Set up a Jupyter Notebook environment locally, use git to clone our course repository, and open labs/lab04/lab04.ipynb. For instructions on how to do this, see the Environment Setup page of the course website.

  • Option 2: Click here to open lab04.ipynb on DataHub. Before doing so, read the instructions on the Environment Setup page on how to use the DataHub.

Once you’re done, include a screenshot of your completed Activity 1 implementation in your PDF submission of Lab 4 to Gradescope, making sure to include proof that the (local) autograder passed. Instructions on how to do this are in the lab notebook.


Activity 2: Orthogonal Projections

Let \(\vec c = \begin{bmatrix} 1 \\ 2 \\ -4 \\ 0 \end{bmatrix}\) and \(\vec d = \begin{bmatrix} 3 \\ 2 \\ 0 \\ -1 \end{bmatrix}\). Note that \(\lVert \vec c \rVert^2 = 21, \lVert \vec d \rVert^2 = 14\), and \(\vec c \cdot \vec d = 7\).

a)

Find the orthogonal projection of \(\vec c\) onto \(\vec d\). Call this vector \(\vec q\).

b)

Find the error vector, \(\vec r = \vec c - \vec q\). Which vector is \(\vec r\) orthogonal to, \(\vec c\) or \(\vec d\)? Draw a rough picture of the relationship between \(\vec c\), \(\vec d\), \(\vec q\), and \(\vec r\).
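After working parts a) and b) out by hand, you can sanity-check your answers with NumPy; this sketch just applies the projection formula from the recap:

```python
import numpy as np

# Check a hand computation of the projection of c onto d.
c = np.array([1.0, 2.0, -4.0, 0.0])
d = np.array([3.0, 2.0, 0.0, -1.0])

q = (c @ d) / (d @ d) * d   # orthogonal projection of c onto d
r = c - q                   # error vector

print(np.isclose(r @ d, 0))   # True -- r is orthogonal to d
```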


Activity 3: Orthogonal Decomposition

a)

Let \(\vec{v}_1 = \begin{bmatrix} -1 \\ 2 \\ 2 \end{bmatrix}\) \(\vec{v}_2 = \begin{bmatrix} 2 \\ 2 \\ -1 \end{bmatrix}\) and \(\vec{v}_3 = \begin{bmatrix} 2 \\ -1 \\ 2 \end{bmatrix}\). Write \(\vec{u} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\) as a linear combination of \(\vec{v}_1\), \(\vec{v}_2\), and \(\vec{v}_3\), and verify that your answer is correct. Note that \(\vec v_1\), \(\vec v_2\), and \(\vec v_3\) are pairwise orthogonal.

b)

In general, suppose that \(\vec v_1, \vec v_2, \ldots, \vec v_d\) are orthogonal vectors in \(\mathbb{R}^n\), meaning that \(\vec v_i \cdot \vec v_j = 0\) for all \(i \neq j\). Given that it is possible to write \(\vec u\) as a linear combination of \(\vec v_1, \vec v_2, \ldots, \vec v_d\), show that the coefficients of the linear combination

$$ \vec u = a_1 \vec v_1 + a_2 \vec v_2 + \cdots + a_d \vec v_d $$

are given by

$$ a_i = \frac{\vec u \cdot \vec v_i}{\vec v_i \cdot \vec v_i} $$

Hint: Start by taking the dot product of both sides of the linear combination equation with \(\vec v_1\). What do you notice?
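Once you've found the coefficients by hand, one way to check both your answer and the general formula is a short NumPy sketch using the vectors from part a):

```python
import numpy as np

# Recover each coefficient as a_i = (u . v_i) / (v_i . v_i), which works
# because v_1, v_2, v_3 are pairwise orthogonal, then verify the reconstruction.
v1 = np.array([-1.0, 2.0, 2.0])
v2 = np.array([2.0, 2.0, -1.0])
v3 = np.array([2.0, -1.0, 2.0])
u = np.array([1.0, 1.0, 1.0])

coeffs = [(u @ v) / (v @ v) for v in (v1, v2, v3)]
recon = sum(a * v for a, v in zip(coeffs, (v1, v2, v3)))

print(np.allclose(recon, u))   # True
```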


Activity 4: Planes and the Cross Product

An important idea from Chapter 4.1 is that two non-parallel vectors in \(\mathbb{R}^n\) (where \(n \geq 2\)) span a plane in \(n\)-dimensional space. Here, we’ll show you how to find the equation of such a plane, given two vectors in \(\mathbb{R}^3\). This is also touched on in Chapter 4.4.

a)

Given two vectors \(\vec a, \vec b \in \mathbb{R}^3\), show that the vector \(\vec q\) is orthogonal to both \(\vec a\) and \(\vec b\).

$$ \vec q = \begin{bmatrix} a_2 b_3 - a_3 b_2 \\\\ a_3 b_1 - a_1 b_3 \\\\ a_1 b_2 - a_2 b_1 \end{bmatrix} $$

The vector \(\vec q\) is called the cross product of \(\vec a\) and \(\vec b\). The cross product is only defined for two vectors in \(\mathbb{R}^3\) specifically, and the product is another vector in \(\mathbb{R}^3\). (This differentiates it from the dot product, which is defined for two vectors in any \(\mathbb{R}^n\), and whose output is a scalar.)
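A quick way to convince yourself of this orthogonality numerically is NumPy's built-in `np.cross`, which computes exactly the componentwise formula above; the vectors here are arbitrary examples, not the ones from part b):

```python
import numpy as np

# The cross product of a and b is orthogonal to both inputs.
a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 1.0])

q = np.cross(a, b)   # componentwise cross-product formula

print(q @ a, q @ b)  # 0.0 0.0
```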

b)

Find the cross product of \(\vec v_1 = \begin{bmatrix} 2 \\ -1 \\ 3 \end{bmatrix}\) and \(\vec v_2 = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}\).

c)

Let \(\vec q = \begin{bmatrix} q_1 \\ q_2 \\ q_3 \end{bmatrix}\) be your answer to part b).

Verify that the points \((0, 0, 0)\), \((2, -1, 3)\), and \((1, 2, -1)\) satisfy the equation

$$ q_1 x + q_2 y + q_3 z = 0 $$

(Those points are the endpoints of the vectors \(\vec v_1\) and \(\vec v_2\), along with the origin.)

d)

Above, we wrote the equation of the plane spanned by \(\vec v_1\) and \(\vec v_2\) in the “standard form” for planes in \(\mathbb{R}^3\), \(ax + by + cz + d = 0\) (where \(d = 0\)). Now, write the equation of the plane spanned by \(\vec v_1\) and \(\vec v_2\) in parametric form. The parametric form of a plane is given by

$$ P = \vec p_0 + s \vec u + t \vec v, \quad s, t \in \mathbb{R} $$

This won’t require much work; the point is for you to see that there are two ways of expressing planes in \(\mathbb{R}^3\). In higher dimensions, all planes (also called 2-dimensional subspaces) must be expressed in parametric form; see Chapter 4.4.
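One way to see that the two forms describe the same plane: any point produced by the parametric form (taking \(\vec p_0 = \vec 0\), \(\vec u = \vec v_1\), \(\vec v = \vec v_2\)) should satisfy the standard-form equation. Here is a minimal NumPy check, which deliberately does not print the normal vector so part b) stays yours to solve:

```python
import numpy as np

# Any point s*v1 + t*v2 from the parametric form should satisfy
# the standard-form equation q . (x, y, z) = 0.
v1 = np.array([2.0, -1.0, 3.0])
v2 = np.array([1.0, 2.0, -1.0])
q = np.cross(v1, v2)   # normal vector to the plane (your answer to part b)

rng = np.random.default_rng(0)
for _ in range(5):
    s, t = rng.standard_normal(2)
    point = s * v1 + t * v2
    print(np.isclose(q @ point, 0))   # True each time
```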


Activity 5: Finding a Linearly Independent Subset

Recall from Chapter 4.2 that a set of vectors \(\vec v_1, \vec v_2, \ldots, \vec v_d\) is linearly independent if either of the following equivalent conditions holds:

  • None of the vectors can be written as a linear combination of the others.

  • The only way to create the zero vector as a linear combination of the vectors is if all the coefficients are zero. In other words, the only solution to

$$ a_1 \vec v_1 + a_2 \vec v_2 + \cdots + a_d \vec v_d = \vec 0 $$

is \(a_1 = a_2 = \ldots = a_d = 0\).

Chapter 4.2 introduces an algorithm for finding a linearly independent subset of a given set of vectors with the same span as the original set:

given v_1, v_2, ..., v_d
initialize linearly independent set S = {v_1}
for i = 2 to d:
    if v_i is not a linear combination of the vectors in S:
        add v_i to S
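One way to implement this algorithm is with `np.linalg.matrix_rank`: a vector is a linear combination of the vectors kept so far exactly when appending it as a column does not increase the rank of the stacked matrix. This sketch starts from an empty set (equivalent to the pseudocode whenever \(\vec v_1 \neq \vec 0\)), and the example vectors are hypothetical, not from the parts below:

```python
import numpy as np

def independent_subset(vectors):
    """Keep each vector only if it is not a linear combination of those kept so far."""
    S = []
    for v in vectors:
        candidate = S + [v]
        # Full column rank means the candidate set is linearly independent.
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            S.append(v)
    return S

vs = [np.array([1.0, 0.0, 0.0]),
      np.array([1.0, 1.0, 0.0]),
      np.array([2.0, 1.0, 0.0])]   # third vector is v1 + v2, so it gets dropped
print(len(independent_subset(vs)))  # 2
```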

In each of the parts below, find a linearly independent set of vectors that has the same span as the given set of vectors. There are multiple possible answers for each part, but all of them have the same number of vectors.

a)
$$ \vec v_1 = \begin{bmatrix} 1 \\\\ 0 \\\\ 0 \end{bmatrix}, \quad \vec v_2 = \begin{bmatrix} 1 \\\\ 1 \\\\ 0 \end{bmatrix}, \quad \vec v_3 = \begin{bmatrix} 1 \\\\ 1 \\\\ 1 \end{bmatrix}, \quad \vec v_4 = \begin{bmatrix} 2 \\\\ 3 \\\\ 4 \end{bmatrix} $$
b)
$$ \vec v_1 = \begin{bmatrix} 1 \\\\ -1 \\\\ 0 \\\\ 0 \end{bmatrix}, \quad \vec v_2 = \begin{bmatrix} 1 \\\\ 0 \\\\ -1 \\\\ 0 \end{bmatrix}, \quad \vec v_3 = \begin{bmatrix} 1 \\\\ 0 \\\\ 0 \\\\ -1 \end{bmatrix}, \quad \vec v_4 = \begin{bmatrix} 0 \\\\ 1 \\\\ -1 \\\\ 0 \end{bmatrix}, \quad \vec v_5 = \begin{bmatrix} 0 \\\\ 1 \\\\ 0 \\\\ -1 \end{bmatrix}, \quad \vec v_6 = \begin{bmatrix} 0 \\\\ 0 \\\\ 1 \\\\ -1 \end{bmatrix} $$

Hint: Use the 0’s in the vectors strategically, and use the fact that you can’t have more than 4 linearly independent vectors in \(\mathbb{R}^4\).