30 Vector Spaces

A vector space is a set of vectors, together with two operations, that satisfies ten axioms. Consider the following definition from Larson and Falvo [16].


Definition VI.4
Let V be a set on which two operations (vector addition and scalar multiplication) are defined. If the listed axioms are satisfied for every \textbf{u}, \textbf{v}, and \textbf{w} in V and all scalars (real numbers) c and d, then V is called a vector space.

  1. \textbf{u}+\textbf{v} is in V.
  2. \textbf{u}+\textbf{v}=\textbf{v}+\textbf{u}.
  3. \textbf{u}+(\textbf{v}+\textbf{w})=(\textbf{u}+\textbf{v})+\textbf{w}.
  4. V has a zero vector \textbf{0} such that for every \textbf{u} in V, \textbf{u}+\textbf{0}=\textbf{u}.
  5. For every \textbf{u} in V, there is a vector in V denoted by -\textbf{u} such that \textbf{u}+(-\textbf{u})=\textbf{0}.
  6. c\textbf{u} is in V.
  7. c(\textbf{u}+\textbf{v})=c\textbf{u}+c\textbf{v}.
  8. (c+d)\textbf{u}=c\textbf{u}+d\textbf{u}.
  9. c(d\textbf{u})=(cd)\textbf{u}.
  10. 1\textbf{u}=\textbf{u}.

Axioms 1 through 5 govern addition: the vector space must be closed under addition, contain the additive identity (denoted by \textbf{0}), and contain additive inverses, and addition must be commutative and associative. Axioms 6 through 10 govern scalar multiplication: the vector space must be closed under scalar multiplication, any vector multiplied by one must equal the vector itself, and the associative property and both distributive properties (distributing a scalar over a sum of vectors and distributing a sum of scalars over a vector) must hold.
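
These axioms are easy to spot-check numerically. The following sketch is my own illustration (not from Larson and Falvo): it samples random vectors in \mathbb{R}^2 with the standard operations and tests several of the axioms. A check like this can only sample finitely many vectors, so it illustrates the axioms rather than proving them.

    import numpy as np

    # Illustrative sketch: spot-check several vector space axioms for R^2
    # with the standard operations, using randomly sampled vectors and scalars.
    rng = np.random.default_rng(0)
    u, v, w = rng.normal(size=(3, 2))   # three random vectors in R^2
    c, d = rng.normal(size=2)           # two random scalars

    assert np.allclose(u + v, v + u)                # axiom 2: commutativity
    assert np.allclose(u + (v + w), (u + v) + w)    # axiom 3: associativity
    assert np.allclose(c * (u + v), c * u + c * v)  # axiom 7: distribute a scalar
    assert np.allclose((c + d) * u, c * u + d * u)  # axiom 8: distribute a vector
    assert np.allclose(c * (d * u), (c * d) * u)    # axiom 9: associativity
    assert np.allclose(1 * u, u)                    # axiom 10: scalar identity
    print("all sampled axioms hold")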

Vectors come in a variety of forms. We have already discussed vectors in n-dimensional space, but vectors can also be functions, polynomials, and matrices [16]. Thus, certain sets of functions, polynomials, or matrices can be vector spaces.

Let’s work through the process of showing that a set of vectors is a vector space. This problem is one I completed for Elementary Linear Algebra and comes from Larson and Falvo [16].


Example 62
Show that the set V=\{(x,x): x\in\mathbb{R}\} with the standard operations (vector addition and scalar multiplication) is a vector space.

Proof.
Consider the vectors \textbf{u}=(u,u), \textbf{v}=(v,v), and \textbf{w}=(w,w) in V.

Vector addition is closed because

    \[\textbf{u}+\textbf{v}=(u+v,u+v)\]

which is an element of V since u+v is a real number. Vector addition is also commutative because

    \begin{align*} \textbf{u}+\textbf{v}&=(u+v,u+v)\\ &=(v+u,v+u)\\ &=\textbf{v}+\textbf{u} \end{align*}

and associative because

    \begin{align*} \textbf{u}+(\textbf{v}+\textbf{w})&=(u+(v+w),u+(v+w))\\ &=((u+v)+w,(u+v)+w)\\ &=(\textbf{u}+\textbf{v})+\textbf{w}. \end{align*}

The zero vector \textbf{0}=(0,0) is in V because 0 is a real number, and for every vector \textbf{u} in V,

    \begin{align*} \textbf{u}+\textbf{0}&=(u+0,u+0)\\ &=(u,u)\\ &=\textbf{u}. \end{align*}

For every vector \textbf{u}=(u,u) in V, there exists -\textbf{u}=(-u,-u), which is in V because -u is a real number, such that

    \begin{align*} \textbf{u}+(-\textbf{u})&=(u+(-u),u+(-u))\\ &=(0,0)\\ &=\textbf{0}. \end{align*}

Scalar multiplication is closed because for every scalar c,

    \[c\textbf{u}=(cu,cu)\]

which is in V because cu is a real number. Both distributive properties hold because

    \begin{align*} c(\textbf{u}+\textbf{v})&=(c(u+v),c(u+v))\\ &=(cu+cv,cu+cv)\\ &=c\textbf{u}+c\textbf{v} \end{align*}

and

    \begin{align*} (c+d)\textbf{u}&=((c+d)u, (c+d)u)\\ &=(cu+du,cu+du)\\ &=c\textbf{u}+d\textbf{u}, \end{align*}

where c and d are scalars and \textbf{u} and \textbf{v} are vectors in V.
Scalar multiplication is also associative because

    \begin{align*} c(d\textbf{u})&=(c(du),c(du))\\ &=((cd)u, (cd)u)\\ &=(cd)\textbf{u}. \end{align*}

Finally,

    \begin{align*} 1\textbf{u}&=(1u,1u)\\ &=(u,u)\\ &=\textbf{u}. \end{align*}

Because all ten axioms are satisfied, V is a vector space.
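
As a numerical companion to this proof (my own sketch, not part of the source), we can represent elements of V as pairs with equal components and spot-check the closure axioms:

    import numpy as np

    # Illustrative sketch: an element of V = {(x, x): x real} is a pair with
    # equal components, so membership in V is the test first == second.
    def in_V(p, tol=1e-12):
        return abs(p[0] - p[1]) < tol

    u = np.array([3.0, 3.0])     # the vector (u, u) with u = 3
    v = np.array([-1.5, -1.5])   # the vector (v, v) with v = -1.5

    assert in_V(u + v)      # axiom 1: closed under addition
    assert in_V(2.7 * u)    # axiom 6: closed under scalar multiplication
    assert in_V(u + (-u))   # u + (-u) = (0, 0), the zero vector of V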

Vector spaces also have subspaces, which are defined by Larson and Falvo as follows [16].


Definition VI.5
A nonempty subset W of a vector space V is called a subspace of V if W is a vector space under the operations of addition and scalar multiplication defined in V.

In a vector space V, there are always the two operations of vector addition and scalar multiplication. However, how these operations are defined may differ from one vector space to another [16].

If we have a vector space V and take a subset of those vectors, that subset is a subspace if it satisfies all of the axioms of a vector space for vector addition and scalar multiplication, which are defined the same way as they are in V.

A special type of subspace of a vector space V is known as the span of a subset S of V.

Spanning Sets

Larson and Falvo provide us with the following definitions [16].


Definition VI.6
A vector \textbf{v} in a vector space V is called a linear combination of the vectors \textbf{u}_1,\textbf{u}_2,...,\textbf{u}_k in V if \textbf{v} can be written in the form

    \[\textbf{v}=c_1\textbf{u}_1+c_2\textbf{u}_2+...+c_k\textbf{u}_k,\]

where c_1,c_2,...,c_k are scalars.


Definition VI.7
If S=\{\textbf{v}_1,\textbf{v}_2,...,\textbf{v}_k\} is a set of vectors in a vector space V, then the span of S is the set of all linear combinations of the vectors in S,

    \[\text{span}(S)=\{c_1\textbf{v}_1+c_2\textbf{v}_2+...+c_k\textbf{v}_k: c_1, c_2,..., c_k \text{ are real numbers}\}.\]

Let S be a subset of a vector space V. That is, every vector in S is also in V. The span of S, denoted span(S), is the set of all vectors that can be “built” by adding together scalar multiples of the vectors in S. Furthermore, span(S) is a subspace of V [16].
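
Deciding whether a particular vector \textbf{b} lies in span(S) amounts to solving a linear system for the coefficients c_1, c_2,..., c_k. The sketch below is my own illustration with made-up vectors (not from the text): it stacks the vectors of S as the columns of a matrix A and looks for coefficients \textbf{c} with A\textbf{c}=\textbf{b}.

    import numpy as np

    # Illustrative sketch: b is in span(S) exactly when A c = b is solvable,
    # where the columns of A are the vectors of S (example vectors made up here).
    S = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
    A = np.column_stack(S)
    b = np.array([2.0, 3.0, 5.0])   # equals 2*(1,0,1) + 3*(0,1,1)

    c, *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares coefficients
    print(c)                        # [2. 3.]
    print(np.allclose(A @ c, b))    # True, so b lies in span(S)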

Sometimes, span(S) is equal to V. When this occurs, we say that S is a spanning set of V and that S spans V [16]. Let’s look at an example of how to show that a subset S of a vector space V spans V. This is a problem I completed for Elementary Linear Algebra and comes from Larson and Falvo [16].


Example 63
Show that the set S=\{(4,7,3),(-1,2,6),(2,-3,5)\} spans \mathbb{R}^3.

Proof.
To prove this, we need to show that for every vector (x_1,x_2,x_3) in \mathbb{R}^3, there exist scalars c_1, c_2, and c_3 such that

    \[(x_1,x_2,x_3)=c_1(4,7,3)+c_2(-1,2,6)+c_3(2,-3,5).\]

Carrying out the scalar multiplication and vector addition gives the system

    \begin{align*} 4c_1-c_2+2c_3&=x_1\\ 7c_1+2c_2-3c_3&=x_2\\ 3c_1+6c_2+5c_3&=x_3. \end{align*}

This system of equations has a unique solution if and only if the determinant of the coefficient matrix (the matrix consisting of only the coefficients) is nonzero [16]. So, we need to find the determinant of

    \[\left[\begin{array}{rrr} 4 & -1 & 2\\ 7 & 2 & -3\\ 3 & 6 & 5 \end{array}\right].\]

By Theorem VI.3 in chapter 28, the determinant equals the sum, over the entries of any single row or column, of each entry times its cofactor (see Definition VI.2 in chapter 28). We will expand along the first column.
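
Before working through the cofactors by hand, here is a short sketch (my own, not from the source) that implements this first-column expansion; we can use it to cross-check the arithmetic that follows.

    import numpy as np

    # Sketch of cofactor expansion down the first column of a square matrix.
    def det_by_cofactors(A):
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        if n == 1:
            return A[0, 0]
        total = 0.0
        for i in range(n):
            # Minor M_{(i+1)1}: delete row i and the first column.
            minor = np.delete(np.delete(A, i, axis=0), 0, axis=1)
            # Cofactor C_{(i+1)1} = (-1)^{(i+1)+1} times the minor's determinant.
            cofactor = (-1) ** (i + 2) * det_by_cofactors(minor)
            total += A[i, 0] * cofactor
        return total

    print(det_by_cofactors([[4, -1, 2], [7, 2, -3], [3, 6, 5]]))  # 228.0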

We need to find the cofactors C_{11}, C_{21}, and C_{31} by first finding the minors M_{11}, M_{21}, and M_{31}. Deleting the first row and first column of the coefficient matrix gives us

    \[\left[\begin{array}{rr} 2 & -3\\ 6 & 5 \end{array}\right]\]

with determinant

    \[2(5)-6(-3)=28\]

which means

    \begin{align*} C_{11}&=(-1)^2(28)\\ &=28. \end{align*}

Deleting the second row and first column of the coefficient matrix gives us

    \[\left[\begin{array}{rr} -1 & 2\\ 6 & 5 \end{array}\right]\]

with determinant

    \[-1(5)-(6)(2)=-17\]

which means

    \begin{align*} C_{21}&=(-1)^3(-17)\\ &=17. \end{align*}

Deleting the third row and first column of the coefficient matrix gives us

    \[\left[\begin{array}{rr} -1 & 2\\ 2 & -3 \end{array}\right]\]

with determinant

    \[-1(-3)-(2)(2)=-1\]

which means

    \begin{align*} C_{31}&=(-1)^4(-1)\\ &=-1. \end{align*}

So, the determinant of the coefficient matrix is

    \[4(28)+7(17)+3(-1)=228\neq 0.\]

Because the determinant of the coefficient matrix is nonzero, our system of equations has a unique solution for every choice of real numbers x_1, x_2, and x_3 [16]. Thus, for every vector (x_1,x_2,x_3) in \mathbb{R}^3, we can find c_1, c_2, and c_3 such that

    \[(x_1,x_2,x_3)=c_1(4,7,3)+c_2(-1,2,6)+c_3(2,-3,5).\]

Therefore, S=\{(4,7,3),(-1,2,6),(2,-3,5)\} spans \mathbb{R}^3.
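
Numerically, we can confirm both the determinant and the unique solution for a sample target vector (my own check, with an arbitrarily chosen target):

    import numpy as np

    # The columns of A are the vectors of S, so the system in the proof is A c = x.
    A = np.array([[4.0, -1.0, 2.0],
                  [7.0, 2.0, -3.0],
                  [3.0, 6.0, 5.0]])
    x = np.array([1.0, 2.0, 3.0])   # an arbitrary target vector in R^3

    print(np.linalg.det(A))         # approximately 228
    c = np.linalg.solve(A, x)       # the unique coefficients c1, c2, c3
    print(np.allclose(A @ c, x))    # True: the linear combination reproduces x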

We will now discuss a specific type of subset that spans a vector space, which is referred to as a basis for that vector space.

Basis

In essence, a basis S of a vector space V is a linearly independent set of vectors that is just large enough that every vector in V can be written as a linear combination of vectors in S. Consider the following definition from Larson and Falvo [16].


Definition VI.8
A set of vectors S=\{\textbf{v}_1,\textbf{v}_2,...,\textbf{v}_n\} in a vector space V is called a basis for V if the following conditions are true.

  1. S spans V.
  2. S is linearly independent.

The condition that S spans V ensures that every vector in V can be formed by some linear combination of vectors in S. However, a spanning set may contain more vectors than are actually needed; this happens when one of the vectors in the set can itself be written as a linear combination of the others. The condition that S is linearly independent rules this out and ensures that S contains the minimum number of vectors needed to span V, because a set of vectors is linearly independent precisely when none of its vectors can be written as a linear combination of the others [16]. The formal definition is given by Larson and Falvo as follows [16].


Definition VI.9
A set of vectors S=\{\textbf{v}_1,\textbf{v}_2,...,\textbf{v}_k\} in a vector space V is called linearly independent if the vector equation

    \[c_1\textbf{v}_1+c_2\textbf{v}_2+...+c_k\textbf{v}_k=\textbf{0}\]

has only the trivial solution c_1=0,c_2=0,...,c_k=0. If there are also nontrivial solutions, then S is called linearly dependent.

Let’s now look at a problem that I completed for Elementary Linear Algebra that comes from Larson and Falvo [16].


Example 64
Is the set of vectors

    \[S=\{(4,7,3),(-1,2,6),(2,-3,5)\}\]

a basis for \mathbb{R}^3?

Solution
By Example 63, we know that S spans \mathbb{R}^3. So, all we have to do is check whether the vectors in S are linearly independent. By Definition VI.9, S is linearly independent if and only if the equation

    \[c_1(4,7,3)+c_2(-1,2,6)+c_3(2,-3,5)=(0,0,0)\]

has only the trivial solution. From our equation, we have the following system

    \begin{align*} 4c_1-c_2+2c_3&=0\\ 7c_1+2c_2-3c_3&=0\\ 3c_1+6c_2+5c_3&=0. \end{align*}

As we saw in Example 63, this system of equations has a unique solution because the determinant of the coefficient matrix is nonzero. Because c_1=0, c_2=0, and c_3=0 is a solution and the solution is unique, it is the only solution. Thus, S is linearly independent by Definition VI.9. Therefore, S is a basis for \mathbb{R}^3 by Definition VI.8.
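
The independence check can also be done numerically (my own sketch): S is linearly independent exactly when the matrix whose columns are the vectors of S has full rank, so the homogeneous system forces the trivial solution.

    import numpy as np

    # Columns of A are the vectors of S; full rank means A c = 0 forces c = 0.
    A = np.array([[4.0, -1.0, 2.0],
                  [7.0, 2.0, -3.0],
                  [3.0, 6.0, 5.0]])

    print(np.linalg.matrix_rank(A))          # 3, so only the trivial solution
    print(np.linalg.solve(A, np.zeros(3)))   # [0. 0. 0.]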

In our next section, we will discuss row and column spaces of matrices.
