A set S of vectors is called linearly independent if no vector sj in S can be written as a linear combination of the others. More precisely: for every vector sj in S, there do not exist scalars a1, a2, ..., an with aj=0 such that (a1)(s1) + (a2)(s2) + ... + (an)(sn) = sj, where s1, s2, ..., sn are the elements of S.
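If you want to test this definition on a computer, here is a minimal sketch in Python (NumPy assumed; the helper names in_span and is_independent are my own, not anything standard): a set is independent exactly when no vector can be recovered, by least squares, from the remaining ones.

    import numpy as np

    def in_span(target, others):
        # Least-squares test: can 'target' be written as a combination of 'others'?
        m = np.column_stack(others)
        coeffs, *_ = np.linalg.lstsq(m, target, rcond=None)
        return np.allclose(m @ coeffs, target)

    def is_independent(vectors):
        # True if no vector in the set is a linear combination of the rest.
        return not any(in_span(v, vectors[:i] + vectors[i + 1:])
                       for i, v in enumerate(vectors))

    print(is_independent([np.array([1., 0.]), np.array([0., 1.])]))  # True
    print(is_independent([np.array([1., 0.]), np.array([2., 0.])]))  # False: (2,0) = 2*(1,0)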

You can visualize a vector as a funny kind of arrow. It can be stretched out or shortened as much as you want, but its direction can never be changed. You can combine vectors by laying them out, the tip of the first one connected to the base of the next.

To say a bunch of vectors are linearly independent means that you can never find a combination of some of them whose endpoint is the same as one of the others. For any point you can get to, there's only one way to get there. If the vector A ends at a certain point, and there is also a combination of B, C, and D that ends there too, then A, B, C, and D are not linearly independent.

In the definition, stretching a vector corresponds to multiplying it by a scalar.

Linear independence is good because it ensures that there's only one combination of vectors that gets you to each point. So if you ask "how can I get to point X?", there will be only one answer. If you are using a set of vectors that is not linearly independent to give directions to X, then there could be infinitely many answers to that question.

In the real world you could take a set of vectors A = walk down a road and B = climb a ladder. They are linearly independent, since for everywhere you can get to, there's only one way to get there. But if you add C = ride a bike down the road, then they are not linearly independent - you could get to a point 1 mile down the road in an infinite number of ways.

Let A := (a1, a2, ..., an), where a1,...,an are vectors in a vector space V. Then a vector a' in V is said to be linearly independent of A if it cannot be expressed as a linear combination of the vectors in A. Alternatively, a vector a' in V is linearly independent of A if it is not in the span of A.

The system of vectors A is said to be linearly independent if every vector in A is independent of the remaining vectors in A.
Theorem: The system A := (a1, a2, ..., an) is independent if and only if the equation

x1*a1+x2*a2+...+xn*an = 0

has only one solution, namely x1=x2=...=xn=0.
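Numerically, the theorem gives a convenient test. Here is a rough sketch (NumPy assumed; the example vectors are just an illustration): the homogeneous equation above has only the trivial solution exactly when the matrix whose columns are a1, ..., an has rank n.

    import numpy as np

    def only_trivial_solution(vectors):
        # True iff x1*a1 + ... + xn*an = 0 forces x1 = ... = xn = 0,
        # i.e. the matrix with columns a1, ..., an has full column rank.
        a = np.column_stack(vectors)
        return np.linalg.matrix_rank(a) == len(vectors)

    print(only_trivial_solution([np.array([1, 0, 0]),
                                 np.array([0, 1, 0]),
                                 np.array([0, 0, 1])]))   # True: independent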

Let V be a set of vectors. An element v, belonging to V, is said to be linearly dependent if it can be expressed as a linear combination of the remaining vectors in V.

E.g., let V = {(2,1,1), (0,1,1), (1,0,0)}.
v = (2,1,1) is linearly dependent upon (0,1,1) and (1,0,0) because (2,1,1) = (0,1,1) + 2*(1,0,0).
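You can check that combination directly (a one-line sketch, again assuming NumPy):

    import numpy as np

    # (0,1,1) + 2*(1,0,0) really does equal (2,1,1)
    print(np.array_equal(np.array([2, 1, 1]),
                         np.array([0, 1, 1]) + 2 * np.array([1, 0, 0])))   # True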
In an n-dimensional vector space, a group of vectors is dependent if some non-trivial linear combination of the vectors can be found to cancel them out.

That is, if we can find a set of n numbers

l1, l2, ... , ln-1, ln that are not all 0

such that

l1v1+l2v2+...+ln-1vn-1+lnvn = 0,

the vectors are said to be linearly dependent (or simply "dependent"). If they are not dependent, they are said to be linearly independent.
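To actually find such numbers l1, ..., ln for a dependent set, you can compute a vector in the null space of the matrix whose columns are v1, ..., vn. A sketch using NumPy's SVD (the example vectors are mine; any dependent set would do):

    import numpy as np

    # v3 = v1 + v2, so the three vectors are dependent.
    v = np.column_stack([np.array([1., 0.]),
                         np.array([0., 1.]),
                         np.array([1., 1.])])

    # The last right-singular vector spans the null space here (the matrix has rank 2).
    _, s, vt = np.linalg.svd(v)
    l = vt[-1]                       # l1, l2, l3, not all zero (proportional to (1, 1, -1) up to sign)
    print(np.allclose(v @ l, 0))     # True: l1*v1 + l2*v2 + l3*v3 = 0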

Given a point p:

  • Two vectors are dependent if p, (p+v1), and (p+v2) lie on the same straight line.
  • Three vectors are dependent if p, (p+v1), (p+v2), and (p+v3) lie in the same plane.
  • Four vectors are dependent if p, (p+v1), (p+v2), (p+v3), and (p+v4) lie in the same three-dimensional hyperplane.
    So, in a three-dimensional vector space, four vectors are always dependent.
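That last point is easy to see numerically: a matrix with 3 rows and 4 columns can have rank at most 3, so its four columns can never be independent. A quick sketch (random example vectors, NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    four = rng.standard_normal((3, 4))     # four vectors in 3-space, stored as columns
    print(np.linalg.matrix_rank(four))     # at most 3, so the four columns are dependent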

If a vector space has an inner product defined on it (i.e. it is an inner product space), then we may also say that a set of orthogonal vectors (a set where each is orthogonal to all others) will be linearly independent. In Euclidean space, this is simply saying that a set of vectors that are all perpendicular to each other are linearly independent, which should be fairly intuitive if you think about the examples given in Gorgonzola's write-up. It is also relatively simple to prove.

In the following I will write vectors as v[i]. Assume we have a set of vectors v[i], where i goes from 1 to n, which are orthogonal with respect to the inner product, which I will denote <a,b>. So this means that, for any i and j where i is not equal to j, <v[i],v[j]>=0. Let's assume that none of the v[i] are the 0 vector.

In order for the vectors to be linearly dependent, we require that there exist a set of scalars l[j], where j goes from 1 to n and at least one of the l[j] is nonzero, such that

l[1]*v[1]+...+l[n]*v[n]=0

Now we may take the inner product of each side with one of the vectors, v[j]:

<v[j],l[1]*v[1]+...+l[n]*v[n]>=<v[j],0>

We know that for any vector a, <a,0>=0, so together with the linearity of the inner product, we have

l[1]*<v[j],v[1]>+...+l[n]*<v[j],v[n]>=0

Now, all the inner products here vanish except the one of v[j] with itself:

l[j]*<v[j],v[j]>=0

Since, by assumption, none of the vectors v[i] are the 0 vector, <v[j],v[j]> must be a non-zero scalar, by the definition of the inner product. Thus, to make the equation true, it must be that l[j]=0. But as you can see, I never specified what j was, so this is true for all j from 1 to n. We have proven that l[j]=0 for all j from 1 to n, meaning only the "trivial solution" exists; therefore, the vectors are linearly independent. It's important to note that while this is intuitively true for the normal, Euclidean inner product, the proof above holds for any inner product space. We can also see that n must be less than or equal to the dimension of the space; otherwise we would have a set of linearly independent vectors numbering more than the dimension, which is a contradiction.
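As a concrete sanity check of the result (a small sketch using the standard Euclidean inner product; NumPy and the particular vectors are my own choices): take a few mutually orthogonal, non-zero vectors, confirm the pairwise inner products vanish, and confirm the rank test says they are independent.

    import numpy as np

    # Three mutually orthogonal, non-zero vectors in R^3.
    v = [np.array([1., 1., 0.]),
         np.array([1., -1., 0.]),
         np.array([0., 0., 2.])]

    # Pairwise inner products are all zero ...
    print(all(np.dot(v[i], v[j]) == 0 for i in range(3) for j in range(3) if i != j))  # True

    # ... and the set is linearly independent: rank equals the number of vectors.
    print(np.linalg.matrix_rank(np.column_stack(v)) == 3)   # True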


I just read Gorgonzola's really great write-up above (upvote it! :) ) this morning, and was moved to write this addendum. Since I just made up this proof off the top of my head, please let me know if there are any mistakes. Since it's pretty straightforward, I think (and hope) there aren't.
