Previously, you learnt about linear transformations. Closely related to these transformations is the idea of linear independence.
Consider the following matrix:
In this matrix, if col(i) denotes the i-th column vector, we find that
col(1) + col(2) = 4 * col(3)
One column of the matrix can be expressed as a linear combination of the others. The column vectors of this matrix are said to be linearly dependent. How, then, can we define linear independence?
A set of vectors is said to be linearly independent if no vector in the set can be expressed as a linear combination of the remaining vectors.
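As a quick sketch of these two ideas in code (using NumPy; the matrix below is an illustrative one, chosen so that its columns satisfy col(1) + col(2) = 4 * col(3), since the matrix from the example above is not reproduced here), note that the columns of a matrix are linearly independent exactly when the rank of the matrix equals the number of columns:

```python
import numpy as np

# Illustrative matrix: columns chosen so that col(1) + col(2) = 4 * col(3).
A = np.array([[4., 0., 1.],
              [0., 4., 1.],
              [4., 4., 2.]])

# Verify the dependence relation between the columns.
print(np.allclose(A[:, 0] + A[:, 1], 4 * A[:, 2]))   # True

# The columns are linearly independent exactly when the rank of the
# matrix equals the number of columns.
rank = np.linalg.matrix_rank(A)
print(rank, A.shape[1])      # 2 3
print(rank == A.shape[1])    # False -> the columns are linearly dependent
```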
Linear independence is an important property to understand. While working with data, if the column vector of a variable is linearly dependent on the columns of the other variables, that variable adds no extra information to the data matrix.
In the context of a matrix, the term 'linearly independent' typically refers to the columns of the matrix treated as vectors.
Linear dependence among the columns of a data matrix is often referred to as perfect multicollinearity. Multicollinearity describes the extent to which certain variables can be expressed in terms of the others, and it affects the explanatory and predictive power of models built on the data. Perfect multicollinearity implies an exact linear dependence between certain columns of the matrix.
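The sketch below (with a small hypothetical data matrix in NumPy, where the variable x3 is built as an exact linear combination of x1 and x2) shows how perfect multicollinearity can be detected by comparing the rank of the data matrix with its number of columns:

```python
import numpy as np

# Hypothetical data matrix: each column is a variable, each row an observation.
# x3 is an exact linear combination of x1 and x2 (perfect multicollinearity).
x1 = np.array([1., 2., 3., 4.])
x2 = np.array([5., 1., 2., 0.])
x3 = 2 * x1 + x2
X = np.column_stack([x1, x2, x3])

# The rank is smaller than the number of columns, so x3 carries no
# information beyond what x1 and x2 already provide.
print(np.linalg.matrix_rank(X), X.shape[1])   # 2 3
```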
As a matrix is a linear transformation, linear dependence also manifests itself in matrix operations. It is closely related to the inverse of a matrix.
Specifically, the inverse of a square matrix exists only when all the columns of the matrix are linearly independent.
From this, it follows that if the columns of a matrix are linearly dependent, its determinant is equal to zero. This is a very useful property.
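Reusing the illustrative matrix from the earlier sketch, the following NumPy snippet shows both consequences at once: the determinant of a matrix with linearly dependent columns evaluates to zero, and attempting to invert it fails:

```python
import numpy as np

# The same illustrative matrix with linearly dependent columns as before.
A = np.array([[4., 0., 1.],
              [0., 4., 1.],
              [4., 4., 2.]])

# The determinant of a matrix with linearly dependent columns is zero.
print(np.linalg.det(A))      # 0.0 (up to floating-point rounding)

# Consequently, the inverse does not exist and NumPy reports a singular matrix.
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print("No inverse:", err)
```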
In machine learning practice, this property is typically used as a check while tuning parameters: certain parameter values are prohibited so that the matrices involved do not become singular, that is, so that their columns do not become linearly dependent.
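A minimal sketch of what such a check might look like is given below; the matrix X, the shifted matrix X.T @ X + lam * I and the tolerance are all hypothetical choices made purely for illustration:

```python
import numpy as np

def is_invertible(M, tol=1e-9):
    """Treat M as invertible only if its determinant is sufficiently far
    from zero, i.e. its columns are linearly independent."""
    return abs(np.linalg.det(M)) > tol

# Hypothetical tuning loop: X has linearly dependent columns, and the matrix
# actually used downstream is X.T @ X + lam * I. The value lam = 0 is
# prohibited because it leaves that matrix singular.
X = np.array([[4., 0., 1.],
              [0., 4., 1.],
              [4., 4., 2.]])

for lam in [0.0, 0.1, 1.0]:
    M = X.T @ X + lam * np.eye(X.shape[1])
    print(lam, "allowed" if is_invertible(M) else "prohibited")
# 0.0 prohibited, 0.1 allowed, 1.0 allowed
```

In practice, measures such as the condition number of the matrix are often preferred over the raw determinant for this kind of check, but the underlying idea is the same: avoid parameter values that make the matrix singular.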
In the next section, you will learn about determinants.