
This is ok but nothing is as intuitive as 3B1B's series on YouTube that has been posted hundreds of times on HN [0].

Linear algebra is really about linear transformations of vector spaces, which is not captured in this blog post.

[0] https://www.youtube.com/watch?v=fNk_zzaMoSs



> Linear algebra is really about linear transformations of vector spaces, which is not captured in this blog post.

I... disagree. Some of linear algebra is about that, and it's probably a good way to view it when learning.

But some of my current work (coding theory) involves linear algebra over finite fields. We use results from linear algebra, and interpret our problem using matrices, but really at no point are we viewing what we're doing as transforming a vector space, we're just solving equations with unknowns.


I think this is spot on. Depending on what you're doing, a matrix can be:

    - A linear transformation
    - A basis set of column vectors
    - A set of equations (rows) to be solved
       - (your example: parity equations for coding theory)
    - The covariance of elements in a vector space
    - The Hessian of a function for numerical optimization
    - The adjacency representation of a graph
    - Just a 2D image (compression algorithms)
    ... (I'm sure there are plenty of others)
For some of these, the matrix is really just a high-dimensional number. You rarely (if ever) think of the covariance in a Kalman filter as a linear transform, but you still need to take its eigenvectors if you want to draw ellipses.
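A minimal numpy sketch of that last point (the covariance values here are made up): the eigendecomposition of a 2x2 covariance matrix gives the axis lengths and orientation of an uncertainty ellipse.

```python
import numpy as np

# Hypothetical 2x2 covariance matrix from, say, a Kalman filter state.
P = np.array([[4.0, 1.2],
              [1.2, 1.0]])

# Eigendecomposition: eigenvalues give squared semi-axis lengths (up to a
# confidence scaling), eigenvectors give the ellipse orientation.
eigvals, eigvecs = np.linalg.eigh(P)   # ascending eigenvalues

semi_axes = np.sqrt(eigvals)                        # 1-sigma semi-axis lengths
angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # orientation of major axis
```

No linear-transformation interpretation needed, yet the eigendecomposition is doing all the work.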


The first three can reasonably be thought of as defining linear transformations. For linear systems of equations A x = b, x is an unknown vector in the input space that is mapped by A to b.
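Concretely (with a made-up 2x2 system), solving A x = b amounts to finding the input vector that the map A sends to b:

```python
import numpy as np

# A maps the unknown input vector x to the known output vector b.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)   # the preimage of b under the linear map A
```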

Both covariance matrices and Hessians are more naturally thought of as tensors, not matrices (and therefore not linear transformations). That is, they take in two vectors as input and produce a single real number as output.

As for graph adjacency matrix, this can actually be thought of as a linear transformation on the vector space where the basis vectors correspond to nodes in the graph. Linear combinations of these basis vectors correspond to probability distributions over the graph (if properly normalized).
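A sketch of that view on a toy 3-node graph (the graph itself is invented here): column-normalizing the adjacency matrix yields a random-walk transition matrix, and repeatedly applying it as a linear map propagates a probability distribution over the nodes.

```python
import numpy as np

# Toy directed graph on 3 nodes: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
A = np.array([[0, 0, 1],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# Column-normalize so each column sums to 1: now the matrix acts on
# probability distributions over nodes (a random-walk transition matrix).
T = A / A.sum(axis=0)

p = np.array([1.0, 0.0, 0.0])  # start at node 0 with certainty
for _ in range(50):
    p = T @ p                  # one step of the walk, as a linear map

# p approaches the stationary distribution of the walk.
```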

2D images... Yes, these cannot really be interpreted as linear transformations. But I'd say these aren't really matrices in the mathematical sense.


If you squint hard enough, you can see all of them as linear transformations (even the 2D images :-).

I politely disagree about covariance and Hessians. I can squint and say that the Hessian provides a change in gradient when multiplied by a delta vector. Similarly for covariance... Or you could look at it as one half of the dot product for a Bhattacharyya distance, which is just a product of three matrices (row vector, square matrix, col vector). No need for tensors yet.

That is unless you decide to squint hard enough to see everything as tensors! :-)
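A small sketch of that "row vector, square matrix, column vector" view, with made-up numbers and no tensor machinery:

```python
import numpy as np

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # a made-up symmetric matrix (covariance or Hessian)
d = np.array([1.0, -2.0])    # a delta / difference vector

# The "two vectors in, one number out" view, written as plain matrix products:
q = d @ S @ d                # row vector * square matrix * column vector
```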


Great points. I wrote my comment in response to the article claiming to be an intuitive guide to linear algebra, not an intuitive guide to matrices. According to Wikipedia:

> Linear algebra is the branch of mathematics concerning linear equations, linear functions, and their representations in vector spaces through matrices. [0]

The Venn diagram of linear algebra and matrices definitely has a lot of non-overlap, of which your list covers some. This article should be renamed to be about matrices rather than linear algebra, because it's not really about the latter.

[0] https://en.wikipedia.org/wiki/Linear_algebra


A covariance matrix naturally transforms from the measured space to a space where things are approximately unit Gaussian distributed. This is the analogue of the z-score in the 1D case.

This can be useful in, say, exotic options trading - a natural unit of measurement is how many ‘vols’ an underlier has moved, e.g. a 10-vol move is very large.


Not really the covariance matrix, though, but its Cholesky decomposition (which exists because a covariance matrix is symmetric positive (semi)definite; otherwise you could construct a linear combination with negative variance). Useful stuff.

And vice versa, btw - take iid RV with unit variance, hit them with the Cholesky decomposition, and you have the desired covariance. Used all over Monte Carlo and finance and so on.
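A rough numpy sketch of both directions (the covariance values are invented): the Cholesky factor "colors" iid unit-variance samples to a target covariance, and solving against the factor whitens them back.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target covariance (symmetric positive definite) -- a made-up example.
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])
L = np.linalg.cholesky(C)      # C = L @ L.T

# "Coloring": iid unit-variance samples -> samples with covariance C.
z = rng.standard_normal((2, 100_000))
x = L @ z

# "Whitening": the inverse map sends x back to (approximately) unit Gaussian.
w = np.linalg.solve(L, x)

emp_C = np.cov(x)              # should be close to C
emp_I = np.cov(w)              # should be close to the identity
```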


> - A basis set of column vectors

Let's leave the word 'basis' out, since the column vectors may well be linearly dependent.


Or not span the space, for that matter.


Well, it depends on what "the space" is. Every set of vectors (in a common ambient space) spans some space—often called the column space of a matrix, if they are the column vectors of the matrix.


Sure, every set of vectors will span the space it spans. But the requirement that a basis span the space refers to the space it's in, not the space it spans (otherwise every linearly independent set of vectors would be a basis, spanning the space it spans). I could go on :-)


> > Linear algebra is really about linear transformations of vector spaces, which is not captured in this blog post.

> I... disagree.

This is literally the definition of the term "linear algebra".

> really at no point are we viewing what we're doing as transforming a vector space, we're just solving equations with unknowns.

You may not see what you're doing as transforming vector spaces with linear operators, but that is what you're doing. It's worth pointing out that the definition of vector spaces allows any field, including finite ones, though it's true that the intuition won't be exactly the same.

Another way to say this: if you're working on a problem without thinking about the connection to linear transformations, then it's not correct to say it's a linear algebra problem without obvious connection to linear transformations; instead, it's not a linear algebra problem at all, by definition.


Linear algebra is a shared field across multiple disciplines. So I'm sure that there are many valid and useful interpretations as to what "linear algebra" is essentially about.

However, in mathematics proper, it is absolutely the case that linear algebra is about linear transformations. Indeed, this is the only interpretation that remains meaningful when trying to generalize (e.g. to functional analysis / multilinear algebra).


20+ years ago I took a grad course in coding theory, e.g.,

W. Wesley Peterson and E. J. Weldon, Jr., Error-Correcting Codes, Second Edition, The MIT Press.

-- gee, people are still studying/learning that?

The prof knew the material really well, but to up my game in the finite field theory from other courses, I used

Oscar Zariski and Pierre Samuel, Commutative Algebra, Volume I, Van Nostrand, Princeton.

which did have a lot more than I needed!

My 50,000-foot overview of linear algebra is that the subject still rests on the apparently very old problem of numerically solving systems of simultaneous (same unknowns) linear equations, e.g., via Gaussian elimination (it's really easy, intuitive, powerful, and clever, surprisingly stable numerically, and fast and easy to program; someone might want to type it in from, say, just an English language description!). Since the subject of linear equations significantly pre-dates matrix theory, the start of matrix theory was maybe just easier notation for working with systems of linear equations. In principle, everything done with matrix theory could have been done with just systems of linear equations, although often at the price of a notational mess. In particular, as I outline below, there are now lots of generalizations of systems of linear equations that use different notation and not much matrix theory.
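Taking up that invitation, here is roughly that English description typed in as Python (an illustrative sketch with names of my own choosing, not production code; in practice you'd call numpy.linalg.solve):

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b via Gaussian elimination with partial pivoting."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining entry into the pivot
        # position, for numerical stability.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```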

What's amazing are the generalizations, all the way to linear systems (e.g., their ringing) in mechanical engineering, molecular spectroscopy, frequencies in radio broadcasting, stochastic processes, music, mixing animal feed, linear programming, oil refinery operation optimization, min-cost network flows, non-linear optimization, Fourier theory, Banach space, oil prospecting, phased-array sonar, radar, radio astronomy, seismology, quantum mechanics, yes, error correcting codes, linear ordinary and partial differential equations, ..., and then

Nelson Dunford and Jacob T. Schwartz, Linear Operators Part I: General Theory, ISBN 0-470-22605-6, Interscience, New York.


Linear algebra IS about linear transformations and vector spaces.

The thing is that the field over which the space is defined can be quite arbitrary (finite, infinite, not algebraically closed etc.) which has immense consequences on the behavior of such objects.

When one drops the assumption of a finite number of dimensions, the story becomes wild (and is known as functional analysis, a beautiful and extremely useful branch of mathematics).


3B1B does great work explaining these concepts, but I can't help but ask "why not both?" Turns out, linear algebra is great for working with matrices, vector spaces, approximating non-linear systems, and more... Let's embrace multiple ways of teaching it and gaining intuition rather than keeping score, eh?



Then you failed to comprehend the subject. The point is that a wide array of problems and models are really the same thing.


Failed to comprehend which subject, linear algebra? I would argue no, and other people who are more on the pure mathematics side would agree [0][1].

snicker7 said it very succinctly:

> However, in mathematics proper, it is absolutely the case that linear algebra is about linear transformations. Indeed, this is the only interpretation that remains meaningful when trying to generalize (e.g. to functional analysis / multilinear algebra).

If your point is that I failed to comprehend matrices, then I don't think you have enough data to make that claim, since I don't really talk about matrices. I kind of address that in my other comment [2].

I don't follow your point around "a wide array of problems and models are the same thing". That's a very vague general statement that I certainly comprehend (not sure how you inferred otherwise). Specifically, I don't see how that point relates at all to the claim I made about linear algebra.

[0] https://news.ycombinator.com/item?id=22419018

[1] https://news.ycombinator.com/item?id=22417764

[2] https://news.ycombinator.com/item?id=22417595



