Hacker News

This is the most confusing visualization of linear algebra I've ever seen.

Sure, it looks cool, but mostly because it looks like magic, which is the opposite of what you're supposed to do when illustrating mathematical concepts…

The typical textbook illustration using 2x3/3x2 matrices is much, much clearer than this 32x24 … 64x96 mess. The 3D idea is interesting, but why spawn such an insane amount of elements in your matrices?!



I agree, and I think this reeks of the Monad Burrito Tutorial Fallacy[1]. Once you know what the manipulations are you can start to visualize doing them in these weird 3d ways, but the understanding came through the struggle to make a coherent picture and not the resulting coherent picture itself. The claim that "matrix multiplication is fundamentally a three-dimensional operation" is ultimately very confusing because it conflates the row & column dimensions of the matrix with the dimensions of the underlying vector space.

Colorized Math Equations[2] has the same problem: people see it and go "Colors! English language! This must be so much easier to grasp than math! I feel enlightened for having seen this!" But feeling enlightened is very different from being enlightened, and it just doesn't hold up. I've found people retain very little understanding if they aren't already familiar with the concept.

[1]: https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...

[2]: https://betterexplained.com/articles/colorized-math-equation...

EDIT: The "three-dimensional operation" perspective no doubt comes from writing matrices as rectangles, but this is far from the only representation of them. If the vector v = [a, b, c] is shorthand for v = a x_hat + b y_hat + c z_hat (explicitly a sum of basis vectors), then we can write a matrix with a similar set of basis vectors: m = [[a, b, c], [d, e, f], ...] = a x_hat x_hat + b x_hat y_hat + c x_hat z_hat + ... . There's nothing "rectangular" about this any more than a polynomial (as a sum of monomials) is "rectangular". The details then shake out of how (x_hat y_hat) multiplies with (y_hat z_hat). The rectangle is just a mnemonic.

DOUBLE EDIT: In the above sense, multiplying two matrices is more like a convolution -- the x_hat x_hat term of the first matrix multiplies every term of the second; we just know most of those terms will be zero (the product with any term that doesn't start with an x_hat, e.g. y_hat z_hat).
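The dyad expansion above is easy to check numerically. A minimal numpy sketch (matrix values and names are illustrative, not from the article): rebuild a matrix from its basis-dyad terms, then recover the matrix product from the term-by-term "convolution", where (e_i e_j)(e_k e_l) = delta_jk (e_i e_l) kills most cross terms.

```python
import numpy as np

basis = np.eye(3)  # rows are x_hat, y_hat, z_hat

A = np.arange(9.0).reshape(3, 3)
B = np.arange(9.0, 18.0).reshape(3, 3)

# A matrix is a sum of dyads: A = sum_ij A[i,j] * (e_i e_j^T)
A_from_dyads = sum(A[i, j] * np.outer(basis[i], basis[j])
                   for i in range(3) for j in range(3))
assert np.allclose(A_from_dyads, A)

# Multiply every term of A by every term of B; the dot product
# basis[j] @ basis[k] is delta_jk, so most cross terms vanish.
AB = sum(A[i, j] * B[k, l] * (basis[j] @ basis[k]) * np.outer(basis[i], basis[l])
         for i in range(3) for j in range(3)
         for k in range(3) for l in range(3))
assert np.allclose(AB, A @ B)
```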


Completely agree. To second one of the sibling comments: a really good set of visualizations that helped me develop intuition for linear algebra is 3blue1brown's excellent series "The Essence of Linear Algebra". https://www.3blue1brown.com/topics/linear-algebra

The animations really helped me to understand what eigenvectors, eigenvalues, linear transformations, determinants, etc. are.


I agree, the visualization is pretty but kinda useless to explain mat-mult.

It would have been more intuitive to show that every element of the output matrix corresponds to a dot product of row/column vectors from the input matrices; the animation doesn't even highlight those corresponding vectors clearly.
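That row-by-column picture is a few lines of code. A minimal numpy sketch (values chosen arbitrarily): each output entry C[i, j] is the dot product of row i of A with column j of B.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # 2x3
B = np.array([[ 7.,  8.],
              [ 9., 10.],
              [11., 12.]])          # 3x2

C = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        # dot(row i of A, column j of B)
        C[i, j] = A[i, :] @ B[:, j]

assert np.allclose(C, A @ B)
```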


Came here to say the same thing. This adds absolutely nothing to any reasonable understanding of matrix multiplication. This is the most complex way I could imagine trying to explain what matrix multiplication "is".

If it's useful to someone working in a very complex environment where these visualizations are necessary to help tease out some subtle understanding, then that's great.

But really, this part is all you need to know about the article:

> This is the _intuitive_ meaning of matrix multiplication:

> - project two orthogonal matrices into the interior of a cube

> - multiply the pair of values at each intersection, forming a grid of products

> - sum along the third orthogonal dimension to produce a result matrix.

This 1. Isn't intuitive, and 2. Isn't the "meaning".
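For what it's worth, whether or not they count as intuition, those three quoted steps do translate mechanically to array code. A minimal numpy sketch (shapes chosen arbitrarily): broadcast both matrices into a cube of aligned copies, multiply pairwise, then sum along the shared dimension.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))   # m x k
B = rng.standard_normal((5, 6))   # k x n

# 1. "project" A and B into an (m, k, n) cube of aligned copies,
# 2. multiply the pair of values at each intersection,
cube = A[:, :, None] * B[None, :, :]   # shape (4, 5, 6): grid of products
# 3. sum along the third orthogonal dimension.
C = cube.sum(axis=1)

assert np.allclose(C, A @ B)
```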


You kidding me? This is the only reasonable explanation of matrix multiplication. I remember learning it in school, and memorizing how the input dimensions corresponded to output dimensions, without understanding why.

This gets to the why perfectly. We all understand how to navigate a 3D space intuitively. If the math doesn't tie into that, it may as well be wizard nonsense.


The only reasonable explanation of matrix multiplication is that

1. For a linear function f, its matrix A for some basis {b_i} is the list of outputs f(b_i). i.e. each column is the image of a basis vector. For an arbitrary vector x, the matrix-vector product Ax = f(x).

2. For two linear functions f,g with appropriate domains/codomains and matrices A,B, the result of "multiplication" BA is the matrix for the composed (also linear) function x -> g(f(x)). For an arbitrary vector x, the product (BA)x = B(Ax) = g(f(x)).

This tells you what a matrix even is and why you multiply rows and columns the way you do (as opposed to, e.g., pointwise). It also tells you why the dimensions are what they are: the codomain has some dimension (the height of the columns) and the domain has some dimension (the number of columns). For multiplication, you need the codomain of f to match the domain of g for the composition to make sense, so obviously the dimensions must line up.
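Both points can be verified directly. A minimal numpy sketch (dimensions and names are illustrative): column i of A is the image of the i-th basis vector under f, and the product BA is the matrix of the composition g ∘ f.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))   # matrix of f: R^2 -> R^3
B = rng.standard_normal((4, 3))   # matrix of g: R^3 -> R^4

def f(x):
    return A @ x

def g(y):
    return B @ y

# 1. Column i of A is f applied to the i-th standard basis vector.
for i in range(2):
    e_i = np.eye(2)[:, i]
    assert np.allclose(A[:, i], f(e_i))

# 2. BA is the matrix of the composition x -> g(f(x)).
x = rng.standard_normal(2)
assert np.allclose((B @ A) @ x, g(f(x)))
```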


> This gets to the why perfectly.

Respectfully disagree. A matrix has basically nothing to do with "living on the surface of a cuboid". It's like saying FOIL is the "why" of binomial multiplication -- the "why" is the distributive and associative properties of the things involved, FOIL is just a useful mnemonic that falls out.


If transformer neural networks used 2x3 matrices, then sure, we could use the 'typical textbook illustration' to visualize them. The point of this tool is not to explain matmul with toy examples, but to visualize real data from model weights.



