
Among my favorite leading quotes from Knuth's works is this:

"The asymptotically best algorithms frequently turn out to be worst on all problems for which they are used." — D. G. CANTOR and H. ZASSENHAUS (1981)



Fun story: Knuth claims in the 1997 edition of TAOCP vol. 2 that dense matrices of size larger than 250 "rarely occur in practice" and that asymptotically fast matrix multiplication is "strictly of theoretical interest". In 1998, Erich Kaltofen posed "Open Problem 7: Convince Donald Knuth that these asymptotically fast methods are of practical value. If he pays you $2.56 for this technical error, you have solved this problem." As far as I know, this is still an open problem.

(250 is peanut-sized in cryptography, number theory or computer algebra, and asymptotically fast matrix multiplication is very much useful in practice in those domains.)


Are those matrices dense? That's an important qualifier. My corner of the computational world (finite-element-related code) very frequently gets to millions of rows and columns, but with only O(N) nonzero entries (where N is the number of rows, which is approximately the number of columns too).
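To illustrate the kind of sparsity meant here (a hypothetical sketch, not the commenter's actual code): a 1-D finite-difference Laplacian is tridiagonal, so an N×N system carries only about 3N nonzeros rather than N² entries, which is why sparse storage and solvers are the only practical option at this scale:

```python
import numpy as np
from scipy.sparse import diags

# 1-D finite-difference Laplacian: N rows, but tridiagonal,
# so ~3N nonzeros vs. N^2 = 10^12 entries for the dense form.
N = 1_000_000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csr")

print(A.nnz)  # roughly 3N nonzero entries stored
```

Real finite-element matrices are not tridiagonal, but the pattern is the same: each row only couples to its few geometric neighbors, so the nonzero count stays O(N).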


Matrices can be everything between dense and extremely sparse, depending on the application (sometimes you have a structured matrix and can use sparse methods to reduce it to a dense system of much smaller size). But yes, dense matrices with thousands of rows do occur. The applications where Strassen's asymptotically fast multiplication algorithm really helps are certainly rare compared to overall uses of linear algebra, but they do "strictly" exist.

Strassen multiplication usually starts to beat classical multiplication for matrices with a few hundred rows (in special circumstances, it is superior even for 2x2 matrices). The Coppersmith–Winograd algorithm, on the other hand, probably remains "strictly of theoretical interest" for the time being...
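For the curious, here is a minimal sketch of Strassen's recursion (assuming square matrices whose size is a power of two; the cutoff value below is an illustrative guess, not a tuned constant). The point is the seven recursive products in place of the classical eight:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices whose side is a power of 2 using Strassen's
    seven-product recursion, falling back to classical multiplication below
    the cutoff, where the O(n^3) method is faster in practice."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven products instead of the classical eight:
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The extra additions cost O(n^2) per level, so the recurrence T(n) = 7T(n/2) + O(n^2) gives O(n^log2(7)) ≈ O(n^2.81); the cutoff exists precisely because those additions dominate for small n.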


Spectral methods [1] are FE-like methods with non-zero basis functions over the whole domain. Indeed they produce full matrices. [1] http://en.wikipedia.org/wiki/Spectral_method


For a simple application in finance, consider a portfolio with n stocks. Then finding various statistics around

- expected portfolio mean & variance

- optimizing mean-variance subject to a risk budget (following Markowitz portfolio theory), etc.

would all entail operating on the n×n covariance matrix, which is typically dense. If your universe is the Russell 1000, S&P 500, etc, then n will be significantly larger than 250.
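A minimal sketch of the computation described above, using simulated returns (the universe size and data here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500                                  # e.g. an S&P 500-sized universe
# Hypothetical daily returns: ~2 years (504 trading days) for n stocks.
returns = rng.standard_normal((504, n)) * 0.01

cov = np.cov(returns, rowvar=False)      # dense n x n covariance matrix

w = np.full(n, 1.0 / n)                  # equal-weight portfolio
port_mean = w @ returns.mean(axis=0)     # expected portfolio return
port_var = w @ cov @ w                   # portfolio variance: w' Sigma w
```

Mean-variance optimization then repeatedly solves systems against that same dense n×n covariance matrix, which is where the dense linear algebra comes in.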


I actually just recently read that section and was curious how accurate it was, and whether I could take a stab at implementing one of the algorithms he touches on.

And yes, I was thinking that it was the "sparse" nature of typical large matrices that makes them impractical.


In finance, I've dealt with larger (but not much larger) dense matrices.


I like "Fancy algorithms are slow when n is small, and n is usually small." -Rob Pike



