Calculus is very useful in CS. Many CS papers use it to explain algorithms that ultimately have a discrete implementation (some algorithms for finding edges in images, for example), or for probability.
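The edge-detection case is a nice concrete example of calculus getting a discrete implementation: the image gradient (a derivative) is approximated with finite differences. A minimal sketch in NumPy, with a made-up 4x4 "image":

```python
import numpy as np

def gradient_magnitude(img):
    """Approximate the gradient magnitude |∇I| of an image using
    finite differences (the discretized derivative)."""
    gy, gx = np.gradient(img.astype(float))  # dI/dy, dI/dx
    return np.hypot(gx, gy)

# A tiny synthetic image: dark left half, bright right half,
# so there is one vertical edge down the middle.
img = np.array([
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
])

mag = gradient_magnitude(img)
# The gradient response is largest near the dark/bright boundary
# and zero in the flat regions on either side.
print(mag)
```

Real edge detectors (Sobel, Canny) refine this with smoothing and thresholding, but the core is the same discrete derivative.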
I think OP might be thinking along the lines of, "how many working programmers need to know or use calculus in their day jobs?" Sure, CS researchers working with a physical medium like robotics or computer vision need calculus, but the millions of people writing code to shuffle data back and forth between sources (which is a lot of programming jobs) don't need to know a single bit of it.
That is exactly what I was thinking. I don't deny that there are many specific areas where calculus is useful in programming. However, most working programmers won't encounter them.
"I could theoretically develop a solution within a day, but I have no idea what you are talking about!"
I hope such programmers make sure they end up working on the next Twitter and the like, and never have to implement solutions for financial controllers and such.
Well, there is definitely a strong distinction between "programmer" (a trade) and "computer scientist".
I'd argue that beyond arithmetic and maybe a basic understanding of functions (e.g. f(x)), most "programmers" need to know very little math. But they also tend to produce shoddy, inefficient code and look at problems as "moving bits around".
Computer Scientists, on the other hand, need to have spent some time understanding the theory of computation. Calculus is one way of computing things; so are the lambda calculus, Turing machines, formal grammars, and various algebras like regular languages. Slinging code is just another way of computing things to a Computer Scientist.
Given this, Computer Scientists approach most typical programming trade jobs as a computation problem (or at least an application of a computation theory) rather than just hauling electrons about. This yields code output that is both qualitatively and quantitatively different.
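The "many ways of computing" point can be made concrete with a toy example: Church numerals from the lambda calculus, written directly in Python. This is just an illustration of one of the models mentioned above, nothing more:

```python
# Church numerals: a number n is "apply a function n times".
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: (lambda f: lambda x: f(n(f)(x))) # one more application

def to_int(n):
    """Decode a Church numeral by counting applications of +1."""
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))  # → 3
```

Everything computable can, in principle, be encoded this way; Turing machines and formal grammars are equally valid (and equally impractical) foundations.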
I always wanted to be a revolutionary in tech / science (but I don't have the formal studies that would let me do that from within academia; I'm rather self-taught), not to be rich. I think with time and focus I could have become rich by keeping up my work on the old startup I left. But I felt that I was not being true to myself by doing that, and I was not enjoying it enough to keep pushing it further.
Sorry if this duplicates, my machine glitched. I just wanted to comment that unless you are independently wealthy, money has to be a big concern - which means either doing something commercial or getting into academics. I would love to have a business in ANY field that would provide enough residual income to allow me time to pursue my own ideas long term.
I agree. I have a job that pays me well now (it was hard to make the transition from what paid well to what I liked but didn't pay, because of things I didn't know about... I'm now somewhere in between). I'm not starving, but I have to work full time. But I'm keeping away from business stuff, at least for a while, to learn more of the things I find worth learning.
Starting with some basic knowledge of machine learning (clustering, neural nets, Bayesian inference, etc.) and some basic computer vision / image processing (edge detection, color, basic shapes), how much theory is needed to achieve that objective (recognizing vehicles in photos, and, more interestingly, extracting 3D structure from a single 2D image)?
"How much theory for recognizing objects in images?": Some pattern recognition, lots of image processing.
For the most part, it doesn't matter what classifier you use: k nearest neighbor, support vector machine, random forest, neural nets. They'll all give about the same performance. You should have a general idea of what they do, but I don't think it's worth the effort to become a "neural net expert". You should know enough pattern recognition so you don't fool yourself (by over-training, for example), and have an idea of how to choose the right features.
Where should you put your effort? Into finding useful features for the object you want to classify. And the more image processing you know the more useful features you'll be able to try. How much do you need to know? Depends on the problem. If you're finding cars in the desert then not so much. Your feature set might be "has long straight lines and is not sand colored". If you're trying to tell American made cars from Japanese then it's harder (unless they are moving, in which case it can't be American).
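A toy illustration of the point above: with good features, even the simplest classifier works. The two features here ("long straight line score" and "sand-colored fraction") are invented for the desert-car example, and the numbers are made up:

```python
import math

def nearest_neighbor(train, query):
    """1-NN, about the simplest classifier there is:
    return the label of the closest training example."""
    return min(train, key=lambda ex: math.dist(ex[0], query))[1]

# (features, label): feature vector = (long_line_score, sand_color_fraction)
train = [
    ((0.9, 0.1), "car"),     # strong straight lines, not sand colored
    ((0.8, 0.2), "car"),
    ((0.1, 0.9), "desert"),  # few lines, mostly sand colored
    ((0.2, 0.8), "desert"),
]

print(nearest_neighbor(train, (0.85, 0.15)))  # → car
```

Swap in an SVM or a random forest and, on features this discriminative, you'd see the same answer; the hard work went into the features, not the classifier.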
And what is needed for face recognition (face matching, not just detection), in the same terms? Are the same kind of tools enough for this? (from what I read, it seems so, but so far I couldn't completely believe it)
- check mail at most once an hour when programming
- keep your mail unread if it requires an action, don't look back at read mails in your inbox (send old inbox mails to some other folder once a week for example)
- try not to keep old mails without actions for too long... if a mail will have to wait, send it to another folder so that your inbox is always small and easy to check (around 10 unread mails at the end of each day)
- send mails that require an action to a todo folder; reply immediately to mails that require an answer, before you forget them
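The triage rules above can be sketched as code; the `Mail` record and folder names here are invented for illustration, not any real mail API:

```python
from dataclasses import dataclass

@dataclass
class Mail:
    subject: str
    needs_action: bool
    needs_reply: bool

def triage(mail):
    """Route a read mail per the rules above: reply immediately if it
    needs an answer, park actions in a todo folder, archive the rest
    so the inbox stays small."""
    if mail.needs_reply:
        return "reply-now"  # answer immediately, before you forget
    if mail.needs_action:
        return "todo"       # out of the inbox, but still tracked
    return "archive"        # read, no action: move it out weekly

print(triage(Mail("invoice", needs_action=True, needs_reply=False)))  # → todo
```

The point of the scheme is that the inbox itself only ever holds things not yet triaged.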
Most languages worth learning expand your mind. They really do.
Lisp lets you see your source in a different light. Haskell makes you rethink expressions. C lets you know how it all works under the hood. Factor makes you think in both directions.
There are also languages that set the bar high for you. Ruby for elegance. Prototype or jQuery for turning a complex, ugly thing into a paradise. Even Perl's CPAN shows you the value of cooperation and modularization.
Even bad languages teach you a lot about ugliness, bad decisions and denial.
The same can be said about quite a few frameworks or even tools.
strace makes programs transparent, for example, and tcpdump makes you see; you realize you were blind.
As for math and algorithms, at some point you suddenly know enough of them; they're pretty finite.
- learn a lot of languages, like everyone else does... a lot of this learning will be useless with time
- learn, for example, a lot about computer vision... the road for this is not "pretty finite", and as you learn more, you can work on more and more impressive projects, increasing your value every time
This is how I see it, and I don't think the first path is worthwhile. (Yes, I learned a lot of languages and frameworks too... but at some point I started asking myself: why care about most of it? The real stuff is not this.)
I think the Linus Torvalds example was a good one. Or John Carmack. Do you see them talking about a lot of languages and frameworks or do you see them learning and doing new stuff?