Hacker News
On-demand learning comes at the cost of conceptual understanding (jernesto.com)
181 points by chaotically on Jan 4, 2023 | 100 comments


Wait a second, who here honestly did not practice on-demand learning at university? You did, didn't you?

You had to because you were studying for a specific exam, which you would fail if you only understood the deeper concepts but not the specifics of the questions on that exam.

It's also true that you probably don't understand the concepts as well as you'd like. If you're like most of us, you passed the exams by means of an academic Fosbury Flop. Sort of throwing yourself over the bar in pieces while your center of gravity passes under it.

On the other hand, most of the things I do understand well are things that I've been on-demand learning about. You end up coming back to central concepts because they're central. Knit together enough programs and you will end up thinking about code organization at some point in your career. You will run into a Big-O problem at some point. You will run into distributed systems.

Yeah so I'm not sure what my point is, perhaps I agree in a way that it's a bit of a continuum, but also I think with enough exploration you actually end up finding gold.


Me.

I was the conceptologist. I learned by fitting things into chunks.

Then I got to the world of tables of APIs and frameworks and shitty microservices, all of it ad hoc, and found that my mind, tuned to do what I found valuable and beautiful in the world, was worthless.


That's how I thought until I started going deep into some technologies that have very strong technical foundations:

- Databases: Postgres, ClickHouse
- Back-end frameworks: Ruby on Rails
- Front-end frameworks: Elm

I lost count of the times I was reading the documentation of these projects and had my mind blown thinking "The person that came up with this concept was a genius! I'm glad I get to learn it."


This is how I feel about FeathersJS. It just feels like a really sane way to design an API


Wow. That’s exactly how I feel. I’ve always focused on concepts over specifics, thinking that I’ll figure out the details as I do a thing. People don’t seem to want that, though. Maybe I’m looking for the wrong kind of work.


I think I've solved this for myself. I'm a concept-first learner too, but don't have that much of a problem learning ad-hoc details of random libraries and tools. I think what's going on is that, at some point early on, my brain figured out that if the thing doesn't seem to fit a coherent, sensible concept, then there's likely a story explaining the delta. I started to think mostly in terms of guessing what kind of problems and tradeoffs the author faced, which would explain the peculiar design. I found myself searching for information about past iterations, previous use cases - anything that could let me correctly guess at the story behind the seemingly random thing.

This works for me pretty well in software. I still have problems with things requiring memorization, though - like some bits of chemistry, or the grammar of the German language.


I'm incapable of retaining the ad-hoc specifics. Most of the shit they want me to learn I use once and then it's on to other shit. I'll never improve like that.


Please let me know if you ever figure out what kind of work fits...

I feel like the move would have been to go very deep on something. Too late for that now.


Wow, this sounds like me. I'm good at conceptualizing and rarely needed to study for my comp sci classes. I was actually decently skilled when I first started, working on monoliths. Now I'm struggling because of poorly documented apis, incomplete requirements, and overbearing tech debt or messy technical design. If only I could find a job with an orderly structure.


Right???

I started out in a 300 kLOC C++ codebase.

I've never had a better environment since =/


This is exactly how I feel, and why I don't understand why tech interviews are the way they are right now


> Wait a second, who here honestly did not practice on-demand learning at university? You did, didn't you?

I have no real ability to do that, and even when I try to cram it feels like cheating (take that up with my therapist.)

What I did in college is figure out the classes I was likely to take next semester, then furiously study before the semester started until I had a learning buffer of a few weeks. Then during the class I tried to study deeply, always ahead of the class and exams, and ideally the buffer would be zero by the final.


I did the same thing. I'd always try and read ahead so I had read the material BEFORE the teacher covered it. This was an immense help in understanding. Seriously, it helped so much.


I was always surprised that my fellow students didn't do any reading or note taking before the teacher covered something in class.

The textbook was usually better than the teacher anyway.


> You had to because you were studying for a specific exam, which you would fail if you only understood the deeper concepts but not specifics of the questions on that exam.

I mostly operated that way on a class-by-class basis in grad school, but at some point it became a conceptual understanding. The "aha" moment came in grad school when I was a TA for a class I didn't do particularly well in as an undergrad. During a help session, I couldn't recall a couple specific formulae, but I was able to infer and derive them based on my conceptual understanding of the topic.

I think the real difference was that I was now teaching the material, and in doing so, had to approach things from a totally different angle than studying for the test.


Neither of those modes alone suffices. In an undergrad course you will have the concepts all forced at you in class, so that later, when you are learning on demand, they can click.


Love the analogy. For years after school I found myself going into "test mode", a timeboxed intellectual fight to the death. Works great when the work gets distilled to a number and thrown away, not so well when you have to keep working with your results...


When I studied for my Applied Physics degree there were exams twice a year to weed out those who couldn't hack it, but these exams could contain questions on absolutely anything that was part of the whole course.

However the class of one's degree (first, second, etc.) depended only on the finals and the oral examination of the final year experimental work. The intermediate examinations counted for nothing except to justify your continued study.

On-demand learning doesn't work in such an environment: you have to have, simultaneously, a broad and a deep understanding of the system of physics, not just isolated bits of it.


I thought the OP was silly because surely everyone knows that you have to do both.


School grades are a case of Goodhart's Law: as soon as you try to get good grades, the grades stop being a good measure of your learning.


Interesting to see a lot of deeper concepts-focused people in here. I'm the opposite. I have to start from on-demand or nothing will stick. I work best when I can tinker first, then dig deep and get explanations for things I actually observed, roadblocks I encountered, etc.


I think people prefer the on-demand approach because

learning without purpose sucks, I mean it's still very valuable, but it sucks in the sense that it is "unpredictably effective".

It means that you may learn some stuff and be asked about it in an interview a month later, or use it in a task two months later. It's unpredictable.

It has one strong advantage - namely, you're prepared when facing the problem.

In both work and academia I've been learning things ahead of time - sometimes due to work, sometimes due to curiosity - and when facing those problems during lectures or work I've been prepared.

It's an advantage, but the process sucks: it's boring and unpredictable.

I think the on-demand approach doesn't work well when the problems are getting harder and require more knowledge/experience, and you've got to have solid foundations and a certain level of proficiency in your hands.

I think it'd be hard to do cryptography, reverse engineering, writing a database, a compiler, an OS, etc. when learning on-demand. It'd feel miserable, I think.


> learning without purpose sucks, I mean it's still very valuable, but it sucks in the sense that it is "unpredictably effective".

Your presumption is that people are only interested in learning things that make them more effective. This is certainly true of many people at least some of the time, of course. This can be established by direct observation: How many times have you ever heard someone in a classroom ask, "Will this be on the test?"

But as a universal proposition, it can also be contradicted by direct observation: People learn things for "the pleasure of finding things out," the title of a Feynman book.

Why else would anyone read a book like Raymond Smullyan's "To Mock a Mockingbird?" There are precious few occupations where you will become more effective if you know how starlings and kestrels can combine in various permutations to compute anything computable.
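
For the curious: in Smullyan's bird vocabulary, the kestrel is the K combinator and the starling is the S combinator, and together they suffice to express any computable function. A tiny illustrative sketch (my own toy example, not anything from the book):

    // Kestrel and Starling as plain functions (deliberately untyped)
    const K = (x: any) => (_y: any) => x;                     // K x y = x
    const S = (f: any) => (g: any) => (x: any) => f(x)(g(x)); // S f g x = f x (g x)

    // Even the identity function falls out of just these two: I = S K K
    const I = S(K)(K);
    console.log(I(42)); // prints 42, since S K K x = K x (K x) = x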

The first half of your sentence is correct: Learning without purpose sucks. But there are many purposes for learning, including fun (like learning combinatory logic via songbirds in forests), a desire to understand the way things work, or the desire to be viewed by our peers as a learned person.


Don't get me wrong, I'm not saying you should only learn for "material"/"monetary"/"measurable" value - fun/thought has value too - but I think that in the context of this thread we are talking about learning to get better at a job/craft, so: languages, frameworks, concepts/theories/foundations, tools, and so on


Oh I totally get that, and as someone who likes to program for fun...

One of the ways you make more time available for the things you learn for fun (esoteric programming, rock climbing, ultimate, playing music, flying gliders) is by being very intentional about what you choose to learn for "your job."

Another comment made a good point: Perhaps we should call such learning "training." We expect automobile mechanics to know the basic physics behind ICE, but nobody expects the person changing their oil to learn about the computational fluid dynamics involved in designing fuel injection systems.

If you ask, "How can I learn about programming," I would expect the answer to involve a mix of both CompSci basics and practical direction for writing actual programs in a reasonably accessible programming language.

But if you ask, "How can I train to get a job working as a programmer in BigCo," I totally get that the answer should lean in, hard, on the practical and the currently marketable.


> Your presumption is that people are only interested in learning things that make them more effective.

I feel like deep diving into conceptual things makes me more effective. There are no longer dark corners in my mind when I think about certain problems. It helps me move forward with confidence, knowing I've done the research and I'm on the right path, rather than following a heuristic.


On-demand (as-needed) is handy for actually accomplishing things at work, but theory is the thing that doesn't become obsolete as soon as the trends of the industry sway once more. Being able to use React is one thing, but understanding why React (or Angular, etc.) works the way it does is highly transferable


> learning without purpose sucks

Learning can be its own reward. Learning can give you new eyes to appreciate the world.


Hmm, isn't curiosity the purpose in this case?

I think that in the context of this thread we are talking about learning purely to get better at a job/craft


> when the problems are getting harder and require more knowledge

https://www.abelard.org/asimov.php


I think the article might've started with something like: The current practice of software development has a lot of duct-taping together of off-the-shelf components by people who don't sufficiently understand what it's doing.

There certainly are important points to be made there, and the article makes some of them.

I didn't understand all the terminology choices and definitions. For example, where does Papert's theory of Constructionism fit into this dichotomy of "preliminary learning" vs. "on-demand learning"?


I learned a lot of low level things starting out. As the abstractions piled on it became clearer and clearer that I had mechanical sympathy that other people did not, but the world changes underneath you, especially when it's all abstracted away anyway (how would you know if the foundations have been swapped out underneath you?). I got a refresher course about ten years ago working on embedded software, and I'm overdue for a booster.

I'm kind of hoping that IoT or open source firmware for SOCs and BMCs will give people some more fodder for learning systems thinking concepts that are currently being buried behind giant frameworks and stackoverflow.


I thrive via an "on-demand" learning approach. But to me, I use this approach as a way to immerse myself so I can reach a better conceptual understanding. I don't use tools or new skills without thinking about how they might work under the hood. I also spend lots of time relating the tools and skills I use to other things I've done in the past. Many times this leads me to learn about the abstract concepts that underpin the tools/skills I use - something I would not care about if it was presented to me without context.

Here is an example. I was interested in real-time 3D graphics programming. How did I go about it? I followed the bare minimum tutorials to get OpenGL set up with C++ and then I just wrote code for several months, trying different things. This allowed me to develop my own intuition around linear algebra, geometry, 3D rendering, GPU pipelines, etc. I knew I wasn't doing anything groundbreaking, but what I found was after this experimentation I was able to readily digest and understand the formal approach to the topic. And it was out of my own interest too. I was watching university lectures, reading about mathematical proofs, etc. Something I thought I'd never be able to do. And I found it much more rewarding as I was able to see where my intuitions were correct and lined up with the formal approach, and also where I was very wrong.


I'm not so sure about the arguments here. Theoretical knowledge is more advantageous if things are soundly designed. If stuff is a cobbled together mess, as many modern tools are, it may actually block flow or lead to false assumptions.

Don't get me wrong, it's a valid pursuit, but I'm not convinced by the arguments presented.

Instead you can say it opens the door to more serious work such as say language design, operating system teams, things like that. It rarefies your skillset especially if you don't suck at it. You aren't likely to get an important position at say, nvidia, by going through bootcamps. The crack of that door is open much wider for those with a Stanford PhD in hand. I'm sure there's exceptions, but I'd imagine they're quite rare.

I think it's unlikely, however, to make you necessarily more proficient in the latest churn of frameworks to implement the latest churn of websites.


A bonus is theoretical knowledge is quite stable: there is usually some thought put into how to pace and structure the teaching of it (e.g. pedagogy). It builds on itself. And it doesn't expire at the same rate that frameworks do. Drinking from a deeper pool of knowledge (as it were) is a profoundly different experience than learning new tools.

If you're early in your career, focusing on marketable skills and getting experience is a smart move. Once you start thinking about progressing to intermediate and beyond, theory starts to be a secret weapon of sorts that lets you tackle bigger and harder problems.


The article agrees with you that, for learning tools, "on-demand" or "just-in-time" learning is advantageous. This applies for the things that you point out as being commonly cobbled together. However, for learning whole concepts like "AI" or "cloud development," these things are rarely cobbled together and require one to learn how to think through a whole new lens. The concept is larger than any one tool, and the idea is that thinking more broadly will be advantageous in the long run


And that's what I'm fundamentally disputing. We're in an age where "trade school" programming is needed - someone has to do the work and there's no shame in a job well done.

A plumber doesn't need to master the mathematics of fluid dynamics or have an advanced degree in chemical engineering to do their job well. A hair stylist doesn't need to pursue a dermatology medical degree focused on hair growth.

Plumbers and cosmetologists have plenty of work and it's a good profession. They need to go to school, pass exams, get a certification, it's a real thing.

Similarly, getting an e-commerce site up or building a mobile app doesn't require an ability to, say, demonstrate the correspondence principle of procedural abstraction in lambda calculus. There are jobs that do, but that ain't one of them.


I think you’re agreeing with the article?


I don't do well with formal education and what this article describes as "preliminary learning", I'm 100% "on-demand". I've always learned through experience and practice of things. A lot of that is driven by a feeling of "what is this thing." It's kind of adjacent to "how does this thing work" but more abstract, as in fitting it into a system of existing ideas and concepts and building endless categories and subcategories.

I've found that this makes instant proficiency with a lot of things easy ("oh, it's one of these") and mastery fairly quick too ("it's one of these, but with these key differences/details").

The biggest downside is I learn (truly) new things slowly. Where most people note things down, remember them and move on, I can't learn until I've understood what the new thing is, in the sense of having some sort of complete idea of what it is (not necessarily with all the details, just enough for it to be coherent). But once that is absorbed I can move on fairly quickly.

So when I'm being taught "preliminary" I just can't get the whole picture I need, and it all goes by too quickly for me to figure it out in classes and lectures. The other problem with this sort of teaching is that it tends to be bottom-up whereas my mind tends to work top-down, not wanting to know most details until later. I'm more interested in why you would want to have this thing as it is, or why you need it at all, because that reveals much more to me than describing how it works or how it fits in with the next step up.


"Mastery fairly quickly" is a juxtaposition. Mastery of a concept or a subject is not simply understanding it, but being able to apply is to many scenarios(aka experience)...and this bit does take time.

I think learning through experience and hands-on activities is totally valid, and in most cases I don't know if I would say that kind of learning is 'on-demand'. Preliminary learning doesn't have to be reading out of a book or having to do 10 years of school, but it should be a cohesive, prolonged effort to understand fundamentals.


Do you mean oxymoron? Juxtaposition just means 'side by side'. Every sentence is by definition a juxtaposition of concepts.


This is hotly debated in the natural sciences also. Big difference is that the core body of knowledge in, say, botany or ornithology, doesn't change that much over time. Unlike technology, which changes all the time.

"On-demand learning" is apps like iNaturalist and eBird. Folks are asking "What is this?" basically in a knowledge vacuum. It's good that people are curious. But learning the overall taxonomy over time, from combined classroom and fieldwork, makes it a much much richer experience.

Not sure this translates 100% to CS concepts. But I do understand the point being made in the article. Thanks for sharing.


> Unlike technology, which changes all the time.

The core body of knowledge (in programming anyway) really hasn't changed all that much since the 90s, at least.

Sure, the languages and frameworks have changed, and maybe I'm not typing as many semicolons as I once did, but a function is still a function and a class is still a class.


True, true. And the math underlying that is even more enduring.


I don't think this really has anything to do with the two methods being weaker or stronger than one another.

I've seen plenty of the example that you give involving things like iNaturalist. These are people who just plain suck at learning on their own. Sorry, I know it's harsh, but I think that's usually the case. These are people who wouldn't be particularly successful in a classroom either, though they may do somewhat better in a classroom setting merely because it does away with the need for the student to exercise the skill of seeking the right information.

I am more adept at the on-demand side of things. It's not that I can't learn in a classroom setting, but a classroom can also be smothering to me because it doesn't permit much deviating from the formula. What I'm good at is getting the basic idea of something, ditching what I don't think to be relevant, identifying what I believe may be incorrect, and using that to go on my own learning journey and developing the necessary skills. Only after I have exhausted every other feasible option will I ask someone "What is this?" One reason I hated being in a classroom is that a certain amount of question-asking is expected of you, and you'll get singled out if you're not asking enough questions; I don't ask questions because, quite honestly, most lectures are redundant to reading material and taking notes.

For the preliminary learner, they need the classroom (learning on rails) because they don't have what it takes to be self-directed. They're probably not going to rigorously question the information being given to them. If they self-guide their learning, the process they'll go through will be one of frequent astonishment.

What I'm trying to say is that I think this really comes down to the individual and not so much the process. Certain personality types are better suited for preliminary learning, and others are better (and happier) with on-demand learning.


> using that to go on my own learning journey and developing the necessary skills. Only after I have exhausted every other feasible option will I ask someone "What is this?"

Yes, this, thank you - this expresses much more accurately what I was getting at. When I said "classroom" I think what I meant was really "research."

It also digs out the core of my concern, which is I guess the laziness fostered by these apps. They don't encourage the "learning journey" much. And that is so much richer and more interesting than a jumble of disconnected names.


> Putting it bluntly, on-demand learning is excellent for learning tools, but it sucks at learning concepts.

This looks like a false dichotomy to me. Concepts can be learned on demand as well. Talk to any professors in a theoretical STEM field (I assume their learning is as conceptual as one can get), be it maths, physics, stats, CS, or something else; they would advise their grad students not to spend too much time reading through the tomes before starting research. Instead, they would ask their students to learn the concepts along the way when doing research. Of course, one needs to have a solid foundation to learn concepts quickly along the way, but that does not conflict with the effectiveness of learning on demand.

What matters is what to learn on-demand and how.


I would compare (most) modern development to repairing a car engine.

You don't need to know the concepts of material engineering or thermodynamics to be able to repair it. You just learn the functioning of an engine and the tools you need.


There is one more reason why on-the-spot learning is prevalent in tech - tech moves fast. Faster than documentation and much quicker than courses or literature. Anyone trying to reach some finality in learning tech before practicing it will always fall behind. Not to mention that a lack of practice parallel to learning intrinsically makes learning ineffective.

Trying to learn in advance is a waste of time. Most people learning for tech roles agree, and the industry agrees. It might be helpful to step back and look at the big picture when you learn or to backfill knowledge gaps to understand a topic better. But if one avoids learning on the spot or on the job, they just put themselves at a tremendous disadvantage.


Learning how to learn effectively is probably a great ROI for this space.


That's why everyone should read A Mind For Numbers to learn how to learn.


The key claim of this post is buried in the middle:

> preliminary learning is quite bad for learning tools but is the only way of learning concepts.

This claim is not justified. Why must concepts be learned just-in-case (preliminary) rather than just-in-time (on demand)?

I can imagine you might struggle to get a good map of some territory, or a deep explanation of why something is true, without a teacher and/or setting aside some time. But what does this have to do with just-in-case versus just-in-time?


I agree, the article conflates several different learning dichotomies that are often only loosely correlated.

The specific quote you include isn't totally wrong, but it overstates the connection and ignores the contextual differences that make it more or less true.

One of those contexts is the amount of time pressure vs. slack time you have. When you are under lots of time pressure, it's harder to take the time to learn concepts and easy to skip straight to tools.

To me, this indicates that the author has misunderstood the cause of the problem. The problem isn't that tool learning replaces concept learning, but that workplace/management cultures commonly don't value concept learning and as a result don't schedule time for it or reward it.


Yeah, it seems like the author is reasoning like "concept learning can't happen at work, it can only happen at school where there's time, and school is a just-in-case learning environment". I would hope we can reconsider whether concept learning can really never happen at work!


> Why must concepts be learned just-in-case (preliminary) rather than just-in-time (on demand)?

Because you don't know which concepts you need unless you already know what they are.


It's unclear how just-in-case learning solves this problem. By learning more things "just in case" you might happen upon the thing you didn't know you didn't know, or you might not. It's a shot in the dark and could waste a lot of time.


Reminds me a little of Steve Yegge's classic Rich Programmer Food [1]

[1] http://steve-yegge.blogspot.com/2007/06/rich-programmer-food...


Seems to invent new terminology. A more traditional term for "on-demand" learning is "just in time" - a kind of epistemology. "Preliminary learning" aligns with the "just in case" epistemology of factory-style education.


"Just in time" is a great expression. It resonates with me because there is an existing body of knowledge around doing things just in time to reduce waste, and I think that's a lot of the motivation for learning things just in time:

1. You don't invest in learning something you won't use, and;

2. Learning something you will eventually use, but not in the near future, has an opportunity cost: What could you be doing with that time and attention that will produce value now?

I will make a separate comment describing what I think the drawbacks are, but I wanted to focus on my support for your preferred nomenclature.


I do like the "just-in-time" label as well.

It seems like the article is lamenting the fact that many of these become "not in time" learning in practice.


True, but having had a number of my own musings make the front page of HN, I'd say the author must embrace the fact that just because they write about Y doesn't mean the audience won't pull on a thread and decide the important thing to discuss is actually X.

:-)


Agreed, but the terminology doesn’t seem to be the point. It makes some decent arguments about “on-demand” learning not being great for learning theoretical things.


Implicitly, on-demand is also rapid. I can look up how to use a library in minutes and hours. Most people don't have jobs where they can allocate weeks or months to learning theory that they didn't need before. But such situations do exist: employers send staff on courses, and people sign up for part time degrees.


As I see it, learning “on-demand” is good enough for most programming tasks. But I agree with the blog post that the situations you mentioned, where a person can actually learn new theory and perhaps push the boundaries, are increasingly rare.


So let's ignore the headline and most of the article's content, since it's really not relevant to the most interesting supposition of the article.

First, two definitions, from the author:

1. On-Demand Learning: Learning done to solve a specific problem as it is encountered

2. Preliminary Learning: Learning done for the sake of a broad understanding of a subject

I disagree with the naming, but we'll roll with them for brevity's sake.

The author worries about on-demand learning being applied to the exclusion of preliminary learning.

Put another way, too many people read tutorials without reading general information.

So, the solution is pretty simple, on the face of it - make sure you're regularly learning broad areas, not just specific solutions.

Read a variety of books. Acquire a broad base of knowledge.


Would have been better to use ahead of time (AOT) and just in time (JIT) as they have analogs in CS.


"""Fundamentally, this emphasis on short term productivity is a problem rooted deep in the modern tech culture."""

I think it's mostly driven by agile development. Agile has killed technical apprenticeship in the workplace. What has replaced it is the project manager/product owner with a cadre of junior engineers, all of whom are trying to 'figure it out' by reading Stack Overflow posts.

Many tech interviewers prioritize 'fundamentals' during hiring (and by doing so bias the hiring process toward college grads, but I digress). Your team has a lot of well-versed, knowledgeable engineers that can write heap sorts, bubble sorts, compute big-O notation, and write a left-handed parser, but they don't know how to build a piece of software yet. The internet becomes your mentor and you get that gadget shipped.

The result of course is the Cambrian explosion of garbage systems left behind by junior engineers that have since moved on to other jobs. Looks like it's time to rewrite that code with a new team!

I think fundamentals are important but unlike other academic disciplines there's a huge gap between academic CS and actual software work. In CS you need senior ICs to bridge that gap.


I call it "depth-first" vs "breadth-first" learning.

In the first case, to learn a thing you learn all its dependencies, and you understand everything about X once you learn it (but you take a long time before you can connect the dots, it's frustrating because you wait a long time for feedback, and it's harder to learn dependencies without understanding what they will be useful for later).

In the other case you skip the details and learn the general idea, then (if you have time) - you make another pass going into more details, and so on. The good thing is that you have general idea about everything quickly and you know more or less where to look for more details if you need them. The bad thing is that you don't really understand most things, you just think in leaky black boxes until you're forced by the debugging to learn the details.

This is the difference between my wife's (a math teacher) and my (a programmer) learning styles, and we had a very hard time overcoming these differences when learning together at university :)


The tech industry is ruining itself by expecting its employees to learn a bunch of new topics at breakneck speeds on their own time and on their own dime.


From what I have experienced (sample size of 1), the tech industry is more than willing to provide financial assistance in learning about a topic that is relevant to the work that an employee does... the time bit is the one that most people don't want to commit to, and that is totally understandable. My company is offering $500 for passing a certification, which comes out to be like $300 after taxes; they also cover the cost of the exam. It takes 2 hours to take the exam, 10-20 to study for it, and give or take 1 to get to the proctor site. Kind of meh.


I've worked at a few places and none of them have ever spent a dime on developer education. They've sent members of the "product core" team to events and trade conferences, but the devs are just expected to figure things out as we go.


Unfortunate; this is their loss. Offering education is low-hanging fruit for being more competitive in hiring.


This is only a problem for the incurious who stop when they solve the immediate problem.

The curious will often descend into conceptual rabbit holes and acquire conceptual knowledge as well. It's sort of anticipatory yak shaving for future projects.


I'm pretty sure this is a generational thing as well. Younger engineers are used to just "google it", or "watch a youtube video" and you are suddenly an expert in whatever you want. Watch a video on building a cabin in the woods, and you have it all figured out.

Older generations are used to studying the field, and learning about the capabilities of the language, environment, etc.

It comes down to this. A language is a tool. Learn your tools well. Become a master craftsman in the tool, and then you will be able to craft amazing things.

If you just learn "enough" to get by for now, i.e. "On Demand Learning", then you won't even be aware if some particular capability of your toolset exists, because, you can't "learn it" if you don't know about it.

Eventually, you will "see some code" that does something, and suddenly you will be prompted to "go learn it", but your understanding will always be limited.

I still take time to read through books (O'Reilly, Packt, etc.) or do some language reviews to understand what my tools can do, and do my best to get a good feel for the capabilities. As I approach a particular problem, I usually need to do some more review, more "on demand" learning, to deal with the specifics of the problem.

I guess my approach is a bit of both. General coverage so I "know" my tools' capabilities, and then on-demand for the details when I'm trying to use a particular capability for the first or second time.


Mh, here "on-demand learning", in my poor English, sounds like "training vs. learning", where you just show something to someone enough to get him/her able to do it.

In a broader sense: a specialist, too specialized and of too little general culture to know/grasp "the big picture", so someone unable to evolve, who tends to have just a hammer and so sees only nails, etc.

I do not understand the "on demand" part. Beyond the title, the issue is very old, far older than the industrial era or modern times: the issue is that broad and skilled people know the big picture and so are hard to control, unlike the famous Greek "useful idiots", while "useful idiots" are... useful, but not much of a threat to a leader - anthropomorphic bipeds easy to chain and forge into human robots.

At some moments in history leaders have tried a "more culture" path, like Peter the Great; it did not end very well for him, succeeded only partially, and came at a very big price for the people. At other moments the opposite path was taken, and again it did not end well.

Probably the ancient Romans' "in medio stat virtus" ("virtue lies in the middle") is still valid, and finding the mean is not so straightforward, NOR easy to maintain, since we are living beings, not static iron bars...


This piece makes assumptions that turn this into a very boring discussion with little depth.

How effective is preliminary learning vs on-demand learning? If I can have 80% of the conceptual understanding in 1/10 of the time is that a good trade-off?

For preliminary learning, how do you know which knowledge will be useful in the future and which will be abstracted over?

How fast is the field moving and what do you want to focus on?

For example, someone asked on the ML subreddit about theoretical ML research and many comments said theoretical research was unnecessary:

"But, regarding the value of pure ML theory research e.g. convergence bounds, versus practical ML research e.g. quantization methods, my personal feeling for quite some time has been that purely theoretical ML research has been predominantly bunk. Machine Learning is so high dimensional that things that can't be proven universal can be nearly guaranteed probabilistically, and things that can be shown to be possible can be staggeringly unlikely; for example, just because the No Free Lunch theorem exists, doesn't mean that Adam won't work in the vast, vast majority of cases."

https://www.reddit.com/r/MachineLearning/comments/101qbfl/d_...

This topic is extremely grey, and blunt arguments like "on-demand learning is ruining the tech industry" are not interesting.

I recommend you watch this talk by Laurie Voss where he (They?) explains that fundamentals are always shifting. What you consider "conceptual understanding" now was a tool many years ago. Tools solidify into the new generation of concepts over time. One controversial point he makes is that at some point, new developers will not know HTML and that's ok!

https://twitter.com/seldo/status/1075031181685592064

https://www.youtube.com/live/hWjT_OOBdOc?feature=share


Mature fields ideally are driven by theory. Newer fields are driven by experiment, until the theory catches up.

I've learned that it is a waste of time (and brainpower) to blindly tweak code without understanding the algorithm, hoping it will magically work. Yet, I've observed many newcomers to machine learning essentially doing this. It might be somewhat useful at this moment, but if it is easy to do then it will likely become automated.


I would describe it differently; deep vs wide knowledge.

I think having both skills is ideal; be able to quickly get things done, but also be able to acquire deep knowledge when necessary. T-shaped skills.

In reality I think there is no clear divide. If you do the on-demand thing, you'll encounter situations that require a deep dive, or you'll recognize the same patterns in different things. If you do new things, learning is inevitable.

I don't share the concerns; I think there is no shortage of people with super deep knowledge on many topics, and I think we are actually getting better at using that knowledge to everyone's benefit. I'm thinking of tools like Streamlit, Supabase and Framer. You don't need to be a FE pro to build a simple dashboard, you only need basic SQL skills to create a backend, and with just design skills you can make great websites.


“It is when we are using what we have learned that we wish we had studied more” - Chinese proverb


To add weight to the merits of on-demand learning: on the Tim Ferriss podcast, Martine Rothblatt describes how she was able to learn enough to start her own pharmaceutical company to save her daughter's life, using what sounds like an on-demand learning approach. She was also a very well established engineer in unrelated fields, but this was truly inspiring nonetheless, and showed me that you don't need formal education to make a huge impact. https://tim.blog/2020/12/17/martine-rothblatt-transcript/


I like to think of this in terms of JIT and AOT for learning. Some things you'll learn just-in-time and some things you'll learn ahead-of-time.

It's a bit too far to say one is ruining the industry. I think you could however argue that it is robbing people of an education. Most people take the extreme of only doing JIT learning and that makes us lazy. Some people take the extreme of only doing AOT learning and they get nothing done. A balance is definitely needed.

https://jondouglas.dev/jit-and-aot-learning/


Responding just to the title and opening sentences, I'd say that "on-demand learning" is clearly better for experienced tech workers and less good for newcomers (though it can forge them into experienced workers). A very experienced tech worker should be able to quickly learn new things as needed without conceptual cognitive overload when the new thing fits in reasonably well with their past experience. For newcomers, a stressful on-demand learning exercise can create the skills needed to manage the conceptual cognitive load in the future.


I liked this article. We are, I think, quibbling over the author's terms and conflating personal experience with industry practice. Fundamentally, you will see the broader landscape of issues inherent in a solution better if you understand the concepts underlying the tool than if you only understand the tool's surface. I vividly remember a networking problem that a gathering of our brightest technicians could not solve, and (almost by chance) a passing physicist interrupted the gathering and said we had a harmonics issue.


I think it’s a little naive to equate “on-demand learning” with essentially learning frameworks or tools.

I remember early in my career when I got a request to make a graphic slowly start to circle around a user’s mouse as they hovered over an area of the page.

I couldn’t use a tool to solve this problem. I used on demand learning to go back to high school trig and figure out how to animate something in a circular motion.
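
The trig in question boils down to the parametric equation of a circle, x = cx + r * cos(a), y = cy + r * sin(a), centred on the mouse. A rough sketch of the idea (the element id and the numbers are made up for illustration, not the original code):

    // Orbit a hypothetical "orbiter" element around the last known mouse position.
    const orbiter = document.getElementById("orbiter") as HTMLElement;
    let mouseX = 0, mouseY = 0, angle = 0;
    const radius = 40;   // orbit radius in px
    const speed = 0.03;  // radians per animation frame

    document.addEventListener("mousemove", (e) => {
      mouseX = e.clientX;
      mouseY = e.clientY;
    });

    function tick(): void {
      angle += speed;
      const x = mouseX + radius * Math.cos(angle); // circle centred on the mouse
      const y = mouseY + radius * Math.sin(angle);
      orbiter.style.transform = `translate(${x}px, ${y}px)`;
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);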

My point here is that often on demand learning and conceptual preparation merge into the same thing and aren’t mutually exclusive.


Although I agree with most of the arguments in this article, I think that tools vs. concepts is a false dichotomy. Concepts are built on small pieces of knowledge and small pieces of knowledge can be learned via on-demand learning.

The problem is that it’s hard to build the concept from the ground up. For example, everyone knows that it would be best to learn linear algebra before learning machine learning, but no one, in practice, has the time to learn linear algebra from the ground up.


On-demand learning is popular because the general quality of documentation is awful due to mismatched incentives and rapid changes in the tech industry.

Programmers quickly learn that the presence of good documentation cannot be relied on, and bad/outdated documentation is worse than no documentation.

This is a very different paradigm from traditional schooling where almost everything is built on mostly reliable documentation (i.e. carefully curated textbooks).


On demand learning is more efficient. Sure, we'd rather understand the whole field, but we can't know all of every field. Better to learn what you need and go deeper as and when it's clear that you'll need to. If I tried to read full textbooks on every tech I've used, I'd never get any work done, and I doubt I'd retain it all for long without using it in anger.


Hm, I think the author has got the causality wrong here. The direction of the tech industry is not a result of learning style. Rather, the tech industry goes in the direction of the incentives presented to it, which are largely capitalist in nature, and secondarily due to the aggregate preferences and tastes of the various practitioners (which include UX, domain interest, taste in technology, etc). I believe on-demand learning naturally dominates in this environment because the goals are very pragmatic, short-term, go-to-market concerns. Of course, when we expand beyond the narrowly defined "tech industry", the goals could be completely different. For example, pushing the boundaries of mathematics or astrophysics obviously requires a huge amount of preliminary learning to even start to make a dent.

But this brings me to where I really disagree with the article:

> Putting it bluntly, on-demand learning is excellent for learning tools, but it sucks at learning concepts.

I could not disagree more about this, in fact I think it's borderline gibberish. First of all, you can absolutely learn concepts on-demand, and in fact a concrete use case can often make the concept much clearer than if you learned it from first principles. Secondly, of course tools are easy to learn this way, since a tool is a thing which was already thoughtfully designed for a purpose—if your goal is to quickly get something done, and a tool is available, of course that is likely going to be your first choice.

Overall preliminary vs on-demand is just about time horizons and they are not mutually exclusive. For example, I don't see any shortage of programmers continuing to learn over the long-haul and develop new concepts and practices inductively based on years of short-term focused work.


I guess even in web development it might be a bit of a vicious circle. The tools are maybe short-lived due to a lack of understanding of fundamentals and concepts - leading in turn to more "on demand" learning and so on.


I’m the author. I wasn't expecting this to spark such an interesting conversation, but I’m very glad it did.

Thank you all for the thoughtful and eloquent comments.


This sounds like someone having the privilege of getting a formal CS education, and only CS, undermining those who don't.

The author should really look at the many interdisciplinary domains where doing preliminary learning of each and every one of the underlying subjects is not only unnecessary but unfeasible. This applies to domains such as data science, computational physics, and many others.

For most people, the reality is as simple and cold as: if you are not learning while doing, and doing it fast, you simply don’t get paid/funded and will go broke.


Imho this is not a useful categorization.

You can learn by doing, or by theory and then doing. The point of the theory is to take a compressible space of learnings and compress+cache them. A principle or law or whatever is just a cached compression of a model of reality.

It has all the benefits of compression: you acquire the knowledge faster.

It has all the disadvantages of caching: your knowledge may be out of date.

There is nothing inherently valuable about a "conceptual" understanding or understanding the "fundamental principles" of something. All the value comes from the fact that the alternative is you can slowly build your model by experiencing reality bit by bit, constantly trading off use of your knowledge for building more knowledge. Or you can accelerate that process by downloading precomputed indexes from other people: what we call "concepts" or "principles".

With this model of knowledge, you can see that so long as you can extract commonalities from experiences (using your inductive reasoning faculty to generalize from small samples) you will capture "conceptual" models. This model also permits you to choose when you should choose to learn "concepts" and when you should choose to acquire the raw uncompressed experience: prioritize the compressed cache when it is likely to not be stale and it is likely to cover a wide knowledge space.

A heuristic for this is for fields that haven't changed very much in a long time. e.g. basic Physics has not changed in a long time. Reading the compressed model is superior. On the other hand, cross-platform UI development is a new field. You will be better served by trying something.

Another heuristic is the search cost. Using your inductive reasoning requires some examples. Areas where these are hard to acquire should move you toward acquiring knowledge by reading the compressed model. e.g. studying epidemiology is better served by reading through the state of the art in the field. On the other hand, you don't need to study principles to determine if one way to play a level in a videogame is superior.

A third heuristic is how large the search space is. e.g. Your musical tastes are constantly changing, and it's trivial to play the piano. However, the space of notes that sound pleasing to you are likely a small fraction of the space of notes that you can play on the piano. Here you will be better served learning some principles to cut the search space down.

P.S. By the way, to the author, you may benefit from running your posts by some English-fluent friends to proof-read. It is unlikely you meant to use "gives more inside".


> You can learn by doing, or by theory and then doing

That's true, but I felt like he was getting at something else. I've met (and worked with) programmers who tried to stumble through everything by looking for a solution to the problem immediately in front of them, solving that immediate problem as basically as possible, and then working on to the next problem. They don't really understand how things like, say, TCP or SSL or even HTTP work. They don't understand how their IDE relates to their compiler. They don't understand machine learning well enough to recognize when or how to normalize data. They usually expect somebody else to come along and "help" them with all of this stuff in a "just-in-time" way.

I remember attending a Java user group one time when Hadoop was relatively new and the speaker was presenting it. He was talking about some of the challenges they faced getting it up and running and a woman interrupted with "but whenever you had a problem, couldn't you just google the error message?" He said, "there usually weren't any error messages and even when there were, google didn't find anything". She seemed shocked and actually argued with him a bit in disbelief. I was dumbstruck that here was a woman who was actually working as a professional programmer deep enough that she was attending Java user groups who couldn't conceive that there even could be problems that required enough fundamental understanding that "googling an error message" wasn't going to help.


The issue I have with on-demand/just-in-time learning is not when you learn, but rather that such learning tends to be self-directed. To repurpose a famous quip... The person who is their own teacher has a fool for a student. The problem is not whether you are a good student, the problem is that you selected for a teacher someone with no grounding in the subject and no time-tested curriculum to teach from. Sometimes it works well, but you may end up just like me:

I did a lot of self-directed teaching about programming, starting with boyhood explorations almost exactly fifty years ago (1972!). Yes, I also have had formal education, but most of what I have actually ended up using has been based on self-directed learning.

The result for my N=1 sample is a broken comb profile. I know much more about some things than others, and there are areas of our industry where I am completely ignorant! The reasons for this in my case are that I tend to focus on areas where I already have some exposure/interest, and ignore those that I may not understand or even know exist.

"On-demand" learning tends to reinforce this. You are trying to solve a problem, so you ask yourself questions, and then do research to answer the questions you posed to yourself. But this can create a serious XY Problem[1]. As a self-directed learner solving problems when I encounter them, I search for solutions using the knowledge I already have, and I'm completely ignorant of the possibility that I'm solving the wrong problem and ignoring a deeper understanding of what I'm trying to accomplish.

This can be fixed in various ways, the most obvious of which is: Ask questions of people with experience that differs from yours. If you are a self-made programmer, you will learn the most by asking questions of people with a formal education. If you are in a three-person YC startup, you will learn the most from people in Enterprise companies. That is true no matter how little you value formal education ("credentialism") or Enterprise businesses ("bloatware crystallized as an organization").

And the other thing I'll mention is to embrace the "XY Problem." I personally am just like so many people you see on the interwebs. If I ask how to solve problem Y, and someone says "Wrong! You actually have problem X, and you solve that in the following way," I often emotionally push back. "Just let me solve Y today," I think, "I'll read up about X later."

But that is the glaring weakness of being a self-directed, on-demand, just-in-time learner. We chase solving problem Y, sometimes aggressively ignorant of X. To be successful as our own teachers, we must be open to learning about X even when our "priority" is solving problem Y. And this is why being respectful of people with different contexts (those with PhDs or those who attend 57 meetings a week) is especially valuable for us. JM2C, YMMV, and especially, if I'm blathering about Y while the real issue with on-demand learning is X, I'm open to your feedback!

[1]: https://en.wikipedia.org/wiki/XY_problem


Disappointingly, the people who need to understand this never will.


It's all the same, just different levels of abstraction


> Almost every day, there are new technologies to learn, new frameworks, new programming languages, new requirements

fire that asshole.


Says the guy who right on his home page [0] says:

"I love making fun small projects and learning new things in the process. I’m interested in computer science, physics, math, and pretty much everything else."

[0] https://www.jernesto.com/

Sounds like the sort of on-demand learning he's criticizing.


[flagged]


For sure I read it. In fact, he defines On-Demand Learning as follows in that article:

"The second category is "on-demand learning", this is the act of acquiring knowledge when the situation demands it, not sooner nor later. "

Which seems to be the sort of learning he aspires to. Learn it when the situation demands it, not sooner nor later. Notwithstanding a definition of the term "on-demand learning" that conflicts with other definitions of that term.


You present a straw man argument based on the author's bio from his website. Of course that doesn't represent his full position. He clearly advocates for a combination of both approaches in the article



