One really major innovation is that we have developed the capacity to run physical experiments at massive volume, and are only just - with the big data revolution - developing the capability to understand these data volumes and translate them into findings.
Whereas science used to be done in a relatively small number of labs, with little communication between countries - there are now thousands of universities and commercial labs in every developed country doing research. And that research uses machines that measure thousands of variables at high speed.
And yet - we still lack the ability to put all this data together. Even the volume of scientific papers published is greater than any individual could keep up with. Their findings are often extracted into databases - for instance, in biology a new enzyme would end up in the UniProt database. But getting from this newly discovered enzyme to a genetically engineered bacterium that makes gasoline is a journey of hops between fields that rarely happens. Yet.
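To make that concrete (a rough sketch in Python, assuming the `requests` library and UniProt's public REST search API; the EC number is just an illustrative example), this is roughly how someone downstream would pull curated enzyme entries out of that database:

```python
# Rough sketch: querying UniProtKB's public REST search endpoint for
# reviewed (curated) entries matching an enzyme classification number.
# Assumes the `requests` library; EC 1.1.1.1 (alcohol dehydrogenase)
# is just an illustrative example.
import requests

URL = "https://rest.uniprot.org/uniprotkb/search"
params = {
    "query": "ec:1.1.1.1 AND reviewed:true",
    "fields": "accession,protein_name,organism_name",
    "format": "json",
    "size": 5,
}

resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()

for entry in resp.json()["results"]:
    name = entry["proteinDescription"]["recommendedName"]["fullName"]["value"]
    print(entry["primaryAccession"], "|", name, "|",
          entry["organism"]["scientificName"])
```

The lookup itself is the easy part; it's every hop after it - strain selection, pathway design, scale-up - that lives in a different field's literature.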
What I suppose I'm saying is - the progress you talk about in AI and computation has been amazing, but it has much more to give. The next 50 years, should we survive that long, will bring another tidal wave of innovation.
> Whereas science used to be done in a relatively small number of labs, with little communication between countries - there are now thousands of universities and commercial labs in every developed country doing research. And that research uses machines that measure thousands of variables at high speed.
And the median value of that research is zero.
There is literally too much research being done. Because of perverse incentives (in both academia and industry) there are a fair number of results that are not useful along with some which are simply wrong. I believe we could easily cut off the bottom half of the research being done and the appreciable impact would be to increase the sum total of knowledge of the species.
There is def an element of this. Replicability, perverse incentives, bad scientific cultures in specific fields, and all sorts of problems mean a lot of bad or pointless research is done.
It is very hard to say, with basic research, what is pointless. For instance, there is little direct application for bosons, and yet we paid a lot of money for CERN. On the other hand, they say all that RNA vaccine research looked kind of pointless till recently. What if the data about subatomic particles at CERN lets us build quantum computers or fusion power? We wouldn't know until much later. So it's hard to value.
But it doesn't change the multiplier effect of figuring out how to synthesise all this stuff. Some of it only becomes valuable once we can do that.
You're mistaken, I think, because one of your assumptions is not necessarily true - that wrong/useless results are a bad thing and are slowing us down.
First, the notion that "wrong" research is bad. We have to remember that literally the best results science has to offer are in fact wrong today, and have been more wrong in the past. What science produces are models of reality, and while they may be highly accurate at predicting reality, they are not in fact reality. They are wrong in some way. So we can't just throw out all of the wrong results, because then we would have to throw out all of the results. Instead of going down this path, we can be content that some research is wrong, because the scientific process is one of continually refining those results. Also, we note that despite everything literally being wrong, society, technology, and engineering still make progress. Being wrong does not mean being useless.
Second, the notion that "useless" research is bad. The thing about usefulness is that it's hard to quantify, and it's also not a static property. Sometimes research that is useful in one era is completely useless in another. For example, deep learning wasn't very useful until the era of big data and limitless compute. Before then, people could make guesses as to the usefulness of this research, but no one really knew for sure how useful it would be when it was brand new. Should that research not have been done until it was more useful? I don't think anyone would argue that. How then, are we able to determine ahead of time how useful a research project will be? If we knew how to do that, then it wouldn't be research, would it?
So really, if you aim to cut off the bottom half of research with the intent that it would increase the sum total knowledge of humanity, you have to show how you:
1) identify the bottom half of research before it's conducted
2) quantify the "useful" research potential of a project, and explain how you intend to squelch useless research while allowing useful research to persist unimpeded
3) intend to separate "wrong" research from "right" research
4) fund useful research while passing over useless research
I think the answer to those questions would basically involve re-inventing the scientific process.
I mean, just think of it this way: research that may turn out to be useless at least has the positive value of showing how something isn't to be done. This has the positive result of allowing someone else to try a different method, which may be equally useless, or may be the key to unlocking new knowledge. I think it's impossible to get the latter without the former.
I understand what you're trying to say -- yes, Newton was wrong, and now we have refined Newton with Einstein. But Ptolemy was wrong, and we have not refined Ptolemy with Galileo; we threw Ptolemy out.
As an example, the original power pose study has never been replicated. The idea that posing in a specific way led to a neuro-endocrine response was simply wrong. And yet it got cited many times. One of the original authors disavowed it; the other continued promoting it, but now with a much weaker claim. Is it science? Or is it a waste of resources?
I think much of the research I'm deriding is actually pretty good thinking. Published as essays or thought experiments, I think a lot of it would have value. But because of a perverse demand for publications, any good idea has to have prior work, p-values, and - if you can get a grant - an fMRI study slapped onto it.
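To see why that p-value demand matters (a toy simulation with made-up parameters - base rate of true hypotheses, effect size, sample size - not estimates from any real field), publication-by-significance alone can make the majority of published findings wrong:

```python
# Toy simulation: small studies plus a p < 0.05 publication filter.
# Assumes only 10% of tested hypotheses are actually true; all numbers
# here are illustrative, not estimates from any real field.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 10_000, 20
true_effect, base_rate = 0.4, 0.10

published_true = published_false = 0
for _ in range(n_studies):
    real = rng.random() < base_rate
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_effect if real else 0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:  # "significant" results get published, the rest vanish
        published_true += real
        published_false += not real

total = published_true + published_false
print(f"published: {total}, false findings: {published_false / total:.0%}")
```

With underpowered studies and a low base rate of true hypotheses, most of what clears the significance bar is noise - which is exactly how a power-pose result can rack up citations before anyone replicates it.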