
More depressing: the number of cores will also level off eventually, and where does that leave us then?

Will we actually have to optimize our code instead of pushing each hello world request through hundreds of function calls?

I also assume some niches are actually already 90%+ optimized. Once the cores level off, it's stagnation here too.



>More depressing: the number of cores will also level off eventually, and where does that leave us then?

Short of breakthroughs (e.g. quantum computing or currently unknown technologies), the only clear path is less generalized architectures and more specialized chips. As you move from general architectures towards ASICs, you get improved performance, reduced power, and so on.

We've lived in an era of software where hardware was abundant and cheaper than an engineer's time: throw more hardware at it and make sure you have generally optimal algorithms in most of your code paths. That's going to change more and more, and I suspect we're going to have to start rethinking or redeveloping some of the layers of abstraction between current software and hardware.

As it stands now, we're building more and more complex things atop weaker intermediary layers of abstraction to save time and meet budgets, but that's going to have to be revisited in the future, and the inefficiency debts we've been building up will need to be paid down. Clear code will become less of a top priority when clever optimizations, which may not be so clear, can be added in. We're still many, many years away from this, but that's my prediction.


Not necessarily a problematic trend.

It takes thousands and thousands of engineers to produce a general purpose chip.

It takes... one smart lady to optimize a widely used library for FPGA acceleration.


The "cores" are becoming more specialized and optimized for domain specific tasks. Compiler technology advancements are needed to take advantage of such heterogenous architectures in a transparent way. LLVM MLIR started that already.[1,2] The alternative is being stuck with each silicon vendor's proprietary solutions like CUDA.

[1] https://mlir.llvm.org/

[2] https://www.theregister.com/2022/04/04/compiling_the_future/


I'd guess we get more hardware acceleration. In classic computers (PCs, laptops, servers), that's been available for decades for stuff like audio/video codecs, but I'd say the next big push will be Ethernet/Wi-Fi accelerators that do checksum calculation/verification, VLAN tagging, or even protocol-level stuff like TLS in the chip itself. Currently that's all gated behind expensive cards [1]; I'd expect it to become mainstream over the next few years.

Another big part will be acceleration of disk-to-card data transfers [2]. At the moment, data is shifted from the disk to RAM and then to the GPU or other compute card. Allowing disks to interface with compute cards directly will be a lot of work - basically, there needs to be a parallel filesystem-reader implementation on the disk itself, on the DMA controller, or in the GPU, which is hard to get right with modern, complex filesystems - but for anything requiring high performance, removing the CPU bottleneck should be well worth the effort.
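For a sense of what "TLS in the chip itself" hooks into today: Linux already exposes kernel TLS (kTLS), which is the path that NIC TLS offload (e.g. the cards in [1]) builds on. A minimal sketch in C, assuming the TLS 1.2 handshake was already done in userspace with AES-128-GCM and the negotiated key material is at hand (error details and the RX direction omitted; enable_ktls_tx is just a hypothetical helper name):

    /* Sketch: hand TLS record encryption for an established TCP socket to
     * the kernel - and, on NICs that advertise TLS offload (see ethtool -k),
     * down to the NIC itself. */
    #include <linux/tls.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <string.h>

    #ifndef SOL_TLS
    #define SOL_TLS 282            /* provided by newer libcs */
    #endif

    int enable_ktls_tx(int sock,
                       const unsigned char *key,   /* 16 bytes */
                       const unsigned char *iv,    /*  8 bytes */
                       const unsigned char *salt,  /*  4 bytes */
                       const unsigned char *seq)   /*  8 bytes */
    {
        struct tls12_crypto_info_aes_gcm_128 ci;

        memset(&ci, 0, sizeof(ci));
        ci.info.version     = TLS_1_2_VERSION;
        ci.info.cipher_type = TLS_CIPHER_AES_GCM_128;
        memcpy(ci.key,     key,  TLS_CIPHER_AES_GCM_128_KEY_SIZE);
        memcpy(ci.iv,      iv,   TLS_CIPHER_AES_GCM_128_IV_SIZE);
        memcpy(ci.salt,    salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
        memcpy(ci.rec_seq, seq,  TLS_CIPHER_AES_GCM_128_REC_SEQ_SIZE);

        /* Attach the "tls" upper-layer protocol, then install the TX key. */
        if (setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls")) < 0)
            return -1;
        if (setsockopt(sock, SOL_TLS, TLS_TX, &ci, sizeof(ci)) < 0)
            return -1;

        return 0;   /* plain write()/sendfile() now produce TLS records */
    }

After this, the application writes plaintext and the kernel (or, with hardware offload, the NIC) produces the TLS records; the same setsockopt with TLS_RX covers the receive side.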

Mobile is going to be more interesting because of power, space and thermal constraints, and because a lot of optimization is already being done: unlike on classic computers, vendors couldn't just use brute force to get better performance, and there is a bit of an upper cap on chip/package size as well. We'll probably see even more consolidation towards larger SoCs that also handle all the radio communication, if not on the same chip then at least in the same package; the end game is a single package that does everything, so that all that's needed on the board are RF amplifiers and power management. All the radio stuff will move to SDR sooner or later, allowing far faster adoption of higher-bandwidth links and, with it, a reduction in power consumption, since the power-expensive RF parts have to be powered on for less time to deliver the same amount of data.

[1] https://docs.nvidia.com/networking/display/FREEBSDv370/Kerne...

[2] https://developer.nvidia.com/blog/gpudirect-storage/


Network offload has been available in low-cost controllers for a long time; TLS offload isn't common though.

It would be better if network controllers just had a documented local CPU; then the firmware could be extended over time to add new features.


What if aliens show up and they are thousands of years ahead of us and they don’t have anything much more powerful than an i9 running their UFO?


Who knows what sort of tech aliens would have? I don't think this whole foray into general purpose computing was necessarily pre-destined. Maybe their whole system could look more like a bunch of strung-together ASICs. "You made your computers drastically less efficient so that anyone could program them? Why would you want your soldier-forms and worker-forms to program computers? Just have the engineer-forms place the transistors correctly in the first place, duh."


>Who knows what sort of tech aliens would have? I don't think this whole foray into general purpose computing was necessarily pre-destined.

It's sometimes fun to think that technology is a function of the intelligence that creates it.

What if the aliens have a vastly different perception of reality than us? Things we consider obvious may not be to them, and vice versa. Their underlying desires and motivations may be different.

Humans, for example, often tend to invent things for the sake of it. Imagine a species that doesn't do that. Or an organic FTL drive conjured into existence over eons via distributed intelligence. Weird.


> Or an organic FTL drive conjured into existence over eons via distributed intelligence. Weird.

E.g., What if the first aliens to find us are hyperintelligent slime molds, whose entire existence is predicated on finding the shortest distance between two points in higher-dimensional space and then traveling there to see what there is to eat?


The anime Gargantia on the Verdurous Planet explores this.

In it, squids have evolved into a spacefaring race that uses only organic technology, if any at all, and doesn't seem to have consciousness.

They are at war with the spacefaring humans, who rely on mecha and AI. It ends with a very non-human and frustrating coexistence message instead of going for all-out extermination of hostile creatures.


One of the most interesting things to think about in this regard is the past and the crazy things people thought, and why those things probably didn't seem especially crazy at the time. In the earlier ages of exploring our world, people kept discovering ever more amazing things: springs mysteriously heated even in the coldest of times and places, a tree whose bark, when chewed, can make one's pain completely disappear (known more contemporarily as willow/aspirin), and endless other, ever more miraculous discoveries.

Why, then, would it be so difficult to imagine some spring or treatment that could effectively end illness or even aging - a fountain of youth just awaiting its discovery? It was little more than a normal continuation of a process of exponential progress. But of course the exponential progress came to an unexpected end, and consequently those predictions now look simply naive or superstitious.

We're now in our own period of exponential discovery, and fabulous tales of achievements to come are anything but scarce. Of course, this time it'll be different.


Probably not much more than "it wasn't worth it to install cryo-cooled quantum computers on an average spaceship".

We didn't install supercomputers in the Space Shuttle either. All the big iron was in a building on the ground.


Maybe!

Perhaps they run biological systems alongside their electromechanical ones.

Their ship may be locally intelligent everywhere, with all of that rolling up to an i9-ish main control system.

Purpose-optimized hardware communicating over standardized interconnects could mean a lot of hard tasks are done in silicon, or shared with biological systems too.

They may have decades- or centuries-old solutions to many hard problems, boiled down to heuristics able to run in real time today. Maybe some of these took ages to run initially.


> thousands of years ahead of us

Just thousands? I would expect 100k years at a minimum, and even that is only about 0.0007% of the age of the universe. Millions or billions of years more advanced is not out of the question.

It would be interesting to see how similar technology is among such advanced civilizations, even if they did not compare notes. Does technology eventually converge to the same optimal devices in each civilization?

Given our current extremely primitive state (only about a hundred years of useful electronics) I would be disappointed if we could even imagine what this technology looks like.


They'll likely harness nature's own optimization laws to get perfect solutions instantly, like what people try nowadays in some labs with electricity finding the shortest path/route immediately.


Then that pretty much spells the end of true AI.

We may shift to computers that operate off of chemical signals instead of electrical ones - like the brain.



There are also ideas like the Mill processor, though it's hard to avoid comparisons to Itanium, where a mountain of money still didn't produce a compiler that could unlock what initially sounded like a better ecosystem.


Would the number of cores in a GPU level off? It seems like intensive computing of all sorts will migrate to GPGPU programming.


Seems like the latest Nvidia GPUs aren't really an improvement over the previous ones, but just bigger and proportionally more expensive. So maybe the leveling off in performance is already starting to happen.


That is not true.

The 4090 uses less power than a 3080 Ti while being 63% faster.

https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-f...

And it delivers 45% better performance than a 3090 Ti while using 2% less power (at 4K).

https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-f...
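(Taking those numbers at face value, 45% more performance at 2% less power works out to roughly 1.45 / 0.98 ≈ 1.48, i.e. about 48% more performance per watt than the 3090 Ti.)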

Can't find the link now, but I saw a YouTube video where they analyzed it at different wattage limits, and it performed very nicely.

The 4090 draws a lot of power, but Nvidia has simply chosen to operate at the diminishing-returns end of the curve.

It's a halo product for people who will pay for the top of the range. I mean, look at the price!


Shrinking has almost stopped too, and you can only make a chip so big before it runs into other constraints.


There is a lot of room for development before the exponential curve has to be carried by the next paradigm: at least for desktop computers, we are still decades away from case-filling 3D "compute cubes".


It's quite possible that kind of thing has hard limits set by cooling.


Sure, but usually a new paradigm is ready to take over before hard limits are reached.


The wafer scale computing folks would disagree...



The metric is performance per watt per dollar. At the moment, the amount of compute available per watt-dollar is ridiculously cheap, crypto notwithstanding.

We are not limited by compute resources but by business practices. The next gains lie in the organizational cost of software design, not in technology.


Can't wait.

People should be so ashamed that what's basically an IRC client (Slack) requires more than 4 GB of RAM and so many cores. Laziness. Truly.




