Hacker News | w-m's comments

That is factually incorrect. The primary source is wind at 132 TWh in 2025, followed by solar with 70 TWh.

Lignite was third with 67 TWh, and hard coal sat at 27 TWh.

https://www.energy-charts.info/downloads/electricity_generat...


Lignite is coal, so that'd make coal #2


Great technical demo, but the usability feels unpolished. So here's a bit of feedback from trying this out on a piano: just because my piano has 88 keys doesn't mean they are all useful for ear training. The very low and very high notes shouldn't be used, at least not by default. They also don't even show up properly on the sheet.

As the melodies get longer and longer with each win, this quickly devolves into a memory game. I'd like to keep practicing ear training, but I struggle to remember what sequence of notes came at steps 8+.

This is somewhat aggravated by completely resetting the current level and replaying the whole melody after a single mistake. If I keep making a mistake on note 10, I get all the notes played back over and over again, which is a bit maddening.


Good point - it's a bit of a hack and I didn't point it out, but technically you can play the lowest/highest notes you'd like to practice with when you configure your MIDI device.

I'll need to put in some proper limits, or possibly add 8va-type symbols, to better constrain things to a grand staff.


The password and pwbuf arrays are declared one right after the other. Will they appear consecutive in memory, i.e. will you overwrite pwbuf when writing past password?

If so, could you type the same password that’s exactly 100 bytes twice and then hit enter to gain root? With only clobbering one additional byte, of ttybuf?

Edit: no, silly, password is overwritten with its hash before the comparison.


> will you overwrite pwbuf when writing past password?

Right.

> If so, could you type the same password that’s exactly 100 bytes twice and then hit enter to gain root? With only clobbering one additional byte, of ttybuf?

Almost. You need to type crypt(password) in the part that overflows to pwbuf.
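
To make the layout concrete, here's a minimal C sketch of the situation (reusing the password/pwbuf names from the thread; this is not the historical login.c, and it assumes the two arrays happen to sit back to back in memory, which the original code relied on but the C standard doesn't guarantee):

    /*
     * Sketch of the layout under discussion -- NOT the historical login.c.
     * Treat purely as an illustration of how overflowing the first buffer
     * lets you choose the contents of the second.
     *
     * Build on glibc/libxcrypt systems: cc sketch.c -lcrypt -o sketch
     */
    #include <stdio.h>
    #include <string.h>
    #include <crypt.h>          /* crypt(); on some systems it lives in <unistd.h> */

    static char password[100];  /* what the user types                    */
    static char pwbuf[100];     /* holds the stored hash during the check */

    int main(void)
    {
        /* Stand-in for reading the hashed password from the password file. */
        const char *stored = crypt("letmein", "ab");
        if (stored == NULL)
            return 1;           /* classic DES crypt() not available here */
        strcpy(pwbuf, stored);

        /* Deliberately unsafe read standing in for the original unbounded
         * input loop: anything past byte 100 spills from password into pwbuf. */
        if (fgets(password, sizeof password + sizeof pwbuf, stdin) == NULL)
            return 1;
        password[strcspn(password, "\n")] = '\0';

        /* The check from the thread: hash what was typed, using the salt
         * taken from pwbuf, and compare against pwbuf itself. If the user
         * typed 100 bytes of padding followed by crypt(key, salt), pwbuf
         * now contains a hash they chose, and the comparison succeeds. */
        const char *hashed = crypt(password, pwbuf);
        if (hashed != NULL && strcmp(hashed, pwbuf) == 0)
            puts("access granted");
        else
            puts("access denied");
        return 0;
    }

Classic DES crypt() only looks at the first 8 characters of the key and takes its salt from the first two characters of the second argument, which is why the padding doesn't get in the way: type your chosen password, pad it out to exactly 100 bytes, then append crypt(password, salt) computed elsewhere so it lands in pwbuf.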


“With Series 3, we are laser focused on improving power efficiency, adding more CPU performance, a bigger GPU in a class of its own, more AI compute and app compatibility you can count on with x86.” – Jim Johnson, Senior Vice President and General Manager, Client Computing Group, Intel

A laser focus on five things is either business nonsense or optics nonsense. Who was this written for?


It's all the things Apple's processors are excellent at, and AMD is not far behind Apple. So unless Intel delivers on all of those things, they can't hope to regain the market share they have lost.


Can't we just focus on everything?


I think you mean laser focus on everything. Maybe they have a prism.


I’m sure they have something like a prism. Perhaps, a PRISM.


Well, this is the consumer electronics showcase, so I would say consumers who are looking at buying laptops.


Somewhat ironically, if they were laser focused using infrared lasers, wouldn't that imply the company was not very specific at all? Infrared is something like 700 nm, which would be huge in terms of transistors.


State of the art lithography currently uses extreme ultraviolet, which is 13.5nm. So maybe they are EUV laser-focused, just with many mirrors pointing it in 5 different directions?


Sounds very expensive.


Only like $400 million per fab.


Meanwhile they are NOT laser-focusing on doing more of Lunar Lake, with its on-package memory and glorious battery life.

Intel called it a “one-off mistake”; it’s the best mistake Intel ever made.


Intel is claiming that Panther Lake has 30% better battery life than Lunar Lake.


Perhaps in a vacuum…

On package memory is claimed to be a 40% reduction in power consumption. To beat actual LL by 30%, it means the PL chip must actually be ~58% more efficient in an apples-to-apples non-SoC configuration.

Possible if they doped PL’s silicon with magic pixie dust.


> On package memory is claimed to be a 40% reduction in power consumption.

40% reduction in what power consumption? I don't think memory is usually responsible for even 40% of the total SoC + memory power, and bringing memory on-package doesn't make it consume negative power.


Lunar Lake had a 40% reduction in PHY power use by putting memory directly on the processor package (MoP), roughly going from 3-4 watts down to 2 watts.


Do you have more information on that? I have a Meteor Lake laptop (pre-Lunar Lake) and the entire machine averages ~4 W most of the time, including screen, WiFi, storage and everything else. So I don't see how the CPU memory controller can use 3-4 W unless it is for irrelevantly brief periods of time.


That's peak usage. I don't know how much the PHY power usage drops when there aren't any memory accesses. For comparison, the peak wattage of Meteor Lake is something like 30-60 watts.

https://www.phoronix.com/review/intel-whiskeylake-meteorlake...


Wouldn’t a multiple of the resonance frequency also be problematic then? Why doesn’t the axle disintegrate at 4800 rpm?


Because that's way above the critical resonance frequency. 4000-25,000 rpm is safe.


Just use the non-codex models for investigation and planning; they listen to "do not edit any files yet, just reply here in chat", and they're better at getting the bigger picture. Then you can use the -codex variant to execute a carefully drafted plan.


Apple acquires OpenAI, Sam becomes CEO of combined company; iPhone revenue used to build out data centers; Jony rehired as design chief for AI device.


the worst possible future for Apple, & perhaps for us all.


> Apple acquires OpenAI, Sam becomes CEO of combined company; iPhone revenue used to build out data centers; Jony rehired as design chief for AI device.

Wonder what to call this brand of fanfic?

https://en.wikipedia.org/wiki/Fan_fiction


Stratechery 2.0


This is so insanely terrible that I’m going to put my phone down now and go do something else.


I hate that this sounds plausible


I'm more in the "Not in a million years" camp on this one. :)


> FAQ

> Has Mixpanel been removed from OpenAI products?

> Yes.

https://openai.com/index/mixpanel-incident/


Hard to tell if that's a temporary or permanent step


Based on what I know of OpenAI's culture, certainly permanent.


This is a good resource. But for the computer vision and machine learning practitioner, most of the fun can start where this article ends.

nvcc from the CUDA toolkit has a compatibility range with the underlying host compilers like gcc. If you install a newer CUDA toolkit on an older machine, likely you'll need to upgrade your compiler toolchain as well, and fix the paths.

While orchestration in many (research) projects happens from Python, some of them depend on building CUDA extensions. An innocent-looking Python project may not ship compiled kernels and may require a CUDA toolkit to work correctly. Some package management solutions can install CUDA toolkits for you (conda/mamba, pixi); the pure-Python ones (pip, uv) cannot. That leaves you to match the correct CUDA toolkit to your Python environment for each project.

conda specifically spreads packages over different channels (default/nvidia/pytorch/conda-forge) and, from conda 4.6 on, defaults to a strict channel priority, meaning "if a name exists in a higher-priority channel, lower ones aren't considered". This default strict priority can make your requirements unsatisfiable, even though a version of each required package exists somewhere in the collection of channels. uv is neat and fast and awesome, but leaves you alone in dealing with the CUDA toolkit.

Also, code that compiles with older CUDA toolkit versions may not compile with newer ones, while newer hardware may require a CUDA toolkit version that is newer than what the project maintainer intended. PyTorch ships with a specific CUDA runtime version, and if you have additional code in your project that also uses CUDA extensions, it needs to match the CUDA runtime version of your installed PyTorch to work. Trying to bring up a project from a couple of years ago to run on the latest hardware may thus blow up on you on multiple fronts.
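
When one of these mismatches bites, the first thing worth checking is which driver and which runtime are actually in play. Here's a minimal sketch of such a check in C (the file name and messages are mine, not from any particular project), compiled with whatever toolkit the environment claims to use:

    /*
     * version_check.cu -- minimal sketch for seeing which CUDA versions
     * are actually in play on a machine.
     * Build with the toolkit your environment claims to use:
     *     nvcc version_check.cu -o version_check
     */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int driver = 0, runtime = 0;

        /* Highest CUDA version the installed kernel driver supports. */
        cudaDriverGetVersion(&driver);
        /* CUDA runtime version this binary was compiled/linked against. */
        cudaRuntimeGetVersion(&runtime);

        /* Versions are encoded as 1000*major + 10*minor, e.g. 12040 = 12.4 */
        printf("driver supports up to: CUDA %d.%d\n", driver / 1000, (driver % 1000) / 10);
        printf("runtime compiled with: CUDA %d.%d\n", runtime / 1000, (runtime % 1000) / 10);

        if (runtime / 1000 > driver / 1000)
            puts("runtime major version is newer than the driver supports; expect trouble");
        return 0;
    }

On the Python side, torch.version.cuda reports which CUDA runtime the installed PyTorch was built against; that's the number any extra CUDA extensions in the project need to match.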


> nvcc from the CUDA toolkit has a compatibility range with the underlying host compilers like gcc. If you install a newer CUDA toolkit on an older machine, likely you'll need to upgrade your compiler toolchain as well, and fix the paths.

Conversely, nvcc often stops working with major upgrades of gcc/clang. Fun times, indeed.

This is why a lot of people just use NVIDIA's containers even for local solo dev. It's a hassle to set up initially (docker/podman hell) but all the tools are there and they work fine.


> This is why a lot of people just use NVIDIA's containers even for local solo dev. It's a hassle to set up initially (docker/podman hell) but all the tools are there and they work fine.

Yeah, which I feel like is fine for one project, or one-offs, but once you've accumulated projects, having individual 30GB images for each of them quickly adds up.

I found that most of my issues went away as I started migrating everything to `uv` for the Python stuff, and Nix for everything system-related. Now I can finally go back to a one-year-old ML project and be sure it'll run like before, and projects share a bit more data.


What trouble have you had specifically? On both Windows and Linux, installing the CUDA toolkit (e.g. v13) just works for me. My use case is compiling kernels (or cuFFT FFI) using nvcc, for FFI in Rust programs and libs.


Yep, right now NVIDIA libs are broken with clang-21 and recent glibc, due to stuff like rsqrt() having throw() in the declaration but not in the definition.


> Also, code that compiles with older CUDA toolkit versions may not compile with newer CUDA toolkit versions. Newer hardware may require a CUDA toolkit version that is newer than what the project maintainer intended.

This is the part I find confusing, especially as NVIDIA doesn't make it easy to find and download the old toolkits. Is this effectively saying that just choosing the right --arch and --code flags isn't enough to support older versions? And that, because the runtime library is statically linked in (by default), newer toolkits may produce code that just won't run on older drivers? In other words, is it true that to support old hardware you need to download and use old CUDA toolkits, regardless of nvcc flags? (And to support newer hardware you may need to compile with newer toolkits.)

That's how I read it, which seems unfortunate.


Yes, this is the actual lived reality. Thank you for outlining it so well.


Sounds like most of these problems come from using Python.


You imply these problems would go away (or wouldn't be replaced by new ones) with another language.


Removing layers usually improves stability.


I wanted to try GLM 4.6 through their API with Cline, before spending the $50. But I'm getting hit with API limits. And now I'm noticing a red banner "GLM4.6 Temporarily Sold Out. Check back soon." at cloud.cerebras.ai. HN hug of death, or was this there before?

